QSD IV :
2+1 Euclidean Quantum Gravity as a model to test
3+1 Lorentzian Quantum Gravity
T. Thiemann
Physics Department, Harvard University,
Cambridge, MA 02138, USA
thiemann@math.harvard.edu
New address :
Albert-Einstein-Institut, Max-Planck-Institut für Gravitationsphysik,
Schlaatzweg 1, 14473 Potsdam, Germany,
Internet : thiemann@aei-potsdam.mpg.de
November 20, 2020
Preprint HUTMP-96/B-360
Abstract
The quantization of Lorentzian or Euclidean 2+1 gravity by canonical methods
is a well-studied problem. However, the constraints of 2+1
gravity are those of a topological field theory and therefore resemble very
little those of the corresponding Lorentzian 3+1 constraints.
In this paper we canonically quantize Euclidean 2+1 gravity for arbitrary
genus of the spacelike hypersurface with new, classically equivalent
constraints that maximally probe the Lorentzian 3+1 situation. We choose
the signature to be Euclidean because this
implies that the gauge group is, as in the 3+1 case, $SU(2)$ rather than
$SU(1,1)$. We employ, and carry out to full completion, the new quantization
method introduced in preceding papers of this series which resulted in a
finite 3+1 Lorentzian quantum field theory for gravity.
The space of solutions to all constraints turns out to be much larger
than the one obtained by traditional approaches; however, the latter is
fully contained in it. Thus, by a suitable restriction of the
solution space, we can recover all former results, which gives confidence in
the new quantization methods. The meaning of the remaining
“spurious solutions” is discussed.
1 Introduction
The canonical quantization of 2+1 (pure) gravity is a well studied problem
and
the literature on this subject is extremely rich (see [] and
references therein). It
may appear therefore awkward to write yet another paper on this subject.
The point of this paper is to quantize 2+1 gravity by starting with a
new Hamiltonian (constraint) rather than the one that imposes
flatness of the connection (see, for instance, [3]). Therefore, we
are actually dealing with a new field theory. The reason why we still
can call this theory 2+1 gravity (although in the Euclidean signature) is
because classically both theories are equivalent; it is only in the quantum
theory that discrepancies arise.
The motivation to study this model comes from 3+1 gravity : In [4] a
new method is introduced to quantize the Wheeler-DeWitt constraint for
3+1 Lorentzian gravity and one arrives at a finite quantum field
theory. It is therefore of interest to check whether that quantum theory
describes a physically interesting phase of the full theory of quantum
gravity. One way to do that is to apply the formalism to a model system which
maximally tests the 3+1 theory while being completely solvable.
It is often said that 2+1 gravity in its usual treatment as for instance in
[3] is such a model which tests the 3+1 theory in various technical
and conceptual ways. The author disagrees with such statements for
a simple reason :
The constraints of usual 2+1 gravity and of 3+1 gravity are not even
algebraically similar. Thus, one has to expect that the resulting
quantum theories are mutually singular in a certain sense. We will find
that this expectation turns out to be correct.
One can partially fix this by studying Euclidean 2+1 gravity to
test Lorentzian 3+1 gravity because then the two gauge groups
($SU(2)$) coincide, in the Lorentzian signature the gauge group of
2+1 gravity would be $SU(1,1)$. However,
this is not enough : while now the Gauss
constraints of both theories generate the same motions, the remaining
constraints are still very different from each other. More
precisely, the 2+1 remaining constraint says that the connection $A$ is
flat,
that is, its curvature $F$ vanishes. Thus it does not involve the momenta $E$
conjugate to the connection at all. The situation in the 3+1 theory
is very different : here we have as the remaining constraints a
constraint that generates diffeomorphisms and the famous Wheeler-DeWitt
constraint. Both constraints depend on the momenta, the Wheeler-DeWitt
constraint even non-analytically. In [5] the authors propose to
quantize the constraints $FE=FEE=0$. However, this has never been done
in the literature, one reason being that the $FEE$ constraint is as
difficult to quantize as in the 3+1 case. Moreover, the two constraints
$FE=FEE=0$ are equivalent to the $F=0$ constraint only when the
two-metric $q$
is non-singular, that is, $\det(q)>0$ and therefore it is no surprise that
the two
theories are not even classically equivalent as was shown in [6] (for
the theory defined by the $F=0$ constraint the condition $\det(q)>0$ is put
in by hand in order to have Euclidean signature).
In this paper we are using the constraints $FE=FEE/\sqrt{\det(q)}=0$.
There are several reasons that speak for this choice :
First of all these
constraints are at least classically completely equivalent to the
$F=0$ constraints because clearly they make sense only when $\det(q)>0$.
In fact we will show that there is a field dependent non-singular map
between the Lagrange multipliers of the two theories which map the two
sets of constraints into each other.
Secondly, they are just as in the 3+1 theory non-analytic in $E$ (because
$\det(q)$ is a function of $E$) and so will test this feature of the
3+1 theory as well. In particular, both constraints are densities of weight
one and only constraints of this type have a chance to result in
densely-defined diffeomorphism covariant operators as argued in [4].
Thirdly, these constraints are maximally in analogy to all the 3+1
constraints.
The plan of the present paper is as follows :
In section 2 we review the classical theory of Euclidean 2+1 gravity and
outline our main strategy of how to arrive at a well-defined Hamiltonian
constraint operator.
In section 3 we review the necessary background information on
the mathematical tools that have been developed for diffeomorphism
invariant theories of connections. Those Hilbert space techniques
are identical for the 2+1 and 3+1 theory so that we have one more reason
to say that the model under consideration tests the 3+1 situation.
Also we need to construct a volume operator which as in the 3+1 theory
plays a key role in the regularization of the (analog of the) Wheeler-DeWitt
constraint operator. The 2+1 volume operator turns out to be much less
singular than the 3+1 operator which has some important impact on the
regularization of the constraint operators.
In section 4 we regularize the Wheeler-DeWitt operator. Many of the details
are exactly as in the 3+1 theory although there are some crucial
differences coming from the lower dimensionality of spacetime and also
from the different singularity structure of the volume operator.
In section 5 we perform various consistency checks on the 2+1 Wheeler-DeWitt
operator obtained, in particular whether it is a linear, covariant and
anomaly-free operator.
In section 6 we construct the full set of solutions to all constraints.
It is here that we encounter, besides reassuring results that give faith
in the programme started in [4], several surprises :
•
The quantum theory admits solutions which correspond to degenerate
metrics. This happens although classically such solutions do not exist
given our constraints. This should not be confused with the situation in
[6] because there degenerate metrics are allowed even at the
classical level.
•
We find an uncountable number of rigorous distributional solutions to
all constraints
which reveal an uncountable number of quantum degrees of freedom just as
in any field theory with local degrees of freedom. This is in complete
contrast to the usual treatment via the $F=0$ constraints which results
in a topological quantum field theory with only a finite number of degrees
of freedom.
•
The space of solutions contains the solutions to the quantum $F=0$
constraints as a tiny subspace.
This subspace of solutions can be equipped with an inner product
which is precisely the one that one obtains in traditional approaches.
This is reassuring in that our methods lead
to well-established results and do not describe some unphysical phase of
the theory.
•
The huge rest of the solutions cannot be equipped with the inner
product appropriate for the $F=0$ constraints because they do not
correspond to measurable functions with respect to the corresponding
measure. However, there is another natural
inner product available with respect to which they are normalizable.
This inner product is likely to be the one that is appropriate also
for the physically interesting solutions of the 3+1 constraints. The
solutions to the $F=0$ constraint in turn are not normalizable with
respect to this second inner product. Thus as expected, the two sets of
constraints have solution spaces which lie in the same space of
distributions but they cannot be given the same Hilbert space topology. It
is in this sense that the quantum theories are mutually singular.
In section 7 we conclude with some speculations
of what the present paper teaches us for the 3+1 theory with regard to
the solutions that are spurious from the point of view of the $F=0$
constraint.
In the appendix we compute the spectrum of the 2+1 volume operator for
the simplest states.
Throughout the paper we mean by the wording “2+1 or two-dimensional” always
2+1 Euclidean gravity while by “3+1 or three-dimensional” we always mean
3+1 Lorentzian gravity.
2 Classical Theory
Let us start by reviewing the notation (see, for instance, [3]).
We assume that the three-dimensional spacetime is of the form
$M=\mathbb{R}\times\Sigma$ where $\Sigma$ is a two-dimensional manifold of
arbitrary topology, for instance, a compact, connected two-dimensional
smooth manifold, that is, a Riemann surface of genus $g$ or an
asymptotically flat manifold. Let $e_{a}^{i}$
be the co-dyad on $\Sigma$ where $a,b,c,..=1,2$ denote tensor indices and
$i,j,k,..=1,2,3$ denote $su(2)$ indices. The fact that we are dealing with
$su(2)$ rather than $su(1,1)$ implies that the two-metric $q_{ab}:=e_{a}^{i}e_{b}^{i}$ has Euclidean signature. Moreover, let $A_{a}^{i}$ be an $su(2)$
connection and define the field $E^{a}_{i}:=\epsilon^{ab}e_{b}^{i}$ where
$\epsilon^{ab}$ is the metric-independent, totally skew tensor density of
weight $+1$. Then it
turns out that the pair $(A_{a}^{i},E^{a}_{i})$ is a canonical one for the
Hamiltonian formulation of 2+1 gravity based on the Einstein Hilbert action
$S=\int_{M}d^{3}x\sqrt{|\det(g)|}R^{(3)}$ where $g$ is the three-metric and
$R^{(3)}$ its scalar curvature.
In other words, $E^{a}_{i}$ is the momentum conjugate to $A_{a}^{i}$ so that the
symplectic structure is given by
$$\{A_{a}^{i}(x),E^{b}_{j}(y)\}=\delta_{a}^{b}\delta^{i}_{j}\delta(x,y)\;.$$
(2.1)
The Hamiltonian of the theory is a linear combination of constraints,
$\int d^{2}x(\Lambda^{i}G_{i}+N^{i}C_{i})$ for some Lagrange multipliers
$\Lambda^{i},N^{i}$ where
$$G_{i}:=D_{a}E^{a}_{i}=\partial_{a}E^{a}_{i}+\epsilon_{ijk}A_{a}^{j}E^{a}_{k}\mbox{ : Gauss constraint,}$$
$$C_{i}:=\frac{1}{2}\epsilon^{ab}F_{ab}^{i}\mbox{ : Curvature constraint}$$
(2.2)
where $F_{ab}$ denotes the curvature of $A_{a}$. The Gauss constraint
appears also in 3+1 gravity, however, the curvature constraint is
completely different from the constraints that govern 3+1 gravity, [4].
The equivalent of $C_{i}$ in 3+1 gravity are two types of constraints, one
of them, $V_{a}$, generates diffeomorphisms, the other one, $H$, generates
dynamics. The curvature constraints $C_{i}$ on the other hand do not generate
any such gauge transformations; in fact, the connection Poisson-commutes with
$C_{i}$, which shows that it is a Dirac observable with respect to $C_{i}$. The
constraint $C_{i}=0$ imposes that the connection should be flat and thus
the classically reduced phase space becomes the cotangent bundle over the
moduli space of flat $su(2)$ connections which is finite-dimensional.
It is obvious that the quantization of the model as defined by (2.2)
will not give too much insight into the 3+1 situation. In the following
we will reformulate (2.2) in such a way that it brings us into
contact with 3+1 gravity.
It will turn out that the following compound field, called the
degeneracy vector, for reasons that will become obvious soon
$$E^{i}:=\frac{1}{2}\epsilon^{ijk}\epsilon_{ab}E^{a}_{j}E^{b}_{k}$$
(2.3)
is a crucial one. Let us compute the square of this density of weight one :
$$E^{i}E^{i}=\frac{1}{2}\epsilon_{ab}\epsilon_{cd}[E^{a}_{i}E^{c}_{i}][E^{b}_{j}E^{d}_{j}]=\frac{1}{2}q_{bd}[E^{b}_{j}E^{d}_{j}]=\frac{1}{2}q_{bd}q_{ac}\epsilon^{ba}\epsilon^{dc}=\det(q),$$
(2.4)
that is, the two-metric is degenerate if and only if the degeneracy vector
$E^{i}$ vanishes identically. We also see that $\det(q)$ is manifestly
non-negative. Notice that $E^{a}_{i}E^{i}=0$.
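The identity (2.4) and the transversality $E^{a}_{i}E^{i}=0$ can be checked numerically for a randomly chosen dyad. The following sketch is our own illustration (not from the paper), using the index conventions $\epsilon^{12}=\epsilon_{12}=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
eps2 = np.array([[0., 1.], [-1., 0.]])      # epsilon_{ab} = epsilon^{ab}
eps3 = np.zeros((3, 3, 3))                  # epsilon_{ijk}
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps3[i, j, k] = 1.0
    eps3[i, k, j] = -1.0

e = rng.normal(size=(2, 3))                 # co-dyad e_a^i
E = np.einsum('ab,bi->ai', eps2, e)         # E^a_i = eps^{ab} e_b^i
q = e @ e.T                                 # q_{ab} = e_a^i e_b^i

# degeneracy vector E^i = (1/2) eps^{ijk} eps_{ab} E^a_j E^b_k   (2.3)
Ei = 0.5 * np.einsum('ijk,ab,aj,bk->i', eps3, eps2, E, E)

print(np.dot(Ei, Ei) - np.linalg.det(q))    # ~0 :  E^i E^i = det(q)
print(np.einsum('ai,i->a', E, Ei))          # ~0 :  E^a_i E^i = 0
```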
Whenever the degeneracy vector is non-vanishing we can perform the following
non-singular transformation $(N^{i})\leftrightarrow(N^{a},N)$ for a vector field
$N^{a}$, called the shift, and a scalar function $N$, called the lapse :
$$N^{i}=N^{a}\epsilon_{ab}E^{b}_{i}+N\frac{E^{i}}{\sqrt{\det(q)}}\;\Leftrightarrow\;N^{a}=\epsilon_{ijk}\frac{E^{i}E^{a}_{j}}{\det(q)}N^{k},\;N=\frac{N^{i}E^{i}}{\sqrt{\det(q)}}\;.$$
(2.5)
Notice that formula (2.5) respects that $N^{i},N^{a},N$ have density weight
zero. Using (2.5), we can now write the curvature constraint in the form
$$N^{i}C_{i}=N^{a}V_{a}+NH\mbox{ where}$$
$$V_{a}:=F_{ab}^{i}E^{b}_{i}\mbox{ : Diffeomorphism constraint,}$$
$$H:=\frac{1}{2}F_{ab}^{i}\epsilon_{ijk}\frac{E^{a}_{j}E^{b}_{k}}{\sqrt{\det(q)}}\mbox{ : Hamiltonian constraint}\;.$$
(2.6)
Apart from the fact that we are in two rather than three space dimensions
these are precisely the constraints of Euclidean 3+1 gravity [4].
Since the 3+1 Euclidean Hamiltonian constraint operator plays a key
role in the quantization of the 3+1 Lorentzian Hamiltonian constraint
(in fact, once one has densely defined the Euclidean operator in 3+1
dimensions, it is a simple corollary that the Lorentzian 3+1 operator is
densely defined [4]),
we claim that the set of constraints (2.6) brings us into maximal
contact with the 3+1 theory.
Notice that unlike in [5], we have a factor of $1/\sqrt{\det(q)}$
in the definition of the Hamiltonian constraint. This difference has two
important consequences :
1)
Classical :
The denominator in $H=F_{i}E^{i}/\sqrt{\det(q)}$ where $F_{i}:=\epsilon^{ab}F_{ab}^{i}/2$ (or $F_{ab}^{i}=\epsilon_{ab}F^{i}$) blows up as $E^{i}$ vanishes.
Since the limit $\lim_{\vec{E}\to 0}\vec{E}/||\vec{E}||$ depends on the
details of the limiting procedure we must exclude degenerate metrics
classically. This is in contrast to [6] where the authors exploit
the possible classical degeneracy of the metric when one discards the
denominator to demonstrate that one has already an infinite number
of degrees of freedom at the classical level (notice, however, that their
solutions, where $F^{i}$ or
$E^{i}$ become null, do not apply since we are dealing with $su(2)$).
2)
Quantum :
It is by now well known that one of the reasons why
$\tilde{H}:=\sqrt{\det(q)}H$ suffers from severe problems upon quantization
is that $\tilde{H}$ has density weight two rather than one.
As argued in [4], only densities of weight one have a chance to
be promoted to densely defined, covariant operators. This is why we
must keep the denominator $1/\sqrt{\det(q)}$ in (2.6) at the quantum level.
Just like in 3+1 gravity we wish to work in a connection representation,
that is, states are going to be functions of connections. Then an immediate
problem with $H$ is that one has to give a meaning to the denominator
$1/\sqrt{\det(q)}$. In [4] that was achieved for Lorentzian
3+1 gravity by noting that the
denominator could be absorbed into a Poisson bracket with respect to a
functional $V$ of $q_{ab}$. The idea was then
to use the quantization rule that Poisson brackets should be replaced by
commutators times $1/(i\hbar)$ and to replace $V$ by an appropriate operator
$\hat{V}$. Such an operator indeed exists and it is densely defined.
Is a similar trick also available for 2+1 gravity ? At first sight the
answer seems to be in the negative because the underlying reason for why
such a trick worked for 3+1 gravity was that the co-triad $e_{a}^{i}$, the
precise analogue of the degeneracy vector $E^{i}$, considered
as a function of $E^{a}_{i}$ was integrable, the generating functional being
given by the total volume $V$ of $\Sigma$. In other words, we had
$$e_{a}^{i}=\frac{1}{2}\epsilon^{ijk}\epsilon_{abc}\frac{E^{b}_{j}E^{c}_{k}}{\sqrt{\det(q)}}=\frac{\delta V}{\delta E^{a}_{i}}\mbox{ with }V:=\int_{\Sigma}d^{3}x\sqrt{\det(q)}\;$$
(2.7)
However, if we take over the definition of $V$ (with $d^{3}x$ replaced by
$d^{2}x$) then we find instead
$$\{A_{a}^{i},V\}=\frac{\delta V}{\delta E^{a}_{i}}=\frac{q_{ab}E^{b}_{i}}{\sqrt{\det(q)}}\mbox{ with }V:=\int_{\Sigma}d^{2}x\sqrt{\det(q)}\;.$$
(2.8)
Thus, no such trick seems to be available in the 2+1 case. However, it is
a matter of straightforward computation to verify that indeed
$$E^{i}=\frac{1}{2}\epsilon^{ab}\epsilon_{ijk}\{A_{a}^{j},V\}\{A_{b}^{k},V\}$$
(2.9)
which does not seem to help much because what we need is
$E^{i}/\sqrt{\det(q)}$ rather than $E^{i}$ itself.
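Identity (2.9) can likewise be checked numerically by combining it with (2.8); again this is our own sketch in the conventions $\epsilon^{12}=\epsilon_{12}=1$:

```python
import numpy as np

rng = np.random.default_rng(2)
eps2 = np.array([[0., 1.], [-1., 0.]])
eps3 = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps3[i, j, k] = 1.0
    eps3[i, k, j] = -1.0

e = rng.normal(size=(2, 3))
E = np.einsum('ab,bi->ai', eps2, e)
q = e @ e.T
detq = np.linalg.det(q)
Ei = 0.5 * np.einsum('ijk,ab,aj,bk->i', eps3, eps2, E, E)

# {A_a^i, V} = q_{ab} E^b_i / sqrt(det q)   (2.8)
P = np.einsum('ab,bi->ai', q, E) / np.sqrt(detq)

# E^i = (1/2) eps^{ab} eps_{ijk} {A_a^j,V} {A_b^k,V}   (2.9)
rhs = 0.5 * np.einsum('ab,ijk,aj,bk->i', eps2, eps3, P, P)
print(np.allclose(rhs, Ei))   # -> True
```

The check rests on the two-dimensional identity $\epsilon^{ab}q_{ac}q_{bd}=\det(q)\,\epsilon_{cd}$, which cancels the two factors of $1/\sqrt{\det(q)}$ in (2.8).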
The new input needed here as compared to the 3+1 case is as follows :
Notice that if we could replace $\sqrt{\det(q)}$ by $V$ then we could absorb
it into the Poisson brackets by using the identity
$$\frac{\{A_{a}^{j},V\}\{A_{b}^{k},V\}}{V}=4\{A_{a}^{j},\sqrt{V}\}\{A_{b}^{k},\sqrt{V}\}$$
which follows from the chain rule, $\{A_{a}^{j},\sqrt{V}\}=\{A_{a}^{j},V\}/(2\sqrt{V})$.
As we will see, $V$ can be promoted, just as in the 3+1 case, into a
densely defined positive semi-definite operator. Therefore its
square root exists and it would follow that the last equation with
Poisson brackets replaced by commutators would make sense as an operator.
In the next section we will define a Hilbert space and the corresponding
operator.
What remains is to justify the replacement of $\sqrt{\det(q)}$ by
$V$. We will show in section 4 that this is possible.
It happens because the Poisson bracket gives a local quantity and therefore
we may actually replace $V$ by $V(x,\epsilon)$ in
$$\{A_{a}^{i}(x),V\}\equiv\{A_{a}^{i}(x),V(x,\epsilon)\}\mbox{ where }V(B)=\int_{B}d^{2}x\sqrt{\det(q)}$$
is the volume of a compact region $B$ and $V(x,\epsilon)$ is the volume
of an
arbitrarily small open neighbourhood of the point $x$, the smallness governed
by $\epsilon$. It is then easy to see that $\lim_{\epsilon\to 0}V(x,\epsilon)/\epsilon^{2}=\sqrt{\det(q)}(x)$. Now, in the quantum theory
we are going to point split the quantity $H$ and we will use a regularized
$\delta$ distribution with point split parameter $\epsilon$. As we will
see, that parameter can be absorbed into $V(x,\epsilon)$ to serve as a
replacement for $\sqrt{\det(q)}(x)$.
The details are displayed in the
following sections.
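The limit $\lim_{\epsilon\to 0}V(x,\epsilon)/\epsilon^{2}=\sqrt{\det(q)}(x)$ is easy to see numerically. In the sketch below (our own illustration), `sqrt_detq` is an arbitrary smooth sample density, not one appearing in the paper:

```python
import numpy as np

def sqrt_detq(x1, x2):
    # an arbitrary smooth, positive sample density sqrt(det q)(x)
    return np.exp(x1) * (2.0 + np.sin(x2))

def V(x, eps, n=50):
    # V(x,eps): volume of the eps x eps square centred at x (midpoint quadrature)
    u = x[0] - eps / 2 + (np.arange(n) + 0.5) * eps / n
    w = x[1] - eps / 2 + (np.arange(n) + 0.5) * eps / n
    U, W = np.meshgrid(u, w, indexing='ij')
    return sqrt_detq(U, W).sum() * (eps / n) ** 2

x = (0.3, 1.1)
for eps in (0.1, 0.01, 0.001):
    print(eps, V(x, eps) / eps**2)   # converges to sqrt_detq(0.3, 1.1)
print(sqrt_detq(*x))
```

The ratio $V(x,\epsilon)/\epsilon^{2}$ deviates from $\sqrt{\det(q)}(x)$ only at order $\epsilon^{2}$, which is why the point-splitting parameter can be traded for the local volume.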
3 Quantum Theory and Volume operator
In this section we will review the definition of a Hilbert space for
diffeomorphism invariant theories of connections [7]. This will be
our kinematical framework. On that Hilbert space we are going to construct
a 2+1 volume operator which turns out to be actually more complicated
than the one for the 3+1 theory [9, 10].
3.1 Quantum kinematics
In what follows we give an extract from [7, 8]. The reader
interested in the details is urged to study those papers.
We will denote by $\gamma$ a finite piecewise analytic graph in $\Sigma$.
That is, we have analytic edges $e$ which are joined in vertices $v$.
We subdivide each edge into two parts and equip each part with an orientation
that is outgoing from the vertex (the point where the two parts meet is
a point of analyticity and therefore not a vertex of the graph; thus each
edge can from now on be viewed as incident at precisely one vertex).
Given an $su(2)$ connection $A_{a}^{i}$ on $\Sigma$ we can compute its holonomy
(or path-ordered exponential) $h_{e}(A)$ along an edge $e$ of the graph.
Recall that all representations of $SU(2)$ are completely reducible and
that the equivalence classes of irreducible ones can be
characterized by a half-integral non-negative number $j$, the spin of
the representation. We will denote the matrix elements of the
$j$-representation at $g\in SU(2)$ by $\pi_{j}(g)$.
Consider now a vertex $v$ of the graph and the edges $e_{1},..,e_{n}$ incident
at $v$, that is, the graph has valence $n$. Under a gauge transformation
$g$ at $v$ the holonomy transforms as
$h_{e_{i}}\to gh_{e_{i}},\;i=1,..,n$. Now consider the transformation of
the following function
$$\otimes_{i=1}^{n}\pi_{j_{i}}(h_{e_{i}})\to[\otimes_{i=1}^{n}\pi_{j_{i}}(g)]\cdot[\otimes_{i=1}^{n}\pi_{j_{i}}(h_{e_{i}})]\;.$$
We are interested in making this function gauge invariant at $v$. To that end
we orthogonally decompose the tensor product of the $\pi_{j_{i}}(g)$ into
irreducibles
and look for the independent singlets in that decomposition. There is an
orthogonal projector $c_{v}$ on each of these singlets, we say that it is
compatible with the spins $j_{1},..,j_{n}$, and
so we can make our function gauge invariant at $v$ by contracting :
$c_{v}\cdot\otimes_{i=1}^{n}\pi_{j_{i}}(h_{e_{i}})$.
If we do that for each
vertex we obtain a completely gauge invariant function called a
spin-network function. Thus a spin-network function is labelled by
a graph $\gamma$, a colouring of its edges $e$ with a spin $j_{e}$ and
a dressing of each vertex $v$ with a gauge-invariant projector $c_{v}$.
If we denote by $E(\gamma),V(\gamma)$ the set of edges and vertices of
$\gamma$ respectively then we use the shorthand notation
$$T_{\gamma,\vec{j},\vec{c}}\mbox{ where }\vec{j}:=\{j_{e}\}_{e\in E(\gamma)},\;\vec{c}:=\{c_{v}\}_{v\in V(\gamma)}$$
for that spin-network function.
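As a concrete illustration (our own example, not from the paper): for a vertex with two outgoing edges carrying spins $j_{1}=j_{2}=\frac{1}{2}$, the decomposition $\frac{1}{2}\otimes\frac{1}{2}=0\oplus 1$ contains exactly one singlet, whose intertwiner is the $\epsilon$ tensor:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_su2():
    # random unit quaternion -> SU(2) matrix in the fundamental (j=1/2) rep
    x = rng.normal(size=4)
    x /= np.linalg.norm(x)
    return np.array([[x[0] + 1j * x[1], x[2] + 1j * x[3]],
                     [-x[2] + 1j * x[3], x[0] - 1j * x[1]]])

# the unique singlet intertwiner in 1/2 x 1/2 is the epsilon tensor
eps = np.array([[0., 1.], [-1., 0.]])

g = random_su2()
# eps_{AB} g^A_C g^B_D = det(g) eps_{CD} = eps_{CD}
print(np.allclose(g.T @ eps @ g, eps))

# hence, at a vertex with two outgoing spin-1/2 edges, contracting with eps
# yields a function invariant under h_e -> g h_e :
h1, h2 = random_su2(), random_su2()
f0 = np.einsum('AB,AC,BD->CD', eps, h1, h2)
f1 = np.einsum('AB,AC,BD->CD', eps, g @ h1, g @ h2)
print(np.allclose(f0, f1))
```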
The Hilbert space ${\cal H}$ that we are going to use for gauge invariant
functions
of connections is most easily described by saying that the set of all
spin-network functions is a complete orthonormal basis of $\cal H$ (so
each spin-network function comes with a specific finite normalization
factor). Notice that therefore $\cal H$ is not separable. Another
characterization of $\cal H$ which is very useful is to display it as a
certain $L_{2}$ space. To that end, consider the finite linear combinations
$\Phi$ of spin-network functions. $\Phi$ can be turned into an Abelian
$C^{\star}$ algebra by saying that involution is just complex conjugation
and by completing it with respect to the $\sup$-norm over the space
${{\cal A}/{\cal G}}$ of smooth connections modulo gauge transformations. That
$C^{\star}$ algebra is isometrically
isomorphic, by standard Gel’fand techniques, to the $C^{\star}$ algebra of
continuous functions $C({\overline{{\cal A}/{\cal G}}})$ where ${\overline{{\cal A}/{\cal G}}}$ is the set of all
homomorphisms from the original algebra into the complex numbers. The space
${\overline{{\cal A}/{\cal G}}}$, as the notation suggests, is a certain extension of ${{\cal A}/{\cal G}}$ and
will be called the set of distributional connections. Indeed, it is
the maximal extension such that (the Gel’fand transform of the)
spin-network functions
are continuous. By standard results, the resulting topology is such that
${\overline{{\cal A}/{\cal G}}}$ is a compact Hausdorff space and as such positive linear
functionals $\Gamma$ on $C({\overline{{\cal A}/{\cal G}}})$ are in one to one correspondence with
regular Borel measures $\mu$ on ${\overline{{\cal A}/{\cal G}}}$ via $\Gamma(f)=:\mu(f)=\int_{\overline{{\cal A}/{\cal G}}}d\mu f$.
Now the measure $\mu_{0}$ underlying $\cal H$ is completely characterized
by the integral of spin-network functions and is given by
$\mu_{0}(T_{\gamma,\vec{j},\vec{c}})=1$ if $T_{\gamma,\vec{j},\vec{c}}=1$
and $0$ otherwise. So we have ${\cal H}=L_{2}({\overline{{\cal A}/{\cal G}}},d\mu_{0})$ and
spin-network functions play the same role for $\mu_{0}$ that Hermite
functions play for Gaussian measures.
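The orthonormality of spin-network functions under $\mu_{0}$ can be illustrated on the simplest graph, a single loop, where the spin-network functions are the characters $\chi_{j}$ and the $\mu_{0}$ integral reduces to a Haar integral over one copy of $SU(2)$. By the Weyl integration formula this becomes a one-dimensional integral (our own numerical sketch):

```python
import numpy as np

# Weyl integration formula on SU(2):  \int f dg = (2/pi) \int_0^pi sin^2(t) f(t) dt
# for class functions f; the character of spin j is chi_j(t) = sin((2j+1)t)/sin(t).
n = 200000
t = (np.arange(n) + 0.5) * np.pi / n      # midpoint rule on [0, pi]
dt = np.pi / n
weight = (2.0 / np.pi) * np.sin(t) ** 2

def chi(j):
    return np.sin((2 * j + 1) * t) / np.sin(t)

for j in (0.5, 1.0, 1.5):
    for k in (0.5, 1.0, 1.5):
        val = np.sum(weight * chi(j) * chi(k)) * dt
        print(j, k, round(val, 4))        # -> 1.0 if j == k else 0.0
```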
In the sequel we will topologize the space $\Phi$ of finite linear
combinations of spin-network functions in a different way and we will call
$\Phi$ henceforth the space of cylindrical functions. A function
$f_{\gamma}$ is said
to be cylindrical with respect to a graph $\gamma$ if it can be written as
a finite linear combination of spin-network functions on that $\gamma$. The
norm of $f_{\gamma}$ will be the $L_{1}$ norm $||f_{\gamma}||_{1}=\sum_{I}|\langle T_{I},f_{\gamma}\rangle|$, which equips $\Phi$ with the structure of a topological vector
space. The distributional
dual $\Phi^{\prime}$ is the set of all continuous linear functionals on
$\Phi$.
Certainly every element of $\cal H$ is an element of $\Phi^{\prime}$ by the Schwarz
inequality and every element of $\Phi$ is trivially an element of $\cal H$.
Thus we have the inclusion $\Phi\subset{\cal H}\subset\Phi^{\prime}$ (this is not
a Gel’fand triple in the strict sense because the topology on $\Phi$ is
not nuclear).
This furnishes the quantum kinematics. Notice that we can take over the
results
from [7] without change concerning the Diffeomorphism constraint :
given an analyticity preserving diffeomorphism $\varphi$ we have a unitary
operator on $\cal H$ which acts on a function cylindrical with respect to
a graph as $\hat{U}(\varphi)f_{\gamma}=f_{\varphi(\gamma)}$, that is, the
diffeomorphism group $\mbox{Diff}(\Sigma)$ is unitarily represented.
This implies that one can group average with respect to the
diffeomorphism group as in [7]. We will return to this point in section
6.
3.2 The 2+1 volume operator
The plan of this subsection is as follows :
Since $\hat{E}^{i}(x)$ is a density of weight one, one expects that it
gives rise to a well-defined, diffeomorphism-covariant
operator-valued distribution. In a second step we will point-split
$\det(q)=E^{i}E^{i}$ and take the square root of the resulting
operator. Again, since $\sqrt{\det(q)}$ is a density of weight one,
it can be turned into a well-defined operator-valued distribution even in
regulated form and the limit as the regulator is removed exists.
Let us then begin with $E^{i}$. Let as in the previous section $f_{\gamma}$
denote a function cylindrical with respect to a graph $\gamma$ and denote
by $E(\gamma)$ its set of edges. Edges are, by suitably subdividing them
into two halves, in the sequel always supposed to be oriented as outgoing
at a vertex. We will compute the action of various operators first on
functions of smooth connections and then extend the end result to all of
${\overline{{\cal A}/{\cal G}}}$.
Let $\delta_{\vec{\epsilon}}(x,y)=\delta_{\epsilon_{1}}(x^{1},y^{1})\delta_{\epsilon_{2}}(x^{2},y^{2})$ be any
two-parameter family of smooth functions of compact support such that
$\lim_{\epsilon_{1},\epsilon_{2}\to 0}\int_{\Sigma}d^{2}y\,\delta_{\vec{\epsilon}}(x,y)f(y)=f(x)$
for any, say smooth, function $f$ on $\Sigma$, where $\vec{\epsilon}=(\epsilon_{1},\epsilon_{2})$ parametrizes
the size of the support. Consider the point-split operator
$$\hat{E}^{i}_{\vec{\epsilon},\vec{\epsilon}^{\prime}}(x):=\frac{1}{2}\epsilon_{ab}\epsilon^{ijk}\int_{\Sigma}d^{2}y\int_{\Sigma}d^{2}z\,\delta_{\vec{\epsilon}}(x,y)\delta_{\vec{\epsilon}^{\prime}}(x,z)\hat{E}^{a}_{j}(y)\hat{E}^{b}_{k}(z)$$
(3.1)
and apply it to $f_{\gamma}$. Notice that upon replacing
$\hat{E}^{a}_{i}(x)=-i\hbar\delta/\delta A_{a}^{i}(x)$ we obtain
$$\hat{E}^{a}_{i}(x)f_{\gamma}=-i\hbar\sum_{e\in E(\gamma)}\int_{0}^{1}dt\,\delta(x,e(t))\dot{e}^{a}(t)X^{i}_{e}(t)f_{\gamma}$$
(3.2)
where $X^{i}_{e}(t)=\mbox{tr}([h_{e}(0,t)\tau_{i}h_{e}(t,1)]^{T}\partial/\partial h_{e}(0,1))$, $h_{e}(a,b)$ is the holonomy from parameter value $a$ to $b$,
and the $\tau_{i}$ are generators of $su(2)$ with structure constants
$\epsilon_{ijk}$. We also need the quantity
$X^{ij}_{e}(s,t)=\mbox{tr}([h_{e}(0,s)\tau_{i}h_{e}(s,t)\tau_{j}h_{e}(t,1)]^{T}\partial/\partial h_{e}(0,1))$ for $s<t$ (modulo $1$). Then it is easy to
see that
$$\hat{E}^{i}_{\vec{\epsilon},\vec{\epsilon}^{\prime}}(x)f_{\gamma}=-\frac{\hbar^{2}}{2}\epsilon_{ab}\epsilon^{ijk}\sum_{e,e^{\prime}\in E(\gamma)}\int_{\Sigma}d^{2}y\int_{\Sigma}d^{2}z\,\delta_{\vec{\epsilon}}(x,y)\delta_{\vec{\epsilon}^{\prime}}(x,z)\times$$
$$\times\int_{0}^{1}dt\,\delta(y,e(t))\dot{e}^{a}(t)\int_{0}^{1}dt^{\prime}\,\delta(z,e^{\prime}(t^{\prime}))\dot{e}^{\prime b}(t^{\prime})\times$$
$$\times[X^{k}_{e^{\prime}}(t^{\prime})X^{j}_{e}(t)+\delta_{e,e^{\prime}}\{\theta(t,t^{\prime})X^{jk}_{e}(t,t^{\prime})+\theta(t^{\prime},t)X^{jk}_{e}(t^{\prime},t)\}]f_{\gamma}$$
$$=-\frac{\hbar^{2}}{2}\epsilon_{ab}\epsilon^{ijk}\sum_{e,e^{\prime}\in E(\gamma)}\int_{0}^{1}dt\,\dot{e}^{a}(t)\int_{0}^{1}dt^{\prime}\,\dot{e}^{\prime b}(t^{\prime})\delta_{\vec{\epsilon}}(x,e(t))\delta_{\vec{\epsilon}^{\prime}}(x,e^{\prime}(t^{\prime}))\times$$
$$\times[X^{k}_{e^{\prime}}(t^{\prime})X^{j}_{e}(t)+\delta_{e,e^{\prime}}\{\theta(t,t^{\prime})X^{jk}_{e}(t,t^{\prime})+\theta(t^{\prime},t)X^{jk}_{e}(t^{\prime},t)\}]f_{\gamma}$$
(3.3)
where $\theta(s,t)=1$ if $s<t$ and $0$ otherwise.
We are now interested
in the limit $\vec{\epsilon},\vec{\epsilon}^{\prime}\to 0$ and proceed similarly to [10]. We must
adapt the regularization to each pair $e,e^{\prime}$ to get a well-defined result.
1)
Case $e=e^{\prime}$.
If $x$ does not lie on $e$ then for sufficiently small $\vec{\epsilon}$ we
must get
$\delta_{\vec{\epsilon}}(x,e(t))=0$ for any $t\in[0,1]$. Thus in the
limit we get a
non-vanishing contribution if and only if there exists a value $t_{x}\in[0,1]$
such that $e(t_{x})=x$ (there is at most one such value $t_{x}$ because edges
are not self-intersecting). Since $\dot{e}$ is nowhere vanishing we must
have $\dot{e}^{1}(t_{x})\not=0$ (switch $1\leftrightarrow 2$ if necessary).
We send $\epsilon_{1},\epsilon_{1}^{\prime}\to 0$ and find that
$\delta_{\vec{\epsilon}}(x,e(t))\to\delta_{\epsilon_{2}}(x^{2},e^{2}(t))\delta(t-t_{x})/|\dot{e}^{1}(t_{x})|$, and similarly for
$\delta_{\vec{\epsilon}^{\prime}}(x,e^{\prime}(t^{\prime}))$. Inserting this into (3.3)
we find that there is no contribution for $e=e^{\prime}$ because of the two zeroes
$0=\epsilon_{ab}\dot{e}^{a}(t_{x})\dot{e}^{b}(t_{x})$ and
$0=\epsilon_{ijk}[X^{ij}_{e}(t_{x},t_{x})+X^{ji}_{e}(t_{x},t_{x})]$. Notice that it was
crucial to have $\epsilon_{2},\epsilon_{2}^{\prime}$ still finite as otherwise the
appearing $\delta_{\epsilon_{2}}(0)\delta_{\epsilon_{2}^{\prime}}(0)$ would be
meaningless.
2)
Case $e\not=e^{\prime}$.
If again $x$ does not lie on both $e,e^{\prime}$ then by choosing $\vec{\epsilon},\vec{\epsilon}^{\prime}$ sufficiently small we must get zero. Therefore $e,e^{\prime}$
must intersect and as we have divided edges into two halves they can
intersect at most in their common starting point corresponding to $t=t^{\prime}=0$
which is thus a vertex $v$ of the graph $\gamma$.
A)
Subcase : co-linear tangents.
Consider first the case
that $e,e^{\prime}$ have co-linear tangents at $t=0$ and let us assume that
$\dot{e}^{1}(0),\dot{e}^{\prime 1}(0)\not=0$ (switch $1\leftrightarrow 2$
if necessary). Then we first send $\epsilon_{1},\epsilon_{1}^{\prime}\to 0$ which
results in
$$\delta_{\vec{\epsilon}}(x,e(t))\delta_{\vec{\epsilon}^{\prime}}(x,e^{\prime}(t^{\prime}))\to\delta_{\epsilon_{2}}(x^{2},e^{2}(t))\delta_{\epsilon_{2}^{\prime}}(x^{2},e^{\prime 2}(t^{\prime}))\frac{\delta(t)\delta(t^{\prime})}{|\dot{e}^{1}(0)\dot{e}^{\prime 1}(0)|}$$
and thus performing the two $t$ integrals we get zero as above because
$0=\epsilon_{ab}\dot{e}^{a}(0)\dot{e}^{\prime b}(0)$ by assumption.
B)
Subcase of linearly independent tangents.
We are left with the case that the tangents of $e,e^{\prime}$ are linearly
independent at $x=v$. We replace $\delta_{\vec{\epsilon}}(x,e(t))\delta_{\vec{\epsilon}^{\prime}}(x,e^{\prime}(t^{\prime}))$ by $\delta_{\vec{\epsilon}}(e^{\prime}(t^{\prime}),e(t))\delta_{\vec{\epsilon}^{\prime}}(x,v)$ and send first $\vec{\epsilon}\to 0$.
Then
$$\delta_{\vec{\epsilon}}(e^{\prime}(t^{\prime}),e(t))\to\frac{\delta(t)\delta(t^{\prime})}{|\epsilon_{ab}\dot{e}^{a}(0)\dot{e}^{\prime b}(0)|}$$
and we can perform the integral. Since we are integrating over a square
$[0,1]^{2}$ and
the two-dimensional delta-distribution is supported at a corner we pick up
a factor of $1/4$ upon setting $t=t^{\prime}=0$ and dropping the integral. At
last we send $\vec{\epsilon}^{\prime}\to 0$.
Summarizing, we find ($V(\gamma)$ denotes the set of vertices of $\gamma$)
$$\hat{E}^{i}(x)f_{\gamma}=-\frac{\hbar^{2}}{4\cdot 2}\sum_{v\in V(\gamma)}\delta(x,v)\sum_{e,e^{\prime}\in E(\gamma),e\cap e^{\prime}=v}\mbox{sgn}(e,e^{\prime})\epsilon^{ijk}X^{j}_{e}X^{k}_{e^{\prime}}f_{\gamma}$$
(3.4)
where $X^{i}_{e}:=X^{i}_{e}(0)$ is easily recognized as the right invariant
vector field on $SU(2)$ evaluated at $g=h_{e}(0,1)$ and $\mbox{sgn}(e,e^{\prime})$
is the sign of
$\epsilon_{ab}\dot{e}^{a}(0)\dot{e}^{\prime b}(0)$ and so is an orientation
factor. This furnishes the definition of the operator corresponding to
the degeneracy vector.
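As a concrete illustration of the operators $X^{i}_{e}$ (a minimal numerical sketch, not part of the paper; the representation $\tau_{i}=-\frac{i}{2}\sigma_{i}$ and all names below are our own choices): the right invariant vector field acts on a function $f$ of the holonomy $g=h_{e}(0,1)$ as $X^{i}f(g)=\frac{d}{dt}f(e^{t\tau_{i}}g)|_{t=0}$, and on the fundamental character $f(g)=\mbox{tr}(g)$ the Laplacian $X^{i}X^{i}$ returns $-j(j+1)f$ with $j=1/2$:

```python
import numpy as np

# Pauli matrices and tau_i = -(i/2) sigma_i, an assumption consistent with
# tr(tau_i tau_j tau_k) = -eps_{ijk}/2 as used in the paper
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_exp(a):
    """exp(sum_i a_i tau_i) in closed form: cos(|a|/2) 1 - i sin(|a|/2) (a/|a|).sigma."""
    th = np.linalg.norm(a)
    if th == 0.0:
        return np.eye(2, dtype=complex)
    n_sigma = sum(a[i] / th * sigma[i] for i in range(3))
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * n_sigma

def unit(i):
    v = np.zeros(3); v[i] = 1.0
    return v

def X(i, f, g, h=1e-4):
    """Right invariant vector field X^i f(g) = d/dt f(exp(t tau_i) g)|_{t=0}."""
    return (f(su2_exp(h * unit(i)) @ g) - f(su2_exp(-h * unit(i)) @ g)) / (2 * h)

def laplacian(f, g, h=1e-3):
    """X^i X^i f(g) as a second central difference per direction."""
    return sum((f(su2_exp(h * unit(i)) @ g) - 2 * f(g)
                + f(su2_exp(-h * unit(i)) @ g)) / h**2 for i in range(3))

f = lambda g: np.trace(g).real               # fundamental character, j = 1/2
g = su2_exp(np.array([0.3, 0.0, 0.7]))       # some SU(2) holonomy
print(laplacian(f, g) / f(g))                # ~ -0.75 = -j(j+1)
```

The eigenvalue $-3/4$ is the standard Peter-Weyl fact; the finite differences only serve to make the geometric definition of $X^{i}_{e}$ tangible.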
We now will define the volume operator for any compact region
$B\subset\Sigma$. Our first task is to define an operator corresponding
to $\det(q)$ and then to take its square root. Since $\det(q)$ is a
density of weight two we expect this to be quite singular, in fact
the naive definition $\widehat{\det(q)}(x):=\hat{E}^{i}(x)\hat{E}^{i}(x)$
does not make any sense given the expression (3.4) which involves
a factor of $\delta(x,v)$. Thus we are led to point-split the two
degeneracy vector operators and to hope that 1) the regulated operator is
positive so that it makes sense to take its square root and 2) that
one can remove the regulator from the square root. Let us then define,
similarly to the above,
$$\widehat{\det(q)}_{\vec{\epsilon},\vec{\epsilon}^{\prime}}(x):=\int_{\Sigma}d^{2}y\int d^{2}z\delta_{\vec{\epsilon}}(x,y)\delta_{\vec{\epsilon}^{\prime}}(x,z)\hat{E}^{i}(y)\hat{E}^{i}(z)$$
(3.5)
and apply it to a function cylindrical with respect to a graph $\gamma$.
Given (3.4) the result is easily seen to be
$$\displaystyle\widehat{\det(q)}_{\vec{\epsilon},\vec{\epsilon}^{\prime}}(x)f_{\gamma}$$
(3.6)
$$\displaystyle=$$
$$\displaystyle\frac{\hbar^{4}}{64}\sum_{v,v^{\prime}\in V(\gamma)}\delta_{\vec{\epsilon}}(x,v)\delta_{\vec{\epsilon}^{\prime}}(x,v^{\prime})\times$$
$$\displaystyle\times$$
$$\displaystyle\sum_{e_{1},e_{2}\in E(\gamma),e_{1}\cap e_{2}=v}\;\;\sum_{e_{1}^{\prime},e_{2}^{\prime}\in E(\gamma),e_{1}^{\prime}\cap e_{2}^{\prime}=v^{\prime}}\mbox{sgn}(e_{1},e_{2})\mbox{sgn}(e_{1}^{\prime},e_{2}^{\prime})\times$$
$$\displaystyle\times$$
$$\displaystyle[\epsilon^{ijk}X^{j}_{e_{1}}X^{k}_{e_{2}}][\epsilon^{imn}X^{m}_{e_{1}^{\prime}}X^{n}_{e_{2}^{\prime}}]f_{\gamma}\;.$$
We now will accomplish both hopes 1), 2) stated above by appropriately
choosing the regulators.
1) Choose $\vec{\epsilon}=:\vec{\epsilon}^{\prime}$, then
we are able to display (3.5) as a
square of an operator
$$\displaystyle\widehat{\det(q)}_{\vec{\epsilon},\vec{\epsilon}}(x)f_{\gamma}$$
(3.7)
$$\displaystyle=$$
$$\displaystyle\{\frac{\hbar^{2}}{8}\sum_{v\in V(\gamma)}\delta_{\vec{\epsilon}}(x,v)\sum_{e_{1},e_{2}\in E(\gamma),e_{1}\cap e_{2}=v}\mbox{sgn}(e_{1},e_{2})[\epsilon^{ijk}X^{j}_{e_{1}}X^{k}_{e_{2}}]\}^{2}f_{\gamma}\;.$$
Since $X^{i}_{e_{1}}$ and $X^{j}_{e_{2}}$ commute for $e_{1}\not=e_{2}$, and because
$iX_{e}^{i}$ is essentially self-adjoint with range in its domain, so is
$X^{i}_{e_{1}}X^{j}_{e_{2}}$ and therefore the whole operator corresponding
to one factor in (3.7). Thus, (3.7) is a square of
essentially self-adjoint operators with range in its domain and so it
is positive semi-definite. Therefore its square root is well defined.
2) Choose $\vec{\epsilon}$ small enough such that
$\delta_{\vec{\epsilon}}(x,v)\delta_{\vec{\epsilon}}(x,v^{\prime})=\delta_{v,v^{\prime}}[\delta_{\vec{\epsilon}}(x,v)]^{2}$, that is, given $\gamma,x$
we must choose $\vec{\epsilon}$ so small that for $v\not=v^{\prime}$ not both
of them can be in the support of the function $\delta_{\vec{\epsilon}}(x,.)$
which is always possible. Then we may write (3.7) as
$$\displaystyle\widehat{\det(q)}_{\vec{\epsilon},\vec{\epsilon}}(x)f_{\gamma}$$
(3.8)
$$\displaystyle=$$
$$\displaystyle\frac{\hbar^{4}}{64}\sum_{v\in V(\gamma)}[\delta_{\vec{\epsilon}}(x,v)]^{2}\{\sum_{e_{1},e_{2}\in E(\gamma),e_{1}\cap e_{2}=v}\mbox{sgn}(e_{1},e_{2})[\epsilon^{ijk}X^{j}_{e_{1}}X^{k}_{e_{2}}]\}^{2}f_{\gamma},$$
take its square root and define this to be the regulated operator
corresponding to $\sqrt{\det(q)}$ :
$$\widehat{\sqrt{\det(q)}}_{\vec{\epsilon}}(x)f_{\gamma}:=\sqrt{\widehat{\det(q)}_{\vec{\epsilon},\vec{\epsilon}}(x)}f_{\gamma}\;.$$
(3.9)
In considering the limit $\vec{\epsilon}\to 0$ notice that for small enough
$\vec{\epsilon}$ at most one vertex of $\gamma$ lies in the support of
$\delta_{\vec{\epsilon}}(x,.)$. Therefore we can take the sum over
vertices and the factor $[\delta_{\vec{\epsilon}}(x,v)]^{2}$ out of the
square root and find that
$$\widehat{\sqrt{\det(q)}}_{\vec{\epsilon}}(x)f_{\gamma}=\frac{\hbar^{2}}{8}\sum_{v\in V(\gamma)}\delta_{\vec{\epsilon}}(x,v)\sqrt{\{\sum_{e_{1},e_{2}\in E(\gamma),e_{1}\cap e_{2}=v}\mbox{sgn}(e_{1},e_{2})[\epsilon^{ijk}X^{j}_{e_{1}}X^{k}_{e_{2}}]\}^{2}}f_{\gamma}\;.$$
(3.10)
But now the limit $\vec{\epsilon}\to 0$ is trivial to take, and we finally
find that
$$\widehat{\sqrt{\det(q)}}(x)f_{\gamma}=\frac{\hbar^{2}}{8}\sum_{v\in V(\gamma)}\delta(x,v)\sqrt{\{\sum_{e_{1},e_{2}\in E(\gamma),e_{1}\cap e_{2}=v}\mbox{sgn}(e_{1},e_{2})[\epsilon^{ijk}X^{j}_{e_{1}}X^{k}_{e_{2}}]\}^{2}}f_{\gamma}$$
(3.11)
or in integrated form
$$\displaystyle\hat{V}(B)f_{\gamma}:=[\int_{B}d^{2}x\widehat{\sqrt{\det(q)}}(x)]f_{\gamma}$$
(3.12)
$$\displaystyle=$$
$$\displaystyle\frac{\hbar^{2}}{8}\sum_{v\in V(\gamma)\cap B}\sqrt{\{\sum_{e_{1},e_{2}\in E(\gamma),e_{1}\cap e_{2}=v}\mbox{sgn}(e_{1},e_{2})[\epsilon^{ijk}X^{j}_{e_{1}}X^{k}_{e_{2}}]\}^{2}}f_{\gamma}\;.$$
Formula (3.12) motivates us to introduce the “volume operator at a point”
$\hat{V}_{v}$ : For each integer $n\geq 2$ define $\{[v,n]\}$ to be the set of
germs of $n$ analytical edges incident at $v$ (a germ of an analytical
edge at a point $v$ is a complete set of analytical data available at
$v$ that are necessary to reconstruct it, that is, essentially the
coefficients of its Taylor series). For a germ
$\vec{e}_{n}:=(e_{1},..,e_{n})\in\{[v,n]\}$ define
$$\hat{V}_{\vec{e}_{n}}:=\sqrt{\{\sum_{e_{I},e_{J}\in\vec{e}_{n}}\mbox{sgn}(e_{I},e_{J})[\epsilon^{ijk}X^{j}_{e_{I}}X^{k}_{e_{J}}]\}^{2}}$$
where the right invariant vector field $X_{e}^{i}(g)=X_{e}^{i}(gh)\;\forall h\in SU(2)$
depends, due to right invariance, really only on the germ of the edge $e$:
it acts on a function in the same way no matter how “short” the segment
of $e$ is on which that function actually depends, as long as that
segment starts at $v=e(0)$. In particular all $X_{e}^{i}$, $e$ incident at $v$,
commute as long as their germs are different. Then
$$\hat{V}(B)=\sum_{v\in B}\hat{V}_{v}\mbox{ where }\hat{V}_{v}=\sum_{n=2}^{\infty}\sum_{\vec{e}_{n}\in\{[v,n]\}}\hat{V}_{\vec{e}_{n}}\;.$$
(3.13)
We see that $\hat{V}(B)$ is a densely defined, essentially self-adjoint,
positive semi-definite operator on $\cal H$ for each bounded region
$B\subset\Sigma$. Its most interesting property is that it acts
non-trivially only at vertices of the graph underlying a cylindrical
function,
moreover, that vertex has to be such that at least two edges incident at
it have linearly independent tangents there. This is in complete analogy
with the volume operator of the 3+1 theory, except that we need to replace
valence three by valence two everywhere. Unlike the three-dimensional
volume operator, however, its two-dimensional “brother” does not vanish at
two-valent and three-valent vertices at all, as long as there are at least
two edges with linearly independent tangents at the vertex under
consideration. As we will see in the appendix, the two-dimensional
volume operator is even positive definite on gauge invariant
functions with two- and three-valent vertices, while the three-dimensional
volume operator annihilates such functions identically.
This is to be expected because, by inspection of (3.13), the principal
symbol of that operator is non-singular on two- and three-valent
vertices while in the three-dimensional case it is singular.
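The positivity at two-valent vertices can be illustrated with a small matrix computation (a sketch under the assumption that, on spin-network states with both edges in the fundamental representation, the $X^{i}_{e}$ act as multiples of the spin-$\frac{1}{2}$ angular momentum operators, the factors of $i$ dropping out of the square): with $W^{i}=\epsilon^{ijk}J^{j}_{e_{1}}\otimes J^{k}_{e_{2}}$ the operator $W^{i}W^{i}$ on $\mathbb{C}^{2}\otimes\mathbb{C}^{2}$ has strictly positive spectrum, and the gauge invariant singlet carries the eigenvalue $3/4$.

```python
import numpy as np

# spin-1/2 angular momentum operators J^i = sigma_i / 2 (illustrative model of
# the action of the right invariant vector fields at a 2-valent vertex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J = [0.5 * s for s in sigma]

# totally antisymmetric epsilon symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# W^i = eps^{ijk} J^j_{e1} (x) J^k_{e2} on C^2 (x) C^2
W = [sum(eps[i, j, k] * np.kron(J[j], J[k])
         for j in range(3) for k in range(3)) for i in range(3)]
M = sum(w @ w for w in W)          # the operator under the square root, up to factors

evals = np.sort(np.linalg.eigvalsh(M))
print(evals)                       # strictly positive: [0.25 0.25 0.25 0.75]

# the gauge invariant (singlet) state carries the eigenvalue 3/4
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
print(np.vdot(singlet, M @ singlet).real)
```

Algebraically, $W^{i}W^{i}=\frac{3}{8}-\frac{1}{2}\vec{J}_{1}\cdot\vec{J}_{2}$, which is $3/4$ on the singlet and $1/4$ on the triplet, so the spectrum is bounded away from zero, in line with the positive definiteness claimed above.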
The fact that the volume operator acts only at vertices of the graph will
enable us to take the infra-red limit in case we are dealing with
asymptotically flat topologies and also ensure that the ultra-violet limit
exists. Thus, the volume operator acts both as an IR and as a UV
dynamical regulator, a point of view emphasized in [11].
Remark :
Notice that $q_{ab}=\{A_{a}^{i},V\}\{A_{b}^{i},V\}$ just as in the
three-dimensional case. This observation led, in the three-dimensional case,
to the construction of a length operator [12]. The only crucial property
that was necessary to construct this operator was that the volume operator
acts only at vertices. Since that is true for the two-dimensional
operator as well we can therefore take over all the results and
formulae from [12] to the two-dimensional case, except for the
obvious differences which are due to different dimension and algebraic
expressions in terms of right invariant vector fields
of the volume operators. In particular, although the eigenvalues of the
length operators are certainly different, qualitatively the spectrum is
still discrete, the operator is positive semi-definite and essentially
self-adjoint and the length of a curve as measured by a spin-network
state is different from zero only if at least one edge of the graph crosses
the curve, though not necessarily in a vertex. Thus we automatically have
a two-dimensional length operator as well.
The fact that the two-dimensional length operator is less degenerate than
the three-dimensional one can be traced back to the observation that
what is length in two dimensions is what is area in three dimensions.
4 Regularization
This section is divided into three parts : in the first part we will derive
a regulated Wheeler-DeWitt operator. The regularization consists in a
triangulation of
$\Sigma$ which is kept arbitrary at this stage. In the next part we will
specify the properties that we wish to impose on the triangulation and
then make a particular choice which satisfies those properties.
Finally in the last part we complete the regularization by employing that
triangulation and take the continuum limit which then equips us with a
densely defined family of operators, one for each graph.
The presentation will be kept largely parallel to the one in [4] in
order to facilitate comparison.
4.1 Derivation of the regulated operator
We wish to define an operator corresponding to
$$\displaystyle H(N)$$
$$\displaystyle:=$$
$$\displaystyle\int_{\Sigma}d^{2}xNF_{i}\frac{E^{i}}{\sqrt{\det(q)}}$$
(4.1)
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\int d^{2}xN\epsilon^{ab}\epsilon_{ijk}F_{i}\frac{\{A_{a}^{j},V\}\{A_{b}^{k},V\}}{\sqrt{\det(q)}}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\int N\epsilon_{ijk}F_{i}\frac{\{A^{j},V\}\wedge\{A^{k},V\}}{\sqrt{\det(q)}}$$
$$\displaystyle=$$
$$\displaystyle-\int N\mbox{tr}(F\frac{\{A,V\}\wedge\{A,V\}}{\sqrt{\det(q)}})$$
where we have used that $\mbox{tr}(\tau_{i}\tau_{j}\tau_{k})=-\epsilon_{ijk}/2$
and (2.9). Following the idea outlined in section 2
consider now a point splitting of the above expression as follows : Let
$\epsilon$ be a small number and $\chi_{\epsilon}(x,y):=\theta(\frac{\epsilon}{2}-|x^{1}-y^{1}|)\theta(\frac{\epsilon}{2}-|x^{2}-y^{2}|)$,
where $\theta(t)=1$ if $t>0$ and $0$ otherwise; that is, $\chi_{\epsilon}$
is the characteristic function of a square of coordinate volume $\epsilon^{2}$.
Moreover, it is trivially true that $\{A_{a}^{i}(x),V\}=\{A_{a}^{i}(x),V(x,\epsilon)\}$ where
$$V(x,\epsilon):=\int_{\Sigma}d^{2}y\chi_{\epsilon}(x,y)\sqrt{\det(q)(y)}$$
is the volume of the square around $x$ as measured by $q_{ab}$. Notice that
trivially $\lim_{\epsilon\to 0}V(x,\epsilon)/\epsilon^{2}=\sqrt{\det(q)(x)}$.
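The normalization $\lim_{\epsilon\to 0}V(x,\epsilon)/\epsilon^{2}=\sqrt{\det(q)(x)}$ can be made concrete with a throwaway numerical check (the sample metric below is an illustrative choice, not from the paper):

```python
import numpy as np

# sample 2d metric with sqrt(det q)(x) = 1 + (x^1)^2 + (x^2)^2, an illustrative choice
sqrt_det_q = lambda x1, x2: 1.0 + x1**2 + x2**2

def V(x, eps, n=200):
    """V(x, eps): integral of sqrt(det q) over the coordinate square of side
    eps centred at x, evaluated by a composite midpoint rule."""
    t = (np.arange(n) + 0.5) / n * eps - eps / 2      # midpoints of the square
    y1, y2 = np.meshgrid(x[0] + t, x[1] + t, indexing="ij")
    return np.sum(sqrt_det_q(y1, y2)) * (eps / n) ** 2

x = (0.4, -0.2)
for eps in [0.1, 0.01, 0.001]:
    print(eps, V(x, eps) / eps**2)   # -> sqrt(det q)(x) = 1.2 as eps -> 0
```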
Therefore we have the identity (we write the density $F^{i}$ as a
2-form)
$$\displaystyle\lim_{\epsilon\to 0}\int N(x)\mbox{tr}(F(x)\int\chi_{\epsilon}(x,y)\frac{\{A(y),V\}\wedge\{A(y),V\}}{V(y,\epsilon)})$$
(4.2)
$$\displaystyle=$$
$$\displaystyle\lim_{\epsilon\to 0}\int N(x)\mbox{tr}(F(x)\int\frac{\chi_{\epsilon}(x,y)}{\epsilon^{2}}\frac{\{A(y),V\}\wedge\{A(y),V\}}{V(y,\epsilon)/\epsilon^{2}})$$
$$\displaystyle=$$
$$\displaystyle\int N(x)\mbox{tr}(F(x)\int[\lim_{\epsilon\to 0}\frac{\chi_{\epsilon}(x,y)}{\epsilon^{2}}]\frac{\{A(y),V\}\wedge\{A(y),V\}}{\lim_{\epsilon\to 0}V(y,\epsilon)/\epsilon^{2}})$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}H(N),$$
that is, the point splitting singularity $1/\epsilon^{2}$ was absorbed
into $V(y,\epsilon)$.
The limit identity (4.2) motivates us to define the point split expression
$$\displaystyle H_{\epsilon}(N):=-\int N(x)\mbox{tr}(F(x)\int\chi_{\epsilon}(x,y)\frac{\{A(y),V\}\wedge\{A(y),V\}}{V(y,\epsilon)})$$
(4.3)
$$\displaystyle=$$
$$\displaystyle-\int N(x)\mbox{tr}(F(x)\int\chi_{\epsilon}(x,y)\frac{\{A(y),V(y,\epsilon)\}}{\sqrt{V(y,\epsilon)}}\wedge\frac{\{A(y),V(y,\epsilon)\}}{\sqrt{V(y,\epsilon)}})$$
$$\displaystyle=$$
$$\displaystyle-4\int N(x)\mbox{tr}(F(x)\int\chi_{\epsilon}(x,y)\{A(y),\sqrt{V(y,\epsilon)}\}\wedge\{A(y),\sqrt{V(y,\epsilon)}\}),$$
that is, the simple formula $\{.,\sqrt{V(y,\epsilon)}\}=\{.,V(y,\epsilon)\}/(2\sqrt{V(y,\epsilon)})$ enabled us to bring the volume functional from the
denominator into the numerator, of course inside the Poisson bracket.
The idea is now to replace Poisson brackets by commutators and the volume
functional by the volume operator and then take the limit $\epsilon\to 0$.
In order to do that we must first write (4.3) in terms of quantities on
which the volume operator knows how to act. Since, as is obvious from the
previous section, it only knows how to act on
functions of holonomies along edges, we must replace the connection
field $A_{a}^{i}$ in (4.3) by holonomies. We are thus forced to
introduce a triangulation of $\Sigma$.
Denote by $\Delta$ a solid triangle. Single out one of the corners of the
triangle and call it $v(\Delta)$, the basepoint of $\Delta$. At $v(\Delta)$
two edges $s_{1}(\Delta),s_{2}(\Delta)$ of $\partial\Delta$ are incident,
which we equip with outgoing orientation, that is, they start at $v(\Delta)$.
We fix the labelling as follows : let $s$ be the analytic extension
of $s_{1}(\Delta)$ and $\bar{s}_{1}(\Delta)$ the half of $s$ starting at
$v(\Delta)$ but not including $s_{1}(\Delta)-\{v\}$ with outgoing
orientation at $v(\Delta)$. Let $U$ be a sufficiently small neighbourhood
of $v(\Delta)$ which is split into two halves by $s$. Define the upper half $U^{+}$ of $U$
to be that half of $U$ which one intersects as one turns $s_{1}(\Delta)$
counterclockwise into $\bar{s}_{1}(\Delta)$.
Now we require that there exists $U$ such that $U\cap s_{2}(\Delta)=U^{+}\cap s_{2}(\Delta)$, that is, $s_{2}(\Delta)$ intersects the upper half of
$U$.
Definition 4.1
Two analytical edges $e_{1},e_{2}$ incident and outgoing at $v=e_{1}\cap e_{2}$
will be said to be right oriented iff
there exists a neighbourhood $U$ of $v$, its upper half $U^{+}$
being defined by $e_{1}$, such that $e_{2}$ intersects $U^{+}$.
This prescription is obviously
diffeomorphism invariant. Notice that we did not, as is usually done
for triangulations, require that the edges bounding $\Delta$
have linearly independent tangents at their intersection. If they
are linearly independent then our prescription is equivalent to saying that
$\epsilon_{ab}\dot{s}_{1}^{a}(0)\dot{s}_{2}^{b}(0)>0$.
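For linearly independent tangents the right-orientation criterion, like the orientation factor $\mbox{sgn}(e,e^{\prime})$ of (3.4), is therefore just the sign of a $2\times 2$ determinant; in code (hypothetical helper, for illustration only):

```python
def right_oriented(s1_dot, s2_dot):
    """eps_{ab} s1dot^a s2dot^b > 0 for the outgoing tangents at the common
    basepoint; decides only the non-degenerate (linearly independent) case."""
    cross = s1_dot[0] * s2_dot[1] - s1_dot[1] * s2_dot[0]
    return cross > 0

print(right_oriented((1.0, 0.0), (0.0, 1.0)))   # True: counter-clockwise pair
print(right_oriented((0.0, 1.0), (1.0, 0.0)))   # False
```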
Finally, let $a(\Delta)$ denote the remaining edge of $\partial(\Delta)$,
called the arc of $\Delta$, whose orientation we fix by requiring that it
runs from the endpoint of $s_{1}(\Delta)$ to the endpoint of $s_{2}(\Delta)$.
Then $\partial\Delta=\alpha_{12}(\Delta)=s_{1}(\Delta)\circ a(\Delta)\circ s_{2}(\Delta)^{-1}$ is called the loop of $\Delta$ based at $v(\Delta)$. We
define also $\alpha_{21}(\Delta):=\alpha_{12}(\Delta)^{-1}$.
Let us now write the integral
over $\Sigma\times\Sigma$ in (4.3) as a double sum of integrals
over $\Delta\times\Delta^{\prime}$ where $\Delta,\Delta^{\prime}$ are triangles of some
triangulation $T$ of $\Sigma$
$$H_{T,\epsilon}(N)=-4\sum_{\Delta,\Delta^{\prime}\in T}\mbox{tr}(\int_{\Delta^{\prime}}N(x)F(x)\int_{\Delta}\chi_{\epsilon}(x,y)\{A(y),\sqrt{V(y,\epsilon)}\}\wedge\{A(y),\sqrt{V(y,\epsilon)}\})\;.$$
(4.4)
The purpose of the notation just introduced is that we may approximate, for
sufficiently fine triangulation, each of the integrals by a function of
holonomies as follows : Let $\delta$ be a small parameter and $s_{i}(\Delta)$
be the image of $[0,\delta]$ under the path $s_{i}(\Delta,t)$. Then, using
smoothness of the connection we find
$$\displaystyle N(v(\Delta^{\prime}))\chi_{\epsilon}(v(\Delta^{\prime}),y)\epsilon^{ij}h_{\alpha_{ij}(\Delta^{\prime})}$$
(4.5)
$$\displaystyle=$$
$$\displaystyle 2\delta^{2}N(v(\Delta^{\prime}))\chi_{\epsilon}(v(\Delta^{\prime}),y)\frac{\dot{s}_{1}^{a}(\Delta^{\prime},0)\dot{s}_{2}^{b}(\Delta^{\prime},0)}{2}F_{ab}(v(\Delta^{\prime}))+o(\delta^{3})$$
$$\displaystyle=$$
$$\displaystyle 2\int_{\Delta^{\prime}}\chi_{\epsilon}(x,y)N(x)F(x)+o(\delta^{3})\mbox{ and }$$
$$\displaystyle\chi_{\epsilon}(x,v(\Delta))\epsilon^{ij}h_{s_{i}(\Delta)}\{h_{s_{i}(\Delta)}^{-1},\sqrt{V(v(\Delta),\epsilon)}\}h_{s_{j}(\Delta)}\{h_{s_{j}(\Delta)}^{-1},\sqrt{V(v(\Delta),\epsilon)}\}$$
$$\displaystyle=$$
$$\displaystyle\chi_{\epsilon}(x,v(\Delta))\delta^{2}\dot{s}_{1}^{a}(\Delta,0)\dot{s}_{2}^{b}(\Delta,0)\{A_{a}(v(\Delta)),\sqrt{V(v(\Delta),\epsilon)}\}\{A_{b}(v(\Delta)),\sqrt{V(v(\Delta),\epsilon)}\}$$
$$\displaystyle=$$
$$\displaystyle 2\int\chi_{\epsilon}(x,y)\{A(y),\sqrt{V(y,\epsilon)}\}\wedge\{A(y),\sqrt{V(y,\epsilon)}\}+o(\delta^{3})$$
since the area of $\Delta$ is approximately $\delta^{2}\epsilon_{ab}\dot{s}_{1}^{a}(\Delta,0)\dot{s}_{2}^{b}(\Delta,0)/2$
so that both integrals are of order $\delta^{2}$ provided that the
tangents of $\partial(\Delta)$ at $v(\Delta)$ are linearly independent.
Thus, up to an error of
order $\delta^{2}$ which vanishes in the limit as we refine the
triangulation, we may substitute (4.4) by
$$\displaystyle H_{T,\epsilon}(N)$$
(4.6)
$$\displaystyle=$$
$$\displaystyle-2\sum_{\Delta,\Delta^{\prime}\in T}\epsilon^{ij}\epsilon^{kl}N(v(\Delta^{\prime}))\chi_{\epsilon}(v(\Delta^{\prime}),v(\Delta))\times$$
$$\displaystyle\times$$
$$\displaystyle\mbox{tr}(h_{\alpha_{ij}(\Delta^{\prime})}h_{s_{k}(\Delta)}\{h_{s_{k}(\Delta)}^{-1},\sqrt{V(v(\Delta),\epsilon)}\}h_{s_{l}(\Delta)}\{h_{s_{l}(\Delta)}^{-1},\sqrt{V(v(\Delta),\epsilon)}\})\;.$$
The result just derived is still purely classical and becomes $H(N)$ when
taking
1) first the continuum limit (that is, refining the triangulation ad
infinitum) and
2) then taking $\epsilon\to 0$ on smooth connections $A_{a}^{i}$ and
smooth momenta $E^{a}_{i}$.
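The small-triangle expansion underlying (4.5), namely $h_{\alpha_{12}(\Delta)}=1+\delta^{2}\frac{\dot{s}^{a}_{1}\dot{s}^{b}_{2}}{2}F_{ab}(v(\Delta))+o(\delta^{2})$, can be tested numerically for a constant $su(2)$ connection, for which $F_{12}=[A_{1},A_{2}]$ (a sketch; the convention that earlier path segments stand to the left in the path-ordered holonomy, and the particular connection, are our own choices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [-0.5j * s for s in sigma]

def su2_exp(a):
    """exp(sum_i a_i tau_i), in closed form for su(2)."""
    th = np.linalg.norm(a)
    if th == 0.0:
        return np.eye(2, dtype=complex)
    n_sigma = sum(a[i] / th * sigma[i] for i in range(3))
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * n_sigma

# constant connection A_1 = tau_1, A_2 = tau_2 (tau-components), illustrative
a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
F12 = tau[0] @ tau[1] - tau[1] @ tau[0]      # F_12 = [A_1, A_2] for constant A

delta = 1e-3
# holonomy around the triangle (0,0)->(delta,0)->(0,delta)->(0,0); for a
# constant connection each straight segment contributes exp(A_a dx^a) exactly
h = (su2_exp(delta * a1)                # dx = (delta, 0)
     @ su2_exp(delta * (a2 - a1))       # dx = (-delta, delta)
     @ su2_exp(-delta * a2))            # dx = (0, -delta)

area = 0.5 * delta**2                   # coordinate area of the triangle
err = np.linalg.norm(h - np.eye(2) - area * F12)
print(err)                              # of order delta^3
```

So the holonomy deviates from the identity at order $\delta^{2}$, with coefficient (coordinate area)$\times F_{12}$, exactly the order to which the regulated Hamiltonian is sensitive.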
A second way to guide the limit, which also leads to $H(N)$,
is to “synchronize”
$\epsilon\approx\delta$ and to take $\delta\to 0$ as follows : for each
$\Delta$ define
$$\epsilon(\Delta):=\sqrt{|\epsilon_{ab}\dot{s}^{a}_{1}(\Delta,0)\dot{s}^{b}_{2}(\Delta,0)|}\,\delta,$$
replace for each $\Delta^{\prime}$ :
1)$\chi_{\epsilon}(v(\Delta),v(\Delta^{\prime}))$ by
$\chi_{\epsilon(\Delta^{\prime})}(v(\Delta),v(\Delta^{\prime}))$
and
2) $V(v(\Delta^{\prime}),\epsilon)$ by $V(v(\Delta^{\prime}),\epsilon(\Delta^{\prime}))$
and then take $\delta\to 0$. Notice that this corresponds to introducing
$\epsilon(y)=\rho(y)\delta$ instead of $\epsilon$ in (4.2) where
$\rho(y)$ is an almost nowhere (with respect to $d^{2}x$)
vanishing function such that $\rho(v(\Delta))\delta=\epsilon(\Delta)$.
Clearly $\rho$ must be almost nowhere vanishing as otherwise we do not
get a $\delta$ distribution in the limit $\delta\to 0$.
Notice that the set of $v(\Delta^{\prime})$’s has $d^{2}x$ measure zero so that
a vanishing $\rho(v(\Delta^{\prime}))$ is not worrisome.
It will be this latter limit which is meaningful in the quantum theory.
We have managed to write $H(N)$ in terms of holonomies
up to an error which vanishes in either of the limits that we have indicated.
The next step is to turn the regulated expression into a quantum operator.
This just consists of replacing $V(v(\Delta),\epsilon)$ by
$\hat{V}(v(\Delta),\epsilon)$ and Poisson brackets by commutators times
$1/(i\hbar)$, because we work in a connection representation. The result is
$$\displaystyle\hat{H}_{T,\epsilon}(N)$$
$$\displaystyle=$$
$$\displaystyle\frac{2}{\hbar^{2}}\sum_{\Delta,\Delta^{\prime}\in T}\epsilon^{ij}\epsilon^{kl}N(v(\Delta^{\prime}))\chi_{\epsilon}(v(\Delta^{\prime}),v(\Delta))\times$$
(4.7)
$$\displaystyle\times$$
$$\displaystyle\mbox{tr}(h_{\alpha_{ij}(\Delta^{\prime})}h_{s_{k}(\Delta)}[h_{s_{k}(\Delta)}^{-1},\sqrt{\hat{V}(v(\Delta),\epsilon)}]h_{s_{l}(\Delta)}[h_{s_{l}(\Delta)}^{-1},\sqrt{\hat{V}(v(\Delta),\epsilon)}])\;.$$
We wish to show that (4.7) is densely defined in the limit
$\epsilon\to 0$ no matter how we choose the triangulation $T$, as long as it
is finite, thereby showing that
the regulator $\epsilon$ can be removed without encountering any
singularity. Thus, we prescribe the $\epsilon\to 0$ limit before
taking the limit of infinitely fine triangulation (continuum limit) and
therefore have interchanged the order of limits as compared to the classical
theory. However, as we will show shortly, one arrives at the same result
when synchronizing $\epsilon\approx\delta$
and taking $\delta$ sufficiently small but, for the moment, finite. This
corresponds to
the second way of guiding the classical limit indicated above, and therefore
interchanging the limits is allowed.
For that purpose let $f_{\gamma}$
be a function which is cylindrical with respect to a graph. Consider
first some triangle $\Delta$ which does not intersect $\gamma$ at all.
Then it is easy to see that
$$[h_{s_{l}(\Delta)}^{-1},\sqrt{\hat{V}(v(\Delta),\epsilon)}]f_{\gamma}=0\;.$$
The reason for this is that the graphs $\gamma$ and $\gamma\cup s_{l}(\Delta)$
then do not have any two-valent vertex in the box around
$v(\Delta)$ parametrized by $\epsilon$ other than the vertices of
$\gamma$ themselves. Thus the volume operator does not act on
$h_{s_{l}(\Delta)}^{-1}$ and the commutator vanishes. It follows that
only triangles which intersect the graph contribute in (4.7).
So let $\gamma\cap\Delta\not=\emptyset$. For the same reason as above we
find a non-zero contribution only if $s_{1}(\Delta)$ or $s_{2}(\Delta)$ intersect
$\gamma$; that $a_{12}(\Delta)$ alone intersects $\gamma$ is not sufficient.
Moreover, still
for the same reason, if $s_{i}(\Delta)$ intersects $\gamma$ but not in the
starting point of $s_{i}(\Delta)$ then we still get zero upon choosing
$\epsilon$ sufficiently small so that the intersection point $p$ lies outside
the support of the characteristic function, that is,
$\chi_{\epsilon}(v(\Delta),p)=0$. Thus a triangle
$\Delta$ contributes to
(4.7) if and only if $v(\Delta)\in\gamma$. But if that is true then
we may replace $\hat{V}(v(\Delta),\epsilon)$ by the operator
$\hat{V}_{v(\Delta)}$ defined in (3.13) and so the $\epsilon$-dependence
of $\hat{V}(v(\Delta),\epsilon)$ has dropped out. The remaining
$\epsilon$-dependence now just rests in the function
$\chi_{\epsilon}(v(\Delta^{\prime}),v(\Delta))$. Now, since we let $\epsilon\to 0$
first, at finite triangulation, we conclude altogether that the
unrestricted double sum over triangles in (4.7) collapses to a
double sum over triangles subject to the condition that their basepoints
coincide and lie on the graph. In formulae,
$$\displaystyle\hat{H}_{T}(N)f_{\gamma}:=\lim_{\epsilon\to 0}\hat{H}_{T,\epsilon}(N)f_{\gamma}$$
(4.8)
$$\displaystyle=$$
$$\displaystyle\frac{2}{\hbar^{2}}\sum_{\Delta,\Delta^{\prime}\in T,v:=v(\Delta)=v(\Delta^{\prime})\in\gamma}\epsilon^{ij}\epsilon^{kl}N(v)\times$$
$$\displaystyle\times$$
$$\displaystyle\mbox{tr}(h_{\alpha_{ij}(\Delta^{\prime})}h_{s_{k}(\Delta)}[h_{s_{k}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}]h_{s_{l}(\Delta)}[h_{s_{l}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}])f_{\gamma}$$
which displays $\hat{H}_{T}(N)$ as a densely defined operator which
does not suffer from any singularities because at finite triangulation
there are only a finite number of terms involved in (4.8), even
if $\Sigma$ is not compact.
Notice that in the $\epsilon\to 0$ limit we have recovered a gauge
invariant operator as we should.
Let us now show that one arrives at the same result by synchronizing
$\epsilon\approx\delta$ as above and taking $\delta$ sufficiently small
but still finite : Namely, by choosing $\epsilon(\Delta^{\prime})$ as above
we have arranged that only the starting points of the $s_{i}(\Delta^{\prime})$ are
covered
by the $\epsilon(\Delta^{\prime})$-box around $v(\Delta^{\prime})$ that underlies the
definition
of $\hat{V}(v(\Delta^{\prime}),\epsilon(\Delta^{\prime}))$. This implies first of all
that we need
to sum only over $v(\Delta)=v(\Delta^{\prime})$. Next, as we will be forced to
adapt the triangulation to the graph anyway, we
can arrange that the $\Delta$ intersect $\gamma$ only either in whole
edges or in vertices of $\Delta$. If that is the case, then it follows that
$[h_{s_{i}(\Delta)},\sqrt{\hat{V}(v(\Delta),\epsilon(\Delta))}]f_{\gamma}$ is
non-vanishing only if $s_{i}(\Delta)$ intersects $\gamma$ in $v(\Delta)$
because the end-point is not covered by the $\epsilon(\Delta)$-box and
if $s_{i}(\Delta)$ is contained in $\gamma$ but does not start in a
vertex of $\gamma$ then the commutator vanishes due to the properties of the
volume operator. This is enough to see that we arrive at (4.8) again.
In either way of taking the limit
we are now left with taking the continuum limit $\delta\to 0$ of refining
the triangulation ad infinitum, which we denote by $T\to\infty$. Certainly that
limit depends largely on how the refinement $T\to\infty$ is chosen. For instance,
if we are not careful and refine $T$ in such a way that the number of
basepoints of triangles that intersect $\gamma$ diverges, we will not get
a densely defined operator. We see that we must choose $T$ according to
$\gamma$ so that we actually get a family of operators
$$\hat{H}_{\gamma,T}(N)=\hat{H}_{T(\gamma)}(N)$$
where $T(\gamma)$ is a triangulation adapted to $\gamma$ together with a
well-defined refinement procedure $T\to\infty$. We will propose such a
$T(\gamma)$
in the next subsection guided by some physical principles. It will then
be our task to verify that the family $(\hat{H}_{\gamma,T}(N))$ still
defines a linear operator.
4.2 Choice of the triangulation
So far everything we said was in complete analogy with the
three-dimensional case
[4], except that there we did not even need a point-splitting. In
particular, (4.8) is the precise counterpart of the three-dimensional
Euclidean Hamiltonian constraint operator.
What is different now is that in the 3+1 case the volume operator was
much more degenerate than in the 2+1 case, a result of which was that
a basepoint of a simplex had to coincide with a vertex of the graph in
order to contribute without further specification of the triangulation.
Therefore, it was sufficient to adapt the triangulation to the
graph in such a way that, among other things, the number of simplices
intersecting a vertex stays constant as one refines the triangulation in
order to arrive at well-defined continuum limit. In the 2+1 case that is not
true any longer and one must worry about the number of
triangles intersecting the graph $\gamma$ off the vertices of $\gamma$.
Let us adopt the physical principles listed in [4] which should
guide the choice of the triangulation. In brief, they were :
1)
The amount of ambiguity arising from the choice of the
triangulation should be kept to a minimum.
2)
The resulting operator should be non-trivial and not annihilate every state.
3)
The choice of the contributing $\Delta$ should be diffeomorphism
covariant so as to interact well with the diffeomorphism invariance of the
theory.
4)
The choice of the $\Delta$ should be canonical and not single
out one part of the graph as compared to the other or one graph as
compared to another.
5)
The family of operators $\hat{H}_{\gamma}(N)$ should define a linear operator
$\hat{H}(N)$ (cylindrical consistency).
6)
The resulting operator $\hat{H}_{\gamma,T}(N)$ should be densely
defined with a well-defined continuum limit. That is, if $\Psi\in\Phi^{\prime}$ is a
diffeomorphism invariant distribution and $f_{\gamma}$ a function
cylindrical with respect to a graph $\gamma$ then
$$\lim_{T\to\infty}\Psi(\hat{H}_{\gamma,T}(N)f_{\gamma})=:\Psi(\hat{H}(N)f_{\gamma})$$
exists. We require $\Psi$ to be diffeomorphism invariant because we
actually want to define $\hat{H}(N)$ on solutions to the diffeomorphism
constraint, which turn out to be distributions [7],
and so the above limit is
the precise sense in which $\hat{H}(N)$ is defined on distributions.
7)
The operator $\hat{H}(N)$ should be free of anomalies, that is,
$$\Psi([\hat{H}(M),\hat{H}(N)]\phi)=0$$
for each $\phi\in\Phi$ and every diffeomorphism invariant $\Psi\in\Phi^{\prime}$.
Since we wish to obtain a densely defined operator no matter how fine the
triangulation, while keeping the extra structure coming from the
triangulation to a minimum, we are naturally led to impose that the
triangles that intersect $\gamma$ in their basepoint must be constant in
number. There are only two diffeomorphism invariantly different
possibilities : either $v(\Delta)$ is a vertex of $\gamma$ or it lies
on an edge of $\gamma$ between its endpoints. Since we want to get a
non-vanishing operator one of the two or both scenarios should happen.
Suppose first then that $v(\Delta)$ is an interior point of an edge $e$.
Then there is no natural way to choose the triangle $\Delta$ itself :
the only structure available is the edge $e$ and one may therefore choose
one of $s_{i}(\Delta)$, say $s_{1}(\Delta)$, to lie
entirely in $e$. But then $s_{2}(\Delta)$ should certainly not lie in $e$,
for otherwise $v(\Delta)$ would be a vertex of $\gamma\cup\partial\Delta$
with only co-linear tangents of edges incident at it and the volume operator
$\hat{V}_{v(\Delta)}$ would vanish. Thus there is at least a huge
ambiguity in how to choose $s_{2}(\Delta)$.
If, on the other hand, $v(\Delta)$ is a vertex of the graph then there
are at least two edges $e,e^{\prime}$ of $\gamma$ incident at it and now it is a
natural choice to assume that $s_{i}(\Delta)$ coincide with segments of
$e,e^{\prime}$.
In conclusion, guided by the principle of introducing as few ambiguous
elements as possible into the triangulation, we are motivated to exclude
that a $v(\Delta)$ is an interior point of an edge, or else to ensure that it
does not contribute anyway. The latter can be achieved by assuming that the
edges $s_{i}(\Delta)$ have co-linear tangents at $v$ in this case.
Now we are left with those $v(\Delta)$ that are vertices of $\gamma$.
Following the principle that our prescription should be canonical, we
must have that either each vertex of $\gamma$ is a basepoint of some
$\Delta$ or none is. Since the latter possibility is excluded by the
principle of non-triviality, we are now concerned with the issue of how many
$\Delta$’s should have their basepoint at each $v\in V(\gamma)$. A natural
answer to this question is that there should be as many such $\Delta$’s as
pairs of edges incident at $v$, because otherwise we would single out one
pair over another. However, we still need to fulfill the requirement that
the $\Delta$’s must come from a triangulation. Both observations motivate
us to define a whole family of triangulations adapted to $\gamma$ and to
average over them.
Finally, we must fix in a diffeomorphism covariant way how to attach
the arcs $a_{12}(\Delta)$ to $\gamma$. Notice that since $\Delta$ is
part of a triangulation with $v(\Delta)$ a vertex of $\gamma$ and with
$s_{i}(\Delta)$ segments of edges of $\gamma$ incident at $v$, it is
possible that the endpoints of $a_{12}(\Delta)$ are actually basepoints
of other triangles $\Delta^{\prime}$. This we must either avoid by choice
(which is possible) or we must impose that the tangents of $s_{i}(\Delta)$
and $a_{12}(\Delta)$ are co-linear at the endpoints of $a_{12}(\Delta)$.
As we will see, only the latter possibility leads to an anomaly-free theory.
This furnishes our preliminary investigation of how to choose $T(\gamma)$.
We will now prescribe $T(\gamma)$. The prescription is simpler but very
similar to the three-dimensional case.
Fix a vertex $v$ of $\gamma$ and let $n$ denote its valence. We can
label the edges of $\gamma$ incident at $v$ in such a way that
1) the pairs $(e_{1},e_{2}),(e_{2},e_{3}),..,(e_{n-1},e_{n})$ are right oriented
and possibly also $(e_{n},e_{1})$ is right oriented according to definition
(4.1) and
2) as one encircles $v$ counter-clockwise, one does not cross any other
edge after one crosses $e_{i}$ and before one crosses $e_{i+1}$ where
$e_{n+1}\equiv e_{1}$. We are
going to construct a triangle $\Delta$ associated
with each such right oriented pair which we will call $(e_{1},e_{2})$ from
now on. We do not, in contrast to the 3+1 theory, construct a triangle
associated with each pair because then $a_{12}$ in two dimensions would
intersect
not only $s_{1},s_{2}$ but also other edges of the graph which we must avoid
in order to have an anomaly-free theory as we will see. Moreover, in two
dimensions the way we ordered the edges incident at $v$ is very natural
and not available in three dimensions.
Finally, let $E(v)$ equal $n$ if $(e_{n},e_{1})$ is right oriented,
otherwise let it equal $n-1$. In particular, for $n=2$ we must have
$E(v)=1$.
We now choose
$s_{i}(\Delta)$ to be any segment of $e_{i}$ which starts at $v$ and does
not include the other endpoint of $e_{i}$. Furthermore, we
connect the endpoint of $s_{1}(\Delta)$ with the endpoint of
$s_{2}(\Delta)$ by an arc $a_{12}(\Delta)$ with the special property
that the tangent of $a_{12}(\Delta)$ is
1) parallel to the tangent of $s_{1}(\Delta)$
at the end-point of $s_{1}(\Delta)$ and
2) anti-parallel to the tangent of $s_{2}(\Delta)$
at the end-point of $s_{2}(\Delta)$.
Two remarks are in order :
a) Notice that we do not have to worry about any other edge of $\gamma$
intersecting $a_{12}(\Delta)$ because in two dimensions the topology
of the routing of $a_{12}$ through the edges of $\gamma$ is very simple :
there is no way that $a_{12}$ can intersect any other edges of $\gamma$
other than $s_{1},s_{2}$ given the labelling of $e_{i}$ made above. This is in
contrast to the three-dimensional case where the topology of the routing
was extremely complicated to prescribe in a diffeomorphism covariant way.
b) In contrast to the three-dimensional case we here prescribed the
$C^{1}$ properties of the edges $s_{1},s_{2},a_{12}$ at their intersection points.
The reason for this will become evident only later when we prove
anomaly-freeness. We will see that the $C^{1}$ property of the
intersection is crucial.
Whenever $(e_{n},e_{1})$ is a right oriented pair, the $n$ triangles saturate
$v$. Otherwise there are only $n-1$ triangles and they do not yet
saturate $v$. We follow the approach proposed in [4] in order to
achieve saturation. Namely, we take each of the $E(v)$ triangles and
construct three more from it such that altogether they saturate $v$. Then
we average over the $E(v)$ triangulations based on using only one such
quadruple of triangles. The details are as follows :
Let $s_{i}(t),a_{12}(t)$ be a parametrization of $s_{i},a_{12}$ with $t\in[0,1]$. Let
$s_{\bar{i}}(t):=v-(s_{i}(t)-v)=2v-s_{i}(t)$,
$a_{\bar{1}\bar{2}}(t):=2v-a_{12}(t)$,
$a_{\bar{2}1}(t):=s_{\bar{2}}(1)+t(s_{1}(1)-s_{\bar{2}}(1))$,
$a_{2\bar{1}}(t):=s_{2}(1)+t(s_{\bar{1}}(1)-s_{2}(1))$.
Then it is easy to see that $(s_{\bar{1}},s_{\bar{2}}),(s_{\bar{2}},s_{1}),(s_{2},s_{\bar{1}})$ are right oriented pairs and that
the four triangles $\Delta_{12},\Delta_{\bar{1}\bar{2}},\Delta_{\bar{2}1},\Delta_{2\bar{1}}$ based on these pairs of edges
saturate $v$ (use $a_{12}(0)=s_{1}(1),a_{12}(1)=s_{2}(1)$ to see this).
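The closure of this quadruple can be checked with a short script. This is purely illustrative: the segments are hypothetical straight lines and the arc $a_{12}$ a straight chord, which ignores the $C^{1}$ tangent conditions imposed above but exhibits the same endpoint bookkeeping.

```python
# Illustration only: straight segments out of the vertex v and a straight chord
# for a_12 (the chord violates the C^1 tangent conditions of the text, but the
# endpoint bookkeeping of the four-triangle construction is the same).
v = (0.5, 0.5)

def seg(p, q):
    """Straight curve t -> p + t(q - p), t in [0, 1]."""
    return lambda t: (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def reflect(c):
    """Point reflection through v, t -> 2v - c(t), as for s_ibar and a_1bar2bar."""
    return lambda t: (2 * v[0] - c(t)[0], 2 * v[1] - c(t)[1])

def close(p, q, eps=1e-12):
    return abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps

s1 = seg(v, (0.6, 0.5))    # segment of e_1; (e_1, e_2) right oriented
s2 = seg(v, (0.5, 0.6))    # segment of e_2
s1b, s2b = reflect(s1), reflect(s2)

a12 = seg(s1(1), s2(1))    # a_12(0) = s_1(1), a_12(1) = s_2(1)
a12b = reflect(a12)        # a_{1bar 2bar}
a2b1 = seg(s2b(1), s1(1))  # a_{2bar 1}
a21b = seg(s2(1), s1b(1))  # a_{2 1bar}

# Each arc connects the endpoints of the segments of its triangle, so the four
# triangles Delta_12, Delta_1bar2bar, Delta_2bar1, Delta_21bar close up and
# their segments point into all four sectors around v (saturation).
assert close(a12b(0), s1b(1)) and close(a12b(1), s2b(1))
assert close(a2b1(0), s2b(1)) and close(a2b1(1), s1(1))
assert close(a21b(0), s2(1)) and close(a21b(1), s1b(1))
```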
Let now $S_{i}(v)$ denote the region in $\Sigma$ filled by these four triangles
based on a pair of edges $(e_{i},e_{i+1})$ incident at $v$. Also denote by
$\Delta_{i}(v)$ the original triangle defined by $s_{1},s_{2},a_{12}$ for that
pair from which we constructed the remaining three triangles as above.
Let $S(v):=\cup_{i=1}^{E(v)}S_{i}(v)$ be the union of these regions given
by all the $E(v)$ pairs and let $\bar{S}_{i}(v)=S(v)-S_{i}(v)$. We will
choose all the triangles so small that the $S(v)$ are mutually disjoint.
Finally, let $S=\cup_{v\in V(\gamma)}S(v)$ and $\bar{S}=\Sigma-S$. Then
we can trivially decompose any integral over $\Sigma$ as follows
$$\displaystyle\int_{\Sigma}$$
$$\displaystyle=$$
$$\displaystyle\int_{\bar{S}}+\sum_{v\in V(\gamma)}\int_{S(v)}$$
(4.9)
$$\displaystyle=$$
$$\displaystyle\int_{\bar{S}}+\sum_{v\in V(\gamma)}\frac{1}{E(v)}\sum_{i=1}^{E(v%
)}[\int_{\bar{S}_{i}(v)}+\int_{S_{i}(v)}]$$
$$\displaystyle=$$
$$\displaystyle[\int_{\bar{S}}+\sum_{v\in V(\gamma)}\frac{1}{E(v)}\sum_{i=1}^{E(%
v)}\int_{\bar{S}_{i}(v)}]+\sum_{v\in V(\gamma)}\frac{1}{E(v)}\sum_{i=1}^{E(v)}%
\int_{S_{i}(v)}$$
$$\displaystyle=$$
$$\displaystyle[\int_{\bar{S}}+\sum_{v\in V(\gamma)}\frac{1}{E(v)}\sum_{i=1}^{E(%
v)}\int_{\bar{S}_{i}(v)}]+[\sum_{v\in V(\gamma)}\frac{4}{E(v)}\sum_{i=1}^{E(v)%
}\int_{\Delta_{i}(v)}+o(\delta^{3})]\;.$$
In the last line we have exploited that for smooth integrands and small
triangles the integral over each of the four triangles constructed is
the same up to higher order in the parameter $\delta$ introduced before
equation (4.5). It is clear that the term in the first square
bracket of the last line in (4.9) is a sum of integrals over regions of
$\Sigma$ each of which does not contain vertices of $\gamma$.
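The error estimate in the last line of (4.9) can be made explicit. For a smooth integrand $f$ and a triangle of diameter of order $\delta$, Taylor expansion around the basepoint gives (a standard estimate, spelled out here for convenience):

```latex
\int_{\Delta} d^{2}x\, f(x)
  = f(v(\Delta))\,\mathrm{vol}(\Delta) + O(\delta^{3}),
\qquad \mathrm{vol}(\Delta) = O(\delta^{2}).
```

Since the four triangles making up $S_{i}(v)$ are built from point reflections of the segments of $\Delta_{i}(v)$ through $v$, their volumes agree to leading order; hence each of the four integrals equals $\int_{\Delta_{i}(v)}$ up to $O(\delta^{3})$ and $\int_{S_{i}(v)}=4\int_{\Delta_{i}(v)}+O(\delta^{3})$.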
We are now ready to specify the family of triangulations $T(\gamma)$ of
$\Sigma$,
which by (4.9) can actually be reduced to a family of triangulations of
$\bar{S},\bar{S}_{i}(v),\Delta_{i}(v)$ for $v\in V(\gamma),i=1,..,E(v)$ :
1) Triangulate $\Delta_{i}(v)$ by $\Delta_{i}(v)$
2) Triangulate $\bar{S}$ and $\bar{S}_{i}(v)$ arbitrarily subject to the
condition that no basepoint of a triangle should lie on an edge of $\gamma$
or that all tangents at an intersection with an edge of $\gamma$ are
co-linear
3) The triangles $\Delta_{i}(v)$ collapse to $v$ as $T\to\infty$ in such a
way that all graphs $\gamma\cup\Delta_{i}(v)$ are diffeomorphic as
$T\to\infty$. In fact, as long as we keep the prescription of how to
choose $s_{i}(\Delta),a_{12}(\Delta)$ specified above, all the graphs
$\gamma\cup\Delta$ are related by an analyticity preserving smooth
diffeomorphism, no matter how “large” $\Delta$ is. Namely, such
diffeomorphisms can leave the image of $\gamma$ invariant while putting
$a_{12}$ into any diffeomorphic shape.
Notice that now we have a well-defined prescription for the continuum limit
because by construction the triangles that triangulate $\bar{S},\bar{S}_{i}(v)$
do not contribute to the operator (4.8). The fact that the number
of triangles that have their basepoint in vertices of the graph (which
are the only ones that contribute) stays constant (namely $E(v)$)
indicates that the continuum operator will be densely defined.
4.3 Continuum Limit
Let us summarize : having specified the triangulation, we have triangles
$\Delta(\gamma,T)$ associated with the graph, more precisely $E(v)$ of
them for each vertex $v$ of $\gamma$, the index $T$ indicating that the
continuum limit has not been taken yet. Then the regulated operator (4.8) becomes
$$\displaystyle\hat{H}_{T}(N)f_{\gamma}:=\hat{H}_{\gamma,T}(N)f_{\gamma}:=\frac{1}{\hbar^{2}}\sum_{v\in V(\gamma)}N(v)(\frac{4}{E(v)})^{2}\sum_{v(\Delta),v(\Delta^{\prime})=v}\epsilon^{ij}\epsilon^{kl}\times$$
(4.10)
$$\displaystyle\times$$
$$\displaystyle\mbox{tr}(h_{\alpha_{ij}(\Delta^{\prime})}h_{s_{k}(\Delta)}[h_{s_{k}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}]h_{s_{l}(\Delta)}[h_{s_{l}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}])f_{\gamma}$$
where we have dropped the dependence of the $\Delta$ on $\gamma,T$. Now,
since as $T\to\infty$ all holonomies approach unity, the limit $T\to\infty$
does not have any meaning on the Hilbert space ${\cal H}=L_{2}({\overline{{\cal A}/{\cal G}}},d\mu_{0})$.
Indeed, on smooth connections we would get zero while on distributional
connections the limit does not exist. Thus, the limit $T\to\infty$ must
be understood in another way. Indeed, recall that we wanted to impose the
Hamiltonian constraint actually on diffeomorphism invariant distributions
$\Psi\in\Phi^{\prime}$. Now, the operator $\hat{H}_{T}(N)$ defines for each
$T$ an operator $(\hat{H}_{T}(N))^{\prime}$ on $\Phi^{\prime}$ by the equation
$$[(\hat{H}_{T}(N))^{\prime}\Psi](\phi):=\Psi(\hat{H}_{T}(N)\phi)\;\forall\Psi\in\Phi^{\prime},\;\phi\in\Phi$$
(4.11)
because $\hat{H}_{T}(N)$ has domain and range in $\Phi$ which is dense in
$\cal H$, for each $T$. Now, if $\Psi$ is diffeomorphism invariant then
$$[(\hat{H}(N))^{\prime}\Psi](f_{\gamma}):=\lim_{T\to\infty}[(\hat{H}_{T}(N))^{\prime}\Psi](f_{\gamma})=\Psi(\hat{H}_{\gamma,T_{0}}(N)f_{\gamma})$$
(4.12)
for each function $f_{\gamma}$ cylindrical with respect to a graph $\gamma$,
and for each $\gamma$. In other words, the number
$\Psi(\hat{H}_{\gamma,T}(N)f_{\gamma})$ does not change under variation of
$T$, which by prescription corresponds to a diffeomorphism; on
diffeomorphism invariant states we may therefore evaluate it at any finite
value $T_{0}$, and the $T\to\infty$ limit is trivial. It follows that on
diffeomorphism invariant states the continuum limit is already taken for
$(\hat{H}_{T}(N))^{\prime}$.
In fact, it is easy to see that this result can be extended to any product
$[\hat{H}_{T}(N_{1})\hat{H}_{T}(N_{2})..\hat{H}_{T}(N_{n})]^{\prime}$ because the triangles
attached have, at each level, an unambiguously defined diffeomorphism
covariant location. This observation is needed in order to give sense to
commutator computations [4, 13].
In the sequel we will drop the index $T$ and understand that when finally
evaluating everything on diffeomorphism invariant distributions the
value of $T$ is irrelevant.
5 Consistency
There are two kinds of consistency to be discussed :
The first is the cylindrical consistency, that is, we have obtained
a family of operators $(\hat{H}_{\gamma}(N))_{\gamma}$ which should be
projections to cylindrical subspaces of a “mother” $\hat{H}(N)$.
That such a $\hat{H}(N)$ exists has to be proved.
The second is that we need to make sure that $\hat{H}(N)$ does not
suffer from quantum anomalies.
5.1 Cylindrical Consistency
In proving that a family of operators $(\hat{O}_{i},D_{i})_{i\in I}$ on a
Hilbert space $\cal H$, where $D_{i}$ is the domain of $\hat{O}_{i}$
and where $I$ is some
partially ordered index set with ordering relation $<$, is cylindrically
consistent, we need to show that whenever $i<j$, $\hat{O}_{j}$ is an
extension of $\hat{O}_{i}$, that is
1) the domain of $\hat{O}_{i}$ is contained in that of $\hat{O}_{j}$,
$D_{i}\subset D_{j}$, and
2) the restriction of $\hat{O}_{j}$ to $D_{i}$ coincides with $\hat{O}_{i}$,
$(\hat{O}_{j})_{|i}=\hat{O}_{i}$.
Let us check that this is the case for our operator family. Recall that
a spin-network state depends on all of its edges non-trivially in the
sense that all edges carry spin $j>0$. The space $\Phi_{\gamma}$ is the set
of finite linear combinations of spin-network states which depend on the
graph $\gamma$. Now, while the set of graphs can be partially ordered by
the inclusion relation, the set of cylindrical functions cannot : a
function defined on a smaller graph is also defined on any bigger graph
that properly contains it, but then the additional edges of the bigger
graph automatically carry spin zero, so the cylindrical subspaces
cannot be compared. Another way of saying this is that given a
cylindrical function $f$ we can uniquely decompose it as
$f=\sum_{\gamma}f_{\gamma},\;f_{\gamma}\in\Phi_{\gamma}$, and on $f_{\gamma}$ we
have unambiguously $\hat{H}(N)f_{\gamma}=\hat{H}_{\gamma}(N)f_{\gamma}$. We
cannot write $\hat{H}(N)f_{\gamma}=\hat{H}_{\gamma^{\prime}}(N)f_{\gamma}$ with
$\gamma\subset\gamma^{\prime}$ because there is a condition on the spins of the
edges of $\gamma^{\prime}$ involved when applying $\hat{H}_{\gamma^{\prime}}$ which is
not satisfied for $f_{\gamma}$. In other words, $\Phi_{\gamma}\cap\Phi_{\gamma^{\prime}}=\emptyset$ if $\gamma\not=\gamma^{\prime}$.
We conclude that the family $(\hat{H}_{\gamma}(N))$ is trivially
cylindrically consistently defined and therefore defines a linear operator
on all of $\cal H$.
5.2 Anomaly-freeness
Recall that the classical Dirac algebra is given by
$$\{H(M),H(N)\}=\int_{\Sigma}d^{2}x(M_{,a}N-MN_{,a})q^{ab}V_{b}$$
where $V_{a}$ is the vector constraint. That is, the Poisson bracket
between two Hamiltonian constraints evaluated on the constraint surface
defined by the diffeomorphism constraint vanishes.
In the quantum theory one would therefore like to verify that naively
$[\hat{H}(M),\hat{H}(N)]f=0$ for any state $f$ that satisfies $\hat{V}_{a}f=0$.
Several subtleties arise :
1)
The solutions of $\hat{V}_{a}f=0$ are in general not elements of the Hilbert
space but generalized eigenvectors (distributions). Indeed, in this
context the solutions of the diffeomorphism constraint are not elements of
$\cal H$ but of $\Phi^{\prime}$, where we have the proper inclusion
$\Phi\subset{\cal H}\subset\Phi^{\prime}$. Thus, since $\hat{H}(N)$ is defined only
on $\Phi$, the only operator that is defined on $\Phi^{\prime}$ is the dual
$(\hat{H}(N))^{\prime}$ via the pairing
$\Psi(\hat{H}(N)\phi)=[(\hat{H}(N))^{\prime}\Psi](\phi)$.
2)
Observe that the operator $(\hat{H}(N))^{\prime}$ was not defined on every
distribution but actually only on those that are solutions of the
diffeomorphism constraint. Now even if $\Psi$ is diffeomorphism invariant,
that is,
$\Psi(\hat{U}(\varphi)f_{\gamma}):=\Psi(f_{\varphi(\gamma)})=\Psi(f_{\gamma})$,
then $(\hat{H}(N))^{\prime}\Psi$ is no longer diffeomorphism invariant, as
one can easily check. Thus
we cannot verify that $[(\hat{H}(M))^{\prime},(\hat{H}(N))^{\prime}]\Psi=0$ ; this equation
is simply not defined. However, what is well-defined is
$([\hat{H}(M),\hat{H}(N)])^{\prime}\Psi=0$ and this is what we are going to verify.
Indeed, there is no hope to make sense out of
$[(\hat{H}(M))^{\prime},(\hat{H}(N))^{\prime}]\Psi$ since not even classically is $H(M)$
diffeomorphism covariant. On the other hand, one could proceed as in [13]
and define $\hat{H}^{\prime}(M)\hat{H}^{\prime}(N):=(\hat{H}(M)\hat{H}(N))^{\prime}$ which
makes sense again on diffeomorphism invariant states.
3)
One might be even more ambitious and ask that
$$([\hat{H}(M),\hat{H}(N)])^{\prime}=(\widehat{\int_{\Sigma}d^{2}x(M_{,a}N-MN_{,a})q^{ab}V_{b}})^{\prime}$$
(5.1)
that is, the Dirac algebra is faithfully implemented in the quantum theory.
However, there are several issues that prevent us from doing so. First of
all, the generator of diffeomorphisms, $V_{a}$, does not have a quantum
analogue, the diffeomorphism group does not act strongly continuously on
$\cal H$. So the only thing that we can hope to obtain is something
like $\hat{O}^{\prime}[\hat{U}(\varphi)-1]$ for the right hand side of
(5.1) for some $\varphi\in\mbox{Diff}(\Sigma)$ and some dual operator
$\hat{O}^{\prime}$ (notice that $\hat{U}(\varphi)^{\prime}=\hat{U}(\varphi^{-1})$ can be
extended
to all of $\Phi^{\prime}$). Secondly, the situation is even worse for $q^{ab}$.
Thirdly,
since, as we said, $([\hat{H}(M),\hat{H}(N)])^{\prime}$ is only well-defined on
diffeomorphism invariant distributions, then either the dual of the
commutator vanishes or it does not. In the latter case there is an anomaly
even in the sense of $\hat{U}(\varphi)-1$. In the former case we get just
zero but then we can trivially make an equality of the form
$$([\hat{H}(M),\hat{H}(N)])^{\prime}=\hat{O}^{\prime}[\hat{U}(\varphi)-1]$$
for any $\hat{O}^{\prime}$ that we like. It then remains to ask whether one can
somehow make sense out of an operator corresponding to the combination
$\int_{\Sigma}d^{2}x(M_{,a}N-MN_{,a})q^{ab}V_{b}$ and that
is actually the case : We will not prove this assertion here but refer
the reader to [13] which treats the 3+1 case but from which
it is obvious that the result can be extended to the 2+1 case.
Summarizing, we will check that $([\hat{H}(M),\hat{H}(N)])^{\prime}\Psi=0$
on diffeomorphism invariant states. The key element of the proof is the
following : as is obvious from (4.10), if $f_{\gamma}$ is a function
cylindrical with respect to a graph, then $\hat{H}(N)f_{\gamma}$ is a
linear combination of functions each of which depends on graphs with new
vertices not
contained in $\gamma$. More precisely, if $v\in V(\gamma)$ then
for each triangle $\Delta$ based at $v$ there is a term
$\frac{4N_{v}}{E(v)}\hat{H}_{\Delta}f_{\gamma}$ and this function is a linear
combination of functions $f^{\prime}$ each of which depends on a graph
$\gamma^{\prime}$ contained in the following list :
$\gamma\cup\Delta,\;(\gamma\cup\Delta)-s_{1}(\Delta),\;(\gamma\cup\Delta)-s_{2}(\Delta),\;(\gamma\cup\Delta)-(s_{1}(\Delta)\cup s_{2}(\Delta))$. Whether they
appear depends on the spins
of the graph $\gamma$. In any case these functions $f^{\prime}$ depend on two more
vertices $v_{1},v_{2}$ coming from the endpoints of the arc $a_{12}(\Delta)$.
They may not depend on the original vertex $v$ if that vertex was
two-valent with spins of the edges $e_{i}$ corresponding to $s_{i}(\Delta)$ being
$j=1/2$ for both $i=1,2$. In that case
$[h_{s_{i}(\Delta(v))}^{-1},\hat{V}_{v}]f^{\prime}=0$ because
neither $f^{\prime}$ nor $h_{s_{i}(\Delta(v))}^{-1}f^{\prime}$ depend on graphs with more
than one edge incident at $v$.
The point is now that $[h_{s_{i}(\Delta(v_{1}))}^{-1},\hat{V}_{v_{1}}]f^{\prime}=[h_{s_{i}(\Delta(v_{2}))}^{-1},\hat{V}_{v_{2}}]f^{\prime}=0$. The reason for this is that
the vertices $v_{1},v_{2}$ of the graphs on which $f^{\prime}$ and
$h_{s_{i}(\Delta(v))}^{-1}f^{\prime}$ depend do not have edges with linearly
independent tangents incident at them, so that the volume operator
annihilates these functions.
Let us now write (4.10) in the form
$$\displaystyle\hat{H}_{\gamma}(N)$$
$$\displaystyle=$$
$$\displaystyle\frac{32}{\hbar^{2}}\sum_{v\in V(\gamma)}N(v)\hat{H}_{\gamma,v}$$
$$\displaystyle\hat{H}_{\gamma,v}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{E(v)^{2}}\sum_{v(\Delta),v(\Delta^{\prime})=v}\hat{H}_{\gamma,v,\Delta,\Delta^{\prime}}$$
$$\displaystyle\hat{H}_{\gamma,v,\Delta,\Delta^{\prime}}$$
$$\displaystyle=$$
$$\displaystyle\epsilon^{ij}\epsilon^{kl}\mbox{tr}(h_{\alpha_{ij}(\Delta^{\prime})}h_{s_{k}(\Delta)}[h_{s_{k}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}]h_{s_{l}(\Delta)}[h_{s_{l}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}])$$
(5.2)
The function $\hat{H}_{\gamma,v}f_{\gamma}$ now can be written as a linear
combination of functions $f^{\prime}_{\gamma^{\prime}}$ each of which depends on a graph
$\gamma^{\prime}$ which is a proper subgraph of the graph
$\gamma(v):=\gamma\cup_{v(\Delta)=v(\Delta^{\prime})=v}[\Delta\cup\Delta^{\prime}]$ and
we will mean by $\hat{H}_{\gamma(v),v^{\prime}}$ the operator that reduces to
$\hat{H}_{\gamma^{\prime},v^{\prime}}$ on $f_{\gamma^{\prime}}$ for each $v^{\prime}\in V(\gamma^{\prime})$ and
is zero if $v^{\prime}\not\in V(\gamma^{\prime})$.
With this preparation we evaluate
$$\displaystyle[\hat{H}(M),\hat{H}(N)]f_{\gamma}=\sum_{v\in V(\gamma)}[N_{v}\hat{H}(M)-M_{v}\hat{H}(N)]\hat{H}_{\gamma,v}f_{\gamma}$$
(5.3)
$$\displaystyle=$$
$$\displaystyle\sum_{v\in V(\gamma)}\sum_{v^{\prime}\in V(\gamma(v))}[N_{v}M_{v^{\prime}}-M_{v}N_{v^{\prime}}]\hat{H}_{\gamma(v),v^{\prime}}\hat{H}_{\gamma,v}f_{\gamma}$$
$$\displaystyle=$$
$$\displaystyle\sum_{v,v^{\prime}\in V(\gamma)}[N_{v}M_{v^{\prime}}-M_{v}N_{v^{\prime}}]\hat{H}_{\gamma(v),v^{\prime}}\hat{H}_{\gamma,v}f_{\gamma}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\sum_{v,v^{\prime}\in V(\gamma)}[N_{v}M_{v^{\prime}}-M_{v}N_{v^{\prime}}][\hat{H}_{\gamma(v),v^{\prime}}\hat{H}_{\gamma,v}-\hat{H}_{\gamma(v^{\prime}),v}\hat{H}_{\gamma,v^{\prime}}]f_{\gamma}\;.$$
Here we have used our notation to write the commutator as a double sum
in $v\in V(\gamma),v^{\prime}\in V(\gamma(v))$ in the second line; in the
third line we have used the important fact that the constraint does not
act at the new vertices that it creates, so that the sum over $v^{\prime}\in V(\gamma(v))$ collapses to a sum over the original $v^{\prime}\in V(\gamma)$;
and in the last step we have used the antisymmetry in the lapse functions to
write the product of operators as their antisymmetrized sum of products.
Clearly the term with $v=v^{\prime}$ vanishes trivially. If $v\not=v^{\prime}$ then,
since $\hat{H}_{\gamma,v}$ manipulates the graph only in a small
neighbourhood of $v$, we can commute the two operators in the last line
of (5.3) to write both with the vertex $v$ to the right hand side
as
$$[\hat{H}(M),\hat{H}(N)]f_{\gamma}=\frac{1}{2}\sum_{v,v^{\prime}\in V(\gamma)}[N_{v}M_{v^{\prime}}-M_{v}N_{v^{\prime}}][\hat{H}_{\gamma(v),v^{\prime}}\hat{H}_{\gamma,v}-\hat{H}_{\gamma,v^{\prime}}\hat{H}_{\gamma(v^{\prime}),v}]f_{\gamma}\;.$$
(5.4)
Now by inspection of (5.2) we see that the last square bracket
is a linear combination of functions of the type $f-f^{\prime}$ where $f,f^{\prime}$ are
related by an analyticity preserving diffeomorphism by construction of
the triangulation which relates different choices for the loop attachment by
such a diffeomorphism (this point is explained in more detail in
[4]). Thus when evaluating (5.4) on a diffeomorphism invariant
state we can remove those diffeomorphisms and obtain just zero.
This suffices to show $([\hat{H}(M),\hat{H}(N)])^{\prime}=0$.
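In symbols, the evaluation argument is simply the following: if $f^{\prime}=\hat{U}(\varphi)f$ for some diffeomorphism $\varphi$, then for any diffeomorphism invariant $\Psi$

```latex
\Psi(f - f^{\prime}) = \Psi(f) - \Psi(\hat{U}(\varphi)f) = \Psi(f) - \Psi(f) = 0 .
```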
Notice that it was essential in the argument that the additional vertices
created by $\hat{H}(N)$ when acting on $f_{\gamma}$ do not contribute, as we
showed. If that were not the case, the commutator would not vanish on
diffeomorphism invariant distributions, which is why we attached the loop
in such a particular, $C^{1}$, way.
6 Solving the theory
This section is divided into two parts : in the first part we will
describe the complete space of solutions to both the diffeomorphism and
the Hamiltonian constraint. In the second part this solution space is
shown to contain the solutions to the curvature constraint, which can be
formulated in our language as well [14]. One can equip the solution
space with at least two very natural inner products.
One of them is the inner product appropriate for the curvature
constraint, the other one arises from the direct construction of the
solutions in the first part of this section. Neither of these inner
products gives all solutions a finite norm.
6.1 Complete set of solutions to all constraints
Let a spin-network state $T_{\gamma,\vec{j},\vec{c}}$ be given and let
$$\{T_{\gamma,\vec{j},\vec{c}}\}:=\{\hat{U}(\varphi)T_{\gamma,\vec{j},\vec{c}},\;\varphi\in\mbox{Diff}(\Sigma)\}$$
be its orbit under the diffeomorphism group of analyticity preserving smooth
diffeomorphisms. We define a diffeomorphism invariant distribution on $\Phi$
by
$$[T_{\gamma,\vec{j},\vec{c}}]:=\sum_{T\in\{T_{\gamma,\vec{j},\vec{c}}\}}T\;.$$
That this is a continuous linear functional on $\Phi$ follows from
the fact that the spin-network states form an orthonormal basis,
by the argument given in [13]. It is also clear that every diffeomorphism
invariant state is a linear combination of such
$[T_{\gamma,\vec{j},\vec{c}}]$’s, so that by this procedure we can claim
to have found the general solution $\Psi$ to the diffeomorphism constraint
$\Psi(\hat{U}(\varphi)f)=\Psi(f)\;\forall\varphi\in\mbox{Diff}(\Sigma)$.
We will call this space $\Phi^{\prime}_{Diff}$.
Given any $f\in\Phi$ we can uniquely decompose it as
$f=\sum_{I}f_{I}T_{I}$ where $f_{I}$ are some constants and $T_{I}$ are
spin-network states. We then define $[f]:=\eta_{Diff}f:=\sum_{I}f_{I}[T_{I}]$.
Notice that
one cannot define $[f]$ as the sum of all states in its orbit
under diffeomorphisms, since spin-network states defined on different
graphs have uncountably infinite sets of diffeomorphisms that move one
graph but not another. We have been imprecise here with the issue
of graph symmetries, which alter the above formulae somewhat. See [13]
for more details.
This construction can be used to define an inner product on $\Phi^{\prime}_{Diff}$
by
$$<[f],[g]>_{Diff}:=[f](g)$$
which is clearly a positive definite
sesquilinear form and equips $\Phi^{\prime}_{Diff}$ with the structure of a
pre-Hilbert space.
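As a simple consistency check (assuming trivial graph symmetries, the caveat mentioned above, and pairing a basis distribution with a cylindrical function via the kinematical inner product), averaged spin-network states stay orthonormal:

```latex
\langle [T_{I}],[T_{J}]\rangle_{Diff}
  = [T_{I}](T_{J})
  = \sum_{T\in\{T_{I}\}} \langle T,\,T_{J}\rangle
  = \begin{cases}
      1 & [I]=[J] \\
      0 & \text{otherwise},
    \end{cases}
```

since each diffeomorphic image of $T_{I}$ appears exactly once in the orbit and spin-network states are orthonormal. This is the property by which the averaging map $\eta_{Diff}$ is selected in [13].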
Remark :
It has been shown in [7] that if there are only strongly
diffeomorphism invariant observables in the theory, then
those observables define a superselection rule, namely, they cannot map
between spin-network states based on graphs which are in different
diffeomorphism equivalence classes. As a result, the group average could
be defined differently in every sector, that is, the inner product in
every sector can be chosen individually. This amounts to the ambiguity
that the particular way of averaging given by $[f]$ is
not selected by physical principles : for every
diffeomorphism equivalence class of graphs there could be a different
constant that multiplies $[T_{\gamma,\vec{j},\vec{c}}]$ in $[f]$.
However, as there clearly are weakly diffeomorphism invariant observables
which, together with the Hamiltonian constraint, map between those sectors,
there is no superselection rule and the way we have averaged is selected
by the requirement that averaged spin-network states remain orthonormal
[13].
We now wish to employ this result to find the general solution to all
constraints. To that end, consider the set
$$R:=\{\hat{H}(N)\phi,\;N\in{\cal S},\;\phi\in\Phi\},$$
the range of the Hamiltonian constraint on $\Phi$ where ${\cal S}$
denotes the usual Schwartz space of test functions of rapid decrease.
Consider its orthogonal complement in $\Phi$ denoted $S:=R^{\perp}\subset\Phi$.
Finally, consider the set $\{[s],\;s\in S\}$. Then it is
easy to see that every solution to all constraints is a linear combination
of elements of this set and we will call the resulting span $\Phi^{\prime}_{phys}$.
Namely, let $s=\sum_{I}s_{I}T_{I}\in S$ ; then
by definition $\sum_{I}\bar{s}_{I}f_{I}=0$ for any $f=\sum_{I}f_{I}T_{I}\in R$.
Thus $[s](f)=\sum_{I}\bar{s}_{I}[T_{I}](f)=\sum_{I}\bar{s}_{I}f_{I}=0\;\forall f\in R$.
A geometrical construction of the space $S$ was given for the
three-dimensional theory in [4]. Here we could proceed similarly.
However, since this is only a model we restrict ourselves to showing
that the space $\Phi^{\prime}_{phys}$ is uncountably infinite dimensional. Namely,
a particular simple class of vectors in $S$ consists of those elements of
$\Phi$ which are linear combinations of spin-network states whose
underlying graph is not of the form
$\gamma\cup\alpha_{12}(\Delta),\;[\gamma\cup\alpha_{12}(\Delta)]-s_{1}(\Delta),\;[\gamma\cup\alpha_{12}(\Delta)]-s_{2}(\Delta),\;[\gamma\cup\alpha_{12}(\Delta)]-[s_{1}(\Delta)\cup s_{2}(\Delta)]$ for any
$\Delta=\Delta(\gamma)$ with $v(\Delta)\in V(\gamma)$, where $\gamma$ is a
graph subject to the same restriction but otherwise arbitrary.
This particular class of solutions has the property that all of the
resulting $[s]$ are normalizable with respect to $<.,.>_{Diff}$
while genuine elements of $\Phi^{\prime}_{phys}$ will
not be normalizable with respect to the kinematical inner product on
$\Phi^{\prime}_{Diff}$. On the other hand, since the 2+1 Hamiltonian constraint
really resembles the 3+1 Euclidean Hamiltonian constraint, it follows from
the results on the kernel of the Euclidean Hamiltonian constraint given in
[4, 13] that every solution is a (possibly infinite) linear
combination of basic solutions each of which is in fact normalizable
with respect to $<.,.>_{Diff}$.
We will see that the solutions to the curvature
constraint are not normalizable with respect to $<.,.>_{Diff}$ and one needs
to define another appropriate inner product $<.,.>_{curv}$ on the subset
of $\Phi^{\prime}_{phys}$ corresponding to the solutions of the curvature
constraint. However, it will turn out that the natural inner product
$<.,.>_{phys}$ for our Hamiltonian constraint as suggested by [13] is
such that curvature constraint solutions are still not normalizable and
so $<.,.>_{curv}$ and $<.,.>_{phys}$ define genuinely non-isometric
Hilbert spaces. We will turn to that issue in the next subsection.
6.2 Comparison with the Topological Quantum Field Theory
As shown in [14], in our language a solution to the curvature
constraint
$F_{i}=0$ in the quantum theory is a distribution $\Psi_{f}\in\Phi^{\prime}$ given by
$\Psi_{f}:=\delta_{\mu_{0}}(F)f$ for any $f\in\Phi$. Here $\delta_{\mu_{0}}(F)$
is a $\delta$ distribution with respect to the inner product on ${\cal H}$
which has support on the space $\cal M$ of flat connections modulo gauge
transformations. More precisely, we have the following :
Any function on $\cal M$ is a gauge invariant function which depends
on the connection only through the holonomies along (representatives of) the
independent generators $\alpha_{1},..,\alpha_{n}$ of the fundamental group
$\pi_{1}(\Sigma)$, that is,
$f(A_{0})=f_{n}(h_{\alpha_{1}}(A_{0}),..,h_{\alpha_{n}}(A_{0}))$
for $A_{0}\in{\cal M}$. The
measure $\nu_{0}$ on $\cal M$ for gauge group $G$ is defined by
$$\int_{{\cal M}}d\nu_{0}(A_{0})f(A_{0}):=\int_{G^{n}}d\mu_{H}(g_{1})..d\mu_{H}(g_{n})f_{n}(g_{1},..,g_{n})$$
where $\mu_{H}$ denotes the Haar measure on $G$. Then the delta distribution
for flat connections is given by
$$\delta_{\mu_{0}}(F(A)):=\int_{{\cal M}}d\nu_{0}(A_{0})\delta_{\mu_{0}}(A_{0},A)$$
(6.1)
where
$$\delta_{\mu_{0}}(A_{0},A)=\sum_{\gamma,\vec{j},\vec{c}}\overline{T_{\gamma,\vec{j},\vec{c}}(A)}T_{\gamma,\vec{j},\vec{c}}(A_{0})$$
(6.2)
and the sum runs over all possible spin-network states. It is
possible to arrive at (6.1) from first principles by following
the group average proposal [14].
It is also possible to write (6.1) as a linear combination of
distributions in $\Phi^{\prime}_{Diff}$. To that
end, denote by $I$ the label of a spin-network state and define
$T_{[I]}:=[T_{I}]$. Notice that the integral
$k_{I}:=\int_{{\cal M}}d\nu_{0}(A_{0})T_{I}(A_{0})=:k_{[I]}$ is
diffeomorphism invariant and thus depends only on $[I]$. Then we may
write
$$\delta_{\mu_{0}}(F(A))=\sum_{[I]}k_{[I]}\overline{T_{[I]}(A)}\;.$$
(6.3)
It is easy to see [14] that (6.3) is a distribution on
$\Phi^{\prime}_{Diff}$ and certainly it is a distribution on $\Phi$.
However, (6.3) is not normalizable with respect to
$<.,.>_{Diff}$ :
To see this we use (6.3) to notice that we can write
$\delta(F(A))=\eta_{Diff}f_{F}$ where $f_{F}:=\sum_{[I]}\overline{k_{[I]}}T_{I_{0}([I])}(A)$ and $I_{0}([I])\in[I]$ is an arbitrary choice.
Thus by definition of the inner product between diffeomorphism invariant
distributions we find
$$||\delta_{\mu_{0}}(F)||_{Diff}^{2}=(\eta_{Diff}f_{F})(f_{F})=\sum_{[I]}|k_{[I]}|^{2}$$
(6.4)
where the sum is over diffeomorphism equivalence classes of spin-network
labels. But the quantity (6.4) is plainly infinite :
Namely, it follows from the definition of $k_{I}=k_{[I]}$
that $k_{[I]}=T_{I}(A=0)$ whenever the graph underlying $I$ is contractable.
There are at least countably infinitely many contractable,
mutually non-diffeomorphic, non-trivial graphs $\gamma_{n},\;n=1,2,..$
in any $\Sigma$. An example is given by choosing $\gamma_{n}$ to be an
$n$-link, that is, a union of $n$ mutually non-intersecting loops
$\alpha_{1},..,\alpha_{n}$
homeomorphic to a circle each of which is homotopically trivial
(contractable). Choose $I_{n}$ such that
$T_{I_{n}}(A)=\prod_{k=1}^{n}T_{\alpha_{k}}(A)$ where
$T_{\alpha}(A):=\mbox{tr}(h_{\alpha}(A))$ is the Wilson-Loop function and
$h_{\alpha}(A)$ denotes the holonomy of $A$ along the loop $\alpha$.
Using the basic integral $\int_{SU(2)}d\mu_{H}(g)\bar{g}_{AB}g_{CD}=\frac{1}{2}\delta_{AC}\delta_{BD}$ it is
easy to see that $T_{I_{n}}$ provide an orthonormal system of spin-network
states. But $T_{I_{n}}(A=0)=2^{n}$ and so (6.4) contains the
meaningless sum $\sum_{n=1}^{\infty}2^{n}$.
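Both ingredients of this divergence argument are easy to confirm numerically. The sketch below (purely illustrative, not part of the construction above) samples $SU(2)$ Haar-uniformly via unit quaternions, checks the basic integral $\int_{SU(2)}d\mu_{H}(g)\bar{g}_{AB}g_{CD}=\frac{1}{2}\delta_{AC}\delta_{BD}$ by Monte Carlo, and evaluates the $n$-link state at the flat connection, where every holonomy is the $2\times 2$ identity:

```python
import numpy as np

def haar_su2(rng):
    """Sample an SU(2) matrix Haar-uniformly via a unit quaternion."""
    x = rng.normal(size=4)
    x /= np.linalg.norm(x)           # uniform on S^3 <-> Haar on SU(2)
    a, b, c, d = x
    return np.array([[a + 1j*d, c + 1j*b],
                     [-c + 1j*b, a - 1j*d]])

rng = np.random.default_rng(0)
samples = [haar_su2(rng) for _ in range(200_000)]

# Monte Carlo check of  int dmu_H(g) conj(g_AB) g_CD = (1/2) delta_AC delta_BD
m_0000 = np.mean([np.conj(g[0, 0]) * g[0, 0] for g in samples])  # should be ~1/2
m_0011 = np.mean([np.conj(g[0, 0]) * g[1, 1] for g in samples])  # should be ~0

# At the flat connection A = 0 every holonomy is the identity, so the
# n-link spin-network state T_{I_n} evaluates to tr(1)^n = 2^n.
T_In = lambda n: np.trace(np.eye(2)) ** n

print(m_0000.real, abs(m_0011), T_In(5))
```

The $2^{n}$ values are exactly the terms whose squares make the sum (6.4) diverge.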
We must check whether or not $\Psi_{f}$ is also a solution to the constraint
$\hat{H}(N)$ (it obviously is diffeomorphism invariant). To that end we must
compute
$$\displaystyle\Psi_{f}(\hat{H}(N)f_{\gamma})$$
$$\displaystyle=$$
$$\displaystyle\int_{\overline{{\cal A}/{\cal G}}}d\mu_{0}\delta_{\mu_{0}}(F(A))(\overline{f}\hat{H}_{\gamma}(N)f_{\gamma})(A)$$
(6.5)
$$\displaystyle=$$
$$\displaystyle\int_{{\cal M}}d\nu_{0}(A_{0})(\overline{f}\hat{H}_{\gamma}(N)f_{\gamma})(A_{0})=0$$
because either $\hat{H}_{\gamma}(N)f_{\gamma}$ is identically zero or it is
a linear combination of the vectors (recall (5.2))
$$\hat{H}_{\gamma,v,\Delta,\Delta^{\prime}}f_{\gamma}=-2\epsilon^{ij}\epsilon^{kl}\mbox{tr}(h_{\alpha_{ij}(\Delta^{\prime})}\tau_{m})\mbox{tr}(\tau_{m}h_{s_{k}(\Delta)}[h_{s_{k}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}]h_{s_{l}(\Delta)}[h_{s_{l}(\Delta)}^{-1},\sqrt{\hat{V}_{v}}])f_{\gamma}$$
which therefore are proportional to the matrix elements of
$[h_{\alpha_{12}}-h_{\alpha_{12}}^{-1}]$ for a contractable loop
$\alpha_{12}$, which vanish on $A_{0}\in{\cal M}$. Here we have used the
$su(2)$ Fierz identity
$\mbox{tr}(\tau_{i}A)\mbox{tr}(\tau_{i}B)=\mbox{tr}(A)\mbox{tr}(B)/4-\mbox{tr}(AB)/2$
together with
$\epsilon^{ij}\mbox{tr}(h_{\alpha_{ij}})=\mbox{tr}(h_{\alpha_{12}})-\mbox{tr}(h_{\alpha_{12}}^{-1})=0$, a particular
property of $SU(2)$ (although we did not need to use the latter; the result
holds for general $G$).
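The Fierz identity invoked here is easy to verify numerically. In the illustrative sketch below (not part of the argument above) we take $\tau_{i}=-\frac{i}{2}\sigma_{i}$, which realizes the convention of structure constants $+\epsilon_{ijk}$, and test the identity on random complex $2\times 2$ matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [-0.5j * s for s in sigma]   # generators with [tau_i, tau_j] = eps_ijk tau_k

# arbitrary complex 2x2 matrices A, B
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# tr(tau_i A) tr(tau_i B) = tr(A) tr(B)/4 - tr(AB)/2
lhs = sum(np.trace(t @ A) * np.trace(t @ B) for t in tau)
rhs = np.trace(A) * np.trace(B) / 4 - np.trace(A @ B) / 2
print(abs(lhs - rhs))   # ~0: the identity holds for arbitrary A, B
```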
Thus, any solution to the curvature constraint is a solution to the
Hamiltonian constraint. However, as we have demonstrated, there are
infinitely many additional solutions to the Hamiltonian constraint, in
particular solutions which are normalizable with respect to $<.,.>_{Diff}$,
whereas no solution to the curvature constraint has this property. Notice that
the inner product on the space of solutions to the curvature constraint
comes from a group averaging map, it is just given by ([14])
$$<\Psi_{f},\Psi_{g}>_{Curv}:=\Psi_{f}(g)=\int_{{\cal M}}d\nu_{0}\overline{f}g\;.$$
It is now tempting to view this result as the restriction to the special
solutions of the curvature constraint of a more general inner product
appropriate for the Hamiltonian constraint.
There seems to be an unsurmountable obstacle : the Hamiltonian constraint
is not a self-adjoint operator on $\cal H$ and so group averaging as
defined in [7] cannot be employed. Moreover, group-averaging
really means to exponentiate the Hamiltonian constraint and that in turn
implies that we know the motions it generates and thus we would have to
completely solve the theory. Thus, it seems that we cannot
define a map $\eta\;:\;\Phi\to\Phi^{\prime}_{phys};\;f\to\eta f$. However, in the
case that we have a self-adjoint constraint operator, the group-average
algorithm is nothing else than a sophisticated way to construct the
projector onto the distributional kernel of the constraint operator
(this is explained in more detail in [13]). We are therefore
led to define the map $\eta$, in the case that we do not have a self-adjoint
constraint operator, as a certain (generalized) projector on the kernel of
the constraint operator. As in [14] we split the problem into two
parts and proceed as follows :
Given $f\in\Phi$ we have a group averaging map
$\eta_{Diff}:\;\Phi\to\Phi^{\prime}_{Diff}$ defined by $\eta_{Diff}(f):=[f]$
and an inner product defined by $<[f],[g]>_{Diff}:=[f](g):=<[f],g>:=\int_{\overline{{\cal A}/{\cal G}}}d\mu_{0}\overline{[f]}g$. We define now
$\Phi_{Ham}:=\Phi^{\prime}_{Diff}$ and would like to define a map
$\eta_{Ham}:\;\Phi_{Ham}\to\Phi^{\prime}_{Ham}$. The space $\Phi^{\prime}_{Ham}$
coincides with $\Phi^{\prime}_{phys}$ when viewed as a space of distributions on
$\Phi$ via the map $\eta:=\eta_{Ham}\circ\eta_{Diff}$. It remains to
construct $\eta_{Ham}$.
As we have seen, the
elements $[s]\in\Phi^{\prime}_{phys},\;s\in S$ span $\Phi^{\prime}_{phys}$. Moreover, by
explicit construction (given for the 3+1 theory in [4]) we can
orthonormalize them with respect to $<.,.>_{Diff}$ thus exploiting that
in our case all these $[s]$ are normalizable with respect to
$<.,.>_{Diff}$. We
obtain particular elements $\psi_{\mu}\in\Phi^{\prime}_{Ham}\cap\Phi_{Ham}$ with
the property that $<\psi_{\mu},\psi_{\nu}>_{Diff}=\delta_{\mu,\nu}$. We are
now ready to define the projector $\eta_{Ham}$ : given
$\psi\in\Phi_{Ham}$ define
$$\eta_{Ham}\psi:=\sum_{\mu}\psi_{\mu}\psi_{\mu}(\psi):=\sum_{\mu}\psi_{\mu}<\psi_{\mu},\psi>_{Diff}\;.$$
(6.6)
Notice that even if not all of the $[s]$ were normalizable with
respect to $<.,.>_{Diff}$ then one could still take (6.6) as the
group average map ; the elements $\psi_{\mu}$ would then form a basis in
the generalized sense that they are mutually orthogonal in the
sense of generalized eigenvectors (similar to the usual
generalized momentum eigenfunctions of ordinary quantum
mechanics, which are not really orthonormal in the Hilbert space sense but
only orthogonal in the sense of $\delta$ distributions).
The fact that the $\psi_{\mu}$ are normalizable with respect to $<.,.>_{Diff}$
displays $\eta_{Ham}$ as a projector on a genuine subspace of
${\cal H}_{Diff}$.
Observe the dual role of the $\psi_{\mu}$ which we can view both as elements
of $\Phi_{Ham}^{\prime}$ and as elements of $\Phi_{Ham}=\Phi^{\prime}_{Diff}\subset{\cal H}_{Diff}$.
In particular, notice the peculiar identity $\eta_{Ham}\psi_{\mu}=\psi_{\mu}$.
We now simply define an inner product on the elements $\eta_{Ham}\psi$ by
$$<\eta_{Ham}\psi,\eta_{Ham}\psi^{\prime}>_{Ham}:=(\eta_{Ham}\psi)(\psi^{\prime})=\sum_{\mu}\overline{<\psi_{\mu},\psi>_{Diff}}<\psi_{\mu},\psi^{\prime}>_{Diff}$$
(6.7)
for each $\psi,\psi^{\prime}\in\Phi_{Ham}$. Expression (6.7) is
clearly a positive semi-definite sesquilinear form with the property that
the $\eta_{Ham}\psi_{\mu}$ remain orthonormal. It is also independent of
the orthonormal system $\psi_{\mu}$ (the label $\mu$ is “nicely split”
into a discrete piece and a continuous piece and $\psi_{\mu}$’s are
orthonormal with respect to both pieces in the sense of Kronecker
$\delta$’s, see [13] for details).
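In finite dimensions the structure of (6.6) and (6.7) is simply the orthogonal projector onto the span of an orthonormal family, together with the pulled-back inner product. A minimal linear-algebra sketch (purely illustrative; the $\psi_{\mu}$ below are random orthonormal vectors standing in for the actual solutions):

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_sol = 8, 3

# Orthonormal family psi_mu (columns), standing in for the solutions.
psi, _ = np.linalg.qr(rng.normal(size=(dim, n_sol)))

def eta_ham(v):
    """Analogue of (6.6): sum_mu psi_mu <psi_mu, v>."""
    return psi @ (psi.conj().T @ v)

def ip_ham(v, w):
    """Analogue of (6.7): sum_mu conj(<psi_mu, v>) <psi_mu, w>."""
    return np.vdot(psi.conj().T @ v, psi.conj().T @ w)

v = rng.normal(size=dim)
# eta_ham is a projector: applying it twice changes nothing, and it acts
# as the identity on each psi_mu (the 'peculiar identity' of the text).
print(np.allclose(eta_ham(eta_ham(v)), eta_ham(v)),
      np.allclose(eta_ham(psi[:, 0]), psi[:, 0]))
```

The design point the text makes is visible here: `eta_ham` is well defined without exponentiating any operator, so no self-adjointness of the constraint is needed.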
We now combine the two group average maps to obtain
$$\eta:=\eta_{phys}:=\eta_{Ham}\circ\eta_{Diff}\;:\;\Phi:=\Phi_{phys}\to\Phi^{\prime}_{phys}:=\Phi^{\prime}_{Ham},\;f\to\eta_{Ham}[f]=\sum_{\mu}\psi_{\mu}\psi_{\mu}(f)$$
(6.8)
and the physical inner product for the elements $\eta_{phys}f$ becomes
$$\displaystyle<\eta_{phys}f,\eta_{phys}f>_{phys}$$
$$\displaystyle=$$
$$\displaystyle<\eta_{Ham}[f],\eta_{Ham}[g]>_{Ham}$$
(6.9)
$$\displaystyle=$$
$$\displaystyle\sum_{\mu}\overline{<\psi_{\mu},[f]>_{Diff}}<\psi_{\mu},[g]>_{Diff}$$
$$\displaystyle=$$
$$\displaystyle\sum_{\mu}\overline{\psi_{\mu}(f)}\psi_{\mu}(g)=(\eta_{phys}f)(g)$$
where in the second-to-last equality $\psi_{\mu}$ is viewed as an
element of $\Phi^{\prime}$.
Notice that with this definition
${\cal H}_{phys}\subset{\cal H}_{Diff}$. That this makes sense is shown
in [13]. In other words, infinite linear combinations of elements
of $\psi_{\mu}$ are allowed but only with suitably converging coefficients.
As we have already shown that $\Psi_{f}\not\in{\cal H}_{Diff}$ it follows that
$\Psi_{f}\not\in{\cal H}_{phys}$ ; that is, no solution to the curvature
constraint is normalizable with respect to $<.,.>_{Ham}$.
Since what really determines the physical inner product is the
Hamiltonian constraint, for instance via the group average approach, this
result was expected given the totally different algebraic structure of
the two sets of constraints. In particular, the scalar product
$<.,.>_{curv}$ is rather unnatural from the point of view of the
Hamiltonian constraint.
The reverse question, whether $||.||_{Ham}$-normalizable elements of
$\Phi^{\prime}_{Ham}$ have finite norm with respect to $||.||_{Curv}$, cannot even
be asked in general because a general element of $\Phi^{\prime}_{Ham}$ cannot be
written as $\Psi_{f},\;f\in\Phi$.
We conclude that the sectors of the theory described by either of the
inner products $<.,.>_{Curv}$ and $<.,.>_{phys}$
are mutually singular (that is, the underlying measures of
the scalar products are singular). On the other hand, as far as the
space of solutions to the constraint is concerned we find that all solutions
to the curvature constraint are annihilated by the Hamiltonian constraint.
Moreover, if we choose
$<.,.>_{Curv}$ as the inner product then we find complete agreement with
the results in [3] although our set of constraints and our
quantization approach were totally different from the outset. Since we copied
step by step the quantization procedure of [4] to the maximal extent, we
conclude that this procedure does lead to the correct answer in the
present model, which is a small but non-trivial check of whether the
proposal of [4] is reliable or not.
7 Conclusions
The aim of the present paper was to check whether the method of quantizing
3+1 general relativity by the method proposed in [4] is reliable
in the sense that when that procedure is applied to well-known models
we recover the known results. When applied to 2+1 Euclidean gravity we find
complete agreement, thus strengthening confidence in those methods, all the more so as 2+1
Euclidean gravity is maximally similar to 3+1 Lorentzian gravity as far as
the algebraic structure of the constraints and the gauge group are
concerned (at least when we consider the Hamiltonian rather than the
curvature constraint).
On the other hand, the quantum theory as obtained by our
approach has a much bigger space of solutions to all constraints than the
space as obtained by traditional approaches while the latter is properly
included in our solution space. A natural question that arises is then
what to do with those extra solutions and how to interpret them.
In particular, there seems to be a clash between the number of classical
and quantum degrees of freedom.
Now, a hint of how to interpret
these solutions is that many of them are of the form $[s],\;s\in S$ and so
$s$ is a cylindrical function. Therefore the volume operator $\hat{V}(B)$
vanishes on $s$ for almost every $B$. This suggests that $[s]$ is a
spurious solution because if we want the classical theory defined by
curvature and Hamiltonian and diffeomorphism constraint to be equivalent
(recall that we had to impose $\det(q)>0$ classically)
then we must really ask that the volume operator $\hat{V}(B)$ is strictly
positive for any $B$, before taking the diffeomorphism constraint into
account.
We conclude that no cylindrical $s$ should give rise to an element of
$\Phi^{\prime}_{phys}$ via $s\to[s]$ alone ; admissible elements should instead
arise from superpositions of an infinite number of the $[s]$ or from
infinite graphs. This would presumably remove the clash between the
numbers of degrees of freedom alluded to above.
This latter observation gives rise to the speculation that also many of
the solutions found in [4] for the 3+1 theory should be spurious
because in the classical
theory we have to impose the anholonomic condition $\det(q)>0$ as well.
On the other hand it may be desirable to allow for degenerate solutions
at the quantum level because by passing through singularities of the
metric one can describe changes in the topology of the hypersurface
$\Sigma$. Therefore one may expect that some of the solutions actually
carry topological information and, moreover, that although we have
started from a fixed topology in the classical theory we end up describing
all topologies at the quantum level. (This speculation on the
possibly topological meaning of the solutions is due to Abhay
Ashtekar.)
The complete answer to this puzzle is left to future investigations.
Acknowledgements
This project was suggested by Abhay Ashtekar during the wonderful
ESI workshop in Vienna, July-August 1997. Many thanks to the ESI and to
the organizers of the workshop, Peter Aichelburg and Abhay Ashtekar,
for providing this excellent research environment.
This research project was supported in part by the ESI and by DOE-Grant
DE-FG02-94ER25228 to Harvard University.
Appendix A Spectral analysis of the two-dimensional volume operator
Since the gauge group is still $SU(2)$ we may copy the results from [10]
to compute the full spectrum of the two-dimensional volume operator. In
particular it follows immediately that this operator is essentially
self-adjoint, positive semi-definite and that its spectrum is entirely
discrete. This holds on either gauge invariant or non-gauge invariant
functions.
In this appendix we restrict ourselves to the part of the spectrum coming
from graphs with vertices of valence not larger than three, that is, we
display the eigenvalues of the operator $\hat{V}_{v}$
of (3.13) restricted to vertices $v$ of valence $n=2,3$ on gauge
invariant functions. Indeed, as the volume operator cannot change the
graph or the colouring of the edges of the graph with spin quantum
numbers of a spin-network state it follows that it can change at most its
vertex contractors. However, given the spins of the edges incident at $v$,
the space of vertex contractors is one-dimensional for $n=2,3$ by
elementary Clebsch-Gordan theory. Therefore spin-network states all of
whose vertices have at most valence three must be eigenvectors of the
volume operator (in any dimension). Notice also that all the $\hat{V}_{v}$ for
different $v$’s are mutually commuting. In three dimensions
these spin-network states are in the kernel of the volume operator, in two
dimensions none of them is annihilated as we will show (as long as the
tangents of the edges at $v$ span a plane).
For $n=3$ there are only two generic non-trivial situations : Either
(Case A) no two of $e_{1},e_{2},e_{3}$ have co-linear tangents at $v$ or
(Case B) two of them, say $e_{1},e_{2}$, have co-linear tangents at $v$ but not
$e_{1},e_{3}$ or $e_{2},e_{3}$. Here $e_{1},e_{2},e_{3}$ are the three edges incident
at $v$ which are coloured with spins $j_{1},j_{2},j_{3}$ respectively, where
$j_{3}\in\{j_{1}+j_{2},j_{1}+j_{2}-1,..,|j_{1}-j_{2}|\}$. We can get the
eigenvalue for the case $n=2$ by taking the result for $n=3$ and setting
for instance $j_{3}=0,j_{1}=j_{2}=j\not=0$.
In the calculations that follow we will use the following notation :
$$\displaystyle\hat{q}_{v}$$
$$\displaystyle:=$$
$$\displaystyle[\frac{4}{\hbar^{2}}\hat{V}_{v}]^{2}=\hat{E}_{v}^{i}\hat{E}_{v}^{i}$$
$$\displaystyle\hat{E}_{v}^{i}$$
$$\displaystyle:=$$
$$\displaystyle\frac{1}{2}\sum_{1\leq I,J\leq 3}\mbox{sgn}(e_{I},e_{J})X^{i}_{IJ}\mbox{ where }X^{i}_{IJ}:=\epsilon_{ijk}X^{j}_{I}X^{k}_{J}$$
$$\displaystyle X^{i}_{I}$$
$$\displaystyle:=$$
$$\displaystyle X^{i}_{e_{I}},\;\vec{X}_{I}:=(X^{i}_{I}),\;X_{IJ}=X^{i}_{I}X^{i}_{J},\;\Delta_{I}:=X_{II}$$
(A.1)
and it is implied that $I,J,K\in\{1,2,3\}$ are mutually different so that
$[X^{i}_{I},X^{j}_{J}]=0$. As the notation suggests, $\Delta_{I}$ is the Laplacian
on $SU(2)$ with spectrum $-j(j+1)$, where $2j\geq 0$ is an integer.
Notice that $X^{i}_{IJ}=-X^{i}_{JI}$ so that
$$\hat{E}_{v}^{i}=\sum_{1\leq I<J\leq 3}\mbox{sgn}(e_{I},e_{J})X^{i}_{IJ}\;.$$
As in the main text we will use generators of $su(2)$ with structure
constants $+\epsilon_{ijk}$ which implies that $[X^{i}_{I},X^{j}_{I}]=-\epsilon_{ijk}X^{k}_{I}$ and so $\epsilon_{ijk}X^{i}_{I}X^{j}_{I}=-X^{k}_{I}$ (the minus sign comes
from the right rather than left invariance).
There are some identities among these quantities that we are going to use.
The first one is the familiar spin recoupling identity
$$2X_{IJ}=[\vec{X}_{I}+\vec{X}_{J}]^{2}-\Delta_{I}-\Delta_{J}=\Delta_{K}-\Delta_{I}-\Delta_{J}$$
(A.2)
where in the second equality we have used the fact that
$\vec{X}_{1}+\vec{X}_{2}+\vec{X}_{3}=0$, that is, the total angular momentum
operator vanishes on gauge invariant functions. Then if $f$ is gauge
invariant
$$[\vec{X}_{I}+\vec{X}_{J}]^{2}f=-[\vec{X}_{I}+\vec{X}_{J}]\vec{X}_{K}f=-\vec{X}_{K}[\vec{X}_{I}+\vec{X}_{J}]f=[\vec{X}_{K}]^{2}f=\Delta_{K}f$$
and of course the $\Delta_{I}$ commute with every $X^{i}_{I}$. The next identity
is, using basic $\epsilon_{ijk}$ arithmetic
$$\vec{X}_{IJ}^{2}=X^{i}_{I}X^{j}_{J}(X_{I}^{i}X^{j}_{J}-X^{j}_{I}X^{i}_{J})=\Delta_{I}\Delta_{J}-X^{i}_{I}([X^{j}_{J},X^{i}_{J}]+X^{i}_{J}X^{j}_{J})X^{j}_{I}=\Delta_{I}\Delta_{J}+X_{IJ}-X_{IJ}^{2}$$
(A.3)
and by very similar arguments
$$\vec{X}_{IJ}\vec{X}_{JK}=-\Delta_{J}X_{IK}+X_{IJ}X_{JK}+\epsilon_{ijk}X^{i}_{I}X^{j}_{K}X^{k}_{J}\;.$$
(A.4)
The last term in (A.4) is essentially the basic operator from which
the three-dimensional volume operator is built and which vanishes in the
three-valent case on gauge invariant functions. Indeed, replacing, say,
$\vec{X}_{J}=-\vec{X}_{I}-\vec{X}_{K}$ and using the $su(2)$ algebra for the
$X^{i}_{I}$ we see that this term vanishes.
Remarkably, upon substituting
for $X_{IJ}$ according to (A.2) we find
$$\vec{X}_{IJ}\vec{X}_{JK}=\frac{1}{4}[2(\Delta_{I}\Delta_{J}+\Delta_{J}\Delta_{K}+\Delta_{K}\Delta_{I})-(\Delta_{I}^{2}+\Delta_{J}^{2}+\Delta_{K}^{2})]$$
(A.5)
which is independent of the choice of the pairs $(IJ),(JK)$.
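Identities such as (A.2), including the gauge-invariance step, can be checked with explicit spin matrices. In the illustrative sketch below (the spins $(\frac12,\frac12,1)$ are an arbitrary admissible choice) we realize $X^{i}_{I}=iJ^{i}_{I}$, so that $[X^{i},X^{j}]=-\epsilon_{ijk}X^{k}$ and $\Delta_{I}=-\vec{J}_{I}^{2}$ has eigenvalue $-j_{I}(j_{I}+1)$, build the three-spin tensor-product space, and verify $2X_{12}=\Delta_{3}-\Delta_{1}-\Delta_{2}$ on the gauge invariant (total-singlet) state:

```python
import numpy as np

def jmats(j):
    """Standard spin-j angular momentum matrices (Jx, Jy, Jz)."""
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m).astype(complex)
    jp = np.zeros((len(m), len(m)), dtype=complex)
    for k in range(len(m) - 1):
        jp[k, k + 1] = np.sqrt(j * (j + 1) - m[k + 1] * (m[k + 1] + 1))
    jm = jp.conj().T
    return (jp + jm) / 2, (jp - jm) / (2 * 1j), jz

def embed(ops, dims, slot):
    """Kronecker-embed one spin's operators into the 3-spin product space."""
    out = []
    for op in ops:
        full = np.eye(1)
        for s, d in enumerate(dims):
            full = np.kron(full, op if s == slot else np.eye(d))
        out.append(full)
    return out

spins = (0.5, 0.5, 1.0)
dims = [int(2 * j + 1) for j in spins]
J = [embed(jmats(j), dims, s) for s, j in enumerate(spins)]

# The gauge invariant state is the total singlet: null vector of (J1+J2+J3)^2.
Jtot = [J[0][i] + J[1][i] + J[2][i] for i in range(3)]
J2 = sum(Ji @ Ji for Ji in Jtot)
w, v = np.linalg.eigh(J2)
singlet = v[:, np.argmin(w)]          # eigenvalue ~ 0

# X_12 = X_1 . X_2 = -J_1 . J_2, and Delta_I has eigenvalue -j_I(j_I+1).
X12 = -sum(J[0][i] @ J[1][i] for i in range(3))
deltas = [-j * (j + 1) for j in spins]
lhs = 2 * X12 @ singlet
rhs = (deltas[2] - deltas[0] - deltas[1]) * singlet   # (A.2) with (I,J,K)=(1,2,3)
print(np.allclose(lhs, rhs))
```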
We have now all tools available to finish the calculation.
We will treat cases A, B separately.
A)
We may label edges without loss of generality such that
$\mbox{sgn}(e_{1},e_{2})=\mbox{sgn}(e_{2},e_{3})=\mbox{sgn}(e_{3},e_{1})=1$, that is,
we cross $e_{1},e_{2},e_{3}$ in this sequence as we encircle $v$ counter-clockwise.
Then $\vec{\hat{E}}_{v}=\vec{X}_{12}+\vec{X}_{23}+\vec{X}_{31}$. We just need
to use (A.2)-(A.5) and to be careful with the order of $I,J$ in
$\vec{X}_{IJ}$ to find after tedious algebra that
$\hat{q}_{v}=[\vec{\hat{E}}_{v}]^{2}$ is just given by
$$\displaystyle\hat{q}_{v}$$
$$\displaystyle=$$
$$\displaystyle\frac{9}{4}[2(\Delta_{1}\Delta_{2}+\Delta_{2}\Delta_{3}+\Delta_{3}\Delta_{1})-(\Delta_{1}^{2}+\Delta_{2}^{2}+\Delta_{3}^{2})]$$
(A.6)
$$\displaystyle-$$
$$\displaystyle\frac{1}{2}(\Delta_{1}+\Delta_{2}+\Delta_{3})\;.$$
Thus, the eigenvalue is obtained by replacing $\Delta_{I}$ by $-j_{I}(j_{I}+1)$.
Expression (A.6) looks worrisome : is the eigenvalue going to be
non-negative? A moment of reflection reveals that it is even strictly
positive
unless $j_{1}=j_{2}=j_{3}=0$, in which case it vanishes : it will be sufficient to
show that the operator
in the first line of (A.6) has non-negative eigenvalue. We just need
to remember that $j_{1},j_{2},j_{3}$ are not arbitrary. We may assume without loss
of generality that $j_{2}\geq j_{1}$ such that $j_{3}\in\{j_{1}+j_{2},j_{1}+j_{2}-1,..,j_{2}-j_{1}\}$. We have
$$f(\Delta_{3}):=2(\Delta_{1}\Delta_{2}+\Delta_{2}\Delta_{3}+\Delta_{3}\Delta_{1})-(\Delta_{1}^{2}+\Delta_{2}^{2}+\Delta_{3}^{2})=4\Delta_{1}\Delta_{2}-(\Delta_{3}-\Delta_{1}-\Delta_{2})^{2}$$
(A.7)
which takes, in terms of eigenvalues, its lowest value where the function
$|\Delta_{3}-\Delta_{1}-\Delta_{2}|$ is maximal. Given arbitrary $j_{1}\leq j_{2}$,
since $-\Delta_{3}$ is a strictly increasing function of $j_{3}$, the extrema
of that function are attained at the extremal values $j_{3}=j_{2}\pm j_{1}$ and are given by $|2j_{1}j_{2}|$ and $|2j_{1}(j_{2}+1)|$ respectively.
Then (A.7) reveals that $f(\Delta_{3})\geq 4j_{1}(j_{2}+1)(j_{2}-j_{1})\geq 0$ because $j_{2}\geq j_{1}$.
If we consider a two-valent vertex instead, we may just set $\Delta_{3}=0,\;\Delta_{1}=\Delta_{2}=\Delta$ and find the extremely simple result
$$\hat{q}_{v}=-\Delta\;.$$
(A.8)
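The positivity claim can also be confirmed by brute force. The following sketch (illustrative only) evaluates the eigenvalue of (A.6), with $\Delta_{I}\to-j_{I}(j_{I}+1)$, in exact rational arithmetic over all admissible spin triples up to a cutoff, checking that it is strictly positive away from $j_{1}=j_{2}=j_{3}=0$ and that (A.8) is recovered for $j_{3}=0$:

```python
from fractions import Fraction as F

def q_eigenvalue(j1, j2, j3):
    """Eigenvalue of (A.6) with Delta_I -> -j_I (j_I + 1)."""
    d = [-j * (j + 1) for j in (j1, j2, j3)]
    line1 = F(9, 4) * (2 * (d[0]*d[1] + d[1]*d[2] + d[2]*d[0])
                       - (d[0]**2 + d[1]**2 + d[2]**2))
    return line1 - F(1, 2) * sum(d)

half_ints = [F(n, 2) for n in range(0, 9)]     # 0, 1/2, 1, ..., 4
violations = [(j1, j2, j3)
              for j1 in half_ints for j2 in half_ints for j3 in half_ints
              if abs(j1 - j2) <= j3 <= j1 + j2        # triangle inequality
              and (j1 + j2 + j3).denominator == 1     # integer total spin
              and (j1, j2, j3) != (0, 0, 0)
              and q_eigenvalue(j1, j2, j3) <= 0]
print(len(violations))   # -> 0: strictly positive away from j1=j2=j3=0
```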
B)
We may, without loss of generality, label edges such that $e_{1},e_{2}$ have
co-linear tangents at $v$ (that is, $\mbox{sgn}(e_{1},e_{2})=0$) and such that
$\mbox{sgn}(e_{1},e_{3})=\mbox{sgn}(e_{3},e_{2})=1$. Then
$\vec{\hat{E}}_{v}=\vec{X}_{13}+\vec{X}_{32}$. The same algebraic
manipulations show that we get now for $\hat{q}_{v}=[\vec{\hat{E}}_{v}]^{2}$
the expression
$$\hat{q}_{v}=[2(\Delta_{1}\Delta_{2}+\Delta_{2}\Delta_{3}+\Delta_{3}\Delta_{1})-(\Delta_{1}^{2}+\Delta_{2}^{2}+\Delta_{3}^{2})]-\Delta_{3}$$
(A.9)
which is positive unless, of course, $\Delta_{3}=0$ in which case it vanishes.
We conclude that the two-dimensional volume operator has a much smaller
kernel than the three-dimensional one, in particular, two and three-valent
vertices, whether gauge invariant or not, do not contribute to the kernel.
References
[1]
[2]
S. Carlip, “Lectures in (2+1)-Dimensional Gravity”, Preprint
UCD-95-6, gr-qc/9503024
S. Carlip, “The Statistical Mechanics of the Three-Dimensional Euclidean
Black Hole”, Preprint UCD-96-13, gr-qc/9606043
V. Moncrief, J. Math. Phys. 30 (1989) 2907
[3]
E. Witten, Nucl. Phys. B311 (1988) 46
[4]
T. Thiemann, Phys. Lett. B 380 (1996) 257-264
T. Thiemann, “Quantum Spin Dynamics (QSD)”, Harvard
University Preprint HUTMP-96/B-359, gr-qc/9606089
T. Thiemann, “Quantum Spin Dynamics (QSD) II : The Kernel of the
Wheeler-DeWitt Constraint Operator”,
Harvard University Preprint HUTMP-96/B-352, gr-qc/9606090
[5]
A. Ashtekar, V. Husain, C. Rovelli, J. Samuel, L. Smolin,
Class. Quantum Grav. 6, L183 (1989).
[6]
F. Barbero, M. Varadarajan, “The Phase Space of 2+1 Dimensional
Gravity in the Ashtekar Formulation”, Nucl. Phys. B415 (1994) 515
[7]
A. Ashtekar, J. Lewandowski, D. Marolf, J. Mourão, T.
Thiemann, “Quantization for diffeomorphism invariant theories
of connections with local degrees of freedom”, Journ. Math. Phys.
36 (1995) 519-551
[8]
A. Ashtekar and C.J. Isham,
Class. Quantum Grav. 9, 1433 (1992)
A. Ashtekar and J. Lewandowski, “Representation
theory of analytic holonomy $C^{\star}$ algebras”, in Knots and
quantum gravity, J. Baez (ed), (Oxford University Press, Oxford 1994)
A. Ashtekar and J. Lewandowski, “Differential
geometry on the space of connections via graphs and projective
limits”, Journ. Geo. Physics 17 (1995) 191
A. Ashtekar and J. Lewandowski, J. Math. Phys. 36, 2170
(1995).
D. Marolf and J. M. Mourão, “On the support of the
Ashtekar-Lewandowski measure”, Commun. Math. Phys. 170 (1995)
583-606
A. Ashtekar, J. Lewandowski, D. Marolf, J. Mourão
and T. Thiemann, “A manifestly gauge invariant approach to quantum
theories of gauge fields”, in Geometry of constrained dynamical
systems, J. Charap (ed) (Cambridge University Press, Cambridge,
1994); “Constructive quantum gauge field theory in two space-time
dimensions” (CGPG preprint).
[9]
C. Rovelli, L. Smolin, “Discreteness of volume and
area in quantum gravity” Nucl. Phys. B 442 (1995) 593, Erratum :
Nucl. Phys. B 456 (1995) 734
A. Ashtekar, J. Lewandowski, “Quantum Geometry III : Volume Operators”,
(in preparation)
J. Lewandowski, “Volume and Quantizations”, Class. Quantum Grav. 14
(1997) 71-76
R. De Pietri, C. Rovelli, “Geometry eigenvalues and scalar product from
recoupling
theory in loop quantum theory”, Preprint UPRF-96-444, gr-qc/9602023
[10]
T. Thiemann, “Complete formula for the matrix elements of the
volume operator in canonical quantum gravity”, Harvard University Preprint
HUTMP-96/B-353 gr-qc/9606091
[11]
T. Thiemann, “QSD V : Quantum Gravity as the Natural
Regulator of Matter Quantum Field Theories”, Harvard University Preprint
HUTMP-96/B-357
[12]
T. Thiemann, “A length operator for canonical quantum gravity”,
Harvard University Preprint HUTMP-96/B-354, gr-qc/9606092
[13]
T. Thiemann, “QSD III : Quantum Constraint Algebra and
Physical Scalar Product in Quantum General Relativity”,
Harvard University Preprint HUTMP-97/B-363
[14]
D. Marolf, J. Mourão, T. Thiemann, “The status of
Diffeomorphism Superselection in Euclidean 2+1 Gravity”, HUTMP-97/B-360,
gr-qc/9701068
A Spitzer Search for Transits of Radial Velocity Detected Super-Earths
J. A. Kammer (1,*),
H. A. Knutson (1),
A. W. Howard (2),
G. P. Laughlin (3),
D. Deming (4),
K. O. Todorov (5),
J.-M. Desert (6,1),
E. Agol (7),
A. Burrows (8),
J. J. Fortney (3),
A. P. Showman (9),
N. K. Lewis (10)
(1) Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA; (*) jkammer@caltech.edu
(2) Institute for Astronomy, University of Hawaii, Honolulu, HI 96822, USA
(3) Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064, USA
(4) Department of Astronomy, University of Maryland at College Park, College Park, MD 20742, USA
(5) Institute for Astronomy, ETH Zürich, 8093 Zürich, Switzerland
(6) CASA, Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309, USA
(7) Department of Astronomy, University of Washington, Seattle, WA 98195, USA
(8) Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
(9) Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ 85721, USA
(10) Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
(Submitted to ApJ, October 16, 2013)
Abstract
Unlike hot Jupiters or other gas giants, super-Earths are expected to have a wide
variety of compositions, ranging from terrestrial bodies like our
own to more gaseous planets like Neptune. Observations of transiting
systems, which allow us to directly measure planet masses and radii
and constrain atmospheric properties, are key to understanding the
compositional diversity of the planets in this mass range. Although
Kepler has discovered hundreds of transiting super-Earth candidates over the
past four years, the majority of these planets orbit stars that are
too far away and too faint to allow for detailed atmospheric characterization
and reliable mass estimates. Ground-based transit surveys focus on
much brighter stars, but most lack the sensitivity to detect planets
in this size range. One way to get around the difficulty of finding
these smaller planets in transit is to start by choosing targets that
are already known to contain super-Earth sized bodies detected using
the radial velocity technique. Here we present results from a Spitzer program to observe six of the most
favorable RV detected super-Earth systems, including HD 1461, HD 7924,
HD 156668, HIP 57274, and GJ 876. We find no evidence for transits
in any of their 4.5 $\mu$m flux light curves, and place limits on the
allowed transit depths and corresponding planet radii that rule out
even the most dense and iron-rich compositions for these objects. We also
observed HD 97658, but the observation window was based on a possible ground-based
transit detection (Henry et al., 2011) that was later ruled out; thus
the window did not include the predicted time for the transit detection recently made by MOST
(Dragomir et al., 2013).
Subject headings:eclipses - planetary systems - techniques: photometric
1. Introduction
Super-Earths are a unique class of planets that have masses ranging between that of Earth and Neptune. They may form via diverse pathways (e.g., Hansen & Murray, 2012; Chiang & Laughlin, 2013), and current observational constraints indicate a wide range of bulk densities and compositions for these planets (Valencia et al., 2010, 2013; Fortney et al., 2013). By characterizing the properties of these unique worlds, which have no solar system analogue, we can learn more about their physical properties and their corresponding formation channels. Although results from the Kepler survey indicate that super-Earths are common (Howard et al., 2012; Fressin et al., 2013), current surveys have found only three super-Earths (GJ 1214b, 55 Cnc e, HD 97658b) in transit around stars bright enough for detailed atmospheric characterization. This kind of characterization is crucial for constraining the bulk compositions of these planets, as the presence of a thick atmosphere otherwise leads to degeneracies in models of their interior structure (e.g., Rogers & Seager, 2010).
Methods for finding nearby transiting super-Earths include efforts from both the ground and space. Ground-based transit surveys typically focus on observations of smaller M-type stars (Berta et al., 2012; Giacobbe et al., 2012; Kovács et al., 2013), as these have more favorable planet-star radius ratios; however, to date these ground-based surveys have yielded only one super-Earth discovery, that of GJ 1214b (Charbonneau et al., 2009), and their sensitivity to transits around larger Sun-like stars is limited. Space telescopes offer several advantages over ground-based transit surveys, as they are typically more sensitive and can observe their targets continuously. In 2017, the TESS space telescope will begin an all-sky survey of bright, nearby FGKM dwarf stars (Ricker et al., 2010). Until that time, searches for transits of super-Earths detected using the radial velocity method provide a promising route to increase the number of such systems. This is the approach taken by the MOST space telescope, and it has resulted in the discovery of transits for 55 Cnc e and HD 97658b (Winn et al., 2011; Dragomir et al., 2013).
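The advantage of M dwarfs follows directly from the depth of a transit, which scales as the square of the planet-star radius ratio. A back-of-the-envelope sketch (the numbers are round illustrative values, not measurements from this paper):

```python
R_EARTH_PER_RSUN = 0.00916   # approximate Earth/Sun radius ratio

def transit_depth(rp_earth, rstar_sun):
    """Fractional flux drop during transit: (Rp / R*)**2."""
    return (rp_earth * R_EARTH_PER_RSUN / rstar_sun) ** 2

depth_g = transit_depth(2.0, 1.0)   # 2 R_Earth super-Earth, Sun-like star
depth_m = transit_depth(2.0, 0.2)   # same planet, mid-M dwarf
# ~3.4e-4 vs ~8.4e-3: the M dwarf signal is 25x deeper
print(f"{depth_g:.2e} {depth_m:.2e} ratio={depth_m / depth_g:.0f}")
```

A depth of a few times 1e-4 around a Sun-like star is below typical ground-based photometric precision, which is why such detections have largely required space telescopes.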
The Spitzer space telescope provides a comparable platform for transit surveys of RV detected super-Earths, and benefits from a higher photometric precision than MOST. Gillon et al. (2010, 2012) have previously utilized Spitzer to rule out transits for the super-Earth HD 40307b and to further characterize the properties of the transiting super-Earth 55 Cnc e as part of a search for nearby transiting low-mass planets. This paper presents the results of six additional Spitzer observations of super-Earth systems. In §2 we overview the radial velocity data and transit window predictions for these objects. We provide descriptions of the 4.5 $\mu$m Spitzer observations along with data reduction methods and transit model analysis in §3, followed by discussion and conclusions of this work in §4 and §5, respectively.
2. Target System Properties and Radial Velocity Measurements
2.1. System Properties
HD 1461b has a minimum mass of 8.1$M_{\oplus}$ and orbits a G-type star with a period of 5.77 days. Its eccentricity is estimated to be fairly low at 0.16. Two other planets with minimum masses of 28 and 87$M_{\oplus}$ may exist in the system at periods of 446 and 5017 days but have yet to be confirmed (Rivera et al., 2010a).
HD 7924b has a minimum mass of 9.26$M_{\oplus}$ and orbits a K-type star with a period of 5.40 days. Eccentricity of the planet is close to zero and fixed at this value in the fits here. No additional planets have been reported in this system (Howard et al., 2009).
HD 97658b has a minimum mass of 8.2$M_{\oplus}$ and orbits a K-type star with a period of 9.50 days. Its eccentricity is estimated at around 0.13. No other planets have been reported in this system (Howard et al., 2011b). Note that a transit detection and further constraints on planet properties have been recently made by Dragomir et al. (2013) using MOST; see §4 for details and discussion of this target.
HD 156668b has a minimum mass of 4.15$M_{\oplus}$ and orbits a K-type star with a period of 4.64 days. Orbital solutions from fits to RV measurements were found for both eccentricities of 0 (fixed) and 0.22, and include the possible effects of one additional planet candidate in the system with a minimum mass of 45$M_{\oplus}$ and a period of 810 days (Howard et al., 2011a).
HIP 57274b has a minimum mass of 11.6$M_{\oplus}$ and orbits a K-type star with a period of 8.14 days. Orbital solutions from fits to RV measurements were found for both eccentricities of 0 (fixed) and 0.20. HIP 57274 also has two additional detected planets in the system, one with a minimum mass of 0.4$M_{Jup}$ and a period of 32 days, and the other with a minimum mass of 0.53$M_{Jup}$ and a period of 432 days (Fischer et al., 2012).
GJ 876d has a minimum mass of 5.85$M_{\oplus}$ and orbits an M-type star with a period of 1.94 days. This planet is estimated to have an eccentricity of about 0.21, and is the inner-most planet in a system with at least three others. These include a second planet with a minimum mass of 0.71$M_{Jup}$ and a period of 30 days, and a third planet with a minimum mass of 2.3$M_{Jup}$ and a period of 61 days (Rivera et al., 2005; Correia et al., 2010). A fourth planet was also recently detected with a minimum mass of 14.6$M_{\oplus}$ and a period of 124 days (Rivera et al., 2010b).
2.2. Radial Velocity Ephemerides
The required length of the observation window, and therefore the constraint that radial velocity measurements placed on ephemerides, limited the initial selection of targets for transit investigation. We chose six targets for this Spitzer program that had relatively low uncertainties for their predicted transit times and for most cases required observation windows with durations less than 20 hours. We also excluded any super-Earths with existing Spitzer observations spanning predicted transit windows.
Details on the target system properties and the RV determined ephemerides are given in Table 1. We utilize updated ephemerides obtained by a fit to both published and unpublished data for these systems from the California Planet Search group (Howard et al., in prep). Our fits for HD 1461 appear to prefer an eccentric solution, and we therefore leave eccentricity as a free parameter. For HD 7924 we assume a circular orbit for the planet, as there was no convincing evidence for a non-zero eccentricity. We used the preliminary transit detection from Henry et al. (2011) to define our transit window for HD 97658; see §4 for a complete discussion of this target. For HD 156668 there was marginal evidence for a non-zero eccentricity, and we therefore selected a modestly longer transit window spanning both the circular and eccentric predictions for the transit time. The transit times of GJ 876d are expected to deviate from a linear ephemeris due to perturbations from the other planets in the system, and we therefore calculated individual transit windows spanning the epoch of our observations using an N-body integration of the planet parameters given in Table 2 of Rivera et al. (2010b).
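As a concrete illustration of how RV-derived ephemerides set the transit windows, the sketch below propagates a linear ephemeris and its uncertainty forward to an observing epoch. The ephemeris values used are hypothetical placeholders, not those of the survey targets.

```python
import math

def transit_window(t0, t0_err, period, p_err, epoch_jd):
    """Propagate a linear ephemeris T_c = T0 + n*P to the epoch of
    observation and estimate the 1-sigma uncertainty on T_c (days)."""
    n = round((epoch_jd - t0) / period)        # number of elapsed orbits
    t_c = t0 + n * period                      # predicted transit center
    # T0 and period errors add in quadrature; the period term grows with n
    sigma = math.sqrt(t0_err**2 + (n * p_err)**2)
    return t_c, sigma

# hypothetical ephemeris: the window widens as orbits accumulate
tc, sig = transit_window(2455000.0, 0.02, 5.40, 0.001, 2455982.0)
```

The quadrature growth of the period term is why targets with many elapsed orbits since the RV solution require long observation windows.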
3. Spitzer 4.5 micron Data Acquisition and Reduction Methodology
3.1. Photometry and Intrapixel Sensitivity
Each of the six observations by Spitzer used the Infra-Red Array Camera (IRAC) in the 4.5 $\mu$m channel operated in sub-array mode; additional details are shown in Table 2. In all data sets, we extract flux information from the BCD files provided by the Spitzer pipeline. We calculate the flux using techniques described in several previous studies (Knutson et al., 2012; Lewis et al., 2013; Todorov et al., 2013). First, we find the center of the stellar point spread function using a flux-weighted centroiding routine, then we perform aperture photometry, testing both fixed and time variable aperture sizes. The fixed aperture radii we tested ranged from 2.0 to 3.0 pixel widths, in steps of 0.1; the time variable apertures were scaled based on the noise pixel parameter (Mighell, 2005). The noise pixel parameter is proportional to the square of the full width half max of the stellar point spread function, and described by Equation 1 below:
$$\beta=\frac{(\sum\limits_{n}I_{n})^{2}}{\sum\limits_{n}I_{n}^{2}}$$
(1)
where $I_{n}$ is the measured intensity of the $n^{th}$ pixel.
We then empirically re-scale the noise pixel aperture radii either as $r=a\sqrt{\beta}$, where $a$ is a scaling factor between 0.8 and 1.7 pixel widths, in steps of 0.1; or alternatively as $r=\sqrt{\beta}+C$, where $C$ is a constant between -0.2 and 1.0 pixel widths, also in steps of 0.1.
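The noise pixel calculation (Eq. 1) and the aperture-radius grids described above can be sketched as follows; the toy image stands in for an actual IRAC subarray frame.

```python
import numpy as np

def noise_pixel(image):
    """Noise pixel parameter beta = (sum I)^2 / sum I^2 (Eq. 1); it is
    roughly the effective number of pixels covered by the PSF."""
    i = np.asarray(image, dtype=float).ravel()
    return i.sum() ** 2 / np.sum(i ** 2)

# aperture-radius grids tested in the text (pixel widths)
fixed_radii = np.arange(2.0, 3.01, 0.1)
beta = noise_pixel([[0.0, 1.0], [1.0, 2.0]])           # toy "PSF" image
scaled_radii = [a * np.sqrt(beta) for a in np.arange(0.8, 1.71, 0.1)]
shifted_radii = [np.sqrt(beta) + c for c in np.arange(-0.2, 1.01, 0.1)]
```

Note that for a perfectly uniform image of $n$ pixels, $\beta = n$, consistent with its interpretation as an effective PSF area.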
We account for variations in intrapixel sensitivity by adopting a nearest
neighbor weighting algorithm, such that
the flux at each time step is normalized by a weighted sum of its
50 nearest neighbors in X and Y space on the pixel array, as described in Knutson et al. (2012) and Lewis et al. (2013).
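A minimal sketch of such a nearest-neighbor decorrelation is given below; the Gaussian weight kernel and its bandwidth are illustrative assumptions here, and the actual adopted scheme is described in Knutson et al. (2012).

```python
import numpy as np

def nn_intrapixel_correct(x, y, flux, k=50):
    """Normalize each flux point by a weighted mean of the fluxes of its
    k nearest neighbors in (x, y) centroid space. The Gaussian weight
    kernel below is an illustrative assumption."""
    x, y, flux = (np.asarray(a, dtype=float) for a in (x, y, flux))
    corrected = np.empty_like(flux)
    for i in range(flux.size):
        d2 = (x - x[i]) ** 2 + (y - y[i]) ** 2
        idx = np.argsort(d2)[1:k + 1]          # k nearest neighbors, not self
        w = np.exp(-d2[idx] / (2.0 * np.median(d2[idx]) + 1e-12))
        corrected[i] = flux[i] / np.average(flux[idx], weights=w)
    return corrected
```

Because the weights depend only on centroid positions, a flux trend correlated with pixel position divides out while an astrophysical transit signal, uncorrelated with position, survives.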
We then evaluate each of the aperture radius models to find the lowest resulting scatter in the residuals of the fitted lightcurve. The best fit aperture radius varied depending on target, but for these observations in all cases an adjustment based on noise pixel yielded improvements over fixed aperture photometry; however, both methods resulted in null transit detections. The median best fit aperture radius for each observation is shown in Table 2. Figure 1 shows the raw flux photometry for each observation. Figure 2 shows the corresponding normalized flux photometry after utilizing the nearest neighbor algorithm.
3.2. Transit Models and Uncertainty Estimation
We fix the orbital parameters for each planet to the values obtained from the radial velocity measurements, and only
the time of transit center, the planet radius, and the impact parameter
are varied in the fits. The forward model for a transit (Mandel & Agol, 2002) takes as input these three transit parameters, as well as the orbital period and planet semi-major
axis from RV measurements, and limb darkening coefficients based on
each target’s stellar parameters (Sing, 2010).
Characterization of transit parameter posterior likelihoods is carried
out using a pseudo-grid search method: given a fixed impact
parameter and transit center time, a best fit planet radius is found
by Levenberg-Marquardt chi-squared minimization. Planet radius is
effectively allowed to be "negative" in these fits by calculating a transit light curve
using the absolute value of the planet radius, then inverting the curve for negative radius values.
This is done in order not to bias the fits and to better characterize the noise level of the observations.
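The signed-radius trick can be illustrated with a toy box-shaped transit model standing in for the Mandel & Agol (2002) model used in the actual fits:

```python
import numpy as np

def box_transit(t, t0, dur, signed_depth):
    """Toy box transit: the flux dips by |signed_depth| inside the
    transit; a negative value inverts the dip into a bump, mimicking
    the 'negative radius' trick described above."""
    model = np.ones_like(t)
    model[np.abs(t - t0) < dur / 2.0] -= signed_depth
    return model

def best_depth(t, flux, t0, dur, depths):
    """One pseudo-grid step: at a fixed transit time, pick the signed
    depth that minimizes chi-squared against the light curve."""
    chi2 = [np.sum((flux - box_transit(t, t0, dur, d)) ** 2) for d in depths]
    return depths[int(np.argmin(chi2))]
```

Allowing the depth (equivalently, radius) to go negative means pure noise produces best-fit values scattered symmetrically around zero, rather than piling up at a zero boundary.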
Figure 3 shows histograms of planetary radii for a fixed impact parameter of zero (an equatorial transit), and fixed transit center times
that are stepped across the window of observation in increments of approximately 30 seconds. This effectively finds the best fit planet radius at each
location in the lightcurve. As no significant transits are detected in any of the lightcurves,
these histograms characterize the magnitude of the combined Gaussian (white)
and time correlated (red) noise, and therefore provide empirical thresholds for detection of possible transits.
These 2$\sigma$ thresholds are shown in Table 3, along with values of planet radii corresponding to models with
100% $Fe$, 100% $MgSiO_{3}$, and 100% $H_{2}O$ bulk composition (Zeng & Sasselov, 2013), derived using the planet minimum
masses found from RV measurements. Although we expect that a pure iron planet would be very unlikely based on
current planet formation models, this limiting case allows us to place a strict lower limit on the range of possible radii
for our target planets. Our estimated radii also assume that the planets have negligible atmospheres, and the
presence of a thick atmosphere would only serve to increase the transit depth for a given interior composition.
In addition to determining transit detection limits,
we confirm the validity of these limits by inserting artificial transits with depths above
the detection threshold into the data and verifying that we can reliably retrieve them in our fits.
Analysis of these artificially inserted transits yielded consistent
results for detection thresholds of planetary radii.
In Figure 4 we evaluate the sensitivity of our detection limits to changes in the assumed impact parameter $b$. We find that our limits on planetary radius are fairly insensitive to changes in impact parameter, though this sensitivity varies depending on the target. The limits for HD 97658 and HD 156668 remain nearly constant out to impact parameters of 0.9, while the thresholds for the other targets tend to vary more noticeably but still mostly remain below a planet radius of 1$R_{\oplus}$. The unusual behavior of HD 7924 in this case is likely due to a correlated noise feature in the observed lightcurve of similar duration and depth as a model transit with an impact parameter of around 0.5 and a planet radius of about 1.1$R_{\oplus}$. The relatively short duration of the HD 7924 observation influences the sensitivity of this impact parameter test to the noise in the data, but note that even in this case, a planet radius of 1.1$R_{\oplus}$ remains an unphysical solution.
4. Discussion
As the 2$\sigma$ thresholds for possible transits are less than 1$R_{\oplus}$ in all cases, well below the radius of a pure iron core model (Zeng & Sasselov, 2013), we conclude that transits for all of our targets are conclusively ruled out within the window of our observations. Table 3 shows the posterior likelihood that the planets may still transit outside the Spitzer observation windows. For several cases the probability of transit has been all but eliminated, while for the others we calculate that the individual probability of transit remains no higher than 1.4%.
For the case of GJ 876d, a null transit result is in agreement with the initial photometric measurements of Rivera et al. (2005).
However, we note that our non-detection of a transit for HD 97658b appears on initial inspection to conflict with a recent paper by Dragomir et al. (2013) announcing the detection of transits with MOST. We centered our Spitzer transit window using the predicted transit time from the preliminary ground-based transit detection of Henry et al. (2011). Subsequent follow-up observations by Dragomir et al. (2012) taken within a month of our Spitzer observations demonstrated that the planet did not transit at the time predicted by Henry et al.; our data provide additional support for this conclusion. A later re-analysis by Henry et al. indicated that the apparent transit detection was caused by an airmass effect in the original observations.
Using the updated transit ephemeris from the recent MOST detection, we calculate a predicted transit center time of $2455982.17\pm 0.06$, approximately 13 hours earlier than the Spitzer observation window that started at $2455982.73$. We therefore conclude that our non-detection of a transit for HD 97658 is consistent with the transit ephemeris reported by Dragomir et al. (2013).
5. Conclusions
We find no evidence for transits in any of the systems targeted by this survey. There remains some probability that a transit occurred outside the observation window for each target; we know this occurred for HD 97658b, but the probability is extremely small for our other targets as shown in Table 3. Excluding HD 97658, we calculate a cumulative prior transit probability of 29.4%; it is therefore not surprising that no transits were detected, but the high value of such transiting systems more than justifies the investment of Spitzer time.
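For reference, a cumulative prior probability of this kind combines the individual geometric transit probabilities (roughly $R_{*}/a$ per target for circular orbits) as one minus the product of the per-target non-transit probabilities. The per-target values below are hypothetical placeholders, not the survey's actual numbers.

```python
# Each p_i is a hypothetical per-target geometric transit probability,
# roughly R_star / a; the survey's actual values are not reproduced here.
p = [0.07, 0.06, 0.05, 0.06, 0.05, 0.08]

prob_none = 1.0
for pi in p:
    prob_none *= (1.0 - pi)          # all targets fail to transit
prob_any = 1.0 - prob_none           # at least one target transits
```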
Although no transits were detected in this work, future prospects of utilizing this method for super-Earth discovery remain high. By our estimates the majority of super-Earths with well-constrained ephemerides have already been observed by either Spitzer, MOST, or both, but we expect that current and next-generation radial velocity surveys will produce an ever-growing number of such systems in the coming years. Until the launch of TESS, this method remains one of the most promising avenues for detecting transiting super-Earths around bright, nearby stars.
J.-M.D. and N.K.L. acknowledge funding from NASA through the Sagan Exoplanet Fellowship program administered by the NASA Exoplanet Science Institute (NExScI). This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
References
Berta, Z. K., et al. 2012, AJ, 144, 145
Charbonneau, D., et al. 2009, Nature, 462, 891
Chiang, E., & Laughlin, G. 2013, MNRAS, 431, 3444
Correia, A., et al. 2010, A&A, 511, A21
Dragomir, D., et al. 2012, ApJ, 759, L41
Dragomir, D., et al. 2013, ApJ, 772, L2
Fischer, D. A., et al. 2012, ApJ, 745, 21
Fortney, J. J., et al. 2013, ApJ, in press (arXiv:1306.4329)
Fressin, F., et al. 2013, ApJ, 766, 81
Giacobbe, P., et al. 2012, MNRAS, 424, 3101
Gillon, M., et al. 2010, A&A, 518, A25
Gillon, M., et al. 2012, A&A, 539, A14
Hansen, B. M., & Murray, N. 2012, ApJ, 751, 158
Henry, G. W., et al. 2011, ApJ, withdrawn (arXiv:1109.2549)
Howard, A. W., et al. 2009, ApJ, 696, 75
Howard, A. W., et al. 2011a, ApJ, 726, 73
Howard, A. W., et al. 2011b, ApJ, 730, 10
Howard, A. W., et al. 2012, ApJS, 201, 15
Knutson, H. A., et al. 2012, ApJ, 754, 22
Kovács, G., et al. 2013, MNRAS, in press (arXiv:1304.1399)
Lewis, N. K., et al. 2013, ApJ, 766, 95
Mandel, K., & Agol, E. 2002, ApJ, 580, L171
Mighell, K. J. 2005, MNRAS, 361, 861
Ricker, G. R., et al. 2010, BAAS, 42, 459
Rivera, E. J., et al. 2005, ApJ, 634, 625
Rivera, E. J., et al. 2010a, ApJ, 708, 1492
Rivera, E. J., et al. 2010b, ApJ, 719, 890
Rogers, L. A., & Seager, S. 2010, ApJ, 712, 974
Sing, D. 2010, A&A, 510, A21
Todorov, K. O., et al. 2013, ApJ, 770, 102
Valencia, D., et al. 2010, A&A, 516, 20
Valencia, D., et al. 2013, ApJ, 775, 10
Winn, J., et al. 2011, ApJ, 737, L18
Zeng, L., & Sasselov, D. 2013, PASP, 125, 227
Consistent and Complementary Graph Regularized
Multi-view Subspace Clustering
Qinghai Zheng,1 Jihua Zhu,1 Zhongyu Li,1 Shanmin Pang,1 Jun Wang,2 Lei Chen3
1School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2Shanghai Institute for Advanced Communication and Data Science,
School of Communication and Information Engineering,Shanghai University, Shanghai 200444, China
3Jiangsu Key Laboratory of Big Data Security and Intelligent Processing,
Nanjing University of Posts and Telecommunications, Nanjing 210023, China
Corresponding Author
Abstract
This study investigates the problem of multi-view clustering, where multiple views contain consistent information and each view also includes complementary information. Exploration of all this information is crucial for good multi-view clustering. However, most traditional methods blindly or crudely combine multiple views for clustering and are unable to fully exploit the valuable information. Therefore, we propose a consistent and complementary graph-regularized multi-view subspace clustering (GRMSC) method, which simultaneously integrates a consistent graph regularizer and a complementary graph regularizer into the objective function. In particular, the consistent graph regularizer learns the intrinsic affinity relationship of data points shared by all views. The complementary graph regularizer investigates the specific information of multiple views. It is noteworthy that the consistent and complementary regularizers are formulated by two different graphs constructed from the first-order proximity and second-order proximity of multiple views, respectively. The objective function is optimized by the augmented Lagrangian multiplier method in order to achieve multi-view clustering. Extensive experiments on six benchmark datasets serve to validate the effectiveness of the proposed method over other state-of-the-art multi-view clustering methods.
Introduction
Clustering is an important task in unsupervised learning, which can serve as a preprocessing step to assist other learning tasks or as a stand-alone exploratory tool to uncover underlying information from data (?). The goal of clustering is to group unlabeled data points into corresponding categories according to their intrinsic similarities. Many effective clustering algorithms have been proposed, such as k-means clustering (?), spectral clustering (?), and subspace clustering (?; ?). However, these methods are designed for single-view rather than multi-view data, which arises from various fields or different measurements and is common in many real-world applications. Unlike single-view data, multi-view data contains both consensus information and complementary information for multi-view learning (?). Therefore, an important issue in multi-view clustering is how to fuse multiple views properly to mine the underlying information effectively. Evidently, it is not a good choice to apply a single-view clustering algorithm to multi-view data directly (?; ?; ?). In this study, we consider the multi-view clustering problem based on the subspace clustering algorithm (?; ?), owing to its good interpretability and promising performance in practice.
Multi-view subspace clustering assumes that all views are constructed from a shared latent subspace and pursues a common subspace representation for clustering (?). Many multi-view subspace clustering methods have been proposed in recent years (?; ?; ?; ?; ?; ?). Although good clustering results can be obtained in practice, the existing methods have some deficiencies. First, some methods deal with multiple views separately and combine the clustering results of different views directly; as a result, the relationship among multiple views is ignored during the clustering process. Second, most existing methods take only the consensus information or only the complementary information of multi-view data into consideration rather than exploring both. Third, a few methods integrate the graph information of multiple views into the subspace representation to improve clustering results; however, only the first-order similarity (?; ?) of data points is considered and employed as is, which is oversimplified for multi-view clustering. In fact, the first-order similarity is an observed pairwise proximity: it captures local graph information but lacks the global graph structure (?). Moreover, the clustering structure implied by the first-order proximity is often discordant across views, because different views have different statistical properties.
To address the above-mentioned limitations of existing clustering methods, a graph-regularized multi-view subspace clustering (GRMSC) method is presented in this study. Considering that clustering results should be unified across different views, it is vital for multi-view clustering to integrate the information of multiple views in a suitable way (?; ?). In the proposed method, low-rank representation (LRR) (?) is performed on all views jointly, and a common subspace representation is obtained, accompanied by two graph regularizers: a consistent graph regularizer based on the first-order proximity to explore the consensus information of all views, and a complementary graph regularizer based on the second-order proximity to explore the complementarity of different views. Figure 1 illustrates the complete framework of the proposed method. The consistent and complementary graph regularizers are discussed in detail subsequently. To achieve multi-view clustering, an algorithm based on the augmented Lagrangian multiplier (ALM) method (?) is designed to optimize the proposed objective function. Finally, clustering results are obtained by applying spectral clustering to the affinity matrix calculated from the common subspace representation. Comprehensive experiments on six benchmark datasets validate the superior performance of the proposed method compared with existing state-of-the-art clustering methods.
The main contributions of this study are as follows:
1)
A novel GRMSC method is proposed to perform clustering on multiple views simultaneously by fully exploring the intrinsic information of multi-view data;
2)
A consistent graph regularizer and a complementary graph regularizer are introduced to integrate the multi-view information in a suitable way for multi-view clustering;
3)
An effective algorithm based on the ALM method is developed and extensive experiments are conducted on six real-world datasets to confirm the superiority of the proposed method.
Related Works
In recent years, many multi-view clustering methods have been proposed. Based on how the views are combined, most existing methods can be roughly classified into three groups (?): co-training or co-regularized methods, graph-based methods, and subspace-learning-based methods.
Multi-view clustering methods of the first type (?; ?; ?) often combine multiple views under the assumption that all views share the same common eigenvector matrix (?; ?). For example, co-regularized multi-view spectral clustering (?) learns the graph Laplacian eigenvectors of each view separately, and then utilizes them to constrain the other views so as to obtain the same clustering results. Graph-based methods (?; ?; ?; ?; ?) explore the underlying information of multi-view data by fusing different graphs. For instance, robust multi-view subspace clustering (RMSC) (?) pursues a latent transition probability matrix of all views via low-rank and sparse decomposition, and then obtains clustering results based on the standard Markov chain. Auto-weighted multiple graph learning (AMGL) (?) integrates all graphs with auto-weighted factors, based on the fact that different views are associated with incomplete information for real manifold learning yet should yield the same clustering results. Multi-view consensus graph clustering (MCGC) (?) achieves clustering results by learning a common shared graph of all views under a constrained Laplacian rank. Graph-based multi-view clustering (GMC) (?) introduces an auto-weighted strategy and a constrained Laplacian rank to construct a unified graph matrix for multiple views. Many multi-view subspace clustering approaches (?; ?; ?; ?; ?; ?) have been proposed as well, based on the idea that multiple views share the same latent subspace and a common shared subspace representation. Low-rank tensor-constrained multi-view subspace clustering (LT-MSC) (?) and tensor singular value decomposition based multi-view subspace clustering (t-SVD-MSC) (?) seek a low-rank tensor subspace to fully explore the high-order correlations of multi-view data for clustering. Latent multi-view subspace clustering (LMSC) (?) seeks an underlying latent representation, which is the origin of all views, and simultaneously runs the low-rank representation algorithm on the learned latent representation. Multi-view low-rank sparse subspace clustering (MLRSSC) (?) aims to learn a joint subspace representation and constructs a shared subspace representation under both low-rank and sparsity constraints.
Even though the various multi-view clustering methods are based on different theories, their key objective is the same, i.e., achieving promising clustering results by combining multiple views properly and fully exploring the underlying clustering structures of multi-view data. Unlike most existing methods, the method proposed in this study integrates the first- and second-order graph information into the multi-view subspace clustering process by introducing a consistent graph regularizer and a complementary graph regularizer, so that both the consensus information and the complementary information of multi-view data can be explored simultaneously.
The Proposed Approach
In this section, we discuss the GRMSC approach. Figure 1 presents the complete framework for the proposed method, and Table 1 presents the symbols used in this paper.
Given the multi-view data ${\bf X}=\{{{\bf X}^{(k)}}\}_{k=1}^{v}$, whose samples are drawn from $c$ subspaces, the proposed method can be decomposed into three parts: the low-rank representation on multiple views, the consistent graph regularizer, and the complementary graph regularizer. The method processes all views simultaneously, so that the intrinsic information can be fully explored.
Low-Rank Representation on Multiple Views
Under the assumption that all views have the same clustering results, LRR (?) is performed on all views and a common shared subspace representation is achieved. Consequently, an optimization problem can be written as follows:
$$\begin{array}{l}\mathop{\min}\limits_{{\bf Z},{{\bf E}^{(k)}}}\ {\left\|{\bf Z}\right\|_{*}}+{\lambda}\sum\limits_{k=1}^{v}{{\left\|{{\bf E}^{(k)}}\right\|}_{2,1}}\\ {\rm s.t.}\ {{\bf X}^{(k)}}={{\bf X}^{(k)}}{\bf Z}+{{\bf E}^{(k)}},\end{array}$$
(1)
where $\bf{Z}$ is the common subspace representation whose columns denote the representation of corresponding samples, ${\bf E}^{(k)}$ indicates the sample-specific error of the $k$th view, and $\lambda$ is the trade-off parameter.
Evidently, the above problem deals with all views simultaneously. However, the information of multiple views cannot be investigated properly in this way, because the low-rank constraint on the common $\bf{Z}$ ignores the specific information of different views. Moreover, the graph information, which is vital for clustering, is not employed in this formulation. A consistent graph regularizer and a complementary regularizer are introduced to handle these limitations.
Consistent Graph Regularizer
Most existing graph-based multi-view clustering approaches employ graphs with first-order proximity for clustering, whose elements denote pairwise similarities between two data points. In this study, Gaussian kernels are utilized to define proximity matrices of all views. Taking the $k$th view as an example, we have the following formula
$${\bf S}_{ij}^{(k)}=\exp\left(-\frac{\left\|{\bf X}_{i}^{(k)}-{\bf X}_{j}^{(k)}\right\|_{2}^{2}}{\sigma^{2}}\right),$$
(2)
where ${\bf S}_{ij}^{(k)}$ denotes the similarity between the $i$th and $j$th data points in the $k$th view, and $\sigma$ is set to the median pairwise Euclidean distance. A mutual $k$-nearest-neighbor (m-$k$NN) strategy is employed, so the elements of the first-order proximity matrix are:
$${\bf\Lambda}_{ij}^{(k)}=\left\{\begin{array}{ll}{\bf S}_{ij}^{(k)},&\text{if }{\bf X}_{i}^{(k)}\text{ and }{\bf X}_{j}^{(k)}\text{ are m-}k\text{NN},\\ 0,&\text{otherwise,}\end{array}\right.$$
(3)
where ${\bf\Lambda}^{(k)}$ is the first-order proximity matrix of the $k$th view. Clearly, ${\bf\Lambda}^{(k)}$ captures the local graph structures.
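A small numerical sketch of Eqs. (2)-(3), with the kernel bandwidth set to the median pairwise distance as assumed above:

```python
import numpy as np

def first_order_proximity(X, k=3):
    """Eqs. (2)-(3): Gaussian-kernel similarities with the bandwidth set
    to the median pairwise distance, sparsified by a mutual k-nearest-
    neighbor (m-kNN) mask. X has shape (n_samples, dim)."""
    X = np.asarray(X, dtype=float)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    sigma = np.median(np.sqrt(d2[d2 > 0]))          # median distance
    S = np.exp(-d2 / sigma ** 2)
    order = np.argsort(d2, axis=1)                  # column 0 is self
    knn = np.zeros(d2.shape, dtype=bool)
    knn[np.arange(len(X))[:, None], order[:, 1:k + 1]] = True
    return np.where(knn & knn.T, S, 0.0)            # keep mutual kNN only
```

The intersection `knn & knn.T` is what makes the neighborhood mutual: an edge survives only if each point is among the other's $k$ nearest neighbors, which keeps the resulting graph symmetric.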
However, as shown in Figure 1, the graphs with the first-order proximity differ from view to view because the statistical properties of different views are diverse. Evidently, it is not suitable to leverage the first-order proximity matrices directly. To explore the common shared intrinsic graph information of multi-view data, a consistent graph regularizer is introduced. Given $\{{{\bf\Lambda}^{(k)}}\}_{k=1}^{v}$, a proximity matrix ${\bf\Lambda}^{*}$ can be constructed as follows:
$${\bf\Lambda}^{*}=\mathop{\odot}\limits_{k=1}^{v}{\bf\Lambda}^{(k)},$$
(4)
where $\odot$ denotes the Hadamard product. It is noteworthy that not all elements of ${\bf\Lambda}^{*}$ are taken into consideration. As shown in Figure 1, nonzero elements of ${\bf\Lambda}^{*}$ indicate the shared intrinsic consensus graph information of multi-view data. The consistent graph regularizer, i.e., ${\Psi_{{\rm{ConGR}}}}$, for multi-view clustering can be defined as follows:
$${\Psi_{{\rm ConGR}}}({\bf Z})=\frac{1}{2}\sum\limits_{(i,j)\in\Omega}{\bf\Lambda}_{ij}^{*}\left\|{{\bf Z}_{i}}-{{\bf Z}_{j}}\right\|_{2}^{2},$$
(5)
where $\Omega$ is the index set of the nonzero elements in ${\bf\Lambda}^{*}$; we also denote by $\bar{\Omega}$ the index set of the zero elements in ${\bf\Lambda}^{*}$ in what follows.
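Eqs. (4)-(5) can be sketched numerically as follows; a dense double loop is used for clarity rather than efficiency:

```python
import numpy as np

def consistent_regularizer(Z, graphs):
    """Eqs. (4)-(5): the Hadamard product of the per-view first-order
    proximity matrices keeps only the edges present in every view (the
    index set Omega); Psi_ConGR then penalizes differences between the
    corresponding columns of the shared representation Z."""
    lam_star = graphs[0].copy()
    for g in graphs[1:]:
        lam_star = lam_star * g                 # Hadamard product, Eq. (4)
    n = Z.shape[1]                              # columns of Z index samples
    val = 0.0
    for i in range(n):                          # dense loop for clarity
        for j in range(n):
            if lam_star[i, j] != 0:
                val += 0.5 * lam_star[i, j] * np.sum((Z[:, i] - Z[:, j]) ** 2)
    return val, lam_star
```

Because the Hadamard product zeros out any edge missing in even one view, only the affinity relationships agreed upon by all views contribute to this term.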
The consistent graph regularizer integrates the consensus graph information into the subspace representation properly. For the remaining parts of the graphs of multiple views, a complementary graph regularizer is introduced to explore the complementary information of multi-view data.
Complementary Graph Regularizer
Elements in $\bar{\Omega}$ of $\{{{\bf\Lambda}^{(k)}}\}_{k=1}^{v}$ are inconsistent across different views, so it is inadvisable to use them as in Eq. (5); how to fuse them effectively is vital for multi-view clustering. In this paper, the second-order proximity matrices of multiple views, i.e., $\{{{\bf\Upsilon}^{(k)}}\}_{k=1}^{v}$, are introduced, and a complementary graph regularizer is defined based on their elements in $\bar{\Omega}$ to benefit the clustering performance.
Under the intuition that data points with more shared neighbors are more likely to be similar, the second-order proximity can be constructed as follows:
$${\bf\Upsilon}_{ij}^{(k)}=\exp\left(-\frac{\left\|{\bf\Lambda}_{i}^{(k)}-{\bf\Lambda}_{j}^{(k)}\right\|_{2}^{2}}{\sigma^{2}}\right),$$
(6)
where ${\bf\Upsilon}_{ij}^{(k)}$ denotes the second-order proximity between the $i$th and $j$th data points in the $k$th view. Evidently, the second-order proximity matrices of multiple views, i.e., $\{{{\bf\Upsilon}^{(k)}}\}_{k=1}^{v}$, capture the global graph information of multi-view data. Furthermore, to investigate the complementary information of multi-view data, the following complementary graph regularizer, i.e., ${\Psi_{{\rm{ComGR}}}}$, is introduced:
$${\Psi_{{\rm ComGR}}}({\bf Z})=\frac{1}{2}\sum\limits_{k=1}^{v}\sum\limits_{(i,j)\in\bar{\Omega}}{\bf\Upsilon}_{ij}^{(k)}\left\|{{\bf Z}_{i}}-{{\bf Z}_{j}}\right\|_{2}^{2},$$
(7)
in which elements in $\bar{\Omega}$ of $\{{{\bf\Upsilon}^{(k)}}\}_{k=1}^{v}$ are utilized. Different from the consistent graph regularizer, the complementary graph regularizer defined in Eq. (7) explores the global graph information of all views and integrates the complementary graph information into the subspace representation to improve the performance of multi-view clustering.
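Eqs. (6)-(7) admit a similarly direct sketch; the bandwidth choice for the second-order kernel here is an assumption carried over from Eq. (2):

```python
import numpy as np

def second_order_proximity(lam):
    """Eq. (6): two points are similar when their rows of the first-order
    proximity matrix are close, i.e., when they share many neighbors."""
    lam = np.asarray(lam, dtype=float)
    d2 = np.sum((lam[:, None, :] - lam[None, :, :]) ** 2, axis=-1)
    sigma = np.median(np.sqrt(d2[d2 > 0]))      # bandwidth assumption
    return np.exp(-d2 / sigma ** 2)

def complementary_regularizer(Z, second_graphs, omega_bar):
    """Eq. (7): per-view second-order weights, summed only over the
    index pairs in Omega_bar (edges not shared by all views)."""
    val = 0.0
    for ups in second_graphs:
        for (i, j) in omega_bar:
            val += 0.5 * ups[i, j] * np.sum((Z[:, i] - Z[:, j]) ** 2)
    return val
```

Restricting the sum to $\bar{\Omega}$ keeps this term disjoint from the consistent regularizer, so the two regularizers partition the edge set rather than double-counting it.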
Objective Function
Fusing the aforementioned three components jointly, the objective function of the proposed GRMSC can be written as:
$$\begin{array}{l}\mathop{\min}\limits_{{\bf Z},{{\bf E}^{(k)}}}{\left\|{\bf Z}\right\|_{*}}+{\lambda_{1}}\sum\limits_{k=1}^{v}{\left\|{{\bf E}^{(k)}}\right\|_{2,1}}+{\lambda_{2}}\left({\Psi_{{\rm{ConGR}}}}({\bf Z})+\alpha{\Psi_{{\rm{ComGR}}}}({\bf Z})\right)\\
{\rm{s.t.}}\quad{{\bf X}^{(k)}}={{\bf X}^{(k)}}{\bf Z}+{{\bf E}^{(k)}},\end{array}$$
(8)
where $\lambda_{1}$, $\lambda_{2}$, and $\alpha$ are tradeoff parameters.
Optimization
To optimize ${\bf Z}$ and ${\bf E}^{(k)}$, the augmented Lagrange multiplier (ALM) method (?) is adopted and an algorithm is proposed. To make the optimization effective and the objective function separable, an auxiliary variable ${\bf Q}$ is introduced for the nuclear norm. As a result, the objective function, i.e., Eq. (8), can be rewritten as follows:
$$\begin{array}{l}\mathop{\min}\limits_{{\bf Z},{{\bf E}^{(k)}},{\bf Q}}{\left\|{\bf Q}\right\|_{*}}+{\lambda_{1}}\sum\limits_{k=1}^{v}{\left\|{{\bf E}^{(k)}}\right\|_{2,1}}+{\lambda_{2}}\left({\Psi_{{\rm{ConGR}}}}({\bf Z})+\alpha{\Psi_{{\rm{ComGR}}}}({\bf Z})\right)\\
{\rm{s.t.}}\quad{{\bf X}^{(k)}}={{\bf X}^{(k)}}{\bf Z}+{{\bf E}^{(k)}},\quad{\bf Q}={\bf Z},\end{array}$$
(9)
where ${\bf Q}$ is the auxiliary variable. The augmented Lagrange function can then be formulated as:
$$\begin{array}{rl}{\cal L}({\bf Q},{\bf Z},{{\bf E}^{(k)}},{\bf Y}_{1}^{(k)},{\bf Y}_{2})=&{\left\|{\bf Q}\right\|_{*}}+{\lambda_{1}}\sum\limits_{k=1}^{v}{\left\|{{\bf E}^{(k)}}\right\|_{2,1}}+{\lambda_{2}}\left({\Psi_{{\rm{ConGR}}}}({\bf Z})+\alpha{\Psi_{{\rm{ComGR}}}}({\bf Z})\right)\\
&+\sum\limits_{k=1}^{v}{\Gamma({\bf Y}_{1}^{(k)},{{\bf X}^{(k)}}-{{\bf X}^{(k)}}{\bf Z}-{{\bf E}^{(k)}})}+\Gamma({{\bf Y}_{2}},{\bf Z}-{\bf Q}),\end{array}$$
(10)
where ${\bf Y}_{1}^{(k)}$ and ${\bf Y}_{2}$ denote Lagrange multipliers, and, to keep the notation concise, $\Gamma({\bf A},{\bf B})$ is defined as:
$$\Gamma({\bf A},{\bf B})=\left\langle{{\bf A},{\bf B}}\right\rangle+\frac{\mu}{2}\left\|{\bf B}\right\|_{F}^{2},$$
(11)
where $\mu$ denotes an adaptive penalty parameter with a positive value and $\left\langle{\cdot,\cdot}\right\rangle$ is the inner product. Consequently, the problem of minimizing the augmented Lagrange function (10) can be divided into four subproblems. Algorithm 1 presents the whole optimization procedure.
Subproblem of Updating ${\bf E}^{(k)}$
By fixing other variables, the subproblem with respect to ${\bf E}^{(k)}$ can be constructed:
$$\mathop{\min}\limits_{{{\bf E}^{(k)}}}{\lambda_{1}}\sum\limits_{k=1}^{v}{\left\|{{\bf E}^{(k)}}\right\|_{2,1}}+\sum\limits_{k=1}^{v}\Gamma({\bf Y}_{1}^{(k)},{{\bf X}^{(k)}}-{{\bf X}^{(k)}}{\bf Z}-{{\bf E}^{(k)}}),$$
(12)
which can be simplified as follows:
$$\mathop{\min}\limits_{{{\bf E}^{(k)}}}{\lambda_{1}}\sum\limits_{k=1}^{v}{\left\|{{\bf E}^{(k)}}\right\|_{2,1}}+\frac{\mu}{2}\sum\limits_{k=1}^{v}\left\|{{\bf E}^{(k)}}-{\bf T}_{E}^{(k)}\right\|_{F}^{2},$$
(13)
which can be solved according to Lemma 4.1 in (?), and ${\bf T}_{E}^{(k)}$ has the following definition:
$${\bf T}_{E}^{(k)}={{\bf X}^{(k)}}-{{\bf X}^{(k)}}{\bf Z}+\frac{{\bf Y}_{1}^{(k)}}{\mu}.$$
(14)
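A sketch of the resulting E-step follows. Lemma 4.1 of the LRR paper gives the minimizer of $\tau\|{\bf E}\|_{2,1}+\tfrac{1}{2}\|{\bf E}-{\bf T}\|_{F}^{2}$ by shrinking each column of ${\bf T}$; here $\tau=\lambda_{1}/\mu$. The function name is ours, and we adopt the column-sum convention for the $\ell_{2,1}$ norm used by LRR:

```python
import numpy as np

def update_E(T, tau):
    """Minimize tau*||E||_{2,1} + 0.5*||E - T||_F^2 column-wise:
    a column of T with l2 norm below tau is zeroed, otherwise its
    norm is reduced by tau while its direction is kept."""
    E = np.zeros_like(T)
    norms = np.linalg.norm(T, axis=0)  # l2 norm of each column
    keep = norms > tau
    E[:, keep] = T[:, keep] * (1.0 - tau / norms[keep])
    return E
```

Because the problem is separable over columns, this closed form is exact; columns dominated by noise are removed entirely, which is what makes the $\ell_{2,1}$ penalty robust to sample-specific corruption.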
Subproblem of Updating ${\bf Q}$
In order to update ${\bf Q}$, the other variables are fixed and the following subproblem is formulated:
$$\mathop{\min}\limits_{\bf Q}{\left\|{\bf Q}\right\|_{*}}+\Gamma({{\bf Y}_{2}},{\bf Z}-{\bf Q}),$$
(15)
which is equivalent to the following problem:
$$\mathop{\min}\limits_{\bf Q}{\left\|{\bf Q}\right\|_{*}}+\frac{\mu}{2}\left\|{\bf Q}-\left({\bf Z}+\frac{{{\bf Y}_{2}}}{\mu}\right)\right\|_{F}^{2},$$
(16)
which has a closed-form solution:
$${\bf Q}={\bf U}{S_{1/\mu}}({\bf\Sigma}){\bf V},$$
(17)
where ${\bf U}{\bf\Sigma}{\bf V}={\bf Z}+\frac{{{{\bf Y}_{2}}}}{\mu}$ and ${S_{\varepsilon}}$ denotes a soft-threshold operator (?) as follows:
$${S_{\varepsilon}}(x)=\left\{\begin{array}{ll}x-\varepsilon,&{\rm{if}}\ x-\varepsilon>0\\
x+\varepsilon,&{\rm{if}}\ x+\varepsilon<0\\
0,&{\rm{otherwise}}.\end{array}\right.$$
(18)
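In code, the ${\bf Q}$-step of Eqs. (16)-(18) is one singular value thresholding pass; since singular values are nonnegative, only the shrink-toward-zero branch of $S_{\varepsilon}$ ever fires. A minimal sketch (the function name is ours):

```python
import numpy as np

def update_Q(Z, Y2, mu):
    """Nuclear-norm proximal step: SVD of Z + Y2/mu, then shrink the
    singular values by 1/mu and reassemble (Eqs. 16-18)."""
    U, s, Vt = np.linalg.svd(Z + Y2 / mu, full_matrices=False)
    return (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
```

Because the shrinkage can zero out small singular values, the iterate ${\bf Q}$ is typically low-rank, which is exactly the effect the nuclear norm in Eq. (9) is meant to produce.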
Subproblem of Updating ${\bf Z}$
When the other variables are fixed, the subproblem of updating ${\bf Z}$ can be written as follows:
$$\mathop{\min}\limits_{\bf Z}{\lambda_{2}}\left({\Psi_{{\rm{ConGR}}}}({\bf Z})+\alpha{\Psi_{{\rm{ComGR}}}}({\bf Z})\right)+\sum\limits_{k=1}^{v}{\Gamma({\bf Y}_{1}^{(k)},{{\bf X}^{(k)}}-{{\bf X}^{(k)}}{\bf Z}-{{\bf E}^{(k)}})}+\Gamma({{\bf Y}_{2}},{\bf Z}-{\bf Q}),$$
(19)
whose solution can be obtained by taking the derivative with respect to ${\bf Z}$ and setting it to zero. Specifically, to make the optimization efficient, we define a matrix ${{\bf W}}^{(k)}$:
$$\left\{\begin{array}{ll}{{\bf W}}_{ij}^{(k)}=\frac{1}{v}{\bf\Lambda}_{ij}^{*},&(i,j)\in\Omega\\
{{\bf W}}_{ij}^{(k)}=\alpha{\bf\Upsilon}_{ij}^{(k)},&(i,j)\in\bar{\Omega},\end{array}\right.$$
(20)
and it is easy to prove the following equation:
$${\Psi_{{\rm{ConGR}}}}({\bf Z})+\alpha{\Psi_{{\rm{ComGR}}}}({\bf Z})=\sum\limits_{k=1}^{v}{Tr({{\bf Z}^{T}}{{\bf L}^{(k)}}{\bf Z})},$$
(21)
where ${\bf L}^{(k)}$ is the Laplacian matrix of ${{\bf W}}^{(k)}$, and ${\bf Z}^{T}$ indicates the transpose of the subspace representation ${\bf Z}$. Therefore, the solution of Eq. (19) can be written as follows:
$${\bf Z}={\bf T}_{ZA}^{-1}{{\bf T}_{ZB}},$$
(22)
where ${\bf T}_{ZA}^{-1}$ is the inverse of ${\bf T}_{ZA}$, and ${\bf T}_{ZA}$ and ${\bf T}_{ZB}$ are defined as:
$$\begin{array}{l}{{\bf T}_{ZA}}={\lambda_{2}}\sum\limits_{k=1}^{v}\left({{\bf L}^{(k)T}}+{{\bf L}^{(k)}}\right)+\mu\left(\sum\limits_{k=1}^{v}{{\bf X}^{(k)T}}{{\bf X}^{(k)}}+{\bf I}\right),\\
{{\bf T}_{ZB}}=\sum\limits_{k=1}^{v}{{\bf X}^{(k)T}}\left({\bf Y}_{1}^{(k)}+\mu\left({{\bf X}^{(k)}}-{{\bf E}^{(k)}}\right)\right)+\mu{\bf Q}-{{\bf Y}_{2}},\end{array}$$
(23)
where ${\bf I}$ is the identity matrix with suitable size.
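The graph fusion and the ${\bf Z}$-step can be sketched together in NumPy. This is a minimal illustration, not the paper's implementation: the function names are ours, we assume the unnormalized Laplacian ${\bf L}={\bf D}-{\bf W}$, and the signs in the linear system follow from differentiating $\Gamma({\bf Y}_{1},{\bf X}-{\bf X}{\bf Z}-{\bf E})$ and $\Gamma({\bf Y}_{2},{\bf Z}-{\bf Q})$ in Eq. (10) directly:

```python
import numpy as np

def fused_graph(Lam_star, Ups_k, Omega_mask, alpha, v):
    """Fused weights W^(k) of Eq. (20): consensus weights (1/v)*Lambda*
    on the consistent pairs Omega, complementary weights alpha*Upsilon^(k)
    on the remaining pairs."""
    return np.where(Omega_mask, Lam_star / v, alpha * Ups_k)

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W of a (symmetric) W."""
    return np.diag(W.sum(axis=1)) - W

def update_Z(Xs, Es, Y1s, Ls, Q, Y2, lam2, mu):
    """Z-step of Eq. (22): set the gradient of the augmented Lagrangian
    (Eq. 10) with respect to Z to zero and solve T_ZA @ Z = T_ZB."""
    n = Q.shape[0]
    T_ZA = lam2 * sum(L + L.T for L in Ls) \
         + mu * (sum(X.T @ X for X in Xs) + np.eye(n))
    T_ZB = sum(X.T @ (Y1 + mu * (X - E)) for X, E, Y1 in zip(Xs, Es, Y1s)) \
         + mu * Q - Y2
    return np.linalg.solve(T_ZA, T_ZB)
```

Since $\mu({\bf I}+\sum_k {\bf X}^{(k)T}{\bf X}^{(k)})$ is positive definite and the Laplacians of nonnegative symmetric weights are positive semidefinite, ${\bf T}_{ZA}$ is always invertible and a single linear solve suffices.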
Subproblem of Updating ${\bf Y}_{1}^{(k)}$, ${\bf Y}_{2}$ and $\mu$
We update the Lagrange multipliers and $\mu$ in the following form according to (?):
$$\left\{\begin{array}{l}{\bf Y}_{1}^{(k)}={\bf Y}_{1}^{(k)}+\mu({{\bf X}^{(k)}}-{{\bf X}^{(k)}}{\bf Z}-{{\bf E}^{(k)}})\\
{{\bf Y}_{2}}={{\bf Y}_{2}}+\mu({\bf Z}-{\bf Q})\\
\mu=\min(\rho\mu,{\mu_{\max}}),\end{array}\right.$$
(24)
where $\mu_{\max}$ is a threshold value and $\rho$ indicates a nonnegative scalar.
Computational Complexity
The main computational burden consists of the four subproblems. Besides, ${\bf\Lambda}^{(k)}$, ${\bf\Lambda}^{*}$, and ${\bf\Upsilon}^{(k)}$ are pre-computed outside the algorithm. In line with Table 1, the number of samples is $n$, the number of views is $v$, the number of iterations is $t$, and the dimension of the $k$th view is $d_{k}$. For convenience, we define $d=\max(\left\{{{d_{k}}}\right\}_{k=1}^{v})$. The complexities of updating $\left\{{{{\bf E}^{(k)}}}\right\}_{k=1}^{v}$ and ${\bf Q}$ are ${\cal O}(vdn)$ and ${\cal O}(n^{3})$, respectively; for updating ${\bf Z}$ and the Lagrange multipliers, the complexity is ${\cal O}(n^{3}+vdn)$. Therefore, the computational complexity of Algorithm 1 is ${\cal O}(tn(vd+n^{2}))$.
Experiments
Comprehensive experiments are conducted and presented in this section. Furthermore, the convergence property and parameter sensitivity of the proposed method are analyzed as well. Six benchmark datasets are employed. In particular, 3-Sources (?) is a three-view dataset containing news article data from BBC, Reuters, and Guardian. BBCSport (?) consists of 544 sports news reports, each of which is decomposed into two subparts. Movie617 contains 617 movie samples of 17 categories with two views, i.e., keywords and actors. NGs (?), consisting of 500 samples, is a subset of the 20 Newsgroups dataset and has three views. Prokaryotic (?) is a multi-view dataset that describes prokaryotic species from three aspects: textual data, proteome composition, and genomic representations. Yale Face is a dataset containing 165 face images of 15 individuals, and each image is described by three features, namely intensity, LBP, and Gabor. Additionally, six evaluation metrics (?; ?; ?) are utilized: Normalized Mutual Information (NMI), ACCuracy (ACC), F-Score, AVGent (AVG), Precision, and Rand Index (RI). Higher values of all metrics, except for AVGent, indicate better clustering results. Parameters of all comparison methods are fine-tuned. To eliminate randomness, 30 test runs with random initialization are performed and clustering results are reported as mean values with standard deviations. Numbers in bold denote the best clustering results.
Validation and Ablation Experiments
To validate the effectiveness of our GRMSC, results of three different methods are compared. The first clustering method is based on the low-rank representation (?) with best single view, i.e., LRR$\rm{{}_{BSV}}$. The second clustering method is based on the subspace representation obtained from Eq. (1), named MSC$\rm{{}_{Naive}}$ for convenience. The third method is the graph-regularized multi-view subspace clustering, which only leverages the first-order proximity to construct the graph regularizer and is termed the GRMSC$\rm{{}_{Naive}}$.
As displayed in Table 2, multi-view clustering can generally achieve better clustering results than those of single view clustering. Furthermore, compared with MSC$\rm{{}_{Naive}}$ and GRMSC$\rm{{}_{Naive}}$, the proposed GRMSC method achieves significantly better clustering performance, which validates the necessity of introducing the consistent graph regularizer and the complementary graph regularizer, while verifying the effectiveness of the proposed method.
Comparison Experiments
To demonstrate the superiority of the GRMSC method, Table 3 displays the comparison of experimental results of five state-of-the-art multi-view subspace clustering methods, namely RMSC (?), AMGL (?), LMSC (?), MLRSSC (?), GMC (?), previously discussed in the section Related Works.
The GRMSC method outperforms other methods on all benchmark datasets. For example, considering the experimental results on the Yale Face dataset, this method improves clustering performance over the second best one by approximately $6.98\%$ and $7.27\%$ with respect to NMI and ACC, respectively. It is noteworthy that although the clustering result of LMSC for the Precision metric is slightly better, GRMSC surpasses the second best method by a significant margin in the remaining five metrics. Table 3 displays the competitiveness of the proposed method with respect to other state-of-the-art clustering methods.
Convergence and Parameter Sensitivity
We consider the experiments on NGs. As depicted in Figure 2, the proposed method has a stable convergence and can converge within 20 iterations. Actually, for experiments on all datasets, the proposed method has similar convergence.
Three parameters, namely $\lambda_{1}$, $\lambda_{2}$, and $\alpha$, are involved in our GRMSC. For convenience, $\alpha$, which balances $\Psi_{{\rm{ConGR}}}$ and $\Psi_{{\rm{ComGR}}}$, is fixed and set to $0.001$ in this study for all datasets. $\lambda_{1}$ is tuned based on prior information about the multi-view data, including the corruption and noise level. $\lambda_{2}$ is tuned to balance the importance of the low-rank representation of all views against the two graph regularizers. Furthermore, the values of $\lambda_{1}$ and $\lambda_{2}$ are selected from the set $\{0.001,0.01,0.1,1,10,100,1000\}$. As shown in Figure 3, good clustering results can be obtained with ${\lambda_{1}}\geq 1$ and ${\lambda_{2}}\leq 1$.
Conclusion
This paper proposes a consistent and complementary graph-regularized multi-view subspace clustering method to accurately integrate information from multiple views for clustering. By introducing the consistent graph regularizer and the complementary graph regularizer, the graph information of multi-view data is taken into account, and both the consensus and complementary information of multiple views are fully exploited for clustering. An elaborate optimization algorithm is also developed to achieve improved clustering results, and extensive experiments are conducted on six benchmark datasets to illustrate the effectiveness and competitiveness of the proposed GRMSC method in comparison to several state-of-the-art multi-view clustering methods.
References
[Ball and Hall 1965] Ball, G. H., and Hall, D. J. 1965. Isodata, a novel method of data analysis and pattern classification. Technical report, Stanford Research Institute, Menlo Park, CA.
[Brbić and Kopriva 2018] Brbić, M., and Kopriva, I. 2018. Multi-view low-rank sparse subspace clustering. Pattern Recognition 73:247–258.
[Cai, Candès, and Shen 2010] Cai, J.-F.; Candès, E. J.; and Shen, Z. 2010. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization 20(4):1956–1982.
[Cao et al. 2015] Cao, X.; Zhang, C.; Fu, H.; Liu, S.; and Zhang, H. 2015. Diversity-induced multi-view subspace clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 586–594.
[Chao, Sun, and Bi 2017] Chao, G.; Sun, S.; and Bi, J. 2017. A survey on multi-view clustering. arXiv preprint arXiv:1712.06246.
[Elhamifar and Vidal 2013] Elhamifar, E., and Vidal, R. 2013. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(11):2765–2781.
[Gao et al. 2015] Gao, H.; Nie, F.; Li, X.; and Huang, H. 2015. Multi-view subspace clustering. In Proceedings of the IEEE International Conference on Computer Vision, 4238–4246.
[Kumar and Daumé 2011] Kumar, A., and Daumé, H. 2011. A co-training approach for multi-view spectral clustering. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), 393–400.
[Kumar, Rai, and Daume 2011] Kumar, A.; Rai, P.; and Daume, H. 2011. Co-regularized multi-view spectral clustering. In Advances in Neural Information Processing Systems, 1413–1421.
[Lin, Liu, and Su 2011] Lin, Z.; Liu, R.; and Su, Z. 2011. Linearized alternating direction method with adaptive penalty for low-rank representation. In Advances in Neural Information Processing Systems, 612–620.
[Liu et al. 2012] Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; and Ma, Y. 2012. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1):171–184.
[Luo et al. 2018] Luo, S.; Zhang, C.; Zhang, W.; and Cao, X. 2018. Consistent and specific multi-view subspace clustering. In Thirty-Second AAAI Conference on Artificial Intelligence.
[Manning, Raghavan, and Schütze 2010] Manning, C.; Raghavan, P.; and Schütze, H. 2010. Introduction to information retrieval. Natural Language Engineering 16(1):100–103.
[Nie et al. 2016] Nie, F.; Li, J.; Li, X.; et al. 2016. Parameter-free auto-weighted multiple graph learning: A framework for multiview clustering and semi-supervised classification. In IJCAI, 1881–1887.
[Tang et al. 2015] Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; and Mei, Q. 2015. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, 1067–1077.
[Tang et al. 2018] Tang, C.; Zhu, X.; Liu, X.; Li, M.; Wang, P.; Zhang, C.; and Wang, L. 2018. Learning joint affinity graph for multi-view subspace clustering. IEEE Transactions on Multimedia.
[Vidal 2011] Vidal, R. 2011. Subspace clustering. IEEE Signal Processing Magazine 28(2):52–68.
[Von Luxburg 2007] Von Luxburg, U. 2007. A tutorial on spectral clustering. Statistics and Computing 17(4):395–416.
[Wang et al. 2017] Wang, X.; Cui, P.; Wang, J.; Pei, J.; Zhu, W.; and Yang, S. 2017. Community preserving network embedding. In Thirty-First AAAI Conference on Artificial Intelligence.
[Wang, Yang, and Liu 2019] Wang, H.; Yang, Y.; and Liu, B. 2019. GMC: Graph-based multi-view clustering. IEEE Transactions on Knowledge and Data Engineering.
[Xia et al. 2014] Xia, R.; Pan, Y.; Du, L.; and Yin, J. 2014. Robust multi-view spectral clustering via low-rank and sparse decomposition. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
[Xie et al. 2018] Xie, Y.; Tao, D.; Zhang, W.; Liu, Y.; Zhang, L.; and Qu, Y. 2018. On unifying multi-view self-representations for clustering by tensor multi-rank minimization. International Journal of Computer Vision 126(11):1157–1179.
[Xu, Tao, and Xu 2013] Xu, C.; Tao, D.; and Xu, C. 2013. A survey on multi-view learning. arXiv preprint arXiv:1304.5634.
[Zhai et al. 2019] Zhai, L.; Zhu, J.; Zheng, Q.; Pang, S.; Li, Z.; and Wang, J. 2019. Multi-view spectral clustering via partial sum minimisation of singular values. Electronics Letters 55(6):314–316.
[Zhan et al. 2018a] Zhan, K.; Nie, F.; Wang, J.; and Yang, Y. 2018a. Multiview consensus graph clustering. IEEE Transactions on Image Processing 28(3):1261–1270.
[Zhan et al. 2018b] Zhan, K.; Niu, C.; Chen, C.; Nie, F.; Zhang, C.; and Yang, Y. 2018b. Graph structure fusion for multiview clustering. IEEE Transactions on Knowledge and Data Engineering.
[Zhang et al. 2015] Zhang, C.; Fu, H.; Liu, S.; Liu, G.; and Cao, X. 2015. Low-rank tensor constrained multiview subspace clustering. In Proceedings of the IEEE International Conference on Computer Vision, 1582–1590.
[Zhang et al. 2017] Zhang, C.; Hu, Q.; Fu, H.; Zhu, P.; and Cao, X. 2017. Latent multi-view subspace clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4279–4287.
[Zhang et al. 2018] Zhang, C.; Fu, H.; Hu, Q.; Cao, X.; Xie, Y.; Tao, D.; and Xu, D. 2018. Generalized latent multi-view subspace clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[Zhou et al. 2019] Zhou, T.; Zhang, C.; Peng, X.; Bhaskar, H.; and Yang, J. 2019. Dual shared-specific multiview subspace clustering. IEEE Transactions on Cybernetics.
[Zhou 2012] Zhou, Z.-H. 2012. Ensemble methods: foundations and algorithms. Chapman and Hall/CRC.
Condition on Ramond-Ramond fluxes for factorization
of worldsheet scattering in anti-de Sitter space
Linus Wulff
wulff@physics.muni.cz
Department of Theoretical Physics and Astrophysics, Masaryk University, 611 37 Brno, Czech Republic
Abstract
Factorization of scattering is the hallmark of integrable 1+1 dimensional quantum field theories. For factorization of scattering to be possible the set of masses and momenta must be conserved in any two-to-two scattering process. We use this fact to constrain the form of the Ramond-Ramond fluxes for integrable supergravity anti-de Sitter backgrounds by analysing tree-level scattering of two AdS bosons into two fermions on the worldsheet of a BMN string. We find a condition which can be efficiently used to rule out integrability of AdS strings and therefore of the corresponding AdS/CFT dualities, as we demonstrate for some simple examples.
pacs: 02.30.Ik,11.25.Tq
I Introduction
A key to understanding and checking precisely the AdS/CFT correspondence Maldacena:1997re , which relates string theory in $(d+1)$-dimensional anti-de Sitter (AdS) backgrounds to conformal field theories in $d$ dimensions, has been the discovery of integrability on both sides of the correspondence. This was originally found for the superstring on $AdS_{5}\times S^{5}$ Bena:2003wd and for its dual $\mathcal{N}=4$ super Yang-Mills theory in four dimensions Minahan:2002ve ; see the reviews Beisert:2010jr ; Bombardelli:2016rwb . In particular this has allowed for computing the spectrum of the quantum theory in the large $N$ limit exactly Bombardelli:2009ns ; Gromov:2009bc ; Arutyunov:2009ur ; Gromov:2013pga .
Some other AdS/CFT examples have been found which also possess an integrable structure. An interesting question is whether there are more such examples out there to be found. In order to begin to tackle this question we will derive constraints on the supergravity AdS background where the string propagates which are needed for integrability. There are many ways to do this, e.g. Basu:2011di ; Stepanchuk:2012xi ; Chervonyi:2013eja ; Giataganas:2013dha . Here we will follow an approach similar to Wulff:2017hzy . The idea is to expand around a classical string solution where one has a notion of a worldsheet S-matrix. This S-matrix is required to be of factorized form, i.e. to reduce to a sequence of two-to-two scattering events, in the case of an integrable theory and this places very strong constraints on its form Zamolodchikov:1978xm . In particular factorization of the S-matrix requires the set of masses and momenta to be conserved in any two-to-two scattering process. Here we will expand around the BMN string and compute scattering of two worldsheet bosons, coming from the transverse fluctuations in AdS, into two worldsheet fermions of equal mass. The reason we want to include fermions is that in many examples we are interested in, e.g. symmetric spaces, the bosonic string sigma model is integrable. We expect that integrability is lost once fermions are included unless the background is supersymmetric Wulff:2017hzy . The mass of the AdS bosons is 1 in suitable units while the mass spectrum of the fermions is determined by the fluxes. Factorization then implies that either the fermion mass is also 1 or the scattering amplitude must vanish.
As we will see this puts strong constraints on the fluxes of the background (we will simplify things by assuming the NSNS flux does not contribute, when it does it is already strongly constrained at the bosonic level). The constraint we find on the RR fluxes is given in (25) with the RR fluxes encoded in two constant matrices $M$ and $N$ via (13) and (3) or (14) and (4). The matrix $M$ determines the fermion mass spectrum while $N$ determines the relevant Yukawa couplings. This condition does not depend on any of the particle momenta and arises by taking a limit of large center-of-mass energy.
We first recall the superstring action and its near-BMN expansion and gauge fixing. Then we compute the amplitude for scattering of two identical AdS bosons into two fermions of equal mass and derive the constraint on the RR fluxes. Finally we apply this constraint to rule out integrability for some symmetric space backgrounds.
II String action and near-BMN expansion
Our starting point is the Green-Schwarz superstring Lagrangian Cvetic:1999zs ; Wulff:2013kga . Here $\gamma^{ij}=\sqrt{-h}h^{ij}$, with $h_{ij}$ the worldsheet metric of signature $(-,+)$, $\varepsilon^{01}=1$ and $\theta\Gamma^{a}\mathcal{D}\theta=\theta^{\alpha}\mathcal{C}_{\alpha\beta}(\Gamma^{a})^{\beta}{}_{\gamma}(\mathcal{D}\theta)^{\gamma}$. When working in conformal gauge one also has to include the Fradkin-Tseytlin counter-term $\Phi R^{(2)}$ Fradkin:1984pq ; Fradkin:1985ys , where $\Phi$ is the dilaton superfield, whose expansion to quadratic order in $\theta$ can be found in Wulff:2013kga , see Wulff:2016tju . Here we will work at tree-level and in light-cone gauge so this term will not be relevant. The Lagrangian reads
$$\mathcal{L}=-\tfrac{T}{2}e_{i}{}^{a}e_{j}{}^{b}(\gamma^{ij}\eta_{ab}-\varepsilon^{ij}B_{ab})-iTe_{i}{}^{a}\,\theta\Gamma_{a}(\gamma^{ij}-\varepsilon^{ij}\Gamma_{11})\mathcal{D}_{j}\theta+\mathcal{O}(\theta^{4})\,,$$
(1)
where the derivative operator is the same that appears in the Killing spinor equation, namely
$$\mathcal{D}=d-\tfrac{1}{4}\omega^{ab}\Gamma_{ab}+\tfrac{1}{8}e^{a}(H_{abc}\Gamma^{bc}\Gamma_{11}+\mathcal{S}\Gamma_{a})\,.$$
(2)
The $\theta^{4}$-terms are also known Wulff:2013kga but they will not be needed here. The bosonic fields appearing here are the pull-backs to the worldsheet of type II supergravity fields – the vielbeins $e^{a}$ ($a=0,\ldots,9$) and spin connection $\omega^{ab}$, the NSNS two-form $B_{ab}$ and its field strength $H=dB$, and the RR field strengths encoded in the bispinor $\mathcal{S}$. We follow the conventions of Wulff:2013kga and the action we have written is for the type IIA superstring with $\theta$ a 32-component Majorana spinor and
$$\mathcal{S}=e^{\phi}(F^{(0)}+\tfrac{1}{2}F^{(2)}_{ab}\Gamma^{ab}\Gamma_{11}+\tfrac{1}{4!}F^{(4)}_{abcd}\Gamma^{abcd})\,,$$
(3)
in terms of the dilaton $\phi$ and RR field strengths. The action for the type IIB superstring is obtained by the replacements $\Gamma^{a}\rightarrow\gamma^{a}$, $\Gamma_{11}\rightarrow\sigma^{3}$ and
$$\mathcal{S}=-e^{\phi}(F^{(1)}_{a}i\sigma^{2}\gamma^{a}+\tfrac{1}{3!}F^{(3)}_{abc}\sigma^{1}\gamma^{abc}+\tfrac{1}{2\cdot 5!}F^{(5)}_{abcde}i\sigma^{2}\gamma^{abcde})\,,$$
(4)
where $\theta$ now consists of two 16-component Majorana-Weyl spinors of the same chirality and $\gamma^{a}$ are 16-component gamma matrices while the Pauli matrices mix the two spinors. For more details see the appendix of Wulff:2013kga .
We are interested in backgrounds of the form $AdS_{n}\times M_{10-n}$ and we take the AdS metric to have a convenient form for light-cone gauge fixing
$$ds^{2}_{AdS}=R^{2}\left(-\left(\frac{1+\tfrac{1}{4}z_{m}^{2}}{1-\tfrac{1}{4}z_{m}^{2}}\right)^{2}dt^{2}+\frac{dz_{m}^{2}}{(1-\tfrac{1}{4}z_{m}^{2})^{2}}\right)\,,$$
(5)
with spin connection
$$\omega^{0m}=-\tfrac{1}{2}z^{m}(R^{-1}e^{0}+dt)\,,\qquad\omega^{mn}=R^{-1}z^{[m}e^{n]}\,,$$
(6)
where $R$ is the $AdS$ radius and $z_{m}$ ($m=1,\ldots,n-1$) are the transverse AdS coordinates. We assume that $M_{10-n}$ has a $U(1)$ isometry (it does not need to be compact) generated by a geodesic so that the metric can be written
$$ds^{2}_{M}=G_{m^{\prime}n^{\prime}}dy^{m^{\prime}}dy^{n^{\prime}}+G_{m^{\prime}}dy^{m^{\prime}}dx^{9}+G\,dx^{9}dx^{9}\,,$$
(7)
where $y^{m^{\prime}}$ ($m^{\prime}=n,\ldots,8$) are the transverse coordinates of $M_{10-n}$ and $G_{m^{\prime}n^{\prime}}$, $G_{m^{\prime}}$ and $G$ are functions of these satisfying $G(0)=R^{2}$ and $G_{m^{\prime}}(0)=\partial_{m^{\prime}}G(0)=0$ while $x^{9}$ is the coordinate of the $U(1)$ isometry (suitably normalized). The condition that the linear term in $G(y)$ be absent is needed for the isometry to be a geodesic. The other two conditions can be arranged by rescaling and shifting $x^{9}$.
We also assume that the NSNS three-form $H$ has no legs in the 0, 1 or 9-directions and that $\mathcal{S}$, encoding the RR fluxes, is independent of $t$, $z_{1}$ and $x^{9}$ and respects the “boost invariance” in the 1-direction, i.e. $[\Gamma^{01},\mathcal{S}]=0$. The assumptions involving the 1-direction are not necessary but they will simplify the analysis.
These conditions guarantee that there exists a BMN solution Berenstein:2002jq of the string equations of motion taking the form
$$x^{+}=\tfrac{1}{2}(x^{0}+x^{9})=\tau\,,$$
(8)
with $\tau$ the worldsheet time-coordinate. We expand the string Lagrangian (1) around this solution fixing so-called uniform light-cone gauge
$$x^{+}=\tau\,,\qquad\frac{\partial\mathcal{L}}{\partial\dot{x}^{-}}=-2g\,,\qquad\frac{\partial\mathcal{L}}{\partial x^{\prime-}}=0\,,$$
(9)
where we have defined the dimensionless coupling $g=TR^{2}$. The last two conditions on the momentum density conjugate to $x^{-}$ remove the two degrees of freedom of $\gamma^{ij}$. The Virasoro constraints remove the degrees of freedom associated to $x^{-}$. The kappa gauge invariance of the fermions is fixed by the corresponding condition
$$\Gamma^{+}\theta=0\quad\Leftrightarrow\quad\theta=P_{+}\theta\,,\quad P_{\pm}=\tfrac{1}{2}(1\pm\Gamma^{09})\,,$$
(10)
where $\Gamma^{\pm}=\tfrac{1}{2}(\Gamma^{0}\pm\Gamma^{9})$.
Since we will be interested here only in tree-level $z_{1}z_{1}\rightarrow\theta\theta$ scattering we will only keep the terms which can contribute to this. Our assumption that $H$ has no legs in the $0,1$ or 9-directions implies that (up to total derivatives) there cannot be any cubic couplings of the form $yz_{1}z_{1}$ coming from the $B$-field. Therefore the only contributions to $z_{1}z_{1}\rightarrow\theta\theta$ scattering at tree-level come from terms of the form $z_{1}\theta\theta$ and $z_{1}z_{1}\theta\theta$. Setting all the bosons except $z_{1}$ to zero the gauge fixing conditions in (9) lead to $\gamma^{ij}=\eta^{ij}+\hat{\gamma}^{ij}$ with 222Note that the kappa gauge-fixing (10) implies that there are no $dx^{-}\theta\theta$-terms only $dx^{-}z_{1}^{2}\theta\theta$-terms, which cannot contribute at the order we are interested in.
$$\hat{\gamma}^{00}=\hat{\gamma}^{11}=\tfrac{1}{2}z_{1}^{2}+\ldots\,,\qquad\hat{\gamma}^{01}=0+\ldots\,,$$
(11)
where the ellipsis denotes terms which cannot contribute to the order we are interested in. Using this in (1) and noting that the conditions on $H$ guarantee that it does not contribute while the spin connection also drops out one finds the Lagrangian
$$\displaystyle\mathcal{L}=$$
$$\displaystyle{}\tfrac{1}{2}\partial_{+}z_{1}\partial_{-}z_{1}-\tfrac{1}{2}z_{1}^{2}-\tfrac{i}{2}\theta_{+}\Gamma^{-}\partial_{+}\theta_{+}-\tfrac{i}{2}\theta_{-}\Gamma^{-}\partial_{-}\theta_{-}$$
$$\displaystyle{}-\theta_{+}\Gamma^{01}M\theta_{-}-\tfrac{i}{2\sqrt{g}}(\partial_{+}z_{1}-\partial_{-}z_{1})\,\theta_{+}\Gamma^{0}N\Gamma^{1}\theta_{-}$$
$$\displaystyle{}+\tfrac{i}{8g}z_{1}^{2}\big(\theta_{+}\Gamma^{-}\partial_{-}\theta_{+}+\theta_{-}\Gamma^{-}\partial_{+}\theta_{-}\big)$$
$$\displaystyle{}+\tfrac{1}{4g}\partial_{+}z_{1}\partial_{-}z_{1}\,\theta_{+}\Gamma^{01}M\theta_{-}+\ldots$$
(12)
Here we have rescaled the fields as $z_{1}\rightarrow g^{-1/2}z_{1}$, $\theta\rightarrow\frac{1}{2}R^{1/2}g^{-1/2}\theta$. We have also defined $\partial_{\pm}=\partial_{0}\pm\partial_{1}$ and $\theta_{\pm}=\frac{1}{2}(1\pm\Gamma_{11})\theta$ and used our assumption that $\Gamma^{01}$ commutes with $\mathcal{S}$ to simplify the cubic terms. Furthermore we have split $\mathcal{S}$ into matrices $M$ and $N$ which commute with $\Gamma^{0}$, $\Gamma^{9}$ and $\Gamma_{11}$ defined by
$$P_{+}\mathcal{S}P_{-}|=\tfrac{4i}{R}\Gamma^{01}MP_{-}\,,\quad P_{+}\mathcal{S}P_{+}|=\tfrac{4}{R}NP_{+}\,.$$
(13)
The vertical bar means that $\mathcal{S}$ is evaluated setting $z_{m}=y_{m^{\prime}}=0$ so that $M$ and $N$ are constant matrices. It follows from the anti-symmetry of $\mathcal{S}$ that they satisfy $M^{T}=\Gamma^{1}M\Gamma^{1}$ and $N^{T}=-\Gamma^{1}N\Gamma^{1}$. We have written things so that the type IIB case is obtained by replacing $\Gamma^{a}\rightarrow\gamma^{a}$ and $M\rightarrow iM$ and $N\rightarrow iN$ in (12) where now $M$ and $N$ are defined as
$$P_{+}\mathcal{S}P_{-}|=\tfrac{4}{R}\gamma^{01}MP_{-}\,,\quad P_{+}\mathcal{S}P_{+}|=\tfrac{4i}{R}NP_{+}$$
(14)
and anti-commute with $\gamma^{0}$, $\gamma^{9}$ and $\sigma^{3}$. They satisfy $M^{T}=-\gamma^{1}M\gamma^{1}$ and $N^{T}=\gamma^{1}N\gamma^{1}$.
Looking at the Lagrangian (12) we see that the $AdS$ boson $z_{1}$ has mass 1 while the fermion mass spectrum is determined by the matrix $M$. The matrix $N$ encodes the Yukawa-type couplings.
We will now consider $z_{1}z_{1}\rightarrow\theta\theta$ scattering. Unless the fermions also have mass 1 this amplitude must vanish in an integrable theory to be compatible with factorized scattering.
III Tree-level $\mathbf{zz\rightarrow\theta\theta}$ scattering
We find it convenient to work directly with the 8-component spinors $\theta_{\pm}$. The propagator takes the form
$$\langle\theta_{\pm}\theta_{\pm}\rangle=\left(\begin{array}{cc}k_{-}&-\Gamma^{1}M\\ -\Gamma^{1}M&k_{+}\end{array}\right)\frac{i\Gamma^{+}}{k_{+}k_{-}-M^{T}M}\,.$$
(15)
External state fermions come with factors of
$$u^{i}_{\pm}(k)=\left(\begin{array}{c}\sqrt{k_{-}}u^{i}\\ -m_{i}^{-1}\sqrt{k_{+}}\Gamma^{1}Mu^{i}\end{array}\right)\,,$$
(16)
solving the free Dirac equation. Here $u^{i}$, with $i,j=1,\ldots,8$ labelling the eight physical fermions, is a constant (commuting) spinor which we take to be a suitably normalized eigenstate of the mass-squared operator
$$M^{T}Mu^{i}=m_{i}^{2}u^{i}\,,\quad u^{i}\Gamma^{-}u^{j}=\delta^{ij}\,.$$
(17)
In the case of type IIB we have $i\gamma^{1}$ in place of $\Gamma^{1}$ in the above expressions. Throughout the remainder of this section the type IIB expressions are obtained simply by setting $\Gamma^{-}\rightarrow\gamma^{-}$.
We are now ready to compute the amplitude for $z_{1}z_{1}\rightarrow\theta\theta$ scattering. The contribution from the quartic interaction terms in (12) is the simplest. It takes the form
$$\displaystyle\mathcal{A}_{4}^{ij}=$$
$$\displaystyle\tfrac{i}{4g}\delta^{ij}\Big[(p_{3+}-p_{4+})\sqrt{p_{3+}p_{4+}}-(p_{3-}-p_{4-})\sqrt{p_{3-}p_{4-}}$$
$$\displaystyle{}+m_{i}(p_{1+}p_{2-}+p_{1-}p_{2+})\big(\sqrt{p_{3-}p_{4+}}-\sqrt{p_{3+}p_{4-}}\big)\Big]\,.$$
(18)
Using the on-shell conditions $p_{1-}=1/p_{1+}$, $p_{2-}=1/p_{2+}$, $p_{3-}=m_{i}^{2}/p_{3+}$, $p_{4-}=m_{j}^{2}/p_{4+}$ and energy-momentum conservation, which implies for example (for $m_{i}=m_{j}$) that $m_{i}^{2}p_{1+}p_{2+}=p_{3+}p_{4+}$, this becomes
$$\mathcal{A}_{4}^{ij}=\tfrac{im_{i}}{4g}\delta^{ij}(p_{3+}-p_{4+})(1-p_{1+}^{2})(1-p_{2+}^{2})(p_{1+}p_{2+})^{-3/2}\,.$$
(19)
We could of course express the amplitude in terms of only the incoming momenta $p_{1}$ and $p_{2}$ (for $m_{i}=m_{j}=m$ we have for example $p_{3+}-p_{4+}=\sqrt{(p_{1+}+p_{2+})^{2}-4m^{2}p_{1+}p_{2+}}$), but we have kept the factor of $(p_{3+}-p_{4+})$ to avoid complicating the expression too much.
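The reduction from (18) to (19) can be verified numerically for sample kinematics. The sketch below is our own check (not code from the paper): since $p_{-}=\text{mass}^{2}/p_{+}$ on shell, conservation of $p_{+}$ and $p_{-}$ fixes the outgoing momenta as roots of a quadratic, after which (18) and (19) agree to machine precision. The sample values of $m$, $p_{1+}$, $p_{2+}$ are arbitrary illustrative choices.

```python
import math

# Sample kinematics (arbitrary choices; AdS boson mass is 1)
m = 0.7                   # common fermion mass
p1, p2 = 1.3, 0.4         # incoming light-cone momenta p_{1+}, p_{2+}

# Conservation of p_+ and p_- (with p_- = mass^2/p_+ on shell) makes
# p_{3+}, p_{4+} the roots of t^2 - (p1 + p2) t + m^2 p1 p2 = 0.
S, P = p1 + p2, m**2 * p1 * p2
disc = math.sqrt(S**2 - 4 * P)
p3, p4 = (S + disc) / 2, (S - disc) / 2

p1m, p2m = 1 / p1, 1 / p2            # p_{1-}, p_{2-} (mass-1 bosons)
p3m, p4m = m**2 / p3, m**2 / p4      # p_{3-}, p_{4-} (mass-m fermions)

# Eq. (18) with the overall i/(4g) delta^{ij} prefactor stripped:
A18 = ((p3 - p4) * math.sqrt(p3 * p4)
       - (p3m - p4m) * math.sqrt(p3m * p4m)
       + m * (p1 * p2m + p1m * p2)
       * (math.sqrt(p3m * p4) - math.sqrt(p3 * p4m)))

# Eq. (19) with the same prefactor stripped:
A19 = m * (p3 - p4) * (1 - p1**2) * (1 - p2**2) * (p1 * p2)**-1.5

assert abs(A18 - A19) < 1e-9
```

The key identity used in the simplification, $m^{2}p_{1+}p_{2+}=p_{3+}p_{4+}$, is automatic here since it is the product of the roots.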
Now we turn to the contribution from the cubic interaction terms in (12). This contribution is somewhat more complicated and takes the form
$$\displaystyle\mathcal{A}_{3}^{ij}=$$
$$\displaystyle\tfrac{i}{4g}(p_{1+}-p_{1-})(p_{2+}-p_{2-})$$
$$\displaystyle\times\Big[u^{i}\Gamma^{-}\mathcal{M}^{\prime}u^{j}-u^{j}\Gamma^{-}\mathcal{M}^{\prime}(p_{3}\leftrightarrow p_{4})u^{i}\Big]\,,$$
(20)
where
$$\displaystyle\mathcal{M}^{\prime}=$$
$$\displaystyle NM\frac{m_{j}^{-1}\sqrt{p_{3-}p_{4+}}}{-(p_{1}-p_{3})^{2}-M^{T}M}NM$$
(21)
$$\displaystyle{}-M^{T}N^{T}\frac{m_{i}^{-1}\sqrt{p_{3+}p_{4-}}}{-(p_{1}-p_{3})^{2}-M^{T}M}M^{T}N^{T}$$
$$\displaystyle{}+N\frac{(p_{1+}-p_{3+})\sqrt{p_{3-}p_{4-}}}{-(p_{1}-p_{3})^{2}-MM^{T}}N^{T}$$
$$\displaystyle{}-M^{T}N^{T}\frac{(m_{i}m_{j})^{-1}(p_{1-}-p_{3-})\sqrt{p_{3+}p_{4+}}}{-(p_{1}-p_{3})^{2}-M^{T}M}NM\,.$$
For simplicity we will now restrict ourselves to the case of equal fermion masses $m_{i}=m_{j}=m$. Using the on-shell conditions and energy-momentum conservation we find
$$\displaystyle\mathcal{A}_{3}^{ij}=$$
$$\displaystyle\tfrac{i}{4mg}(1-p_{1+}^{2})(1-p_{2+}^{2})(p_{1+}p_{2+})^{-3/2}$$
(22)
$$\displaystyle\times\Big[(p_{3+}-p_{4+})u^{(i}\Gamma^{-}\mathcal{M}_{s}(x)u^{j)}$$
$$\displaystyle\qquad\qquad{}+(p_{1+}-p_{2+})u^{[i}\Gamma^{-}\mathcal{M}_{a}(x)u^{j]}\Big]\,,$$
where
$$\displaystyle\mathcal{M}_{s}=$$
$$\displaystyle NM\frac{x+2(1-m^{2}+M^{T}M)}{(1-m^{2}+M^{T}M)^{2}+xM^{T}M}NM$$
$$\displaystyle{}+M^{T}N^{T}\frac{1-m^{2}+M^{T}M}{(1-m^{2}+M^{T}M)^{2}+xM^{T}M}NM$$
$$\displaystyle{}+m^{2}N\frac{1-m^{2}+MM^{T}}{(1-m^{2}+MM^{T})^{2}+xMM^{T}}N^{T}\,,$$
(23)
$$\displaystyle\mathcal{M}_{a}=$$
$$\displaystyle NM\frac{x+4(1-m^{2})}{(1-m^{2}+M^{T}M)^{2}+xM^{T}M}NM$$
(24)
and we have introduced the convenient variable $x=-(p_{1}+p_{2})^{2}-4=(p_{1+}-p_{2+})^{2}/(p_{1+}p_{2+})$, the center-of-mass energy minus 4. The total amplitude for $z_{1}z_{1}\rightarrow\theta\theta$ scattering, where the fermions have equal mass, is then given by the sum of (22) and (19). Unless the mass of the fermions is also 1, this amplitude has to vanish for factorized scattering to be possible. We can extract a relatively simple condition on the RR fluxes for this to happen by focusing on the high-energy limit $x\rightarrow\infty$: setting $p_{1+}=1+\epsilon$, $p_{2+}=\epsilon$, $p_{3+}=1+(2-m^{2})\epsilon$ and $p_{4+}=m^{2}\epsilon$, so that $x=\epsilon^{-1}$, and taking $\epsilon\rightarrow 0$. In this limit we find that the condition becomes
$$\boxed{m^{2}\delta^{ij}+u^{i}\Gamma^{-}NM\frac{1}{M^{T}M}NMu^{j}=0\,.}$$
(25)
Note that this condition involves only the RR fluxes, through $M$ and $N$ defined in (13), and constant matrices. In the next section we will see that this condition is in fact quite strong and can be used to rule out integrability for many backgrounds.
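The high-energy kinematics used above can be cross-checked symbolically. The following sketch (ours, using sympy) solves the conservation constraints exactly, with $p_{-}=\text{mass}^{2}/p_{+}$ on shell, and confirms that the quoted parametrization is the leading-order expansion of the exact roots, and that $x$ blows up like $1/\epsilon$.

```python
import sympy as sp

# With p1+ = 1 + eps and p2+ = eps, conservation of p_+ and p_- forces the
# outgoing momenta to be roots of t^2 - (p1+ + p2+) t + m^2 p1+ p2+ = 0.
eps, m = sp.symbols('epsilon m', positive=True)
p1p, p2p = 1 + eps, eps
S = p1p + p2p
P = m**2 * p1p * p2p
disc = sp.sqrt(S**2 - 4 * P)
p3p = (S + disc) / 2
p4p = (S - disc) / 2

# Leading-order expansions reproduce the parametrization in the text:
assert sp.expand(sp.series(p3p, eps, 0, 2).removeO() - (1 + (2 - m**2) * eps)) == 0
assert sp.expand(sp.series(p4p, eps, 0, 2).removeO() - m**2 * eps) == 0

# and x = (p1+ - p2+)^2/(p1+ p2+) indeed behaves as 1/eps:
x = (p1p - p2p)**2 / (p1p * p2p)
assert sp.limit(eps * x, eps, 0) == 1
```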
First let us caution that in calculating the amplitude we have ignored possible IR-divergences, which can appear when there are massless fermions in the spectrum. In cases with massless fermions one therefore has to be more careful, and it is possible that (25) gets corrected. Luckily, cases with massless fermions are very special, and in fact one can often avoid dealing with massless fermions altogether by picking a suitable BMN geodesic, as we will see below.
IV Examples
$\mathbf{AdS_{3}\times S^{3}\times S^{3}\times S^{1}}$. It is instructive to see how a background which is known to be integrable manages to satisfy (25). An interesting and quite non-trivial example is to take $AdS_{3}\times S^{3}\times S^{3}\times S^{1}$ but pick a non-standard (non-supersymmetric) BMN geodesic which involves an angle on both $S^{3}$’s [19]. We take the geodesic given by $x^{+}=\frac{1}{2}(x^{0}+ax^{5}+bx^{8})$ with $a^{2}+b^{2}=1$. For the type IIA solution the RR bispinor takes the form [19]
$$\mathcal{S}=-2\Gamma^{0129}(1-\sqrt{\alpha}\Gamma^{012345}-\sqrt{1-\alpha}\Gamma^{012678})\,,$$
(26)
where the parameter $\alpha$ controls the relative size of the two $S^{3}$’s and we have set the AdS radius to unity $R=1$. We define rotated directions $\Gamma^{5^{\prime}}=b\Gamma^{5}-a\Gamma^{8}$, $\Gamma^{8^{\prime}}=a\Gamma^{5}+b\Gamma^{8}$ so that $\Gamma^{\pm}=\Gamma^{0}\pm\Gamma^{8^{\prime}}$. From the definition in (13) we then find
$$\displaystyle M=$$
$$\displaystyle\tfrac{i}{2}\Gamma^{29}(1+a\sqrt{\alpha}\Gamma^{1234}+b\sqrt{1-\alpha}\Gamma^{1267})\,,$$
(27)
$$\displaystyle N=$$
$$\displaystyle-\tfrac{1}{2}\Gamma^{345^{\prime}9}(b\sqrt{\alpha}+a\sqrt{1-\alpha}\Gamma^{3467})\,.$$
(28)
From the fact that $M^{T}M=\frac{1}{4}(1+a\sqrt{\alpha}\Gamma^{1234}+b\sqrt{1-\alpha}\Gamma^{1267})^{2}$ it follows that the mass spectrum consists of four different masses $m_{\pm\pm}=\frac{1}{2}(1\pm a\sqrt{\alpha}\pm b\sqrt{1-\alpha})$ with eigenvectors $u^{\pm\pm}=\frac{1}{2}(1\pm\Gamma^{1234})\frac{1}{2}(1\pm\Gamma^{1267})u^{\pm\pm}$. Note that generically the masses are non-zero and also not equal to 1, the mass of the AdS bosons. Using the fact that $a^{2}+b^{2}=1$ it is not hard to prove the nice identity $N^{2}u^{\pm\pm}=m_{\pm\pm}(1-m_{\pm\pm})u^{\pm\pm}$. Using this identity and the fact that $M$ and $N$ anti-commute one finds that the LHS of (25) becomes
$$m_{++}^{2}-m_{++}m_{--}u^{++}\Gamma^{-}M\frac{1}{M^{T}M}Mu^{++}=0\,.$$
(29)
Similar calculations show that the remaining components of this condition are indeed also satisfied. This is consistent with the classical integrability of the string in this background [20, 21].
$\mathbf{AdS_{4}\times S^{3}\times S^{3}}$. This example is one of the symmetric space solutions found in [22]. It is a non-supersymmetric solution of massive type IIA and the RR bispinor takes the form
$$\mathcal{S}=\sqrt{2}R^{-1}(1-\sqrt{5}\Gamma^{0123})\,.$$
(30)
The definition (13) implies $M=i\frac{\sqrt{10}}{4}\Gamma^{23}$ and $N=\frac{\sqrt{2}}{4}$. It follows that all fermions have $m^{2}=5/8$. The LHS of the condition (25) becomes $3/4$ which does not vanish and therefore integrability is ruled out for this background. Note however that the bosonic string is integrable since we are dealing with a symmetric space and there is no NSNS flux.
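The arithmetic behind this exclusion is simple enough to check in a few lines. The sketch below is our own; it uses that $M=i\frac{\sqrt{10}}{4}\Gamma^{23}$ commutes with $\Gamma^{1}$, so the relation $M^{T}=\Gamma^{1}M\Gamma^{1}$ gives $M^{T}=M$ and hence $M(M^{T}M)^{-1}M=1$, while $N=\frac{\sqrt{2}}{4}$ is a pure number.

```python
from fractions import Fraction

MTM = Fraction(10, 16)        # M^T M = (sqrt(10)/4)^2 = 5/8: fermion mass^2
N2 = Fraction(2, 16)          # N^2 = (sqrt(2)/4)^2 = 1/8
assert MTM == Fraction(5, 8)  # all fermions have m^2 = 5/8

# LHS of (25): m^2 delta^{ij} + u Gamma^- N M (M^T M)^{-1} N M u
# = (m^2 + N^2) delta^{ij}, using u Gamma^- u = delta^{ij}.
lhs = MTM + N2
assert lhs == Fraction(3, 4)  # nonzero, so integrability is ruled out
```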
$\mathbf{AdS_{3}\times S^{3}\times S^{2}\times H^{2}}$. This example is another of the symmetric space solutions of [22], see also [23]. It is a non-supersymmetric type IIB solution with RR bispinor
$$\mathcal{S}=\tfrac{i}{2}\sigma^{2}\big(f_{3}(\gamma^{01289}-\gamma^{34567})+f_{4}(\gamma^{01267}-\gamma^{34589})\big)\,,$$
(31)
where the AdS radius is given by $R^{-2}=(f_{3}^{2}+f_{4}^{2})/8$. To avoid massless fermions it is convenient to take the BMN geodesic given by $x^{+}=\frac{1}{2}(x^{0}+x^{9^{\prime}})$, where we have made a rotation in the (79)-plane to $x^{7^{\prime}}=\frac{1}{\sqrt{2}}(x^{7}-x^{9})$, $x^{9^{\prime}}=\frac{1}{\sqrt{2}}(x^{7}+x^{9})$. From the definition in (14) we find
$$\displaystyle M=$$
$$\displaystyle\tfrac{iR}{8\sqrt{2}}\sigma^{2}\gamma^{27^{\prime}}(f_{3}\gamma^{8}-f_{4}\gamma^{6})(1-\gamma^{1234567^{\prime}8})\,,$$
(32)
$$\displaystyle N=$$
$$\displaystyle\tfrac{R}{8\sqrt{2}}\sigma^{2}\gamma^{12}(-f_{3}\gamma^{8}-f_{4}\gamma^{6})(1+\gamma^{1234567^{\prime}8})\,.$$
(33)
From the first expression we find $M^{T}M=\frac{1}{8}(1-\gamma^{1234567^{\prime}8})$ and noting that $\gamma^{1234567^{\prime}8}u^{i}=\gamma^{0123456789}u^{i}=-u^{i}$ we find that all fermions have $m=\frac{1}{2}$. The LHS of (25) becomes
$$\frac{1}{4}+\frac{R^{4}}{256}u\gamma^{-}(f_{3}^{2}-f_{4}^{2}+2f_{3}f_{4}\gamma^{68})^{2}u=\frac{1}{2}-\frac{2f_{3}^{2}f_{4}^{2}}{(f_{3}^{2}+f_{4}^{2})^{2}}\,.$$
(34)
For this to vanish we must have $f_{4}=\pm f_{3}$ but in this case the background degenerates to $AdS_{3}\times S^{3}\times T^{4}$. Therefore integrability is ruled out for this background. Since the RR flux for the backgrounds $AdS_{3}\times S^{5}\times H^{2}$ and $AdS_{3}\times SLAG_{3}\times H^{2}$ is of the same form, but with $f_{4}=0$, integrability is ruled out also for these backgrounds.
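The last step can be confirmed with a one-line symbolic check of our own: the right-hand side of (34) is a perfect square over the flux parameters, so it vanishes precisely when $f_{4}=\pm f_{3}$.

```python
import sympy as sp

# The value of the LHS of (25) for this background, Eq. (34), equals
# (f3^2 - f4^2)^2 / (2 (f3^2 + f4^2)^2), hence vanishes only for f4 = +-f3.
f3, f4 = sp.symbols('f3 f4', positive=True)
lhs = sp.Rational(1, 2) - 2 * f3**2 * f4**2 / (f3**2 + f4**2)**2
target = (f3**2 - f4**2)**2 / (2 * (f3**2 + f4**2)**2)
assert sp.simplify(lhs - target) == 0
```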
V Conclusions
We have used the fact that factorization of worldsheet scattering requires many two-to-two amplitudes to vanish, namely those for which the sets of initial and final masses and momenta differ, to constrain the RR fluxes of integrable AdS supergravity backgrounds. In particular we have found the constraint (25), with $M,N$ determined from the RR fluxes by (13) and (3), or the corresponding type IIB expressions. We have also seen how this condition can be used to rule out integrability for some of the symmetric space backgrounds of [22]. In a forthcoming publication we will extend this to rule out integrability for the remaining non-supersymmetric backgrounds of [22].
We hope to also apply this condition, or a suitable modification, to more complicated backgrounds which are not of symmetric space form. Fortunately there is a vast literature on (supersymmetric) AdS backgrounds to exploit. In this way we expect to be able to constrain severely the space of integrable AdS/CFT-pairs.
References
(1)
J. M. Maldacena,
Int.J.Theor.Phys. 38, 1113 (1999), hep-th/9711200.
(2)
I. Bena, J. Polchinski, and R. Roiban,
Phys.Rev. D69, 046002 (2004), hep-th/0305116.
(3)
J. Minahan and K. Zarembo,
JHEP 0303, 013 (2003), hep-th/0212208.
(4)
N. Beisert et al.,
Lett. Math. Phys. 99, 3 (2012), 1012.3982.
(5)
D. Bombardelli et al.,
J. Phys. A49, 320301 (2016), 1606.02945.
(6)
D. Bombardelli, D. Fioravanti, and R. Tateo,
J.Phys. A42, 375401 (2009), 0902.3930.
(7)
N. Gromov, V. Kazakov, A. Kozak, and P. Vieira,
Lett.Math.Phys. 91, 265 (2010), 0902.4458.
(8)
G. Arutyunov and S. Frolov,
JHEP 0905, 068 (2009), 0903.0141.
(9)
N. Gromov, V. Kazakov, S. Leurent, and D. Volin,
Phys.Rev.Lett. 112, 011602 (2014), 1305.1939.
(10)
P. Basu and L. A. Pando Zayas,
Phys. Lett. B700, 243 (2011), 1103.4107.
(11)
A. Stepanchuk and A. A. Tseytlin,
J. Phys. A46, 125401 (2013), 1211.3727.
(12)
Y. Chervonyi and O. Lunin,
JHEP 02, 061 (2014), 1311.1521.
(13)
D. Giataganas, L. A. Pando Zayas, and K. Zoubos,
JHEP 01, 129 (2014), 1311.3241.
(14)
L. Wulff,
J. Phys. A50, 23LT01 (2017), 1702.08788.
(15)
A. B. Zamolodchikov and A. B. Zamolodchikov,
Annals Phys. 120, 253 (1979).
(16)
M. Cvetic, H. Lu, C. Pope, and K. Stelle,
Nucl.Phys. B573, 149 (2000), hep-th/9907202.
(17)
L. Wulff,
JHEP 1307, 123 (2013), 1304.6422.
(18)
D. E. Berenstein, J. M. Maldacena, and H. S. Nastase,
JHEP 0204, 013 (2002), hep-th/0202021.
(19)
N. Rughoonauth, P. Sundin, and L. Wulff,
JHEP 07, 159 (2012), 1204.4742.
(20)
A. Babichenko, J. Stefanski, B., and K. Zarembo,
JHEP 1003, 058 (2010), 0912.1723.
(21)
P. Sundin and L. Wulff,
JHEP 10, 109 (2012), 1207.5531.
(22)
L. Wulff,
(2017), 1706.02118.
(23)
J. Figueroa-O’Farrill and N. Hustler,
Class. Quant. Grav. 30, 045008 (2013), 1209.4884.
(24)
E. S. Fradkin and A. A. Tseytlin,
Phys. Lett. 158B, 316 (1985).
(25)
E. S. Fradkin and A. A. Tseytlin,
Nucl. Phys. B261, 1 (1985),
[Erratum: Nucl. Phys.B269,745(1986)].
(26)
L. Wulff and A. A. Tseytlin,
JHEP 06, 174 (2016), 1605.04884. |
Interpolation formula for the reflection coefficient distribution
of absorbing chaotic cavities in the presence of time reversal symmetry
M Martínez-Mares${}^{1}$ and R A Méndez-Sánchez${}^{2}$
${}^{1}$ Departamento de Física, Universidad Autónoma
Metropolitana-Iztapalapa, 09340 México D. F., México
${}^{2}$ Centro de Ciencias Físicas, Universidad Nacional Autónoma de
México, A.P. 48-3, 62210, Cuernavaca, Mor., México
(November 19, 2020)
Abstract
We propose an interpolation formula for the distribution of the
reflection coefficient in the presence of time reversal symmetry
for chaotic cavities with absorption. This is done assuming a
similar functional form as that when time reversal invariance is
absent. The interpolation formula reduces to the analytical
expressions for the strong and weak absorption limits. Our proposal is compared to the quite complicated exact result existing in the literature.
PACS: 73.23.-b, 03.65.Nk, 42.25.Bs, 47.52.+j
1 Introduction
In recent years there has been great interest in the study of
absorption effects on the transport properties of classically chaotic cavities [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] (for a review see Ref. [19]). This is due to the fact that in experiments on microwave cavities [20, 21], elastic resonators [22] and elastic media [23], absorption is always present. Although the external parameters are particularly easy to control, absorption, due to power loss in the volume of the device used in the experiments, is an ingredient that has to be taken into account in the verification of random matrix theory (RMT) predictions.
In a microwave experiment of a ballistic chaotic cavity connected to
a waveguide supporting one propagating mode, Doron
et al [1] studied the effect of absorption on the
$1\times 1$ sub-unitary scattering matrix $S$, parametrized as
$$S=\sqrt{R}\,e^{i\theta},$$
(1)
where $R$ is the reflection coefficient and $\theta$ is twice the
phase shift.
The experimental results were explained by Lewenkopf
et al. [2] by simulating the absorption
in terms of $N_{p}$
equivalent “parasitic channels”, not directly accessible to experiment,
each one having an imperfect coupling to the cavity described by the
transmission coefficient $T_{p}$.
A simple model to describe chaotic scattering including absorption
was proposed by Kogan et al. [4]. It describes
the system through a sub-unitary scattering matrix $S$, whose
statistical distribution satisfies a maximum information-entropy
criterion. Unfortunately the model turns out to be valid only in
the strong-absorption limit and for $R\ll 1$.
For the $1\times 1$ $S$-matrix of
Eq. (1), it was shown that in this limit $\theta$ is
uniformly distributed between 0 and $2\pi$, while $R$ satisfies
Rayleigh’s distribution
$$P_{\beta}(R)=\alpha e^{-\alpha R};\qquad R\ll 1,\hbox{ and }\alpha\gg 1,$$
(2)
where $\beta$ denotes the universality class of $S$ introduced by
Dyson [24]: $\beta=1$ when time reversal invariance (TRI) is
present (also called the orthogonal case), $\beta=2$ when TRI
is broken (unitary case) and $\beta=4$ corresponds to the
symplectic case.
Here $\alpha=\gamma\beta/2$, where $\gamma=2\pi/\tau_{a}\Delta$ is the ratio of the mean dwell time inside the cavity, $2\pi/\Delta$ (with $\Delta$ the mean level spacing), to the absorption time $\tau_{a}$. This ratio is a measure of the absorption strength. Eq. (2) is valid for $\gamma\gg 1$ and $R\ll 1$, as we shall see below.
The weak absorption limit ($\gamma\ll 1$) of $P_{\beta}(R)$ was calculated by Beenakker and Brouwer [5] by relating $R$ to the time-delay in a chaotic cavity, which is distributed according to the Laguerre ensemble. The distribution of the reflection coefficient in this case is
$$P_{\beta}(R)=\frac{\alpha^{1+\beta/2}}{\Gamma(1+\beta/2)}\frac{e^{-\alpha/(1-R)}}{(1-R)^{2+\beta/2}};\qquad\alpha\ll 1.$$
(3)
In the whole range of $\gamma$, $P_{\beta}(R)$
was explicitly obtained for $\beta=2$ [5]:
$$P_{2}(R)=\frac{e^{-\gamma/(1-R)}}{(1-R)^{3}}\left[\gamma(e^{\gamma}-1)+(1+\gamma-e^{\gamma})(1-R)\right],$$
(4)
and for $\beta=4$ more recently [13].
Eq. (4) reduces to Eq. (3) for small
absorption ($\gamma\ll 1$) while for strong absorption it becomes
$$P_{2}(R)=\frac{\gamma\,e^{-\gamma R/(1-R)}}{(1-R)^{3}};\qquad\gamma\gg 1.$$
(5)
Notice that $P_{2}(R)$ approaches zero for $R$ close to one. The Rayleigh distribution, Eq. (2), is therefore only reproduced in the range of a few standard deviations, i.e., for $R\stackrel{{\scriptstyle<}}{{\sim}}\gamma^{-1}$. This can be seen in Fig. 1(a), where we compare the distribution $P_{2}(R)$ given by Eqs. (2) and (5) with the exact result of Eq. (4) for $\gamma=20$. As can be seen, the result obtained from the time-delay agrees with the exact result, but the Rayleigh distribution is only valid for $R\ll 1$.
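This comparison is straightforward to reproduce numerically. The sketch below (our own, using scipy) checks that the exact $\beta=2$ distribution, Eq. (4), is properly normalized on $[0,1]$ and that the strong-absorption form, Eq. (5), tracks it to within a few percent for $\gamma=20$.

```python
import numpy as np
from scipy.integrate import quad

gamma = 20.0  # absorption strength, as in Fig. 1(a)

def P2_exact(R):   # Eq. (4)
    return np.exp(-gamma / (1 - R)) / (1 - R)**3 * (
        gamma * np.expm1(gamma) + (1 + gamma - np.exp(gamma)) * (1 - R))

def P2_strong(R):  # Eq. (5), the strong-absorption limit
    return gamma * np.exp(-gamma * R / (1 - R)) / (1 - R)**3

norm, _ = quad(P2_exact, 0, 1)
assert abs(norm - 1) < 1e-6   # Eq. (4) is normalized

# Eqs. (4) and (5) differ by a relative amount ~ (1-R)/gamma here:
R = np.linspace(0.05, 0.9, 18)
assert np.allclose(P2_exact(R), P2_strong(R), rtol=0.06)
```

The relative difference between (4) and (5) is approximately $(1-R)/\gamma$, which is why the agreement improves both with increasing $\gamma$ and as $R\rightarrow 1$.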
Since the majority of the experiments with absorption are performed with TRI ($\beta=1$), it is very important to have the result for this case. Due to the lack of an exact expression at that time, Savin and Sommers [8] proposed an approximate distribution $P_{\beta=1}(R)$ by replacing $\gamma$ by $\gamma\beta/2$ in Eq. (4). However, this is valid only for intermediate and strong absorption. Another formula was proposed in Ref. [16] as an interpolation between the strong and weak absorption limits, assuming an expression quite similar to that of the $\beta=2$ case (see also Ref. [13]).
More recently [17], a formula for the integrated probability distribution of $x=(1+R)/(1-R)$, $W(x)=\int_{x}^{\infty}P_{0}^{(\beta=1)}(x^{\prime})\,dx^{\prime}$, was obtained. The distribution $P_{\beta=1}(R)=\frac{2}{(1-R)^{2}}P_{0}^{(\beta=1)}\left(\frac{1+R}{1-R}\right)$ then yields a quite complicated formula.
Given the importance of having an “easy to use” formula for the time reversal case, our purpose is to propose a better interpolation formula for $P_{\beta}(R)$ when $\beta=1$. In the next section we do this following the same procedure as in Ref. [16]. We then verify that our proposal reproduces both limits of strong and weak absorption. In Sec. 5 we compare our interpolation formula with the exact result of Ref. [17]. A brief conclusion follows.
2 An interpolation formula for $\beta=1$
From Eqs. (2) and (3) we note that $\gamma$
enters in $P_{\beta}(R)$ always in the combination
$\gamma\beta/2$. We take this into account and combine it with the
general form of $P_{2}(R)$ and the interpolation proposed in
Ref. [16]. For $\beta=1$ we then propose the following formula
for the $R$-distribution
$$P_{1}(R)=C_{1}(\alpha)\frac{e^{-\alpha/(1-R)}}{(1-R)^{5/2}}\left[\alpha^{1/2}(e^{\alpha}-1)+(1+\alpha-e^{\alpha})\,{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2},1;R\right)\frac{1-R}{2}\right],$$
(6)
where $\alpha=\gamma/2$, ${}_{2}F_{1}$ is a hyper-geometric
function [25], and $C_{1}(\alpha)$ is a normalization
constant
$$C_{1}(\alpha)=\frac{\alpha}{(e^{\alpha}-1)\Gamma(3/2,\alpha)+\alpha^{1/2}(1+\alpha-e^{\alpha})f(\alpha)/2}$$
(7)
where
$$f(\alpha)=\int_{\alpha}^{\infty}\frac{e^{-x}}{x^{1/2}}\,{}_{2}F_{1}\left(\frac{1}{2},\frac{1}{2},1;1-\frac{\alpha}{x}\right)dx$$
(8)
and $\Gamma(a,x)$ is the incomplete $\Gamma$-function
$$\Gamma(a,x)=\int_{x}^{\infty}e^{-t}t^{a-1}dt.$$
(9)
In the next sections, we verify that in the limits of strong and
weak absorption we reproduce Eqs. (2) and (3).
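The claim that $C_{1}(\alpha)$ normalizes the proposal on $[0,1]$ can also be confirmed numerically. The sketch below is our own implementation of Eqs. (6)-(9) using scipy's `hyp2f1`, `gammaincc` and `quad`; it is not the authors' code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1, gamma, gammaincc

def f(alpha):
    # Eq. (8); the integrand decays like e^{-x} (times a log), so the
    # improper integral converges.
    integrand = lambda x: np.exp(-x) / np.sqrt(x) * hyp2f1(0.5, 0.5, 1.0, 1.0 - alpha / x)
    return quad(integrand, alpha, np.inf)[0]

def Gamma_upper(a, x):
    # upper incomplete Gamma-function, Eq. (9)
    return gamma(a) * gammaincc(a, x)

def C1(alpha):
    # normalization constant, Eq. (7)
    return alpha / (np.expm1(alpha) * Gamma_upper(1.5, alpha)
                    + np.sqrt(alpha) * (1 + alpha - np.exp(alpha)) * f(alpha) / 2)

def p1_raw(R, alpha):
    # Eq. (6) without the prefactor C_1(alpha)
    return (np.exp(-alpha / (1 - R)) / (1 - R)**2.5
            * (np.sqrt(alpha) * np.expm1(alpha)
               + (1 + alpha - np.exp(alpha)) * hyp2f1(0.5, 0.5, 1.0, R) * (1 - R) / 2))

# P_1(R) integrates to 1 for weak, intermediate and strong absorption:
for alpha in (0.2, 1.0, 5.0):
    norm = C1(alpha) * quad(p1_raw, 0, 1, args=(alpha,))[0]
    assert abs(norm - 1) < 1e-5
```

The normalization holds exactly, as can be seen analytically via the change of variables $x=\alpha/(1-R)$, which maps the $R$-integral of (6) onto the combination of $\Gamma(3/2,\alpha)$ and $f(\alpha)$ appearing in (7).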
3 Strong absorption limit
In the strong absorption limit, $\alpha\rightarrow\infty$,
$\Gamma(3/2,\alpha)\rightarrow\alpha^{1/2}e^{-\alpha}$, and
$f(\alpha)\rightarrow\alpha^{-1/2}e^{-\alpha}$. Then,
$$\lim_{\alpha\rightarrow\infty}C_{1}(\alpha)=\frac{\alpha e^{\alpha}}{(e^{\alpha}-1)\alpha^{1/2}+(1+\alpha-e^{\alpha})/2}\simeq\alpha^{1/2}.$$
(10)
Therefore, the $R$-distribution in this limit reduces to
$$P_{1}(R)\simeq\frac{\alpha\,e^{-\alpha R/(1-R)}}{(1-R)^{5/2}}\qquad\alpha\gg 1,$$
(11)
which is the equivalent of Eq. (5) but now for $\beta=1$. As for the $\beta=2$ symmetry, it is consistent with the fact that $P_{1}(R)$ approaches zero as $R$ tends to one. It also reproduces Eq. (2) in the range of a few standard deviations ($R\stackrel{{\scriptstyle<}}{{\sim}}\gamma^{-1}\ll 1$), as can be seen in Fig. 1(b).
4 Weak absorption limit
For weak absorption, $\alpha\rightarrow 0$, the incomplete $\Gamma$-function in $C_{1}(\alpha)$ reduces to the complete $\Gamma$-function [see Eq. (9)]. Then $P_{1}(R)$ can be written as
$$\displaystyle P_{1}(R)$$
$$\displaystyle\simeq\frac{\alpha}{(\alpha+\alpha^{2}/2+\cdots)\Gamma(3/2)-(\alpha^{5/2}/2+\cdots)f(0)/2}$$
(12)
$$\displaystyle\times\frac{e^{-\alpha/(1-R)}}{(1-R)^{5/2}}\big[\alpha^{3/2}+\alpha^{5/2}/2+\cdots$$
$$\displaystyle{}-(\alpha^{2}/2+\alpha^{3}/6+\cdots)\,{}_{2}F_{1}(1/2,1/2,1;R)(1-R)/2\big].$$
By keeping the dominant term for small $\alpha$, Eq. (3)
is reproduced.
5 Comparison with the exact result
In Fig. 2 we compare our interpolation formula,
Eq. (6), with the exact result of Ref. [17].
For the same parameters used in that reference we observe excellent agreement.
In Fig. 3 we plot the difference between
the exact and the interpolation formulas for the same values
of $\gamma$ as in Fig. 2. The error
of the interpolation formula is less than 4%.
6 Conclusions
We have introduced a new interpolation formula for the reflection coefficient distribution $P_{\beta}(R)$ in the presence of time reversal symmetry for chaotic cavities with absorption. The interpolation formula reduces to the analytical expressions in the strong and weak absorption limits. Our proposal provides an “easy to use” formula that differs by only a few percent from the exact, but quite complicated, result of Ref. [17].
We can summarize the results for both symmetries ($\beta=1$, 2)
as follows
$$P_{\beta}(R)=C_{\beta}(\alpha)\frac{e^{-\alpha/(1-R)}}{(1-R)^{2+\beta/2}}\left[\alpha^{\beta/2}(e^{\alpha}-1)+(1+\alpha-e^{\alpha})\,{}_{2}F_{1}\left(\frac{\beta}{2},\frac{\beta}{2},1;R\right)\frac{\beta(1-R)^{\beta}}{2}\right],$$
(13)
where $C_{\beta}(\alpha)$ is a normalization constant that depends on
$\alpha=\gamma\beta/2$. This interpolation formula is exact
for $\beta=2$ and yields the correct limits of strong and weak
absorption.
The authors thank DGAPA-UNAM for financial support through project IN118805. We thank D. V. Savin for providing us with the data for the exact results used in Figs. 2 and 3, and J. Flores and P. A. Mello for useful comments.
References
[1]
Doron E, Smilansky U and Frenkel A 1990
Phys. Rev. Lett. 65 3072
[2]
Lewenkopf C H, Müller A and Doron E 1992
Phys. Rev. A 45 2635
[3]
Brouwer P W and Beenakker C W J 1997
Phys. Rev. B 55 4695
[4]
Kogan E, Mello P A and Liqun He 2000
Phys. Rev. E 61 R17
[5]
Beenakker C W J and Brouwer P W 2001
Physica E 9 463
[6]
Schanze H, Alves E R P, Lewenkopf C H and Stöckmann H-J 2001
Phys. Rev. E 64 065201(R)
[7]
Schäfer R, Gorin T, Seligman T H, and Stöckmann H-J 2003
J. Phys. A: Math. Gen. 36 3289
[8]
Savin D V and Sommers H-J 2003
Phys. Rev. E 68 036211
[9]
Méndez-Sánchez R A, Kuhl U, Barth M, Lewenkopf C H and
Stöckmann H-J 2003
Phys. Rev. Lett. 91 174102
[10]
Fyodorov Y V 2003
JETP Lett. 78 250
[11]
Fyodorov Y V and Ossipov A 2004
Phys. Rev. Lett. 92 084103
[12]
Savin D V and Sommers H-J 2004
Phys. Rev. E 69 035201(R)
[13]
Fyodorov Y V and Savin D V 2004
JETP Lett. 80 725
[14]
Hemmady S, Zheng X, Ott E, Antonsen T M and Anlage S M 2005
Phys. Rev. Lett. 94 014102
[15]
Schanze H, Stöckmann H-J, Martínez-Mares M and Lewenkopf C H,
Phys. Rev. E 71 016223
[16]
Kuhl U, Martínez-Mares M, Méndez-Sánchez R A and Stöckmann H-J 2005
Phys. Rev. Lett. 94 144101
[17]
Savin D V, Sommers H-J and Fyodorov Y V 2005 Preprint cond-mat/0502359
[18]
Martínez-Mares M and Mello P A 2005
Phys. Rev. E 72 026224
[19]
Fyodorov Y V, Savin D V and Sommers H-J 2005 Preprint cond-mat/0507016
[20]
Alt H, Bäcker A, Dembowski C, Gräf H-D, Hofferbert R, Rehfeld H and
Richter A 1998
Phys. Rev. E 58 1737
[21]
Barth M, Kuhl U and Stöckmann H-J 1999
Phys. Rev. Lett. 82 2026
[22]
Schaadt K and Kudrolli A 1999
Phys. Rev. E 60 R3479
[23]
Morales A, Gutiérrez L and Flores J 2001
Am. J. Phys. 69 517
[24]
Dyson F J 1962
J. Math. Phys. 3 1199
[25]
Abramowitz M and Stegun I A 1972
Handbook of Mathematical Functions (New York: Dover) chapter 15 |
August, 1992 LBL-32938
COMPLEMENTARITY OF RESONANT AND NONRESONANT STRONG WW
SCATTERING AT SSC AND LHC
(This work was supported by the Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098.)
(To be published in Proc. XXVI Intl. Conf. on High Energy Physics,
Dallas, Texas, August 1992)
Michael S. Chanowitz
Lawrence Berkeley Laboratory
Berkeley CA 94720
Abstract
Signals and backgrounds for strong $WW$ scattering at the SSC and LHC are considered. Complementarity of resonant signals in the $I=1$ $WZ$ channel and nonresonant signals in the $I=2$ $W^{+}W^{+}$ channel is illustrated using a chiral lagrangian with a $J=1$ “$\rho$” resonance. Results are presented for purely leptonic final states in the $W^{\pm}Z$, $W^{+}W^{+}+W^{-}W^{-}$, and $ZZ$ channels.
INTRODUCTION
High energy physics today is in an extraordinary situation. The Standard Model
(SM) is reliable but incomplete. For its completion it predicts 1) that a
fifth force exists, 2) the mass range of the associated quanta, and 3) neither
the precise mass nor the interaction strength but the relation between them.
These properties are sufficient to guide the search. Like any prediction in
science, this one too may fail. If so we will make an equally important
discovery: a deeper theory hidden until now behind the SM, which will emerge by
the same experimental program that we will follow to find the fifth force if it
does exist. In this paper I assume the SM is correct. This presentation is
necessarily brief; a more complete review and bibliography will
appear elsewhere.${}^{1}$
The Higgs mechanism is the feature of the SM that requires a fifth force and
implies its general properties. The Higgs mechanism requires a new sector of
quanta with dynamics
specified by an unknown Lagrangian I will call ${\cal L}_{5}$, that
spontaneously breaks $SU(2)_{L}\times U(1)_{Y}$,
giving rise to Goldstone bosons $w^{+},w^{-},z$ that
become the longitudinal gauge bosons $W^{+}_{L},W^{-}_{L},Z_{L}$. By measuring $W_{L}W_{L}$ scattering at
$E\gg M_{W}$, we are effectively measuring $ww$ scattering
(i.e., the equivalence theorem) and are therefore probing the dynamics of
${\cal L}_{5}$.
Let $M_{5}$ be the typical mass scale of the quanta of ${\cal L}_{5}$.
Then the $W_{L}W_{L}$ scattering amplitudes
are determined by low energy theorems,${}^{2,3}$ e.g., for the $J=0$ partial wave
$$a_{0}(W_{L}^{+}W_{L}^{-}\to Z_{L}Z_{L})={1\over\rho}{s\over 16\pi v^{2}}$$
(1)
(with $v=0.247$ TeV) in the energy domain
$$M^{2}_{W}\ll s\ll\hbox{minimum}\{M^{2}_{5},(4\pi v)^{2}\}$$
(2)
which may or may not exist in nature, depending on whether $M_{5}\gg M_{W}$.
Partial wave unitarity requires the linear growth of $|a_{0}|$ to be damped before it exceeds unity at a “cutoff” scale $\Lambda_{5}\leq 4\sqrt{\pi}\,v=1.8$ TeV. The cutoff is enforced by the Higgs mechanism with $\Lambda_{5}\simeq M_{5}$, where more precisely $M_{5}$ is the mass scale of the quanta of ${\cal L}_{5}$ that make up the $SU(2)_{L}\times U(1)_{Y}$ breaking condensate that engenders $M_{W}$.
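The cutoff scale quoted here is a one-line computation; as a quick check of our own, the low energy theorem amplitude (1) with $\rho=1$ reaches the unitarity bound $|a_{0}|=1$ at $\sqrt{s}=4\sqrt{\pi}\,v$, numerically about 1.75 TeV for $v=0.247$ TeV.

```python
import math

v = 0.247                                    # TeV
sqrt_s_cutoff = 4 * math.sqrt(math.pi) * v   # where a_0 = s/(16 pi v^2) = 1
assert abs(sqrt_s_cutoff - 1.75) < 0.01      # about 1.8 TeV

a0 = sqrt_s_cutoff**2 / (16 * math.pi * v**2)
assert abs(a0 - 1) < 1e-12                   # unitarity bound saturated
```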
If $M_{5}\ll 1.8$ TeV then ${\cal L}_{5}$ is weak and its quanta include one or more Higgs bosons, with $M_{5}$ equal to the average Higgs boson mass (weighted by contribution to $v$). If $M_{5}\geq 1$ TeV then ${\cal L}_{5}$ is strong, there is strong $WW$ scattering for $s>1$ TeV${}^{2}$, and rather than Higgs bosons we expect a complex spectrum of quanta. Resonance formation then occurs in attractive channels at the energy scale of unitarity saturation, $a_{J}(M^{2})\sim\hbox{O}(1)$, implying $M\sim 1$ - 3 TeV.
We detect a strong ${\cal L}_{5}$ by observing strong $WW$ resonances and/or strong nonresonant $WW$ scattering. Fortunately the two approaches are complementary: if the resonances are very heavy and difficult to observe, there will be large signals in nonresonant channels.
COMPLEMENTARITY
If ${\cal L}_{5}\hbox{ }$ contains no light quanta $\ll 1$ TeV such as Higgs bosons or pseudo
Goldstone bosons, then in the absence of strong $WW$ resonances the leading
partial wave amplitudes, $a_{IJ}=a_{00},a_{11},a_{20}$, will smoothly
saturate unitarity. Strong scattering cross sections are then estimated by
extrapolating the low energy theorems. (The index $I$ refers to the diagonal
$SU(2)_{L+R}$ subgroup that is necessarily${}^{3}$
a good symmetry of the Goldstone boson sector at low energy because
$\rho\simeq 1$.)
Models illustrating the smooth approach to the unitarity limit include the
“linear” model${}^{2}$, the K-matrix unitarization model${}^{4}$,
scaled $\pi\pi\hbox{ }$data in nonresonant
channels${}^{2,4,5}$, and effective Lagrangians incorporating dimension 6
operators and/or one loop corrections${}^{6}$. These models provide
large signals in nonresonant channels but are conservative
in that they apply when
more dramatic signals from light quanta or strong resonances are absent.
It is instructive to compare the linear model with $\pi\pi\hbox{ }$scattering data.${}^{7}$
The model agrees well in the
$I,J=0,0$ channel, probably a fortuitous result of the
attractive dynamics in that channel. The model underestimates $|a_{11}|$
and overestimates $|a_{20}|$, both because of the $\rho(770)$:
s-channel $\rho\hbox{ }$exchange enhances $|a_{11}|$ while
$t$- and $u$-channel exchanges suppress $|a_{20}|$, implying a complementary
relationship between the two channels.
The effects of $\rho\hbox{ }$exchange can be studied using a chiral Lagrangian with
chiral invariant $\rho\pi\pi\hbox{ }$interaction.${}^{8}$
Figure 1 shows that the model fits $\pi\pi\hbox{ }$data for $|a_{11}|$ and $|a_{20}|$ very
well.
Figure 1. The $\rho$ chiral Lagrangian model compared with $\pi\pi$ scattering data for $|a_{11}|$ and $\delta_{20}$ (W. Kilgore).
We will
use the model to explore the effect of an analogous “$\rho$” resonance on $W_{L}W_{L}\hbox{ }$scattering.
Consider for instance minimal technicolor with one techniquark
doublet. (Nonminimal models have lighter resonances which are more
easily observed.) For $N_{TC}=4$,
large $N$ scaling implies $(m_{\rho},\Gamma_{\rho})=(1.78,0.33)$ TeV, while the heaviest $\rho_{TC}\hbox{ }$, for $N_{TC}=2$, has
$(m_{\rho},\Gamma_{\rho})=(2.52,0.92)$ TeV. Though unlikely according to
popular
prejudice, strong $WW$ resonances could be even heavier.
To explore that possibility I also consider a “$\rho$” of mass 4 TeV, with a width of 0.98 TeV determined assuming a “$\rho$”$ww$ coupling equal to $f_{\rho\pi\pi}$ from hadronic physics.
Figure 2. $|a_{11}|$ and $\delta_{20}$ for the chiral invariant $\rho$ exchange model with $m_{\rho}=1.78$ (dashes), $m_{\rho}=2.52$ (long dashes) and $m_{\rho}=4.0$ (dot-dash). The nonresonant $K$-LET model is indicated by the solid line.
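The quoted masses and widths are consistent with standard large-$N$ scaling from the hadronic $\rho(770)$. The following sketch is a reconstruction under stated assumptions, not a calculation spelled out in the text: it takes $m_{\rho}=770$ MeV, $\Gamma_{\rho}=150$ MeV, $m_{\pi}=140$ MeV, $f_{\pi}=92.4$ MeV, the two-body width $\Gamma=f^{2}p^{3}/(6\pi m^{2})$, and massless Goldstone bosons in the techni-rho decay.

```python
import math

# extract f_rho_pi_pi^2 from the hadronic rho(770) -> pi pi width
m_rho, gam_rho, m_pi = 0.770, 0.150, 0.140    # GeV (assumed inputs)
p = math.sqrt(m_rho**2 / 4 - m_pi**2)         # pion momentum in rho decay
f2 = 6 * math.pi * m_rho**2 * gam_rho / p**3  # f_rho_pi_pi^2, roughly 36

v, f_pi = 247.0, 0.0924                       # GeV (assumed inputs)

def techni_rho(n_tc):
    """Large-N scaling: m -> m (v/f_pi) sqrt(3/N), f^2 -> f^2 (3/N)."""
    m = m_rho * (v / f_pi) * math.sqrt(3.0 / n_tc)  # GeV
    gam = f2 * (3.0 / n_tc) * m / (48 * math.pi)    # GeV, massless w, z
    return m / 1e3, gam / 1e3                       # TeV

m4, g4 = techni_rho(4)   # close to the quoted (1.78, 0.33) TeV
m2, g2 = techni_rho(2)   # close to the quoted (2.52, 0.92) TeV

# "rho"(4.0): width with the hadronic coupling f_rho_pi_pi used directly
g40 = f2 * 4.0 / (48 * math.pi)   # close to the quoted 0.98 TeV
```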
To ensure elastic unitarity the real parts are computed with
$\Gamma_{\rho}=0$ and the K-matrix prescription is then used to
compute
the imaginary parts.${}^{9}$ For resonance dominance this prescription is
equivalent to the usual broad-resonance Breit-Wigner prescription, in which
the term $m_{\rho}\Gamma_{\rho}$ in the B-W denominator
is replaced by $\sqrt{s}\Gamma_{\rho}(\sqrt{s})$.
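The single-channel K-matrix prescription referred to here maps a real (tree-level) partial wave $K$ into $a=K/(1-iK)$, which satisfies elastic unitarity $\hbox{Im}\,a=|a|^{2}$ exactly. A minimal sketch:

```python
def k_matrix(K):
    """Unitarize a real tree-level partial wave K via a = K / (1 - iK)."""
    return K / (1 - 1j * K)

for K in [0.1, 0.5, 2.0, 10.0]:
    a = k_matrix(K)
    assert abs(a.imag - abs(a) ** 2) < 1e-12  # elastic unitarity Im a = |a|^2
    assert abs(a) <= 1.0                      # partial wave bound
```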
Figure 2 displays $|a_{11}|$ and $|a_{20}|$ for the
three “$\rho$” cases and for the nonresonant K-matrix unitarization of the low
energy theorem amplitudes (K-LET). The 4 TeV “$\rho$” is nearly
indistinguishable from the nonresonant K-LET model below 3 TeV.
The complementarity of the two
channels is evident: the $\rho_{TC}(1.78)$ provides a spectacular
signal in $a_{11}$ but suppresses the signal in $a_{20}$, while the
“$\rho$”(4.0) provides a minimal signal in $a_{11}$ but allows a large
signal to emerge in $a_{20}$.
The sign of the interference between the LET amplitude and resonance
exchange contributions depends on the resonance quantum numbers, but it is
generally true that
the amplitude approaches a smooth unitarization of the LET (e.g., the K-LET)
as $M_{5}\rightarrow\infty$. This is the limit in which the “conservative” nonresonant models
apply. A heavy “$\rho$” is a worst case example since “$\rho$” exchange
interferes destructively with the $a_{20}$ threshold amplitude
so
that the limiting behavior is approached from below as the “$\rho$” mass is
increased. Resonances that interfere constructively in the channel would
provide
bigger signals.
SIGNALS
In this section I will briefly review signals and backgrounds
at the SSC and LHC, in the $W^{\pm}Z\hbox{ }$, $W^{+}W^{+}+W^{-}W^{-}\hbox{ }$, and $ZZ$ final states. Signals are
computed using the ET-EWA approximation (i.e., the combined equivalence
theorem-effective W approximation) with HMRSB structure
functions evaluated at $Q^{2}=M_{W}^{2}$. Only
final states with both gauge bosons decaying leptonically are considered.
Except for the
central jet veto${}^{4}$ (CJV) considered in the $W^{+}W^{+}\hbox{ }$channel, the cuts apply only
to leptonic variables.
My criterion for a significant signal is
$$\sigma^{\uparrow}=S/\sqrt{B}\geq 5$$
(3)
$$\sigma^{\downarrow}=S/\sqrt{S+B}\geq 3,$$
(4)
respectively the standard deviations for the background to fluctuate
up to a false signal or for the signal plus background to
fluctuate down to the level of the background alone. The criterion is corrected
below for the acceptance in each channel. In addition $S\geq B$ is required
because of the theoretical uncertainty in the backgrounds,
expected to be known to within $\leq\pm 30\%$ after “calibration” studies
at the SSC and LHC.
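The two criteria are simple to evaluate; a short Python helper, checked against the first SSC row of table 1 ($S=30$, $B=9.3$, quoted there as 10, 4.8 after rounding):

```python
import math

def significance(S, B):
    """sigma_up = S/sqrt(B) and sigma_down = S/sqrt(S+B), eqs. (3)-(4)."""
    return S / math.sqrt(B), S / math.sqrt(S + B)

up, down = significance(30, 9.3)
assert abs(up - 9.8) < 0.1     # quoted as 10 after rounding
assert abs(down - 4.8) < 0.05
```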
“$\rho$” $\rightarrow WZ$
Consider “$\rho$” $\rightarrow WZ\rightarrow l\nu+\overline{l}l$
with $l=e,\mu$ ($BR=0.014$).
Production mechanisms are $\overline{q}q$ annihilation${}^{10}$
and $WZ$ fusion${}^{3}$, the latter computed using the chiral Lagrangian with
contributions from $a_{11}$ and $a_{20}$. Elastic
unitarity is imposed with the K-matrix prescription described above. The
dominant
background (and the only one considered here) is $\overline{q}q\rightarrow WZ$.
A simple cut on the WZ invariant mass and the gauge boson rapidities ($y_{W,Z}\leq 1.5$) suffices to demonstrate the observability of the signal. (The $WZ$
mass is measurable only up to a twofold ambiguity; a more realistic and
effective procedure is to cut on the charged lepton transverse momenta.)
The acceptance estimate${}^{11}$ is $0.85\times 0.95\simeq 0.8$ so the
significance criterion for the uncorrected cross sections is
$\sigma^{\uparrow}\geq 5.5$ and $\sigma^{\downarrow}\geq 3.3$. The results
are shown in figure 3 and table 1.
Table 1. Yields of $\rho^{\pm}$ signal and background events
per 10 fb${}^{-1}$ at the SSC and LHC.
Cuts are $|y_{W}|<1.5$, $|y_{Z}|<1.5$, and $M_{WZ}$ as indicated.

  $\sqrt{s}$   $M_{\rho}$ (TeV)   $M_{WZ}$ (TeV)    S     B     $\sigma^{\uparrow},\sigma^{\downarrow}$
  40 TeV       1.78               $>1.0$            30    9.3   10, 4.8
               2.52               $>1.2$            15    5.3   6.3, 3.3
               4.0                $>1.0$            10    5.3   4.4, 2.6
  16 TeV       1.78               $>1.0$            5.5   3.2   3.0, 1.9
               2.52               $>1.2$            1.7   1.6   1.4, 0.9
               4.0                $>1.6$            0.5   0.5   0.7, 0.5
With 10 fb${}^{-1}$ at the SSC
the $\rho_{TC}(1.78)$ signal far exceeds the criterion, the $\rho_{TC}(2.52)$
signal just meets it, and the “$\rho$”(4.0) requires 17
fb${}^{-1}$. To just meet the criterion at the LHC, 33, 160, and 570 fb${}^{-1}$ are
needed for the three cases respectively.
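These luminosities follow from noting that, if $S$ and $B$ both scale linearly with integrated luminosity, both significances grow as $\sqrt{L}$. A sketch assuming the acceptance-corrected thresholds $\sigma^{\uparrow}\geq 5.5$ and $\sigma^{\downarrow}\geq 3.3$ quoted above:

```python
import math

def lumi_needed(s10, b10, up_min=5.5, dn_min=3.3, l0=10.0):
    """Luminosity (fb^-1) needed to reach both acceptance-corrected
    criteria, assuming S and B scale linearly with luminosity."""
    up = s10 / math.sqrt(b10)
    dn = s10 / math.sqrt(s10 + b10)
    return l0 * max((up_min / up) ** 2, (dn_min / dn) ** 2)

# SSC "rho"(4.0) row of table 1: S = 10, B = 5.3 per 10 fb^-1
assert 15 < lumi_needed(10, 5.3) < 18    # consistent with the quoted 17 fb^-1
# LHC rho_TC(1.78) row: S = 5.5, B = 3.2
assert 30 < lumi_needed(5.5, 3.2) < 35   # consistent with the quoted 33 fb^-1
```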
$W^{+}W^{+}+W^{-}W^{-}$
The $W^{+}W^{+}\hbox{ }$channel has the largest leptonic branching ratio,
$\simeq 0.05$ to $e$’s and/or $\mu$’s, and no $\overline{q}q\hbox{ }$annihilation background.
The signature is striking: two isolated, high $p_{T}$, like-sign leptons in an
event with no other significant activity (jet or lepton) in the central region.
Figure 3. $WZ$ cross section at SSC and LHC with $|y_{W,Z}|<1.5$ for $\rho(1.78)$ (solid), $\rho(2.52)$ (dashes), and $\overline{q}q$ background (dot-dash).
Table 2. Cumulative effect of cuts on linear model signal and background for $W^{+}W^{+}$ only at the SSC. Entries are events per 10 fb${}^{-1}$.

  Cut                          Signal   Bkgd.
  $|y_{l}|<2$                  71       560
  $p_{Tl}>0.1$ TeV             44       49
  $\cos\phi_{ll}<-0.975$       32       9.1
  CJV                          27       2.4

The dominant backgrounds are the ${\rm O}(\alpha_{W}^{2})$${}^{12}$ and ${\rm O}(\alpha_{W}\alpha_{S})$${}^{13}$ amplitudes for $qq\rightarrow qqWW$. The former is
essentially the
$W^{+}W^{+}\hbox{ }$pair cross section from $SU(2)_{L}\times U(1)_{Y}\hbox{ }$ gauge interactions,
computed using the
standard model with a light Higgs boson, e.g., $m_{H}\leq 0.1$ TeV.
Other backgrounds, from $W^{+}W^{-}$ with
lepton charge mismeasured and from $\overline{t}t\hbox{ }$production, require detector
simulation.
Studies presented in the SDC TDR${}^{11}$ show that they can be controlled.
A powerful set of cuts that efficiently though indirectly exploits the
longitudinal polarization of the signal has emerged from the efforts of three
collaborations.${}^{4,5,14}$ The most useful variables are the
lepton transverse
momentum $p_{Tl}$ and the azimuthal angle between the two leptons
$\phi_{ll}$${}^{14}$.
The CJV${}^{4}$ also effectively exploits the $W$
polarization; since the CJV signal efficiency may be affected by QCD
corrections
I present results with and without it. The truth probably lies closer to
the results with CJV, but the necessary calculations have not been done.
The successive effect of these cuts is illustrated in table 2. Even without the
CJV they reduce the background by $\simeq{\rm O}(10^{2})$ while decreasing the
signal by little more than a factor 2.
Assuming 85% detection efficiency for a single isolated lepton,${}^{11}$
eqs. (3-4) applied to the uncorrected yields become
$\sigma^{\uparrow}>6$ and $\sigma^{\downarrow}>3.5$. Typical results for
the linear,
K-LET, and scaled $\pi\pi$ data models are shown in table 3. In addition
to $y_{l}<2$ the cuts are $p_{Tl}>0.2$ TeV and cos$\phi_{ll}<-0.975$ for the
linear and K-LET models and $p_{Tl}>0.1$ TeV and cos$\phi_{ll}<-0.90$ for the
$\pi\pi$ model. The observability criterion is exceeded by a large margin
at the SSC in all cases but one —
the $\pi\pi$ model without CJV for which the
criterion is just satisfied. At the LHC both the signals and signal:background
ratios
are less favorable, and about 70 fb${}^{-1}$ would be needed just to meet the
minimum criterion for $\sigma^{\downarrow}$.
Results for the chiral invariant $\rho\hbox{ }$exchange model
are given in table 4. The cuts optimize the
signal without CJV. For the SSC they are $p_{Tl}>0.1$ TeV and
cos$\phi_{ll}<-0.925$ for $\rho(1.78)$ and $\rho(2.52)$, and
$p_{Tl}>0.2$ TeV and cos$\phi_{ll}<-0.975$ for $\rho(4.0)$.
Each case meets the minimum criterion with 10 fb${}^{-1}$ except $\rho(1.78)$ without CJV which would
require 17 fb${}^{-1}$ but is readily observable with a big signal in the
$WZ$ channel (table 1). As expected from figure 2, the SSC yields for
$\rho(4.0)$ (table 4) are within 5% of the K-LET yields (table 3).
Comparing with the $WZ$ yields in table 1, we see that 10 fb${}^{-1}$ suffices
to detect the signal for any value of $m_{\rho}$ in at least one of the two
(complementary) channels.
The LHC cuts in table 4 are $p_{Tl}>0.15$ TeV and cos$\phi_{ll}<-0.95$
for all three models. The $\rho(1.78)$ signal would require 160 fb${}^{-1}$ just to
meet the minimum criterion, while the $\rho(4.0)$ signal would require 55
fb${}^{-1}$. With $\simeq$ 100 fb${}^{-1}$ the LHC could meet the
minimum criterion for each model in at least one of the $WZ$ or $W^{+}W^{+}\hbox{ }$channels,${}^{1}$
assuming the relevant measurements can really be carried out at
$10^{34}$ cm${}^{-2}$ s${}^{-1}$ (and with the
efficiencies assumed here). In addition to instrumentation issues, the
$\overline{t}t\hbox{ }$backgrounds that have been studied at
$10^{33}$ cm${}^{-2}$ sec${}^{-1}\hbox{ }$have yet to be simulated at $10^{34}$.
$ZZ$
Very heavy Higgs bosons and strong scattering into the $ZZ$ final state are
best
detected
Table 3. Signal ($S$) and background ($B$) $W^{+}W^{+}+W^{-}W^{-}$ events per
10 fb${}^{-1}$ at SSC and LHC for the indicated models. Cuts are specified
in the text.

  $\sqrt{s}$   Model                  No CJV                             CJV
                          S     B     $\sigma^{\uparrow},\sigma^{\downarrow}$      S     B      $\sigma^{\uparrow},\sigma^{\downarrow}$
  40 TeV       Linear     30    3.5   16, 5.2       26    0.8    29, 5
               K          23    3.5   12, 4.4       20    0.8    23, 4.4
               $\pi\pi$   33    26    6.5, 4.3      27    6.5    11, 4.7
  16 TeV       Linear     2.5   0.5   3.5, 1.4      2.1   0.09   6.9, 1.4
               K          2.0   0.5   2.8, 1.3      1.7   0.09   5.5, 1.3
               $\pi\pi$   5.0   5.4   2.2, 1.6      3.9   1.0    3.9, 1.8
Table 4. Signal ($S$) and background ($B$) $W^{+}W^{+}+W^{-}W^{-}$ events per
10 fb${}^{-1}$ at SSC and LHC for the $\rho$ exchange model. Cuts are specified in the
text.

  $\sqrt{s}$   $M_{\rho}$ (TeV)       No CJV                             CJV
                          S     B     $\sigma^{\uparrow},\sigma^{\downarrow}$      S     B     $\sigma^{\uparrow},\sigma^{\downarrow}$
  40 TeV       1.78       22    23    4.6, 3.3      18    5.7   7.6, 3.7
               2.52       31    23    6.4, 4.2      25    5.7   11, 4.5
               4.0        22    3.5   11, 4.3       20    0.8   21, 4.4
  16 TeV       1.78       1.8   1.5   1.5, 1.0      1.4   0.3   2.8, 1.1
               2.52       2.4   1.5   2.0, 1.2      1.9   0.3   3.7, 1.3
               4.0        3.3   1.5   2.7, 1.5      2.6   0.3   5.1, 1.5
Table 5. Linear model signals and background $ZZ$ events per 10 fb${}^{-1}$
at SSC and LHC for various values of $m_{t}$.
Cuts are $|y_{l}|<2$ and $p_{Tl}>75$ GeV.
For the SSC $M_{TZ}>700$ GeV and for the LHC $M_{TZ}>600$ GeV.

  $\sqrt{s}$   $m_{t}$ (GeV)   Signal ($gg$)   Signal ($WW$)   Bkgd   $\sigma^{\uparrow}$   $\sigma^{\downarrow}$
  40 TeV       100             4.1             17.3            29.4   4.0    3.0
               150             10.1            17.3            30.3   5.0    3.6
               200             16.7            17.3            32.2   6.0    4.2
  16 TeV       100             0.75            1.83            8.98   0.9    0.8
               150             1.72            1.83            9.11   1.2    1.0
               200             2.41            1.83            9.49   1.4    1.2
in the “neutrino” mode, $ZZ\rightarrow l^{+}l^{-}+\overline{\nu}\nu$
with $l=e$ or $\mu$. The net branching ratio from the $ZZ$ initial
state is 0.025, 6 times larger than the $l^{+}l^{-}+l^{+}l^{-}$ final state. The
signature — a high $p_{T}$ $Z$ boson recoiling against missing $p_{T}$ with no
other
significant jet activity in the central rapidity region — is experimentally
clean. Backgrounds from $Z+jets$ and from mismeasurement of the missing
$E_{T}$ have
been carefully studied and found to be controllable at $10^{33}$ cm${}^{-2}$ sec${}^{-1}\hbox{ }$for the SDC.${}^{11}$
For the 1 TeV Standard Model Higgs boson with $m_{t}=150$ GeV, a cut of $y_{l}<2$,
$p_{Tl}>75$ GeV and transverse mass $M_{T}>600$ GeV provides a $14\sigma$
signal
with 96 signal events and 44 background events for 10 fb${}^{-1}$ at the SSC.
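The quoted significance is just eq. (3) applied to these yields:

```python
import math

# 1 TeV SM Higgs, ZZ "neutrino" mode at the SSC: S = 96, B = 44
sigma_up = 96 / math.sqrt(44)
assert round(sigma_up) == 14   # the quoted 14 sigma
```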
If ${\cal L}_{5}\hbox{ }$ is strongly interacting and if a single symmetry breaking condensate
gives mass to both the weak gauge bosons and to the top quark, then the $ZZ$
signal has two components.${}^{15}$ Just as
$WW$ fusion probes the mass scale of the quanta which generate the
condensate that gives mass to $W$ and $Z$, $gg$ fusion via a
$\overline{t}t\hbox{ }$loop probes the quanta which generate the $t$ quark mass. If only one
condensate does both jobs, the $gg$ fusion contribution
enhances the strong scattering signal in the $ZZ$ final state.
This generalizes the two familiar Higgs boson production
mechanisms, $gg\rightarrow H$ and $WW\rightarrow H$,
to dynamical symmetry breaking with strong ${\cal L}_{5}\hbox{ }$.
Results${}^{15}$ are given in table 5.
Backgrounds considered are $\overline{q}q\hbox{ }$annihilation, $gg$ fusion, and the ${\rm O}(\alpha_{W}^{2})$ amplitude for $qq\rightarrow qqZZ$, the latter two
computed in the Standard Model with a light ($\leq 100$ GeV) Higgs boson. The
efficiency correction is offset by the additional contribution from $ZZ\rightarrow l^{+}l^{-}+l^{+}l^{-}$ that is not included in table 5,
so eqs. (3-4) apply directly. For $m_{t}\geq 150$
GeV there are significant signals at the SSC with 10 fb${}^{-1}$ thanks to the
big enhancement from $gg$ fusion.
The LHC signals with 10 fb${}^{-1}$ are not significant.
To enforce $S\geq B$ the $p_{Tl}$ cut must be raised to 200 GeV, and
350 fb${}^{-1}$ are then required to satisfy eqs. (3-4).
E.g., for $m_{t}=150$ GeV the LHC with 350 fb${}^{-1}$ yields 28 signal
and 31 background events, virtually identical to the SSC values in table 5 for
10 $fb^{-1}$. In addition the $Z+jets$ background
requires study at such high luminosity.
With luminosity above 10${}^{33}$ at the SSC it
becomes possible to probe for multiple
condensates. E.g., if $m_{t}$ is generated by a light Higgs boson while
$M_{W}$ is generated dynamically${}^{1,15}$ then only $WW$
fusion contributes to the $ZZ$ signal. For $m_{t}=150$ GeV and 50 fb${}^{-1}$ the signal exceeds eqs. (3-4)
($\sigma^{\uparrow}=7$ and $\sigma^{\downarrow}=6$) and
differs by 3$\sigma$ from the one condensate model. We do not satisfy $S>B$
since $S/B=0.6$, but that may suffice given the years of experience likely to
precede such measurements.
It is unlikely that this measurement could be done at the LHC. To satisfy
$\sigma^{\uparrow}\geq 5$ for the two condensate model with $S/B=0.6$ would
require more than 1000 fb${}^{-1}$ at the LHC.${}^{1}$
CONCLUSION
The fifth force predicted by the Standard Model must begin to emerge at $\leq 2$ TeV in
$WW$ scattering. If that prediction fails, the Standard Model will be supplanted by a
deeper
theory that will begin to emerge in the same energy region. With 10 fb${}^{-1}$ the
SSC has capability for the full range of possible signals: strong $WW$
scattering above 1 TeV or new quanta from ${\cal L}_{5}\hbox{ }$ below 1 TeV. The strong
scattering signals can occur in complementary resonant and/or nonresonant
channels.
The practicability of measurements with $\geq$ $10^{34}$ cm${}^{-2}$ sec${}^{-1}\hbox{ }$is beyond the scope of
this
paper. In addition to accelerator and detector hardware questions there are
backgrounds — some mentioned above — which have been studied for $10^{33}$ cm${}^{-2}$ sec${}^{-1}\hbox{ }$but
require study at $10^{34}$. It may take years of experience
to learn to do physics in the 10${}^{34}$ environment. If 100 fb${}^{-1}$ data samples
are eventually achieved and the relevant backgrounds are overcome, the LHC
could
meet the minimum observability criterion for the models discussed here in at
least
one of the $W^{+}W^{+}\hbox{ }$and $WZ$ channels, while $\simeq 350$ fb${}^{-1}$ would be needed
in
the $ZZ$ channel. Luminosity $\geq 10^{34}$ at the SSC would enable
the detailed studies of ${\cal L}_{5}\hbox{ }$ that will be needed after the
initial discovery whether ${\cal L}_{5}\hbox{ }$ is weak or strong. That program could extend
productively for several decades into the next century.
Acknowledgements: I wish to thank Bill Kilgore for helping me to
understand the $\rho\hbox{ }$exchange model, for suggesting a sensible unitarization
method, and for preparing the data compilations.
REFERENCES
1.
A more complete presentation of these results may be found in
M.S. Chanowitz, LBL-32846, 1992 (to be published in Perspectives on Higgs Physics, N.Y.: World Sci.)
2.
M.S. Chanowitz and M.K. Gaillard, Nucl. Phys. B261, 379 (1985).
3.
M.S. Chanowitz, M. Golden, and H.M. Georgi, Phys. Rev. D36, 1490
(1987); Phys. Rev. Lett. 57, 2344 (1986).
4.
V. Barger et al., Phys. Rev. D42, 3052 (1990).
5.
M. Berger and M.S. Chanowitz, Phys. Lett. 263B, 509 (1991).
6.
T. Appelquist and C. Bernard, Phys. Rev. D22, 200 (1980); A. Longhitano, Phys. Rev. D22, 1166 (1980);
J.F. Donoghue and C. Ramirez, Phys. Lett. 234B, 361 (1990);
A. Dobado, M.J. Herrero, and J. Terron, Z. Phys. C50, 205 (1991);
S. Dawson and G. Valencia, Nucl. Phys. B352, 27 (1991).
7.
See fit $a$ in figure 4 of
J. Donoghue, C. Ramirez, and G. Valencia, Phys. Rev. D38, 2195 (1988).
8.
S. Weinberg, Phys. Rev. 166, 1568 (1968).
9.
This prescription is due to W. Kilgore.
10.
E. Eichten et al., Rev. Mod. Phys. 56, 579 (1984).
11.
Solenoidal Detector Collaboration, E.L. Berger et al.,
Technical Design Report, SDC-92-201, 1992.
12.
D. Dicus and R. Vega, Nucl. Phys. B329, 533 (1990).
13.
M.S. Chanowitz and M. Golden, Phys. Rev. Lett. 61, 1053 (1988);
E 63, 466 (1989);
D. Dicus and R. Vega, Phys. Lett. 217B, 194 (1989).
14.
D. Dicus, J. Gunion, and R. Vega, Phys. Lett. 258B, 475 (1991);
D. Dicus, J. Gunion, L. Orr, and R. Vega, UCD-91-10, 1991.
15.
M. Berger and M.S. Chanowitz, Phys. Rev. Lett. 68, 757 (1992). |
Influence of pulsatile blood flow on allometry of aortic wall shear stress
G. Croizat, A. Kehren, H. Roux de Bézieux, A. Barakat
Laboratoire d’hydrodynamique de l’Ecole polytechnique (LADHYX)
9 Boulevard des Maréchaux, 91120 Palaiseau, France
Abstract
Shear stress plays an important role in the initiation and evolution of atherosclerosis. A key element for in-vivo measurements and extrapolations is the dependence of shear stress on body mass. For a Poiseuille model of the blood flow, P. Weinberg and C. Ethier [2] showed that shear stress on the aortic endothelium varies as body mass to the power $-\frac{3}{8}$, and is therefore 20-fold higher in mice than in men. However, by considering a more physiological combined Poiseuille-Womersley oscillating flow in the aorta, we show that the results differ notably: at larger masses ($M>10\ kg$) shear stress varies as body mass to the power $-\frac{1}{8}$, which changes the man-to-mouse ratio to 1:8. The allometry and value of the temporal gradient of shear stress also change: $\partial\tau/\partial t$ varies as $M^{-3/8}$ instead of $M^{-5/8}$ at larger masses, and the 1:150 ratio from man to mouse becomes 1:61. Lastly, we show that the unsteady component of blood flow does not influence the constant allometry of peak velocity with body mass: $u_{max}\propto M^{0}$. This work extends our knowledge of the dependence of hemodynamic parameters on body mass and paves the way for a more precise extrapolation of in-vivo measurements to humans and bigger mammals.
Introduction
The formation of atherosclerosis in arteries is a multifactorial process whose mechanical factors are still not fully understood ([12], ch. 23, p. 502). Yet, it has been shown that wall shear stress plays a key part in this phenomenon [1].
For theoretical analyses as well as for experimentation, it is very useful to know the dependence of wall shear stress $\tau$ in arteries, and especially in the aorta, on body mass $M$. It has long been assumed that wall shear stress was uniform across species, thus independent of body mass. However, recent works have shown that this assumption is not correct: allometric relationships, expressed as $\tau\sim M^{\alpha}$ with $\alpha\in\mathbb{R}$, can be obtained from simple models of blood flow in the aorta. Note that $\propto$ and $\sim$ shall both be understood as ”is proportional to”. P. Weinberg and C. Ethier modeled blood in the aorta as a Poiseuille flow and deduced the following result: $\tau_{Poiseuille}\sim M^{-3/8}$ [2]. This result implies that wall shear stress in mice should be 20 times higher than in men. The objective of this article is to study, with a more precise model of blood flow in the aorta, the allometric relationship between body mass and wall shear stress and other key parameters. In order to be closer to physiology, we add a purely oscillatory unsteady profile to the steady Poiseuille one. This oscillating component is called the Womersley flow.
We will first detail the mathematical derivation of the Poiseuille and Womersley profiles, before exploring their influence on the allometry of wall shear stress (WSS), temporal gradient of wall shear stress (TGWSS) and peak velocity.
1 Mathematical modeling
In everything that follows, blood is considered an incompressible Newtonian fluid of density $\rho=1060\ kg.m^{-3}$ and dynamic viscosity $\mu=3.0\times 10^{-3}\ Pa.s$. The arteries are assumed to be rigid with radius $a$, and we adopt a no-slip condition on the arterial wall ($u(r=a)=0$). We also assume the flow to be axisymmetric. Model-dependent hypotheses are described in the relevant subsections. See table 5 at the end of the document for all constants used in this work.
All numeric simulations were done in Matlab R2016a (The MathWorks, Inc., Natick, Massachusetts, United States). $log$ stands for logarithm in base 10, and $ln$ for natural logarithm.
1.1 Poiseuille flow
In the case of a Poiseuille flow, the wall shear stress is:
$$\tau=2\mu\frac{u_{max}}{a}$$
(1)
Let us detail the allometric arguments of equation 1, applied to the aortic artery:
•
the aortic radius $a$ is assumed to vary as $M^{0.375}$, both from theoretical ([5]) and experimental ([3], [4]) considerations
•
the dynamic viscosity of blood $\mu$ is assumed to be independent of the body mass
•
the aortic velocity $u_{max}$ can be seen as the cardiac flow rate divided by the aortic cross-section, $u_{max}=\frac{Q}{\pi a^{2}}$. The cardiac flow rate varies as $M^{0.75}$, as described in [5] and [6]. Therefore, $u_{max}$ is independent of $M$.
We can conclude that in the case of a Poiseuille flow, wall shear stress scales as:
$$\tau_{Poiseuille}\propto M^{-0.375}$$
(2)
Additionally, we can deduce the allometry of temporal gradient of wall shear stress. Since $\frac{\partial\tau}{\partial t}\sim\omega\tau$, $\omega\sim M^{-0.25}$ being the cardiac frequency, we get:
$$\left(\frac{\partial\tau}{\partial t}\right)_{Poiseuille}\sim M^{-0.625}$$
(3)
We obtain the values for mice, rabbits and humans given in tab. 1, as described in [2], relative to the human value.
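These Poiseuille ratios follow directly from the two exponents; a minimal numeric check, assuming illustrative masses of 25 g for a mouse and 70 kg for a man:

```python
# Poiseuille allometry: tau ~ M^-0.375 and dtau/dt ~ M^-0.625.
# The mouse and human masses below are illustrative assumptions.
m_mouse, m_man = 0.025, 70.0  # kg

wss_ratio = (m_man / m_mouse) ** 0.375    # tau(mouse) / tau(man)
tgwss_ratio = (m_man / m_mouse) ** 0.625  # TGWSS(mouse) / TGWSS(man)
assert 19 < wss_ratio < 21      # the ~20-fold factor quoted in the abstract
assert 130 < tgwss_ratio < 160  # of order the 1:150 ratio quoted there
```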
1.2 Womersley flow
We consider a purely oscillating pressure gradient (eq. 5) in the same geometry, and derive the allometry of wall shear stress, temporal gradient of wall shear stress and peak velocity associated with this flow.
1.2.1 Governing equations
In this context, the Navier-Stokes equation projected on $z$ can be written as:
$$\rho\frac{\partial u}{\partial t}=-\nabla P+\mu\Delta u$$
(4)
We insert the expression of the pressure gradient (with $G_{1}\in\mathbb{R}$ constant) into (4):
$$-\nabla P=\Re(G_{1}\exp(i\omega t))$$
(5)
as well as the expression of the velocity (note that $A(r)\in\mathbb{C}$):
$$u(r,t)=\Re(A(r)\exp(i\omega t))$$
(6)
We obtain the following equation, ${}^{\prime}$ denoting differentiation with respect to $r$:
$$A^{\prime\prime}+\frac{1}{r}A^{\prime}-\frac{i\omega\rho}{\mu}A=-\frac{G_{1}}{\mu}$$
(7)
where we recognize the sum of a constant particular solution and a Bessel differential equation. We get the following velocity field, with $J_{0}$ the zeroth-order Bessel function of the first kind. We also introduce the Womersley number $\alpha=a\sqrt{\frac{\omega\rho}{\mu}}$. This dimensionless number corresponds to the ratio of transient inertial forces to viscous forces.
$$u(r,t)=\Re\left(\frac{G_{1}}{i\omega\rho}\left(1-\frac{J_{0}(i^{\frac{3}{2}}\alpha\frac{r}{a})}{J_{0}(i^{\frac{3}{2}}\alpha)}\right)e^{i\omega t}\right)\ \ \text{on}\ \vec{u_{z}}$$
(8)
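As an order-of-magnitude check of the Womersley number itself, using the blood properties from section 1 and assumed (illustrative) values for the human aortic radius and heart rate:

```python
import math

rho, mu = 1060.0, 3.0e-3        # kg/m^3 and Pa.s, from section 1
a = 0.01                        # m, assumed human aortic radius
omega = 2 * math.pi * 70 / 60   # rad/s, assumed resting heart rate of 70 bpm

alpha = a * math.sqrt(omega * rho / mu)
assert 10 < alpha < 25          # typical human aortic Womersley number
```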
1.2.2 Derivation
Let us now calculate the flow rate $Q$:
$$Q(t)=\int_{0}^{a}u(r)2\pi rdr$$
(9)
and the wall shear stress $\tau$:
$$\tau(t)=-\mu\frac{\partial u}{\partial r}|_{r=a}$$
(10)
The calculation of (10) reduces to
$$\frac{\partial}{\partial r}[J_{0}(i^{\frac{3}{2}}\alpha\frac{r}{a})]_{|r=a}$$
(11)
Using $J_{0}^{\prime}=-J_{1}$ in $\mathbb{C}$, we can conclude that
$$\tau(t)=\Re\left(\frac{aG_{1}}{i^{\frac{3}{2}}\alpha}\frac{J_{1}(\alpha i^{\frac{3}{2}})}{J_{0}(\alpha i^{\frac{3}{2}})}e^{i\omega t}\right)$$
(12)
Concerning (9), the key point is
$$\int_{0}^{a}rJ_{0}(i^{\frac{3}{2}}\alpha\frac{r}{a})dr$$
(13)
Using $x^{n}J_{n-1}(x)=\frac{d}{dx}(x^{n}J_{n}(x))$ for $n\in\mathbb{N}^{*}$ and $x\in\mathbb{C}$, we get
$$Q(t)=\Re\left(\frac{\pi a^{2}iG_{1}}{\omega\rho}\left(1-\frac{2J_{1}(\alpha i^{\frac{3}{2}})}{\alpha i^{\frac{3}{2}}J_{0}(\alpha i^{\frac{3}{2}})}\right)e^{i\omega t}\right)$$
(14)
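The Bessel identity used in this step, $\int_{0}^{a}rJ_{0}(xr/a)\,dr=a^{2}J_{1}(x)/x$ with $x=i^{\frac{3}{2}}\alpha$, can be checked numerically with scipy (the radius and Womersley number below are illustrative):

```python
import numpy as np
from scipy.special import jv

a, alpha = 0.01, 3.0            # illustrative radius (m) and Womersley number
x = 1j ** 1.5 * alpha

# trapezoidal check of  int_0^a r J0(x r/a) dr = a^2 J1(x) / x
r = np.linspace(0.0, a, 200001)
g = r * jv(0, x * r / a)
lhs = np.sum((g[:-1] + g[1:]) / 2.0) * (r[1] - r[0])
rhs = a**2 * jv(1, x) / x
assert abs(lhs - rhs) < 1e-8 * abs(rhs)
```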
1.3 Physiological pulse wave: Poiseuille + Womersley flow
In reality, the pressure gradient is neither steady nor purely oscillatory. It has been shown ([13], [14]) that the pressure gradient $\nabla P$ can be reasonably approximated by the first six harmonics of its Fourier decomposition. In our case, six harmonics are not needed, as we are only looking at allometric variations. Along with the allometry of the Womersley flow (first harmonic), we will present the sum of the Poiseuille and Womersley flows (fundamental + first harmonic), which we denote the PW flow from here on:
$$u_{PW}=u_{Poiseuille}+u_{Womersley}$$
(15)
Consequently, by linearity of differentiation, wall shear stress and its temporal gradient are obtained by: $\tau_{PW}=\tau_{Poiseuille}+\tau_{Womersley}$ and $\nabla\tau_{PW}=\nabla\tau_{Poiseuille}+\nabla\tau_{Womersley}$. Concerning peak velocity however, calculations are done using the total velocity field $u_{PW}$.
2 Results
2.1 Wall shear stress (WSS)
To evaluate the allometry of wall shear stress, we need to know the allometry of every variable present in its expression. For $a$, $\alpha$ or $\omega$, the allometry is known from [2], as summarized in table 5. But the allometry of $G_{1}$ is unknown. To overcome this problem, we substitute the flow rate $Q$, which follows an experimentally determined allometry $Q\sim M^{0.75}$. This step deserves discussion, since the Womersley flow model implies a zero average flow rate. Although $<Q>$, the time average of $Q$, is always null, its effective (root mean square) value $\sqrt{<Q^{2}>}$ is indeed an increasing function of body mass, which justifies the use of $Q\propto M^{0.75}$ in the Womersley model.
Combining (12) and (14), we obtain
$$\underline{\tau}=\frac{\omega\rho J_{1}(x)}{\pi ai(xJ_{0}(x)-2J_{1}(x))}\underline{Q}$$
(16)
with $x=i^{\frac{3}{2}}\alpha$ and $\underline{Q}$, $\underline{\tau}$ the complex variables such that $Q=\Re(\underline{Q})$ and $\tau=\Re(\underline{\tau})$.
We introduce the following function $f$, such that $\underline{\tau}=\frac{\omega\rho}{\pi a}f(\alpha)\underline{Q}$:
$$f(\alpha)=\frac{iJ_{1}(x)}{(2J_{1}(x)-xJ_{0}(x))}$$
(17)
We plot $|\Re(f)|$ as a function of $\alpha$ in logarithmic scale in figure 1. Note that $\Re(f)$ is negative, i.e. $|\Re(f)|=-\Re(f)$.
•
for $\alpha<2$, $\Re(f)\sim\alpha^{-2}$ ($r^{2}=0.98$)
•
for $\alpha>5$, $\Re(f)\sim\alpha^{-1}$ ($r^{2}=0.99$)
We can confirm these regressions using Taylor and asymptotic expansions of $J_{0}$ and $J_{1}$:
$$J_{0}(x)=1-x^{2}/4+O_{0}(x^{4})$$
(18)
and
$$J_{1}(x)=x/2-x^{3}/16+O_{0}(x^{5})$$
(19)
using big O notation ([16], p. 687).
Consequently, for $\alpha\ll 1$, $f(\alpha)\approx 4i/x^{2}\approx-4/\alpha^{2}$, which confirms $\Re(f)\sim\alpha^{-2}$ for $\alpha\ll 1$.
Conversely, for $|x|\rightarrow+\infty$:
$$J_{0}(x)\approx\sqrt{\frac{2}{\pi x}}cos(x-\pi/4)$$
(20)
and
$$J_{1}(x)\approx\sqrt{\frac{2}{\pi x}}sin(x-\pi/4)$$
(21)
introducing complex sine and cosine ([16], p. 723).
In our case, $x=i^{3/2}\alpha=\alpha\exp(3i\pi/4)$, therefore $f(\alpha)\approx-\frac{i\tan(x)}{x}=\frac{\tanh(\alpha\sqrt{2}/2)}{\alpha}\exp(-3i\pi/4)$ and finally $\Re(f(\alpha))\approx-\frac{\sqrt{2}}{2\alpha}$, which confirms $\Re(f)\sim\alpha^{-1}$ for $\alpha\rightarrow\infty$.
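Both limits of the exact $f(\alpha)$ of eq. (17) can be verified numerically with scipy (recall that $\Re(f)$ is negative in both regimes):

```python
import numpy as np
from scipy.special import jv

def f(alpha):
    """f(alpha) = i J1(x) / (2 J1(x) - x J0(x)), with x = i^{3/2} alpha."""
    x = 1j ** 1.5 * alpha
    return 1j * jv(1, x) / (2 * jv(1, x) - x * jv(0, x))

# small-alpha regime: Re f -> -4 / alpha^2
assert abs(f(0.1).real / (-4 / 0.1**2) - 1) < 0.01
# large-alpha regime: Re f -> -sqrt(2) / (2 alpha)
assert abs(f(100.0).real / (-np.sqrt(2) / 200.0) - 1) < 0.05
```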
Knowing that $\alpha\sim M^{0.25}$, and that $\frac{\omega\rho Q}{\pi a}\sim M^{0.125}$ (see table 5 at the end of the document), we obtain:
$\tau_{Womersley}\sim M^{-0.375}$ for $M<1\ kg$ and $\tau_{Womersley}\sim M^{-0.125}$ for $M>10\ kg$
which differs notably from
$\tau_{Poiseuille}\sim M^{-0.375}$ for all $M$
We can complete the previous chart of relative shear stress for different animals (tab. 2).
We also plot the compared shear stress induced by a Poiseuille, Womersley or physiological PW flows as a function of mass on figure 2.
First of all, we can see that despite several allometric approximations (flow rate, heart rate, aortic radius, Womersley number), the absolute value of wall shear stress, $\tau\in[0.1,10]\ Pa$, is within the range of measured values ([12], ch. 23, p. 502). We observe that the Poiseuille shear stress is, in absolute value, much lower than the Womersley stress (10 to 20 times lower as mass increases). At low masses ($M<1\ kg$), both flows follow the same allometry: $\tau_{PW}\propto M^{-0.375}$. At higher masses, however ($M>10\ kg$), the Womersley stress accounts for the variations of the total stress and thus $\tau_{PW}\propto M^{-0.125}$. In practice, this means that the difference in wall shear stress between mice and men is greatly reduced by the influence of pulsatile flow in larger arteries, and shows the necessity of unsteady modeling of blood flow for correct body-mass allometries.
2.2 Temporal gradient of wall shear stress (TGWSS)
Concerning the temporal gradient of shear stress, we get $\frac{\partial\tau}{\partial t}\sim\omega\tau$, with $\omega\sim M^{-0.25}$ [2], and therefore:
$\frac{\partial\tau}{\partial t}_{wom}\sim M^{-0.625}$ for $M<1\ kg$ and $\frac{\partial\tau}{\partial t}_{wom}\sim M^{-0.375}$ for $M>10\ kg$,
which should be compared to
$\frac{\partial\tau}{\partial t}_{poi}\sim M^{-0.625}$ for all $M$
We compare the values of the shear stress gradient between Womersley and Poiseuille flows in tab. 3. The Poiseuille TGWSS is lower than the Womersley TGWSS, and the allometric domination at high masses is also present. The graph of these functions (fig. 3) is very similar to the one for wall shear stress (fig. 2). At low masses ($M<1\ kg$), both flows follow the same $\frac{\partial\tau}{\partial t}_{PW}\propto M^{-0.625}$ allometry, and at higher masses ($M>10\ kg$), Womersley stresses dominate the allometry: $\frac{\partial\tau}{\partial t}_{PW}\propto M^{-0.375}$. Here again, we observe a reduced difference between men and mice due to the influence of unsteady flow at high Womersley numbers.
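The mouse-to-man ratios discussed in this work can be sketched numerically with a piecewise power law. The body masses (0.025 kg for a mouse, 70 kg for a human) and the single crossover mass of 2 kg bridging the two scaling regimes are illustrative assumptions, not values taken from this study:

```python
# Illustrative check of the mouse-to-man ratios for WSS and TGWSS.
# Assumed (hypothetical) inputs: mouse 0.025 kg, human 70 kg, and one
# crossover mass m_c = 2 kg separating the two scaling regimes.

def ratio(m_small, m_big, exp_low, exp_high, m_c=2.0):
    """Ratio tau(m_small)/tau(m_big) for a piecewise power law:
    tau ~ M**exp_low below m_c and tau ~ M**exp_high above m_c."""
    return (m_c / m_small) ** (-exp_low) * (m_big / m_c) ** (-exp_high)

mouse, man = 0.025, 70.0

# Wall shear stress: exponent -0.375 at low mass, -0.125 at high mass
wss_pw = ratio(mouse, man, -0.375, -0.125)   # PW flow, of order 8
wss_pois = (man / mouse) ** 0.375            # pure Poiseuille, of order 20

# Temporal gradient of WSS: -0.625 at low mass, -0.375 at high mass
tg_pw = ratio(mouse, man, -0.625, -0.375)    # PW flow, of order 60
tg_pois = (man / mouse) ** 0.625             # pure Poiseuille, of order 150

print(wss_pw, wss_pois, tg_pw, tg_pois)
```

With these assumed masses and crossover, the piecewise law lands close to the reduced ratios quoted in the conclusion, while the pure Poiseuille exponents reproduce the classical twenty-fold and $150:1$ figures.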
2.3 Peak velocity
Finally, we can express the maximum velocity. First we insert the expression of the flow rate into the Womersley velocity to absorb the time dependence (again using $x=i^{3/2}\alpha$):
$$u_{Wom}(r)=\frac{Q}{\pi a^{2}}\times\frac{J_{0}(xr/a)-J_{0}(x)}{xJ_{0}(x)-2J_{1}(x)}$$
(22)
We know that $\frac{Q}{\pi a^{2}}$ is independent of $\alpha$ (table 5), therefore
$$u^{Wom}_{max}(\alpha)\propto\max_{r\in[0,a]}\Re\Bigg(\frac{J_{0}(xr/a)-J_{0}(x)}{xJ_{0}(x)-2J_{1}(x)}\Bigg)=\max_{r\in[0,a]}(g_{wom}(\alpha))$$
(23)
Recalling equivalents of $J_{0}$ and $J_{1}$ (eq. 18 to 21), we can find equivalents of $g_{wom}(\alpha)$ at low and high $\alpha$:
•
for $M\ll 1$, $g_{wom}\sim\frac{\sqrt{2}}{2\alpha}\sim M^{-0.25}$, and we observe a divergence:
$$\lim_{M\to 0}u^{Wom}_{max}=+\infty.$$
•
for $M\gg 1$, we also obtain $g_{wom}\sim\frac{1}{\alpha}\sim M^{-0.25}$, and
$$\lim_{M\to+\infty}u^{Wom}_{max}=0.$$
We plot $\max_{r}(g_{wom})$, which represents the Womersley peak velocity only, as a function of M on figure 4. We also add the Poiseuille peak velocity.
Concerning the PW flow, calculations must be done with the total velocity field. Recalling the expression of $u_{Pois}$ as a function of $Q$ and $r$, $u_{Pois}=\frac{2Q}{\pi a^{2}}(1-\frac{r^{2}}{a^{2}})$, $u^{PW}_{max}$ is:
$$u^{PW}_{max}(\alpha)\propto\max_{r\in[0,a]}\Re\Bigg(2\Big(1-\frac{r^{2}}{a^{2}}\Big)+\frac{J_{0}(xr/a)-J_{0}(x)}{xJ_{0}(x)-2J_{1}(x)}\Bigg)=\max_{r\in[0,a]}(g_{PW}(\alpha))$$
(24)
On figure 4, we can see that the Womersley flow alone gives rise to a peak velocity which vanishes with $M$ (blue curve), while the PW flow (yellow curve) shows a constant peak velocity for reasonably high $M$. This proves that the constant peak velocity measured experimentally [18] is due to the steady Poiseuille component of the flow, rather than to the Womersley unsteady component. We also observe that the maximum velocity is systematically reached in the center of the artery ($r=0$, data not shown), which is not the case for a pure Womersley flow at high Womersley number. This confirms the domination of the Poiseuille flow. Figure 4 further proves the necessity of a combined Poiseuille - Womersley flow to account for the variations of the different hemodynamic parameters.
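The behavior described above can be reproduced directly from eqs. (23) and (24). The sketch below (assuming `scipy` is available, taking $a=1$ without loss of generality, and using the parabolic Poiseuille profile) evaluates the radial maxima numerically:

```python
import numpy as np
from scipy.special import jv  # Bessel J_nu; accepts complex arguments

def womersley_term(alpha, r):
    """Normalized Womersley profile of eq. (23), with a = 1."""
    x = alpha * np.exp(3j * np.pi / 4)  # x = i^{3/2} alpha
    return ((jv(0, x * r) - jv(0, x)) / (x * jv(0, x) - 2 * jv(1, x))).real

r = np.linspace(0.0, 1.0, 500)  # radial grid, r/a in [0, 1]

def g_wom(alpha):
    return np.max(womersley_term(alpha, r))

def g_pw(alpha):
    # Parabolic Poiseuille component + Womersley component, as in eq. (24)
    return np.max(2 * (1 - r**2) + womersley_term(alpha, r))

for a in (0.5, 5.0, 50.0):
    print(a, g_wom(a), g_pw(a))
```

Numerically, $g_{wom}$ decreases roughly like $1/\alpha$, while $g_{PW}$ settles near the Poiseuille centerline value of $2$ at high Womersley number, in line with figure 4.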
2.4 Summary of the different allometric laws
We summarize the different allometries obtained throughout this work in table 4.
Conclusion
In this study, we have recalled the derivation of wall shear stress, temporal gradient of wall shear stress and peak velocity, for a Poiseuille and subsequently Womersley modeling of the blood flow in the aorta.
We have shown that a Poiseuille model alone (and, equivalently, a Womersley model alone) cannot account for the variations of the previously mentioned parameters. We showed that a combination of Poiseuille and Womersley flows, here called the PW flow, is necessary to conduct an allometry study.
•
Concerning wall shear stress, Poiseuille and Womersley flows agree on an $M^{-0.375}$ scaling at low Womersley numbers, while the Womersley component dominates at higher $\alpha$ ($M^{-0.125}$ against $M^{-0.375}$). This changes the “Twenty-fold difference in hemodynamic wall shear stress between murine and human aortas” reported in [2] to a lower 8.2-fold difference.
•
For the temporal gradient of wall shear stress, the same phenomenon occurs: while $M^{-0.625}$ is valid at low $\alpha$, the unsteady component dominates at higher $\alpha$ ($M^{-0.375}$ against $M^{-0.625}$), and brings the mice-to-men ratio from $150:1$ to $61:1$.
•
For peak velocity, on the contrary, the independence of $\alpha$ observed experimentally [18] is due to the Poiseuille component rather than to the Womersley one, which follows a vanishing $u_{max}^{wom}\propto M^{-0.25}$.
As a consequence, considering only a Poiseuille flow to infer the wall-shear-stress allometry in mammals is not a precise method. One should instead take into account both components of the blood flow profile (steady Poiseuille and unsteady Womersley) and their respective ranges of domination.
This work gives new insight into the allometry of three key hemodynamic parameters in atherosclerosis. It can also help extrapolate measurements from small mammals such as mice, which are common in cardiovascular laboratories, to larger ones and to humans. A comprehensive imaging study of aortic velocity profiles in heavy mammals could help support or reject this theory and greatly benefit the field.
References
[1]
Cunningham KS., Gotlieb AI.,
The role of shear stress in the pathogenesis of atherosclerosis
Lab Invest. 2005 Jul;85(7):942
[2]
P. Weinberg, C. Ethier,
Twenty-fold difference in hemodynamic wall shear stress between murine and human aortas.
Journal of Biomechanics 40, 2007
[3]
Clark, A.J.,
Comparative physiology of the heart
MacMillan, New-York, 1927
[4]
Holt, J.P., Rhode, E.A., Holt, W.W., Kines, H.
Geometric similarities of aortas, venae cavae and certain of their branches in mammals
American Journal of Physiology Regulatory, Integrative and Comparative Physiology 241, 1981
[5]
West, G.B., Brown, J.H., Enquist, B.J..
A general model for the origin of allometric scaling laws in biology
Science 276, 1997
[6]
Kleiber, M.,
Body size and metabolism
Hilgardia 6, 315-353, 1932
[7]
Castier, Y., Brandes, R.P., Leseche, G., Tedgui, A., Lehoux, S. 2005
p47phox-dependent NADPH oxidase regulates flow-induced vascular remodeling
Circulation Research 97, 533-540, 2005
[8]
Ibrahim, J., Miyashiro, J.K., Berk, B.C.,
Shear stress is differentially regulated among inbred rat strains
Circulation Research 92, 1001–1009, 2003
[9]
Lacy, J.L., Nanavaty, T., Dai, D., Nayak, N., Haynes, N., Martin, C.,
Development and validation of a novel technique for murine first-pass radionuclide angiography with a fast multiwire camera and tantalum 178
Journal of Nuclear Cardiology 8, 171–181, 2001
[10]
Khir, A.W., Henein, M.Y., Koh, T., Das, S.K., Parker, K.H., Gibson, D.G.,
Arterial waves in humans during peripheral vascular surgery
Clinical Science (London) 101, 749–757, 2001
[11]
J. Mestel
Pulsatile flow in a long straight artery
Lecture 14, Biofluid Mechanics, Imperial College, 2009
[12]
W. Nichols, M. O’Rourke, C. Vlachopoulos, A. Hoeks, R. Reneman -
McDonald’s blood flow in arteries -
Sixth edition - Chapter 10 - Contours of pressure and flow waves in arteries
[13]
Tracy Morland -
Gravitational Effects On Blood Flow -
Mathematical Biology group, University of Colorado at Boulder, available at mathbio.colorado.edu/index.php/MBW:Gravitational_Effects_on_Blood_Flow
[14]
J.R. Womersley -
Method for the calculation of velocity, rate of flow and viscous drag in arteries when the pressure gradient is known -
Journal of Physiology (1955) 127, p. 553-563
[15]
C. Cheng & al. -
Review: Large variations in wall shear stress levels within one species and between species -
Atherosclerosis, December 2007, Volume 195, Issue 2, Pages 225–235
[16]
G. Arfken, H. Weber -
Mathematical methods for physicists, sixth edition -
Elsevier Academic Press, 2005
[17]
J. Greve & al. -
Allometric Scaling of Wall Shear Stress from Mouse to Man: Quantification Using Cine Phase-Contrast MRI and Computational Fluid Dynamics -
American Journal of Physiology - Heart and Circulatory Physiology, May 19, 2006
[18]
K. Schmidt-Nielsen -
Scaling: Why is Animal Size so Important? -
Cambridge University Press, Cambridge 1984
[19]
Greve et al.
Allometric scaling of wall shear stress from mice to humans quantification using cine phase-contrast MRI and computational fluid dynamics
Am. J. Physiol. Heart Circ. Physiol. 291, 2006
Lepton universality violation in the MF331 model
P. N. Thu ${}^{a,c}$
N. T. Duy ${}^{a,b}$
A. E. Cárcamo Hernández${}^{d,e,f}$
D. T. Huong${}^{b}$
dthuong@iop.vast.vn (Corresponding author)
thupn@utb.edu.vn
ntduy@iop.vast.vn
antonio.carcamo@usm.cl
${}^{a}$ Graduate University of Science and Technology,
Vietnam Academy of Science and Technology,
18 Hoang Quoc Viet, Cau Giay, Hanoi, Vietnam
${}^{b}$ Institute of Physics, VAST, 10 Dao Tan, Ba Dinh, Hanoi, Vietnam
${}^{c}$ Faculty of Natural Sciences and Technology
Tay Bac University, Quyet Tam Ward, Son La City, Son La Province
${}^{d}$ Universidad Técnica Federico Santa María, Casilla 110-V,
Valparaíso, Chile
${}^{e}$ Centro Científico-Tecnológico de Valparaíso, Casilla 110-V, Valparaíso, Chile
${}^{f}$ Millennium Institute for Subatomic physics at high energy frontier - SAPHIR, Fernandez Concha 700, Santiago, Chile
Abstract
We perform a detailed study of the $\text{b}\to\text{c}\tau\nu$ and $\text{b}\to\text{s}l^{+}l^{-}$ processes in a minimal flipped 331 model based on the $SU(3)_{C}\times SU(3)_{L}\times U(1)_{N}$ gauge symmetry. The non-universal $SU(3)_{L}\times U(1)_{N}$ symmetry in the lepton sector gives rise to non-universal neutral and charged currents involving heavy non-SM gauge bosons and SM leptons, which yield radiative contributions to the $b\to s$, $b\to c$, $s\to u$ and $d\to u$ transitions arising from one-loop-level penguin and box diagrams. We find that the observables related to these transitions agree with their experimental values in a region of parameter space that includes TeV-scale exotic up-type quarks, within the LHC’s reach.
pacs: 12.60.-i, 95.35.+d
I Introduction
In recent years, experimental data in B physics has hinted toward deviations of Lepton Flavor Universality (LFU) in semi-leptonic decays from the Standard Model (SM) expectations. More specifically, measurements of $\text{V}_{\text{cb}}-$independent ratios
$$\displaystyle\text{R}(\text{D}^{(*)})=\frac{\mathcal{B}(\text{B}\to\text{D}^{(*)}\tau\nu)}{\mathcal{B}(\text{B}\to\text{D}^{(*)}l\nu)},$$
(1)
with $l=e$ or $\mu$, have been performed by the Babar BaBar:2012obs ; BaBar:2013mob , Belle Belle:2015qfa ; Belle:2016kgw ; Abdesselam:2016xqt , and LHCb LHCb:2015gmp collaborations. The world average result, which is extracted from the latest announcement of LHCb, is given as:
$$\displaystyle\text{R}(\text{D})_{\text{exp}}$$
$$\displaystyle=0.356\pm 0.029_{\text{total}},\hskip 14.22636pt\text{R}(\text{D}^{*})_{\text{exp}}=0.284\pm 0.013_{\text{total}},$$
(2)
On the other hand, the SM calculations for these ratios, performed by several groups Bigi:2016mdz ; Fajfer:2012vx ; Becirevic:2012jf ; Bernlochner:2017jka ; Bigi:2017jbd ; Jaiswal:2017rve , are averaged in HFLAV:2016hnz as
$$\displaystyle\text{R}(\text{D})_{\text{SM}}$$
$$\displaystyle=0.298\pm 0.004,\hskip 14.22636pt\text{R}(\text{D}^{*})_{\text{SM}}=0.254\pm 0.005.$$
(3)
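As a rough numerical aside (not in the original text), the tension between the measured values in Eq. (2) and the SM averages in Eq. (3) can be quantified per observable, neglecting correlations between $\text{R}(\text{D})$ and $\text{R}(\text{D}^{*})$:

```python
import math

# Per-observable pull = (exp - SM) / quadrature-combined uncertainty,
# using the central values and errors quoted in eqs. (2)-(3).
def pull(exp, sig_exp, sm, sig_sm):
    return (exp - sm) / math.hypot(sig_exp, sig_sm)

pull_d = pull(0.356, 0.029, 0.298, 0.004)    # R(D)
pull_ds = pull(0.284, 0.013, 0.254, 0.005)   # R(D*)
print(pull_d, pull_ds)  # each close to the 2 sigma level
```

A combined significance would require the experimental correlation between the two ratios, which is not quoted here.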
These relative rates have been predicted with rather high accuracy because many hadronic uncertainties are canceled out to a large extent. The SM expectations are significantly lower than the measurements. If confirmed, this could be a signal of new physics (NP). In principle, the NP contributions could be due to a tree-level exchange of a new charged scalar Crivellin:2012ye ; Celis:2012dk ; Crivellin:2013wna , a heavy charged vector Greljo:2015mma ; Boucenna:2016qad ; Greljo:2018ogz , or due to an exchange of leptoquarks Dorsner:2016wpm ; Bauer:2015knc ; Fajfer:2015ycq ; Barbieri:2015yvd ; Becirevic:2016yqi ; Hiller:2016kry ; Crivellin:2017zlb .
Effects due to the presence of light sterile neutrinos have
also been explored in Abada:2013aba ; Cvetic:2016fbv ; Crivellin:2017zlb . Besides, the LHCb, Belle collaborations measured Aaij_2014 ; Aaij_2019 ; Choudhury_2021 ; Aaij2021 ; Aaij_2017 ; Wehle_2021
the ratios,
$$\displaystyle\text{R}_{\text{K}}$$
$$\displaystyle\equiv\frac{\text{Br}\left(\text{B}^{+}\rightarrow\text{K}^{+}\mu^{+}\mu^{-}\right)}{\text{Br}\left(\text{B}^{+}\rightarrow\text{K}^{+}e^{+}e^{-}\right)},\hskip 14.22636pt\text{R}_{\text{K}^{*}}\equiv\frac{\text{Br}\left(\text{B}\rightarrow\text{K}^{*}\mu^{+}\mu^{-}\right)}{\text{Br}\left(\text{B}\rightarrow\text{K}^{*}e^{+}e^{-}\right)}.$$
(4)
The latest values of $\text{R}_{\text{K}},\text{R}_{\text{K}^{*}}$ have been reported in lhcbcollaboration2022test ; lhcbcollaboration2022measurement as
$$\displaystyle\begin{cases}\text{R}_{\text{K}}=0.994^{+0.090}_{-0.082}(\text{stat})^{+0.027}_{-0.029}(\text{syst})&\text{for low-}\text{q}^{2},\\
\text{R}_{\text{K}}=0.949^{+0.042}_{-0.041}(\text{stat})^{+0.023}_{-0.023}(\text{syst})&\text{for central-}\text{q}^{2},\end{cases}$$
(5)
which show a $0.2\sigma$ deviation from the SM expectation Bordone_2016 ; straub2018flavio of $\simeq 1$, and
$$\displaystyle\text{R}_{\text{K}^{*}}^{\text{LHCb}}=\begin{cases}0.927^{+0.093}_{-0.087}(\text{stat})^{+0.034}_{-0.033}(\text{syst})&\text{for low-}\text{q}^{2}\,,\\
1.027^{+0.072}_{-0.068}(\text{stat})^{+0.027}_{-0.027}(\text{syst})&\text{for central-}\text{q}^{2}.\end{cases}$$
These ratios are also $0.2$ standard deviations away from their SM expectations Bordone_2016 ; straub2018flavio ; Altmannshofer_2017 . Solutions explaining both sets of anomalies are very scarce.
This is because the semileptonic decay $\text{B}\to\text{D}^{*}\tau\nu$ is a charged-current process that occurs at tree level, whereas the decay $\text{B}\to\text{K}^{(*)}ll$ occurs at one-loop level in the SM. If the new interactions followed the SM pattern, both processes would show deviations from the SM of similar size, $\text{O}(25\%)$, which would point to a very light mediator. However, a light mediator would be hard to hide from other observables that are in perfect agreement with the SM. Among the proposed models, those based on an extended gauge symmetry group Boucenna:2016wpr , leptoquarks Dorsner:2016wpm ; Bauer:2015knc ; Fajfer:2015ycq ; Barbieri:2015yvd ; Becirevic:2016yqi ; Hiller:2016kry ; Crivellin:2017zlb ; Alonso:2015sja , strong interactions Buttazzo:2016kid , and effective-theory approaches Alonso:2015sja ; Bhattacharya:2014wla ; Greljo:2015mma ; Calibbi:2015kma can produce solutions to both. Specifically, in the non-universal gauge extensions of the SM Boucenna:2016wpr , the B-decay anomalies are explained by non-trivial tuning between parameters of the scalar and gauge sectors and the gauge mixing terms, but the gauge mixing effects are suppressed. In this work, we show that the minimal flipped 3-3-1 (MF331) model Van_Loi_2020 , a version of the F331 models in which the scalar multiplets are reduced to a minimum, can provide a possible explanation for the SM deviations observed in B-meson decays. The F331 model is an extension of the SM based on the gauge symmetry $SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}$ (3-3-1); we refer to such constructions as 331 models. The MF331 model retains all the benefits of the 331 models Pisano_1992 ; PhysRevLett.69.2889 ; PhysRevD.47.4158 ; PhysRevD.22.738 ; Montero_1993 ; Foot_1994 , including solutions for dark matter, neutrino masses, cosmic inflation, and the matter-antimatter asymmetry, all of which are open SM issues.
It also explains the existence of only three SM fermion families, strong CP conservation, and electric charge quantization. The difference between the F331 model and other versions of the 331 model lies in the arrangement of fermions in each generation. In the F331 model, the first lepton generation transforms differently from the other two. Therefore, the model predicts non-universal interactions between the SM leptons and new particles (new fermions and new gauge bosons) Huong_2019 , which naturally provide solutions for the anomalies in B-meson decays. In Duy:2022qhy , we considered the NP contributions to $\text{R}_{\text{K}}$ and $\text{R}_{\text{K}^{*}}$ and looked for a parameter space that can explain the anomalies Aaij2021 ; Aaij_2017 . We demonstrated that the anomalies in $\text{R}_{\text{K}}$ and $\text{R}_{\text{K}^{*}}$ given in Aaij2021 ; Aaij_2017
can be explained in the MF331 model if there is mass degeneracy between the new leptons $E_{i}$ and $\xi$.
However, the new results lhcbcollaboration2022test ; lhcbcollaboration2022measurement , reported at the end of 2022, show that the measured $\text{R}_{\text{K}}$ and $\text{R}_{\text{K}^{*}}$ are compatible with the SM prediction. As a result, in this study we revisit the NP contributions to $\text{R}_{\text{K}}$ and $\text{R}_{\text{K}^{*}}$, in addition to investigating the NP contributions of charged currents to the $\text{R}_{\text{D}}$, $\text{R}_{\text{D}^{*}}$ anomalies. We also study the NP effects of the charged current on the $s\to u$ and $d\to u$ transition processes.
We show how a simultaneous explanation of the $\text{b}\to\text{c}\tau\nu$ and $\text{b}\to\text{s}l^{+}l^{-}$ processes can be achieved in the context of the MF331 model.
The structure of the paper is organized as follows. We summarize the particle content and mass spectra of the MF331 model in Sec. II. In Sec. III, we examine all of the NP contributions that modify the SM charged (neutral) currents at tree level. The effective Hamiltonian for the flavour non-universal $u_{i}-d_{j}$ transitions is described in Sec. IV. In Sec. V, we investigate a variety of observables related to flavour non-universal interactions. Finally, we provide our conclusions in Sec. VI.
II The model
II.1 Particle content
The MF331 model Fonseca_2016 ; Van_Loi_2020 is a unified theory in which the electroweak gauge group is promoted to $SU(3)_{L}\times U(1)_{N}$. The electric charge and hypercharge are determined by
$$\displaystyle Q=T_{3}+\frac{1}{\sqrt{3}}T_{8}+X,\hskip 14.22636ptY=\frac{1}{\sqrt{3}}T_{8}+X,$$
(6)
where $T_{3},T_{8}$ are the diagonal generators of $SU(3)_{L}$, and $X$ is the generator of $U(1)_{X}$. The particle content of the MF331 model is presented in Table 1; more details can be found in Van_Loi_2020 .
The gauge bosons and gauge couplings of the extended electroweak group are denoted as follows
$$\displaystyle SU(3)_{L}$$
$$\displaystyle:\hskip 14.22636ptg,\hskip 14.22636ptA_{\mu}^{a},\hskip 14.22636pta=1,...,8,$$
$$\displaystyle U(1)_{X}$$
$$\displaystyle:\hskip 14.22636ptg^{\prime}=T_{X}g,\hskip 14.22636ptB_{\mu}.$$
(7)
The model contains three neutral gauge bosons, $A_{3},A_{8},B$, which mix among themselves. Diagonalizing the corresponding mass matrix yields three physical states Van_Loi_2020
$$\displaystyle A$$
$$\displaystyle=$$
$$\displaystyle s_{W}A_{3}+\left(\frac{c_{W}t_{W}}{\sqrt{3}}A_{8}+c_{W}\sqrt{1-\frac{t^{2}_{W}}{3}}B\right),$$
(8)
$$\displaystyle Z$$
$$\displaystyle=$$
$$\displaystyle c_{W}A_{3}-\left(\frac{s_{W}t_{W}}{\sqrt{3}}A_{8}+s_{W}\sqrt{1-\frac{t^{2}_{W}}{3}}B\right),$$
(9)
$$\displaystyle Z^{\prime}$$
$$\displaystyle=$$
$$\displaystyle\sqrt{1-\frac{t^{2}_{W}}{3}}A_{8}-\frac{t_{W}}{\sqrt{3}}B,$$
(10)
and their masses are $\left(0,\frac{g^{2}v^{2}}{4c^{2}_{W}},\frac{g^{2}\left[c^{2}_{2W}v^{2}+4c^{4}_{W}w^{2}\right]}{4c^{2}_{W}(3-3s^{2}_{W})}\right)$, respectively. Due to the existence of $v^{\prime},w^{\prime}$, there is a slight mixing between two neutral gauge bosons, $Z,Z^{\prime}$, with a mixing angle defined as follows
$$\displaystyle t_{2\varphi}=-\frac{c_{2W}\sqrt{1+2c_{2W}}v^{2}}{2c^{4}_{W}w^{2}}.$$
(11)
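As a consistency check (not part of the original text), the rotation in Eqs. (8)-(10) from the $(A_{3},A_{8},B)$ basis to the physical $(A,Z,Z^{\prime})$ states can be verified to be orthogonal for any Weinberg angle; a minimal numerical sketch with an arbitrary test value of $s_{W}$:

```python
import numpy as np

sw = 0.48  # arbitrary test value of sin(theta_W), an assumption for the check
cw = np.sqrt(1 - sw**2)
tw = sw / cw
q = np.sqrt(1 - tw**2 / 3)

# Rows: (A, Z, Z') expressed in the (A_3, A_8, B) basis, from eqs. (8)-(10)
R = np.array([
    [sw,   cw * tw / np.sqrt(3),   cw * q],
    [cw,  -sw * tw / np.sqrt(3),  -sw * q],
    [0.0,  q,                     -tw / np.sqrt(3)],
])

print(np.allclose(R @ R.T, np.eye(3)))  # True: the rotation is orthogonal
```

Orthogonality guarantees that the photon remains massless and that the kinetic terms stay canonically normalized after the rotation.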
There are six non-hermitian gauge boson states, $W^{\pm}=\frac{A_{1}\mp iA_{2}}{\sqrt{2}}$, $X^{\pm}=\frac{A_{4}\mp iA_{5}}{\sqrt{2}},Y^{0,(0*)}=\frac{A_{6}\mp iA_{7}}{\sqrt{2}}$. The presence of the VEVs, $v^{\prime}$, $w^{\prime}$, leads to the mixing of the charged gauge bosons, $W^{\pm},X^{\pm}$. The physical states are $(W^{\prime},X^{\prime})$ which are determined in Van_Loi_2020 as
$$\displaystyle\left\{\begin{array}[]{l}W_{\mu}^{\prime}=\cos{\theta}W_{\mu}-\sin\theta X_{\mu},\\
X_{\mu}^{\prime}=\sin{\theta}W_{\mu}+\cos\theta X_{\mu},\end{array}\right.$$
(14)
where $\theta$ is a small mixing angle and is defined by
$$\displaystyle t_{2\theta}\equiv\tan 2\theta=\frac{-2(w^{\prime}v+wv^{\prime})}{v^{2}+v^{\prime 2}+w^{2}+w^{\prime 2}}.$$
(15)
The mass expressions of the non-hermitian gauge boson can be found in Van_Loi_2020 .
In the limit, $v^{\prime},w^{\prime}\ll v\ll w$, two $SU(3)_{L}$ Higgs triplets
can be written in terms of physical states as follows
$$\rho\simeq\left(\begin{array}[]{c}G^{+}_{W}\\
\frac{1}{\sqrt{2}}(v+H+iG_{Z})\\
\frac{1}{\sqrt{2}}w^{\prime}+H^{\prime}\\
\end{array}\right),\hskip 14.22636pt\chi\simeq\left(\begin{array}[]{c}G^{+}_{X}\\
\frac{1}{\sqrt{2}}v^{\prime}+G^{0}_{Y}\\
\frac{1}{\sqrt{2}}(w+H_{1}+iG_{Z^{\prime}})\\
\end{array}\right).$$
(16)
where $H$ is identified with the $126$ GeV SM-like Higgs boson, $H_{1}$ and $H^{\prime}$ represent new neutral non-SM Higgs bosons, and $G_{W,X,Y,Z,Z^{\prime}}$ are the Goldstone bosons associated with the longitudinal components of the $W,X,Y,Z,Z^{\prime}$ gauge bosons, respectively.
II.2 Fermion mass spectrum
The total Yukawa interactions up to six dimensions are given in Ref. Van_Loi_2020 as follows
$$\displaystyle\mathcal{L}_{\text{Yukawa}}=\mathcal{L}^{\text{quark}}_{\text{Yukawa}}+\mathcal{L}^{\text{lepton}}_{\text{Yukawa}}.$$
(17)
The first term in Eq.(17) contains the quark Yukawa interactions, and it can be written as follows
$$\displaystyle\mathcal{L}^{\text{quark}}_{\text{Yukawa}}$$
$$\displaystyle=$$
$$\displaystyle h^{U}_{ab}\bar{Q}_{{aL}}\chi^{*}U_{bR}+h^{u}_{ab}\bar{Q}_{{aL}}\rho^{*}u_{bR}+s^{u}_{ab}\bar{Q}_{{aL}}\chi^{*}u_{bR}$$
(18)
$$\displaystyle+$$
$$\displaystyle s^{U}_{ab}\bar{Q}_{{aL}}\rho^{*}U_{bR}+\frac{h^{d}_{ab}}{\Lambda}\bar{Q}_{aL}\chi\rho d_{bR}+H.c.$$
After the spontaneous breaking of the $SU(3)_{L}\times U(1)_{N}$ electroweak gauge group, the d-quarks gain masses via the following non-renormalizable Yukawa terms:
$$\displaystyle[\mathcal{M}_{d}]_{ab}=\frac{h^{d}_{ab}}{2\Lambda}(wv-w^{\prime}v^{\prime}).$$
(19)
The SM $u$-quarks, $u=(u_{1},u_{2},u_{3})$, and new $U$-quarks, $U=(U_{1},U_{2},U_{3})$, are mixed via the following mass matrix
$$\displaystyle\mathcal{M}_{\text{up}}=\frac{1}{\sqrt{2}}\left(\begin{array}[]{ccc}h^{u}v+s^{u}{v^{\prime}}&h^{U}v^{\prime}+s^{U}v\\
-h^{u}w^{\prime}-s^{u}w&-h^{U}w-s^{U}w^{\prime}\\
\end{array}\right)=\left(\begin{array}[]{ccc}M_{u}&M_{uU}\\
M^{T}_{uU}&M_{U}\\
\end{array}\right).$$
(24)
Due to the conditions $w\gg v\gg w^{\prime},v^{\prime}$ and $h^{u},h^{U}\gg s^{u},s^{U}$, the mass matrix $\mathcal{M}_{\text{up}}$ allows the implementation of a type I seesaw mechanism; after a block diagonalization, the light states, $u^{\prime}=\left(\begin{array}[]{ccc}u_{1}^{\prime},u_{2}^{\prime},u_{3}^{\prime}\end{array}\right)^{T}$, and heavy states, $U^{\prime}=\left(\begin{array}[]{ccc}U_{1}^{\prime},U_{2}^{\prime},U_{3}^{\prime}\end{array}\right)^{T}$, are separated as follows:
$$\displaystyle u^{\prime}=u+\left(M_{uU}^{*}M_{U}^{-1}\right)U=u+T_{u}U,\hskip 14.22636ptU^{\prime}=U-\left(M_{U}^{-1}M_{uU}^{T}\right)u=U-T^{\prime}_{u}u$$
(25)
with $T_{u}=M_{uU}^{*}M_{U}^{-1}$, and $T_{u}^{\prime}=M_{U}^{-1}M_{uU}^{T}$. The light quarks mix together, and the physical states of light quarks are denoted by
$$\displaystyle u^{\prime}_{L,R}$$
$$\displaystyle=\left(\begin{array}[]{ccc}u_{1}^{\prime},u_{2}^{\prime},u_{3}^{\prime}\end{array}\right)_{L,R}^{T}=V_{L,R}^{u}\left(\begin{array}[]{ccc}u_{1},u_{2},u_{3}\end{array}\right)_{L,R}^{T},$$
(28)
$$\displaystyle d^{\prime}_{L,R}$$
$$\displaystyle=\left(\begin{array}[]{ccc}d_{1}^{\prime},d_{2}^{\prime},d_{3}^{\prime}\end{array}\right)_{L,R}^{T}=V_{L,R}^{d}\left(\begin{array}[]{ccc}d_{1},d_{2},d_{3}\end{array}\right)_{L,R}^{T},$$
(31)
$$\displaystyle U^{\prime}_{L,R}$$
$$\displaystyle=\left(\begin{array}[]{ccc}U_{1}^{\prime},U_{2}^{\prime},U_{3}^{\prime}\end{array}\right)_{L,R}^{T}=V_{L,R}^{U}\left(\begin{array}[]{ccc}U_{1},U_{2},U_{3}\end{array}\right)_{L,R}^{T}.$$
(34)
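The seesaw separation in Eq. (25) can be illustrated numerically: for random mass blocks obeying the assumed hierarchy, the singular values of the Schur complement $M_{u}-M_{uU}M_{U}^{-1}M_{uU}^{T}$ match the light singular values of the full matrix. The scales below are illustrative choices, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = 0.246, 5.0  # TeV; the hierarchy w >> v assumed in the text

Mu = 1e-2 * v * rng.standard_normal((3, 3))    # light block  ~ h^u v
MuU = 1e-3 * v * rng.standard_normal((3, 3))   # mixing block ~ s^U v
MU = w * (rng.standard_normal((3, 3)) + 3 * np.eye(3))  # heavy block ~ h^U w

# Full 6x6 up-sector mass matrix with the block structure of eq. (24)
M = np.block([[Mu, MuU], [MuU.T, MU]])
sv_full = np.sort(np.linalg.svd(M, compute_uv=False))

# Seesaw-approximated light and heavy sectors
Mu_eff = Mu - MuU @ np.linalg.inv(MU) @ MuU.T  # Schur complement
sv_light = np.sort(np.linalg.svd(Mu_eff, compute_uv=False))
sv_heavy = np.sort(np.linalg.svd(MU, compute_uv=False))

print(np.allclose(sv_full[:3], sv_light, rtol=1e-3))
print(np.allclose(sv_full[3:], sv_heavy, rtol=1e-3))
```

The approximation error scales like $(M_{uU}/M_{U})^{2}$, so with these scales the light and heavy spectra agree with the exact singular values far below the permille level.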
The second term in Eq.(17) contains the Yukawa interactions of leptons,
$$\displaystyle\mathcal{L}^{\text{lepton}}_{\text{Yukawa}}$$
$$\displaystyle=$$
$$\displaystyle h^{E}_{\alpha b}\bar{\psi}_{\alpha L}\chi E_{bR}+h^{e}_{\alpha b}\bar{\psi}_{\alpha L}\rho e_{bR}+s^{e}_{\alpha b}\bar{\psi}_{\alpha L}\chi e_{bR}+s^{E}_{\alpha b}\bar{\psi}_{\alpha L}\rho E_{bR}$$
(36)
$$\displaystyle+\frac{h_{1b}^{E}}{\Lambda}\bar{\psi}_{1L}\chi\chi E_{bR}+\frac{h_{1b}^{e}}{\Lambda}\bar{\psi}_{1L}\chi\rho e_{bR}+\frac{s_{1b}^{E}}{\Lambda}\bar{\psi}_{1L}\chi\rho E_{bR}$$
$$\displaystyle+\frac{s^{\prime E}_{1b}}{\Lambda}\bar{\psi}_{1L}\rho\rho E_{bR}+\frac{s_{1b}^{e}}{\Lambda}\bar{\psi}_{1L}\chi\chi e_{bR}+\frac{{\rm s}_{1b}^{e}}{\Lambda}\bar{\psi}_{1L}\rho\rho e_{bR}$$
$$\displaystyle+\frac{h_{11}^{\xi}}{\Lambda}\bar{\psi}^{c}_{1L}\chi\chi\psi_{1L}+\frac{s_{11}}{\Lambda}\bar{\psi}^{c}_{1L}\chi\rho\psi_{1L}+\frac{s^{\prime}_{11}}{\Lambda}\bar{\psi}^{c}_{1L}\rho\rho\psi_{1L}$$
$$\displaystyle+\frac{s_{1\alpha}}{\Lambda^{2}}\bar{\psi}^{c}_{1L}\chi\chi\rho\psi_{\alpha L}+\frac{s^{\prime}_{1\alpha}}{\Lambda^{2}}\bar{\psi}^{c}_{1L}\chi\rho\rho\psi_{\alpha L}+H.c.$$
From the charged lepton Yukawa interactions, it follows that the SM charged leptons mix with the exotic ones. In the basis, $e_{a}^{\pm},E_{a}^{\pm},\xi^{\pm}$, the charged lepton mass matrix has the form:
$$\displaystyle\mathcal{M}_{l}=\left(\begin{array}[]{ccc}M_{ee}&M_{eE}&M_{e\xi}\\
M^{T}_{eE}&M_{EE}&M_{E\xi}\\
M_{e\xi}^{T}&M_{E\xi}^{T}&M_{\xi\xi}\end{array}\right),$$
(40)
where,
$$\displaystyle[M_{ee}]_{\alpha b}$$
$$\displaystyle\simeq-\frac{1}{\sqrt{2}}h_{\alpha b}^{e}v+f^{ee}_{\alpha b}(v^{\prime},w^{\prime},s^{e}_{\alpha b}(v,w)),\hskip 14.22636pt[M_{ee}]_{1b}\simeq-\frac{1}{2\sqrt{2}}h_{1b}^{e}v+f^{ee}_{1b}(v^{\prime},w^{\prime},s^{e}_{1b}(v,w)),$$
$$\displaystyle[M_{EE}]_{\alpha b}$$
$$\displaystyle\simeq-\frac{1}{\sqrt{2}}h_{\alpha b}^{E}w+f^{EE}_{\alpha b}(v^{\prime},w^{\prime},s^{E}_{\alpha b}(v,w)),\hskip 14.22636pt[M_{EE}]_{1b}\simeq-\frac{1}{2\sqrt{2}}h_{1b}^{E}w+f^{EE}_{1b}(v^{\prime},w^{\prime},s^{(\prime)E}_{1b}(v,w)),$$
$$\displaystyle[M_{eE}]_{\alpha b}$$
$$\displaystyle\simeq f_{\alpha b}^{eE}(v^{\prime},w^{\prime},s^{E}_{\alpha b}(v,w)),\hskip 14.22636pt[M_{eE}]_{1b}\simeq f_{1b}^{eE}(v^{\prime},w^{\prime},s^{(\prime)E}_{\alpha b}(v,w))$$
with $\alpha=2,3$ and
$$\displaystyle M_{\xi\xi}=-h_{11}^{\xi}w+f^{\xi}(v^{\prime}),\hskip 14.22636pt\hskip 14.22636pt[M_{e\xi}]_{1b}=f^{e\xi}_{1b}(w^{\prime},v^{\prime},s_{1b}^{e}w,\text{s}_{1b}^{e}v).$$
(41)
The functions $f^{EE}_{ab},f^{eE}_{ab},f^{e\xi}$ are given in Appendix A. Note that the charged lepton mass matrix, $\mathcal{M}_{l}$, allows the implementation of the type II seesaw mechanism, and the resulting mass eigenstates can be written as follows
$$\displaystyle e^{\prime}$$
$$\displaystyle=\left(\begin{array}[]{ccc}e_{1}^{\prime}&e_{2}^{\prime}&e_{3}^{\prime}\end{array}\right)^{T}=\left(\begin{array}[]{ccc}e_{1}&e_{2}&e_{3}\end{array}\right)^{T}+T_{e}\left(\begin{array}[]{cccc}e_{1}&e_{2}&e_{3}&\xi^{-}\end{array}\right)^{T},$$
(45)
$$\displaystyle E^{\prime}$$
$$\displaystyle=\left(\begin{array}[]{cccc}E_{1}^{\prime}&E_{2}^{\prime}&E_{3}^{\prime}&\xi^{\prime}\end{array}\right)^{T}=\left(\begin{array}[]{cccc}E_{1}&E_{2}&E_{3}&\xi\end{array}\right)^{T}-T_{e}^{\prime}\left(\begin{array}[]{ccc}e_{1}&e_{2}&e_{3}\end{array}\right)^{T}$$
(49)
with $T_{e}=\left(\begin{array}[]{cc}M_{eE}^{*}&M_{e\xi}^{*}\end{array}\right)M_{E\xi}^{*-1}$, and $T_{e}^{\prime}=M_{E\xi}^{-1}\left(\begin{array}[]{cc}M_{eE}&M_{e\xi}\end{array}\right)^{T}$. The light leptons, $e^{\prime}$, and the heavy states, $E^{\prime}$ self-mix, and their physical states are defined via the mixing matrix as follows:
$$\displaystyle e^{\prime}_{L,R}$$
$$\displaystyle=\left(\begin{array}[]{ccc}e_{1}^{\prime}&e_{2}^{\prime}&e_{3}^{\prime}\end{array}\right)_{L,R}^{T}=V^{e}_{L,R}\left(\begin{array}[]{ccc}e_{1}&e_{2}&e_{3}\end{array}\right)_{L,R}^{T},$$
(52)
$$\displaystyle E^{\prime}_{L,R}$$
$$\displaystyle=\left(\begin{array}[]{cccc}E_{1}^{\prime}&E_{2}^{\prime}&E_{3}^{\prime}&\xi^{\prime}\end{array}\right)_{L,R}^{T}=V^{E}_{L,R}\left(\begin{array}[]{cccc}E_{1}&E_{2}&E_{3}&\xi\end{array}\right)_{L,R}^{T}.$$
(55)
The active neutrinos obtain tiny masses from a combination of type II and III seesaw mechanisms, as follows from the leptonic Yukawa interactions and shown in detail in Ref. Van_Loi_2020 . We would like to note that the physical neutrino states are related to the flavor states as follows
$$\displaystyle\nu^{\prime}=\left(\begin{array}[]{ccc}\nu^{\prime}_{1}&\nu^{\prime}_{2}&\nu^{\prime}_{3}\end{array}\right)_{L,R}^{T}=V^{\nu}_{L,R}\left(\begin{array}[]{ccc}\nu_{1}&\nu_{2}&\nu_{3}\end{array}\right)_{L,R}^{T}$$
(58)
III New physics effects on charged currents
The interactions of fermions with gauge bosons are derived from the kinetic energy terms of the fermions and have the following form:
$$\displaystyle\mathcal{L}_{\text{Fermion}}^{\text{kinetic}}=i\bar{F}\gamma^{\mu}D_{\mu}F,$$
(59)
where $F$ runs over all of the model’s fermion multiplets. We can obtain both neutral and charged currents from Eq. (59). The neutral current has the form
$$\displaystyle\mathcal{L}^{\text{N.C}}=-\frac{g}{2{\text{c}}_{W}}\left\{\bar{f}\gamma^{\mu}\left[g^{Z}_{V}(f)-g^{Z}_{A}(f)\gamma_{5}\right]fZ_{\mu}-\bar{f}\gamma^{\mu}\left[g^{Z^{\prime}}_{V}(f)-g^{Z^{\prime}}_{A}(f)\gamma_{5}\right]fZ^{\prime}_{\mu}\right\},$$
(60)
where the vector and axial-vector couplings $g^{Z,Z^{\prime}}_{V}(f)$, $g^{Z,Z^{\prime}}_{A}(f)$ can be found in Van_Loi_2020 .
It is worth noting that the first lepton family transforms as a sextet of $SU(3)_{L}$, whereas the remaining two families transform as triplets, so that lepton flavor universality violation arises from both the gauge couplings and the Yukawa couplings.
The non-universal interactions of the $Z^{\prime}$, $X^{\pm}$ and $Y^{0(0*)}$ bosons with leptons are expressed via the following charged currents Van_Loi_2020 :
$$\displaystyle\mathcal{L}^{\text{C.C}}=\text{J}_{W}^{-\mu}W^{+}_{\mu}+\text{J}_{X}^{-\mu}X^{+}_{\mu}+\text{J}_{Y}^{0\mu}Y^{0}_{\mu}+\text{H.c},$$
(61)
where $\text{J}_{W}^{-\mu}$, $\text{J}_{X}^{-\mu}$ and $\text{J}_{Y}^{0\mu}$ are given by:
$$\displaystyle\text{J}_{W}^{-\mu}$$
$$\displaystyle=-\frac{g}{\sqrt{2}}\left\{\bar{\nu}_{{aL}}\gamma^{\mu}e_{{aL}}+\bar{u}_{{aL}}\gamma^{\mu}d_{{aL}}+\sqrt{2}\left(\bar{\xi^{+}_{{L}}}\gamma^{\mu}\xi^{0}_{{L}}+\bar{\xi^{0}_{{L}}}\gamma^{\mu}\xi^{-}_{{L}}\right)\right\},$$
(62)
$$\displaystyle\text{J}_{X}^{-\mu}$$
$$\displaystyle=-\frac{g}{\sqrt{2}}\left\{\bar{\nu}_{\alpha{L}}\gamma^{\mu}E_{\alpha{L}}+\sqrt{2}\left(\bar{\nu}_{1{L}}\gamma^{\mu}E_{1{L}}+\bar{\xi^{+}_{{L}}}\gamma^{\mu}\nu_{1{L}}\right)+\bar{\xi_{{L}}^{0}}\gamma^{\mu}e_{1{L}}-\bar{U}_{{aL}}\gamma^{\mu}d_{{aL}}\right\},$$
(63)
$$\displaystyle\text{J}_{Y}^{0\mu}$$
$$\displaystyle=-\frac{g}{\sqrt{2}}\left\{\bar{e}_{\alpha{L}}\gamma^{\mu}E_{\alpha{L}}+\sqrt{2}\left(\bar{e}_{1{L}}\gamma^{\mu}E_{1{L}}+\bar{\xi^{-}_{{L}}}\gamma^{\mu}e_{1{L}}\right)+\bar{\xi_{{L}}^{0}}\gamma^{\mu}\nu_{1{L}}+\bar{U}_{{aL}}\gamma^{\mu}u_{{aL}}\right\}.$$
(64)
To find the interaction vertices of the fermions with the charged gauge bosons, we have to work with the physical states:
$$\displaystyle u_{L,R}$$
$$\displaystyle=\left(V_{L,R}^{u}\right)^{-1}u_{L,R}^{\prime}-T_{u}\left(V^{U}_{L,R}\right)^{-1}U_{L,R}^{\prime},$$
$$\displaystyle U_{L,R}$$
$$\displaystyle=\left(V_{L,R}^{U}\right)^{-1}U_{L,R}^{\prime}+T_{u}^{\prime}\left(V^{u}_{L,R}\right)^{-1}u_{L,R}^{\prime},$$
$$\displaystyle d_{L,R}$$
$$\displaystyle=$$
$$\displaystyle\left(V_{L,R}^{d}\right)^{-1}d_{L,R}^{\prime},$$
$$\displaystyle e_{L,R}$$
$$\displaystyle=\left(V_{L,R}^{l}\right)^{-1}e_{L,R}^{\prime}-T_{e}\left(V_{L,R}^{E}\right)^{-1}E_{L,R}^{\prime},$$
$$\displaystyle E_{L,R}$$
$$\displaystyle=\left(V_{L,R}^{E}\right)^{-1}E_{L,R}^{\prime}+T_{e}^{\prime}\left(V_{L,R}^{l}\right)^{-1}e_{L,R}^{\prime}.$$
It is worth noting that $V_{CKM}=V^{u}_{L}\left(V_{L}^{d}\right)^{\dagger}$, $U_{PMNS}=V_{L}^{l}\left(V_{L}^{\nu}\right)^{\dagger}$.
Due to the gauge mixing of $W^{\pm}$ with $X^{\pm}$ and to fermion mixing effects, the left-handed SM fermions have anomalous flavor-changing couplings. The part of the Lagrangian describing these interactions, obtained from Eq. (61), is:
$$\displaystyle\delta\mathcal{L}^{C.C}\ni$$
$$\displaystyle-\frac{g}{\sqrt{2}}\left[\left(V_{CKM}\Delta_{L}^{q}\right)_{ij}W_{\mu}^{{}^{\prime}+}\bar{u_{L}^{\prime}}^{i}\gamma^{\mu}d^{\prime j}_{L}+\left(\left(U_{PMNS}\right)^{\dagger}\Delta_{L}^{l}\right)_{ij}W_{\mu}^{{}^{\prime}+}\bar{\nu^{\prime}}^{i}_{L}\gamma^{\mu}l^{\prime j}_{L}\right]$$
$$\displaystyle-\frac{g}{\sqrt{2}}\left[\left(V_{CKM}\Delta_{L}^{{}^{\prime}q}\right)_{ij}X_{\mu}^{{}^{\prime}+}\bar{u^{\prime}_{L}}^{i}\gamma^{\mu}d^{\prime j}_{L}+\left(\left(U_{PMNS}\right)^{\dagger}\Delta_{L}^{\prime l}\right)_{ij}X_{\mu}^{{}^{\prime}+}\bar{\nu^{\prime}}^{i}_{L}\gamma^{\mu}l^{\prime j}_{L}\right]+H.c,$$
where
$$\displaystyle\left(\Delta_{L}^{q}\right)_{ij}$$
$$\displaystyle=$$
$$\displaystyle c_{\theta}\delta_{ij}+\left(\bar{T_{u}^{\prime}}\right)_{ij}s_{\theta},$$
(66)
$$\displaystyle\left(\Delta^{{}^{\prime}q}_{L}\right)_{ij}$$
$$\displaystyle=$$
$$\displaystyle s_{\theta}\delta_{ij}-\left(\bar{T_{u}^{\prime}}\right)_{ij}c_{\theta},$$
(67)
$$\displaystyle\left(\Delta^{l}_{L}\right)_{ij}$$
$$\displaystyle=$$
$$\displaystyle\begin{cases}c_{\theta}\delta_{ij}-\sqrt{2}s_{\theta}\left(T_{e}^{\prime}\right)_{ij}\hskip 14.22636pt\textrm{for }i,j=1,\\
c_{\theta}\delta_{ij}-s_{\theta}\left(T_{e}^{\prime}\right)_{ij}\ \hskip 14.22636pt\hskip 14.22636pt\textrm{for }i,j=2,3,\end{cases}$$
(68)
$$\displaystyle\left(\Delta^{{}^{\prime}l}_{L}\right)_{ij}$$
$$\displaystyle=$$
$$\displaystyle\begin{cases}s_{\theta}\delta_{ij}+\sqrt{2}c_{\theta}\left(T_{e}^{\prime}\right)_{ij}&\textrm{for }i,j=1,\\
s_{\theta}\delta_{ij}+c_{\theta}\left(T_{e}^{\prime}\right)_{ij}&\textrm{for }i,j=2,3.\end{cases}$$
(69)
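As a cross-check of the piecewise structure in Eqs. (68)-(69), one can encode the lepton coupling matrices numerically. The sketch below is illustrative only: the mixing angle $\theta$ and the matrix $T_{e}^{\prime}$ are hypothetical placeholder inputs, not values derived from the model, and the first-family rule is applied to the $(1,1)$ entry.

```python
import numpy as np

def delta_l(theta, Te_prime):
    """Build (Delta_L^l)_ij and (Delta_L'^l)_ij following Eqs. (68)-(69).

    The first-generation entry carries an extra factor sqrt(2) on the
    T_e' term, reflecting the sextet embedding of the first lepton family.
    """
    c, s = np.cos(theta), np.sin(theta)
    d = c * np.eye(3) - s * Te_prime           # generic entries (i, j = 2, 3)
    dp = s * np.eye(3) + c * Te_prime
    # first family: sqrt(2) enhancement of the T_e' piece
    d[0, 0] = c - np.sqrt(2) * s * Te_prime[0, 0]
    dp[0, 0] = s + np.sqrt(2) * c * Te_prime[0, 0]
    return d, dp
```

In the limit $\theta\to 0$, $\Delta_{L}^{l}$ reduces to the identity and $\Delta_{L}^{\prime l}$ to the (rescaled) $T_{e}^{\prime}$ piece, as expected from the equations.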
IV The flavor non-universality effective Hamiltonian in $u^{i}-d^{j}$ transitions
The contributions of the charged currents to lepton-flavor non-universal processes, such as the $u^{i}-d^{j}$ transition, are contained in the effective Hamiltonian
$$\displaystyle\mathcal{H}_{eff}=\left[\text{C}_{\nu_{a}e_{b}}^{u_{i}d_{j}}\right]\left(\bar{u}^{\prime}_{iL}\gamma^{\mu}d_{jL}^{{}^{\prime}}\bar{\nu}_{aL}^{\prime}\gamma_{\mu}e^{\prime}_{bL}\right).$$
(70)
At tree level, the Wilson coefficients (WCs), $\left[\text{C}_{\nu_{a}e_{b}}^{u_{i}d_{j}}\right]_{\text{tree}}$, are separated as follows
$$\displaystyle\left[\text{C}_{\nu_{a}e_{b}}^{u_{i}d_{j}}\right]_{\text{tree}}=\left[\text{C}_{\nu_{a}e_{c}}^{u_{i}d_{k}}\right]_{\text{SM}}\left(\delta\left[\text{C}^{u_{k}d_{j}}_{\nu_{c}e_{b}}\right]_{W_{\mu}^{{}^{\prime}}}+\delta\left[\text{C}^{u_{k}d_{j}}_{\nu_{c}e_{b}}\right]_{X_{\mu}^{{}^{\prime}}}\right)$$
(71)
with
$$\displaystyle\left[\text{C}_{\nu_{a}e_{c}}^{u_{i}d_{k}}\right]_{\text{SM}}$$
$$\displaystyle=\frac{4G_{F}}{\sqrt{2}}\left(U_{PMNS}\right)^{\dagger}_{ac}\left(V_{CKM}\right)_{ik},$$
$$\displaystyle\delta\left[\text{C}_{\nu_{c}e_{b}}^{u_{k}d_{j}}\right]_{W_{\mu}^{{}^{\prime}}}$$
$$\displaystyle=\left(\Delta_{L}^{q}\right)_{kj}\left(\Delta_{L}^{l}\right)_{cb},$$
$$\displaystyle\delta\left[\text{C}_{\nu_{c}e_{b}}^{u_{k}d_{j}}\right]_{X_{\mu}^{{}^{\prime}}}$$
$$\displaystyle=\frac{m_{W}^{2}}{m_{X}^{2}}\left(\Delta_{L}^{\prime q}\right)_{kj}\left(\Delta_{L}^{\prime l}\right)_{cb}.$$
(74)
The strength of the new interactions is of order $v^{\prime}/v,\,w^{\prime}/w\simeq\mathcal{O}(\varepsilon^{2})$, which implies that NP contributions arising from the tree-level exchange of heavy vector bosons are strongly suppressed. The non-universal interactions of the SM leptons and the new leptons with the new gauge bosons also generate four-fermion interactions at the one-loop level via box and penguin diagrams, which are shown in Figs. (1), (2), (3).
Including the one-loop corrections, the WCs, $\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}$, can be separated as follows
$$\displaystyle\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}=\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{tree}}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}.$$
(75)
We split the penguin diagrams’ contribution into two components
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}=\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}^{\text{SM}}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}^{\text{NP}}.$$
(76)
The SM contribution is denoted by the coefficient $\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}^{\text{SM}}$, which is written as
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}^{\text{SM}}=\frac{4G_{F}}{\sqrt{2}}\frac{3g^{2}}{512\pi^{2}}\left\{\frac{1}{m^{2}_{Z}-m^{2}_{W}}\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WWZ}+\frac{1}{m^{2}_{e_{b}}-m^{2}_{\nu_{a}}}\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{We\nu}_{Z}+\frac{1}{m_{W}^{2}}\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WW\gamma}\right\},$$
(79)
where the coefficient $\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WWZ}$ is the contribution of the penguin diagrams labeled (1b), (1c) in Fig. (1) and takes the form:
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WWZ}$$
$$\displaystyle=\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{1b}}^{WWZ}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{1c}}^{WWZ}$$
(80)
$$\displaystyle=4\left(U_{PMNS}\right)^{\dagger}_{ab}\left\{\left(2s_{W}^{2}-1\right)\Gamma^{WZe_{b}}+\Gamma^{WZ\nu_{a}}\right\}\left(V_{CKM}\right)_{ij}.$$
According to the penguin diagram labeled (1a) shown in Fig. (1), the coefficient $\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{Z}^{We\nu}$ is calculated as
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{Z}^{We\nu}=\left(t_{W}^{2}-1\right)\left(U_{PMNS}\right)^{\dagger}_{ab}\Gamma^{W\nu_{a}e_{b}}_{Z}\left(V_{CKM}\right)_{ij}.$$
(81)
The penguin diagram labeled (1d) gives the last SM contribution. It reads
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WW\gamma}=8s_{W}^{2}\left(U_{PMNS}\right)^{\dagger}_{ab}\Gamma^{W\gamma e_{b}}_{Z}\left(V_{CKM}\right)_{ij}.$$
(82)
When the mixing between the new fermions and the SM fermions is ignored, the penguin diagrams give the new contributions
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{penguin}}^{\text{NP}}=\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{Z^{\prime}}^{We\nu}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WXY}_{\text{penguin}}$$
(83)
with
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{Z^{\prime}}^{We\nu}$$
$$\displaystyle=\frac{4G_{F}}{\sqrt{2}}\frac{3g^{2}}{512\pi^{2}}\frac{1}{m_{e_{b}}^{2}-m_{\nu_{a}}^{2}}\frac{1}{c_{W}^{2}\left(1+2c_{2W}\right)}$$
$$\displaystyle\times\left[c_{2W}^{2}\left(V_{L}^{\nu}\right)_{1a}\left(V_{L}^{l}\right)^{\dagger}_{1b}+\left(V_{L}^{\nu}\right)_{\alpha a}\left(V_{L}^{l}\right)^{\dagger}_{\alpha b}\right]\Gamma^{W\nu_{a}e_{b}}_{Z^{\prime}}\left(V_{CKM}\right)_{ij},$$
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WXY}_{\text{penguin}}$$
$$\displaystyle=\frac{4G_{F}}{\sqrt{2}}\frac{3g^{2}}{128\pi^{2}}\frac{1}{m_{X}^{2}-m_{Y}^{2}}\left\{\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WXY}_{2a}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WXY}_{2b}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WXY}_{2c}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]^{WXY}_{2d}\right\},$$
where
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{2a}}^{WXY}$$
$$\displaystyle=\left(V^{\nu}_{L}\right)_{1a}\left(V^{l}_{L}\right)^{\dagger}_{1b}\Gamma^{XY\xi^{0}}\left(V_{CKM}\right)_{ij},$$
(84)
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{2b}}^{WXY}$$
$$\displaystyle=2\left(V^{\nu}_{L}\right)_{1a}\left(V^{l}_{L}\right)^{\dagger}_{1b}\Gamma^{XY\xi}\left(V_{CKM}\right)_{ij},$$
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{2c}}^{WXY}$$
$$\displaystyle=\sum_{c=1}^{3}\mathcal{G}^{\nu_{a}E_{c}X}\Gamma^{XYE_{c}}\left(\mathcal{G}^{l_{b}E_{c}Y}\right)^{\dagger}\left(V_{CKM}\right)_{ij},$$
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{2d}}^{WXY}$$
$$\displaystyle=\left(U_{PMNS}\right)_{ab}\sum_{c=1}^{3}\left(V_{L}^{u}\left(V_{L}^{U}\right)^{\dagger}\right)_{ic}\Gamma^{XYU_{c}}\left(V_{L}^{U}\left(V_{L}^{d}\right)^{\dagger}\right)_{cj}.$$
The couplings, $\mathcal{G}^{\nu_{a}E_{c}X},\mathcal{G}^{e_{b}E_{c}Y}$ are determined as follows:
$$\displaystyle\mathcal{G}^{\nu_{a}E_{c}X}$$
$$\displaystyle=\left(V_{L}^{\nu}\right)_{a\alpha}\left(V_{L}^{E}\right)^{\dagger}_{\alpha c}+\sqrt{2}\left(V_{L}^{\nu}\right)_{a1}\left(V_{L}^{E}\right)^{\dagger}_{1c},$$
(85)
$$\displaystyle\left(\mathcal{G}^{e_{b}E_{c}Y}\right)^{\dagger}$$
$$\displaystyle=\left(V_{L}^{E}\right)_{c\alpha}\left(V_{L}^{l}\right)_{\alpha b}^{\dagger}+\sqrt{2}\left(V_{L}^{E}\right)_{c1}\left(V_{L}^{l}\right)_{1b}^{\dagger},$$
where $\alpha=2,3$, and $\Gamma^{ABC}$ is given in Appendix B.
The box diagrams are shown in Fig. (3) and their contributions to the WCs are as follows
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}=-\frac{4G_{F}}{\sqrt{2}}\frac{51g^{2}}{64\pi^{2}}\frac{m_{W}^{2}}{m_{X}^{2}-m_{Y}^{2}}\left\{\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}^{E}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}^{\xi^{0}}+\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}^{\xi}\right\},$$
(86)
where
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}^{E}$$
$$\displaystyle=$$
$$\displaystyle\sum_{l=1}^{3}\sum_{c=1}^{3}\left(V^{u}_{L}\left(V^{U}_{L}\right)^{\dagger}\right)_{il}\left(V^{U}_{L}\left(V^{d}_{L}\right)^{\dagger}\right)_{lj}\Gamma^{U_{l}E_{c}}\mathcal{G}^{\nu_{a}E_{c}X}\left(\mathcal{G}^{l_{b}E_{c}Y}\right)^{\dagger},$$
(87)
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}^{\xi^{0}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{l=1}^{3}\left(V^{u}_{L}\left(V^{U}_{L}\right)^{\dagger}\right)_{il}\left(V^{U}_{L}\left(V^{d}_{L}\right)^{\dagger}\right)_{lj}\Gamma^{U_{l}\xi^{0}}\left(V_{L}^{\nu}\right)_{a1}\left(V_{L}^{l}\right)_{1b}^{\dagger},$$
$$\displaystyle\left[\text{C}^{u_{i}d_{j}}_{\nu_{a}e_{b}}\right]_{\text{box}}^{\xi}$$
$$\displaystyle=$$
$$\displaystyle 2\sum_{l=1}^{3}\left(V^{u}_{L}\left(V^{U}_{L}\right)^{\dagger}\right)_{il}\left(V^{U}_{L}\left(V^{d}_{L}\right)^{\dagger}\right)_{lj}\Gamma^{U_{l}\xi}\left(V_{L}^{\nu}\right)_{a1}\left(V_{L}^{l}\right)_{1b}^{\dagger}.$$
The functions $\Gamma^{U_{l}E_{c}}$, $\Gamma^{U_{l}\xi^{0}}$, and $\Gamma^{U_{l}\xi}$ are defined in Appendix C.
V Observables connected to the flavor non-universality of interactions
V.1 $b\to c$ transitions
Let us consider the NP effects in $b\to c$ transitions. Both the exclusive and inclusive ratios,
$\text{R}(\text{D}^{(*)})$ and $\text{R}(\text{X}_{\text{c}})$, are taken into account. These ratios, with NP entering through the Wilson coefficients, are presented in Boucenna:2016qad as:
$$\displaystyle\text{R}(\text{D}^{(*)})$$
$$\displaystyle\equiv$$
$$\displaystyle\frac{\Gamma\left(\text{B}\to\text{D}^{(*)}\tau\bar{\nu}\right)}{\Gamma\left(\text{B}\to\text{D}^{(*)}l\bar{\nu}\right)}=\frac{\sum_{k}|\text{C}^{cb}_{3k}|^{2}}{\sum_{k}\left(|\text{C}^{cb}_{1k}|^{2}+|\text{C}^{cb}_{2k}|^{2}\right)}\times\left[\frac{\sum_{k}\left(|\text{C}^{cb}_{1k}|^{2}+|\text{C}^{cb}_{2k}|^{2}\right)}{\sum_{k}|\text{C}^{cb}_{3k}|^{2}}\right]_{\text{SM}}\times\text{R}(\text{D}^{(*)})_{\text{SM}},$$
$$\displaystyle\text{R}(\text{X}_{\text{c}})$$
$$\displaystyle\equiv$$
$$\displaystyle\frac{\Gamma\left(\text{B}\to\text{X}_{\text{c}}\tau\bar{\nu}\right)}{\Gamma\left(\text{B}\to\text{X}_{\text{c}}l\bar{\nu}\right)}=\frac{\sum_{k}|\text{C}^{cb}_{3k}|^{2}}{\sum_{k}|\text{C}^{cb}_{1k}|^{2}}\times\left[\frac{\sum_{k}|\text{C}^{cb}_{1k}|^{2}}{\sum_{k}|\text{C}^{cb}_{3k}|^{2}}\right]_{\text{SM}}\times\text{R}(\text{X}_{\text{c}})_{\text{SM}}.$$
(88)
where $k=1,2,3$ is the generation index of the leptons.
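Eq. (88) is a pure rescaling of the SM prediction by ratios of summed Wilson coefficients. A minimal numerical sketch of that rescaling follows; the coefficient values fed to it are hypothetical placeholders, only the structure of the formula is taken from the text.

```python
import numpy as np

def r_d_star(C_np, C_sm, R_sm):
    """Rescale R(D^(*))_SM following Eq. (88).

    C_np, C_sm: 3x3 arrays of Wilson coefficients C^{cb}_{ak},
    rows a = (e, mu, tau); columns k = neutrino generation.
    """
    num = np.sum(np.abs(C_np[2]) ** 2)                        # tau row
    den = np.sum(np.abs(C_np[0]) ** 2 + np.abs(C_np[1]) ** 2)  # e, mu rows
    num_sm = np.sum(np.abs(C_sm[2]) ** 2)
    den_sm = np.sum(np.abs(C_sm[0]) ** 2 + np.abs(C_sm[1]) ** 2)
    return (num / den) * (den_sm / num_sm) * R_sm
```

A basic sanity check of the structure: when the full coefficients coincide with the SM ones, the rescaling is trivial and the function returns $\text{R}(\text{D}^{(*)})_{\text{SM}}$ itself.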
The ratio $\text{R}(\text{D}^{(*)})_{\text{SM}}$ is determined by Eq. (3), and the ratio $\text{R}(\text{X}_{\text{c}})_{\text{SM}}=0.223(5)$ is reported in PhysRevD.90.034021 .
Furthermore, the experimental value for the inclusive ratio, $\text{R}(\text{X}_{\text{c}})$, is determined as follows
$$\displaystyle\text{R}(\text{X}_{\text{c}})_{\text{exp}}=0.222(22),$$
(89)
and the average values of the measurements, $\text{R}_{\text{D}},\text{R}_{\text{D}^{*}}$, are given in Eq.(2).
The discrepancy between the measured values of $\text{R}_{\text{D}},\text{R}_{\text{D}^{*}}$ and their respective SM predictions is an indication of the presence of NP, whose effects are encoded in the NP Wilson coefficients. In contrast, the experimental result for $\text{R}(\text{X}_{\text{c}})$ is only in slight tension with its SM prediction, so NP effects in $\text{R}(\text{X}_{\text{c}})$ lead to new stringent constraints on the NP parameters.
In the following, we fit the parameter space of the considered model using the data on the observables $\text{R}_{\text{D}},\text{R}_{\text{D}^{*}}$.
As was already indicated, the NP Wilson coefficients depend not only on the SM parameters but also on the new parameters such as the mixing matrices, $V_{L}^{d},V_{L}^{u},V_{L}^{U},V_{L}^{E},V_{L}^{l},V_{L}^{\nu}$, the new particle masses, $m_{X},m_{Y},m_{Z^{\prime}},m_{E_{i}},m_{\xi},m_{\xi^{0}}$, and $m_{U_{i}}$. To perform a numerical study, we use the SM parameters reported in Workman:2022ynf and the new parameters are assumed as follows:
•
The lepton and quark mixing matrices take the following form:
$$\displaystyle V_{L}^{l}=V_{L}^{u}=V_{L}^{U}=V_{L}^{E}=\text{Diag}\left(1,1,1\right),\hskip 14.22636ptV_{L}^{\nu}=U_{\text{PMNS}},\hskip 14.22636ptV_{L}^{d}=V_{\text{CKM}}.\hskip 14.22636pt$$
(90)
This corresponds to the choice of basis in which the up-type quark and charged-lepton mass matrices are diagonal, so that the observed quark and lepton mixings arise only from the down-type quark and neutrino sectors, respectively.
•
To satisfy the LHC constraints Workman:2022ynf , the new gauge boson masses are chosen as follows: $m_{Z^{\prime}}=4500\ \text{GeV},m_{X}=4100\ \text{GeV},m_{Y}^{2}=m_{X}^{2}+m_{W}^{2}.$
•
Without loss of generality, we investigate the mass hierarchies of the new fermions in four scenarios:
–
The three new leptons $E_{i}$ are degenerate in mass, $m_{E_{1}}=m_{E_{2}}=m_{E_{3}}$, as are the three exotic quarks, $m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$.
–
Both the new quarks and the new leptons have a normal mass hierarchy, denoted by the symbol $(\text{E}_{\text{n}}\text{U}_{\text{n}})$: $\frac{m_{E_{1}}}{m_{E_{2}}}=\frac{m_{e}}{m_{\mu}},\frac{m_{E_{1}}}{m_{E_{3}}}=\frac{m_{e}}{m_{\tau}}$, $\frac{m_{U_{1}}}{m_{U_{2}}}=\frac{m_{u}}{m_{c}},\frac{m_{U_{1}}}{m_{U_{3}}}=\frac{m_{u}}{m_{t}}$.
–
Both the new quarks and the new leptons have an inverted mass hierarchy, denoted by the symbol $(\text{E}_{\text{i}}\text{U}_{\text{i}})$: $\frac{m_{E_{1}}}{m_{E_{2}}}=\frac{m_{\mu}}{m_{e}},\frac{m_{E_{1}}}{m_{E_{3}}}=\frac{m_{\tau}}{m_{e}}$, $\frac{m_{U_{1}}}{m_{U_{2}}}=\frac{m_{c}}{m_{u}},\frac{m_{U_{1}}}{m_{U_{3}}}=\frac{m_{t}}{m_{u}}$.
–
The new leptons have a normal mass hierarchy while the exotic quarks have an inverted one, denoted by the symbol $(\text{E}_{\text{n}}\text{U}_{\text{i}})$: $\frac{m_{E_{1}}}{m_{E_{2}}}=\frac{m_{e}}{m_{\mu}},\frac{m_{E_{1}}}{m_{E_{3}}}=\frac{m_{e}}{m_{\tau}}$, $\frac{m_{U_{1}}}{m_{U_{2}}}=\frac{m_{c}}{m_{u}},\frac{m_{U_{1}}}{m_{U_{3}}}=\frac{m_{t}}{m_{u}}$; and vice versa for $(\text{E}_{\text{i}}\text{U}_{\text{n}})$.
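The hierarchies above can all be generated from a single overall scale. The sketch below is a literal encoding of the stated mass ratios for the lepton sector; the reference mass $m_{E_{1}}$ is a free input of the scan, and the SM fermion masses are PDG central values.

```python
# SM charged-lepton masses (GeV, PDG central values)
m_l = {"e": 0.000511, "mu": 0.1057, "tau": 1.777}

def lepton_spectrum(m_E1, hierarchy):
    """New-lepton masses (m_E1, m_E2, m_E3) for the 'degenerate',
    'normal' (E_n) or 'inverted' (E_i) scenarios described in the text."""
    if hierarchy == "degenerate":
        return (m_E1, m_E1, m_E1)
    if hierarchy == "normal":   # m_E1/m_E2 = m_e/m_mu, m_E1/m_E3 = m_e/m_tau
        return (m_E1,
                m_E1 * m_l["mu"] / m_l["e"],
                m_E1 * m_l["tau"] / m_l["e"])
    # inverted: m_E1/m_E2 = m_mu/m_e, m_E1/m_E3 = m_tau/m_e
    return (m_E1,
            m_E1 * m_l["e"] / m_l["mu"],
            m_E1 * m_l["e"] / m_l["tau"])
```

The exotic-quark spectra follow the same pattern with $(m_{u},m_{c},m_{t})$ in place of the lepton masses.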
In the first scenario, shown in the plots of Fig. (4), we display the blue, orange, and green regions of parameter space in the $\delta m-m_{U_{1}}$ plane allowed by the experimental constraints on the $\text{R}(\text{D}),\text{R}(\text{D}^{*}),\text{R}(\text{X}_{c})$ observables, respectively. We assume $m_{E_{1}}=m_{E_{2}}=m_{E_{3}}$ and $m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$, with $m_{\xi}=m_{E_{1}}+\delta m,m_{\xi^{0}}=m_{E_{1}}-\delta m$.
As follows from the plots of Fig. (4), the experimental values of the $\text{R}(\text{D}),\text{R}(\text{D}^{*}),\text{R}(\text{X}_{c})$ observables can be successfully accommodated in two regions of $\delta m$, one of the order of a few GeV and the other of the order of a few TeV. From the plot in the right panel of Fig. (4), we find an upper limit of 5 TeV on the exotic quark mass $m_{U_{1}}$.
In the scenarios where quarks and leptons share the same mass hierarchy, $\text{E}_{\text{n}}\text{U}_{\text{n}}$ or $\text{E}_{\text{i}}\text{U}_{\text{i}}$, or where only one of them has an inverted hierarchy, $\text{E}_{\text{i}}\text{U}_{\text{n}},\text{E}_{\text{n}}\text{U}_{\text{i}}$, we can likewise find a region of the $m_{U_{1}}$, $\delta m$ parameter space that successfully accommodates the experimental values of the $\text{R}(\text{D}),\text{R}(\text{D}^{*}),\text{R}(\text{X}_{c})$ observables. We display the allowed regions of parameter space in the $\delta m-m_{U_{1}}$ plane consistent with the experimental constraints in Fig. (5).
As shown in Fig. (5), $\delta m$ is constrained by the experimental values of these ratios and by the mass hierarchy of the new leptons and quarks. All scenarios, $\text{E}_{\text{i}}\text{U}_{\text{i}},\text{E}_{\text{n}}\text{U}_{\text{n}}$, $\text{E}_{\text{i}}\text{U}_{\text{n}},\text{E}_{\text{n}}\text{U}_{\text{i}}$, allow $\delta m$ at either the electroweak energy scale or the TeV energy scale. The shape of the allowed parameter space depends on the mass hierarchy of the exotic quarks. For the cases $\text{E}_{\text{i}}\text{U}_{\text{n}},\text{E}_{\text{n}}\text{U}_{\text{n}}$, the allowed region of $\delta m$ at the electroweak scale ranges from several GeV to several tens of GeV, while the allowed region at the TeV scale is bounded by a curved surface, which sets a limit on the exotic quark mass. For the cases $\text{E}_{\text{i}}\text{U}_{\text{i}},\text{E}_{\text{n}}\text{U}_{\text{i}}$, the allowed $\delta m$ at the electroweak scale only reaches up to a few GeV, and the allowed region at the TeV scale is a part of the plane bounded by lines of constant $\delta m$ in the $\delta m-m_{U_{1}}$ plane; this sets no limit on the exotic quark mass.
V.2 $s\to u$ transitions
We consider other decay processes, $\text{K}^{+}\to\pi^{0}\text{l}^{+}\nu,\text{K}\to\text{l}\nu,\tau\to\text{K}\nu$, which constrain flavor non-universality. For simplicity, we consider the ratios $\frac{\Gamma(\text{K}\to\mu\bar{\nu})}{\Gamma(\text{K}\to e\bar{\nu})},\frac{\Gamma(\tau\to\text{K}\nu)}{\Gamma(\text{K}\to e\bar{\nu})},\frac{\Gamma(\text{K}^{+}\to\pi^{0}\bar{\mu}\nu)}{\Gamma(\text{K}^{+}\to\pi^{0}\bar{e}\nu)}$. In the considered model, we obtain
$$\displaystyle\frac{\Gamma(\text{K}\to\mu\bar{\nu})}{\Gamma(\text{K}\to e\bar{\nu})}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sum_{k}|\text{C}_{2k}^{us}|^{2}}{\sum_{k}|\text{C}_{1k}^{us}|^{2}}\times\left[\frac{\sum_{k}|\text{C}^{us}_{1k}|^{2}}{\sum_{k}|\text{C}^{us}_{2k}|^{2}}\right]_{\text{SM}}\times\left[\frac{\Gamma(\text{K}\to\mu\bar{\nu})}{\Gamma(\text{K}\to e\bar{\nu})}\right]_{\text{SM}},$$
$$\displaystyle\frac{\Gamma(\tau\to\text{K}\nu)}{\Gamma(\text{K}\to e\bar{\nu})}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sum_{k}|\text{C}_{3k}^{us}|^{2}}{\sum_{k}|\text{C}_{1k}^{us}|^{2}}\times\left[\frac{\sum_{k}|\text{C}^{us}_{1k}|^{2}}{\sum_{k}|\text{C}^{us}_{3k}|^{2}}\right]_{\text{SM}}\times\left[\frac{\Gamma(\tau\to\text{K}\nu)}{\Gamma(\text{K}\to e\bar{\nu})}\right]_{\text{SM}},$$
$$\displaystyle\frac{\Gamma(\text{K}^{+}\to\pi^{0}\bar{\mu}\nu)}{\Gamma(\text{K}^{+}\to\pi^{0}\bar{e}\nu)}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sum_{k}|\text{C}_{2k}^{us}|^{2}}{\sum_{k}|\text{C}_{1k}^{us}|^{2}}\times\left[\frac{\sum_{k}|\text{C}^{us}_{1k}|^{2}}{\sum_{k}|\text{C}^{us}_{2k}|^{2}}\right]_{\text{SM}}\times\left[\frac{\Gamma(\text{K}^{+}\to\pi^{0}\bar{\mu}\nu)}{\Gamma(\text{K}^{+}\to\pi^{0}\bar{e}\nu)}\right]_{\text{SM}}.$$
The experimental values for these ratios are given in Workman:2022ynf
$$\displaystyle\left[\frac{\Gamma(\text{K}\to\mu\bar{\nu})}{\Gamma(\text{K}\to e\bar{\nu})}\right]_{\text{exp}}=4.018(3)\times 10^{4},\left[\frac{\Gamma(\tau\to\text{K}\nu)}{\Gamma(\text{K}\to e\bar{\nu})}\right]_{\text{exp}}=1.89(3)\times 10^{7},\left[\frac{\Gamma(\text{K}^{+}\to\pi^{0}\bar{\mu}\nu)}{\Gamma(\text{K}^{+}\to\pi^{0}\bar{e}\nu)}\right]_{\text{exp}}=0.660(3),$$
while the SM predicted values can be found in Workman:2022ynf
$$\displaystyle\left[\frac{\Gamma(\text{K}\to\mu\bar{\nu})}{\Gamma(\text{K}\to e\bar{\nu})}\right]_{\text{SM}}=4.0037(2)\times 10^{4},\left[\frac{\Gamma(\tau\to\text{K}\nu)}{\Gamma(\text{K}\to e\bar{\nu})}\right]_{\text{SM}}=1.939(4)\times 10^{7},\left[\frac{\Gamma(\text{K}^{+}\to\pi^{0}\bar{\mu}\nu)}{\Gamma(\text{K}^{+}\to\pi^{0}\bar{e}\nu)}\right]_{\text{SM}}=0.663(2).$$
Based on the constraints obtained in the previous studies, we continue our numerical analysis with the $s-u$ transition processes. In Fig. (6), we show contours of the ratios $\frac{\Gamma(\text{K}\to\mu\bar{\nu})}{\Gamma(\text{K}\to e\bar{\nu})},\frac{\Gamma(\tau\to\text{K}\nu)}{\Gamma(\text{K}\to e\bar{\nu})}$, and $\frac{\Gamma(\text{K}^{+}\to\pi^{0}\bar{\mu}\nu)}{\Gamma(\text{K}^{+}\to\pi^{0}\bar{e}\nu)}$ in the $\delta m-m_{U_{1}}$ plane. From top to bottom, the frames correspond to the following cases: $m_{E_{1}}=m_{E_{2}}=m_{E_{3}},m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$; $\text{E}_{\text{n}}\text{U}_{\text{i}}$; $\text{E}_{\text{i}}\text{U}_{\text{n}}$. In all three cases, the allowed region of $\delta m$ that can explain these experimental values again splits into an electroweak-scale and a TeV-scale part. The allowed regions are determined by their consistency with the experimental values of $\text{R}(\text{D}),\text{R}(\text{D}^{(*)})$, and $\text{R}(\text{X}_{c})$ considered previously.
V.3 $d\rightarrow u$ transition
One of the tightest constraints on flavor non-universality comes from the decay processes $d\to ul\bar{\nu}$, corresponding to
$\pi\to l\bar{\nu}$. To cancel the dependence on the combination $\text{G}_{\text{F}}\mid\text{V}_{\text{ud}}\mid$, we consider the ratios $\frac{\Gamma(\tau\rightarrow\pi\nu)}{\Gamma(\pi\rightarrow e\bar{\nu})}$ and $\frac{\Gamma(\pi\rightarrow\mu\bar{\nu})}{\Gamma(\pi\rightarrow e\bar{\nu})}$. The experimental values for
these ratios are collected in Boucenna:2016qad
$$\displaystyle\left[\frac{\Gamma(\tau\rightarrow\pi\nu)}{\Gamma(\pi\rightarrow e\bar{\nu})}\right]_{\text{exp}}=7.90(5)\times 10^{7},\hskip 14.22636pt\left[\frac{\Gamma(\pi\rightarrow\mu\bar{\nu})}{\Gamma(\pi\rightarrow e\bar{\nu})}\right]_{\text{exp}}=8.13(3)\times 10^{3},$$
(93)
while the predictive values of the SM are Cirigliano:2007ga ; Boucenna:2016qad
$$\displaystyle\hskip 14.22636pt\left[\frac{\Gamma(\tau\rightarrow\pi\nu)}{\Gamma(\pi\rightarrow e\bar{\nu})}\right]_{\text{SM}}=7.91(1)\times 10^{7},\hskip 14.22636pt\left[\frac{\Gamma(\pi\rightarrow\mu\bar{\nu})}{\Gamma(\pi\rightarrow e\bar{\nu})}\right]_{\text{SM}}=8.096(1)\times 10^{3}.$$
(94)
For these ratios, the MF331 model predicts:
$$\displaystyle\frac{\Gamma(\pi\rightarrow\mu\bar{\nu})}{\Gamma(\pi\rightarrow e\bar{\nu})}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sum_{k}\mid\text{C}^{ud}_{2k}\mid^{2}}{\sum_{k}\mid\text{C}^{ud}_{1k}\mid^{2}}\times\left[\frac{\sum_{k}|\text{C}^{ud}_{1k}|^{2}}{\sum_{k}|\text{C}^{ud}_{2k}|^{2}}\right]_{\text{SM}}\times\left[\frac{\Gamma(\pi\rightarrow\mu\bar{\nu})}{\Gamma(\pi\rightarrow e\bar{\nu})}\right]_{\text{SM}},$$
$$\displaystyle\frac{\Gamma(\tau\rightarrow\pi\nu)}{\Gamma(\pi\rightarrow e\bar{\nu})}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sum_{k}\mid\text{C}^{ud}_{3k}\mid^{2}}{\sum_{k}\mid\text{C}^{ud}_{1k}\mid^{2}}\times\left[\frac{\sum_{k}|\text{C}^{ud}_{1k}|^{2}}{\sum_{k}|\text{C}^{ud}_{3k}|^{2}}\right]_{\text{SM}}\times\left[\frac{\Gamma(\tau\rightarrow\pi\nu)}{\Gamma(\pi\rightarrow e\bar{\nu})}\right]_{\text{SM}}.$$
In Figs. (7), (8), we contour the ratios $\frac{\Gamma(\tau\rightarrow\pi\nu)}{\Gamma(\pi\rightarrow e\bar{\nu})}$, $\frac{\Gamma(\pi\rightarrow\mu\bar{\nu})}{\Gamma(\pi\rightarrow e\bar{\nu})}$ as functions of $m_{U_{1}},\delta m$ in the cases indicated in the two sections above. For the case $m_{E_{1}}=m_{E_{2}}=m_{E_{3}},m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$, we find that in the TeV-scale region only a few values of $\delta m$ predict ratios consistent with the experimental values, while the GeV-scale region of $\delta m$ can explain these values. In the range $2\,\text{GeV}<\delta m<20\,\text{GeV}$, the upper bound on $m_{U_{1}}$ is below 4 TeV. These conclusions also apply to the cases $\text{E}_{\text{i}}\text{U}_{\text{n}},\text{E}_{\text{n}}\text{U}_{\text{i}}$.
Let us review the allowed parameter space derived from studying the $b-c$, $s-u$, and $d-u$ transition processes. In the first scenario, where $m_{E_{1}}=m_{E_{2}}=m_{E_{3}},m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$, the region of parameter space is determined by the intersection of the $m_{U_{1}}-\delta m$ planes shown in Figs. (4), (6), and (7). We conclude that the allowed region is the part of the plane with $2<\delta m<20$ GeV and $m_{U_{1}}<4$ TeV, or with $\delta m<2$ TeV. The parameter space derived from the scenarios $\text{E}_{\text{i}}\text{U}_{\text{n}}$, $\text{E}_{\text{n}}\text{U}_{\text{i}}$ must be simultaneously consistent with the values depicted in Figs. (5) and (8). Specifically, at the GeV energy scale there is no common pair of values $m_{U_{1}}-\delta m$, while at the TeV energy scale there is a narrow region of $m_{U_{1}}-\delta m$ that successfully accommodates the experimental values of the lepton flavor universality observables.
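The combined bounds quoted above for the degenerate-mass scenario can be summarized as a simple acceptance predicate over the $(\delta m, m_{U_{1}})$ plane. This is only a literal encoding of the stated intervals, with the hypothetical assumption that the TeV-scale branch starts above the electroweak window; it is not a substitute for the full contour analysis.

```python
def allowed_degenerate(delta_m_gev, m_U1_gev):
    """Allowed region for the degenerate-mass scenario as stated in
    the text: 2 GeV < delta_m < 20 GeV with m_U1 < 4 TeV, or a
    TeV-scale branch with delta_m up to about 2 TeV."""
    ew_branch = 2.0 < delta_m_gev < 20.0 and m_U1_gev < 4000.0
    tev_branch = 20.0 < delta_m_gev < 2000.0  # assumed lower edge
    return ew_branch or tev_branch
```

Such a predicate is convenient for filtering points in a random scan before the more expensive observable evaluation.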
V.4 Revising for $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$
In Duy:2022qhy , we investigated the ratios $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$, corresponding to lepton flavor universality observables, in the MF331 model. We demonstrated that there are two sources of contributions to these ratios, namely box and penguin diagrams, with the box diagrams having the stronger influence. Under some additional assumptions, we showed that only a mass degeneracy of the new particles in the box diagrams can fit the $b\to s\mu^{+}\mu^{-}$ anomaly data Aaij_2014 ; Aaij_2019 ; Choudhury_2021 ; Aaij2021 ; Aaij_2017 ; Wehle_2021 .
In December 2022, an updated LHCb analysis of $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$ based on the full Run 1 and 2 datasets was presented lhcbcollaboration2022test ; lhcbcollaboration2022measurement . These new results are consistent with the SM predictions and dramatically change the scenario of NP effects in the ratios $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$. We therefore provide a reassessment of the NP effects in $\text{R}_{\text{K}},\text{R}_{\text{K}^{*}}$ in the considered model. Fig. (9) displays the predicted ratios $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$, obtained by taking random values of $\delta m\in[2,20]$ GeV and $m_{U_{1}}\in[200,5000]$ GeV in three scenarios: $m_{E_{1}}=m_{E_{2}}=m_{E_{3}},m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$; $\text{E}_{\text{i}}\text{U}_{\text{n}}$; and $\text{E}_{\text{n}}\text{U}_{\text{i}}$. We observe that the distribution of points showing the correlation between $\text{R}_{\text{K}}$ and $\text{R}_{\text{K}^{*}}$ depends on the mass hierarchy of the new quarks (leptons). The highest density of points compatible with the most recent measurements lhcbcollaboration2022test ; lhcbcollaboration2022measurement of $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$ corresponds to the scenarios $m_{E_{1}}=m_{E_{2}}=m_{E_{3}},m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$ and $\text{E}_{\text{i}}\text{U}_{\text{n}}$. The model also predicts pairs of $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$ values consistent with the results before December 2022 Aaij_2014 ; Aaij_2019 ; Choudhury_2021 ; Aaij2021 ; Aaij_2017 ; Wehle_2021 in the case $m_{E_{1}}=m_{E_{2}}=m_{E_{3}},m_{U_{1}}=m_{U_{2}}=m_{U_{3}}$, but with a lower density of matches; this does not hold in the case $\text{E}_{\text{i}}\text{U}_{\text{n}}$. Compared with the two cases above, the distribution of points in the case $\text{E}_{\text{n}}\text{U}_{\text{i}}$ is completely different: the correlation between $\text{R}_{\text{K}}$ and $\text{R}_{\text{K}^{*}}$ is almost linear. In this case, there is not only a parameter space that can accommodate the most recent measurements of the ratios $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$, but also another parameter space that accommodates the old data on these observables Aaij_2014 ; Aaij_2019 ; Choudhury_2021 ; Aaij2021 ; Aaij_2017 ; Wehle_2021 .
VI Conclusion
We have performed a detailed analysis of the $\text{b}\to\text{c}\tau\nu$ and $\text{b}\to\text{s}l^{+}l^{-}$ processes within the framework of a theory with $SU(3)_{C}\times SU(3)_{L}\times U(1)_{N}$ gauge symmetry ($331$ model) and non-universal $SU(3)_{L}\times U(1)_{N}$ assignments in the lepton sector. The theory under consideration corresponds to the MF331 model, whose scalar sector is composed of only two $SU(3)_{L}$ scalar triplets, one responsible for the spontaneous breaking of $SU(3)_{L}\times U(1)_{N}$ and the other for triggering the breaking of the SM gauge group. The non-universal $SU(3)_{L}\times U(1)_{N}$ assignments in the lepton sector allow for non-universal neutral and charged currents involving heavy non-SM gauge bosons and SM leptons. Those interactions give rise to radiative contributions to the $\text{u}_{i}-\text{d}_{j}$ transitions arising from one-loop penguin and box diagrams with heavy non-SM up-type quarks, exotic charged leptons, and non-SM gauge bosons running in the internal lines. We performed a detailed numerical analysis of several observables related to the $b\to c$, $s\to u$, and $d\to u$ transitions, finding that the mass hierarchy of the exotic quarks and new leptons has an impact on the ratios involved in these transitions. The region of parameter space where these observables are consistent with their corresponding experimental values features a mass difference $\delta m$ between the two new leptons at the electroweak scale and exotic up-type quark masses at the TeV scale, making these particles accessible at the LHC. In these allowed regions of parameter space, the ratios $\text{R}_{\text{K}}$, $\text{R}_{\text{K}^{*}}$ are predicted to match the recent measurements.
Acknowledgments
This research is funded by International Centre of Physics, Institute of Physics, Vietnam Academy of Science and Technology under Grant No. ICP-2023.02. AECH has received funding from ANID-Chile FONDECYT 1210378, ANID-Chile PIA/APOYO AFB220004 and Milenio-ANID-ICN2019_044
Appendix A The parameters appearing in the lepton mass matrix
The expressions of the functions $f^{EE}_{ab},f^{eE}_{ab},f^{ee}_{ab},f^{e\xi}_{1b},f^{E\xi}_{1b}$ in the mass mixing matrix of leptons $\mathcal{M}_{l}$ are given by
$$\displaystyle f^{ee}_{\alpha b}(v^{\prime},w^{\prime},s^{e}_{\alpha b}(v,w))=-\frac{1}{\sqrt{2}}s^{e}_{\alpha b}v^{\prime},$$
(95)
$$\displaystyle f^{ee}_{1b}(v^{\prime},w^{\prime},s^{e}_{1b}(v,w))=-\frac{1}{\sqrt{2}}\frac{s^{e}_{1b}}{\Lambda}v^{\prime}w-\frac{1}{\sqrt{2}}\frac{s^{\prime e}_{1b}}{\Lambda}vw^{\prime},$$
(96)
$$\displaystyle f^{EE}_{\alpha b}(v^{\prime},w^{\prime},s^{E}_{\alpha b}(v,w))=-\frac{1}{\sqrt{2}}s^{E}_{\alpha b}w^{\prime},$$
(97)
$$\displaystyle f^{EE}_{1b}(v^{\prime},w^{\prime},s^{\prime E}_{1b}(v,w))=-\frac{1}{2}\frac{s^{E}_{1b}}{\Lambda}ww^{\prime}-\frac{1}{2}\frac{s^{\prime E}_{1b}}{\Lambda}w^{\prime 2},$$
(98)
$$\displaystyle f^{eE}_{\alpha b}(v^{\prime},w^{\prime},s^{E}_{\alpha b}(v,w))=-\frac{1}{\sqrt{2}}h^{E}_{\alpha b}v^{\prime}-\frac{1}{\sqrt{2}}s^{E}_{\alpha b}v-\frac{1}{\sqrt{2}}h^{e}_{\alpha b}w^{\prime},$$
(99)
$$\displaystyle f^{eE}_{1b}(v^{\prime},w^{\prime},s^{\prime E}_{1b}(v,w))=-\frac{1}{\sqrt{2}}\frac{h^{E}_{1b}}{\Lambda}v^{\prime}w-\frac{1}{2\sqrt{2}}\frac{s^{E}_{1b}}{\Lambda}(v^{\prime}w^{\prime}+vw)-\frac{1}{\sqrt{2}}\frac{s^{\prime E}_{1b}}{\Lambda}vw^{\prime},$$
(100)
$$\displaystyle f^{e\xi}_{1b}(w^{\prime},v^{\prime},s_{1b}^{e}w,s_{1b}^{e}v)=-\frac{h^{e}_{1b}}{2\Lambda}vv^{\prime}-\frac{s^{e}_{1b}}{2\Lambda}v^{\prime}-\frac{s^{\prime e}_{1b}}{2\Lambda}v^{2},$$
(101)
$$\displaystyle f^{E\xi}_{1b}(w^{\prime},v^{\prime},s_{1b}^{e}w,s_{1b}^{e}v)=-\frac{h^{E}_{1b}}{2\Lambda}v^{\prime 2}-\frac{s^{E}_{1b}}{2\Lambda}vv^{\prime}-\frac{s^{\prime E}_{1b}}{2\Lambda}v^{2}.$$
(102)
Appendix B The $\Gamma^{f_{i}f_{j}f_{k}}$ functions
$$\displaystyle\Gamma^{WZe_{b}}=(m^{2}_{Z}-m^{2}_{W})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{e_{b}}\right)-m^{2}_{Z}\frac{\ln{x_{Z}^{e_{b}}}}{x_{Z}^{e_{b}}-1}+m^{2}_{W}\frac{\ln{x_{W}^{e_{b}}}}{x_{W}^{e_{b}}-1},$$
(103)
$$\displaystyle\Gamma^{WZ\nu_{a}}=(m^{2}_{Z}-m^{2}_{W})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{\nu_{a}}\right)-m^{2}_{Z}\frac{\ln{x_{Z}^{\nu_{a}}}}{x_{Z}^{\nu_{a}}-1}+m^{2}_{W}\frac{\ln{x_{W}^{\nu_{a}}}}{x_{W}^{\nu_{a}}-1},$$
(104)
$$\displaystyle\Gamma^{W\nu_{a}e_{b}}_{Z}=(m^{2}_{e_{b}}-m^{2}_{\nu_{a}})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{Z}\right)-m^{2}_{e_{b}}\frac{\ln{x_{e_{b}}^{Z}}}{x_{e_{b}}^{Z}-1}+m^{2}_{\nu_{a}}\frac{\ln{x_{\nu_{a}}^{Z}}}{x_{\nu_{a}}^{Z}-1},$$
(105)
$$\displaystyle\Gamma^{W\nu_{a}e_{b}}_{Z^{\prime}}=(m^{2}_{e_{b}}-m^{2}_{\nu_{a}})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{Z^{\prime}}\right)-m^{2}_{e_{b}}\frac{\ln{x_{e_{b}}^{Z^{\prime}}}}{x_{e_{b}}^{Z^{\prime}}-1}+m^{2}_{\nu_{a}}\frac{\ln{x_{\nu_{a}}^{Z^{\prime}}}}{x_{\nu_{a}}^{Z^{\prime}}-1},$$
(106)
$$\displaystyle\Gamma^{W\gamma e_{c}}=\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{e_{c}}\right)-\frac{\ln{x_{W}^{e_{c}}}}{x_{W}^{e_{c}}-1},$$
(107)
$$\displaystyle\Gamma^{XYE_{c}}=(m^{2}_{X}-m^{2}_{Y})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{E_{c}}\right)-m^{2}_{X}\frac{\ln{x_{X}^{E_{c}}}}{x_{X}^{E_{c}}-1}+m^{2}_{Y}\frac{\ln{x_{Y}^{E_{c}}}}{x_{Y}^{E_{c}}-1},$$
(108)
$$\displaystyle\Gamma^{XY\xi^{0}}=(m^{2}_{X}-m^{2}_{Y})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{\xi^{0}}\right)-m^{2}_{X}\frac{\ln{x_{X}^{\xi^{0}}}}{x_{X}^{\xi^{0}}-1}+m^{2}_{Y}\frac{\ln{x_{Y}^{\xi^{0}}}}{x_{Y}^{\xi^{0}}-1},$$
(109)
$$\displaystyle\Gamma^{XY\xi}=(m^{2}_{X}-m^{2}_{Y})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{\xi}\right)-m^{2}_{X}\frac{\ln{x_{X}^{\xi}}}{x_{X}^{\xi}-1}+m^{2}_{Y}\frac{\ln{x_{Y}^{\xi}}}{x_{Y}^{\xi}-1},$$
(110)
$$\displaystyle\Gamma^{XYU_{c}}=(m^{2}_{X}-m^{2}_{Y})\left(1+\frac{1}{\epsilon}-\gamma+\ln{4\pi}-\ln m^{2}_{U_{c}}\right)-m^{2}_{X}\frac{\ln{x_{X}^{U_{c}}}}{x_{X}^{U_{c}}-1}+m^{2}_{Y}\frac{\ln{x_{Y}^{U_{c}}}}{x_{Y}^{U_{c}}-1},$$
(111)
with $x^{a}_{b}=\frac{m^{2}_{a}}{m^{2}_{b}}$.
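All of Eqs. (103)–(111) instantiate a single template $\Gamma^{XYf}$ with two boson masses and one fermion mass. A sympy sketch of that template follows; this is an illustration, not code from the paper, and the symbol `div` stands for the scheme constant $\frac{1}{\epsilon}-\gamma+\ln 4\pi$, which cancels in physical combinations:

```python
import sympy as sp

def gamma_XYf(mX2, mY2, mf2, div):
    """Template for the loop functions of Eqs. (103)-(111).

    mX2, mY2: squared boson masses; mf2: squared fermion mass;
    div: the divergent constant 1/eps - gamma + ln(4*pi).
    """
    xX = mX2 / mf2  # x^X_f = m_X^2 / m_f^2
    xY = mY2 / mf2
    return ((mX2 - mY2) * (1 + div - sp.log(mf2))
            - mX2 * sp.log(xX) / (xX - 1)
            + mY2 * sp.log(xY) / (xY - 1))

mZ2, mW2, me2 = sp.symbols('mZ2 mW2 me2', positive=True)
div = sp.Symbol('div')
GammaWZe = gamma_XYf(mZ2, mW2, me2, div)  # Gamma^{W Z e_b} of Eq. (103)

# Consistency check: for degenerate boson masses the function vanishes.
print(sp.simplify(GammaWZe.subs(mZ2, mW2)))  # -> 0
```

The template is antisymmetric under $X\leftrightarrow Y$ and vanishes for degenerate boson masses, which makes index typos in the individual instances easy to catch.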
Appendix C The $\Gamma^{f_{i}f_{j}}$ functions
$$\displaystyle\Gamma^{U_{l}E_{c}}$$
$$\displaystyle=$$
$$\displaystyle\left[\frac{({x^{U_{l}}_{X}})^{2}}{\left(x^{U_{l}}_{X}-x^{E_{c}}_{X}\right)\left(x^{U_{l}}_{X}-1\right)}-\frac{({x^{U_{l}}_{Y}})^{2}}{\left(x^{U_{l}}_{Y}-x^{E_{c}}_{Y}\right)\left(x^{U_{l}}_{Y}-1\right)}\right]\ln m_{U_{l}}$$
(112)
$$\displaystyle-$$
$$\displaystyle\left[\frac{({x^{E_{c}}_{X}})^{2}}{\left(x^{E_{c}}_{X}-x^{U_{l}}_{X}\right)\left(x^{E_{c}}_{X}-1\right)}-\frac{({x^{E_{c}}_{Y}})^{2}}{\left(x^{E_{c}}_{Y}-x^{U_{l}}_{Y}\right)\left(x^{E_{c}}_{Y}-1\right)}\right]\ln m_{E_{c}}$$
$$\displaystyle+$$
$$\displaystyle\frac{x^{U_{l}}_{X}x^{E_{c}}_{X}}{\left(x^{U_{l}}_{X}-1\right)\left(x^{E_{c}}_{X}-1\right)}\ln m_{X}-\frac{x^{U_{l}}_{Y}x^{E_{c}}_{Y}}{\left(x^{U_{l}}_{Y}-1\right)\left(x^{E_{c}}_{Y}-1\right)}\ln m_{Y}$$
$$\displaystyle\Gamma^{U_{l}\xi^{0}}$$
$$\displaystyle=$$
$$\displaystyle\left[\frac{({x^{U_{l}}_{X}})^{2}}{\left(x^{U_{l}}_{X}-x^{\xi^{0}}_{X}\right)\left(x^{U_{l}}_{X}-1\right)}-\frac{({x^{U_{l}}_{Y}})^{2}}{\left(x^{U_{l}}_{Y}-x^{\xi^{0}}_{Y}\right)\left(x^{U_{l}}_{Y}-1\right)}\right]\ln m_{U_{l}}$$
(113)
$$\displaystyle-$$
$$\displaystyle\left[\frac{({x^{\xi^{0}}_{X}})^{2}}{\left(x^{\xi^{0}}_{X}-x^{U_{l}}_{X}\right)\left(x^{\xi^{0}}_{X}-1\right)}-\frac{({x^{\xi^{0}}_{Y}})^{2}}{\left(x^{\xi^{0}}_{Y}-x^{U_{l}}_{Y}\right)\left(x^{\xi^{0}}_{Y}-1\right)}\right]\ln m_{\xi^{0}}$$
$$\displaystyle+$$
$$\displaystyle\frac{x^{U_{l}}_{X}x^{\xi^{0}}_{X}}{\left(x^{U_{l}}_{X}-1\right)\left(x^{\xi^{0}}_{X}-1\right)}\ln m_{X}-\frac{x^{U_{l}}_{Y}x^{\xi^{0}}_{Y}}{\left(x^{U_{l}}_{Y}-1\right)\left(x^{\xi^{0}}_{Y}-1\right)}\ln m_{Y}$$
$$\displaystyle\Gamma^{U_{l}\xi}$$
$$\displaystyle=$$
$$\displaystyle\left[\frac{({x^{U_{l}}_{X}})^{2}}{\left(x^{U_{l}}_{X}-x^{\xi}_{X}\right)\left(x^{U_{l}}_{X}-1\right)}-\frac{({x^{U_{l}}_{Y}})^{2}}{\left(x^{U_{l}}_{Y}-x^{\xi}_{Y}\right)\left(x^{U_{l}}_{Y}-1\right)}\right]\ln m_{U_{l}}$$
(114)
$$\displaystyle-$$
$$\displaystyle\left[\frac{({x^{\xi}_{X}})^{2}}{\left(x^{\xi}_{X}-x^{U_{l}}_{X}\right)\left(x^{\xi}_{X}-1\right)}-\frac{({x^{\xi}_{Y}})^{2}}{\left(x^{\xi}_{Y}-x^{U_{l}}_{Y}\right)\left(x^{\xi}_{Y}-1\right)}\right]\ln m_{\xi}$$
$$\displaystyle+$$
$$\displaystyle\frac{x^{U_{l}}_{X}x^{\xi}_{X}}{\left(x^{U_{l}}_{X}-1\right)\left(x^{\xi}_{X}-1\right)}\ln m_{X}-\frac{x^{U_{l}}_{Y}x^{\xi}_{Y}}{\left(x^{U_{l}}_{Y}-1\right)\left(x^{\xi}_{Y}-1\right)}\ln m_{Y}$$
with $x^{a}_{b}=\frac{m_{a}^{2}}{m^{2}_{b}}.$
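Eqs. (112)–(114) likewise share one template in the two fermion masses $m_{f},m_{g}$ and the boson masses $m_{X},m_{Y}$. A sympy sketch (again an illustration, not the authors' code), with the degenerate-boson check $m_{X}=m_{Y}\Rightarrow\Gamma^{fg}=0$ that follows from the $X$/$Y$ antisymmetry of the expression:

```python
import sympy as sp

def x(a2, b2):
    # x^a_b = m_a^2 / m_b^2
    return a2 / b2

def gamma_fg(mf2, mg2, mX2, mY2):
    """Template for the two-mass functions Gamma^{f g} of Eqs. (112)-(114)."""
    xfX, xfY = x(mf2, mX2), x(mf2, mY2)
    xgX, xgY = x(mg2, mX2), x(mg2, mY2)
    return (
        (xfX**2 / ((xfX - xgX) * (xfX - 1)) - xfY**2 / ((xfY - xgY) * (xfY - 1)))
        * sp.log(sp.sqrt(mf2))
        - (xgX**2 / ((xgX - xfX) * (xgX - 1)) - xgY**2 / ((xgY - xfY) * (xgY - 1)))
        * sp.log(sp.sqrt(mg2))
        + xfX * xgX / ((xfX - 1) * (xgX - 1)) * sp.log(sp.sqrt(mX2))
        - xfY * xgY / ((xfY - 1) * (xgY - 1)) * sp.log(sp.sqrt(mY2))
    )

mU2, mE2, mX2, mY2 = sp.symbols('mU2 mE2 mX2 mY2', positive=True)
G = gamma_fg(mU2, mE2, mX2, mY2)  # Gamma^{U_l E_c} of Eq. (112)
print(sp.simplify(G.subs(mY2, mX2)))  # degenerate gauge bosons -> 0
```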
Appendix D New couplings of gauge bosons
References
(1)
BaBar Collaboration, J. P. Lees et al., “Evidence for an
excess of $\bar{B}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ decays,”
Phys. Rev.
Lett. 109 (2012) 101802,
arXiv:1205.5442 [hep-ex].
(2)
BaBar Collaboration, J. P. Lees et al., “Measurement of an
Excess of $\bar{B}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ Decays and Implications
for Charged Higgs Bosons,”
Phys. Rev. D
88 no. 7, (2013) 072012,
arXiv:1303.0571 [hep-ex].
(3)
Belle Collaboration, M. Huschle et al., “Measurement of the
branching ratio of $\bar{B}\to D^{(\ast)}\tau^{-}\bar{\nu}_{\tau}$ relative to
$\bar{B}\to D^{(\ast)}\ell^{-}\bar{\nu}_{\ell}$ decays with hadronic tagging
at Belle,” Phys.
Rev. D 92 no. 7, (2015) 072014,
arXiv:1507.03233
[hep-ex].
(4)
Belle Collaboration, A. Abdesselam et al., “Measurement of
the branching ratio of $\bar{B}^{0}\rightarrow D^{*+}\tau^{-}\bar{\nu}_{\tau}$
relative to $\bar{B}^{0}\rightarrow D^{*+}\ell^{-}\bar{\nu}_{\ell}$ decays
with a semileptonic tagging method,” in 51st Rencontres de Moriond on
EW Interactions and Unified Theories.
3, 2016.
arXiv:1603.06711
[hep-ex].
(5)
A. Abdesselam et al., “Measurement of the $\tau$ lepton polarization in
the decay ${\bar{B}}\rightarrow D^{*}\tau^{-}{\bar{\nu}_{\tau}}$,”
arXiv:1608.06391
[hep-ex].
(6)
LHCb Collaboration, R. Aaij et al., “Measurement of the
ratio of branching fractions $\mathcal{B}(\bar{B}^{0}\to D^{*+}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(\bar{B}^{0}\to D^{*+}\mu^{-}\bar{\nu}_{\mu})$,”
Phys. Rev.
Lett. 115 no. 11, (2015) 111803,
arXiv:1506.08614
[hep-ex]. [Erratum: Phys.Rev.Lett. 115, 159901 (2015)].
(7)
D. Bigi and P. Gambino, “Revisiting $B\to D\ell\nu$,”
Phys. Rev. D
94 no. 9, (2016) 094008,
arXiv:1606.08030
[hep-ph].
(8)
S. Fajfer, J. F. Kamenik, and I. Nisandzic, “On the $B\to D^{*}\tau\bar{\nu}_{\tau}$ Sensitivity to New Physics,”
Phys. Rev. D
85 (2012) 094025,
arXiv:1203.2654 [hep-ph].
(9)
D. Bečirević, N. Košnik, and A. Tayduganov, “$\bar{B}\to D\tau\bar{\nu}_{\tau}$ vs. $\bar{B}\to D\mu\bar{\nu}_{\mu}$,”
Phys. Lett. B
716 (2012) 208–213,
arXiv:1206.4977 [hep-ph].
(10)
F. U. Bernlochner, Z. Ligeti, M. Papucci, and D. J. Robinson, “Combined
analysis of semileptonic $B$ decays to $D$ and $D^{*}$: $R(D^{(*)})$,
$|V_{cb}|$, and new physics,”
Phys. Rev. D
95 no. 11, (2017) 115008,
arXiv:1703.05330
[hep-ph]. [Erratum: Phys.Rev.D 97, 059902 (2018)].
(11)
D. Bigi, P. Gambino, and S. Schacht, “$R(D^{*})$, $|V_{cb}|$, and the Heavy
Quark Symmetry relations between form factors,”
JHEP 11
(2017) 061, arXiv:1707.09509 [hep-ph].
(12)
S. Jaiswal, S. Nandi, and S. K. Patra, “Extraction of $|V_{cb}|$ from $B\to D^{(*)}\ell\nu_{\ell}$ and the Standard Model predictions of $R(D^{(*)})$,”
JHEP 12
(2017) 060, arXiv:1707.09977 [hep-ph].
(13)
HFLAV Collaboration, Y. Amhis et al., “Averages of
$b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016,”
Eur. Phys. J. C
77 no. 12, (2017) 895,
arXiv:1612.07233
[hep-ex].
(14)
A. Crivellin, C. Greub, and A. Kokulu, “Explaining $B\to D\tau\nu$, $B\to D^{*}\tau\nu$ and $B\to\tau\nu$ in a 2HDM of type III,”
Phys. Rev. D
86 (2012) 054014,
arXiv:1206.2634 [hep-ph].
(15)
A. Celis, M. Jung, X.-Q. Li, and A. Pich, “Sensitivity to charged scalars in
$\bm{B\to D^{(*)}\tau\nu_{\tau}}$ and $\bm{B\to\tau\nu_{\tau}}$
decays,” JHEP
01 (2013) 054, arXiv:1210.8443 [hep-ph].
(16)
A. Crivellin, A. Kokulu, and C. Greub, “Flavor-phenomenology of
two-Higgs-doublet models with generic Yukawa structure,”
Phys. Rev. D
87 no. 9, (2013) 094031,
arXiv:1303.5877 [hep-ph].
(17)
A. Greljo, G. Isidori, and D. Marzocca, “On the breaking of Lepton Flavor
Universality in B decays,”
JHEP 07
(2015) 142, arXiv:1506.01705 [hep-ph].
(18)
S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente, and J. Virto,
“Phenomenology of an $SU(2)\times SU(2)\times U(1)$ model with
lepton-flavour non-universality,”
JHEP 12
(2016) 059, arXiv:1608.01349 [hep-ph].
(19)
A. Greljo, D. J. Robinson, B. Shakya, and J. Zupan, “R(D${}^{(∗)}$) from
W${}^{′}$ and right-handed neutrinos,”
JHEP 09
(2018) 169, arXiv:1804.04642 [hep-ph].
(20)
I. Doršner, S. Fajfer, A. Greljo, J. F. Kamenik, and N. Košnik,
“Physics of leptoquarks in precision experiments and at particle
colliders,” Phys. Rept. 641 (2016) 1–68,
arXiv:1603.04993
[hep-ph].
(21)
M. Bauer and M. Neubert, “Minimal Leptoquark Explanation for the
$R_{D^{(*)}}$ , $R_{K}$ , and $(g-2)_{\mu}$ Anomalies,”
Phys. Rev.
Lett. 116 no. 14, (2016) 141802,
arXiv:1511.01900
[hep-ph].
(22)
S. Fajfer and N. Košnik, “Vector leptoquark resolution of $R_{K}$ and
$R_{D^{(*)}}$ puzzles,”
Phys. Lett. B
755 (2016) 270–274,
arXiv:1511.06024
[hep-ph].
(23)
R. Barbieri, G. Isidori, A. Pattori, and F. Senia, “Anomalies in $B$-decays
and $U(2)$ flavour symmetry,”
Eur. Phys. J. C
76 no. 2, (2016) 67,
arXiv:1512.01560
[hep-ph].
(24)
D. Bečirević, S. Fajfer, N. Košnik, and O. Sumensari, “Leptoquark
model to explain the $B$-physics anomalies, $R_{K}$ and $R_{D}$,”
Phys. Rev. D
94 no. 11, (2016) 115021,
arXiv:1608.08501
[hep-ph].
(25)
G. Hiller, D. Loose, and K. Schönwald, “Leptoquark Flavor Patterns & B
Decay Anomalies,” JHEP 12 (2016) 027,
arXiv:1609.08895
[hep-ph].
(26)
A. Crivellin, D. Müller, and T. Ota, “Simultaneous explanation of
$R(D^{(∗)})$ and $b\to s\mu^{+}\mu^{-}$: the last scalar leptoquarks
standing,” JHEP
09 (2017) 040, arXiv:1703.09226 [hep-ph].
(27)
A. Abada, A. M. Teixeira, A. Vicente, and C. Weiland, “Sterile neutrinos in
leptonic and semileptonic decays,”
JHEP 02
(2014) 091, arXiv:1311.2830
[hep-ph].
(28)
G. Cvetic and C. S. Kim, “Rare decays of B mesons via on-shell sterile
neutrinos,” Phys.
Rev. D 94 no. 5, (2016) 053001,
arXiv:1606.04140
[hep-ph]. [Erratum: Phys.Rev.D 95, 039901 (2017)].
(29)
LHCb Collaboration, R. Aaij, B. Adeva, M. Adinolfi, A. Affolder,
Z. Ajaltouni, S. Akar, J. Albrecht, F. Alessio, M. Alexander, S. Ali, and
et al., “Test of lepton universality using $\text{B}^{+}\rightarrow\text{K}^{+}\text{l}^{+}\text{l}^{-}$ decays,”
Phys. Rev. Lett
113 no. 15, (2014) 151601,
arXiv:1406.6482 [hep-ex].
(30)
LHCb Collaboration, R. Aaij, C. Abellán Beteta, B. Adeva,
M. Adinolfi, C. Aidala, Z. Ajaltouni, S. Akar, P. Albicocco, J. Albrecht,
F. Alessio, and et al., “Search for lepton-universality violation in
$\text{B}^{+}\rightarrow\text{K}^{+}\text{l}^{+}\text{l}^{-}$ decays,”
Phys. Rev.
Lett 122 no. 19, (2019) 191801,
arXiv:1903.09252
[hep-ex].
(31)
Belle Collaboration, S. Choudhury et al., “Test of lepton
flavor universality and search for lepton flavor violation in $\text{B}\rightarrow\text{K}\text{l}^{+}\text{l}^{-}$ decays,”
JHEP 2021
no. 3, (2021) 105, arXiv:1908.01848 [hep-ex].
(32)
LHCb Collaboration, R. Aaij, C. A. Beteta, T. Ackernley, B. Adeva,
M. Adinolfi, H. Afsharnia, and et al., “Test of lepton universality in
beauty-quark decays,” arXiv:2103.11769 [hep-ex].
(33)
LHCb Collaboration, R. Aaij, B. Adeva, M. Adinolfi, Z. Ajaltouni,
S. Akar, J. Albrecht, F. Alessio, M. Alexander, S. Ali, and et al., “Test
of lepton universality with $\text{B}^{0}\rightarrow\text{K}^{0*}\text{l}^{+}\text{l}^{-}$ decays,” JHEP 2017 no. 8, (2017) 55,
arXiv:1705.05802
[hep-ex].
(34)
Belle Collaboration, S. Wehle, I. Adachi, K. Adamczyk, H. Aihara,
D. Asner, H. Atmacan, V. Aulchenko, T. Aushev, R. Ayad, V. Babu, and et al.,
“Test of lepton universality with $\text{B}^{0}\rightarrow\text{K}^{0*}\text{l}^{+}\text{l}^{-}$ decays at Belle,”
Phys. Rev. Lett.
126 no. 16, (2021) 161801,
arXiv:1904.02440
[hep-ex].
(35)
LHCb Collaboration, “Test of lepton universality in $b\rightarrow s\ell^{+}\ell^{-}$ decays,” 2022.
(36)
LHCb Collaboration, “Measurement of lepton universality parameters in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ and $B^{0}\to K^{*0}\ell^{+}\ell^{-}$ decays,” 2022.
(37)
M. Bordone, G. Isidori, and A. Pattori, “On
the standard model predictions for $\text{R}_{\text{K}},\text{R}_{\text{K}^{*}}$,”
Eur. Phys. J. C
76 no. 8, (2016) 440,
arXiv:1605.07633
[hep-ph].
(38)
D. M. Straub, “flavio: a Python package for flavour and precision
phenomenology in the Standard Model and beyond,”
arXiv:1810.08132
[hep-ph].
(39)
W. Altmannshofer, C. Niehoff, P. Stangl, and D. M. Straub, “Status of
the $\text{B}^{0}\rightarrow\text{K}^{0*}\text{l}^{+}\text{l}^{-}$ anomaly after
Moriond 2017,” Eur. Phys. J. C 77 no. 6, (2017) 377,
arXiv:1703.09189
[hep-ph].
(40)
S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente, and J. Virto,
“Non-abelian gauge extensions for B-decay anomalies,”
Phys. Lett. B
760 (2016) 214–219,
arXiv:1604.03088
[hep-ph].
(41)
R. Alonso, B. Grinstein, and J. Martin Camalich, “Lepton universality
violation and lepton flavor conservation in $B$-meson decays,”
JHEP 10
(2015) 184, arXiv:1505.05164 [hep-ph].
(42)
D. Buttazzo, A. Greljo, G. Isidori, and D. Marzocca, “Toward a coherent
solution of diphoton and flavor anomalies,”
JHEP 08
(2016) 035, arXiv:1604.03940 [hep-ph].
(43)
B. Bhattacharya, A. Datta, D. London, and S. Shivashankara, “Simultaneous
Explanation of the $R_{K}$ and $R(D^{(*)})$ Puzzles,”
Phys. Lett. B
742 (2015) 370–374,
arXiv:1412.7164 [hep-ph].
(44)
L. Calibbi, A. Crivellin, and T. Ota, “Effective Field Theory Approach to
$b\to s\ell\ell^{(^{\prime})}$, $B\to K^{(*)}\nu\overline{\nu}$ and $B\to D^{(*)}\tau\nu$ with Third Generation Couplings,”
Phys. Rev.
Lett. 115 (2015) 181801,
arXiv:1506.02661
[hep-ph].
(45)
D. Van Loi, P. Van Dong, and D. Van Soa, “Neutrino mass and dark matter from
an approximate $\text{B}-\text{L}$ symmetry,”
JHEP 2020
no. 5, (2020) 090, arXiv:1911.04902 [hep-ph].
(46)
F. Pisano and V. Pleitez, “$\text{SU(3)}\times\text{U(1)}$ model for
electroweak interactions,”
Phys. Rev. D
46 no. 1, (1992) 410,
arXiv:hep-ph/9206242.
(47)
P. H. Frampton, “Chiral dilepton model and the flavor question,”
Phys. Rev. Lett
69 (1992) 2889.
(48)
R. Foot, O. F. Hernández, F. Pisano, and V. Pleitez, “Lepton masses in an
$\text{SU(3)}_{\text{L}}\times\text{U(1)}_{\text{N}}$ gauge model,”
Phys. Rev. D
47 no. 9, (1993) 4158,
arXiv:hep-ph/9207264.
(49)
M. Singer, J. W. F. Valle, and J. Schechter, “Canonical
neutral-current predictions from the weak-electromagnetic gauge group
$\text{SU(3)}\times\text{U(1)}$,”
Phys. Rev. D
22 (1980) 738.
(50)
J. C. Montero, F. Pisano, and V. Pleitez, “Neutral currents and
glashow-iliopoulos-maiani mechanism in $\text{SU(3)}_{\text{L}}\times\text{U(1)}_{\text{N}}$ models for electroweak interactions,”
Phys. Rev. D
47 no. 7, (1993) 2918,
arXiv:hep-ph/9212271.
(51)
R. Foot, H. N. Long, and T. A. Tran, “$\text{SU(3)}_{\text{L}}\times\text{U(1)}_{\text{N}}$ and $\text{SU(4)}_{\text{L}}\times\text{U(1)}_{\text{N}}$
gauge models with right-handed neutrinos,”
Phys. Rev. D
50 no. 1, (1994) R34,
arXiv:hep-ph/9402243.
(52)
D. Huong, D. Dinh, L. Thien, and P. Van Dong, “Dark matter and flavor changing
in the flipped 3-3-1 model,”
JHEP 2019
no. 8, (2019) 051, arXiv:1906.05240 [hep-ph].
(53)
N. T. Duy, P. N. Thu, and D. T. Huong, “New physics in $\text{b}\rightarrow\text{s}$ transitions in the MF331 model,”
arXiv:2205.02995
[hep-ph].
(54)
R. M. Fonseca and M. Hirsch, “A flipped 331 model,”
JHEP 2016
no. 8, (2016) 003, arXiv:1606.01109 [hep-ph].
(55)
Z. Ligeti and F. J. Tackmann, “Precise predictions for
$b\rightarrow{X}_{c}\tau\overline{\nu}$
decay distributions,”
Phys. Rev. D
90 (Aug, 2014) 034021.
https://link.aps.org/doi/10.1103/PhysRevD.90.034021.
(56)
Particle Data Group Collaboration, R. L. Workman et al.,
“Review of Particle Physics,”
PTEP 2022
(2022) 083C01.
(57)
V. Cirigliano and I. Rosell, “$\pi/K\to e\bar{\nu}_{e}$ branching
ratios to $O(e^{2}p^{4})$ in Chiral Perturbation Theory,”
JHEP 10 (2007) 005, arXiv:0707.4464 [hep-ph].
Visualizing Gender Gap in Film Industry over the Past 100 Years
Junkai Man
Ruitian Wu
Chenglin Zhang
Xin Tong
Corresponding Author: Xin Tong – Division of Natural and Applied Sciences, Duke Kunshan University, Kunshan, Jiangsu 215316, China; Email: xt43@duke.edu. Authors: Junkai Man, Ruitian Wu, Chenglin Zhang – Division of Natural and Applied Sciences, Duke Kunshan University, Kunshan, Jiangsu 215316. This report is co-authored by the three authors.
Duke Kunshan University
Abstract
Visualizing big data can provide valuable insights into social science research. In this project, we focused on visualizing the potential gender gap in the global film industry over the past 100 years. We profiled the differences both for the actors/actresses and male/female movie audiences and analyzed the IMDb data of the most popular 10,000 movies (the composition and importance of casts of different genders, the cooperation network of the actors/actresses, the movie genres, the movie descriptions, etc.) and audience ratings (the differences between male’s and female’s ratings). Findings suggest that the gender gap has been distinct in many aspects, but a recent trend is that this gap narrows down and women are gaining discursive power in the film industry. Our study presented rich data, vivid illustrations, and novel perspectives that can serve as the foundation for further studies on related topics and their social implications.
Keywords: Data Visualization, Movie Industry, Genders.
Introduction
The colorful and epic motion picture industry has enjoyed more than 100 years of prosperity; at the same time, it reflects social trends, public opinions, and even stereotypes. In turn, great movies and producers also profoundly impact the development of the movie industry and society itself. We are inspired by the famous movie series Star Wars, which is mentioned by Kagan et al. [1] in Figure 1. The gender ratio in the first few Star Wars episodes is quite skewed, but it is more balanced in the most recent episodes. In our work, we investigate this phenomenon of the narrowing gender gap in a broader context. In addition, we also study the changes in the preferences of male and female audiences, since the audience ratings of the recent Star Wars episodes are lower than those of the early ones.
Males and females play different roles, either as actors/actresses in the movie production process or as audiences. In the movie production process, actors’ and actresses’ engagement differs in movie ratings (e.g., PG-13, R), genres, movie content, and cooperative networks. As audiences, males’ and females’ movie preferences vary over time, e.g., their ratings for each movie. Therefore, we define the differences between males and females in these two aspects as the gender gap. In this project, we aim to visualize this gender gap using time-series data from 1920 to 2021 to (1) prove the existence of this gender gap and (2) further understand its evolution over the past 100 years. By looking into these visualizations, we can gain valuable insights into the general trend of the gender gap over the 100 years, and thus have a starting point for further investigation and statistical examination.
More precisely, our visualizations aim to: 1) Compare actor- and actress-dominant movie genres for the past 100 years. 2) Identify collaborations among actors and actresses. 3) Categorize the yearly number of films by age ratings and protagonist genders. 4) Compare the descriptions for the actor- and actress-dominant movies. 5) Identify male and female audiences’ movie preferences.
1 Visualization Method and Results
Here, we introduce the data collection, processing, and visualization methods used to achieve our research goals. In total, we crawled 10,200 movie items from 1920 to 2021, taking the 100 most-voted movies of each year. Each data item includes the movie title, year, genre, runtime, certificate, IMDb rating (total and for male and female raters), movie description, director name, star names, their corresponding IMDb IDs, number of votes, and box office takings (if available). Moreover, we calculated an index indicating whether the movie is actor- or actress-dominant by checking the cast list and the order in which the stars are listed. Then, we split the dataset into two groups (actor- and actress-dominant) by this index; the two groups' sizes are in a ratio of about 3:2 (actor:actress).
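The dominance index can be sketched from the billing order of the cast list. Everything below is illustrative: the (name, gender) tuples, the 1/rank weighting, and the tie-breaking rule are our assumptions, not the exact rule applied to the IMDb data:

```python
from collections import Counter

def dominance_index(cast):
    """Classify a movie as 'actor'- or 'actress'-dominant.

    cast: list of (name, gender) tuples, most prominently billed first.
    Earlier billing positions get larger weights, so the lead matters most.
    """
    weights = Counter()
    for position, (_, gender) in enumerate(cast):
        weights[gender] += 1.0 / (position + 1)  # hypothetical 1/rank weighting
    return "actor" if weights["M"] >= weights["F"] else "actress"

cast = [("Lead A", "F"), ("Support B", "M"), ("Support C", "M")]
print(dominance_index(cast))  # -> actress
```

Splitting the 10,200 crawled movies into the two groups then reduces to a single pass over the cast lists.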
1.1 Co-stardom Network
First, we compared how actors and actresses cooperate differently in different periods through a network diagram showing the number of co-occurrences between actors/actresses. In Figure 2, nodes represent actors/actresses, while edges represent co-occurrences between them. Moreover, colors represent genders (yellow for male and blue for female), and node size reflects the number of appearances of each actor/actress. Although actors make up the main skeleton of the whole network and occupy the most significant nodes, actresses play increasingly important roles in the network as time passes (Figure 2). The network is becoming more balanced in terms of gender, which indicates that actresses are securing a relatively equal place in the film industry compared to actors. Such balance is reflected by movie ratings as well: in recent years, the rating distribution has become more homogeneous, indicating that the gap between the two types of movies is decreasing (they are shifting downward). Nowadays, restrictions on the ratings of movies that actresses can star in are gradually being removed.
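The co-occurrence counts underlying such a network can be accumulated with the standard library alone (drawing the diagram is then a job for a graph/visualization library); the cast lists below are toy data, not from the dataset:

```python
from collections import Counter
from itertools import combinations

def costar_edges(movies):
    """Count pairwise co-appearances of cast members across movies.

    movies: iterable of cast lists (names per movie).
    Returns a Counter keyed by unordered (sorted) name pairs.
    """
    edges = Counter()
    for cast in movies:
        for a, b in combinations(sorted(set(cast)), 2):
            edges[(a, b)] += 1
    return edges

movies = [["A", "B", "C"], ["A", "B"], ["B", "C"]]
edges = costar_edges(movies)
print(edges[("A", "B")])  # -> 2
```

The edge weights map to edge thickness, and counting each name's total appearances gives the node sizes used in a diagram like Figure 2.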
1.2 Movie Rating Theme River
The rating of a movie (e.g., PG-13, R, etc.) is an important characteristic indicating its target audience. We visualized a Theme River diagram to show the distribution of rating differences between actor- and actress-dominant movies. Figure 3 places actor- and actress-dominant movies above and below the x-axis, respectively. Colors represent ratings, and their widths represent the number of occurrences. In this visualization, the types of ratings can be selected in a customized way.
1.3 Movie Theme Word Cloud
Another important indicator for a movie is its keywords, so we extracted keywords from the movies’ descriptions. We visualized two word clouds in Figure 4 to illustrate how actor- and actress-dominant movies’ keywords differ. Clear distinctions can be observed between the two groups, even though they share some similar general topics. For actor-dominant movies, there are topics related to “war”, “world” and “murder”. For actress-dominant ones, words like “girl”, “wife” and “mother” are observed, indicating more exploration of self-identity in these movies. The main story behind actor- and actress-dominant movies still varies, reflecting diversity rather than the gender gap.
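The keyword extraction can be sketched as a simple token-frequency count over the descriptions; the stopword list and the two toy descriptions below are illustrative, not the actual pipeline:

```python
import re
from collections import Counter

# Hypothetical, minimal stopword list for illustration.
STOPWORDS = {"a", "an", "the", "of", "and", "to", "in", "his", "her", "with"}

def keywords(descriptions, top=5):
    """Most frequent non-stopword tokens across movie descriptions."""
    counts = Counter()
    for text in descriptions:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(top)

docs = ["A war hero returns to the world", "The war changes his world"]
print(keywords(docs, top=2))  # -> [('war', 2), ('world', 2)]
```

Running this separately over the actor- and actress-dominant groups yields the frequencies that feed the two word clouds of Figure 4.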
1.4 Genre Chord Diagram
We developed a Chord Diagram (Figure 5) to compare the genre differences between actor- and actress-dominant movies. In Figure 5, the left chord represents the distribution of film genres for actor-dominant movies, while the right chord represents the distribution for actress-dominant movies. In the visualization, a time period can be selected so that users can observe the changes over time. We found that before the 1970s, most actresses starred in drama and romance films, while actors dominated action, crime, and adventure movies. However, the gap has been narrowing in recent years as the compositions of the two chord diagrams converge. When choosing actors/actresses to star in a particular film, gender has become a less critical criterion.
1.5 Bubble Chart of Audience Preferences
Besides the actors/actresses’ gender gap, the audience is also a crucial aspect to consider when analyzing the gender gap. Here, we present two bubble charts showing similarities and differences in audience preferences between the two genders. In Figure 6, the top chart shows aggregated ratings for both male and female audiences, while the bottom chart shows their preference gap: a higher value indicates a stronger preference among male audiences, and vice versa. While the gender gap between actors/actresses is narrowing, the audience gap is evolving in the opposite direction. We noticed that, compared to movies made before 2000, female audiences now have an equal or even higher weight in determining the popularity of movies. The gap also indicates that the difference in taste between male and female audiences is becoming larger than before. Producers nowadays should spend more effort to satisfy audiences of different genders.
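The preference gap plotted in the bottom chart reduces to a per-movie difference of the gender-split IMDb ratings; a minimal sketch with made-up numbers:

```python
def preference_gap(ratings):
    """Per-movie gap between male and female IMDb ratings.

    ratings: dict mapping title -> (male_rating, female_rating).
    Positive gap = preferred more by male raters; negative = by female raters.
    """
    return {title: round(m - f, 2) for title, (m, f) in ratings.items()}

sample = {"Movie A": (8.1, 7.5), "Movie B": (6.9, 7.4)}
print(preference_gap(sample))  # -> {'Movie A': 0.6, 'Movie B': -0.5}
```

Aggregating this gap by release year gives the time trend discussed above.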
2 Conclusion
In conclusion, our visualizations suggest that the gender gap in the movie industry is diminishing, especially in movie casts. However, the preferences of male and female audiences have become more diverse in recent years. The five visualizations give us a glimpse into the movie industry’s development and potential future trends, which is essential at a time when more people are advocating for feminism. Overall, this visualization project provides researchers with a foundation to dig into the gender gap topic in movies. (The demo page of this project can be found at https://junkaiman.com/projects/gender-gap-in-film.)
Acknowledgements.
The authors thank Duke Kunshan University for the academic support throughout this project.
References
[1]
D. Kagan, T. Chesney, and M. Fire.
Using data science to understand the film industry’s gender gap.
CoRR, abs/1903.06469, 2019.
Generalized Geometries Constructed from Differential Forms on Cotangent Bundle
Radek Suchánek (Department of Mathematics and Statistics, Masaryk University, Brno, Czech Republic)
Abstract. We investigate the landscape of generalized geometries that can be derived from Monge-Ampère structures. Instead of following the approaches of Banos, Roubtsov, Kosmann-Schwarzbach, and others, we take a new path, inspired by the results of Hu, Moraru, and Svoboda. We construct a large family of new generalized almost geometries derived from non-degenerate 2D symplectic Monge-Ampère structures and other related geometric objects, such as complex structures. We demonstrate that, under certain assumptions, non-degenerate Monge-Ampère structures give rise to quadric surfaces of generalized almost geometries. Additionally, we discuss the link between Monge-Ampère structures and Monge-Ampère equations within this framework.
Contents
1 Introduction
2 Monge-Ampère structures and equations
2.1 Monge-Ampère theory in 2D
2.2 Field of endomorphisms
3 Generalized geometry of M-A structures
3.1 Generalized (almost) structures
3.1.1 Isotropy, Dirac structures, and integrability
3.2 Generalized geometry and M-A theory
3.3 Quadric surfaces of generalized geometries
1 Introduction
Construction of generalized structures directly from tensor fields of the corresponding block representation was considered by many authors [3, 7, 11, 13, 14, 24, 25]. An example of this approach relevant to our case was Crainic’s paper [7], where the relations for tensor fields defining generic (integrable) generalized complex structures were described. Using the notion of a twist of a $2$-form by an endomorphism, Crainic proved the correspondence between generalized complex structures and Hitchin pairs. By Lychagin’s results [15, 16, 17], every 2D Monge-Ampère equation can be encoded by a pair of $2$-forms, which moreover determine a specific endomorphism. Applying Monge-Ampère theory in the context of generalized complex geometry, Banos showed in [3] that Monge-Ampère structures of divergence type give rise to generalized almost complex structures, and integrability of these structures is connected with the local equivalence problem for 2D symplectic Monge-Ampère equations with non-vanishing Pfaffian (i.e. elliptic or hyperbolic equations). This was further used by Kosmann-Schwarzbach and Rubtsov in [14] to deform a Lie algebroid structure on $\mathbb{T}\left(T^{*}\mathcal{B}\right)$, which further induces a Courant algebroid structure on $\mathbb{T}\left(T^{*}\mathcal{B}\right)$ called by the authors Monge-Ampère Courant algebroid.
In this paper, we explore further the landscape of generalized geometries that can be derived from Monge-Ampère structures. There were two apparent ways to do this: either go into higher dimensions, especially dimension three, where a symplectic classification is still possible, or use a completely different method than Banos did. We took the latter path. Motivated by the results in [13], where anticommutative pairs and generalized-metric-compatible structures are considered, we constructed many new generalized almost geometries derived from non-degenerate 2D symplectic Monge-Ampère structures and from other geometric objects they define (e.g. a complex structure and a pseudo-Riemannian metric). Some of these generalized almost geometries can be proven to be integrable by virtue of results in [2, 3, 14, 15, 18]. On the other hand, the standard notion of integrability via the Nijenhuis tensor (or equivalently via Courant involutivity of certain almost Dirac structures) cannot be applied to many of the geometries constructed by our method. For example, in the non-isotropic cases, one can consider the notion of weak integrability instead [13, 19].
Original content of the paper. With the help of Monge-Ampère theory [16, 17, 15], we construct a family of generalized almost structures associated to 2D symplectic M-A structures. We show that, under certain assumptions, non-degenerate M-A structures give rise to quadric surfaces of generalized almost geometries. In this framework, we discuss the link between M-A structures and M-A equations.
2 Monge-Ampère structures and equations
Let $\mathcal{M}$ be an $n$-dimensional symplectic manifold; note that $n$ is then necessarily even. We now define a specific pair of differential forms, which gives rise to other geometric structures that we will study in the next sections.
Definition 2.1.
A pair $(\Omega,\alpha)\in\Omega^{2}\left(\mathcal{M}\right)\times\Omega^{n}\left(\mathcal{M}\right)$ is called a Monge-Ampère structure over $\mathcal{M}$ (or just M-A structure), if $\Omega$ is a symplectic form and
$$\displaystyle\Omega\wedge\alpha=0\ .$$
If $\mathcal{M}=T^{*}\mathcal{B}$, then $(\Omega,\alpha)$ is called a symplectic Monge-Ampère structure.
Notation. In our considerations, we will be dealing with symplectic M-A structures, and the symbol $\Omega$ will always denote the canonical symplectic form of the cotangent bundle.
2.1 Monge-Ampère theory in 2D
We now consider symplectic 2D M-A structures and show how they look in canonical symplectic coordinates. We then proceed with the description of the corresponding M-A operators and equations. We will discuss some unique features of 2D M-A theory and describe various tensor fields that can be constructed from (non-degenerate) 2D M-A structures, and which are crucial for our further considerations.
Let $\dim\mathcal{B}=2$, and recall that $\Omega$ denotes the canonical symplectic form on the cotangent bundle $T^{*}\mathcal{B}$. In the Darboux coordinates $x,y,p,q$ ($x,y$ are the base coordinates on $\mathcal{B}$), the symplectic form reads
$$\displaystyle\Omega=\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}p+\operatorname{\operatorname{d}}y\wedge\operatorname{\operatorname{d}}q\ .$$
(1)
Let $\alpha$ be a $2$-form given by
$$\displaystyle\begin{split}\alpha&=A\operatorname{\operatorname{d}}p\wedge\operatorname{\operatorname{d}}y+B(\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}p-\operatorname{\operatorname{d}}y\wedge\operatorname{\operatorname{d}}q)\\
&\qquad+C\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}q+D\operatorname{\operatorname{d}}p\wedge\operatorname{\operatorname{d}}q+E\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}y\ ,\end{split}$$
(2)
where the coefficients $A,B,C,D,E$ are smooth functions on $T^{*}\mathcal{B}$. Then $\alpha\wedge\Omega=0$ holds for every choice of the coefficients, and $(\Omega,\alpha)$ is a M-A structure.
Now let $f\colon\mathcal{B}\to\mathbb{R}$ be a smooth function. The differential of $f$ determines a section, $\operatorname{\operatorname{d}}f\colon\mathcal{B}\to T^{*}\mathcal{B}$, given by $\operatorname{\operatorname{d}}f(x):=\operatorname{\operatorname{d}}_{x}f$, and we can pull back $\alpha$ onto $\mathcal{B}$ to obtain a top form $(\operatorname{\operatorname{d}}f)^{*}\alpha\in\Omega^{2}(\mathcal{B})$. Then the equation
$$\displaystyle(\operatorname{\operatorname{d}}f)^{*}\alpha=0$$
(3)
defines a nonlinear second-order PDE for $f$ in two variables. If $\alpha$ is given by (2), then the equation (3) corresponds to the 2D symplectic Monge-Ampère equation
$$\displaystyle Af_{xx}+2Bf_{xy}+Cf_{yy}+D\left(f_{xx}f_{yy}-{f_{xy}}^{2}\right)+E=0\ ,$$
(4)
where $f_{xy}:=\frac{\partial^{2}f}{\partial x\partial y}$, and $A,B,C,D,E$ are smooth functions, which depend on $x,y,f_{x},f_{y}$.
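The passage from (2) and (3) to (4) is a mechanical pullback computation along the section $\operatorname{d}f$. As a machine check, the following sympy sketch (all helper names are ours; the coefficients $A,\dots,E$ are taken as constant symbols for simplicity) reproduces the left-hand side of (4):

```python
import sympy as sp

x, y = sp.symbols('x y')
A, B, C, D, E = sp.symbols('A B C D E')  # treated as constants for simplicity
f = sp.Function('f')(x, y)

# The section df: (x, y) -> (x, y, p, q) with p = f_x, q = f_y
u = sp.Matrix([x, y, sp.diff(f, x), sp.diff(f, y)])

# Coefficient matrix of alpha in the ordered coframe (dx, dy, dp, dq),
# entries alpha_ij = alpha(d_i, d_j), read off from equation (2)
al = sp.Matrix([
    [0,  E,  B,  C],
    [-E, 0, -A, -B],
    [-B, A,  0,  D],
    [-C, B, -D,  0],
])

# Pullback of the 2-form along df: the coefficient of dx^dy is a^T * al * b,
# where a, b are the columns of the Jacobian of the section
a, b = u.diff(x), u.diff(y)
pullback = sp.expand((a.T * al * b)[0])

fxx, fxy, fyy = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)
monge_ampere = A*fxx + 2*B*fxy + C*fyy + D*(fxx*fyy - fxy**2) + E
print(sp.simplify(pullback - monge_ampere))  # -> 0
```

The same computation with $A,\dots,E$ depending on $x,y,p,q$ goes through after substituting $p\mapsto f_{x}$, $q\mapsto f_{y}$ in the coefficients.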
In this way, we can represent M-A equations via pairs of differential forms, the M-A structures. Nonetheless, this representation has a certain ambiguity, which can be partially removed with the notion of effectivity. The condition $\Omega\wedge\alpha=0$ can be viewed as a definition of $\alpha$ being effective.
For a detailed exposition of the theory of Monge-Ampère equations and their applications, particularly in dimensions two and three, see [15]. For further applications of Monge-Ampère theory, for example in the theory of complex differential equations, see [4]; for applications in theoretical meteorology and incompressible fluid theory, see [23, 22, 20, 8, 5].
Example 2.1.
Let $\alpha=-\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}q+\operatorname{\operatorname{d}}y\wedge\operatorname{\operatorname{d}}p$. Then
$$\displaystyle(\operatorname{\operatorname{d}}f)^{*}\alpha=-\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}(f_{y})+\operatorname{\operatorname{d}}y\wedge\operatorname{\operatorname{d}}(f_{x})=(-f_{yy}-f_{xx})\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}y\ .$$
Hence $(\operatorname{\operatorname{d}}f)^{*}\alpha=0$ amounts to the 2D Laplace equation $f_{yy}+f_{xx}=0$, and $(\Omega,\alpha)$ is the corresponding M-A structure. Now let $\alpha=p\operatorname{\operatorname{d}}p\wedge\operatorname{\operatorname{d}}y+\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}q$. Then $(\operatorname{\operatorname{d}}f)^{*}\alpha=0$ describes the von Kármán-type equation $f_{x}f_{xx}+f_{yy}=0$.
Definition 2.2 ([15]).
Pfaffian of a 2D M-A structure, $\operatorname{\operatorname{Pf}}(\alpha)$, is defined by
$$\displaystyle\alpha\wedge\alpha=\operatorname{\operatorname{Pf}}(\alpha)\Omega\wedge\Omega\ .$$
(5)
An M-A structure is called non-degenerate if the Pfaffian is nowhere-vanishing. If $\operatorname{\operatorname{Pf}}(\alpha)>0$, then the structure is called elliptic, and if $\operatorname{\operatorname{Pf}}(\alpha)<0$, then it is called hyperbolic. We call a non-degenerate structure normalized, if $|\operatorname{\operatorname{Pf}}(\alpha)|=1$.
Remark 2.1.
In the modelling of stably stratified geophysical flows, the Pfaffian is related to Rellich’s parameter [26, 9, 21].
For a general 2D symplectic M-A structure, the Pfaffian in canonical coordinates reads
$$\displaystyle\operatorname{\operatorname{Pf}}(\alpha)=-B^{2}+AC-DE\ .$$
(6)
This can be directly checked using coordinate expressions (1) and (2).
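Formula (6) can also be obtained from the matrix Pfaffians of the coefficient matrices of $\alpha$ and $\Omega$: in four variables, the wedge square of a $2$-form equals twice the matrix Pfaffian of its coefficient matrix times the coordinate volume form, so $\operatorname{Pf}(\alpha)$ is the ratio of the two matrix Pfaffians. A sympy sketch of this check (the helper name pf4 is ours), which also verifies the identity $\det\underline{\alpha}={\operatorname{Pf}(\alpha)}^{2}$ used later in (32):

```python
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')

# Coefficient matrices in the ordered coframe (dx, dy, dp, dq)
Om = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])
al = sp.Matrix([[0, E, B, C], [-E, 0, -A, -B], [-B, A, 0, D], [-C, B, -D, 0]])

def pf4(c):
    # Matrix Pfaffian of a 4x4 antisymmetric matrix c; the associated
    # 2-form beta satisfies  beta ^ beta = 2 * pf4(c) * vol
    return c[0, 1]*c[2, 3] - c[0, 2]*c[1, 3] + c[0, 3]*c[1, 2]

Pf = sp.expand(pf4(al) / pf4(Om))    # alpha^alpha = Pf * Omega^Omega
print(Pf)                            # equals -B**2 + A*C - D*E, cf. (6)
print(sp.expand(al.det() - Pf**2))   # -> 0, i.e. det(alpha) = Pf(alpha)^2
```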
Example 2.2.
Let $(\Omega,\alpha)$ be the M-A structure described in example 2.1, i.e. the structure for the 2D Laplace equation. Then $\operatorname{Pf}(\alpha)=1$. Now consider the M-A structure for the von Kármán-type equation $f_{x}f_{xx}+f_{yy}=0$. Then $\operatorname{Pf}(\alpha)=p$. By the Lychagin-Rubtsov theorem (proposition 2.1 in section 2.2), the 2D Laplace equation gives rise to an integrable complex structure, while the von Kármán-type equation does not.
Normalization. A non-degenerate M-A structure can be normalized $(\Omega,\alpha)\mapsto(\Omega,n(\alpha))$ by
$$\displaystyle n(\alpha):={|\operatorname{\operatorname{Pf}}(\alpha)|}^{-\frac{1}{2}}\alpha$$
(7)
Indeed, the Pfaffian of $n(\alpha)$ satisfies $|\operatorname{\operatorname{Pf}}\left(n(\alpha)\right)|=1$, since
$$\displaystyle n(\alpha)\wedge n(\alpha)={|\operatorname{\operatorname{Pf}}(\alpha)|}^{-1}\alpha\wedge\alpha=\frac{\operatorname{\operatorname{Pf}}(\alpha)}{|\operatorname{\operatorname{Pf}}(\alpha)|}\Omega\wedge\Omega\ ,$$
which implies
$$\displaystyle\operatorname{\operatorname{Pf}}\left(n(\alpha)\right)=\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha)\ .$$
(8)
A non-degenerate M-A structure $(\Omega,\alpha)$ and its normalization $(\Omega,n(\alpha))$ correspond to the same M-A equation, since
$$\displaystyle(\operatorname{\operatorname{d}}f)^{*}n(\alpha)=\left.{|\operatorname{\operatorname{Pf}}(\alpha)|}^{-\frac{1}{2}}\right|_{\operatorname{Im}\operatorname{\operatorname{d}}f}(\operatorname{\operatorname{d}}f)^{*}\alpha=0\ .$$
Remark 2.2.
Notice that we could have rescaled $\alpha$ with an arbitrary non-vanishing function, which would result in a new M-A structure $(\Omega,\tilde{\alpha})$. Then, by the same argument as for the normalization, $(\Omega,\tilde{\alpha})$ defines the same M-A equation as $(\Omega,\alpha)$. The main reason we choose to work with normalized structures is that the generalized geometries that we will construct from $\alpha$ are not invariant with respect to the rescaling. Hence the normalization condition provides a consistent choice of the representative in the class $[\alpha]$, where $\tilde{\alpha}\in[\alpha]$ if and only if $\tilde{\alpha}=e^{h}\alpha$ for some function $h$. Of course, the same argument also holds for other structures derived from $\alpha$.
2.2 Field of endomorphisms
In the following paragraphs, we want to describe how every non-degenerate M-A structure defines a field of endomorphisms, which square either to $\operatorname{Id}_{T\left(T^{*}\mathcal{B}\right)}$ or to $-\operatorname{Id}_{T\left(T^{*}\mathcal{B}\right)}$. Before we do so, we need to fix some notation.
Notation. A tensor $\sigma\in\Gamma\left(T^{*}\mathcal{B}\otimes T^{*}\mathcal{B}\right)$ can be identified with a $C^{\infty}(\mathcal{B})$-linear map $\sigma_{\#}\colon\Gamma\left(T\mathcal{B}\right)\to\Gamma\left(T^{*}\mathcal{B}\right)$ defined by $\sigma_{\#}(X):=X\mathbin{\lrcorner}\sigma$. Similarly, if $\tau\in\Gamma\left(T\mathcal{B}\otimes T\mathcal{B}\right)$, then we denote by $\tau^{\#}$ the linear map $\tau^{\#}\colon\Gamma\left(T^{*}\mathcal{B}\right)\to\Gamma\left(T\mathcal{B}\right)$, where $\tau^{\#}(\xi):=\xi\mathbin{\lrcorner}\tau$. Now consider a non-degenerate $2$-form $\alpha\in\Omega^{2}(T^{*}\mathcal{B})$, which means that $\alpha_{\#}\colon T\mathcal{B}\to T^{*}\mathcal{B}$ is an isomorphism. Using the inverse $(\alpha_{\#})^{-1}\colon T^{*}\mathcal{B}\to T\mathcal{B}$, we define the bivector $\pi_{\alpha}\in\Gamma\left(\Lambda^{2}TT^{*}\mathcal{B}\right)$ by
$$\displaystyle(\pi_{\alpha})^{\#}:=(\alpha_{\#})^{-1}\ .$$
(9)
For example, if $\Omega=\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}p+\operatorname{\operatorname{d}}y\wedge\operatorname{\operatorname{d}}q$ is the canonical symplectic form, then $\pi_{\Omega}=\operatorname{\partial}_{x}\wedge\operatorname{\partial}_{p}+\operatorname{\partial}_{y}\wedge\operatorname{\partial}_{q}$ is the corresponding bivector, where $\partial_{x}:=\frac{\partial}{\partial x}$ is the $x$-coordinate vector field (and similarly for the other fields $\partial_{i}$).
When working with matrices, we will use the underline notation to make the distinction between a morphism and its matrix representation. For example, if $\rho\in\operatorname{End}\left(T\left(T^{*}\mathcal{B}\right)\right)$, then the corresponding matrix will be denoted $\underline{\rho}$ (which will always be understood with respect to the canonical coordinates of $\Omega$). Also, we will be omitting the $\#$ symbol when dealing with the morphisms derived from $2$-forms (and $2$-vectors), e.g. the matrix of $\alpha_{\#}$ will be denoted simply $\underline{\alpha}$. Similarly, the matrix of $(\pi_{\alpha})^{\#}$ will be denoted simply by $\underline{\alpha}^{-1}$ (see the defining equation (9)). When dealing with generalized structures, we write $\mathbb{J}$ for the coordinate-free description, as well as for the coordinate description via matrices. Since the block description of a generalized structure contains either coordinate-free objects or their matrices, and the distinction between the two is clear in our notation, there is not much space for confusion.
Almost complex and almost product structure. Let $\rho\in\operatorname{End}\left(TT^{*}\mathcal{B}\right)$ be an endomorphism defined by
$$\displaystyle\rho:={|\operatorname{\operatorname{Pf}}(\alpha)|}^{-\frac{1}{2}}\pi_{\Omega}^{\#}\circ\alpha_{\#}\ .$$
(10)
If $(\Omega,\alpha)$ is elliptic, then $\rho^{2}=-\operatorname{Id}_{T\left(T^{*}\mathcal{B}\right)}$, if $(\Omega,\alpha)$ is hyperbolic, then $\rho^{2}=\operatorname{Id}_{T\left(T^{*}\mathcal{B}\right)}$. The fact that $\rho$ is either an almost complex structure on $T^{*}\mathcal{B}$ (if $\operatorname{\operatorname{Pf}}(\alpha)>0$), or an almost product structure (if $\operatorname{\operatorname{Pf}}(\alpha)<0$), was proven in [18]. In the canonical coordinates,
$$\displaystyle\underline{\rho}={|\operatorname{\operatorname{Pf}}(\alpha)|}^{-\frac{1}{2}}\begin{pmatrix}B&-A&0&-D\\
C&-B&D&0\\
0&E&B&C\\
-E&0&-A&-B\end{pmatrix}$$
(15)
and, as expected, it follows that
$$\displaystyle\underline{\rho}^{2}=\frac{-\operatorname{\operatorname{Pf}}(\alpha)}{|\operatorname{\operatorname{Pf}}(\alpha)|}\mathbb{1}=-\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha)\mathbb{1}\ .$$
Notice that $\rho$ is invariant under normalization, since (8) yields
$$\displaystyle\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}\left(n(\alpha)\right)=\operatorname{\operatorname{sgn}}^{2}\operatorname{\operatorname{Pf}}(\alpha)=\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha)\ .$$
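Working with the matrix (15), i.e. $\underline{\rho}={|\operatorname{Pf}(\alpha)|}^{-\frac{1}{2}}\underline{\Omega}^{-1}\underline{\alpha}$ in the canonical basis, both the matrix entries and the identity $\underline{\rho}^{2}=-\operatorname{sgn}\operatorname{Pf}(\alpha)\,\mathbb{1}$ can be verified symbolically; a sympy sketch (our notation):

```python
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')

# Coefficient matrices in the ordered coframe (dx, dy, dp, dq)
Om = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])
al = sp.Matrix([[0, E, B, C], [-E, 0, -A, -B], [-B, A, 0, D], [-C, B, -D, 0]])

# Unnormalized endomorphism: M = sqrt(|Pf(alpha)|) * rho, cf. equation (10)
M = Om.inv() * al

# M reproduces the matrix (15) up to the normalization factor
expected = sp.Matrix([[B, -A, 0, -D], [C, -B, D, 0], [0, E, B, C], [-E, 0, -A, -B]])
print(sp.simplify(M - expected))         # -> zero matrix

# M^2 = -Pf(alpha) * Id, hence rho^2 = -sgn(Pf(alpha)) * Id
Pf = -B**2 + A*C - D*E
print(sp.simplify(M**2 + Pf*sp.eye(4)))  # -> zero matrix
```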
Integrability. V. Lychagin and V. Rubtsov showed in [18] that there is a direct link between the local equivalence of M-A equations and integrability of the $\rho$ structure derived from the corresponding M-A structure $(\Omega,\alpha)$. Moreover, the integrability condition can be expressed as a certain closedness condition.
Proposition 2.1 (Lychagin-Rubtsov [18]).
A 2D symplectic M-A equation $(\operatorname{\operatorname{d}}f)^{*}\alpha=0$ can be locally transformed via a symplectic transformation to either the Laplace equation $\Delta f=0$, or the wave equation $\Box f=0$, if and only if the $\rho$ structure is integrable, which is equivalent to $\frac{\alpha}{\sqrt{|\operatorname{\operatorname{Pf}}(\alpha)|}}$ being closed.
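The closedness criterion is easy to test mechanically. The sympy sketch below (helper names are ours) checks in components that the Laplace form of example 2.1 is closed, while the normalized form of $\alpha=p\operatorname{\operatorname{d}}p\wedge\operatorname{\operatorname{d}}y+\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}q$, i.e. $\sqrt{p}\operatorname{\operatorname{d}}p\wedge\operatorname{\operatorname{d}}y+p^{-1/2}\operatorname{\operatorname{d}}x\wedge\operatorname{\operatorname{d}}q$ on the region $p>0$, is not:

```python
import sympy as sp

x, y, p, q = sp.symbols('x y p q', positive=True)
coords = [x, y, p, q]

def d_of_2form(c):
    """Components (d beta)_{ijk} = d_i c_{jk} + d_j c_{ki} + d_k c_{ij}
    of the exterior derivative of a 2-form with coefficient matrix c."""
    n = len(coords)
    return [sp.simplify(sp.diff(c[j, k], coords[i])
                        + sp.diff(c[k, i], coords[j])
                        + sp.diff(c[i, j], coords[k]))
            for i in range(n) for j in range(n) for k in range(n)]

def two_form(entries):
    # entries: dict {(i, j): coeff} for i < j, extended antisymmetrically
    c = sp.zeros(4, 4)
    for (i, j), v in entries.items():
        c[i, j], c[j, i] = v, -v
    return c

# Laplace form  alpha = -dx^dq + dy^dp  (Pf = 1, already normalized): closed
laplace = two_form({(0, 3): -1, (1, 2): 1})
print(all(t == 0 for t in d_of_2form(laplace)))   # -> True

# Normalized form  sqrt(p) dp^dy + p**(-1/2) dx^dq  (on p > 0): not closed
karman = two_form({(2, 1): sp.sqrt(p), (0, 3): 1/sp.sqrt(p)})
print(all(t == 0 for t in d_of_2form(karman)))    # -> False
```

This agrees with example 2.2: the Laplace equation is symplectically equivalent to itself with an integrable $\rho$, while the structure with $\operatorname{Pf}(\alpha)=p$ is not integrable.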
3 Generalized geometry of M-A structures
We recall some key definitions and objects from generalized geometry. To avoid any confusion, we formulate these notions for a general smooth manifold $\mathcal{M}$ and then choose $\mathcal{M}=T^{*}\mathcal{B}$ to investigate the generalized geometry of symplectic 2D M-A structures and 2D incompressible fluid flows.
3.1 Generalized (almost) structures
A generalized tangent bundle over $\mathcal{M}$ is the vector bundle
$$\displaystyle\mathbb{T}\mathcal{M}:=T\mathcal{M}\oplus T^{*}\mathcal{M}$$
(16)
with the bundle projection $\pi$ defined by the composition $\mathbb{T}\mathcal{M}\longrightarrow T\mathcal{M}\longrightarrow\mathcal{M}$,
where the left map is the projection on the first factor of the Whitney sum, and the second map is the tangent bundle projection on $\mathcal{M}$. The pairing between vector fields and $1$-forms endows $\mathbb{T}\mathcal{M}$ with a non-degenerate, symmetric, bilinear form $\eta$
$$\displaystyle\eta\left(\left(X,\xi\right),\left(Y,\zeta\right)\right):=\frac{1}{2}\left(\xi(Y)+\zeta(X)\right)\ ,$$
(17)
which defines on $\mathbb{T}\mathcal{M}$ a pseudo-Riemannian metric of signature $(n,n)$. If we choose a coordinate system $(q^{\mu})$ on $\mathcal{M}$, then the corresponding coordinate vector fields and $1$-forms define a local basis $\left((\partial_{q^{\mu}},0),(0,dq^{\mu})\right)$ of $\mathbb{T}\mathcal{M}$. The matrix representation of $\eta$ in this basis is
$$\displaystyle\eta=\frac{1}{2}\begin{pmatrix}0&\mathbb{1}\\
\mathbb{1}&0\end{pmatrix}\ .$$
A generalized almost complex structure on $\mathcal{M}$ is a bundle map $\mathbb{J}\colon\mathbb{T}\mathcal{M}\to\mathbb{T}\mathcal{M}$ such that $\mathbb{J}^{2}=-\operatorname{Id}_{\mathbb{T}\mathcal{M}}$ and for all $\left(X,\xi\right),\left(Y,\zeta\right)\in\mathbb{T}\mathcal{M}$
$$\displaystyle\eta\left(\mathbb{J}\left(X,\xi\right),\mathbb{J}\left(Y,\zeta\right)\right)=\eta\left(\left(X,\xi\right),\left(Y,\zeta\right)\right)\ ,$$
(18)
where $\eta$ is the natural inner product (17) [11, 12].
Example 3.1.
Let $(\Omega,\alpha)$ be a non-degenerate M-A structure with $\alpha$ closed (i.e. a pair of symplectic structures). Consider a $(1,1)$-tensor $A_{\alpha}:=\pi_{\Omega}^{\#}\circ\alpha_{\#}$ (c.f. definition (10)). Then $(\Omega,A_{\alpha})$ defines a Hitchin pair of $2$-forms in the sense of Crainic [7, 14] since
$$\displaystyle\Omega_{\#}A_{\alpha}=A_{\alpha}^{*}\Omega_{\#}\ .$$
Using the notion of Hitchin pairs, B. Banos showed in [3] that every 2D non-degenerate M-A structure satisfying, for an appropriate $\phi\in C^{\infty}\left(T^{*}\mathcal{B}\right)$, the divergence condition
$$\displaystyle\operatorname{\operatorname{d}}(\alpha+\phi\Omega)=0\ ,$$
yields a generalized complex structure (i.e. an integrable generalized almost complex structure) $\mathbb{J}_{\alpha}$ given as follows
$$\displaystyle\mathbb{J}_{\alpha}=\begin{pmatrix}A_{\alpha}&\pi_{\Omega}^{\#}\\
-\left(\Omega_{\#}+\Omega_{\#}A_{\alpha}^{2}\right)&-A_{\alpha}^{*}\end{pmatrix}$$
The result described in example 3.1 was a key motivation for our further investigations of other possibilities of constructing generalized (almost) geometries from Monge-Ampère equations and the corresponding Monge-Ampère structures. In order to state our result, we need to extend the notion of a generalized (almost) complex structure by the following definition.
Definition 3.1 ([13]).
A generalized almost structure on $\mathcal{M}$ (or a generalized almost geometry on $\mathcal{M}$) is a bundle map $\mathbb{J}\colon\mathbb{T}\mathcal{M}\to\mathbb{T}\mathcal{M}$ such that
$$\displaystyle\mathbb{J}^{2}=\gamma_{1}\operatorname{Id}_{\mathbb{T}\mathcal{M}}\ ,$$
$$\displaystyle\mathbb{J}^{\bullet}\eta=\gamma_{2}\eta\ ,$$
where $\gamma_{1},\gamma_{2}\in\{-1,1\}$ and $\mathbb{J}^{\bullet}\eta\left(\left(X,\xi\right),\left(Y,\zeta\right)\right):=\eta\bigl{(}\mathbb{J}\left(X,\xi\right),\mathbb{J}\left(Y,\zeta\right)\bigr{)}$. Table 1 describes the four possible choices of constants $\gamma_{i}$.
The abbreviations stand for generalized almost product (GaP), generalized almost complex (GaC), generalized almost para-complex (GaPC), and generalized almost anti-complex (GaAC) structure. A generalized almost structure is called non-degenerate if its eigenbundles are isomorphic to $T\mathcal{M}$ (if $\mathbb{J}^{2}=\operatorname{Id}_{\mathbb{T}\mathcal{M}}$), or to $T\mathcal{M}\otimes\mathbb{C}$ (if $\mathbb{J}^{2}=-\operatorname{Id}_{\mathbb{T}\mathcal{M}}$).
Proposition 3.1.
Let $J,P\in\operatorname{End}(T\mathcal{M})$ be an almost complex and an almost product structure, respectively. Let $\alpha\in\Omega^{2}(\mathcal{M})$ be a non-degenerate $2$-form, and $g\in S^{2}(\mathcal{M})$ a non-degenerate symmetric bilinear form on $\mathcal{M}$. Suppose that $\mathbb{J}_{J},\mathbb{J}_{P},\mathbb{J}_{\alpha},\mathbb{J}_{g}\in\operatorname{End}(\mathbb{T}\mathcal{M})$ are given as follows
$$\displaystyle\begin{split}\mathbb{J}_{J}=\begin{pmatrix}J&0\\
0&\epsilon J^{*}\end{pmatrix}\ ,\qquad\qquad\mathbb{J}_{\alpha}=\begin{pmatrix}0&\pi_{\alpha}^{\#}\\
\epsilon\alpha_{\#}&0\end{pmatrix}\ ,\\
\mathbb{J}_{P}=\begin{pmatrix}P&0\\
0&\epsilon P^{*}\end{pmatrix}\ ,\qquad\qquad\mathbb{J}_{g}=\begin{pmatrix}0&\pi_{g}^{\#}\\
\epsilon g_{\#}&0\end{pmatrix}\ ,\end{split}$$
(19)
where $\epsilon\in\{-1,1\}$. Then $\mathbb{J}_{J},\mathbb{J}_{P},\mathbb{J}_{\alpha},\mathbb{J}_{g}$ are generalized almost structures and they satisfy
$$\displaystyle\begin{split}-{\mathbb{J}_{J}}^{2}&={\mathbb{J}_{P}}^{2}=\operatorname{Id}_{\mathbb{T}M}\ ,\ \qquad\qquad-\mathbb{J}_{J}^{\bullet}\eta=\mathbb{J}_{P}^{\bullet}\eta=\epsilon\eta\ ,\\
{\mathbb{J}_{\alpha}}^{2}&={\mathbb{J}_{g}}^{2}=\epsilon\operatorname{Id}_{\mathbb{T}M}\ ,\qquad\qquad-\mathbb{J}_{\alpha}^{\bullet}\eta=\mathbb{J}_{g}^{\bullet}\eta=\epsilon\eta\ .\end{split}$$
(20)
Types of these structures are given in the following table.
Proof.
Since $-J^{2}=P^{2}=\operatorname{Id}_{T\mathcal{M}}$ and ${J^{*}}^{2}={J^{2}}^{*}$, it is easy to see that $-{\mathbb{J}_{J}}^{2}={\mathbb{J}_{P}}^{2}=\operatorname{Id}_{\mathbb{T}\mathcal{M}}$. Now we recall (9), which means
$$\displaystyle g_{\#}\pi_{g}^{\#}$$
$$\displaystyle=\operatorname{Id}_{T^{*}\mathcal{M}}=\alpha_{\#}\pi_{\alpha}^{\#}\ ,$$
(21)
$$\displaystyle\pi_{g}^{\#}g_{\#}$$
$$\displaystyle=\operatorname{Id}_{T\mathcal{M}}=\pi_{\alpha}^{\#}\alpha_{\#}\ .$$
(22)
This implies ${\mathbb{J}_{\alpha}}^{2}={\mathbb{J}_{g}}^{2}=\epsilon\operatorname{Id}_{\mathbb{T}M}$. We proceed with the compatibility of $\mathbb{J}_{J}$ with $\eta$. From (17) we have
$$\displaystyle\mathbb{J}_{J}^{\bullet}\eta\left((X,\xi),(Y,\zeta)\right)=\frac{\epsilon}{2}\left(J^{*}\xi(JY)+J^{*}\zeta(JX)\right)\ .$$
Using the definition of the dual map, we arrive at
$$\displaystyle\mathbb{J}_{J}^{\bullet}\eta\left((X,\xi),(Y,\zeta)\right)=\frac{\epsilon}{2}\left(\xi(J^{2}Y)+\zeta(J^{2}X)\right)=-\epsilon\eta\left((X,\xi),(Y,\zeta)\right)\ .$$
The computation for $\mathbb{J}_{P}$ is completely analogous with the only difference coming from $P^{2}=-J^{2}$, resulting in $\mathbb{J}_{P}^{\bullet}\eta=\epsilon\eta$. The computation for $\mathbb{J}_{\alpha}$ (and $\mathbb{J}_{g}$) is slightly different. We have
$$\displaystyle(\mathbb{J}_{\alpha})^{\bullet}\eta\left((X,\xi),(Y,\zeta)\right)=\frac{\epsilon}{2}\left(\alpha_{\#}X(\pi_{\alpha}^{\#}\zeta)+\alpha_{\#}Y(\pi_{\alpha}^{\#}\xi)\right)\ .$$
Since $\alpha_{\#}X=\alpha(X,-)$, and due to antisymmetry of $\alpha$,
$$\displaystyle\frac{\epsilon}{2}\left(\alpha_{\#}X(\pi_{\alpha}^{\#}\zeta)+\alpha_{\#}Y(\pi_{\alpha}^{\#}\xi)\right)$$
$$\displaystyle=\frac{\epsilon}{2}\left(\alpha(X,\pi_{\alpha}^{\#}\zeta)+\alpha(Y,\pi_{\alpha}^{\#}\xi)\right)$$
$$\displaystyle=\frac{-\epsilon}{2}\left((\alpha_{\#}\pi_{\alpha}^{\#}\zeta)X+(\alpha_{\#}\pi_{\alpha}^{\#}\xi)Y\right)\ .$$
Using the relations (21) once again, we obtain
$$\displaystyle(\mathbb{J}_{\alpha})^{\bullet}\eta\left((X,\xi),(Y,\zeta)\right)=-\epsilon\eta\left((X,\xi),(Y,\zeta)\right)\ .$$
The computation for $\mathbb{J}_{g}$ differs from the case of $\mathbb{J}_{\alpha}$ only in the symmetry of $g$. Thus $-\mathbb{J}_{J}^{\bullet}\eta=\mathbb{J}_{P}^{\bullet}\eta=\epsilon\eta$, and $-\mathbb{J}_{\alpha}^{\bullet}\eta=\mathbb{J}_{g}^{\bullet}\eta=\epsilon\eta$. Table 2 summarizes the resulting types of structures according to definition 3.1.
∎
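The identities (20) can also be confirmed numerically on a single fiber. The numpy sketch below uses the matrix $H$ of $2\eta$ and arbitrarily chosen invertible representatives of $J$, $P$, $\alpha$, $g$ (all concrete choices are ours, not from the text):

```python
import numpy as np

n = 4
I = np.eye(n)
Z = np.zeros((n, n))
H = np.block([[Z, I], [I, Z]])  # matrix of 2*eta; the factor 2 cancels below

# Concrete invertible examples on a 4-dimensional fiber (our choices):
J = np.block([[np.zeros((2, 2)), -np.eye(2)], [np.eye(2), np.zeros((2, 2))]])  # J^2 = -Id
P = np.diag([1., 1., -1., -1.])                                                # P^2 = +Id
S = np.array([[2., 1.], [0., 3.]])
alpha = np.block([[np.zeros((2, 2)), S], [-S.T, np.zeros((2, 2))]])            # antisymmetric, invertible
g = np.diag([1., 2., -1., 3.])                                                 # symmetric, invertible

for eps in (1.0, -1.0):
    JJ = np.block([[J, Z], [Z, eps * J.T]])
    JP = np.block([[P, Z], [Z, eps * P.T]])
    Ja = np.block([[Z, np.linalg.inv(alpha)], [eps * alpha, Z]])
    Jg = np.block([[Z, np.linalg.inv(g)], [eps * g, Z]])
    # squares, cf. the first line of (20)
    assert np.allclose(-JJ @ JJ, np.eye(2 * n)) and np.allclose(JP @ JP, np.eye(2 * n))
    assert np.allclose(Ja @ Ja, eps * np.eye(2 * n)) and np.allclose(Jg @ Jg, eps * np.eye(2 * n))
    # compatibility with eta, cf. the second line of (20): J^T H J = gamma_2 H
    assert np.allclose(-JJ.T @ H @ JJ, eps * H) and np.allclose(JP.T @ H @ JP, eps * H)
    assert np.allclose(-Ja.T @ H @ Ja, eps * H) and np.allclose(Jg.T @ H @ Jg, eps * H)
print("all identities in (20) hold")
```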
Example 3.2.
Let $g$ be a non-degenerate symmetric bilinear form. Consider
$\mathbb{A}=\begin{pmatrix}0&\pi_{g}^{\#}\\
g_{\#}&0\\
\end{pmatrix}.$
The map $\mathbb{A}$ satisfies $\mathbb{A}^{2}=\operatorname{Id}_{\mathbb{T}\mathcal{M}}$ and $\mathbb{A}^{\bullet}\eta=\eta$, so $\mathbb{A}$ is a generalized almost product structure. The $\pm 1$-eigenbundles of $\mathbb{A}$ are
$$\displaystyle E_{\pm}=\{(X,\pm g_{\#}X)|X\in\Gamma(T\mathcal{M})\}\ .$$
Indeed, for $x_{+}=(X,g_{\#}X)\in E_{+}$, and $y_{-}=(Y,-g_{\#}Y)\in E_{-}$, it holds $\mathbb{A}x_{+}=(X,g_{\#}X)=x_{+}$, and $\mathbb{A}y=(-Y,g_{\#}Y)=-y_{-}$. Now suppose $x_{+},y_{+}\in E_{+}$. Then
$$\displaystyle\eta(x_{+},y_{+})=\frac{1}{2}\left((g_{\#}X)Y+(g_{\#}Y)X\right)=g(X,Y)\ .$$
Obviously there are $X,Y\in\Gamma(T\mathcal{M})$ such that $g(X,Y)\neq 0$, thus $E_{+}$ is not totally isotropic, and hence $\mathbb{A}$ is a non-isotropic structure.
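The computation in this example can be confirmed pointwise; a small numpy sketch on a $2$-dimensional fiber (the matrix $g$ and the vectors $X,Y$ are arbitrary choices of ours):

```python
import numpy as np

g = np.array([[2., 1.], [1., 3.]])  # symmetric, invertible
A = np.block([[np.zeros((2, 2)), np.linalg.inv(g)], [g, np.zeros((2, 2))]])
eta = 0.5 * np.block([[np.zeros((2, 2)), np.eye(2)], [np.eye(2), np.zeros((2, 2))]])

X, Y = np.array([1., 2.]), np.array([0., 1.])
x_plus = np.concatenate([X, g @ X])       # element of E_+
y_plus = np.concatenate([Y, g @ Y])       # element of E_+
y_minus = np.concatenate([Y, -(g @ Y)])   # element of E_-

assert np.allclose(A @ x_plus, x_plus)    # E_+ is the +1-eigenbundle
assert np.allclose(A @ y_minus, -y_minus) # E_- is the -1-eigenbundle

# eta(x_+, y_+) = g(X, Y) != 0, so E_+ is not totally isotropic
print(x_plus @ eta @ y_plus, X @ g @ Y)   # -> 7.0 7.0
```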
Coordinate description. Since $\left(\Omega_{\#}(\partial_{i})\right)(\partial_{j})=\Omega(\partial_{i},\partial_{j})$, we have
$$\displaystyle\underline{\Omega}=\begin{pmatrix}0&\mathbb{1}\\
-\mathbb{1}&0\end{pmatrix}\ .$$
Similarly for the matrix of $\pi_{\Omega}^{\#}$ we have
$$\displaystyle\underline{\pi_{\Omega}}^{\#}=\underline{\Omega}^{-1}\ .$$
The same holds true for an arbitrary non-degenerate $2$-form $\alpha$. Hence we will denote the matrices of $\alpha_{\#}$ and $\pi_{\alpha}^{\#}$ simply by $\underline{\alpha}$ and $\underline{\alpha}^{-1}$, respectively. The situation does not change when dealing with a symmetric non-degenerate $2$-tensor $g$. Finally, we recall that for an endomorphism $A$, the matrix of $A$ and the matrix of $A^{*}$ are related by the transpose $\underline{A}^{*}=\underline{A}^{T}$. Thus the generalized almost structures described in proposition 3.1, which are the building blocks for our subsequent constructions, are written in the canonical coordinates as
$$\displaystyle\mathbb{J}_{J}=\begin{pmatrix}J&0\\
0&\epsilon J^{T}\end{pmatrix}\ ,\qquad\qquad\mathbb{J}_{\alpha}=\begin{pmatrix}0&\underline{\alpha}^{-1}\\
\epsilon\underline{\alpha}&0\end{pmatrix}\ ,$$
$$\displaystyle\mathbb{J}_{P}=\begin{pmatrix}P&0\\
0&\epsilon P^{T}\end{pmatrix}\ ,\qquad\qquad\mathbb{J}_{g}=\begin{pmatrix}0&\underline{g}^{-1}\\
\epsilon\underline{g}&0\end{pmatrix}\ .$$
The Courant bracket. The space of sections $\Gamma\left(\mathbb{T}\mathcal{M}\right)=\Gamma\left(T\mathcal{M}\right)\oplus\Gamma\left(T^{*}\mathcal{M}\right)$ is equipped with the antisymmetric Courant bracket $[-,-]_{C}$ [6]
$$\displaystyle[\left(X,\xi\right),\left(Y,\zeta\right)]_{C}:=\left([X,Y],\mathcal{L}_{X}\zeta-\mathcal{L}_{Y}\xi-\frac{1}{2}\operatorname{\operatorname{d}}(X\mathbin{\lrcorner}\zeta-Y\mathbin{\lrcorner}\xi)\right)\ ,$$
(23)
where $[X,Y]$ is the Lie bracket and $\mathcal{L}$ is the Lie derivative. Note that the Courant bracket, which is an extension of the Lie bracket for vector fields, does not satisfy the Jacobi identity. More importantly for our considerations, the Courant bracket can be used to define integrability of isotropic structures on $\mathbb{T}\mathcal{M}$.
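The failure of the Jacobi identity can be exhibited already for sections over $\mathbb{R}$. The sympy sketch below implements (23) in components (all helper names and the choice of sections are ours) and evaluates the Jacobiator of three simple sections:

```python
import sympy as sp

t = sp.symbols('t')

# Sections of the generalized tangent bundle over R: pairs (X, xi) of a
# vector field X(t) d/dt and a 1-form xi(t) dt, stored as scalar expressions.
def courant(a, b):
    """Courant bracket (23): ([X,Y], L_X zeta - L_Y xi - (1/2) d(X.zeta - Y.xi))."""
    X, xi = a
    Y, zeta = b
    lie_vf = X * sp.diff(Y, t) - Y * sp.diff(X, t)             # [X, Y]
    lie_X_zeta = X * sp.diff(zeta, t) + zeta * sp.diff(X, t)   # (L_X zeta) dt
    lie_Y_xi = Y * sp.diff(xi, t) + xi * sp.diff(Y, t)         # (L_Y xi) dt
    h = X * zeta - Y * xi                                      # X . zeta - Y . xi
    return (lie_vf, sp.simplify(lie_X_zeta - lie_Y_xi - sp.Rational(1, 2) * sp.diff(h, t)))

e1 = (sp.Integer(1), sp.Integer(0))   # (d/dt, 0)
e2 = (sp.Integer(0), t)               # (0, t dt)
e3 = (t, sp.Integer(0))               # (t d/dt, 0)

# Antisymmetry of the bracket
assert courant(e1, e2) == tuple(-c for c in courant(e2, e1))

# Jacobiator: sum of the cyclic double brackets
terms = [courant(courant(e1, e2), e3),
         courant(courant(e2, e3), e1),
         courant(courant(e3, e1), e2)]
jac = tuple(sp.simplify(sum(term[i] for term in terms)) for i in range(2))
print(jac)  # -> (0, -1/4): the Jacobi identity fails
```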
3.1.1 Isotropy, Dirac structures, and integrability
Every generalized almost structure comes together with the corresponding subbundles $E_{+},E_{-}$, which are $\pm 1$-eigenbundles if $\mathbb{J}^{2}=\operatorname{id}_{\mathbb{T}\mathcal{M}}$, and $\pm i$-eigenbundles if $\mathbb{J}^{2}=-\operatorname{Id}_{\mathbb{T}\mathcal{M}}$.
Definition 3.2.
A subbundle $E\subset\mathbb{T}\mathcal{M}$ is called totally isotropic (w.r.t. the inner product $\eta$), if for all $x,y\in E:\eta(x,y)=0$. Totally isotropic $E$ is called an almost Dirac structure on $\mathcal{M}$ if $\operatorname{rank}E=\operatorname{rank}T\mathcal{M}$. A Dirac structure on $\mathcal{M}$ is an almost Dirac structure $E$ such that $[E,E]_{C}\subset E$.
Remark 3.1.
A Dirac structure on $\mathcal{M}$ can be equivalently defined as a totally isotropic subbundle $E\subset\mathbb{T}\mathcal{M}$ of maximal rank, which is involutive with respect to the Courant bracket.
The four possible generalized almost structures determined by definition 3.1 can be divided into two subsets depending on whether the eigenbundles $E_{\pm}$ are almost Dirac structures or not.
Definition 3.3.
Let $\mathbb{J}$ be a generalized almost structure. If the eigenbundles $E_{\pm}$ are almost Dirac structures, then $\mathbb{J}$ is called isotropic (with respect to $\eta$). Otherwise $\mathbb{J}$ is called non-isotropic.
The involutivity condition of almost Dirac structures $E_{\pm}$ can serve as a definition of integrability only for isotropic $\mathbb{J}$ (see remark 3.2 below).
Definition 3.4.
An isotropic generalized almost structure $\mathbb{J}\in\operatorname{End}(\mathbb{T}\mathcal{M})$ is called integrable if the corresponding eigenbundles $E_{+},E_{-}$ are Dirac structures. An integrable generalized almost structure is called a generalized structure.
Let $\mathbb{J}\in\operatorname{End}(\mathbb{T}\mathcal{M})$ be an isotropic generalized almost structure. Then the torsion of $\mathbb{J}$ is defined for all $x,y\in\Gamma(\mathbb{T}\mathcal{M})$ by
$$\displaystyle N_{\mathbb{J}}(x,y):=[\mathbb{J}x,\mathbb{J}y]_{C}+\mathbb{J}^{2}[x,y]_{C}-\mathbb{J}\left([\mathbb{J}x,y]_{C}+[x,\mathbb{J}y]_{C}\right)\ .$$
(24)
This $(1,2)$-tensor $N_{\mathbb{J}}\colon\Gamma(\mathbb{T}\mathcal{M})\otimes\Gamma(\mathbb{T}\mathcal{M})\to\Gamma(\mathbb{T}\mathcal{M})$ is called the (generalized) Nijenhuis tensor. An isotropic $\mathbb{J}$ is integrable if and only if the corresponding Nijenhuis tensor vanishes: $N_{\mathbb{J}}(x,y)=0$ for all $x,y\in\Gamma(\mathbb{T}\mathcal{M})$ [7].
Remark 3.2.
If a generalized almost structure $\mathbb{J}$ is non-isotropic, then the eigenbundles $E_{\pm}$ are not totally isotropic (see example 3.2) and the Courant bracket is not well-defined on them. Thus the torsion (24) for non-isotropic structures is not well-defined as a tensor. As a consequence, the notion of integrability, which is usually given either by the condition of vanishing Nijenhuis tensor or by definition 3.4, cannot be applied to non-isotropic structures.
Remark 3.3.
Authors of [13] considered the notion of weak integrability, which they defined for a commuting pair consisting of an indefinite generalized metric $\mathbb{G}$ and an arbitrary generalized structure $\mathbb{J}$. This notion of weak integrability can be applied to non-isotropic structures as well, but requires the existence of a generalized Bismut connection $\mathfrak{D}$ associated to a generalized metric $\mathbb{G}$ [10, 13]. (A generalized Bismut connection associated to a generalized metric is a Courant algebroid connection that parallelizes the generalized metric [10]; the extension to indefinite generalized metrics was given in [13].) By definition, the weak integrability condition is $\mathfrak{D}\mathbb{J}=0$ [13]. In the case of generalized Kähler and generalized para-Kähler structures, the integrability condition introduced in [10] for generalized Kähler structures (and extended to generalized para-Kähler structures in [13]) implies weak integrability [13].
3.2 Generalized geometry and M-A theory
Recall that $\mathcal{B}$ is $2$-dimensional and $\Omega$ denotes the canonical symplectic form on $T^{*}\mathcal{B}$.
Proposition 3.2.
Let $(\Omega,\alpha)\in\Omega^{2}\left(T^{*}\mathcal{B}\right)\times\Omega^{2}\left(T^{*}\mathcal{B}\right)$ be a non-degenerate Monge-Ampère structure, $\rho\in\operatorname{End}\left(TT^{*}\mathcal{B}\right)$ the corresponding endomorphism defined by (10). Then
$$\displaystyle\mathbb{J}_{\rho}=\begin{pmatrix}\rho&0\\
0&\epsilon_{1}\rho^{*}\end{pmatrix}\ ,$$
$$\displaystyle\mathbb{J}_{\alpha}=\begin{pmatrix}0&\pi_{\alpha}^{\#}\\
\epsilon_{2}\alpha_{\#}&0\end{pmatrix}\ ,$$
$$\displaystyle\mathbb{J}_{\Omega}=\begin{pmatrix}0&\pi_{\Omega}^{\#}\\
\epsilon_{3}\Omega_{\#}&0\end{pmatrix}\ ,$$
(31)
are generalized almost structures.
1.
If the M-A structure is elliptic, $\operatorname{Pf}(\alpha)>0$, then $\mathbb{J}_{\rho}$ is a GaAC structure for $\epsilon_{1}=1$, and it is a GaC structure for $\epsilon_{1}=-1$.
2.
If the M-A structure is hyperbolic, $\operatorname{Pf}(\alpha)<0$, then $\mathbb{J}_{\rho}$ is a GaP structure for $\epsilon_{1}=1$, and it is a GaPC structure for $\epsilon_{1}=-1$.
3.
The types of $\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ are independent of $\operatorname{\operatorname{sgn}}\left(\operatorname{Pf}(\alpha)\right)$. They are determined by the values of $\epsilon_{2}$ and $\epsilon_{3}$, respectively, and are summarized in table 2. The two structures never coincide, $\mathbb{J}_{\alpha}\neq\mathbb{J}_{\Omega}$.
Proof.
Firstly, notice that the non-degeneracy assumption on the M-A structure ensures that $\mathbb{J}_{\rho}$ and $\mathbb{J}_{\alpha}$ are well-defined. This can be seen from
$$\displaystyle\det\underline{\alpha}={\operatorname{\operatorname{Pf}}(\alpha)}^{2}\ ,$$
(32)
and recalling that non-degeneracy means that $\operatorname{\operatorname{Pf}}(\alpha)$ is nowhere vanishing.
For the sake of completeness, we proceed with showing that $\rho$ is either an almost complex structure if $\operatorname{\operatorname{Pf}}(\alpha)>0$, or an almost product structure if $\operatorname{\operatorname{Pf}}(\alpha)<0$. In matrix form, equation (10) reads $\underline{\alpha}=\sqrt{|\operatorname{\operatorname{Pf}}(\alpha)|}\underline{\rho}^{T}\underline{\Omega}$, which implies $\underline{\rho}=\frac{1}{\sqrt{|\operatorname{\operatorname{Pf}}(\alpha)|}}(\underline{\alpha}\underline{\Omega}^{-1})^{T}$. Because $\underline{\Omega},\underline{\alpha}$ are antisymmetric matrices, and $\underline{\Omega}^{-1}=-\underline{\Omega}$, we obtain ${\underline{\rho}}^{2}=\frac{1}{|\operatorname{\operatorname{Pf}}(\alpha)|}(\underline{\Omega}\underline{\alpha})^{2}$.
Since
$$\displaystyle\underline{\Omega}=\begin{pmatrix}0&0&1&0\\
0&0&0&1\\
-1&0&0&0\\
0&-1&0&0\end{pmatrix}\ ,$$
$$\displaystyle\underline{\alpha}=\begin{pmatrix}0&E&B&C\\
-E&0&-A&-B\\
-B&A&0&D\\
-C&B&-D&0\end{pmatrix}\ ,$$
(41)
we have $(\underline{\Omega}\underline{\alpha})^{2}=(B^{2}-AC+DE)\mathbb{1}$. Using the coordinate expression for the Pfaffian, $\operatorname{\operatorname{Pf}}(\alpha)=-B^{2}+AC-DE$, we arrive at ${\underline{\rho}}^{2}=-\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha)\mathbb{1}$. Thus ${\underline{\rho}}^{2}=\mathbb{1}$ for $\operatorname{\operatorname{Pf}}<0$, which amounts to a hyperbolic M-A structure, and ${\underline{\rho}}^{2}=-\mathbb{1}$ for $\operatorname{\operatorname{Pf}}>0$, which defines an elliptic M-A structure. These two cases are the only possible results for ${\underline{\rho}}^{2}$ due to the non-degeneracy of the M-A structure $(\Omega,\alpha)$. Now let $T\in\operatorname{GL}(4,\mathbb{R})$ be a transformation (acting on a fiber of $TM$ above some point of $M$), which maps the basis $b$ induced by the canonical coordinates to some other basis $\tilde{b}$. Then the matrix of $\rho$ w.r.t. the $\tilde{b}$ basis satisfies $\underline{\tilde{\rho}}=T\underline{\rho}T^{-1}$, which implies ${\underline{\tilde{\rho}}}^{\ 2}=T{\underline{\rho}}^{2}T^{-1}=\pm\mathbb{1}$. Thus $\rho^{2}=\pm\operatorname{Id}_{TM}$.
That $\mathbb{J}_{\rho}$ is a GaP structure for $\epsilon_{1}=1$, and a GaPC structure for $\epsilon_{1}=-1$, now follows from proposition 3.1. Similarly, the types of the generalized almost structures $\mathbb{J}_{\alpha}$ and $\mathbb{J}_{\Omega}$ are given in table 2. The fact that this is independent of $\operatorname{Pf}(\alpha)$ follows from the proof of the aforementioned proposition, together with the assumption $\operatorname{Pf}(\alpha)\neq 0$, which ensures that $\alpha$ is non-degenerate and thus satisfies the assumptions of the proposition. Indeed, if $\operatorname{Pf}(\alpha)\neq 0$, then it follows from (32) that $\alpha_{\#}$ is invertible and hence $\alpha$ is non-degenerate.
Finally, definition 2.1 requires $\alpha\wedge\Omega=0$. Since $\Omega\wedge\Omega\neq 0$ by non-degeneracy of the symplectic form, $\alpha$ cannot contain a summand collinear with $\Omega$; in particular, $\alpha$ is never a multiple of $\Omega$. Thus the structures always satisfy $\mathbb{J}_{\alpha}\neq\mathbb{J}_{\Omega}$.
∎
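The matrix identity at the heart of the above proof can also be checked symbolically. The following sketch uses Python's sympy (the variable names are ours, mirroring the coefficients in (41)) to confirm $(\underline{\Omega}\underline{\alpha})^{2}=(B^{2}-AC+DE)\mathbb{1}=-\operatorname{Pf}(\alpha)\mathbb{1}$:

```python
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')

# Canonical symplectic matrix and the general 2-form alpha from (41)
Omega = sp.Matrix([[0, 0, 1, 0],
                   [0, 0, 0, 1],
                   [-1, 0, 0, 0],
                   [0, -1, 0, 0]])
alpha = sp.Matrix([[0, E, B, C],
                   [-E, 0, -A, -B],
                   [-B, A, 0, D],
                   [-C, B, -D, 0]])

square = sp.expand(Omega * alpha * Omega * alpha)   # (Omega alpha)^2
scalar = B**2 - A*C + D*E

# (Omega alpha)^2 is the claimed multiple of the identity ...
assert sp.expand(square - scalar * sp.eye(4)) == sp.zeros(4)
# ... and equals -Pf(alpha) * Id with the text's convention Pf = -B^2 + AC - DE
pf = -B**2 + A*C - D*E
assert sp.expand(square + pf * sp.eye(4)) == sp.zeros(4)
```

In particular, ${\underline{\rho}}^{2}=(\underline{\Omega}\underline{\alpha})^{2}/|\operatorname{Pf}(\alpha)|=-\operatorname{sgn}\operatorname{Pf}(\alpha)\mathbb{1}$, as used in the proof.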
Remark 3.4.
A coordinate-free proof that $\rho$ is either an almost complex structure on $T^{*}\mathcal{B}$ if $\operatorname{Pf}(\alpha)>0$, or an almost product structure if $\operatorname{Pf}(\alpha)<0$, can be found in [15].
3.3 Quadric surfaces of generalized geometries
Consider a smooth manifold equipped with three almost complex structures $I,J,K$ satisfying $IJ+JI=0$ and $K=IJ$. The manifold is then called an almost hypercomplex manifold. On such a manifold, there is a $2$-sphere of almost complex structures $\{a_{1}I+a_{2}J+a_{3}K\ |\ \sum_{i=1}^{3}{a_{i}}^{2}=1\}$. This follows from the fact that the above relations between $I,J,K$ imply pairwise anticommutativity of the triple, $\{I,J\}=\{J,K\}=\{I,K\}=0$. We will see that a similar but much richer situation arises when we consider a pairwise anticommuting triple of generalized geometries constructed from tensors associated with 2D non-degenerate M-A structures.
Proposition 3.3.
Let $(\Omega,\alpha)\in\Omega^{2}\left(T^{*}\mathcal{B}\right)\times\Omega^{2}\left(T^{*}\mathcal{B}\right)$ be a non-degenerate Monge-Ampère structure, $\rho\in\operatorname{End}(TT^{*}\mathcal{B})$ the corresponding endomorphism defined by (10). Then the generalized almost structures $\mathbb{J}_{\rho},\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ given by (31) pair-wise anticommute, if and only if $\epsilon_{1}=-1$ and
$$\displaystyle\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\epsilon_{2}\pi_{\Omega}^{\#}\alpha_{\#}=0\ .$$
(42)
In canonical coordinates, the condition (42) is equivalent to
$$\displaystyle B^{2}-AC+DE=-\epsilon_{2}\epsilon_{3}\ ,$$
(43)
where $A,B,C,D,E\in C^{\infty}\left(T^{*}\mathcal{B}\right)$ are coefficients of $\alpha$ in the canonical basis.
Remark 3.5.
In the proof of proposition 3.2, we have shown that $\operatorname{\operatorname{Pf}}(\alpha)=-B^{2}+AC-DE$. Thus the condition (43) can be expressed as $\operatorname{\operatorname{Pf}}(\alpha)=\epsilon_{2}\epsilon_{3}$.
Proof.
The anticommutators are
$$\displaystyle\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha}\}$$
$$\displaystyle=\begin{pmatrix}0&\rho\pi_{\alpha}^{\#}+\epsilon_{1}\pi_{\alpha}^{\#}\rho^{*}\\
\epsilon_{1}\epsilon_{2}\rho^{*}\alpha_{\#}+\epsilon_{2}\alpha_{\#}\rho&0\end{pmatrix}\ ,$$
$$\displaystyle\{\mathbb{J}_{\rho},\mathbb{J}_{\Omega}\}$$
$$\displaystyle=\begin{pmatrix}0&\rho\pi_{\Omega}^{\#}+\epsilon_{1}\pi_{\Omega}^{\#}\rho^{*}\\
\epsilon_{1}\epsilon_{3}\rho^{*}\Omega_{\#}+\epsilon_{3}\Omega_{\#}\rho&0\end{pmatrix}\ ,$$
$$\displaystyle\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}$$
$$\displaystyle=\begin{pmatrix}\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\epsilon_{2}\pi_{\Omega}^{\#}\alpha_{\#}&0\\
0&\epsilon_{2}\alpha_{\#}\pi_{\Omega}^{\#}+\epsilon_{3}\Omega_{\#}\pi_{\alpha}^{\#}\end{pmatrix}\ .$$
The condition $\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha}\}=0$ requires $\rho\pi_{\alpha}^{\#}+\epsilon_{1}\pi_{\alpha}^{\#}\rho^{*}=0$. Composing this equation with $\alpha_{\#}$ on the right and on the left, we obtain $\alpha_{\#}\rho+\epsilon_{1}\rho^{*}\alpha_{\#}=0$. The dual equation is $\rho^{*}\alpha_{\#}^{*}+\epsilon_{1}\alpha_{\#}^{*}\rho=0$, which, by antisymmetry of $\alpha$, is equivalent to $\epsilon_{1}\epsilon_{2}\rho^{*}\alpha_{\#}+\epsilon_{2}\alpha_{\#}\rho=0$. Similarly
$$\displaystyle\rho\pi_{\Omega}^{\#}+\epsilon_{1}\pi_{\Omega}^{\#}\rho^{*}=0\iff\epsilon_{1}\rho^{*}\Omega_{\#}+\Omega_{\#}\rho=0\ .$$
Thus the first two anticommutators vanish, if and only if
$$\displaystyle\begin{split}\epsilon_{1}\rho^{*}\alpha_{\#}+\alpha_{\#}\rho&=0\ ,\\
\epsilon_{1}\rho^{*}\Omega_{\#}+\Omega_{\#}\rho&=0\ .\end{split}$$
(44)
Working in canonical coordinates, the system (44) reads
$$\displaystyle\frac{-1}{\sqrt{|\operatorname{\operatorname{Pf}}(\alpha)|}}\underline{\alpha}\underline{\Omega}\left(\epsilon_{1}\underline{\alpha}+\underline{\alpha}\right)$$
$$\displaystyle=0\ ,$$
$$\displaystyle\frac{-1}{\sqrt{|\operatorname{\operatorname{Pf}}(\alpha)|}}{\underline{\Omega}}^{2}\left(\epsilon_{1}\underline{\alpha}+\underline{\alpha}\right)$$
$$\displaystyle=0\ .$$
Due to the non-degeneracy of the M-A structure $(\Omega,\alpha)$, the above equations hold, if and only if $\epsilon_{1}=-1$. The remaining condition $\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=0$ is captured by the equation (42) since
$$\displaystyle\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\epsilon_{2}\pi_{\Omega}^{\#}\alpha_{\#}=0\iff\epsilon_{2}\alpha_{\#}\pi_{\Omega}^{\#}+\epsilon_{3}\Omega_{\#}\pi_{\alpha}^{\#}=0\ ,$$
i.e. the two diagonal blocks of $\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}$ vanish simultaneously. This follows directly from (9). In matrix notation, (42) is equivalent to
$$\displaystyle\left(\underline{\alpha}\underline{\Omega}\right)^{2}=-\epsilon_{2}\epsilon_{3}\mathbb{1}\ .$$
Recalling (41), we can write $\underline{\alpha}$ and $\underline{\Omega}$ in the block form
$$\displaystyle\underline{\alpha}=\begin{pmatrix}P&Q\\
-Q^{T}&R\end{pmatrix}$$
$$\displaystyle\underline{\Omega}=\begin{pmatrix}0&\mathbb{1}\\
-\mathbb{1}&0\end{pmatrix}\ ,$$
(49)
where $P^{T}=-P$ and $R^{T}=-R$. Substituting (49) into the previous equation gives
$$\displaystyle\begin{pmatrix}Q^{2}-PR&-QP-PQ^{T}\\
RQ+Q^{T}R&-RP+(Q^{T})^{2}\end{pmatrix}=-\epsilon_{2}\epsilon_{3}\mathbb{1}\ .$$
The conditions $-QP-PQ^{T}=RQ+Q^{T}R=0$ are satisfied automatically, since
$$\displaystyle P=\begin{pmatrix}0&E\\
-E&0\end{pmatrix}\ ,$$
$$\displaystyle Q=\begin{pmatrix}B&C\\
-A&-B\end{pmatrix}\ ,$$
$$\displaystyle R=\begin{pmatrix}0&D\\
-D&0\end{pmatrix}\ ,$$
where $A,B,C,D,E$ are entries of $\underline{\alpha}$, i.e. coefficients of $\alpha$ in the canonical basis. We also notice that
$$\displaystyle Q^{2}-PR=-\epsilon_{2}\epsilon_{3}\mathbb{1}\iff-RP+(Q^{T})^{2}=-\epsilon_{2}\epsilon_{3}\mathbb{1}\ ,$$
which holds because $Q^{2}$ is a multiple of the identity (hence equal to $(Q^{T})^{2}$), and because the antisymmetric $2\times 2$ matrices $P$ and $R$ are multiples of the standard symplectic matrix, so $PR=RP$. Finally
$$\displaystyle Q^{2}-PR=\begin{pmatrix}B^{2}-AC+DE&0\\
0&B^{2}-AC+DE\end{pmatrix}=-\epsilon_{2}\epsilon_{3}\mathbb{1}\ .$$
This concludes the proof.
∎
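The block computation in the proof can likewise be checked symbolically. A short sympy sketch (our notation, with $P,Q,R$ as in (49)):

```python
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')

# Blocks of the general antisymmetric alpha from (41)/(49)
P = sp.Matrix([[0, E], [-E, 0]])
Q = sp.Matrix([[B, C], [-A, -B]])
R = sp.Matrix([[0, D], [-D, 0]])

# Off-diagonal blocks of (alpha Omega)^2 vanish identically ...
assert sp.expand(-Q*P - P*Q.T) == sp.zeros(2)
assert sp.expand(R*Q + Q.T*R) == sp.zeros(2)

# ... and both diagonal blocks equal (B^2 - A*C + D*E) times the identity
scalar = B**2 - A*C + D*E
assert sp.expand(Q*Q - P*R) == sp.expand(scalar * sp.eye(2))
assert sp.expand(-R*P + Q.T*Q.T) == sp.expand(scalar * sp.eye(2))
```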
Example 3.3.
Let $(\Omega,\alpha)$ be a non-degenerate M-A structure. We will search for $2$-forms $\alpha$ which satisfy the condition (42) and commute with the canonical symplectic form in the sense $[\underline{\alpha},\underline{\Omega}]=0$, where the matrices are taken w.r.t. the canonical basis. The commutativity condition is quite arbitrary at this point, but becomes more relevant if one is interested in the construction of hyper-(para-)complex structures, or variations of generalized Kähler structures on $T\oplus T^{*}$. To obtain an even more specific family of solutions to (42), we further choose $\epsilon_{2}=-\epsilon_{3}$. Then the condition (42) becomes $\underline{\alpha}^{T}=\underline{\alpha}^{-1}$. Hence we search for antisymmetric elements of the orthogonal group $\operatorname{O}(4)$ which commute with $\underline{\Omega}$. If we take $Q\in\operatorname{O}(2)$, then $\underline{\alpha}:=\begin{pmatrix}0&Q\\
-Q^{T}&0\end{pmatrix}$ satisfies $\underline{\alpha}^{T}=-\underline{\alpha}=\underline{\alpha}^{-1}$. The commutativity with $\underline{\Omega}$ then forces $\det Q=-1$, which amounts to $Q=\begin{pmatrix}a&b\\
b&-a\end{pmatrix}$ for $a,b\in\mathbb{R}$ such that $a^{2}+b^{2}=1$. Comparing this with the general form of $\underline{\alpha}$ described in (41), we obtain
$$\displaystyle\underline{\alpha}=\begin{pmatrix}0&0&B&-A\\
0&0&-A&-B\\
-B&A&0&0\\
A&B&0&0\end{pmatrix}\ .$$
Therefore, the coefficients of $\alpha$ must satisfy $A=-C$, $D=E=0$, and $\operatorname{Pf}(\alpha)=-A^{2}-B^{2}=-1$. This is in concordance with (43). The corresponding Monge-Ampère equation, which is determined from $(\Omega,\alpha)$ by (3), is
$$\displaystyle Af_{xx}+2Bf_{xy}-Af_{yy}=0\ ,$$
where $A,B\in C^{\infty}(\mathcal{B})$ satisfy $A^{2}+B^{2}=1$, i.e. $B=\pm\sqrt{1-A^{2}}$. For example, if we choose $|A|=1$ (so $B=0$), we obtain the wave equation $f_{xx}=f_{yy}$.
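As a sanity check, the defining properties of this family can be verified numerically at a concrete point. The following sketch (Python with numpy; the variable names are ours) picks the wave-equation point $A=1$, $B=0$:

```python
import numpy as np

# The example's alpha with A = 1, B = 0 (so C = -A, D = E = 0),
# together with the canonical symplectic matrix Omega.
A, B = 1.0, 0.0
alpha = np.array([[0, 0, B, -A],
                  [0, 0, -A, -B],
                  [-B, A, 0, 0],
                  [A, B, 0, 0]])
Omega = np.array([[0., 0., 1., 0.],
                  [0., 0., 0., 1.],
                  [-1., 0., 0., 0.],
                  [0., -1., 0., 0.]])

# alpha is an antisymmetric element of O(4): alpha^T = -alpha = alpha^{-1}
assert np.allclose(alpha.T, -alpha)
assert np.allclose(alpha.T, np.linalg.inv(alpha))
# alpha commutes with the canonical symplectic matrix
assert np.allclose(alpha @ Omega, Omega @ alpha)
# Pf(alpha) = -B^2 + A*C - D*E = -A^2 - B^2 = -1, i.e. hyperbolic
pf = -B**2 + A * (-A)
assert pf == -1.0
```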
In the above example, we have described a family of non-degenerate M-A structures (parameterized by smooth functions $A,B$ satisfying $A^{2}+B^{2}=1$) which, according to proposition 3.3, determine a corresponding family of anticommuting pairs $\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ of GaPC and GaC structures for $(\epsilon_{2},\epsilon_{3})=(1,-1)$ (or GaC and GaPC structures for $(\epsilon_{2},\epsilon_{3})=(-1,1)$). Choosing $\epsilon_{1}=-1$ extends the pair $\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ to a mutually anticommuting triple of generalized almost structures, the third structure being the GaPC structure $\mathbb{J}_{\rho}$ (since $\operatorname{Pf}(\alpha)<0$; see proposition 3.2). In the following, we will show how pairwise anticommuting triples give rise to certain quadric surfaces of generalized almost structures.
Proposition 3.4.
Let $(\Omega,\alpha)\in\Omega^{2}\left(T^{*}\mathcal{B}\right)\times\Omega^{2}\left(T^{*}\mathcal{B}\right)$ be a non-degenerate M-A structure, and let $\rho\in\operatorname{End}(TT^{*}\mathcal{B})$ be given by (10). If (42) holds, then
$$\displaystyle\mathbb{A}:=\begin{pmatrix}a_{1}\rho&a_{2}\pi_{\alpha}^{\#}+a_{3}\pi_{\Omega}^{\#}\\
a_{2}\epsilon_{2}\alpha_{\#}+a_{3}\epsilon_{3}\Omega_{\#}&-a_{1}\rho^{*}\end{pmatrix}\ ,$$
(52)
where $a_{i}\in\mathbb{R}\ \forall i$, is a generalized almost structure, if and only if
$$\displaystyle k:=-\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha){a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}$$
(53)
satisfies $|k|=1$.
1.
If $(\Omega,\alpha)$ is elliptic, then there are the following quadrics of generalized almost structures in $\mathcal{A}:=\operatorname{span}_{\mathbb{R}}\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}$:
(a)
for $k=1$, there are two $1$-sheeted hyperboloids, a $2$-sheeted hyperboloid, and a $2$-sphere of GaPC structures in $\mathcal{A}$,
(b)
for $k=-1$, there are two $2$-sheeted hyperboloids and a $1$-sheeted hyperboloid of GaC structures in $\mathcal{A}$.
2.
If $(\Omega,\alpha)$ is hyperbolic, then
(a)
for $k=1$, there are two $2$-sheeted hyperboloids and a $1$-sheeted hyperboloid of GaPC structures in $\mathcal{A}$,
(b)
for $k=-1$, there are two $1$-sheeted hyperboloids, a $2$-sheeted hyperboloid, and a $2$-sphere of GaC structures in $\mathcal{A}$.
Before we move to the proof of proposition 3.4, we also want to emphasize that statements 1. and 2. are separate cases and should not be mixed together, for example when searching for a generalized Kähler structure associated to $(\Omega,\alpha)$. This is because a given non-degenerate M-A structure has only one value of $\operatorname{sgn}\operatorname{Pf}(\alpha)$, distinguishing the elliptic case from the hyperbolic one.
Proof.
We have $\mathbb{A}=a_{1}\mathbb{J}_{\rho}+a_{2}\mathbb{J}_{\alpha}+a_{3}\mathbb{J}_{\Omega}$, where $\mathbb{J}_{\rho},\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ are given by (31) with $\epsilon_{1}=-1$. Then $\mathbb{A}^{\bullet}\eta={a_{1}}^{2}\mathbb{J}_{\rho}^{\bullet}\eta+{a_{2}}^{2}\mathbb{J}_{\alpha}^{\bullet}\eta+{a_{3}}^{2}\mathbb{J}_{\Omega}^{\bullet}\eta$ and
$$\displaystyle\begin{split}\mathbb{A}^{2}&={a_{1}}^{2}{\mathbb{J}_{\rho}}^{2}+{a_{2}}^{2}{\mathbb{J}_{\alpha}}^{2}+{a_{3}}^{2}{\mathbb{J}_{\Omega}}^{2}\\
&\quad+a_{1}a_{2}\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha}\}+a_{1}a_{3}\{\mathbb{J}_{\rho},\mathbb{J}_{\Omega}\}+a_{2}a_{3}\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}\ .\end{split}$$
(54)
By proposition 3.3, our assumptions give $\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha}\}=\{\mathbb{J}_{\rho},\mathbb{J}_{\Omega}\}=\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=0$. Thus, using the relations (20), we obtain
$$\displaystyle\mathbb{A}^{2}={a_{1}}^{2}{\mathbb{J}_{\rho}}^{2}+{a_{2}}^{2}{\mathbb{J}_{\alpha}}^{2}+{a_{3}}^{2}{\mathbb{J}_{\Omega}}^{2}\ ,$$
$$\displaystyle\mathbb{A}^{\bullet}\eta={a_{1}}^{2}\mathbb{J}_{\rho}^{\bullet}\eta-\left({a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}\right)\eta\ .$$
By proposition 3.1 and proposition 3.2, the type of generalized almost structure $\mathbb{J}_{\rho}$ depends on the Pfaffian of the M-A structure $(\Omega,\alpha)$ as follows
$$\displaystyle{\mathbb{J}_{\rho}}^{2}=\begin{cases}\ \ \ \operatorname{Id}_{\mathbb{T}\mathcal{B}}\text{ for }\operatorname{\operatorname{Pf}}(\alpha)<0\\
-\operatorname{Id}_{\mathbb{T}\mathcal{B}}\text{ for }\operatorname{\operatorname{Pf}}(\alpha)>0\end{cases}$$
$$\displaystyle\mathbb{J}_{\rho}^{\bullet}\eta=\begin{cases}-\eta\text{ for }\operatorname{\operatorname{Pf}}(\alpha)<0\\
\ \ \eta\text{ for }\operatorname{\operatorname{Pf}}(\alpha)>0\end{cases}$$
Here $\operatorname{sgn}\operatorname{Pf}(\alpha):=\frac{\operatorname{Pf}(\alpha)}{|\operatorname{Pf}(\alpha)|}$ is well-defined, since we assume the M-A structure to be non-degenerate, i.e. $\operatorname{Pf}(\alpha)\neq 0$. Hence we arrive at
$$\displaystyle\begin{split}\mathbb{A}^{2}&=\left(-\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha){a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}\right)\operatorname{Id}_{\mathbb{T}\mathcal{B}}\ ,\\
\mathbb{A}^{\bullet}\eta&=\left(-\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha){a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}\right)(-\eta)\ ,\end{split}$$
(55)
with the immediate consequence that $\mathbb{A}$ is a generalized almost structure if and only if $|k|=1$. We proceed with the proof of statements 1. and 2.
1. $\operatorname{\operatorname{Pf}}(\alpha)>0$: elliptic M-A structures. (a) For $k=1$, we see from (55) that $\mathbb{A}^{2}=\operatorname{Id}_{\mathbb{T}\mathcal{B}}$ and $\mathbb{A}^{\bullet}\eta=-\eta$, i.e. $\mathbb{A}$ is a GaPC structure. Moreover, positive Pfaffian implies the coefficients of $\mathbb{A}$ must satisfy $-{a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}=1$. This yields two $1$-sheeted hyperboloids, a $2$-sheeted hyperboloid, and a $2$-sphere, depending on the value of $\epsilon_{2}$ and $\epsilon_{3}$, which is summarized in table 3.
(b) If $k=-1$, then $\mathbb{A}^{2}=-\operatorname{Id}_{\mathbb{T}\mathcal{B}}$ and $\mathbb{A}^{\bullet}\eta=\eta$, meaning $\mathbb{A}$ is a GaC structure, and the coefficients $a_{i}$ must satisfy $-{a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}=-1$. This implies the existence of quadrics described in table 4.
2. $\operatorname{Pf}(\alpha)<0$: hyperbolic M-A structures. The situation is very similar to the elliptic case. We again obtain either (a) for $k=1$ a GaPC structure, or (b) for $k=-1$ a GaC structure. The main alteration is that the resulting GaPC/GaC structures differ from those obtained in the elliptic case. This is simply because for a non-degenerate M-A structure, by definition, $\operatorname{sgn}\operatorname{Pf}(\alpha)$ is constant. Thus, the distinction between the hyperbolic and the elliptic case happens on the level of the M-A structure, with the expected difference between the GaPC/GaC structures corresponding to different couples $(\Omega,\alpha)$. The quadrics of generalized almost structures arising from hyperbolic M-A structures are given in table 5 for $k={a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}=1$, and in table 6 for $k={a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}=-1$.
∎
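A concrete instance of the proposition can be tested numerically. The sketch below builds $\mathbb{J}_{\rho},\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ as $8\times 8$ matrices for the hyperbolic point $A=1$, $B=0$ of example 3.3, with $(\epsilon_{1},\epsilon_{2},\epsilon_{3})=(-1,1,-1)$ so that $\operatorname{Pf}(\alpha)=\epsilon_{2}\epsilon_{3}=-1$. We assume the sign convention that the matrices of $\pi_{\alpha}^{\#}$ and $\pi_{\Omega}^{\#}$ are the inverses of those of $\alpha_{\#}$ and $\Omega_{\#}$ (our reading of (9); a different convention may flip overall signs but not the squares checked below):

```python
import numpy as np

# Hyperbolic example: A = 1, B = 0, so Pf(alpha) = -1
alpha = np.array([[0., 0., 0., -1.],
                  [0., 0., -1., 0.],
                  [0., 1., 0., 0.],
                  [1., 0., 0., 0.]])
Omega = np.array([[0., 0., 1., 0.],
                  [0., 0., 0., 1.],
                  [-1., 0., 0., 0.],
                  [0., -1., 0., 0.]])

# Assumed convention: matrices of the sharp maps of the dual bivectors
# are the matrix inverses of alpha_# and Omega_#.
pi_a = np.linalg.inv(alpha)
pi_O = np.linalg.inv(Omega)
rho = pi_O @ alpha           # rho from (10); |Pf(alpha)| = 1, no normalization

def block(tl, tr, bl, br):
    return np.block([[tl, tr], [bl, br]])

Z = np.zeros((4, 4))
e1, e2, e3 = -1, 1, -1       # epsilon_2 * epsilon_3 = Pf(alpha) = -1, cf. (43)
J_rho = block(rho, Z, Z, e1 * rho.T)
J_alpha = block(Z, pi_a, e2 * alpha, Z)
J_Omega = block(Z, pi_O, e3 * Omega, Z)

# The triple pairwise anticommutes (proposition 3.3)
for X, Y in [(J_rho, J_alpha), (J_rho, J_Omega), (J_alpha, J_Omega)]:
    assert np.allclose(X @ Y + Y @ X, 0)

# A point on the quadric k = a1^2 + a2^2*e2 + a3^2*e3 = 1 (hyperbolic case):
a1, a2, a3 = 1.0, 1.0, 1.0
Amat = a1 * J_rho + a2 * J_alpha + a3 * J_Omega
k = a1**2 + a2**2 * e2 + a3**2 * e3
assert np.allclose(Amat @ Amat, k * np.eye(8))   # A^2 = Id, a GaPC structure
```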
A natural question at this point is whether any of the generalized almost structures described by proposition 3.4 coincide. In the following proposition, we will show there are no coincidences between structures parameterized by different points of either the same quadric or two different quadrics.
Proposition 3.5.
Two different points chosen arbitrarily from the quadrics described in statement 1. (if $\operatorname{\operatorname{Pf}}(\alpha)>0)$, or statement 2. (if $\operatorname{\operatorname{Pf}}(\alpha)<0)$, of proposition 3.4 represent two different generalized almost structures.
Proof.
Let $\mathbb{A},\mathbb{B}$ be given by (52). The difference between the structures may occur via the choice of coefficients $(a_{i}),(b_{i})$, and the values of epsilons, $\epsilon_{i}^{\mathbb{A}},\epsilon_{i}^{\mathbb{B}}$. The coefficients $(a_{i}),(b_{i})$ determine the corresponding $k_{\mathbb{A}},k_{\mathbb{B}}$, defined by (53). Note that both $\mathbb{A}$ and $\mathbb{B}$ are constructed from a single M-A structure $(\Omega,\alpha)$, so the Pfaffian is fixed and thus $\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha)$ is constant. This means that there will always be only one $\rho$ and only one of the statements 1. and 2. of proposition 3.4 will apply to a given non-degenerate M-A structure.
Case 1. Suppose $k_{\mathbb{A}}=k_{\mathbb{B}}$ and $\epsilon_{i}^{\mathbb{A}}=\epsilon_{i}^{\mathbb{B}}$ for $i=2,3$. Then $\mathbb{A}$ and $\mathbb{B}$ are $\mathbb{R}$-linear combinations of the same generalized almost structures, i.e. $\mathbb{A}=a_{1}\mathbb{J}_{\rho}+a_{2}\mathbb{J}_{\alpha}+a_{3}\mathbb{J}_{\Omega}$ and $\mathbb{B}=b_{1}\mathbb{J}_{\rho}+b_{2}\mathbb{J}_{\alpha}+b_{3}\mathbb{J}_{\Omega}$. This means
$$\displaystyle\mathbb{A}=\mathbb{B}\quad\iff\quad a_{i}=b_{i}\ \forall i\ .$$
Note that this case amounts to choosing the structures $\mathbb{A},\mathbb{B}$ on the same quadric.
Case 2. Suppose $k_{\mathbb{A}}=k_{\mathbb{B}}$ but $\epsilon_{i}^{\mathbb{A}}\neq\epsilon_{i}^{\mathbb{B}}$ for at least one $i$. Since the structure $\mathbb{J}_{\rho}$ is fixed for both the $\mathbb{A}$ and $\mathbb{B}$ (due to the same $\operatorname{\operatorname{Pf}}(\alpha)$), we have $\mathbb{A}=\mathbb{B}$, if and only if $a_{1}=b_{1}$ and
$$\displaystyle\begin{pmatrix}0&a_{2}\pi_{\alpha}^{\#}+a_{3}\pi_{\Omega}^{\#}\\
a_{2}\epsilon_{2}^{\mathbb{A}}\alpha_{\#}+a_{3}\epsilon_{3}^{\mathbb{A}}\Omega_{\#}&0\end{pmatrix}=\begin{pmatrix}0&b_{2}\pi_{\alpha}^{\#}+b_{3}\pi_{\Omega}^{\#}\\
b_{2}\epsilon_{2}^{\mathbb{B}}\alpha_{\#}+b_{3}\epsilon_{3}^{\mathbb{B}}\Omega_{\#}&0\end{pmatrix}\ .$$
Comparing the upper right blocks we get
$$\displaystyle(a_{2}-b_{2})\pi_{\alpha}^{\#}=(b_{3}-a_{3})\pi_{\Omega}^{\#}\ .$$
(56)
By definition of the M-A structure $(\Omega,\alpha)$, we have $\alpha\wedge\Omega=0$. This implies that the dual bivectors satisfy $\pi_{\alpha}\wedge\pi_{\Omega}=0$. Consequently, $\pi_{\alpha}^{\#}$ cannot be written as a non-zero multiple of $\pi_{\Omega}^{\#}$, and thus (56) holds, if and only if $a_{2}=b_{2}$ and $a_{3}=b_{3}$.
Case 3. Suppose $k_{\mathbb{A}}\neq k_{\mathbb{B}}$. Then $\mathbb{A}$ is a generalized almost structure of a different type than $\mathbb{B}$, which concludes the proof.
∎
Propositions 3.4 and 3.5 give the following theorem.
Theorem 3.1.
Let $(\Omega,\alpha)\in\Omega^{2}\left(T^{*}\mathcal{B}\right)\times\Omega^{2}\left(T^{*}\mathcal{B}\right)$ be a non-degenerate M-A structure on $T^{*}\mathcal{B}$, and let $\rho\in\operatorname{End}(TT^{*}\mathcal{B})$ be given by (10). Then $(\Omega,\alpha)$ defines by (52) a family of generalized almost geometries $\mathbb{A}\in\operatorname{End}\left(\mathbb{T}T^{*}\mathcal{B}\right)$ on $T^{*}\mathcal{B}$. They are parameterized by quadric surfaces in $\mathbb{R}^{3}$, as described in tables 3-6. Two different points of any of the quadrics parameterize two different geometries.
On the necessity of the anticommutativity assumptions. Considering the assumptions of proposition 3.4, we now focus on the question whether the pairwise anticommutativity (which was assumed in the proposition by the choice $\epsilon_{1}=-1$ in (52) and by the equation $\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\epsilon_{2}\pi_{\Omega}^{\#}\alpha_{\#}=0$) is a necessary condition for $\mathbb{A}=a_{1}\mathbb{J}_{\rho}+a_{2}\mathbb{J}_{\alpha}+a_{3}\mathbb{J}_{\Omega}$ to be a generalized almost structure. We want to consider the situation where all three generalized almost structures $\mathbb{J}_{\rho},\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$ given by (31) contribute to $\mathbb{A}$, so we assume $a_{i}\neq 0$ for all $i$. Without any assumption about anticommutativity, $\mathbb{A}^{2}$ is described by (54). If $\mathbb{A}$ is a generalized almost structure, then $\mathbb{A}^{2}=\pm\operatorname{Id}_{\mathbb{T}\mathcal{B}}$ and by proposition 3.2
$$\displaystyle{a_{1}}^{2}{\mathbb{J}_{\rho}}^{2}+{a_{2}}^{2}{\mathbb{J}_{\alpha}}^{2}+{a_{3}}^{2}{\mathbb{J}_{\Omega}}^{2}=c_{1}\operatorname{Id}_{\mathbb{T}\mathcal{B}}\ ,$$
(57)
for appropriate $c_{1}\in\mathbb{R}$. This means
$$\displaystyle a_{1}a_{2}\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha}\}+a_{1}a_{3}\{\mathbb{J}_{\rho},\mathbb{J}_{\Omega}\}+a_{2}a_{3}\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=c_{2}\operatorname{Id}_{\mathbb{T}\mathcal{B}}\ ,$$
for some $c_{2}\in\mathbb{R}$. Since both products $\mathbb{J}_{\rho}\mathbb{J}_{\alpha}$ and $\mathbb{J}_{\rho}\mathbb{J}_{\Omega}$ have only zeros in the diagonal blocks, the last equation implies
$$\displaystyle\{\mathbb{J}_{\rho},\mathbb{J}_{\alpha}\}=0\ ,$$
$$\displaystyle\{\mathbb{J}_{\rho},\mathbb{J}_{\Omega}\}=0\ ,$$
$$\displaystyle a_{2}a_{3}\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=c_{2}\operatorname{Id}_{\mathbb{T}\mathcal{B}}\ .$$
By proposition 3.3, the first two equations are satisfied, if and only if $\epsilon_{1}=-1$ in $\mathbb{J}_{\rho}=\begin{pmatrix}\rho&0\\
0&\epsilon_{1}\rho^{*}\end{pmatrix}$. Moreover, if $\epsilon_{1}=-1$, then it follows from proposition 3.1 that (57) is equivalent to
$$\displaystyle c_{1}=-\operatorname{\operatorname{sgn}}\operatorname{\operatorname{Pf}}(\alpha){a_{1}}^{2}+{a_{2}}^{2}\epsilon_{2}+{a_{3}}^{2}\epsilon_{3}\ .$$
Note that $c_{1}$ coincides with $k$ described in (53). Now let us assume that the equation $a_{2}a_{3}\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=c_{2}\operatorname{Id}_{\mathbb{T}\mathcal{B}}$ can be solved. As a consequence of $\mathbb{A}^{2}=\pm\operatorname{Id}_{\mathbb{T}\mathcal{B}}$, the constants $c_{1},c_{2}$ must satisfy $|c_{1}+c_{2}|=1$. At the same time, proposition 3.1 implies $\mathbb{A}^{\bullet}\eta=-c_{1}\eta$, and we need $|c_{1}|=1$.
Coming back to the equation $a_{2}a_{3}\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=c_{2}\operatorname{Id}_{\mathbb{T}\mathcal{B}}$: in matrix notation, it corresponds to
$$\displaystyle\begin{split}\epsilon_{2}\epsilon_{3}\mathbb{1}+(\underline{\alpha}\underline{\Omega})^{2}&=\frac{-c_{2}\epsilon_{2}}{a_{2}a_{3}}\underline{\alpha}\underline{\Omega},\\
\epsilon_{2}\epsilon_{3}\mathbb{1}+(\underline{\Omega}\underline{\alpha})^{2}&=\frac{-c_{2}\epsilon_{2}}{a_{2}a_{3}}\underline{\Omega}\underline{\alpha}\ .\end{split}$$
(58)
Since $(\underline{\alpha}\underline{\Omega})^{2}=(\underline{\Omega}\underline{\alpha})^{2}$, the system (58) can be solved, if and only if $[\underline{\alpha},\underline{\Omega}]=0$, in which case the two equations coincide. Note that the requirement $[\underline{\alpha},\underline{\Omega}]=0$ is not present if we want $\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=0$ (which corresponds to $c_{2}=0$). This is because the right-hand sides of (58) vanish if we rewrite $\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=0$ as a matrix equation.
Thus we arrive at the following conclusion. If we do not assume a priori any relation between $\mathbb{J}_{\rho},\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}$, then for $\mathbb{A}$ to be a generalized almost structure, $\mathbb{J}_{\rho}$ must anticommute with both $\mathbb{J}_{\alpha}$ and $\mathbb{J}_{\Omega}$. The assumption $\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=0$ can be replaced with $\{\mathbb{J}_{\alpha},\mathbb{J}_{\Omega}\}=c\operatorname{Id}_{\mathbb{T}\mathcal{B}}$, at the price of requiring $[\underline{\alpha},\underline{\Omega}]=0$, as well as $|c+k|=1$ in addition to $|k|=1$, where $k$ is given by (53).
On the link between M-A structures and M-A equations. We have seen in propositions 3.2 and 3.4 how a non-degenerate M-A structure gives rise to various generalized almost structures. We have also described the link between M-A structures and M-A equations, which is provided by (3). Now we want to use this link to define generalized geometries associated with a given M-A equation. To do this, we have to take care of ambiguities (and the corresponding well-definedness issue) associated with the following observation.
Consider non-degenerate M-A structures $(\Omega,\alpha)$ and $(\Omega,\tilde{\alpha})$. By definition, $\alpha\wedge\Omega=0=\tilde{\alpha}\wedge\Omega$. Moreover, for $h\in C^{\infty}(T^{*}\mathcal{B})$
$$\displaystyle(\operatorname{\operatorname{d}}f)^{*}(h\alpha)=h|_{\operatorname{Im}\operatorname{\operatorname{d}}f}(\operatorname{\operatorname{d}}f)^{*}\alpha\ ,$$
where $f\in C^{\infty}(\mathcal{B})$ and $\operatorname{Im}\operatorname{d}f$ is understood in the sense $\operatorname{d}f\colon\mathcal{B}\to T^{*}\mathcal{B},x\mapsto\operatorname{d}_{x}f$. Consequently, if $h$ is everywhere non-zero,
$$\displaystyle(\operatorname{\operatorname{d}}f)^{*}\alpha=0\quad\iff\quad(\operatorname{\operatorname{d}}f)^{*}(h\alpha)=0\ .$$
Thus two non-degenerate M-A structures give rise to the same M-A equation, if and only if $\tilde{\alpha}=h\alpha$ for an everywhere non-vanishing $h$. Denote by $[\alpha]$ the equivalence class of $2$-forms satisfying $\alpha\wedge\Omega=0$, where
$$\displaystyle\tilde{\alpha}\in[\alpha]\quad\overset{\text{def.}}{\iff}\quad\tilde{\alpha}=h\alpha\ ,$$
for a non-vanishing $h\in C^{\infty}(T^{*}\mathcal{B})$. Then there is the following $1-1$ correspondence between equivalence classes and M-A equations
$$\displaystyle\{[\alpha]\ |\ \alpha\in\Gamma\left(\Lambda^{2}T^{*}\mathcal{B}\right)\}\longleftrightarrow\{(\operatorname{\operatorname{d}}f)^{*}\alpha=0\ |\ f\in C^{\infty}(\mathcal{B})\}\ .$$
To put this in the context of generalized geometries associated with M-A structures, we want to see how the change of representative in the equivalence class $[\alpha]$ changes the family of generalized almost structures given by proposition 3.4. Recall that the structure $\mathbb{A}$ determined by a non-degenerate M-A structure $(\Omega,\alpha)$ is defined via three generalized almost structures given by (31) with $\epsilon_{1}=-1$, satisfying the condition $\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\epsilon_{2}\pi_{\Omega}^{\#}\alpha_{\#}=0$.
Consider a non-degenerate M-A structure $(\Omega,\tilde{\alpha})$, where $\tilde{\alpha}\in[\alpha]$. Firstly, it is clear that $\mathbb{J}_{\Omega}$ is the same for both M-A structures, since $\Omega$ remains the same. Secondly, we compute the change in $\rho$ and the corresponding $\mathbb{J}_{\rho}$. Starting with the Pfaffian,
$$\displaystyle\tilde{\alpha}\wedge\tilde{\alpha}=h^{2}\operatorname{\operatorname{Pf}}(\alpha)\Omega\wedge\Omega\quad\Rightarrow\quad\operatorname{\operatorname{Pf}}(\tilde{\alpha})=h^{2}\operatorname{\operatorname{Pf}}(\alpha)\ .$$
(59)
Now using the definition (10), we have
$$\displaystyle\tilde{\rho}=\frac{1}{\sqrt{|\operatorname{\operatorname{Pf}}(\tilde{\alpha})|}}\pi_{\Omega}^{\#}\tilde{\alpha}_{\#}=\frac{h}{|h|\sqrt{|\operatorname{\operatorname{Pf}}(\alpha)|}}\pi_{\Omega}^{\#}\alpha_{\#}=\operatorname{\operatorname{sgn}}h\rho\ ,$$
(60)
with the immediate consequence
$$\displaystyle\mathbb{J}_{\tilde{\rho}}=\operatorname{\operatorname{sgn}}h\mathbb{J}_{\rho}\ .$$
(61)
Thirdly, we focus on $\mathbb{J}_{\alpha}$. We have $\tilde{\alpha}_{\#}=h\alpha_{\#}$ and $\pi_{\tilde{\alpha}}^{\#}=\frac{1}{h}\pi_{\alpha}^{\#}$, which follows from $C^{\infty}$-linearity of tensor fields and the definition of $\pi_{\alpha}^{\#}$. Thus $\alpha\mapsto\tilde{\alpha}=h\alpha$ results in
$$\displaystyle\mathbb{J}_{\alpha}=\begin{pmatrix}0&\pi_{\alpha}^{\#}\\
\epsilon_{2}\alpha_{\#}&0\end{pmatrix}\quad\longmapsto\quad\begin{pmatrix}0&\frac{1}{h}\pi_{\alpha}^{\#}\\
h\epsilon_{2}\alpha_{\#}&0\end{pmatrix}=\mathbb{J}_{\tilde{\alpha}}\ .$$
(66)
Now we are ready to investigate the anticommutativity of the generalized almost structures associated with $(\Omega,\tilde{\alpha})$. We have $\{\mathbb{J}_{\tilde{\rho}},\mathbb{J}_{\Omega}\}=\operatorname{sgn}(h)\,\{\mathbb{J}_{\rho},\mathbb{J}_{\Omega}\}$, and thus
$$\displaystyle\{\mathbb{J}_{\tilde{\rho}},\mathbb{J}_{\Omega}\}=0\quad\iff\quad\epsilon_{1}=-1\ .$$
Proceeding with the anticommutator of $\mathbb{J}_{\tilde{\rho}}$ and $\mathbb{J}_{\tilde{\alpha}}$, we have
$$\displaystyle\{\mathbb{J}_{\tilde{\rho}},\mathbb{J}_{\tilde{\alpha}}\}=\operatorname{\operatorname{sgn}}h\begin{pmatrix}0&\frac{1}{h}\left(\rho\pi_{\alpha}^{\#}+\epsilon_{1}\pi_{\alpha}^{\#}\rho^{*}\right)\\
h\left(\epsilon_{1}\epsilon_{2}\rho^{*}\alpha_{\#}+\epsilon_{2}\alpha_{\#}\rho\right)&0\end{pmatrix}\ ,$$
which implies
$$\displaystyle\{\mathbb{J}_{\tilde{\rho}},\mathbb{J}_{\tilde{\alpha}}\}=0\quad\iff\quad\begin{cases}0=\rho\pi_{\alpha}^{\#}+\epsilon_{1}\pi_{\alpha}^{\#}\rho^{*}\\
0=\epsilon_{1}\epsilon_{2}\rho^{*}\alpha_{\#}+\epsilon_{2}\alpha_{\#}\rho\end{cases}$$
On the right side of the equivalence, we have seen in the proof of proposition 3.3 that the two equations are equivalent and are satisfied, if and only if $\epsilon_{1}=-1$. Finally, we consider the anticommutator of $\mathbb{J}_{\tilde{\alpha}}$ and $\mathbb{J}_{\Omega}$, after which we discuss the overall dependence of $\mathbb{A}$ on the choice of the representative in the class $[\alpha]$. The anticommutator is
$$\displaystyle\{\mathbb{J}_{\tilde{\alpha}},\mathbb{J}_{\Omega}\}=\begin{pmatrix}h\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\frac{\epsilon_{2}}{h}\pi_{\Omega}^{\#}\alpha_{\#}&0\\
0&h\epsilon_{2}\alpha_{\#}\pi_{\Omega}^{\#}+\frac{\epsilon_{3}}{h}\Omega_{\#}\pi_{\alpha}^{\#}\end{pmatrix}\ .$$
Analogously to the previous anticommutators, the vanishing of $\{\mathbb{J}_{\tilde{\alpha}},\mathbb{J}_{\Omega}\}$ boils down to one equation
$$\displaystyle\{\mathbb{J}_{\tilde{\alpha}},\mathbb{J}_{\Omega}\}=0\quad\iff\quad h^{2}\epsilon_{3}\pi_{\alpha}^{\#}\Omega_{\#}+\epsilon_{2}\pi_{\Omega}^{\#}\alpha_{\#}=0\ .$$
In canonical coordinates, this amounts to
$$\displaystyle(\underline{\alpha}\underline{\Omega})^{2}=\frac{-\epsilon_{2}\epsilon_{3}}{h^{2}}\mathbb{1}\ .$$
(67)
Now we are ready to compare the situation between $(\Omega,\alpha)$ and $(\Omega,\tilde{\alpha})$.
From the above discussion we see that there are exactly two obstructions for $(\Omega,\alpha)$ and $(\Omega,\tilde{\alpha})$ to determine the same family of generalized almost structures. The first obstruction follows from the change of $\mathbb{J}_{\alpha}$, described in (66). The second obstruction, which is a consequence of the first one on the level of anticommutators, is contained in the equation (67). Both obstructions can be avoided by choosing $h=1$, but this corresponds to leaving the representative of $[\alpha]$ unchanged.
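The transformation rules (59) and (60) can be illustrated numerically for a constant rescaling $h$. A sketch, using the hyperbolic $\underline{\alpha}$ with $\operatorname{Pf}(\alpha)=-1$ from example 3.3 (the helper functions and their sign conventions are ours):

```python
import numpy as np

# Hyperbolic example alpha (A = 1, B = 0) and the canonical Omega
alpha = np.array([[0., 0., 0., -1.],
                  [0., 0., -1., 0.],
                  [0., 1., 0., 0.],
                  [1., 0., 0., 0.]])
Omega = np.array([[0., 0., 1., 0.],
                  [0., 0., 0., 1.],
                  [-1., 0., 0., 0.],
                  [0., -1., 0., 0.]])

def pf(a):
    # For 2-forms of the shape (41): (Omega a)^2 = -Pf(a) * Id,
    # so the Pfaffian can be read off any diagonal entry.
    return -((Omega @ a) @ (Omega @ a))[0, 0]

def rho(a):
    # Matrix of rho, up to the overall sign convention of (10)
    return (Omega @ a) / np.sqrt(abs(pf(a)))

h = -2.0
# (59): the Pfaffian rescales quadratically, Pf(h*alpha) = h^2 * Pf(alpha)
assert np.isclose(pf(h * alpha), h**2 * pf(alpha))
# (60): rho only picks up the sign of h
assert np.allclose(rho(h * alpha), np.sign(h) * rho(alpha))
# (66): alpha_# itself genuinely changes, alpha_# -> h*alpha_#
assert not np.allclose(h * alpha, alpha)
```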
Proposition 3.6.
Two non-degenerate M-A structures $(\Omega,\alpha)$ and $(\Omega,\tilde{\alpha})$, where $\tilde{\alpha}\in[\alpha]$, give rise to the same family of generalized almost structures determined by proposition 3.4, if and only if $\tilde{\alpha}=\alpha$.
The anticommutativity between $\mathbb{J}_{\tilde{\rho}}$ and $\mathbb{J}_{\Omega}$ (as well as $\mathbb{J}_{\tilde{\alpha}}$) is satisfied by the same condition for all representatives of $[\alpha]$, namely by choosing $\epsilon_{1}=-1$. Since the change of representative of $[\alpha]$ transforms $\mathbb{J}_{\rho}$ according to (61), we arrive at the following
Proposition 3.7.
There is a one-to-one correspondence between 2D symplectic non-degenerate M-A equations and equivalence classes $[\mathbb{A}]$ of generalized almost structures given by proposition 3.4 with $a_{2}=0$. Representatives of $[\mathbb{A}]$ have the same type.
Proof.
Based on the discussion preceding the proposition, the only thing to notice is that $a_{2}=0$ means that $\mathbb{A}$ is constructed only from $\mathbb{J}_{\rho}$ and $\mathbb{J}_{\Omega}$. This further implies that the condition (42), which is equivalent to anticommutativity between $\mathbb{J}_{\alpha}$ and $\mathbb{J}_{\Omega}$, does not have to be satisfied, due to (54). Of course, $a_{2}=0$ also affects the quadrics described in statements 1. and 2. of the proposition. Some of them cease to exist (such as the $2$-sheeted hyperboloid of GaPC structures corresponding to $\operatorname{Pf}(\alpha)>0$ and $(\epsilon_{2},\epsilon_{3})=(1,-1)$), and those that survive become quadric curves instead of surfaces. Finally, $\tilde{\mathbb{A}}\in[\mathbb{A}]\iff\tilde{a}_{1}=\operatorname{sgn}(h)\,a_{1}$ implies, together with (59), that the two representatives of $[\mathbb{A}]$ have the same type.
∎
The notion of normalization (7) yields a unified way of choosing a representative in $[\alpha]$. This leads to the following definition and the subsequent theorem.
Definition 3.5.
Let $(\operatorname{d}f)^{*}\alpha=0$ be a 2D symplectic Monge-Ampère equation satisfying $\operatorname{Pf}(\alpha)\neq 0$. A generalized almost geometry associated with the M-A equation is a generalized almost geometry determined by the normalized M-A structure $(\Omega,n(\alpha))$ corresponding to the equation.
Now we are ready to summarize by the following theorem, which is based on propositions 3.6 and 3.7.
Theorem 3.2.
Let $(\operatorname{d}f)^{*}\alpha=0$ be a 2D symplectic Monge-Ampère equation on $\mathcal{B}$, such that $\operatorname{Pf}(\alpha)\neq 0$ everywhere on $T^{*}\mathcal{B}$. Then there is a unique family of generalized almost geometries $\mathbb{A}\in\operatorname{End}\left(\mathbb{T}T^{*}\mathcal{B}\right)$ on $T^{*}\mathcal{B}$, determined by theorem 3.1, which are associated with the equation.
Conclusion and Outlook
We have described a construction of generalized almost geometries determined by non-degenerate 2D symplectic Monge-Ampère structures and the corresponding PDEs. Inspired by the results in [3, 7, 11, 13, 14, 24, 25], we constructed many new generalized almost geometries derived from Monge-Ampère structures and from the geometric objects they define. We have shown that non-degenerate Monge-Ampère structures give rise to quadric surfaces of generalized almost geometries. We also discussed the link between Monge-Ampère structures and Monge-Ampère equations in this context. In future work, we will be interested in constructing generalized geometries in dimensions higher than two, particularly in dimension three, where the situation is much richer. We will also focus on studying various notions of integrability, especially the notion of weak integrability in our context, as well as the closely related notion of Bismut connections [1, 13, 25]. It would be particularly interesting to find further links between (weak) integrability of generalized structures and the local equivalence problem of certain Monge-Ampère PDEs in dimension three.
References
[1]
A. Andrada and R. Villacampa.
Bismut connection on Vaisman manifolds.
Mathematische Zeitschrift, 302:1091–1126, 2022.
[2]
B. Banos.
Integrable geometries and Monge-Ampère equations.
arXiv: Differential Geometry, 2006.
[3]
B. Banos.
Monge–Ampère equations and generalized complex geometry – the
two-dimensional case.
Journal of Geometry and Physics, 57(3):841–853, 2007.
[4]
B. Banos.
Complex solutions of Monge-Ampère equations.
Journal of Geometry and Physics, 61(11):2187–2198, 2011.
[5]
B. Banos, V. Rubtsov, and I. Roulstone.
Monge–Ampère Structures and the Geometry of Incompressible
Flows.
Journal of Physics A: Mathematical and Theoretical, 49, 2016.
[6]
T. J. Courant.
Dirac Manifolds.
Transactions of the American Mathematical Society,
319(2):631–661, 1990.
[7]
M. Crainic.
Generalized complex structures and Lie brackets.
Bulletin of the Brazilian Mathematical Society, New Series,
42:559–578, 2004.
[8]
S. Delahaies.
Complex and contact geometry in geophysical fluid dynamics.
PhD thesis, University of Surrey, 2009.
[9]
D. G. Dritschel and Á. Viúdez.
A balanced approach to modelling rotating stably stratified
geophysical flows.
Journal of Fluid Mechanics, 488:123–150, 2003.
[10]
M. Gualtieri.
Branes on Poisson varieties.
arXiv: Differential Geometry, 2007.
[11]
M. Gualtieri.
Generalized complex geometry.
Annals of Mathematics, 174(1):75–123, 2011.
[12]
N. Hitchin.
Generalized Calabi–Yau Manifolds.
The Quarterly Journal of Mathematics, 54(3):281–308, 2003.
[13]
S. Hu, R. Moraru, and D. Svoboda.
Commuting Pairs, Generalized para-Kähler Geometry and Born
Geometry.
arXiv:1909.04646, 2019.
[14]
Y. Kosmann-Schwarzbach and V. Rubtsov.
Compatible Structures on Lie Algebroids and Monge-Ampère
Operators.
Acta Applicandae Mathematicae, 109(1):101–135, Jan 2010.
[15]
A. Kushner, V. Lychagin, and V. Rubtsov.
Contact Geometry and Nonlinear Differential Equations.
Encyclopedia of Mathematics and its Applications. Cambridge
University Press, 2006.
[16]
V. V. Lychagin.
Contact Geometry and Non-Linear Second-Order Differential
Equations.
Russian Mathematical Surveys, 34(1):149–180, February 1979.
[17]
V. V. Lychagin.
Differential equations on two-dimensional manifolds.
Izv. Vyssh. Uchebn. Zaved. Mat., 5:43–57, 1992.
[18]
V. V. Lychagin and V. N. Rubtsov.
Local classification of Monge-Ampère differential
equations.
Sov. Math., Dokl., 28:328–332, 1983.
[19]
L. Reidel, F. J. Rudolph, and D. Svoboda.
A Unique Connection for Born Geometry.
Communications in Mathematical Physics, 372:119–150, 2019.
[20]
I. Roulstone, B. Banos, J. D. Gibbon, and V. Roubtsov.
Kähler geometry and Burgers’ vortices.
Proceedings of Ukrainian National Academy Mathematics,
16(2):303–321, 2009.
[21]
V. Rubtsov.
Geometry of Monge–Ampère Structures, pages 95–156.
Springer International Publishing, Cham, 2019.
[22]
V. Rubtsov and I. Roulstone.
Holomorphic structures in hydrodynamical models of nearly
geostrophic flow.
Proc. R. Soc. Lond. A, 457:1519–1531, 2001.
[23]
V. N. Rubtsov and I. Roulstone.
Examples of quaternionic and Kähler structures in Hamiltonian
models of nearly geostrophic flow.
Journal of Physics A: Mathematical and General, 30(4):L63–L68,
1997.
[24]
M. Salvai.
Generalized geometric structures on complex and symplectic
manifolds.
Annali di Matematica Pura ed Applicata, 194:1505–1525, 2015.
[25]
I. Vaisman.
Generalized para-Kähler manifolds.
Differential Geometry and its Applications, 42:84–103, 2015.
[26]
Á. Viúdez and D. G. Dritschel.
An explicit potential-vorticity-conserving approach to modelling
nonlinear internal gravity waves.
Journal of Fluid Mechanics, 458:75–101, 2002. |
Inclusive search for supersymmetry using razor variables in $\Pp\Pp$ collisions at $\sqrt{s}=13\TeV$
(November 21, 2020)
Abstract
An inclusive search for supersymmetry using razor variables
is performed in events with four or more jets and no more than one lepton.
The results are based on a sample of proton-proton collisions corresponding to an
integrated luminosity of 2.3\fbinv collected with the CMS experiment at a
center-of-mass energy of $\sqrt{s}=13\TeV$. No significant excess
over the background prediction is observed in data, and 95%
confidence level exclusion limits are placed on the masses of new
heavy particles in a variety of simplified models.
Assuming that pair-produced gluinos decay only via three-body
processes involving third-generation quarks plus a neutralino, and
that the neutralino is the lightest supersymmetric particle with a
mass of 200\GeV, gluino masses below 1.6\TeV are excluded for any
branching fractions for the individual gluino decay modes. For some
specific decay mode scenarios, gluino masses up to 1.65\TeV are
excluded. For decays to first- and second-generation quarks and a
neutralino with a mass of 200\GeV, gluinos with masses up to 1.4\TeV are excluded. Pair production of top squarks decaying to a top quark
and a neutralino with a mass of 100\GeV is excluded for top squark masses
up to 750\GeV.
\cmsNoteHeader
SUS-15-004
0.1 Introduction
Supersymmetry (SUSY) is a proposed extended spacetime symmetry that
introduces a bosonic (fermionic) partner for every fermion (boson) in the
standard model
(SM) [1, 2, 3, 4, 5, 6, 7, 8, 9].
Supersymmetric extensions of the SM are particularly compelling
because they yield solutions to the gauge hierarchy problem without the
need for large fine tuning of fundamental parameters [10, 11, 12, 13, 14, 15],
exhibit gauge coupling unification [16, 17, 18, 19, 20, 21],
and can provide weakly interacting particle candidates for dark matter [22, 23].
For SUSY to provide a “natural” solution to the gauge hierarchy
problem, the three Higgsinos, two neutral and one charged, are
expected to be light, and two top squarks, one bottom squark, and the gluino must have masses below a few
TeV, making them potentially accessible at the CERN LHC. Previous searches for SUSY by the
CMS [24, 25, 26, 27, 28, 29, 30]
and ATLAS
[31, 32, 33, 34, 35, 36, 37] Collaborations
have probed SUSY particle masses near the TeV scale, and the increase in the center-of-mass
energy of the LHC from 8 to 13\TeV provides an opportunity to
significantly extend the sensitivity to higher SUSY particle
masses [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51].
In R-parity [52] conserving SUSY scenarios, the lightest SUSY particle (LSP) is stable and assumed
to be weakly interacting. For many of these models, the experimental signatures at the LHC
are characterized by an abundance of jets and a large transverse momentum imbalance,
but the exact form of the final state can vary significantly,
depending on the values of the unconstrained model parameters. To ensure sensitivity
to a broad range of SUSY parameter space, we adopt an inclusive search
strategy, categorizing events according to the number of identified leptons and \PQb-tagged jets. The razor kinematic variables $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ [53, 54]
are used as search variables and are generically sensitive to
pair production of massive particles with subsequent direct or cascading
decays to weakly interacting stable particles. Searches for SUSY and
other beyond the SM phenomena using razor variables have been performed by both the
CMS [55, 53, 54, 56, 57, 58] and
ATLAS [59, 60]
Collaborations in the past.
We interpret the results of the inclusive search using
simplified SUSY scenarios for pair production of gluinos and top
squarks. First, we consider models in which the gluino undergoes
three-body decay, either to a
bottom or top quark-antiquark pair and the lightest neutralino $\chiz_{1}$, assumed to be the lightest SUSY
particle; or to a bottom quark (antiquark), a top antiquark (quark), and the
lightest chargino $\chipm_{1}$, assumed to be the next-to-lightest SUSY
particle (NLSP). The NLSP is assumed to have a mass that is $5\GeV$ larger
than the mass of the LSP, motivated by the fact that in many natural
SUSY scenarios the lightest chargino and the two lightest neutralinos
are Higgsino-like and quasi-degenerate [61]. The NLSP
decays to an LSP and an off-shell
$\PW$ boson, whose decay products mostly have too low momentum to be
identifiable. The specific choice of the NLSP-LSP mass splitting does not
have a large impact on the results of the interpretation.
The full range of branching fractions to the three possible decay modes ($\bbbar\chiz_{1}$,
$\PQb\cPaqt\chip_{1}$ or $\cPaqb\cPqt\chim_{1}$, and $\ttbar\chiz_{1}$) is considered, assuming that these sum to 100%. We also consider
a model in which the gluino decays to a first- or second-generation quark-antiquark pair and the LSP.
Finally, we consider top squark pair production with the top squark decaying to
a top quark and the LSP. Diagrams of these simplified model processes are shown in Fig. 1.
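The scan over the full range of gluino branching fractions, constrained to sum to 100%, can be enumerated on a grid. The sketch below is our own illustration (the step size and the ordering of the three modes are illustrative, not taken from the analysis):

```python
def branching_fraction_grid(step=0.25):
    """Enumerate branching-fraction points (B1, B2, B3) for the three
    gluino decay modes, constrained to sum to 100%.

    B1, B2, B3 stand for the bb+neutralino, btw-via-chargino, and
    tt+neutralino modes; the step size is illustrative only.
    """
    n = round(1.0 / step)
    points = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            b1, b2 = i * step, j * step
            # Third branching fraction is fixed by the sum constraint.
            points.append((b1, b2, 1.0 - b1 - b2))
    return points
```

With a step of 0.25 this yields the 15 points of a triangular grid, each summing to unity.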
This paper is organized as follows. Section 0.2
presents an overview of the CMS detector. A description of simulated
signal and background samples is given in
Section 0.3. Section 0.4 describes
physics object reconstruction and the event
selection. Section 0.5 describes the analysis
strategy and razor variables, and the background estimation techniques
used in this analysis are described in
Section 0.6. Section 0.7 covers the
systematic uncertainties. Finally, our results and their interpretation
are presented in Section 0.8, followed by a summary in Section 0.9.
0.2 The CMS detector
The central feature of the CMS detector is a
superconducting solenoid of 6\unitm internal diameter, providing a
magnetic field of 3.8\unitT. Within the solenoid
volume are a silicon pixel and a silicon strip tracker, a
lead tungstate crystal electromagnetic calorimeter (ECAL), and a
brass and scintillator hadron calorimeter (HCAL), each comprising a barrel and
two endcap sections. Muons are measured in gas-ionization detectors
embedded in the magnet steel flux-return yoke outside the
solenoid. Extensive forward calorimetry complements the coverage
provided by the barrel and endcap detectors. Jets are
reconstructed within the pseudorapidity region $\abs{\eta}<5$
covered by the ECAL and HCAL,
where $\eta\equiv-\ln[\tan(\theta/2)]$ and $\theta$ is the
polar angle of the trajectory of the particle with respect to
the counterclockwise beam direction.
Electrons and muons are reconstructed in the region with
$\abs{\eta}<2.5$ and $2.4$, respectively.
Events are selected by a two-level trigger system. The first
level is based on a hardware
trigger, followed by a software-based high level trigger. A more
detailed description of the CMS detector, together with a definition
of the coordinate system used and the relevant kinematic variables,
can be found in Ref. [62].
0.3 Simulated event samples
Simulated Monte Carlo (MC) samples are used for modeling of the SM backgrounds
in the search regions and for calculating the selection efficiencies for
SUSY signal models. The production of $\ttbar$+jets, $\PW$+jets, $\cPZ$+jets, $\cPgg$+jets,
and QCD multijet events, as well as production of gluino and top squark
pairs, is simulated with the MC generator \MADGRAPH v5 [63]. Single
top quark events are modeled at next-to-leading order (NLO) with \MADGRAPH_amc@nlo v2.2 [64]
for the $s$-channel, and with \POWHEG v2 [65, 66]
for the $t$-channel and $\PW$-associated production. Contributions from
$\ttbar\PW$, $\ttbar\cPZ$ are also simulated with
\MADGRAPH_amc@nlo v2.2. Simulated events are interfaced with \PYTHIA v8.2 [67] for fragmentation and parton
showering.
The NNPDF3.0LO and NNPDF3.0NLO [68] parton distribution functions (PDF) are
used, respectively, with \MADGRAPH, and with \POWHEG and \MADGRAPH_amc@nlo.
The SM background events are simulated using a \GEANTfour-based model [69] of the CMS detector.
The simulation of SUSY signal model events is performed using the CMS fast
simulation package [70]. All simulated events include the
effects of pileup, i.e. multiple $\Pp\Pp$ collisions within the same or
neighboring bunch crossings, and are processed with the same chain of
reconstruction programs as is used for collision data. Simulated events are weighted to
reproduce the observed distribution of pileup vertices in the data set, calculated based on the measured
instantaneous luminosity.
The SUSY signal production cross sections are calculated to next-to-leading
order (NLO) plus next-to-leading-logarithm (NLL)
accuracy [71, 72, 73, 74, 75, 76], assuming all
SUSY particles other than those in the relevant diagram to be too
heavy to participate in the interaction. The NLO+NLL cross section and
its associated uncertainty [76] are used to derive
the exclusion limit on the masses of the SUSY
particles. The hard scattering was generated with \MADGRAPH with up to
two extra partons to model initial-state radiation at the matrix element level, and
simulated events were interfaced to \PYTHIA for the showering,
fragmentation, and hadronization steps.
0.4 Object reconstruction and selection
Physics objects are defined using the particle-flow (PF)
algorithm [77, 78]. The PF
algorithm reconstructs and identifies each individual particle with an optimized
combination of information from the various elements of the CMS
detector. All reconstructed PF candidates are clustered into jets using the
anti-$\kt$ algorithm [79, 80]
with a distance parameter
of 0.4. The jet momentum is determined as the vector sum of all particle momenta
in the jet, and jet-energy corrections are derived from simulation and
confirmed by in-situ measurements of the energy balance in dijet
and photon+jet events. Jets are required to pass loose identification criteria
on the jet composition designed to reject spurious signals arising from noise and
failures in the event reconstruction [81, 82].
For this search, we consider jets with transverse momentum $\pt>40\GeV$ and
$|\eta|<3.0$. The missing transverse momentum vector \ptvecmiss is defined as the projection on the plane perpendicular to the beams of
the negative vector sum of the momenta of all reconstructed PF
candidates in an event. Its magnitude is referred to as the missing
transverse energy \ETmiss.
Electrons are reconstructed by associating a cluster of
energy deposited in the ECAL with a reconstructed track [83],
and are required to have $\pt>5\GeV$ and $|\eta|<2.5$. A “tight” selection
used to identify prompt electrons is based on requirements
on the electromagnetic shower shape, the geometric matching of
the track to the calorimeter cluster, the track quality and impact
parameter, and isolation. The isolation of electrons and muons is
defined as the scalar sum of the transverse momenta of all neutral and
charged PF candidates within a cone $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^{2}+(\Delta\phi)^{2}}}$ along the lepton
direction. The variable is corrected for the effects of pileup using an
effective area correction [84], and the cone size
$\Delta R$ shrinks with increasing lepton $\pt$ according to
$$\displaystyle\Delta R=\begin{cases}0.2,&\pt\leq 50\GeV\\
{10\GeV}/{\pt},&50<\pt\leq 200\GeV\\
0.05,&\pt>200\GeV.\\
\end{cases}$$
(1)
The use of the lepton $\pt$ dependent isolation cone enhances the
efficiency for identifying leptons in events containing a large amount of hadronic
energy, such as those with $\ttbar$ production. For tight electrons, the isolation is required to be less than $10\%$ of
the electron $\pt$. The selection efficiency for tight electrons
increases from $60\%$ for $\pt$ around $20\GeV$
to $70\%$ for $\pt$ around $40\GeV$ and to $80\%$ for $\pt$ above $50\GeV$.
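The shrinking cone of Eq. (1) is straightforward to implement; a minimal sketch (the function name is ours):

```python
def isolation_cone_radius(lepton_pt):
    """Lepton-pt-dependent isolation cone size Delta R, per Eq. (1).

    lepton_pt is in GeV. Note the cone size is continuous across the
    boundaries: 10/50 = 0.2 and 10/200 = 0.05.
    """
    if lepton_pt <= 50.0:
        return 0.2
    if lepton_pt <= 200.0:
        return 10.0 / lepton_pt
    return 0.05
```

The piecewise form keeps a fixed cone for soft leptons and shrinks it at high $\pt$, which is what preserves efficiency in busy hadronic events.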
To improve the purity of all-hadronic signals in the zero-lepton event categories, a looser “veto”
selection is also defined. For this selection, electrons are required to have $\pt>5\GeV$. The output of a boosted decision tree is used to identify electrons based on shower
shape and track information [83].
For electrons with $\pt>20\GeV$, the isolation is required to be less than $20\%$ of the
electron $\pt$. For electrons with $\pt$ between $5$ and $20\GeV$, the value of the
isolation, computed by summing the $\pt$ of all particle-flow candidates within a
$\Delta R$ cone of 0.3, is required to be less than $5\GeV$. For the
veto electron selection, the efficiency increases from $60\%$ for
$\pt$ around $5\GeV$ to $80\%$ for $\pt$ around $15\GeV$ and $90\%$ for $\pt$ above $20\GeV$.
Muons are reconstructed by combining tracks found in the muon system with
corresponding tracks in the silicon detectors [85],
and are required to have $\pt>5\GeV$ and $|\eta|<2.4$. Muons are identified
based on the quality of the track fit, the number of detector hits used in the
tracking algorithm, and the compatibility between track
segments. As for electrons, we define “tight” and “veto” muon selections. The absolute value of the 3D impact
parameter significance of the muon track, which is defined as the ratio of the impact
parameter to its estimated uncertainty, is required to be less than
4. For both tight and veto muons with
$\pt>20\GeV$ the isolation is required to be less than $20\%$
of the muon $\pt$, while for veto muons with $\pt$ between $5$ and $20\GeV$
the isolation computed using a $\Delta R$ cone of $0.4$
is required to be less than $10\GeV$. For tight muons we require $d_{0}<0.2\unit{cm}$, where $d_{0}$ is the transverse impact parameter of the muon
track, while this selection is not applied for veto muons.
The selection efficiency for tight muons increases from $65\%$ for
$\pt$ around $20\GeV$ to $75\%$ for $\pt$ around $40\GeV$ and to $80\%$ for $\pt$ above $50\GeV$.
For the veto muon selection, the efficiency increases from $85\%$ for
$\pt$ around $5\GeV$ to $95\%$ for $\pt$ above $20\GeV$.
We additionally reconstruct and identify hadronically decaying $\tau$
leptons ($\tau_{\mathrm{h}}$) to further enhance the all-hadronic purity
of the zero-lepton event categories, using the hadron-plus-strips algorithm [86], which
identifies $\tau$ decay modes
with one charged hadron and up to two neutral pions, or three charged hadrons.
The $\tau_{\mathrm{h}}$ candidate is required to have
$\pt>20\GeV$, and the isolation, defined as the $\pt$ sum of other nearby PF candidates, must be below a certain threshold.
The loose cutoff-based selection [86] is used and results in an efficiency
of about $50\%$ for successfully reconstructed $\tau_{\mathrm{h}}$ decays.
To identify jets originating from \PQb-hadron decays, we use the
combined secondary vertex \PQb jet tagger, which uses the inclusive
vertex finder to select \PQb jets [87, 88]. The “medium”
working point is used to define the event categories for the search signal regions,
and for jets with $\pt$ between $40$ and $200\GeV$ yields an
efficiency of approximately $70\%$ for \PQb jets and an average
misidentification probability of $1.5\%$ for jets originating from light-flavor
quarks or gluons in typical background events relevant for this search.
Photon candidates are reconstructed from clusters of energy deposits in
the ECAL. They are identified using
selections on the transverse shower width $\sigma_{\eta\eta}$ as defined
in Ref. [89], and the hadronic to electromagnetic energy ratio ($H/E$).
Photon isolation, defined as the scalar $\pt$ sum of charged particles within a cone of
$\Delta R<0.3$, must be less than $2.5\GeV$. Finally, photon candidates that share
the same energy cluster as an identified electron are vetoed.
0.5 Analysis strategy and event selection
We select events with four or more jets, using search categories
defined by the number of leptons and \PQb-tagged jets in the event.
The Multijet category consists of events with no electrons or muons passing the tight or veto selection, and no selected $\tau_{\mathrm{h}}$.
Events in the one electron (muon) category, denoted as the Electron Multijet (Muon Multijet) category,
are required to have one and only one electron (muon) passing the tight selection.
Within these three event classes, we divide the events further into categories depending on
whether the events have zero, one, two, or more than two \PQb-tagged jets.
Each event in the above categories is treated as a dijet-like event by grouping selected leptons
and jets in the event into two “megajets”, whose four-momenta are
defined as the vector sum of the four-momenta of their constituent physics objects [55]. The
clustering algorithm selects the grouping that minimizes the sum of the squares of the invariant masses
of the two megajets. We define the razor variables $M_{\mathrm{R}}$ and $M_{\mathrm{T}}^{\mathrm{R}}$ as
$$\displaystyle M_{\mathrm{R}}\equiv\sqrt{(\abs{\vec{p}^{\,\mathrm{j}_{1}}}+\abs{\vec{p}^{\,\mathrm{j}_{2}}})^{2}-({p}^{\,\mathrm{j}_{1}}_{z}+{p}^{\,\mathrm{j}_{2}}_{z})^{2}},$$
(2)
$$\displaystyle M_{\mathrm{T}}^{\mathrm{R}}\equiv\sqrt{\frac{\ETm(\pt^{\,\mathrm{j}_{1}}+\pt^{\,\mathrm{j}_{2}})-\ptvecmiss\cdot(\ptvec^{\,\mathrm{j}_{1}}+\ptvec^{\,\mathrm{j}_{2}})}{2}},$$
(3)
where $\vec{p}^{\,\mathrm{j}_{i}}$, $\ptvec^{\,\mathrm{j}_{i}}$, and
$p^{\,\mathrm{j}_{i}}_{z}$ are the momenta of the $i$th megajet and its transverse and longitudinal components with
respect to the beam axis, respectively. The dimensionless variable $\mathrm{R}$ is defined as
$$\mathrm{R}\equiv\frac{M_{\mathrm{T}}^{\mathrm{R}}}{M_{\mathrm{R}}}.$$
(4)
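The megajet clustering and the razor variables of Eqs. (2)-(4) can be sketched as follows, for four-vectors in the $(E, p_x, p_y, p_z)$ convention. The brute-force enumeration of partitions is our own illustrative implementation (practical only at small object multiplicities), not the analysis code:

```python
import math

def inv_mass2(objs):
    """Squared invariant mass of summed (E, px, py, pz) four-vectors."""
    e, px, py, pz = (sum(o[k] for o in objs) for k in range(4))
    return e * e - px * px - py * py - pz * pz

def make_megajets(objects):
    """Group objects into two megajets, minimizing the sum of the squares
    of the two megajet invariant masses (brute force over partitions)."""
    n = len(objects)
    best = None
    for mask in range(1, 2 ** (n - 1)):  # last object fixed in group 2
        g1 = [o for i, o in enumerate(objects) if mask >> i & 1]
        g2 = [o for i, o in enumerate(objects) if not (mask >> i & 1)]
        cost = inv_mass2(g1) + inv_mass2(g2)
        if best is None or cost < best[0]:
            best = (cost, g1, g2)
    total = lambda g: tuple(sum(o[k] for o in g) for k in range(4))
    return total(best[1]), total(best[2])

def razor_variables(j1, j2, met):
    """M_R (Eq. 2) and R^2 (via Eqs. 3-4) from two megajet four-vectors
    and the missing transverse momentum vector met = (mex, mey)."""
    p = lambda j: math.sqrt(j[1] ** 2 + j[2] ** 2 + j[3] ** 2)
    pt = lambda j: math.hypot(j[1], j[2])
    m_r = math.sqrt((p(j1) + p(j2)) ** 2 - (j1[3] + j2[3]) ** 2)
    etmiss = math.hypot(met[0], met[1])
    dot = met[0] * (j1[1] + j2[1]) + met[1] * (j1[2] + j2[2])
    m_t_r = math.sqrt((etmiss * (pt(j1) + pt(j2)) - dot) / 2.0)
    return m_r, (m_t_r / m_r) ** 2
```

For a balanced event with little genuine \ETmiss the two megajets come out back-to-back and $\mathrm{R}^{2}$ is small, which is exactly why the variable suppresses QCD multijet background.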
For a typical SUSY decay of a superpartner $\PSq$ decaying into an
invisible neutralino $\chiz_{1}$ and the standard model partner $\Pq$,
the mass variable $M_{\mathrm{R}}$ peaks at a characteristic mass scale [53, 54]
${(m_{\PSq}^{2}-m_{\chiz_{1}}^{2})/m_{\PSq}}$. For
standard model background processes, the distribution of $M_{\mathrm{R}}$ has an
exponentially falling shape. The variable $\mathrm{R}^{2}$ is
related to the missing transverse energy and is used to
suppress QCD multijet background.
The events of interest are triggered either by the presence of a high-$\pt$ electron or muon, or
through dedicated hadronic triggers requiring the presence of at least two highly energetic jets
and with loose thresholds on the razor variables $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$. The single-electron (single-muon) triggers require at least one isolated electron
(muon) with $\pt>$ 23 (20)\GeV. The isolation requirement is dropped for electrons (muons) with
$\pt>$ 105 (50)\GeV. The efficiencies for the single electron (muon) triggers
are above 70% for $\pt$ around 25 (20)\GeV, and reach a
plateau above 97% for $\pt>40\GeV$. The efficiencies for the single electron trigger were measured in data and simulation and
found to be in good agreement, as were the corresponding efficiencies for muons. Corrections for residual differences in trigger
efficiency between data and MC simulation are applied to simulated samples.
The hadronic razor trigger requires at least two jets with $\pt>80\GeV$ or at least
four jets with $\pt>40\GeV$. The events are also required to pass selections on the
razor variables $M_{\mathrm{R}}>200\GeV$ and $\mathrm{R}^{2}>0.09$ and on the product
$(M_{\mathrm{R}}+300\GeV)\times(\mathrm{R}^{2}+0.25)>240\GeV$.
The efficiency of the hadronic razor trigger for events passing the baseline
$M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ selections described below is $97\%$ and is consistent
with the prediction from MC simulation.
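The hadronic trigger requirements above can be collected into a simple offline predicate; a sketch with our own function name and units of GeV:

```python
def passes_hadronic_razor_trigger(jet_pts, m_r, r2):
    """Emulate the hadronic razor trigger selection (illustrative sketch).

    jet_pts: jet transverse momenta in GeV; m_r in GeV; r2 dimensionless.
    """
    # Jet requirement: two jets above 80 GeV or four jets above 40 GeV.
    jet_req = (sum(pt > 80.0 for pt in jet_pts) >= 2
               or sum(pt > 40.0 for pt in jet_pts) >= 4)
    # Razor requirement, including the product cut on (M_R, R^2).
    razor_req = (m_r > 200.0 and r2 > 0.09
                 and (m_r + 300.0) * (r2 + 0.25) > 240.0)
    return jet_req and razor_req
```

The product cut removes the corner of phase space where both $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ are barely above threshold.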
For events in the Electron or Muon Multijet categories, the search region
is defined by the selections $M_{\mathrm{R}}>400\GeV$ and $\mathrm{R}^{2}>0.15$.
The $\pt$ of the electron (muon)
is required to be larger than 25 (20)\GeV. To suppress backgrounds from the $\PW(\ell\nu)$+jets
and $\ttbar$ processes, we require that the transverse mass
$M_{\mathrm{T}}$ formed by the lepton momentum
and \ptvecmiss be larger than $120\GeV$.
For events in the Multijet category, the search uses a region defined by the
selections $M_{\mathrm{R}}>500\GeV$ and $\mathrm{R}^{2}>0.25$ and requires the presence of at least
two jets with $\pt>80\GeV$ within $|\eta|<3.0$, for compatibility with the requirements
imposed by the hadronic razor triggers. For QCD multijet background events, the
\ETmiss arises mainly from mismeasurement of
the energy of one of the leading jets. In such cases, the two razor
megajets tend to lie in a back-to-back configuration. Therefore, to suppress the QCD multijet
background we require that the azimuthal angle $\Delta\phi_{\mathrm{R}}$ between the two razor
megajets be less than $2.8$ radians.
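The category-specific baseline selections can be summarized in one predicate; a sketch with our own function and argument names (momenta in GeV, angles in radians):

```python
def in_search_region(category, m_r, r2, m_t=None,
                     n_jets80=0, dphi_razor=None):
    """Baseline search-region requirements per event category (sketch)."""
    if category in ("ElectronMultijet", "MuonMultijet"):
        # Lepton categories: the M_T cut suppresses W(lv)+jets and ttbar.
        return m_r > 400.0 and r2 > 0.15 and m_t is not None and m_t > 120.0
    if category == "Multijet":
        # Hadronic category: two hard jets for trigger compatibility,
        # and a Delta-phi cut against back-to-back QCD topologies.
        return (m_r > 500.0 and r2 > 0.25 and n_jets80 >= 2
                and dphi_razor is not None and abs(dphi_razor) < 2.8)
    return False
```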
Finally, events containing signatures consistent with beam-induced background or anomalous noise
in the calorimeters are rejected using dedicated
filters [90, 91].
0.6 Background modeling
The main background processes in the search regions considered are
$\PW(\ell\nu)$+jets (with $\ell=\Pe$, $\Pgm$, $\tau$), $\cPZ(\nu\bar{\nu})$+jets, $\ttbar$, and QCD multijet production. For event categories with
zero \PQb-tagged jets, the background is primarily composed of the $\PW(\ell\nu)$+jets and $\cPZ(\nu\bar{\nu})$+jets
processes, while for categories with two or more \PQb-tagged jets it is
dominated by the $\ttbar$ process. There are also very small contributions from
the production of two or three electroweak bosons and from the production of $\ttbar$ in
association with a $\PW$ or $\cPZ$ boson. These
contributions are summed and labeled “Other” in Figs. 2–5.
We model the background using two independent methods based on control samples in data with entirely
independent sets of systematic assumptions. The first method (A) is based on the use of
dedicated control regions that isolate specific background processes in order
to control and correct the predictions of the MC simulation.
The second method (B) is based on a fit to an assumed functional
form for the shape of the observed data distribution in the two-dimensional $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ plane.
These two background predictions are compared and cross-checked against each other in order
to significantly enhance the robustness of the background estimate.
0.6.1 Method A: simulation-assisted background prediction from data
The simulation-assisted method defines dedicated control regions that isolate
each of the main background processes. Data in these control regions are used
to control and correct the accuracy of the MC prediction for each of the
background processes. Corrections for the jet energy response and lepton momentum response
are applied to the MC, as are corrections for the trigger
efficiency and the selection efficiency of electrons, muons, and \PQb-tagged jets. Any
disagreement observed in these control regions is then interpreted as an inaccuracy of the
MC in predicting the hadronic recoil spectrum and jet multiplicity.
Two alternative formulations of the method are typically used in searches for
new physics [25, 30, 31].
In the first formulation, the data control region yields are extrapolated to the search regions
via translation factors derived from simulation. In the second formulation, simulation to data
correction factors are derived in bins of the razor variables $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$
and are then applied to the simulation prediction of the search region yields.
The two formulations are equivalent, and the choice between them
depends primarily on the convenience of the given data processing sequence.
In both cases, the contributions from background processes other than the one
under study are subtracted using the MC prediction.
We employ the first formulation of the method for the estimate of the QCD background,
while the second formulation is used for modeling all other major backgrounds.
Details of the control regions used for each of
the dominant background processes are described in the subsections below.
Finally, the small contribution from rare background processes such as $\ttbar\cPZ$ is
modeled using simulation. Systematic uncertainties on the cross sections of these processes
are propagated to the final result.
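The second formulation of Method A amounts to per-bin data/MC scale factors with the non-target backgrounds subtracted. A schematic version, with our own simplified interface keyed by $(M_{\mathrm{R}},\mathrm{R}^{2})$ bin:

```python
def mc_correction_factors(data_yields, target_mc, other_mc):
    """Per-bin data/MC correction factors for the process under study
    (Method A, second formulation, schematic).

    Each input maps an (m_r_bin, r2_bin) key to an event yield; other
    backgrounds are subtracted from data using their MC prediction.
    """
    factors = {}
    for bin_key, n_data in data_yields.items():
        n_subtracted = n_data - other_mc.get(bin_key, 0.0)
        n_target = target_mc.get(bin_key, 0.0)
        # Fall back to unity where the target MC yield is empty.
        factors[bin_key] = n_subtracted / n_target if n_target > 0 else 1.0
    return factors
```

The statistical uncertainty of each factor, driven by the control-region yield in that bin, is what dominates the background prediction at large $M_{\mathrm{R}}$.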
The $\ttbar$ and $\PW(\ell\nu)$+jets background
The control region to isolate the $\ttbar$ and $\PW(\ell\nu)$+jets processes is defined by requiring
at least one tight electron or muon. To suppress QCD multijet
background, the quantities \ETmiss and $M_{\mathrm{T}}$ are both required to be larger than $30\GeV$. To minimize
contamination from potential SUSY processes and to explicitly separate the control region
from the search regions, we require $M_{\mathrm{T}}<100\GeV$. The $\ttbar$ enhanced control region is defined by requiring that there be at
least one \PQb-tagged jet, and the $\PW(\ell\nu)$+jets enhanced control region is defined by
requiring no such \PQb-tagged jets. Other than these \PQb-tagged jet
requirements, we place no explicit requirement on the number of jets in
the event, in order to benefit from significantly larger control
samples.
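The transverse mass entering these selections is not defined in the text above; the sketch below assumes the standard lepton-\ptvecmiss definition $M_{\mathrm{T}}=\sqrt{2\pt^{\ell}\ETmiss(1-\cos\Delta\phi)}$:

```python
import math

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """Standard lepton-ptmiss transverse mass:
    M_T = sqrt(2 * pT(lep) * MET * (1 - cos(dphi)))."""
    dphi = lep_phi - met_phi
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))

# A back-to-back lepton and ptmiss of 40 GeV each give M_T close to
# 80 GeV, i.e. inside the 30 < M_T < 100 GeV control-region window.
mt = transverse_mass(40.0, 0.0, 40.0, math.pi)
```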
We first derive corrections for the $\ttbar$ background, and then measure
corrections for the $\PW(\ell\nu)$+jets process after first applying the corrections already obtained
for the $\ttbar$ background in the $\PW(\ell\nu)$+jets control region.
As discussed above, the corrections to the MC prediction are derived in two-dimensional bins of the
$M_{\mathrm{R}}$-$\mathrm{R}^{2}$ plane. We observe that the $M_{\mathrm{R}}$ spectrum predicted by the simulation
falls off less steeply than the control region data for both the $\ttbar$ and $\PW(\ell\nu)$+jets
processes, as shown in Fig. 2.
In Fig. 3, we show the two dimensional $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ distributions
for data and simulation in the $\PW(\ell\nu)$+jets control region. The statistical uncertainties in the correction factors
due to limited event yields in the control region bins are propagated and dominate the total uncertainty
of the background prediction. For bins at large $M_{\mathrm{R}}$ (near $1000\GeV$), the statistical uncertainties
range between $15\%$ and $50\%$.
Corrections to the MC simulation are first measured and applied as a function of $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$, inclusively in the
number of selected jets. As our search region requires a higher multiplicity of jets, an additional correction factor
is required to accurately model the jet multiplicity. We measure this additional
correction factor to be $0.90\pm 0.03$ by comparing the data and the MC prediction in the $\PW(\ell\nu)$+jets and $\ttbar$
control region for events with four or more jets.
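The two-dimensional data/MC correction factors described above can be sketched as follows. The bin edges are hypothetical, chosen only to illustrate the binning in the razor plane:

```python
import numpy as np

# Hypothetical bin edges in the razor variables.
mr_edges = np.array([400.0, 500.0, 600.0, 800.0, 1000.0, 4000.0])
r2_edges = np.array([0.15, 0.20, 0.25, 0.30, 0.41, 1.50])

def correction_factors(data_mr, data_r2, mc_mr, mc_r2, mc_w):
    """Data/MC ratio in each (M_R, R^2) bin of the control region;
    bins with no MC entries default to a factor of 1."""
    data_h, _, _ = np.histogram2d(data_mr, data_r2,
                                  bins=[mr_edges, r2_edges])
    mc_h, _, _ = np.histogram2d(mc_mr, mc_r2,
                                bins=[mr_edges, r2_edges], weights=mc_w)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(mc_h > 0, data_h / mc_h, 1.0)
```

The resulting matrix of factors is then applied bin-by-bin to the MC prediction in the search region.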
To control for possible simulation mismodeling that is correlated between the number of jets and the razor
variables, we perform additional cross-checks of the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ distributions in bins of
the number of \PQb-tagged jets in the $\ttbar$ and $\PW(\ell\nu)$+jets
control regions for events with four or more jets. For bins that show statistically significant disagreement,
the size of the disagreement is propagated as a systematic uncertainty. The typical range of these additional
systematic uncertainties is between $10\%$ and $30\%$.
The $\ttbar$ and $\PW(\ell\nu)$+jets backgrounds in the zero-lepton Multijet event category are
composed of events with at least one lepton in the final state, which is either out of
acceptance or fails the veto electron, muon, or $\tau_{\mathrm{h}}$ selection.
Two additional control regions are defined in order to control the accuracy of the modeling of the
acceptance and efficiency for selecting electrons or muons, and $\tau_{\mathrm{h}}$.
We require events in the veto lepton ($\tau_{\mathrm{h}}$ candidate) control region to have at least one veto electron or muon
($\tau_{\mathrm{h}}$ candidate) selected. The $M_{\mathrm{T}}$ is required to be between $30$ and $100\GeV$ in order to
suppress QCD multijet background and contamination from potential new physics processes. At least two jets
with $\pt>80\GeV$ and at least four jets with $\pt>40\GeV$ are required,
consistent with the search region requirements. Finally, we consider events with
$M_{\mathrm{R}}>400\GeV$ and $\mathrm{R}^{2}>0.25$. The distributions of the veto lepton $\pt$ for events in the veto
lepton and veto $\tau_{\mathrm{h}}$ control regions are shown in Fig. 4
and demonstrate that the MC describes the observed data well.
Discrepancies observed in any bin are propagated as systematic uncertainties in the
prediction of the $\ttbar$ and $\PW(\ell\nu)$+jets backgrounds in the Multijet category search region.
The $\ttbar$ background in the Electron and Muon Multijet categories is primarily from
the dilepton decay mode as the $M_{\mathrm{T}}$ requirement highly suppresses the semi-leptonic decay
mode. Corrections to the MC simulation derived from the $\ttbar$
control region are therefore measured primarily from semi-leptonic decays.
We define an additional control region enhanced in dilepton $\ttbar$ decays
to confirm that the MC corrections derived from a region dominated by
semi-leptonic decays also apply to dilepton decays. We select events with two tight leptons,
both with $\pt>30\GeV$, $\ETmiss>40\GeV$, and
dilepton mass larger than $20\GeV$. For events with two leptons of the same flavor, we additionally
veto events with a dilepton mass between $76$ and $106\GeV$ in order to suppress background from $\cPZ$ boson
decays. At least one \PQb-tagged jet is required to enhance the purity for the $\ttbar$
process. Finally, we mimic the phase space of our search region in the Electron and
Muon Multijet categories by treating one lepton as having failed the identification criteria
and applying the $M_{\mathrm{T}}$ requirement using the other lepton. The correction factors measured in the
$\ttbar$ control region are applied to the MC prediction of the dilepton
$\ttbar$ cross-check region in bins of $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$.
In Fig. 3
we show the $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ distribution for the dilepton $\ttbar$ cross-check region
in events with four or more jets, and we observe no significant
mismodeling by the simulation, indicating that the measured corrections are accurate.
The $\cPZ\to\nu\bar{\nu}$ background
Three independent control regions are used to predict the $\cPZ(\nu\bar{\nu})$+jets background,
relying on the assumption that the hadronic recoil spectrum and the jet multiplicity distribution
of the $\cPZ(\nu\bar{\nu})$+jets process are similar to those of the $\PW(\ell\nu)$+jets and $\cPgg$+jets
processes. The primary and most populated control region is the $\cPgg$+jets control region,
defined by selecting events with at least one photon passing loose identification and
isolation requirements. The events are triggered using single-photon triggers, and
the photon is required to have $\pt>50\GeV$. The momentum of the photon candidate
in the transverse plane is added vectorially to $\ptvecmiss$
in order to simulate an invisible particle, as one would have in the case of a
$\cPZ\to\nu\bar{\nu}$ decay, and the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ variables are computed according to
this invisible decay scenario.
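The vector addition of the photon momentum to $\ptvecmiss$ can be sketched as a minimal illustration (variable names are ours):

```python
import math

def emulate_invisible(met, met_phi, obj_pt, obj_phi):
    """Add an object's transverse momentum vectorially to ptmiss,
    emulating the object being invisible, as for a Z -> nu nubar decay."""
    px = met * math.cos(met_phi) + obj_pt * math.cos(obj_phi)
    py = met * math.sin(met_phi) + obj_pt * math.sin(obj_phi)
    return math.hypot(px, py), math.atan2(py, px)

# A 50 GeV photon aligned with 30 GeV of ptmiss yields 80 GeV of
# emulated ptmiss; M_R and R^2 are then recomputed with this vector.
new_met, new_phi = emulate_invisible(30.0, 0.0, 50.0, 0.0)
```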
A template fit to the distribution of $\sigma_{\eta\eta}$ is
performed to determine the contribution from misidentified photons to the $\cPgg$+jets
control region and is found to be about $5\%$, independent of the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$.
Events from the $\cPgg$+jets process where the photon is produced within the cone of a jet
(labeled as $\cPgg$+jets fragmentation) are considered to be background and subtracted
using the MC prediction. Backgrounds from rarer processes such as $\PW\cPgg$, $\cPZ\cPgg$,
and $\ttbar\cPgg$ are also subtracted similarly.
In Fig. 5, we show the $M_{\mathrm{R}}$ distribution as well as the two-dimensional $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ distribution for the $\cPgg$+jets control region, where we again
observe a steeper $M_{\mathrm{R}}$ falloff in the data compared to the simulation. Correction factors
are derived in bins of $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ and applied to the MC prediction for the
$\cPZ\to\nu\bar{\nu}$ background in the search region. The statistical uncertainties for the
correction factors range between $10\%$ and $30\%$ and are among the dominant uncertainties
for the $\cPZ\to\nu\bar{\nu}$ background prediction.
Analogous to the procedure for the $\ttbar$ and $\PW(\ell\nu)$+jets control region, we derive an additional correction
factor of $0.87\pm 0.05$ to accurately describe the yield in events with four or more jets. Additional
cross-checks are performed in bins of the number of \PQb-tagged jets, and systematic uncertainties ranging
from $4\%$ for events with zero \PQb-tagged jets to $58\%$ for events with three or more \PQb-tagged jets are
derived.
The second control region, enhanced in the $\PW(\ell\nu)$+jets process, is defined
identically to the $\PW(\ell\nu)$+jets control region described in Section 0.6.1, except that the lepton is treated as invisible
by adding its momentum vectorially to $\ptvecmiss$, and the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$
variables are computed accordingly. Correction factors are computed using events from this control region,
and the difference between these correction factors and those computed from the $\cPgg$+jets control region
is propagated as a systematic uncertainty. These uncertainties range between $10\%$ and $40\%$ depending on the $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ bin.
The third control region, enhanced in $\cPZ\to\ell^{+}\ell^{-}$ decays,
is defined by selecting events with two tight electrons or two tight muons, and requiring that the dilepton mass is
between $76$ and $106\GeV$. Events are required to have no \PQb-tagged jets
in order to suppress $\ttbar$ background. The two leptons are treated as invisible by adding their
momenta vectorially to $\ptvecmiss$. We apply the correction factors obtained from the
$\cPgg$+jet control region to the $\cPZ\to\ell^{+}\ell^{-}$ MC prediction and perform a cross-check against data
in this control region. No significant discrepancy between the data and the prediction is observed.
The QCD Multijet background
The QCD multijet processes contribute about $10\%$ of the total background in the zero-lepton Multijet
event category for bins with zero or one \PQb-tagged jets. Such events enter the search regions
in the tails of the \MET distribution when the energy of
one of the jets in the event is significantly under- or over-measured.
In most such situations, \ptvecmiss points either toward
or away from the leading jets, and therefore the two megajets tend to
be in a back-to-back configuration. The search region is defined by requiring that
the azimuthal angle between the two megajets, $\Delta\phi_{\mathrm{R}}$, be less than
$2.8$, which was found to be an optimal selection based on studies
of QCD multijet and signal simulated samples. We define the control region for the QCD background process to be events
with $\Delta\phi_{\mathrm{R}}>2.8$, keeping all other selection requirements identical to those for
the search region. The purity of the QCD multijet process in the control region
is more than $70\%$.
After subtracting the non-QCD background,
we project the observed data yield in the control region to the search region using
the translation factor $\zeta$:
$$\zeta=\frac{N(|\Delta\phi_{\mathrm{R}}|<2.8)}{N(|\Delta\phi_{\mathrm{R}}|>2.8)},$$
(5)
where the numerator and denominator are the number of events
passing and failing the selection on $|\Delta\phi_{\mathrm{R}}|<2.8$, respectively. We
find that the translation factor calculated from the MC simulation
decreases as a function of $M_{\mathrm{R}}$ and is, to a large degree, constant as a function of $\mathrm{R}^{2}$.
Using data events in the low $\mathrm{R}^{2}$ region ($0.15$ to $0.25$), dominated
by QCD multijet background, we measure the translation factor $\zeta$ as a function of
$M_{\mathrm{R}}$ to cross-check the values obtained from the simulation.
The $M_{\mathrm{R}}$ dependence of $\zeta$ is modeled as the sum of a power law
and a constant. This functional shape is fitted to the values of $\zeta$ calculated from the MC.
A systematic uncertainty of $87\%$ is propagated, covering both the
spread around the fitted model as a function of $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ in
simulation, and the difference between the values measured in simulation
and data. The function used
for $\zeta$ and the values measured in data and simulation are
shown in Fig. 6.
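The $M_{\mathrm{R}}$ dependence of $\zeta$ described above (a power law plus a constant) can be sketched as follows; the parameter values are purely illustrative, not the fitted ones:

```python
def zeta_model(mr, a=2.0e3, p=1.5, c=0.05):
    """Translation factor vs M_R: a power law plus a constant floor.
    The parameter values here are illustrative, not the fit results."""
    return a * mr ** (-p) + c

# zeta decreases with M_R and levels off at the constant c, while
# (per the text) it is largely independent of R^2.
values = [zeta_model(m) for m in (450.0, 600.0, 900.0, 1500.0)]
```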
We perform two additional cross-checks on the accuracy of the MC prediction for
$\zeta$ in control regions dominated by processes similar to the QCD multijet
background with no invisible neutrinos in the final state. The first
cross-check is performed on a dimuon control region enhanced in $\cPZ\rightarrow\Pgm^{+}\Pgm^{-}$ decays,
and the second cross-check is performed on a dijet control region enhanced in QCD dijet events.
In both cases, the events at large $\mathrm{R}^{2}$ result from cases similar to our search region
where the energy of a leading jet is severely mismeasured. We compare the values of
$\zeta$ measured in these data control regions to the values predicted
by the simulation and observe an agreement at the $20\%$ level, well within the
systematic uncertainty of $87\%$ assigned to the QCD background estimate.
0.6.2 Method B: fit-based background prediction
The second background prediction method is based on a fit to the data with an
assumed functional form for the shape of the background distribution in the $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ plane.
Based on past studies [54, 56], the shape of the background in
the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ variables is found to be well described by the following functional form:
$$f_{\mathrm{SM}}(M_{\mathrm{R}},\mathrm{R}^{2})=\bigl[b\,(M_{\mathrm{R}}-M_{\mathrm{R}}^{0})^{1/n}(\mathrm{R}^{2}-\mathrm{R}^{2}_{0})^{1/n}-1\bigr]\,\re^{-bn(M_{\mathrm{R}}-M_{\mathrm{R}}^{0})^{1/n}(\mathrm{R}^{2}-\mathrm{R}^{2}_{0})^{1/n}},$$
(6)
where $M_{\mathrm{R}}^{0}$, $\mathrm{R}^{2}_{0}$, $b$, and $n$ are free parameters.
In the original study [54], this function with $n$ fixed to
$1$ was used to model the data in each category. The function choice
was motivated by the observation that for $n=1$, the
function projects to an exponential both on $\mathrm{R}^{2}$ and $M_{\mathrm{R}}$, and $b$
is proportional to the exponential rate parameter in each
one-dimensional projection. The generalized function
in Eq. (6) was found to be in better agreement with the SM
backgrounds over a larger range of $\mathrm{R}^{2}$ and $M_{\mathrm{R}}$ [56]. The two
parameters $b$ and $n$ determine the tail of the distribution in the
two-dimensional plane, while the $M_{\mathrm{R}}^{0}$ ($\mathrm{R}^{2}_{0}$) parameter affects the tail of the
one-dimensional projection on $\mathrm{R}^{2}$ ($M_{\mathrm{R}}$).
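The shape function can be evaluated directly; the parameter values in the sketch below are hypothetical, chosen only so that the bracketed prefactor is positive:

```python
import math

def f_sm(mr, r2, b, n, mr0, r20):
    """Razor background shape of Eq. (6); b, n, mr0, r20 are the free
    parameters, with mr > mr0 and r2 > r20 assumed."""
    u = (mr - mr0) ** (1.0 / n) * (r2 - r20) ** (1.0 / n)
    return (b * u - 1.0) * math.exp(-b * n * u)

# Hypothetical parameter values with n = 1, for which the dependence
# on (M_R - M_R^0)(R^2 - R2_0) is a damped exponential.
val = f_sm(1000.0, 0.75, b=0.01, n=1.0, mr0=400.0, r20=0.25)
```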
Background estimation is performed using an extended, binned, maximum likelihood fit to the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$
distribution in one of two ways:
•
A fit to the data in the sideband regions in $M_{\mathrm{R}}$ and
$\mathrm{R}^{2}$, defined more precisely below, as a model-independent way to look for excesses or
discrepancies. The fit is performed using only the data in the
sideband, and the functional form is extrapolated to the full $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ plane.
•
A fit to the data in the full search region in $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ under
background-only and signal-plus-background hypotheses, following
a modified frequentist approach (LHC $\mathrm{CL}_{\textit{s}}$) [92, 93, 94, 95, 96]
to interpret the data in the context of particular SUSY simplified models.
The sideband region is defined to be $100\GeV$ in width in $M_{\mathrm{R}}$
and $0.05$ in $\mathrm{R}^{2}$. Explicitly, for the
Multijet event category, it comprises the region $500\GeV<M_{\mathrm{R}}<600\GeV$ and $\mathrm{R}^{2}>0.3$, plus the region $M_{\mathrm{R}}>500\GeV$
and $0.25<\mathrm{R}^{2}<0.3$. For the Muon and Electron Multijet
event categories, it comprises the region $400\GeV<M_{\mathrm{R}}<500\GeV$
and $\mathrm{R}^{2}>0.2$, plus the region $M_{\mathrm{R}}>400\GeV$ and
$0.15<\mathrm{R}^{2}<0.2$.
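The sideband definitions above translate directly into a selection function (a sketch; the category labels are ours):

```python
def in_sideband(mr, r2, category):
    """Sideband membership per event category, following the
    definitions in the text (M_R in GeV)."""
    if category == "Multijet":
        return ((500.0 < mr < 600.0 and r2 > 0.3)
                or (mr > 500.0 and 0.25 < r2 < 0.3))
    # Muon Multijet and Electron Multijet categories
    return ((400.0 < mr < 500.0 and r2 > 0.2)
            or (mr > 400.0 and 0.15 < r2 < 0.2))
```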
For each event category, we fit the two-dimensional distribution of
$M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ in the sideband region using the
above functional form, separately for events with zero, one, two, and three or more \PQb-tagged jets. The
normalization in each event category and each \PQb-tagged jet bin is
independently varied in the fit. Due to the lack of data events in the category
with three or more \PQb-tagged jets, we constrain
the shape in this category to be related to the shape for events with two
\PQb-tagged jets as follows:
$$f^{{\geq}3\PQb}_{\mathrm{SM}}(M_{\mathrm{R}},\mathrm{R}^{2})=\bigl(1+m_{M_{\mathrm{R}}}(M_{\mathrm{R}}-M_{\mathrm{R}}^{\mathrm{offset}})\bigr)f^{2\PQb}_{\mathrm{SM}}(M_{\mathrm{R}},\mathrm{R}^{2}),$$
(7)
where $f^{2\PQb}_{\mathrm{SM}}(M_{\mathrm{R}},\mathrm{R}^{2})$ and $f^{{\geq}3\PQb}_{\mathrm{SM}}(M_{\mathrm{R}},\mathrm{R}^{2})$ are the
probability density functions for events with two and with three or more \PQb-tagged jets,
respectively; $M_{\mathrm{R}}^{\mathrm{offset}}$ is the lowest $M_{\mathrm{R}}$ value in a particular
event category; and $m_{M_{\mathrm{R}}}$ is a floating parameter constrained by a Gaussian distribution
centered at the value measured using the simulation and with a
100% uncertainty. The above form for the shape of the background events
with three or more \PQb-tagged jets is verified in simulation.
Numerous tests are performed to establish the robustness of the fit
model in adequately describing the underlying distributions. To
demonstrate that the background model gives an accurate description of the
background distributions, we construct a representative
data set using MC samples, and perform the background fit using
the form given by Eq. (6). Goodness of fit is
evaluated by comparing the background prediction from the fit with the
prediction from the simulation. This procedure is performed
separately for each of the search categories and we find
that the fit function yields an accurate representation of the
background predicted by the simulation.
We also verify that the fit model is insensitive to variations of the background
composition predicted by the simulation in each event category: we alter the
relative contributions of the dominant backgrounds, perform a
new fit with the alternative background composition, and compare
the new fit results to the nominal fit result. The contributions
of the main $\ttbar$, $\PW(\ell\nu)$+jets, and $\cPZ(\nu\bar{\nu})$ backgrounds
are varied by 30%, and the rare backgrounds from QCD multijet and
$\ttbar\cPZ$ processes are varied by 100%. For the Muon and Electron Multijet event categories,
we also vary the contributions from the dileptonic and semi-leptonic decays
of the $\ttbar$ background separately by 30%. In each of these tests, we
observe that the chosen functional form can adequately describe the shapes of
the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ distributions as predicted by the modified MC simulation.
Additional pseudo-experiment studies are performed comparing the background prediction
from the sideband fit and the full region fit to evaluate the average
deviation between the two fit predictions. We observe that
the sideband fit and the full region fit predictions in the
signal-sensitive region differ by up to 15% and we propagate an
additional systematic uncertainty to the sideband fit background
prediction to cover this average difference.
To illustrate method B, we present the data and fit-based background predictions
in Fig. 7, for events in the 2 \PQb-tag and ${\geq}3$ \PQb-tag
Multijet categories. The number of events observed in data is compared to the
prediction from the sideband fit in the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ bins. To
quantify the agreement between the background model and the observation, we generate
alternative sets of background shape parameters from the covariance matrix calculated
by the fit. An ensemble of pseudo-experiment data sets is created, generating
random ($M_{\mathrm{R}}$, $\mathrm{R}^{2}$) pairs distributed according to each of these alternative shapes.
For each $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ bin, the distribution of the predicted yields from the
ensemble of pseudo-experiments is compared to the observed yield in data.
The agreement between the predicted and the observed yields is described as a two-sided
p-value and translated into the corresponding number of standard deviations for a normal
distribution. Positive (negative) significance indicates the observed
yield is larger (smaller) than the predicted one. We find that the pattern of
differences between data and background predictions in the different
bins considered is consistent with statistical fluctuations.
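The conversion from a two-sided p-value to a signed significance can be sketched with the standard-library normal distribution (a minimal illustration, not the analysis code):

```python
from statistics import NormalDist

def significance(p_two_sided, excess=True):
    """Convert a two-sided p-value to a signed number of standard
    deviations of a normal distribution; positive for an excess."""
    z = NormalDist().inv_cdf(1.0 - p_two_sided / 2.0)
    return z if excess else -z

# A two-sided p-value of about 0.0455 corresponds to about 2 sigma.
z_val = significance(0.0455)
```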
To demonstrate that the model-independent sideband fit procedure
used in the analysis would be sensitive to the presence of a
signal, we perform a signal injection test. We sample a signal-plus-background
pseudo-data set and perform a background-only fit in the sideband. We show one illustrative
example of such a test in Fig. 8, where we inject a signal
corresponding to gluino pair production, in which each gluino decays to a neutralino and
a $\bbbar$ pair with $m_{\sGlu}=1.4\TeV$ and $m_{\chiz_{1}}=100\GeV$. The
deviations with respect to the fit predictions are shown for the 2
\PQb-tag and ${\geq}3$ \PQb-tag Multijet categories. We observe characteristic patterns
of excesses in two adjacent groups of bins neighboring in $M_{\mathrm{R}}$.
0.6.3 Comparison of two methods
The background predictions obtained from methods A and B are systematically compared
in all of the search region categories. For method B, the model-independent
fit to the sideband is used for this comparison. In Fig. 9,
we show the comparison of the two background predictions for two example event categories.
The predictions from the two methods agree within the uncertainties of each method.
The uncertainty from the fit-based method tends to be slightly larger at high
$M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ due to the additional uncertainty in the exact shape of
the tail of the distribution, as the $n$ and $b$ parameters are not strongly
constrained by the sideband data.
The two background prediction methods are both based on data but make
very different systematic assumptions. Method A assumes that corrections
to the simulation prediction measured in control regions apply also to the
signal regions, while method B assumes that the shape of the background distribution
in $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ is well described by a particular exponentially falling functional
form. The agreement observed between predictions obtained using these two very different
methods significantly enhances the confidence of the background modeling, and
also validates the respective assumptions.
0.7 Systematic uncertainties
Various systematic uncertainties are considered in the evaluation of the
signal and background predictions. Different types of systematic
uncertainties are considered for the two different background models.
For method A, the largest uncertainties arise from the precision with
which the MC corrections are measured. The dominant uncertainties
in the correction factors result from statistical uncertainties due to
the limited size of the control region event sample. We also propagate systematic
uncertainties in the theoretical cross-section for the small residual backgrounds
present in the control regions, and they contribute 2–5% to the
correction factor uncertainty.
Additional systematic uncertainties are computed from the procedure that
tests the accuracy of the MC corrections as a function of
($M_{\mathrm{R}}$, $\mathrm{R}^{2}$) and the number of \PQb-tagged jets in events with four or more jets.
The total uncertainty from this procedure ranges from $10\%$ for the most populated bins to
$50\%$ and $100\%$ for the least populated bins. For the $\cPZ\rightarrow\nu\bar{\nu}$ process, we
also propagate the difference in the correction factors measured in the three alternative
control regions as a systematic uncertainty, intended to estimate the possible differences in
the simulation mismodeling of the hadronic recoil for the $\cPgg$+jets process and
the $\cPZ(\nu\bar{\nu})$+jets process. These systematic uncertainties
range from $10$ to $40\%$. For the QCD background prediction the statistical uncertainty
due to limited event counts in the $\Delta\phi_{\mathrm{R}}>2.8$ control regions and the systematic
uncertainty of $87\%$ in the translation factor $\zeta$ are propagated.
For method B, the systematic uncertainties in the background are propagated as part of
the maximum likelihood fit procedure. For each event category, the background shape in
$M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ is described by four independent parameters: two
that control the exponential fall off and two that control the behavior of the
nonexponential tail. Systematic uncertainties in the background are propagated
through the profiling of these unconstrained shape parameters. For more populated bins, such as
the 0 \PQb-tag and 1 \PQb-tag bins in the Multijet category, the systematic uncertainties range from
about 30% at low $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ to about 70% at high $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$.
For sparsely populated bins such as the ${\geq}3$ \PQb-tag bin in the Muon Multijet or Electron
Multijet categories, the systematic uncertainties range from
about 60% at low $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ to more than 200% at high $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$.
Systematic uncertainties due to instrumental and theoretical effects are propagated as shape
uncertainties in the signal predictions for methods A and B, and on the background
predictions for method A. The background prediction from method B is not affected
by these uncertainties as the shape and normalization are measured from data.
Uncertainties in the trigger and lepton selection efficiency, and the
integrated luminosity [97] primarily affect the total normalization. Uncertainties in
the \PQb-tagging efficiency affect the relative yields between different \PQb-tag categories.
The uncertainties from missing higher-order corrections and the uncertainties in the jet
energy and lepton momentum scale affect the shapes of the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ distributions.
For the signal predictions, we also propagate systematic uncertainties due to
possible inaccuracies of the fast simulation in modeling the lepton selection and
\PQb-tagging efficiencies. These uncertainties were evaluated by comparing
the $\ttbar$ and signal \GEANT-based MC samples with those
that used fast simulation. Finally, we propagate an uncertainty in the modeling of initial-state radiation for signal predictions, which ranges from $15\%$ for signal events with
recoil between $400$ and $600\GeV$ to $30\%$ for events with recoil above $600\GeV$.
The systematic uncertainties and their typical impact on the background and signal
predictions are summarized in Table 0.7.
0.8 Results and interpretations
We present results of the search using method A as it provides slightly better sensitivity.
The two-dimensional $M_{\mathrm{R}}$-$\mathrm{R}^{2}$ distributions for the search regions in the
Multijet, Electron Multijet, and Muon Multijet categories observed in data are shown in
Figs. 10–15,
along with the background prediction from method A.
We observe no statistically significant discrepancies and interpret the null search
result using method A by determining the 95%
confidence level (CL) upper limits on the production cross sections of
the SUSY models presented in Section 0.1 using a global likelihood determined by combining the
likelihoods of the different search boxes and sidebands. Following the LHC $\mathrm{CL}_{\textit{s}}$
procedure [96], we use
the profile likelihood ratio test statistic and the asymptotic
formula to evaluate the 95% CL observed and expected limits on the
SUSY production cross section $\sigma$.
Systematic uncertainties are taken into account by
incorporating nuisance parameters $\boldsymbol{\theta}$, representing different sources of
systematic uncertainty, into the likelihood function $\mathcal{L}(\sigma,\boldsymbol{\theta})$.
For each signal model the simulated SUSY events are used to estimate the effect of possible signal
contamination in the analysis control regions, and the method A background prediction is corrected
accordingly.
To determine a confidence interval for $\sigma$, we construct the profile likelihood ratio test
statistic $-2\ln[\mathcal{L}(\sigma,\boldsymbol{\hat{\theta}}_{\sigma})/\mathcal{L}(\hat{%
\sigma},\boldsymbol{\hat{\theta}})]$ as a function of
$\sigma$, where $\boldsymbol{\hat{\theta}}_{\sigma}$ refers to the conditional maximum
likelihood estimators of $\boldsymbol{\theta}$ assuming a given value
$\sigma$, and $\hat{\sigma}$ and $\boldsymbol{\hat{\theta}}$ correspond to the
global maximum of the likelihood. Then for example, a 68% confidence interval for $\sigma$
can be taken as the region for which the test statistic is less than 1. Allowing
the nuisance parameters to vary widens the test statistic
curve, reflecting the systematic uncertainty arising from
each source and resulting in a larger confidence interval for $\sigma$.
First, we consider the scenario of gluino pair production
decaying to third-generation quarks. Gluino decays to the third-generation
are enhanced if the masses of the third-generation squarks are significantly
lighter than those of the first two generations, a scenario that is
strongly motivated in natural SUSY
models [61, 98, 99, 100]. Prompted by this, we consider the three decay
modes:
•
$\PSg\rightarrow\bbbar\PSGcz$ ;
•
$\PSg\rightarrow\ttbar\PSGcz$ ;
•
$\PSg\rightarrow\PQb\cPaqt\chip_{1}\rightarrow\PQb\cPaqt\PW^{\ast+}\PSGcz_{1}$ or
charge conjugate,
where $\PW^{\ast}$ denotes a virtual $\PW$ boson. Due to a
technical limitation inherent in the event generator, we consider these
three decay modes for $\abs{m_{\sGlu}-m_{\chiz_{1}}}\geq 225\GeV$. For
$\abs{m_{\sGlu}-m_{\chiz_{1}}}<225\GeV$, we only consider the $\PSg\rightarrow\bbbar\PSGcz$ decay mode.
The three-body gluino decays considered here capture
all of the possible final states within this natural SUSY context
including those of two-body gluino decays with intermediate top or bottom
squarks. Past studies have shown that LHC searches exhibit a similar sensitivity to
three-body and two-body gluino decays with only a weak dependence on
the intermediate squark mass [40].
We perform a scan over all possible branching fractions to these three decay modes
and compute limits on the production cross section under each such scenario. The production cross section
limits for a few characteristic branching fraction scan points are shown on the left of
Fig. 16 as a function of the gluino and neutralino masses. We find a range of excluded regions
for different branching fraction assumptions and generally observe the strongest limits for
the $\PSg\rightarrow\bbbar\PSGcz_{1}$ decay mode over the full two-dimensional mass plane
and the weakest limits for the $\PSg\rightarrow\ttbar\PSGcz_{1}$ decay
mode. For scenarios that include the intermediate decay
$\chipm_{1}\to\PW^{\ast\pm}\PSGcz_{1}$ and small values of $m_{\chiz_{1}}$, the sensitivity
is reduced because the LSP carries very little momentum in both the
NLSP rest frame and the laboratory frame, resulting in small values of
$\ETmiss$ and $\mathrm{R}^{2}$. By considering the limits obtained for all scanned branching fractions, we
calculate the exclusion limits valid for any assumption on the branching
fractions, presented on the right of Fig. 16. For
an LSP with mass of a few hundred $\GeV$, we exclude pair production of gluinos decaying
to third-generation quarks for mass below about $1.6\TeV$. This result
represents a unique attempt to obtain a branching-fraction-independent limit on
gluino pair production at the LHC for the scenario in which gluino decays are dominated by
three-body decays to third-generation quarks and a neutralino LSP.
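Scanning all branching-fraction assignments to the three decay modes amounts to enumerating points on a two-dimensional simplex. A minimal sketch (the step size is illustrative, not the one used in the analysis):

```python
import itertools

def branching_fraction_grid(step=0.25):
    """All (B1, B2, B3) with Bi >= 0 summing to 1, in steps of `step`."""
    n = round(1.0 / step)
    pts = []
    for i, j in itertools.product(range(n + 1), repeat=2):
        if i + j <= n:
            pts.append((i * step, j * step, (n - i - j) * step))
    return pts

# A step of 0.25 gives 15 branching-fraction combinations, including
# the pure decay modes (1, 0, 0), (0, 1, 0), and (0, 0, 1).
pts = branching_fraction_grid(0.25)
```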
In Figure 17, we present additional interpretations for
simplified model scenarios of interest. On the left, we show the production cross section
limits on gluino pair production where the gluino decays to two light-flavored
quarks and the LSP, and on the right we show the production cross section limits on
top squark pair production where the top squark decays to a top quark and the LSP.
For a very light LSP, we exclude top squark pair production for top squark masses below
$750\GeV$.
0.9 Summary
We have presented an inclusive search for supersymmetry in events
with no more than one lepton, a large multiplicity of energetic jets, and
missing transverse energy. The search is sensitive to a broad
range of SUSY scenarios including pair production of gluinos and top
squarks. The event categorization in the number of leptons and
the number of \PQb-tagged jets enhances the search sensitivity for a variety of different SUSY
signal scenarios.
Two alternative background estimation methods are presented, both
based on transfer factors between data control regions and the search
regions, but having very different systematic assumptions:
one relying on the simulation and associated corrections derived in
the control regions, and the other relying on the accuracy of an
assumed functional form for the shape of the background distribution in the $M_{\mathrm{R}}$ and $\mathrm{R}^{2}$ variables.
The two predictions agree within their uncertainties, thereby
demonstrating the robustness of the background modeling.
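Schematically, both methods reduce to scaling an observed control-region yield into the search region by a transfer factor. An illustrative sketch with hypothetical yields, not the analysis implementation:

```python
def predict_background(n_data_cr, mc_sr, mc_cr):
    """Transfer-factor prediction: scale the observed control-region
    yield by the simulated search-region/control-region ratio."""
    return n_data_cr * (mc_sr / mc_cr)

# Hypothetical yields: 200 data events in the control region, with
# simulation expecting 5 search-region events per 50 control-region events.
prediction = predict_background(200, 5.0, 50.0)  # -> 20.0
```

In the fit-based variant, the role of the simulated ratio is instead played by an assumed functional form fitted in the control regions.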
No significant deviations from the predicted standard model background are
observed in any of the search regions, and this result is interpreted
in the context of simplified models of gluino or top
squark pair production. For decays to a top quark and an LSP with a mass of 100\GeV, we
exclude top squarks with masses below 750\GeV. Considering
separately the decays to bottom quarks and the LSP or first- and
second-generation quarks and the LSP, gluino masses up to
1.65\TeV or 1.4\TeV are excluded, respectively.
Furthermore, this search goes beyond the existing simplified model paradigm by
interpreting results in a broader context inspired by natural SUSY,
with multiple gluino decay modes considered simultaneously.
By scanning over all possible branching fractions
for three-body gluino decays to third-generation quarks, exclusion
limits are derived on gluino pair production that are valid for any
values of the gluino decay branching fractions.
For a chargino NLSP nearly degenerate in mass with the LSP and LSP
masses in the range between 200 and 600\GeV, we exclude gluinos
with mass below 1.55 to 1.6\TeV, regardless of their decays. This
result is a more generic constraint on gluino production than
previously reported at the LHC.
Acknowledgements.
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucléaire et de Physique des Particules / CNRS, and Commissariat à l’Énergie Atomique et aux Énergies Alternatives / CEA, France; the Bundesministerium für Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the 
Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Fundação para a Ciência e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretaría de Estado de Investigación, Desarrollo e Innovación and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clarín-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845.
References
[1]
J. Wess and B. Zumino, “Supergauge transformations in
four-dimensions”, Nucl. Phys. B 70 (1974) 39,
10.1016/0550-3213(74)90355-1.
[2]
Yu. A.
Gol’fand and E. P. Likhtman, “Extension of the algebra of Poincaré
group generators and violation of P invariance”, JETP Lett.
13 (1971) 323.
[3]
D. V. Volkov and
V. P. Akulov, “Possible universal neutrino interaction”, JETP
Lett. 16 (1972) 438.
[4]
A. H. Chamseddine, R. L. Arnowitt, and P. Nath, “Locally
supersymmetric grand unification”, Phys. Rev. Lett. 49
(1982) 970,
10.1103/PhysRevLett.49.970.
[5]
G. L. Kane, C. F. Kolda, L. Roszkowski, and J. D. Wells,
“Study of constrained minimal supersymmetry”, Phys. Rev. D
49 (1994) 6173,
10.1103/PhysRevD.49.6173,
arXiv:hep-ph/9312272.
[6]
P. Fayet, “Supergauge invariant extension of the Higgs
mechanism and a model for the electron and its neutrino”, Nucl.
Phys. B 90 (1975) 104,
10.1016/0550-3213(75)90636-7.
[7]
R. Barbieri, S. Ferrara, and C. A. Savoy, “Gauge models with
spontaneously broken local supersymmetry”, Phys. Lett. B
119 (1982) 343,
10.1016/0370-2693(82)90685-2.
[8]
L. J. Hall, J. D. Lykken, and S. Weinberg, “Supergravity as
the messenger of supersymmetry breaking”, Phys. Rev. D
27 (1983) 2359,
10.1103/PhysRevD.27.2359.
[9]
P. Ramond, “Dual theory for free fermions”, Phys.
Rev. D 3 (1971) 2415,
10.1103/PhysRevD.3.2415.
[10]
E. Witten, “Dynamical breaking of supersymmetry”,
Nucl. Phys. B 188 (1981) 513,
10.1016/0550-3213(81)90006-7.
[11]
S. Dimopoulos and H. Georgi, “Softly broken supersymmetry and
SU(5)”, Nucl. Phys. B 193 (1981) 150,
10.1016/0550-3213(81)90522-8.
[12]
M. Dine, W. Fischler, and M. Srednicki, “Supersymmetric
technicolor”, Nucl. Phys. B 189 (1981) 575,
10.1016/0550-3213(81)90582-4.
[13]
S. Dimopoulos and S. Raby, “Supercolor”, Nucl.
Phys. B 192 (1981) 353,
10.1016/0550-3213(81)90430-2.
[14]
N. Sakai, “Naturalness in supersymmetric GUTs”,
Z. Phys. C 11 (1981) 153,
10.1007/BF01573998.
[15]
R. K. Kaul and P. Majumdar, “Cancellation of quadratically
divergent mass corrections in globally supersymmetric spontaneously broken
gauge theories”, Nucl. Phys. B 199 (1982) 36,
10.1016/0550-3213(82)90565-X.
[16]
S. Dimopoulos, S. Raby, and F. Wilczek, “Supersymmetry and
the scale of unification”, Phys. Rev. D 24 (1981)
1681,
10.1103/PhysRevD.24.1681.
[17]
W. J. Marciano and G. Senjanovic, “Predictions of
supersymmetric grand unified theories”, Phys. Rev. D
25 (1982) 3092,
10.1103/PhysRevD.25.3092.
[18]
M. B. Einhorn and D. R. T. Jones, “The weak mixing angle and
unification mass in supersymmetric SU(5)”, Nucl. Phys. B
196 (1982) 475,
10.1016/0550-3213(82)90502-8.
[19]
L. E. Ibanez and G. G. Ross, “Low-energy predictions in
supersymmetric grand unified theories”, Phys. Lett. B
105 (1981) 439,
10.1016/0370-2693(81)91200-4.
[20]
U. Amaldi, W. de Boer, and H. Furstenau, “Comparison of grand
unified theories with electroweak and strong coupling constants measured at
LEP”, Phys. Lett. B 260 (1991) 447,
10.1016/0370-2693(91)91641-8.
[21]
P. Langacker and N. Polonsky, “The strong coupling,
unification, and recent data”, Phys. Rev. D 52 (1995)
3081,
10.1103/PhysRevD.52.3081,
arXiv:hep-ph/9503214.
[22]
J. R. Ellis et al., “Supersymmetric relics from the Big
Bang”, Nucl. Phys. B 238 (1984) 453,
10.1016/0550-3213(84)90461-9.
[23]
G. Jungman, M. Kamionkowski, and K. Griest, “Supersymmetric
dark matter”, Phys. Rept. 267 (1996) 195,
10.1016/0370-1573(95)00058-5,
arXiv:hep-ph/9506380.
[24]
CMS Collaboration, “Search for top-squark pair production
in the single-lepton final state in $\Pp\Pp$ collisions at $\sqrt{s}=8\TeV$”, Eur. Phys. J. C 73 (2013) 2677,
10.1140/epjc/s10052-013-2677-2,
arXiv:1308.1586.
[25]
CMS Collaboration, “Search for gluino mediated bottom- and
top-squark production in multijet final states in $\Pp\Pp$ collisions at
8\TeV”, Phys. Lett. B 725 (2013) 243,
10.1016/j.physletb.2013.06.058,
arXiv:1305.2390.
[26]
CMS Collaboration, “Search for new physics in the multijet
and missing transverse momentum final state in proton-proton collisions at
$\sqrt{s}=8\TeV$”, JHEP 06 (2014) 055,
10.1007/JHEP06(2014)055,
arXiv:1402.4770.
[27]
CMS Collaboration, “Search for supersymmetry in $\Pp\Pp$
collisions at $\sqrt{s}=8\TeV$ in events with a single lepton, large jet
multiplicity, and multiple b jets”, Phys. Lett. B 733
(2014) 328,
10.1016/j.physletb.2014.04.023,
arXiv:1311.4937.
[28]
CMS Collaboration, “Search for new physics in events with
same-sign dileptons and jets in $\Pp\Pp$ collisions at $\sqrt{s}=8\TeV$”, JHEP 01 (2014) 163,
10.1007/JHEP01(2014)163,
arXiv:1311.6736.
[29]
CMS Collaboration, “Search for supersymmetry in hadronic
final states with missing transverse energy using the variables
$\alpha_{\mathrm{T}}$ and \PQb-quark multiplicity in $\Pp\Pp$ collisions at
8\TeV”, Eur. Phys. J. C 73 (2013) 2568,
10.1140/epjc/s10052-013-2568-6,
arXiv:1303.2985.
[30]
CMS Collaboration, “Searches for supersymmetry using the
$M_{\mathrm{T2}}$ variable in hadronic events produced in $\Pp\Pp$
collisions at 8\TeV”, JHEP 05 (2015) 078,
10.1007/JHEP05(2015)078,
arXiv:1502.04358.
[31]
ATLAS Collaboration, “Search for new phenomena in final
states with large jet multiplicities and missing transverse momentum at
$\sqrt{s}=8\TeV$ proton-proton collisions using the ATLAS experiment”,
JHEP 10 (2013) 130,
10.1007/JHEP10(2013)130,
arXiv:1308.1841.
[32]
ATLAS Collaboration, “Search for strong production of
supersymmetric particles in final states with missing transverse momentum and
at least three $\PQb$-jets at $\sqrt{s}=8\TeV$ proton-proton collisions
with the ATLAS detector”, JHEP 10 (2014) 24,
10.1007/JHEP10(2014)024,
arXiv:1407.0600.
[33]
ATLAS Collaboration, “Search for supersymmetry at $\sqrt{s}=8\TeV$ in final states with jets and two same-sign leptons or three leptons
with the ATLAS detector”, JHEP 06 (2014) 035,
10.1007/JHEP06(2014)035,
arXiv:1404.2500.
[34]
ATLAS Collaboration, “Search for direct pair production of
the top squark in all-hadronic final states in proton-proton collisions at
$\sqrt{s}=8\TeV$ with the ATLAS detector”, JHEP 09
(2014) 015,
10.1007/JHEP09(2014)015,
arXiv:1406.1122.
[35]
ATLAS Collaboration, “Search for direct top-squark pair
production in final states with two leptons in $\Pp\Pp$ collisions at
$\sqrt{s}=8\TeV$ with the ATLAS detector”, JHEP 06
(2014) 124,
10.1007/JHEP06(2014)124,
arXiv:1403.4853.
[36]
ATLAS Collaboration, “ATLAS Run 1 searches for direct pair
production of third-generation squarks at the Large Hadron Collider”,
Eur. Phys. J. C 75 (2015) 510,
10.1140/epjc/s10052-015-3726-9,
arXiv:1506.08616.
[37]
ATLAS Collaboration, “Summary of the searches for squarks
and gluinos using $\sqrt{s}=8\TeV$ $\Pp\Pp$ collisions with the ATLAS
experiment at the LHC”, JHEP 10 (2015) 054,
10.1007/JHEP10(2015)054,
arXiv:1507.05525.
[38]
CMS Collaboration, “Search for supersymmetry in the
multijet and missing transverse momentum final state in $\Pp\Pp$ collisions
at 13\TeV”, Phys. Lett. B 758 (2016) 152,
10.1016/j.physletb.2016.05.002,
arXiv:1602.06581.
[39]
CMS Collaboration, “Search for new physics with the
$M_{\mathrm{T2}}$ variable in all-jets final states produced in $\Pp\Pp$
collisions at $\sqrt{s}=13\TeV$”, (2016).
arXiv:1603.04053.
Submitted to JHEP.
[40]
CMS Collaboration, “Search for supersymmetry in $\Pp\Pp$
collisions at $\sqrt{s}=13\TeV$ in the single-lepton final state using the
sum of masses of large-radius jets”, JHEP 08 (2016)
122,
10.1007/JHEP08(2016)122,
arXiv:1605.04608.
[41]
CMS Collaboration, “Search for new physics in same-sign
dilepton events in proton-proton collisions at $\sqrt{s}=13\TeV$”,
Eur. Phys. J. C 76 (2016) 439,
10.1140/epjc/s10052-016-4261-z,
arXiv:1605.03171.
[42]
CMS Collaboration, “Search for new physics in final states
with two opposite-sign, same-flavor leptons, jets, and missing transverse
momentum in $\Pp\Pp$ collisions at $\sqrt{s}=13\TeV$”, (2016).
arXiv:1607.00915.
Submitted to JHEP.
[43]
ATLAS Collaboration, “Search for new phenomena in final
states with large jet multiplicities and missing transverse momentum with
ATLAS using $\sqrt{s}=13\TeV$ proton-proton collisions”, Phys.
Lett. B 757 (2016) 334,
10.1016/j.physletb.2016.04.005,
arXiv:1602.06194.
[44]
ATLAS Collaboration, “Search for supersymmetry at $\sqrt{s}=13\TeV$ in final states with jets and two same-sign leptons or three
leptons with the ATLAS detector”, Eur. Phys. J. C 76
(2016) 259,
10.1140/epjc/s10052-016-4095-8,
arXiv:1602.09058.
[45]
ATLAS Collaboration, “Search for new phenomena in final
states with an energetic jet and large missing transverse momentum in $pp$
collisions at $\sqrt{s}=13\TeV$ using the ATLAS detector”,
Phys. Rev. D 94 (2016) 032005,
10.1103/PhysRevD.94.032005,
arXiv:1604.07773.
[46]
ATLAS Collaboration, “Search for squarks and gluinos in
final states with jets and missing transverse momentum at $\sqrt{s}=13\TeV$ with the ATLAS detector”, Eur. Phys. J. C 76
(2016) 392,
10.1140/epjc/s10052-016-4184-8,
arXiv:1605.03814.
[47]
ATLAS Collaboration, “Search for gluinos in events with an
isolated lepton, jets and missing transverse momentum at $\sqrt{s}=13\TeV$
with the ATLAS detector”, (2016).
arXiv:1605.04285.
Submitted to EPJC.
[48]
ATLAS Collaboration, “Search for pair production of gluinos
decaying via stop and sbottom in events with $b$-jets and large missing
transverse momentum in $pp$ collisions at $\sqrt{s}=13\TeV$ with the ATLAS
detector”, Phys. Rev. D 94 (2016) 032003,
10.1103/PhysRevD.94.032003,
arXiv:1605.09318.
[49]
ATLAS Collaboration, “Search for top squarks in final
states with one isolated lepton, jets, and missing transverse momentum in
$\sqrt{s}=13\TeV$ $pp$ collisions with the ATLAS detector”,
Phys. Rev. D 94 (2016) 052009,
10.1103/PhysRevD.94.052009,
arXiv:1606.03903.
[50]
ATLAS Collaboration, “Search for bottom squark pair
production in proton-proton collisions at $\sqrt{s}=13\TeV$ with the ATLAS
detector”, (2016).
arXiv:1606.08772.
Submitted to EPJC.
[51]
ATLAS Collaboration, “Search for supersymmetry in a final
state containing two photons and missing transverse momentum in $\sqrt{s}=13\TeV$ $pp$ collisions at the LHC using the ATLAS detector”, (2016).
arXiv:1606.09150.
Submitted to EPJC.
[52]
G. R. Farrar and P. Fayet, “Phenomenology of the production,
decay, and detection of new hadronic states associated with
supersymmetry”, Phys. Lett. B 76 (1978) 575,
10.1016/0370-2693(78)90858-4.
[53]
CMS Collaboration, “Inclusive search for supersymmetry
using the razor variables in $\Pp\Pp$ collisions at $\sqrt{s}=7\TeV$”,
Phys. Rev. Lett. 111 (2013) 081802,
10.1103/PhysRevLett.111.081802,
arXiv:1212.6961.
[54]
CMS Collaboration, “Search for supersymmetry with razor
variables in $\Pp\Pp$ collisions at $\sqrt{s}=7\TeV$”, Phys.
Rev. D 90 (2014) 112001,
10.1103/PhysRevD.90.112001,
arXiv:1405.3961.
[55]
CMS Collaboration, “Inclusive search for squarks and
gluinos in pp collisions at $\sqrt{s}=7\TeV$”, Phys. Rev. D
85 (2012) 012004,
10.1103/PhysRevD.85.012004,
arXiv:1107.1279.
[56]
CMS Collaboration, “Search for supersymmetry using razor
variables in events with \PQb-tagged jets in $\Pp\Pp$ collisions at
$\sqrt{s}=8\TeV$”, Phys. Rev. D 91 (2015) 052018,
10.1103/PhysRevD.91.052018,
arXiv:1502.00300.
[57]
CMS Collaboration, “Search for supersymmetry in $\Pp\Pp$
collisions at $\sqrt{s}=8\TeV$ in final states with boosted \PW bosons and
\PQb jets using razor variables”, Phys. Rev. D 93
(2016) 092009,
10.1103/PhysRevD.93.092009,
arXiv:1602.02917.
[58]
CMS Collaboration, “Search for dark matter particles in
proton-proton collisions at $\sqrt{s}=8\TeV$ using the razor variables”,
(2016).
arXiv:1603.08914.
Submitted to JHEP.
[59]
ATLAS Collaboration, “Multi-channel search for squarks and
gluinos in $\sqrt{s}=7\TeV$ $pp$ collisions with the ATLAS detector”,
Eur. Phys. J. C 73 (2013) 2362,
10.1140/epjc/s10052-013-2362-5,
arXiv:1212.6149.
[60]
ATLAS Collaboration, “Search for squarks and gluinos in
events with isolated leptons, jets and missing transverse momentum at
$\sqrt{s}=8\TeV$ with the ATLAS detector”, JHEP 04
(2015) 116,
10.1007/JHEP04(2015)116,
arXiv:1501.03555.
[61]
M. Papucci, J. T. Ruderman, and A. Weiler, “Natural SUSY
endures”, JHEP 09 (2012) 035,
10.1007/JHEP09(2012)035,
arXiv:1110.6926.
[62]
CMS Collaboration, “The CMS experiment at the CERN LHC”,
JINST 3 (2008) S08004,
10.1088/1748-0221/3/08/S08004.
[63]
J. Alwall et al., “MadGraph5: going beyond”,
JHEP 06 (2011) 128,
10.1007/JHEP06(2011)128,
arXiv:1106.0522.
[64]
J. Alwall et al., “The automated computation of tree-level
and next-to-leading order differential cross sections, and their matching to
parton shower simulations”, JHEP 07 (2014) 079,
10.1007/JHEP07(2014)079,
arXiv:1405.0301.
[65]
S. Alioli, P. Nason, C. Oleari, and E. Re, “NLO single-top
production matched with shower in POWHEG: s- and t-channel
contributions”, JHEP 09 (2009) 111,
10.1088/1126-6708/2009/09/111,
arXiv:0907.4076.
[Erratum: \DOIJHEP 02 (2010) 011].
[66]
E. Re, “Single-top $Wt$-channel production matched with
parton showers using the POWHEG method”, Eur. Phys. J. C
71 (2011) 1547,
10.1140/epjc/s10052-011-1547-z,
arXiv:1009.2450.
[67]
T. Sjöstrand, S. Mrenna, and P. Skands, “A brief
introduction to PYTHIA 8.1”, Comput. Phys. Commun. 178
(2008) 852,
10.1016/j.cpc.2008.01.036.
[68]
NNPDF Collaboration, “Parton distributions for the LHC Run
II”, JHEP 04 (2015) 040,
10.1007/JHEP04(2015)040,
arXiv:1410.8849.
[69]
GEANT4 Collaboration, “GEANT4—a simulation toolkit”,
Nucl. Instrum. Meth. A 506 (2003) 250,
10.1016/S0168-9002(03)01368-8.
[70]
CMS Collaboration, “The fast simulation of the CMS detector
at LHC”, J. Phys.: Conf. Ser. 331 (2011) 032049,
10.1088/1742-6596/331/3/032049.
[71]
W. Beenakker, R. Höpker, M. Spira, and P. M.
Zerwas, “Squark and gluino production at hadron colliders”,
Nucl. Phys. B 492 (1997) 51,
10.1016/S0550-3213(97)80027-2,
arXiv:hep-ph/9610490.
[72]
A. Kulesza and L. Motyka, “Threshold resummation for
squark-antisquark and gluino-pair production at the LHC”, Phys.
Rev. Lett. 102 (2009) 111802,
10.1103/PhysRevLett.102.111802,
arXiv:0807.2405.
[73]
A. Kulesza and L. Motyka, “Soft gluon resummation for the
production of gluino-gluino and squark-antisquark pairs at the LHC”,
Phys. Rev. D 80 (2009) 095004,
10.1103/PhysRevD.80.095004,
arXiv:0905.4749.
[74]
W. Beenakker et al., “Soft-gluon resummation for squark
and gluino hadroproduction”, JHEP 12 (2009) 041,
10.1088/1126-6708/2009/12/041,
arXiv:0909.4418.
[75]
W. Beenakker et al., “Squark and gluino
hadroproduction”, Int. J. Mod. Phys. A 26 (2011) 2637,
10.1142/S0217751X11053560,
arXiv:1105.1110.
[76]
C. Borschensky et al., “Squark and gluino production cross
sections in $pp$ collisions at $\sqrt{s}=$ 13, 14, 33 and 100\TeV”,
Eur. Phys. J. C 74 (2014) 3174,
10.1140/epjc/s10052-014-3174-y,
arXiv:1407.5066.
[77]
CMS Collaboration,
“Particle-flow event reconstruction in CMS and performance for jets,
taus, and \MET”, CMS Physics Analysis Summary CMS-PAS-PFT-09-001, CERN,
2009.
[78]
CMS Collaboration,
“Commissioning of the particle-flow event with the first LHC collisions
recorded in the CMS detector”, CMS Physics Analysis Summary
CMS-PAS-PFT-10-001, CERN, 2010.
[79]
M. Cacciari, G. P. Salam, and G. Soyez, “The anti-$k_{t}$ jet
clustering algorithm”, JHEP 04 (2008) 063,
10.1088/1126-6708/2008/04/063,
arXiv:0802.1189.
[80]
M. Cacciari, G. P. Salam, and G. Soyez, “FastJet user
manual”, Eur. Phys. J. C 72 (2012) 1896,
10.1140/epjc/s10052-012-1896-2,
arXiv:1111.6097.
[81]
CMS Collaboration, “Jet
performance in $\Pp\Pp$ collisions at $\sqrt{s}=7\TeV$”, CMS Physics
Analysis Summary CMS-PAS-JME-10-003, CERN, 2010.
[82]
CMS Collaboration, “Jet energy scale and resolution in the
CMS experiment in $\Pp\Pp$ collisions at 8\TeV”, (2016).
arXiv:1607.03663.
Submitted to JINST.
[83]
CMS Collaboration, “Performance of electron reconstruction
and selection with the CMS detector in proton-proton collisions at $\sqrt{s}=8\TeV$”, JINST 10 (2015) P06005,
10.1088/1748-0221/10/06/P06005,
arXiv:1502.02701.
[84]
CMS Collaboration, “Study of
pileup removal algorithms for jets”, CMS Physics Analysis Summary
CMS-PAS-JME-14-001, CERN, 2014.
[85]
CMS Collaboration, “Performance of CMS muon reconstruction
in $\Pp\Pp$ collision events at $\sqrt{s}=7\TeV$”, JINST
7 (2012) P10002,
10.1088/1748-0221/7/10/P10002,
arXiv:1206.4071.
[86]
CMS Collaboration, “Reconstruction and identification of
$\tau$ lepton decays to hadrons and $\nu_{\tau}$ at CMS”, JINST
11 (2016) P01019,
10.1088/1748-0221/11/01/P01019,
arXiv:1510.07488.
[87]
CMS Collaboration,
“Identification of b quark jets at the CMS Experiment in the LHC Run 2”,
CMS Physics Analysis Summary CMS-PAS-BTV-15-001, CERN, 2016.
[88]
CMS Collaboration, “Identification of \PQb-quark jets with
the CMS experiment”, JINST 8 (2013) P04013,
10.1088/1748-0221/8/04/P04013,
arXiv:1211.4462.
[89]
CMS Collaboration, “Performance of photon reconstruction
and identification with the CMS detector in proton-proton collisions at
$\sqrt{s}=8\TeV$”, JINST 10 (2015) P08010,
10.1088/1748-0221/10/08/P08010,
arXiv:1502.02702.
[90]
CMS Collaboration, “Missing transverse energy performance
of the CMS detector”, JINST 6 (2011) P09001,
10.1088/1748-0221/6/09/P09001,
arXiv:1106.5048.
[91]
CMS Collaboration, “Performance of the CMS missing
transverse momentum reconstruction in $\Pp\Pp$ data at $\sqrt{s}=8\TeV$”, JINST 10 (2015) P02006,
10.1088/1748-0221/10/02/P02006,
arXiv:1411.0511.
[92]
T. Junk, “Confidence level computation for combining searches
with small statistics”, Nucl. Instrum. Meth. A 434
(1999) 435,
10.1016/S0168-9002(99)00498-2,
arXiv:hep-ex/9902006.
[93]
A. L. Read, “Presentation of search results: the
$\mathrm{CL_{s}}$ technique”, J. Phys. G 28 (2002)
2693,
10.1088/0954-3899/28/10/313.
[94]
A. L. Read, “Modified frequentist
analysis of search results (the $\mathrm{CL_{s}}$ method)”,
(2000).
[95]
G. Cowan, K. Cranmer, E. Gross, and O. Vitells, “Asymptotic
formulae for likelihood-based tests of new physics”, Eur. Phys. J.
C 71 (2011) 1554,
10.1140/epjc/s10052-011-1554-0,
arXiv:1007.1727.
[96]
ATLAS and CMS Collaborations,
“Procedure for the LHC Higgs boson search combination in summer
2011”, Technical Report ATL-PHYS-PUB-2011-011, CMS-NOTE-2011-005, CERN,
2011.
[97]
CMS Collaboration, “CMS
luminosity measurement for the 2015 data taking period”, Technical Report
CMS-PAS-LUM-15-001, CERN, 2016.
[98]
Particle Data Group, K. A. Olive et al., “Review of
particle physics”, Chin. Phys. C 38 (2014) 090001,
10.1088/1674-1137/38/9/090001.
[99]
M. Dine, A. Kagan, and S. Samuel, “Naturalness in
supersymmetry, or raising the supersymmetry breaking scale”,
Phys. Lett. B 243 (1990) 250,
10.1016/0370-2693(90)90847-Y.
[100]
A. G. Cohen, D. B. Kaplan, and A. E. Nelson, “The More
minimal supersymmetric standard model”, Phys. Lett. B
388 (1996) 588,
10.1016/S0370-2693(96)01183-5,
arXiv:hep-ph/9607394.
.10 Results of method B fit-based background prediction
In Section 0.6.2, we detail the fit-based background
prediction methodology and present the model-independent SUSY search
results in the 2 \PQb-tag and ${\geq}3$
\PQb-tag bins of the Multijet category in Fig. 7.
In Figs. 18–22
in this Appendix, we present the results of the search for SUSY signal
events in the remaining categories, namely the 0 \PQb-tag and 1
\PQb-tag bins of the Multijet, the Muon Multijet, and Electron Multijet
categories. No statistically significant deviations from the expected
background predictions are observed in these categories in data.
.11 The CMS Collaboration
Yerevan Physics Institute, Yerevan, Armenia
V. Khachatryan, A.M. Sirunyan, A. Tumasyan
\cmsinstskipInstitut für Hochenergiephysik, Wien, Austria
W. Adam, E. Asilar, T. Bergauer, J. Brandstetter, E. Brondolin, M. Dragicevic, J. Erö, M. Flechl, M. Friedl, R. Frühwirth\cmsAuthorMark1, V.M. Ghete, C. Hartl, N. Hörmann, J. Hrubec, M. Jeitler\cmsAuthorMark1, A. König, I. Krätschmer, D. Liko, T. Matsushita, I. Mikulec, D. Rabady, N. Rad, B. Rahbaran, H. Rohringer, J. Schieck\cmsAuthorMark1, J. Strauss, W. Treberer-Treberspurg, W. Waltenberger, C.-E. Wulz\cmsAuthorMark1
\cmsinstskipNational Centre for Particle and High Energy Physics, Minsk, Belarus
V. Mossolov, N. Shumeiko, J. Suarez Gonzalez
\cmsinstskipUniversiteit Antwerpen, Antwerpen, Belgium
S. Alderweireldt, E.A. De Wolf, X. Janssen, J. Lauwers, M. Van De Klundert, H. Van Haevermaet, P. Van Mechelen, N. Van Remortel, A. Van Spilbeeck
\cmsinstskipVrije Universiteit Brussel, Brussel, Belgium
S. Abu Zeid, F. Blekman, J. D’Hondt, N. Daci, I. De Bruyn, K. Deroover, N. Heracleous, S. Lowette, S. Moortgat, L. Moreels, A. Olbrechts, Q. Python, S. Tavernier, W. Van Doninck, P. Van Mulders, I. Van Parijs
\cmsinstskipUniversité Libre de Bruxelles, Bruxelles, Belgium
H. Brun, C. Caillol, B. Clerbaux, G. De Lentdecker, H. Delannoy, G. Fasanella, L. Favart, R. Goldouzian, A. Grebenyuk, G. Karapostoli, T. Lenzi, A. Léonard, J. Luetic, T. Maerschalk, A. Marinov, A. Randle-conde, T. Seva, C. Vander Velde, P. Vanlaer, R. Yonamine, F. Zenoni, F. Zhang\cmsAuthorMark2
\cmsinstskipGhent University, Ghent, Belgium
A. Cimmino, T. Cornelis, D. Dobur, A. Fagot, G. Garcia, M. Gul, D. Poyraz, S. Salva, R. Schöfbeck, M. Tytgat, W. Van Driessche, E. Yazgan, N. Zaganidis
\cmsinstskipUniversité Catholique de Louvain, Louvain-la-Neuve, Belgium
H. Bakhshiansohi, C. Beluffi\cmsAuthorMark3, O. Bondu, S. Brochet, G. Bruno, A. Caudron, S. De Visscher, C. Delaere, M. Delcourt, B. Francois, A. Giammanco, A. Jafari, P. Jez, M. Komm, V. Lemaitre, A. Magitteri, A. Mertens, M. Musich, C. Nuttens, K. Piotrzkowski, L. Quertenmont, M. Selvaggi, M. Vidal Marono, S. Wertz
\cmsinstskipUniversité de Mons, Mons, Belgium
N. Beliy
\cmsinstskipCentro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, Brazil
W.L. Aldá Júnior, F.L. Alves, G.A. Alves, L. Brito, C. Hensel, A. Moraes, M.E. Pol, P. Rebello Teles
\cmsinstskipUniversidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
E. Belchior Batista Das Chagas, W. Carvalho, J. Chinellato\cmsAuthorMark4, A. Custódio, E.M. Da Costa, G.G. Da Silveira\cmsAuthorMark5, D. De Jesus Damiao, C. De Oliveira Martins, S. Fonseca De Souza, L.M. Huertas Guativa, H. Malbouisson, D. Matos Figueiredo, C. Mora Herrera, L. Mundim, H. Nogima, W.L. Prado Da Silva, A. Santoro, A. Sznajder, E.J. Tonelli Manganote\cmsAuthorMark4, A. Vilela Pereira
\cmsinstskipUniversidade Estadual Paulista ${}^{a}$, Universidade Federal do ABC ${}^{b}$, São Paulo, Brazil
S. Ahuja${}^{a}$, C.A. Bernardes${}^{b}$, S. Dogra${}^{a}$, T.R. Fernandez Perez Tomei${}^{a}$, E.M. Gregores${}^{b}$, P.G. Mercadante${}^{b}$, C.S. Moon${}^{a}$, S.F. Novaes${}^{a}$, Sandra S. Padula${}^{a}$, D. Romero Abad${}^{b}$, J.C. Ruiz Vargas
\cmsinstskipInstitute for Nuclear Research and Nuclear Energy, Sofia, Bulgaria
A. Aleksandrov, R. Hadjiiska, P. Iaydjiev, M. Rodozov, S. Stoykova, G. Sultanov, M. Vutova
\cmsinstskipUniversity of Sofia, Sofia, Bulgaria
A. Dimitrov, I. Glushkov, L. Litov, B. Pavlov, P. Petkov
\cmsinstskipBeihang University, Beijing, China
W. Fang\cmsAuthorMark6
\cmsinstskipInstitute of High Energy Physics, Beijing, China
M. Ahmad, J.G. Bian, G.M. Chen, H.S. Chen, M. Chen, Y. Chen\cmsAuthorMark7, T. Cheng, C.H. Jiang, D. Leggat, Z. Liu, F. Romeo, S.M. Shaheen, A. Spiezia, J. Tao, C. Wang, Z. Wang, H. Zhang, J. Zhao
\cmsinstskipState Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China
Y. Ban, G. Chen, Q. Li, S. Liu, Y. Mao, S.J. Qian, D. Wang, Z. Xu
\cmsinstskipUniversidad de Los Andes, Bogota, Colombia
C. Avila, A. Cabrera, L.F. Chaparro Sierra, C. Florez, J.P. Gomez, C.F. González Hernández, J.D. Ruiz Alvarez, J.C. Sanabria
\cmsinstskipUniversity of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split, Croatia
N. Godinovic, D. Lelas, I. Puljak, P.M. Ribeiro Cipriano, T. Sculac
\cmsinstskipUniversity of Split, Faculty of Science, Split, Croatia
Z. Antunovic, M. Kovac
\cmsinstskipInstitute Rudjer Boskovic, Zagreb, Croatia
V. Brigljevic, D. Ferencek, K. Kadija, S. Micanovic, L. Sudic, T. Susa
\cmsinstskipUniversity of Cyprus, Nicosia, Cyprus
A. Attikis, G. Mavromanolakis, J. Mousa, C. Nicolaou, F. Ptochos, P.A. Razis, H. Rykaczewski
\cmsinstskipCharles University, Prague, Czech Republic
M. Finger\cmsAuthorMark8, M. Finger Jr.\cmsAuthorMark8
\cmsinstskipUniversidad San Francisco de Quito, Quito, Ecuador
E. Carrera Jarrin
\cmsinstskipAcademy of Scientific Research and Technology of the Arab Republic of Egypt, Egyptian Network of High Energy Physics, Cairo, Egypt
Y. Assran\cmsAuthorMark9${}^{,}$\cmsAuthorMark10, T. Elkafrawy\cmsAuthorMark11, A. Mahrous\cmsAuthorMark12
\cmsinstskipNational Institute of Chemical Physics and Biophysics, Tallinn, Estonia
B. Calpas, M. Kadastik, M. Murumaa, L. Perrini, M. Raidal, A. Tiko, C. Veelken
\cmsinstskipDepartment of Physics, University of Helsinki, Helsinki, Finland
P. Eerola, J. Pekkanen, M. Voutilainen
\cmsinstskipHelsinki Institute of Physics, Helsinki, Finland
J. Härkönen, V. Karimäki, R. Kinnunen, T. Lampén, K. Lassila-Perini, S. Lehti, T. Lindén, P. Luukka, T. Peltola, J. Tuominiemi, E. Tuovinen, L. Wendland
\cmsinstskipLappeenranta University of Technology, Lappeenranta, Finland
J. Talvitie, T. Tuuva
\cmsinstskipIRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France
M. Besancon, F. Couderc, M. Dejardin, D. Denegri, B. Fabbro, J.L. Faure, C. Favaro, F. Ferri, S. Ganjour, S. Ghosh, A. Givernaud, P. Gras, G. Hamel de Monchenault, P. Jarry, I. Kucher, E. Locci, M. Machet, J. Malcles, J. Rander, A. Rosowsky, M. Titov, A. Zghiche
\cmsinstskipLaboratoire Leprince-Ringuet, Ecole Polytechnique, IN2P3-CNRS, Palaiseau, France
A. Abdulsalam, I. Antropov, S. Baffioni, F. Beaudette, P. Busson, L. Cadamuro, E. Chapon, C. Charlot, O. Davignon, R. Granier de Cassagnac, M. Jo, S. Lisniak, P. Miné, M. Nguyen, C. Ochando, G. Ortona, P. Paganini, P. Pigard, S. Regnard, R. Salerno, Y. Sirois, T. Strebler, Y. Yilmaz, A. Zabi
\cmsinstskipInstitut Pluridisciplinaire Hubert Curien, Université de Strasbourg, Université de Haute Alsace Mulhouse, CNRS/IN2P3, Strasbourg, France
J.-L. Agram\cmsAuthorMark13, J. Andrea, A. Aubin, D. Bloch, J.-M. Brom, M. Buttignol, E.C. Chabert, N. Chanon, C. Collard, E. Conte\cmsAuthorMark13, X. Coubez, J.-C. Fontaine\cmsAuthorMark13, D. Gelé, U. Goerlach, A.-C. Le Bihan, J.A. Merlin\cmsAuthorMark14, K. Skovpen, P. Van Hove
\cmsinstskipCentre de Calcul de l’Institut National de Physique Nucleaire et de Physique des Particules, CNRS/IN2P3, Villeurbanne, France
S. Gadrat
\cmsinstskipUniversité de Lyon, Université Claude Bernard Lyon 1, CNRS-IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne, France
S. Beauceron, C. Bernet, G. Boudoul, E. Bouvier, C.A. Carrillo Montoya, R. Chierici, D. Contardo, B. Courbon, P. Depasse, H. El Mamouni, J. Fan, J. Fay, S. Gascon, M. Gouzevitch, G. Grenier, B. Ille, F. Lagarde, I.B. Laktineh, M. Lethuillier, L. Mirabito, A.L. Pequegnot, S. Perries, A. Popov\cmsAuthorMark15, D. Sabes, V. Sordini, M. Vander Donckt, P. Verdier, S. Viret
\cmsinstskipGeorgian Technical University, Tbilisi, Georgia
T. Toriashvili\cmsAuthorMark16
\cmsinstskipTbilisi State University, Tbilisi, Georgia
Z. Tsamalaidze\cmsAuthorMark8
\cmsinstskipRWTH Aachen University, I. Physikalisches Institut, Aachen, Germany
C. Autermann, S. Beranek, L. Feld, A. Heister, M.K. Kiesel, K. Klein, M. Lipinski, A. Ostapchuk, M. Preuten, F. Raupach, S. Schael, C. Schomakers, J.F. Schulte, J. Schulz, T. Verlage, H. Weber, V. Zhukov\cmsAuthorMark15
\cmsinstskipRWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany
M. Brodski, E. Dietz-Laursonn, D. Duchardt, M. Endres, M. Erdmann, S. Erdweg, T. Esch, R. Fischer, A. Güth, M. Hamer, T. Hebbeker, C. Heidemann, K. Hoepfner, S. Knutzen, M. Merschmeyer, A. Meyer, P. Millet, S. Mukherjee, M. Olschewski, K. Padeken, T. Pook, M. Radziej, H. Reithler, M. Rieger, F. Scheuch, L. Sonnenschein, D. Teyssier, S. Thüer
\cmsinstskipRWTH Aachen University, III. Physikalisches Institut B, Aachen, Germany
V. Cherepanov, G. Flügge, W. Haj Ahmad, F. Hoehle, B. Kargoll, T. Kress, A. Künsken, J. Lingemann, T. Müller, A. Nehrkorn, A. Nowack, I.M. Nugent, C. Pistone, O. Pooth, A. Stahl\cmsAuthorMark14
\cmsinstskipDeutsches Elektronen-Synchrotron, Hamburg, Germany
M. Aldaya Martin, C. Asawatangtrakuldee, K. Beernaert, O. Behnke, U. Behrens, A.A. Bin Anuar, K. Borras\cmsAuthorMark17, A. Campbell, P. Connor, C. Contreras-Campana, F. Costanza, C. Diez Pardos, G. Dolinska, G. Eckerlin, D. Eckstein, E. Eren, E. Gallo\cmsAuthorMark18, J. Garay Garcia, A. Geiser, A. Gizhko, J.M. Grados Luyando, P. Gunnellini, A. Harb, J. Hauk, M. Hempel\cmsAuthorMark19, H. Jung, A. Kalogeropoulos, O. Karacheban\cmsAuthorMark19, M. Kasemann, J. Keaveney, J. Kieseler, C. Kleinwort, I. Korol, D. Krücker, W. Lange, A. Lelek, J. Leonard, K. Lipka, A. Lobanov, W. Lohmann\cmsAuthorMark19, R. Mankel, I.-A. Melzer-Pellmann, A.B. Meyer, G. Mittag, J. Mnich, A. Mussgiller, E. Ntomari, D. Pitzl, R. Placakyte, A. Raspereza, B. Roland, M.Ö. Sahin, P. Saxena, T. Schoerner-Sadenius, C. Seitz, S. Spannagel, N. Stefaniuk, K.D. Trippkewitz, G.P. Van Onsem, R. Walsh, C. Wissing
\cmsinstskipUniversity of Hamburg, Hamburg, Germany
V. Blobel, M. Centis Vignali, A.R. Draeger, T. Dreyer, E. Garutti, D. Gonzalez, J. Haller, M. Hoffmann, A. Junkes, R. Klanner, R. Kogler, N. Kovalchuk, T. Lapsien, T. Lenz, I. Marchesini, D. Marconi, M. Meyer, M. Niedziela, D. Nowatschin, F. Pantaleo\cmsAuthorMark14, T. Peiffer, A. Perieanu, J. Poehlsen, C. Sander, C. Scharf, P. Schleper, A. Schmidt, S. Schumann, J. Schwandt, H. Stadie, G. Steinbrück, F.M. Stober, M. Stöver, H. Tholen, D. Troendle, E. Usai, L. Vanelderen, A. Vanhoefer, B. Vormwald
\cmsinstskipInstitut für Experimentelle Kernphysik, Karlsruhe, Germany
C. Barth, C. Baus, J. Berger, E. Butz, T. Chwalek, F. Colombo, W. De Boer, A. Dierlamm, S. Fink, R. Friese, M. Giffels, A. Gilbert, P. Goldenzweig, D. Haitz, F. Hartmann\cmsAuthorMark14, S.M. Heindl, U. Husemann, I. Katkov\cmsAuthorMark15, P. Lobelle Pardo, B. Maier, H. Mildner, M.U. Mozer, Th. Müller, M. Plagge, G. Quast, K. Rabbertz, S. Röcker, F. Roscher, M. Schröder, I. Shvetsov, G. Sieber, H.J. Simonis, R. Ulrich, J. Wagner-Kuhr, S. Wayand, M. Weber, T. Weiler, S. Williamson, C. Wöhrmann, R. Wolf
\cmsinstskipInstitute of Nuclear and Particle Physics (INPP), NCSR Demokritos, Aghia Paraskevi, Greece
G. Anagnostou, G. Daskalakis, T. Geralis, V.A. Giakoumopoulou, A. Kyriakis, D. Loukas, I. Topsis-Giotis
\cmsinstskipNational and Kapodistrian University of Athens, Athens, Greece
A. Agapitos, S. Kesisoglou, A. Panagiotou, N. Saoulidou, E. Tziaferi
\cmsinstskipUniversity of Ioánnina, Ioánnina, Greece
I. Evangelou, G. Flouris, C. Foudas, P. Kokkas, N. Loukas, N. Manthos, I. Papadopoulos, E. Paradas
\cmsinstskipMTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös Loránd University, Budapest, Hungary
N. Filipovic
\cmsinstskipWigner Research Centre for Physics, Budapest, Hungary
G. Bencze, C. Hajdu, P. Hidas, D. Horvath\cmsAuthorMark20, F. Sikler, V. Veszpremi, G. Vesztergombi\cmsAuthorMark21, A.J. Zsigmond
\cmsinstskipInstitute of Nuclear Research ATOMKI, Debrecen, Hungary
N. Beni, S. Czellar, J. Karancsi\cmsAuthorMark22, A. Makovec, J. Molnar, Z. Szillasi
\cmsinstskipUniversity of Debrecen, Debrecen, Hungary
M. Bartók\cmsAuthorMark21, P. Raics, Z.L. Trocsanyi, B. Ujvari
\cmsinstskipNational Institute of Science Education and Research, Bhubaneswar, India
S. Bahinipati, S. Choudhury\cmsAuthorMark23, P. Mal, K. Mandal, A. Nayak\cmsAuthorMark24, D.K. Sahoo, N. Sahoo, S.K. Swain
\cmsinstskipPanjab University, Chandigarh, India
S. Bansal, S.B. Beri, V. Bhatnagar, R. Chawla, U. Bhawandeep, A.K. Kalsi, A. Kaur, M. Kaur, R. Kumar, A. Mehta, M. Mittal, J.B. Singh, G. Walia
\cmsinstskipUniversity of Delhi, Delhi, India
Ashok Kumar, A. Bhardwaj, B.C. Choudhary, R.B. Garg, S. Keshri, S. Malhotra, M. Naimuddin, N. Nishu, K. Ranjan, R. Sharma, V. Sharma
\cmsinstskipSaha Institute of Nuclear Physics, Kolkata, India
R. Bhattacharya, S. Bhattacharya, K. Chatterjee, S. Dey, S. Dutt, S. Dutta, S. Ghosh, N. Majumdar, A. Modak, K. Mondal, S. Mukhopadhyay, S. Nandan, A. Purohit, A. Roy, D. Roy, S. Roy Chowdhury, S. Sarkar, M. Sharan, S. Thakur
\cmsinstskipIndian Institute of Technology Madras, Madras, India
P.K. Behera
\cmsinstskipBhabha Atomic Research Centre, Mumbai, India
R. Chudasama, D. Dutta, V. Jha, V. Kumar, A.K. Mohanty\cmsAuthorMark14, P.K. Netrakanti, L.M. Pant, P. Shukla, A. Topkar
\cmsinstskipTata Institute of Fundamental Research-A, Mumbai, India
T. Aziz, S. Dugad, G. Kole, B. Mahakud, S. Mitra, G.B. Mohanty, B. Parida, N. Sur, B. Sutar
\cmsinstskipTata Institute of Fundamental Research-B, Mumbai, India
S. Banerjee, S. Bhowmik\cmsAuthorMark25, R.K. Dewanjee, S. Ganguly, M. Guchait, Sa. Jain, S. Kumar, M. Maity\cmsAuthorMark25, G. Majumder, K. Mazumdar, T. Sarkar\cmsAuthorMark25, N. Wickramage\cmsAuthorMark26
\cmsinstskipIndian Institute of Science Education and Research (IISER), Pune, India
S. Chauhan, S. Dube, V. Hegde, A. Kapoor, K. Kothekar, A. Rane, S. Sharma
\cmsinstskipInstitute for Research in Fundamental Sciences (IPM), Tehran, Iran
H. Behnamian, S. Chenarani\cmsAuthorMark27, E. Eskandari Tadavani, S.M. Etesami\cmsAuthorMark27, A. Fahim\cmsAuthorMark28, M. Khakzad, M. Mohammadi Najafabadi, M. Naseri, S. Paktinat Mehdiabadi\cmsAuthorMark29, F. Rezaei Hosseinabadi, B. Safarzadeh\cmsAuthorMark30, M. Zeinali
\cmsinstskipUniversity College Dublin, Dublin, Ireland
M. Felcini, M. Grunewald
\cmsinstskipINFN Sezione di Bari ${}^{a}$, Università di Bari ${}^{b}$, Politecnico di Bari ${}^{c}$, Bari, Italy
M. Abbrescia${}^{a}$${}^{,}$${}^{b}$, C. Calabria${}^{a}$${}^{,}$${}^{b}$, C. Caputo${}^{a}$${}^{,}$${}^{b}$, A. Colaleo${}^{a}$, D. Creanza${}^{a}$${}^{,}$${}^{c}$, L. Cristella${}^{a}$${}^{,}$${}^{b}$, N. De Filippis${}^{a}$${}^{,}$${}^{c}$, M. De Palma${}^{a}$${}^{,}$${}^{b}$, L. Fiore${}^{a}$, G. Iaselli${}^{a}$${}^{,}$${}^{c}$, G. Maggi${}^{a}$${}^{,}$${}^{c}$, M. Maggi${}^{a}$, G. Miniello${}^{a}$${}^{,}$${}^{b}$, S. My${}^{a}$${}^{,}$${}^{b}$, S. Nuzzo${}^{a}$${}^{,}$${}^{b}$, A. Pompili${}^{a}$${}^{,}$${}^{b}$, G. Pugliese${}^{a}$${}^{,}$${}^{c}$, R. Radogna${}^{a}$${}^{,}$${}^{b}$, A. Ranieri${}^{a}$, G. Selvaggi${}^{a}$${}^{,}$${}^{b}$, L. Silvestris${}^{a}$${}^{,}$\cmsAuthorMark14, R. Venditti${}^{a}$${}^{,}$${}^{b}$, P. Verwilligen${}^{a}$
\cmsinstskipINFN Sezione di Bologna ${}^{a}$, Università di Bologna ${}^{b}$, Bologna, Italy
G. Abbiendi${}^{a}$, C. Battilana, D. Bonacorsi${}^{a}$${}^{,}$${}^{b}$, S. Braibant-Giacomelli${}^{a}$${}^{,}$${}^{b}$, L. Brigliadori${}^{a}$${}^{,}$${}^{b}$, R. Campanini${}^{a}$${}^{,}$${}^{b}$, P. Capiluppi${}^{a}$${}^{,}$${}^{b}$, A. Castro${}^{a}$${}^{,}$${}^{b}$, F.R. Cavallo${}^{a}$, S.S. Chhibra${}^{a}$${}^{,}$${}^{b}$, G. Codispoti${}^{a}$${}^{,}$${}^{b}$, M. Cuffiani${}^{a}$${}^{,}$${}^{b}$, G.M. Dallavalle${}^{a}$, F. Fabbri${}^{a}$, A. Fanfani${}^{a}$${}^{,}$${}^{b}$, D. Fasanella${}^{a}$${}^{,}$${}^{b}$, P. Giacomelli${}^{a}$, C. Grandi${}^{a}$, L. Guiducci${}^{a}$${}^{,}$${}^{b}$, S. Marcellini${}^{a}$, G. Masetti${}^{a}$, A. Montanari${}^{a}$, F.L. Navarria${}^{a}$${}^{,}$${}^{b}$, A. Perrotta${}^{a}$, A.M. Rossi${}^{a}$${}^{,}$${}^{b}$, T. Rovelli${}^{a}$${}^{,}$${}^{b}$, G.P. Siroli${}^{a}$${}^{,}$${}^{b}$, N. Tosi${}^{a}$${}^{,}$${}^{b}$${}^{,}$\cmsAuthorMark14
\cmsinstskipINFN Sezione di Catania ${}^{a}$, Università di Catania ${}^{b}$, Catania, Italy
S. Albergo${}^{a}$${}^{,}$${}^{b}$, M. Chiorboli${}^{a}$${}^{,}$${}^{b}$, S. Costa${}^{a}$${}^{,}$${}^{b}$, A. Di Mattia${}^{a}$, F. Giordano${}^{a}$${}^{,}$${}^{b}$, R. Potenza${}^{a}$${}^{,}$${}^{b}$, A. Tricomi${}^{a}$${}^{,}$${}^{b}$, C. Tuve${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Firenze ${}^{a}$, Università di Firenze ${}^{b}$, Firenze, Italy
G. Barbagli${}^{a}$, V. Ciulli${}^{a}$${}^{,}$${}^{b}$, C. Civinini${}^{a}$, R. D’Alessandro${}^{a}$${}^{,}$${}^{b}$, E. Focardi${}^{a}$${}^{,}$${}^{b}$, V. Gori${}^{a}$${}^{,}$${}^{b}$, P. Lenzi${}^{a}$${}^{,}$${}^{b}$, M. Meschini${}^{a}$, S. Paoletti${}^{a}$, G. Sguazzoni${}^{a}$, L. Viliani${}^{a}$${}^{,}$${}^{b}$${}^{,}$\cmsAuthorMark14
\cmsinstskipINFN Laboratori Nazionali di Frascati, Frascati, Italy
L. Benussi, S. Bianco, F. Fabbri, D. Piccolo, F. Primavera\cmsAuthorMark14
\cmsinstskipINFN Sezione di Genova ${}^{a}$, Università di Genova ${}^{b}$, Genova, Italy
V. Calvelli${}^{a}$${}^{,}$${}^{b}$, F. Ferro${}^{a}$, M. Lo Vetere${}^{a}$${}^{,}$${}^{b}$, M.R. Monge${}^{a}$${}^{,}$${}^{b}$, E. Robutti${}^{a}$, S. Tosi${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Milano-Bicocca ${}^{a}$, Università di Milano-Bicocca ${}^{b}$, Milano, Italy
L. Brianza\cmsAuthorMark14, M.E. Dinardo${}^{a}$${}^{,}$${}^{b}$, S. Fiorendi${}^{a}$${}^{,}$${}^{b}$, S. Gennai${}^{a}$, A. Ghezzi${}^{a}$${}^{,}$${}^{b}$, P. Govoni${}^{a}$${}^{,}$${}^{b}$, M. Malberti, S. Malvezzi${}^{a}$, R.A. Manzoni${}^{a}$${}^{,}$${}^{b}$${}^{,}$\cmsAuthorMark14, B. Marzocchi${}^{a}$${}^{,}$${}^{b}$, D. Menasce${}^{a}$, L. Moroni${}^{a}$, M. Paganoni${}^{a}$${}^{,}$${}^{b}$, D. Pedrini${}^{a}$, S. Pigazzini, S. Ragazzi${}^{a}$${}^{,}$${}^{b}$, T. Tabarelli de Fatis${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Napoli ${}^{a}$, Università di Napoli ’Federico II’ ${}^{b}$, Napoli, Italy, Università della Basilicata ${}^{c}$, Potenza, Italy, Università G. Marconi ${}^{d}$, Roma, Italy
S. Buontempo${}^{a}$, N. Cavallo${}^{a}$${}^{,}$${}^{c}$, G. De Nardo, S. Di Guida${}^{a}$${}^{,}$${}^{d}$${}^{,}$\cmsAuthorMark14, M. Esposito${}^{a}$${}^{,}$${}^{b}$, F. Fabozzi${}^{a}$${}^{,}$${}^{c}$, A.O.M. Iorio${}^{a}$${}^{,}$${}^{b}$, G. Lanza${}^{a}$, L. Lista${}^{a}$, S. Meola${}^{a}$${}^{,}$${}^{d}$${}^{,}$\cmsAuthorMark14, P. Paolucci${}^{a}$${}^{,}$\cmsAuthorMark14, C. Sciacca${}^{a}$${}^{,}$${}^{b}$, F. Thyssen
\cmsinstskipINFN Sezione di Padova ${}^{a}$, Università di Padova ${}^{b}$, Padova, Italy, Università di Trento ${}^{c}$, Trento, Italy
P. Azzi${}^{a}$${}^{,}$\cmsAuthorMark14, N. Bacchetta${}^{a}$, L. Benato${}^{a}$${}^{,}$${}^{b}$, D. Bisello${}^{a}$${}^{,}$${}^{b}$, A. Boletti${}^{a}$${}^{,}$${}^{b}$, R. Carlin${}^{a}$${}^{,}$${}^{b}$, A. Carvalho Antunes De Oliveira${}^{a}$${}^{,}$${}^{b}$, P. Checchia${}^{a}$, M. Dall’Osso${}^{a}$${}^{,}$${}^{b}$, P. De Castro Manzano${}^{a}$, T. Dorigo${}^{a}$, U. Dosselli${}^{a}$, F. Gasparini${}^{a}$${}^{,}$${}^{b}$, U. Gasparini${}^{a}$${}^{,}$${}^{b}$, A. Gozzelino${}^{a}$, S. Lacaprara${}^{a}$, M. Margoni${}^{a}$${}^{,}$${}^{b}$, A.T. Meneguzzo${}^{a}$${}^{,}$${}^{b}$, J. Pazzini${}^{a}$${}^{,}$${}^{b}$${}^{,}$\cmsAuthorMark14, N. Pozzobon${}^{a}$${}^{,}$${}^{b}$, P. Ronchese${}^{a}$${}^{,}$${}^{b}$, F. Simonetto${}^{a}$${}^{,}$${}^{b}$, E. Torassa${}^{a}$, M. Zanetti, P. Zotto${}^{a}$${}^{,}$${}^{b}$, A. Zucchetta${}^{a}$${}^{,}$${}^{b}$, G. Zumerle${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Pavia ${}^{a}$, Università di Pavia ${}^{b}$, Pavia, Italy
A. Braghieri${}^{a}$, A. Magnani${}^{a}$${}^{,}$${}^{b}$, P. Montagna${}^{a}$${}^{,}$${}^{b}$, S.P. Ratti${}^{a}$${}^{,}$${}^{b}$, V. Re${}^{a}$, C. Riccardi${}^{a}$${}^{,}$${}^{b}$, P. Salvini${}^{a}$, I. Vai${}^{a}$${}^{,}$${}^{b}$, P. Vitulo${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Perugia ${}^{a}$, Università di Perugia ${}^{b}$, Perugia, Italy
L. Alunni Solestizi${}^{a}$${}^{,}$${}^{b}$, G.M. Bilei${}^{a}$, D. Ciangottini${}^{a}$${}^{,}$${}^{b}$, L. Fanò${}^{a}$${}^{,}$${}^{b}$, P. Lariccia${}^{a}$${}^{,}$${}^{b}$, R. Leonardi${}^{a}$${}^{,}$${}^{b}$, G. Mantovani${}^{a}$${}^{,}$${}^{b}$, M. Menichelli${}^{a}$, A. Saha${}^{a}$, A. Santocchia${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Pisa ${}^{a}$, Università di Pisa ${}^{b}$, Scuola Normale Superiore di Pisa ${}^{c}$, Pisa, Italy
K. Androsov${}^{a}$${}^{,}$\cmsAuthorMark31, P. Azzurri${}^{a}$${}^{,}$\cmsAuthorMark14, G. Bagliesi${}^{a}$, J. Bernardini${}^{a}$, T. Boccali${}^{a}$, R. Castaldi${}^{a}$, M.A. Ciocci${}^{a}$${}^{,}$\cmsAuthorMark31, R. Dell’Orso${}^{a}$, S. Donato${}^{a}$${}^{,}$${}^{c}$, G. Fedi, A. Giassi${}^{a}$, M.T. Grippo${}^{a}$${}^{,}$\cmsAuthorMark31, F. Ligabue${}^{a}$${}^{,}$${}^{c}$, T. Lomtadze${}^{a}$, L. Martini${}^{a}$${}^{,}$${}^{b}$, A. Messineo${}^{a}$${}^{,}$${}^{b}$, F. Palla${}^{a}$, A. Rizzi${}^{a}$${}^{,}$${}^{b}$, A. Savoy-Navarro${}^{a}$${}^{,}$\cmsAuthorMark32, P. Spagnolo${}^{a}$, R. Tenchini${}^{a}$, G. Tonelli${}^{a}$${}^{,}$${}^{b}$, A. Venturi${}^{a}$, P.G. Verdini${}^{a}$
\cmsinstskipINFN Sezione di Roma ${}^{a}$, Università di Roma ${}^{b}$, Roma, Italy
L. Barone${}^{a}$${}^{,}$${}^{b}$, F. Cavallari${}^{a}$, M. Cipriani${}^{a}$${}^{,}$${}^{b}$, G. D’imperio${}^{a}$${}^{,}$${}^{b}$${}^{,}$\cmsAuthorMark14, D. Del Re${}^{a}$${}^{,}$${}^{b}$${}^{,}$\cmsAuthorMark14, M. Diemoz${}^{a}$, S. Gelli${}^{a}$${}^{,}$${}^{b}$, E. Longo${}^{a}$${}^{,}$${}^{b}$, F. Margaroli${}^{a}$${}^{,}$${}^{b}$, P. Meridiani${}^{a}$, G. Organtini${}^{a}$${}^{,}$${}^{b}$, R. Paramatti${}^{a}$, F. Preiato${}^{a}$${}^{,}$${}^{b}$, S. Rahatlou${}^{a}$${}^{,}$${}^{b}$, C. Rovelli${}^{a}$, F. Santanastasio${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Torino ${}^{a}$, Università di Torino ${}^{b}$, Torino, Italy, Università del Piemonte Orientale ${}^{c}$, Novara, Italy
N. Amapane${}^{a}$${}^{,}$${}^{b}$, R. Arcidiacono${}^{a}$${}^{,}$${}^{c}$${}^{,}$\cmsAuthorMark14, S. Argiro${}^{a}$${}^{,}$${}^{b}$, M. Arneodo${}^{a}$${}^{,}$${}^{c}$, N. Bartosik${}^{a}$, R. Bellan${}^{a}$${}^{,}$${}^{b}$, C. Biino${}^{a}$, N. Cartiglia${}^{a}$, F. Cenna${}^{a}$${}^{,}$${}^{b}$, M. Costa${}^{a}$${}^{,}$${}^{b}$, R. Covarelli${}^{a}$${}^{,}$${}^{b}$, A. Degano${}^{a}$${}^{,}$${}^{b}$, N. Demaria${}^{a}$, L. Finco${}^{a}$${}^{,}$${}^{b}$, B. Kiani${}^{a}$${}^{,}$${}^{b}$, C. Mariotti${}^{a}$, S. Maselli${}^{a}$, E. Migliore${}^{a}$${}^{,}$${}^{b}$, V. Monaco${}^{a}$${}^{,}$${}^{b}$, E. Monteil${}^{a}$${}^{,}$${}^{b}$, M.M. Obertino${}^{a}$${}^{,}$${}^{b}$, L. Pacher${}^{a}$${}^{,}$${}^{b}$, N. Pastrone${}^{a}$, M. Pelliccioni${}^{a}$, G.L. Pinna Angioni${}^{a}$${}^{,}$${}^{b}$, F. Ravera${}^{a}$${}^{,}$${}^{b}$, A. Romero${}^{a}$${}^{,}$${}^{b}$, M. Ruspa${}^{a}$${}^{,}$${}^{c}$, R. Sacchi${}^{a}$${}^{,}$${}^{b}$, K. Shchelina${}^{a}$${}^{,}$${}^{b}$, V. Sola${}^{a}$, A. Solano${}^{a}$${}^{,}$${}^{b}$, A. Staiano${}^{a}$, P. Traczyk${}^{a}$${}^{,}$${}^{b}$
\cmsinstskipINFN Sezione di Trieste ${}^{a}$, Università di Trieste ${}^{b}$, Trieste, Italy
S. Belforte${}^{a}$, M. Casarsa${}^{a}$, F. Cossutti${}^{a}$, G. Della Ricca${}^{a}$${}^{,}$${}^{b}$, C. La Licata${}^{a}$${}^{,}$${}^{b}$, A. Schizzi${}^{a}$${}^{,}$${}^{b}$, A. Zanetti${}^{a}$
\cmsinstskipKyungpook National University, Daegu, Korea
D.H. Kim, G.N. Kim, M.S. Kim, S. Lee, S.W. Lee, Y.D. Oh, S. Sekmen, D.C. Son, Y.C. Yang
\cmsinstskipChonbuk National University, Jeonju, Korea
A. Lee
\cmsinstskipHanyang University, Seoul, Korea
J.A. Brochero Cifuentes, T.J. Kim
\cmsinstskipKorea University, Seoul, Korea
S. Cho, S. Choi, Y. Go, D. Gyun, S. Ha, B. Hong, Y. Jo, Y. Kim, B. Lee, K. Lee, K.S. Lee, S. Lee, J. Lim, S.K. Park, Y. Roh
\cmsinstskipSeoul National University, Seoul, Korea
J. Almond, J. Kim, H. Lee, S.B. Oh, B.C. Radburn-Smith, S.h. Seo, U.K. Yang, H.D. Yoo, G.B. Yu
\cmsinstskipUniversity of Seoul, Seoul, Korea
M. Choi, H. Kim, H. Kim, J.H. Kim, J.S.H. Lee, I.C. Park, G. Ryu, M.S. Ryu
\cmsinstskipSungkyunkwan University, Suwon, Korea
Y. Choi, J. Goh, C. Hwang, J. Lee, I. Yu
\cmsinstskipVilnius University, Vilnius, Lithuania
V. Dudenas, A. Juodagalvis, J. Vaitkus
\cmsinstskipNational Centre for Particle Physics, Universiti Malaya, Kuala Lumpur, Malaysia
I. Ahmed, Z.A. Ibrahim, J.R. Komaragiri, M.A.B. Md Ali\cmsAuthorMark33, F. Mohamad Idris\cmsAuthorMark34, W.A.T. Wan Abdullah, M.N. Yusli, Z. Zolkapli
\cmsinstskipCentro de Investigacion y de Estudios Avanzados del IPN, Mexico City, Mexico
H. Castilla-Valdez, E. De La Cruz-Burelo, I. Heredia-De La Cruz\cmsAuthorMark35, A. Hernandez-Almada, R. Lopez-Fernandez, R. Magaña Villalba, J. Mejia Guisao, A. Sanchez-Hernandez
\cmsinstskipUniversidad Iberoamericana, Mexico City, Mexico
S. Carrillo Moreno, C. Oropeza Barrera, F. Vazquez Valencia
\cmsinstskipBenemerita Universidad Autonoma de Puebla, Puebla, Mexico
S. Carpinteyro, I. Pedraza, H.A. Salazar Ibarguen, C. Uribe Estrada
\cmsinstskipUniversidad Autónoma de San Luis Potosí, San Luis Potosí, Mexico
A. Morelos Pineda
\cmsinstskipUniversity of Auckland, Auckland, New Zealand
D. Krofcheck
\cmsinstskipUniversity of Canterbury, Christchurch, New Zealand
P.H. Butler
\cmsinstskipNational Centre for Physics, Quaid-I-Azam University, Islamabad, Pakistan
A. Ahmad, M. Ahmad, Q. Hassan, H.R. Hoorani, W.A. Khan, M.A. Shah, M. Shoaib, M. Waqas
\cmsinstskipNational Centre for Nuclear Research, Swierk, Poland
H. Bialkowska, M. Bluj, B. Boimska, T. Frueboes, M. Górski, M. Kazana, K. Nawrocki, K. Romanowska-Rybinska, M. Szleper, P. Zalewski
\cmsinstskipInstitute of Experimental Physics, Faculty of Physics, University of Warsaw, Warsaw, Poland
K. Bunkowski, A. Byszuk\cmsAuthorMark36, K. Doroba, A. Kalinowski, M. Konecki, J. Krolikowski, M. Misiura, M. Olszewski, M. Walczak
\cmsinstskipLaboratório de Instrumentação e Física Experimental de Partículas, Lisboa, Portugal
P. Bargassa, C. Beirão Da Cruz E Silva, A. Di Francesco, P. Faccioli, P.G. Ferreira Parracho, M. Gallinaro, J. Hollar, N. Leonardo, L. Lloret Iglesias, M.V. Nemallapudi, J. Rodrigues Antunes, J. Seixas, O. Toldaiev, D. Vadruccio, J. Varela, P. Vischia
\cmsinstskipJoint Institute for Nuclear Research, Dubna, Russia
S. Afanasiev, V. Alexakhin, M. Gavrilenko, I. Golutvin, I. Gorbunov, A. Kamenev, V. Karjavin, A. Lanev, A. Malakhov, V. Matveev\cmsAuthorMark37${}^{,}$\cmsAuthorMark38, P. Moisenz, V. Palichik, V. Perelygin, M. Savina, S. Shmatov, N. Skatchkov, V. Smirnov, N. Voytishin, A. Zarubin
\cmsinstskipPetersburg Nuclear Physics Institute, Gatchina (St. Petersburg), Russia
L. Chtchipounov, V. Golovtsov, Y. Ivanov, V. Kim\cmsAuthorMark39, E. Kuznetsova\cmsAuthorMark40, V. Murzin, V. Oreshkin, V. Sulimov, A. Vorobyev
\cmsinstskipInstitute for Nuclear Research, Moscow, Russia
Yu. Andreev, A. Dermenev, S. Gninenko, N. Golubev, A. Karneyeu, M. Kirsanov, N. Krasnikov, A. Pashenkov, D. Tlisov, A. Toropin
\cmsinstskipInstitute for Theoretical and Experimental Physics, Moscow, Russia
V. Epshteyn, V. Gavrilov, N. Lychkovskaya, V. Popov, I. Pozdnyakov, G. Safronov, A. Spiridonov, M. Toms, E. Vlasov, A. Zhokin
\cmsinstskipMoscow Institute of Physics and Technology, Moscow, Russia
A. Bylinkin\cmsAuthorMark38
\cmsinstskipNational Research Nuclear University ’Moscow Engineering Physics Institute’ (MEPhI), Moscow, Russia
R. Chistov\cmsAuthorMark41, M. Danilov\cmsAuthorMark41, V. Rusinov
\cmsinstskipP.N. Lebedev Physical Institute, Moscow, Russia
V. Andreev, M. Azarkin\cmsAuthorMark38, I. Dremin\cmsAuthorMark38, M. Kirakosyan, A. Leonidov\cmsAuthorMark38, S.V. Rusakov, A. Terkulov
\cmsinstskipSkobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow, Russia
A. Baskakov, A. Belyaev, E. Boos, M. Dubinin\cmsAuthorMark42, L. Dudko, A. Ershov, A. Gribushin, V. Klyukhin, O. Kodolova, I. Lokhtin, I. Miagkov, S. Obraztsov, S. Petrushanko, V. Savrin, A. Snigirev
\cmsinstskipNovosibirsk State University (NSU), Novosibirsk, Russia
V. Blinov\cmsAuthorMark43, Y. Skovpen\cmsAuthorMark43
\cmsinstskipState Research Center of Russian Federation, Institute for High Energy Physics, Protvino, Russia
I. Azhgirey, I. Bayshev, S. Bitioukov, D. Elumakhov, V. Kachanov, A. Kalinin, D. Konstantinov, V. Krychkine, V. Petrov, R. Ryutin, A. Sobol, S. Troshin, N. Tyurin, A. Uzunian, A. Volkov
\cmsinstskipUniversity of Belgrade, Faculty of Physics and Vinca Institute of Nuclear Sciences, Belgrade, Serbia
P. Adzic\cmsAuthorMark44, P. Cirkovic, D. Devetak, M. Dordevic, J. Milosevic, V. Rekovic
\cmsinstskipCentro de Investigaciones Energéticas Medioambientales y Tecnológicas (CIEMAT), Madrid, Spain
J. Alcaraz Maestre, M. Barrio Luna, E. Calvo, M. Cerrada, M. Chamizo Llatas, N. Colino, B. De La Cruz, A. Delgado Peris, A. Escalante Del Valle, C. Fernandez Bedoya, J.P. Fernández Ramos, J. Flix, M.C. Fouz, P. Garcia-Abia, O. Gonzalez Lopez, S. Goy Lopez, J.M. Hernandez, M.I. Josa, E. Navarro De Martino, A. Pérez-Calero Yzquierdo, J. Puerta Pelayo, A. Quintario Olmeda, I. Redondo, L. Romero, M.S. Soares
\cmsinstskipUniversidad Autónoma de Madrid, Madrid, Spain
J.F. de Trocóniz, M. Missiroli, D. Moran
\cmsinstskipUniversidad de Oviedo, Oviedo, Spain
J. Cuevas, J. Fernandez Menendez, I. Gonzalez Caballero, J.R. González Fernández, E. Palencia Cortezon, S. Sanchez Cruz, I. Suárez Andrés, J.M. Vizan Garcia
\cmsinstskipInstituto de Física de Cantabria (IFCA), CSIC-Universidad de Cantabria, Santander, Spain
I.J. Cabrillo, A. Calderon, J.R. Castiñeiras De Saa, E. Curras, M. Fernandez, J. Garcia-Ferrero, G. Gomez, A. Lopez Virto, J. Marco, C. Martinez Rivero, F. Matorras, J. Piedra Gomez, T. Rodrigo, A. Ruiz-Jimeno, L. Scodellaro, N. Trevisani, I. Vila, R. Vilar Cortabitarte
\cmsinstskipCERN, European Organization for Nuclear Research, Geneva, Switzerland
D. Abbaneo, E. Auffray, G. Auzinger, M. Bachtis, P. Baillon, A.H. Ball, D. Barney, P. Bloch, A. Bocci, A. Bonato, C. Botta, T. Camporesi, R. Castello, M. Cepeda, G. Cerminara, M. D’Alfonso, D. d’Enterria, A. Dabrowski, V. Daponte, A. David, M. De Gruttola, F. De Guio, A. De Roeck, E. Di Marco\cmsAuthorMark45, M. Dobson, B. Dorney, T. du Pree, D. Duggan, M. Dünser, N. Dupont, A. Elliott-Peisert, S. Fartoukh, G. Franzoni, J. Fulcher, W. Funk, D. Gigi, K. Gill, M. Girone, F. Glege, D. Gulhan, S. Gundacker, M. Guthoff, J. Hammer, P. Harris, J. Hegeman, V. Innocente, P. Janot, H. Kirschenmann, V. Knünz, A. Kornmayer\cmsAuthorMark14, M.J. Kortelainen, K. Kousouris, M. Krammer\cmsAuthorMark1, P. Lecoq, C. Lourenço, M.T. Lucchini, L. Malgeri, M. Mannelli, A. Martelli, F. Meijers, S. Mersi, E. Meschi, F. Moortgat, S. Morovic, M. Mulders, H. Neugebauer, S. Orfanelli, L. Orsini, L. Pape, E. Perez, M. Peruzzi, A. Petrilli, G. Petrucciani, A. Pfeiffer, M. Pierini, A. Racz, T. Reis, G. Rolandi\cmsAuthorMark46, M. Rovere, M. Ruan, H. Sakulin, J.B. Sauvan, C. Schäfer, C. Schwick, M. Seidel, A. Sharma, P. Silva, M. Simon, P. Sphicas\cmsAuthorMark47, J. Steggemann, M. Stoye, Y. Takahashi, M. Tosi, D. Treille, A. Triossi, A. Tsirou, V. Veckalns\cmsAuthorMark48, G.I. Veres\cmsAuthorMark21, N. Wardle, H.K. Wöhri, A. Zagozdzinska\cmsAuthorMark36, W.D. Zeuner
\cmsinstskipPaul Scherrer Institut, Villigen, Switzerland
W. Bertl, K. Deiters, W. Erdmann, R. Horisberger, Q. Ingram, H.C. Kaestli, D. Kotlinski, U. Langenegger, T. Rohe
\cmsinstskipInstitute for Particle Physics, ETH Zurich, Zurich, Switzerland
F. Bachmair, L. Bäni, L. Bianchini, B. Casal, G. Dissertori, M. Dittmar, M. Donegà, P. Eller, C. Grab, C. Heidegger, D. Hits, J. Hoss, G. Kasieczka, P. Lecomte${}^{\textrm{\textdagger}}$, W. Lustermann, B. Mangano, M. Marionneau, P. Martinez Ruiz del Arbol, M. Masciovecchio, M.T. Meinhard, D. Meister, F. Micheli, P. Musella, F. Nessi-Tedaldi, F. Pandolfi, J. Pata, F. Pauss, G. Perrin, L. Perrozzi, M. Quittnat, M. Rossini, M. Schönenberger, A. Starodumov\cmsAuthorMark49, V.R. Tavolaro, K. Theofilatos, R. Wallny
\cmsinstskipUniversität Zürich, Zurich, Switzerland
T.K. Aarrestad, C. Amsler\cmsAuthorMark50, L. Caminada, M.F. Canelli, A. De Cosa, C. Galloni, A. Hinzmann, T. Hreus, B. Kilminster, C. Lange, J. Ngadiuba, D. Pinna, G. Rauco, P. Robmann, D. Salerno, Y. Yang
\cmsinstskipNational Central University, Chung-Li, Taiwan
V. Candelise, T.H. Doan, Sh. Jain, R. Khurana, M. Konyushikhin, C.M. Kuo, W. Lin, Y.J. Lu, A. Pozdnyakov, S.S. Yu
\cmsinstskipNational Taiwan University (NTU), Taipei, Taiwan
Arun Kumar, P. Chang, Y.H. Chang, Y.W. Chang, Y. Chao, K.F. Chen, P.H. Chen, C. Dietz, F. Fiori, W.-S. Hou, Y. Hsiung, Y.F. Liu, R.-S. Lu, M. Miñano Moya, E. Paganis, A. Psallidas, J.f. Tsai, Y.M. Tzeng
\cmsinstskipChulalongkorn University, Faculty of Science, Department of Physics, Bangkok, Thailand
B. Asavapibhop, G. Singh, N. Srimanobhas, N. Suwonjandee
\cmsinstskipCukurova University, Adana, Turkey
A. Adiguzel, M.N. Bakirci\cmsAuthorMark51, S. Cerci\cmsAuthorMark52, S. Damarseckin, Z.S. Demiroglu, C. Dozen, I. Dumanoglu, S. Girgis, G. Gokbulut, Y. Guler, E. Gurpinar, I. Hos, E.E. Kangal\cmsAuthorMark53, O. Kara, A. Kayis Topaksu, U. Kiminsu, M. Oglakci, G. Onengut\cmsAuthorMark54, K. Ozdemir\cmsAuthorMark55, B. Tali\cmsAuthorMark52, S. Turkcapar, I.S. Zorbakir, C. Zorbilmez
\cmsinstskipMiddle East Technical University, Physics Department, Ankara, Turkey
B. Bilin, S. Bilmis, B. Isildak\cmsAuthorMark56, G. Karapinar\cmsAuthorMark57, M. Yalvac, M. Zeyrek
\cmsinstskipBogazici University, Istanbul, Turkey
E. Gülmez, M. Kaya\cmsAuthorMark58, O. Kaya\cmsAuthorMark59, E.A. Yetkin\cmsAuthorMark60, T. Yetkin\cmsAuthorMark61
\cmsinstskipIstanbul Technical University, Istanbul, Turkey
A. Cakir, K. Cankocak, S. Sen\cmsAuthorMark62
\cmsinstskipInstitute for Scintillation Materials of National Academy of Science of Ukraine, Kharkov, Ukraine
B. Grynyov
\cmsinstskipNational Scientific Center, Kharkov Institute of Physics and Technology, Kharkov, Ukraine
L. Levchuk, P. Sorokin
\cmsinstskipUniversity of Bristol, Bristol, United Kingdom
R. Aggleton, F. Ball, L. Beck, J.J. Brooke, D. Burns, E. Clement, D. Cussans, H. Flacher, J. Goldstein, M. Grimes, G.P. Heath, H.F. Heath, J. Jacob, L. Kreczko, C. Lucas, D.M. Newbold\cmsAuthorMark63, S. Paramesvaran, A. Poll, T. Sakuma, S. Seif El Nasr-storey, D. Smith, V.J. Smith
\cmsinstskipRutherford Appleton Laboratory, Didcot, United Kingdom
K.W. Bell, A. Belyaev\cmsAuthorMark64, C. Brew, R.M. Brown, L. Calligaris, D. Cieri, D.J.A. Cockerill, J.A. Coughlan, K. Harder, S. Harper, E. Olaiya, D. Petyt, C.H. Shepherd-Themistocleous, A. Thea, I.R. Tomalin, T. Williams
\cmsinstskipImperial College, London, United Kingdom
M. Baber, R. Bainbridge, O. Buchmuller, A. Bundock, D. Burton, S. Casasso, M. Citron, D. Colling, L. Corpe, P. Dauncey, G. Davies, A. De Wit, M. Della Negra, R. Di Maria, P. Dunne, A. Elwood, D. Futyan, Y. Haddad, G. Hall, G. Iles, T. James, R. Lane, C. Laner, R. Lucas\cmsAuthorMark63, L. Lyons, A.-M. Magnan, S. Malik, L. Mastrolorenzo, J. Nash, A. Nikitenko\cmsAuthorMark49, J. Pela, B. Penning, M. Pesaresi, D.M. Raymond, A. Richards, A. Rose, C. Seez, S. Summers, A. Tapper, K. Uchida, M. Vazquez Acosta\cmsAuthorMark65, T. Virdee\cmsAuthorMark14, J. Wright, S.C. Zenz
\cmsinstskipBrunel University, Uxbridge, United Kingdom
J.E. Cole, P.R. Hobson, A. Khan, P. Kyberd, D. Leslie, I.D. Reid, P. Symonds, L. Teodorescu, M. Turner
\cmsinstskipBaylor University, Waco, USA
A. Borzou, K. Call, J. Dittmann, K. Hatakeyama, H. Liu, N. Pastika
\cmsinstskipThe University of Alabama, Tuscaloosa, USA
O. Charaf, S.I. Cooper, C. Henderson, P. Rumerio, C. West
\cmsinstskipBoston University, Boston, USA
D. Arcaro, A. Avetisyan, T. Bose, D. Gastler, D. Rankin, C. Richardson, J. Rohlf, L. Sulak, D. Zou
\cmsinstskipBrown University, Providence, USA
G. Benelli, E. Berry, D. Cutts, A. Garabedian, J. Hakala, U. Heintz, J.M. Hogan, O. Jesus, E. Laird, G. Landsberg, Z. Mao, M. Narain, S. Piperov, S. Sagir, E. Spencer, R. Syarif
\cmsinstskipUniversity of California, Davis, Davis, USA
R. Breedon, G. Breto, D. Burns, M. Calderon De La Barca Sanchez, S. Chauhan, M. Chertok, J. Conway, R. Conway, P.T. Cox, R. Erbacher, C. Flores, G. Funk, M. Gardner, W. Ko, R. Lander, C. Mclean, M. Mulhearn, D. Pellett, J. Pilot, F. Ricci-Tam, S. Shalhout, J. Smith, M. Squires, D. Stolp, M. Tripathi, S. Wilbur, R. Yohay
\cmsinstskipUniversity of California, Los Angeles, USA
R. Cousins, P. Everaerts, A. Florent, J. Hauser, M. Ignatenko, D. Saltzberg, E. Takasugi, V. Valuev, M. Weber
\cmsinstskipUniversity of California, Riverside, Riverside, USA
K. Burt, R. Clare, J. Ellison, J.W. Gary, G. Hanson, J. Heilman, P. Jandir, E. Kennedy, F. Lacroix, O.R. Long, M. Olmedo Negrete, M.I. Paneva, A. Shrinivas, H. Wei, S. Wimpenny, B. R. Yates
\cmsinstskipUniversity of California, San Diego, La Jolla, USA
J.G. Branson, G.B. Cerati, S. Cittolin, M. Derdzinski, R. Gerosa, A. Holzner, D. Klein, V. Krutelyov, J. Letts, I. Macneill, D. Olivito, S. Padhi, M. Pieri, M. Sani, V. Sharma, S. Simon, M. Tadel, A. Vartak, S. Wasserbaech\cmsAuthorMark66, C. Welke, J. Wood, F. Würthwein, A. Yagil, G. Zevi Della Porta
\cmsinstskipUniversity of California, Santa Barbara - Department of Physics, Santa Barbara, USA
R. Bhandari, J. Bradmiller-Feld, C. Campagnari, A. Dishaw, V. Dutta, K. Flowers, M. Franco Sevilla, P. Geffert, C. George, F. Golf, L. Gouskos, J. Gran, R. Heller, J. Incandela, N. Mccoll, S.D. Mullin, A. Ovcharova, J. Richman, D. Stuart, I. Suarez, J. Yoo
\cmsinstskipCalifornia Institute of Technology, Pasadena, USA
D. Anderson, A. Apresyan, J. Bendavid, A. Bornheim, J. Bunn, Y. Chen, J. Duarte, J.M. Lawhorn, A. Mott, H.B. Newman, C. Pena, M. Spiropulu, J.R. Vlimant, S. Xie, R.Y. Zhu
\cmsinstskipCarnegie Mellon University, Pittsburgh, USA
M.B. Andrews, V. Azzolini, T. Ferguson, M. Paulini, J. Russ, M. Sun, H. Vogel, I. Vorobiev
\cmsinstskipUniversity of Colorado Boulder, Boulder, USA
J.P. Cumalat, W.T. Ford, F. Jensen, A. Johnson, M. Krohn, T. Mulholland, K. Stenson, S.R. Wagner
\cmsinstskipCornell University, Ithaca, USA
J. Alexander, J. Chaves, J. Chu, S. Dittmer, K. Mcdermott, N. Mirman, G. Nicolas Kaufman, J.R. Patterson, A. Rinkevicius, A. Ryd, L. Skinnari, L. Soffi, S.M. Tan, Z. Tao, J. Thom, J. Tucker, P. Wittich, M. Zientek
\cmsinstskipFairfield University, Fairfield, USA
D. Winn
\cmsinstskipFermi National Accelerator Laboratory, Batavia, USA
S. Abdullin, M. Albrow, G. Apollinari, S. Banerjee, L.A.T. Bauerdick, A. Beretvas, J. Berryhill, P.C. Bhat, G. Bolla, K. Burkett, J.N. Butler, H.W.K. Cheung, F. Chlebana, S. Cihangir${}^{\textrm{\textdagger}}$, M. Cremonesi, V.D. Elvira, I. Fisk, J. Freeman, E. Gottschalk, L. Gray, D. Green, S. Grünendahl, O. Gutsche, D. Hare, R.M. Harris, S. Hasegawa, J. Hirschauer, Z. Hu, B. Jayatilaka, S. Jindariani, M. Johnson, U. Joshi, B. Klima, B. Kreis, S. Lammel, J. Linacre, D. Lincoln, R. Lipton, T. Liu, R. Lopes De Sá, J. Lykken, K. Maeshima, N. Magini, J.M. Marraffino, S. Maruyama, D. Mason, P. McBride, P. Merkel, S. Mrenna, S. Nahn, C. Newman-Holmes${}^{\textrm{\textdagger}}$, V. O’Dell, K. Pedro, O. Prokofyev, G. Rakness, L. Ristori, E. Sexton-Kennedy, A. Soha, W.J. Spalding, L. Spiegel, S. Stoynev, N. Strobbe, L. Taylor, S. Tkaczyk, N.V. Tran, L. Uplegger, E.W. Vaandering, C. Vernieri, M. Verzocchi, R. Vidal, M. Wang, H.A. Weber, A. Whitbeck
\cmsinstskipUniversity of Florida, Gainesville, USA
D. Acosta, P. Avery, P. Bortignon, D. Bourilkov, A. Brinkerhoff, A. Carnes, M. Carver, D. Curry, S. Das, R.D. Field, I.K. Furic, J. Konigsberg, A. Korytov, P. Ma, K. Matchev, H. Mei, P. Milenovic\cmsAuthorMark67, G. Mitselmakher, D. Rank, L. Shchutska, D. Sperka, L. Thomas, J. Wang, S. Wang, J. Yelton
\cmsinstskipFlorida International University, Miami, USA
S. Linn, P. Markowitz, G. Martinez, J.L. Rodriguez
\cmsinstskipFlorida State University, Tallahassee, USA
A. Ackert, J.R. Adams, T. Adams, A. Askew, S. Bein, B. Diamond, S. Hagopian, V. Hagopian, K.F. Johnson, A. Khatiwada, H. Prosper, A. Santra, M. Weinberg
\cmsinstskipFlorida Institute of Technology, Melbourne, USA
M.M. Baarmand, V. Bhopatkar, S. Colafranceschi\cmsAuthorMark68, M. Hohlmann, D. Noonan, T. Roy, F. Yumiceva
\cmsinstskipUniversity of Illinois at Chicago (UIC), Chicago, USA
M.R. Adams, L. Apanasevich, D. Berry, R.R. Betts, I. Bucinskaite, R. Cavanaugh, O. Evdokimov, L. Gauthier, C.E. Gerber, D.J. Hofman, P. Kurt, C. O’Brien, I.D. Sandoval Gonzalez, P. Turner, N. Varelas, H. Wang, Z. Wu, M. Zakaria, J. Zhang
\cmsinstskipThe University of Iowa, Iowa City, USA
B. Bilki\cmsAuthorMark69, W. Clarida, K. Dilsiz, S. Durgut, R.P. Gandrajula, M. Haytmyradov, V. Khristenko, J.-P. Merlo, H. Mermerkaya\cmsAuthorMark70, A. Mestvirishvili, A. Moeller, J. Nachtman, H. Ogul, Y. Onel, F. Ozok\cmsAuthorMark71, A. Penzo, C. Snyder, E. Tiras, J. Wetzel, K. Yi
\cmsinstskipJohns Hopkins University, Baltimore, USA
I. Anderson, B. Blumenfeld, A. Cocoros, N. Eminizer, D. Fehling, L. Feng, A.V. Gritsan, P. Maksimovic, M. Osherson, J. Roskes, U. Sarica, M. Swartz, M. Xiao, Y. Xin, C. You
\cmsinstskipThe University of Kansas, Lawrence, USA
A. Al-bataineh, P. Baringer, A. Bean, S. Boren, J. Bowen, C. Bruner, J. Castle, L. Forthomme, R.P. Kenny III, A. Kropivnitskaya, D. Majumder, W. Mcbrayer, M. Murray, S. Sanders, R. Stringer, J.D. Tapia Takaki, Q. Wang
\cmsinstskipKansas State University, Manhattan, USA
A. Ivanov, K. Kaadze, S. Khalil, M. Makouski, Y. Maravin, A. Mohammadi, L.K. Saini, N. Skhirtladze, S. Toda
\cmsinstskipLawrence Livermore National Laboratory, Livermore, USA
F. Rebassoo, D. Wright
\cmsinstskipUniversity of Maryland, College Park, USA
C. Anelli, A. Baden, O. Baron, A. Belloni, B. Calvert, S.C. Eno, C. Ferraioli, J.A. Gomez, N.J. Hadley, S. Jabeen, R.G. Kellogg, T. Kolberg, J. Kunkle, Y. Lu, A.C. Mignerey, Y.H. Shin, A. Skuja, M.B. Tonjes, S.C. Tonwar
\cmsinstskipMassachusetts Institute of Technology, Cambridge, USA
D. Abercrombie, B. Allen, A. Apyan, R. Barbieri, A. Baty, R. Bi, K. Bierwagen, S. Brandt, W. Busza, I.A. Cali, Z. Demiragli, L. Di Matteo, G. Gomez Ceballos, M. Goncharov, D. Hsu, Y. Iiyama, G.M. Innocenti, M. Klute, D. Kovalskyi, K. Krajczar, Y.S. Lai, Y.-J. Lee, A. Levin, P.D. Luckey, A.C. Marini, C. Mcginn, C. Mironov, S. Narayanan, X. Niu, C. Paus, C. Roland, G. Roland, J. Salfeld-Nebgen, G.S.F. Stephans, K. Sumorok, K. Tatar, M. Varma, D. Velicanu, J. Veverka, J. Wang, T.W. Wang, B. Wyslouch, M. Yang, V. Zhukova
\cmsinstskipUniversity of Minnesota, Minneapolis, USA
A.C. Benvenuti, R.M. Chatterjee, A. Evans, A. Finkel, A. Gude, P. Hansen, S. Kalafut, S.C. Kao, Y. Kubota, Z. Lesko, J. Mans, S. Nourbakhsh, N. Ruckstuhl, R. Rusack, N. Tambe, J. Turkewitz
\cmsinstskipUniversity of Mississippi, Oxford, USA
J.G. Acosta, S. Oliveros
\cmsinstskipUniversity of Nebraska-Lincoln, Lincoln, USA
E. Avdeeva, R. Bartek, K. Bloom, D.R. Claes, A. Dominguez, C. Fangmeier, R. Gonzalez Suarez, R. Kamalieddin, I. Kravchenko, A. Malta Rodrigues, F. Meier, J. Monroy, J.E. Siado, G.R. Snow, B. Stieger
\cmsinstskipState University of New York at Buffalo, Buffalo, USA
M. Alyari, J. Dolen, J. George, A. Godshalk, C. Harrington, I. Iashvili, J. Kaisen, A. Kharchilava, A. Kumar, A. Parker, S. Rappoccio, B. Roozbahani
\cmsinstskipNortheastern University, Boston, USA
G. Alverson, E. Barberis, D. Baumgartel, A. Hortiangtham, B. Knapp, A. Massironi, D.M. Morse, D. Nash, T. Orimoto, R. Teixeira De Lima, D. Trocino, R.-J. Wang, D. Wood
\cmsinstskipNorthwestern University, Evanston, USA
S. Bhattacharya, K.A. Hahn, A. Kubik, A. Kumar, J.F. Low, N. Mucia, N. Odell, B. Pollack, M.H. Schmitt, K. Sung, M. Trovato, M. Velasco
\cmsinstskipUniversity of Notre Dame, Notre Dame, USA
N. Dev, M. Hildreth, K. Hurtado Anampa, C. Jessop, D.J. Karmgard, N. Kellams, K. Lannon, N. Marinelli, F. Meng, C. Mueller, Y. Musienko\cmsAuthorMark37, M. Planer, A. Reinsvold, R. Ruchti, G. Smith, S. Taroni, M. Wayne, M. Wolf, A. Woodard
\cmsinstskipThe Ohio State University, Columbus, USA
J. Alimena, L. Antonelli, J. Brinson, B. Bylsma, L.S. Durkin, S. Flowers, B. Francis, A. Hart, C. Hill, R. Hughes, W. Ji, B. Liu, W. Luo, D. Puigh, B.L. Winer, H.W. Wulsin
\cmsinstskipPrinceton University, Princeton, USA
S. Cooperstein, O. Driga, P. Elmer, J. Hardenbrook, P. Hebda, D. Lange, J. Luo, D. Marlow, T. Medvedeva, K. Mei, M. Mooney, J. Olsen, C. Palmer, P. Piroué, D. Stickland, C. Tully, A. Zuranski
\cmsinstskipUniversity of Puerto Rico, Mayaguez, USA
S. Malik
\cmsinstskipPurdue University, West Lafayette, USA
A. Barker, V.E. Barnes, S. Folgueras, L. Gutay, M.K. Jha, M. Jones, A.W. Jung, K. Jung, D.H. Miller, N. Neumeister, X. Shi, J. Sun, A. Svyatkovskiy, F. Wang, W. Xie, L. Xu
\cmsinstskipPurdue University Calumet, Hammond, USA
N. Parashar, J. Stupak
\cmsinstskipRice University, Houston, USA
A. Adair, B. Akgun, Z. Chen, K.M. Ecklund, F.J.M. Geurts, M. Guilbaud, W. Li, B. Michlin, M. Northup, B.P. Padley, R. Redjimi, J. Roberts, J. Rorie, Z. Tu, J. Zabel
\cmsinstskipUniversity of Rochester, Rochester, USA
B. Betchart, A. Bodek, P. de Barbaro, R. Demina, Y.t. Duh, T. Ferbel, M. Galanti, A. Garcia-Bellido, J. Han, O. Hindrichs, A. Khukhunaishvili, K.H. Lo, P. Tan, M. Verzetti
\cmsinstskipRutgers, The State University of New Jersey, Piscataway, USA
J.P. Chou, E. Contreras-Campana, Y. Gershtein, T.A. Gómez Espinosa, E. Halkiadakis, M. Heindl, D. Hidas, E. Hughes, S. Kaplan, R. Kunnawalkam Elayavalli, S. Kyriacou, A. Lath, K. Nash, H. Saka, S. Salur, S. Schnetzer, D. Sheffield, S. Somalwar, R. Stone, S. Thomas, P. Thomassen, M. Walker
\cmsinstskipUniversity of Tennessee, Knoxville, USA
M. Foerster, J. Heideman, G. Riley, K. Rose, S. Spanier, K. Thapa
\cmsinstskipTexas A&M University, College Station, USA
O. Bouhali\cmsAuthorMark72, A. Celik, M. Dalchenko, M. De Mattia, A. Delgado, S. Dildick, R. Eusebi, J. Gilmore, T. Huang, E. Juska, T. Kamon\cmsAuthorMark73, R. Mueller, Y. Pakhotin, R. Patel, A. Perloff, L. Perniè, D. Rathjens, A. Rose, A. Safonov, A. Tatarinov, K.A. Ulmer
\cmsinstskipTexas Tech University, Lubbock, USA
N. Akchurin, C. Cowden, J. Damgov, C. Dragoiu, P.R. Dudero, J. Faulkner, S. Kunori, K. Lamichhane, S.W. Lee, T. Libeiro, S. Undleeb, I. Volobouev, Z. Wang
\cmsinstskipVanderbilt University, Nashville, USA
A.G. Delannoy, S. Greene, A. Gurrola, R. Janjam, W. Johns, C. Maguire, A. Melo, H. Ni, P. Sheldon, S. Tuo, J. Velkovska, Q. Xu
\cmsinstskipUniversity of Virginia, Charlottesville, USA
M.W. Arenton, P. Barria, B. Cox, J. Goodell, R. Hirosky, A. Ledovskoy, H. Li, C. Neu, T. Sinthuprasith, Y. Wang, E. Wolfe, F. Xia
\cmsinstskipWayne State University, Detroit, USA
C. Clarke, R. Harr, P.E. Karchin, P. Lamichhane, J. Sturdy
\cmsinstskipUniversity of Wisconsin - Madison, Madison, WI, USA
D.A. Belknap, S. Dasu, L. Dodd, S. Duric, B. Gomber, M. Grothe, M. Herndon, A. Hervé, P. Klabbers, A. Lanaro, A. Levine, K. Long, R. Loveless, I. Ojalvo, T. Perry, G.A. Pierro, G. Polese, T. Ruggles, A. Savin, A. Sharma, N. Smith, W.H. Smith, D. Taylor, N. Woods
\cmsinstskip†: Deceased
1: Also at Vienna University of Technology, Vienna, Austria
2: Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China
3: Also at Institut Pluridisciplinaire Hubert Curien, Université de Strasbourg, Université de Haute Alsace Mulhouse, CNRS/IN2P3, Strasbourg, France
4: Also at Universidade Estadual de Campinas, Campinas, Brazil
5: Also at Universidade Federal de Pelotas, Pelotas, Brazil
6: Also at Université Libre de Bruxelles, Bruxelles, Belgium
7: Also at Deutsches Elektronen-Synchrotron, Hamburg, Germany
8: Also at Joint Institute for Nuclear Research, Dubna, Russia
9: Also at Suez University, Suez, Egypt
10: Now at British University in Egypt, Cairo, Egypt
11: Also at Ain Shams University, Cairo, Egypt
12: Now at Helwan University, Cairo, Egypt
13: Also at Université de Haute Alsace, Mulhouse, France
14: Also at CERN, European Organization for Nuclear Research, Geneva, Switzerland
15: Also at Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow, Russia
16: Also at Tbilisi State University, Tbilisi, Georgia
17: Also at RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany
18: Also at University of Hamburg, Hamburg, Germany
19: Also at Brandenburg University of Technology, Cottbus, Germany
20: Also at Institute of Nuclear Research ATOMKI, Debrecen, Hungary
21: Also at MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös Loránd University, Budapest, Hungary
22: Also at University of Debrecen, Debrecen, Hungary
23: Also at Indian Institute of Science Education and Research, Bhopal, India
24: Also at Institute of Physics, Bhubaneswar, India
25: Also at University of Visva-Bharati, Santiniketan, India
26: Also at University of Ruhuna, Matara, Sri Lanka
27: Also at Isfahan University of Technology, Isfahan, Iran
28: Also at University of Tehran, Department of Engineering Science, Tehran, Iran
29: Also at Yazd University, Yazd, Iran
30: Also at Plasma Physics Research Center, Science and Research Branch, Islamic Azad University, Tehran, Iran
31: Also at Università degli Studi di Siena, Siena, Italy
32: Also at Purdue University, West Lafayette, USA
33: Also at International Islamic University of Malaysia, Kuala Lumpur, Malaysia
34: Also at Malaysian Nuclear Agency, MOSTI, Kajang, Malaysia
35: Also at Consejo Nacional de Ciencia y Tecnología, Mexico city, Mexico
36: Also at Warsaw University of Technology, Institute of Electronic Systems, Warsaw, Poland
37: Also at Institute for Nuclear Research, Moscow, Russia
38: Now at National Research Nuclear University ’Moscow Engineering Physics Institute’ (MEPhI), Moscow, Russia
39: Also at St. Petersburg State Polytechnical University, St. Petersburg, Russia
40: Also at University of Florida, Gainesville, USA
41: Also at P.N. Lebedev Physical Institute, Moscow, Russia
42: Also at California Institute of Technology, Pasadena, USA
43: Also at Budker Institute of Nuclear Physics, Novosibirsk, Russia
44: Also at Faculty of Physics, University of Belgrade, Belgrade, Serbia
45: Also at INFN Sezione di Roma; Università di Roma, Roma, Italy
46: Also at Scuola Normale e Sezione dell’INFN, Pisa, Italy
47: Also at National and Kapodistrian University of Athens, Athens, Greece
48: Also at Riga Technical University, Riga, Latvia
49: Also at Institute for Theoretical and Experimental Physics, Moscow, Russia
50: Also at Albert Einstein Center for Fundamental Physics, Bern, Switzerland
51: Also at Gaziosmanpasa University, Tokat, Turkey
52: Also at Adiyaman University, Adiyaman, Turkey
53: Also at Mersin University, Mersin, Turkey
54: Also at Cag University, Mersin, Turkey
55: Also at Piri Reis University, Istanbul, Turkey
56: Also at Ozyegin University, Istanbul, Turkey
57: Also at Izmir Institute of Technology, Izmir, Turkey
58: Also at Marmara University, Istanbul, Turkey
59: Also at Kafkas University, Kars, Turkey
60: Also at Istanbul Bilgi University, Istanbul, Turkey
61: Also at Yildiz Technical University, Istanbul, Turkey
62: Also at Hacettepe University, Ankara, Turkey
63: Also at Rutherford Appleton Laboratory, Didcot, United Kingdom
64: Also at School of Physics and Astronomy, University of Southampton, Southampton, United Kingdom
65: Also at Instituto de Astrofísica de Canarias, La Laguna, Spain
66: Also at Utah Valley University, Orem, USA
67: Also at University of Belgrade, Faculty of Physics and Vinca Institute of Nuclear Sciences, Belgrade, Serbia
68: Also at Facoltà Ingegneria, Università di Roma, Roma, Italy
69: Also at Argonne National Laboratory, Argonne, USA
70: Also at Erzincan University, Erzincan, Turkey
71: Also at Mimar Sinan University, Istanbul, Istanbul, Turkey
72: Also at Texas A&M University at Qatar, Doha, Qatar
73: Also at Kyungpook National University, Daegu, Korea |
Charm production near threshold in $p$A-interactions at 70 GeV
A. Afonin${}^{1}$, E. Ardashev${}^{1}$, V. Balandin${}^{2}$, G. Bogdanova${}^{3}$,
M. Bogolyubsky${}^{1}$, O. Gavrishchuk${}^{2}$, S. Golovnia${}^{1}$,
S. Gorokhov${}^{1}$, V. Golovkin${}^{1}$,
D. Karmanov${}^{3}$, A. Kiryakov${}^{1}$,
${}^{2}$, V. Kramarenko${}^{3}$, A. Leflat${}^{3}$,
Yu. Petukhov${}^{2}$, A. Pleskach${}^{1}$, V. Popov${}^{3}$,
V. Riadovikov${}^{1}$, V. Ronjin${}^{1}$, I. Rufanov${}^{2}$,
Yu. Tsyupa${}^{1}$, V. Volkov${}^{3}$,
A. Vorobiev${}^{1}$, A. Voronin${}^{3}$, A. Yukaev${}^{2}$, V. Zapolsky${}^{1}$, E. Zverev${}^{3}$
${}^{1}$ Institute for High Energy Physics, Sq. Nauki 1, Protvino, Moscow region, 142281 Russia
${}^{2}$ Joint Institute for Nuclear Research, Joliot-Curie 6, Dubna, Moscow region, 141980 Russia
${}^{3}$ M.V. Lomonosov MSU, SINP MSU, Leninskie gory, Moscow 119991, Russia
Abstract:
The results of the SERP-E-184 experiment at the U-70 accelerator (IHEP, Protvino) are presented. Interactions of the 70 GeV proton beam with C, Si and Pb targets were studied to detect decays of the charmed $D^{0}$, $\overline{D}^{0}$, $D^{+}$, $D^{-}$ mesons and the $\Lambda_{c}^{+}$ baryon near their production threshold. The measured lifetimes and masses show good agreement with the PDG data. The inclusive cross sections of charm production and their A-dependences were obtained. The yields of these particles are compared with theoretical predictions and with the data of other experiments. The measured total open charm production cross section ($\sigma_{\mathrm{tot}}(c\overline{c})$ = 7.1 $\pm$ 2.3(stat) $\pm$ 1.4(syst) $\mu$b/nucleon) at the collision c.m. energy $\sqrt{s}$ = 11.8 GeV is well above the QCD model predictions. The contributions of the different species of charmed particles to the total open charm production cross section in proton-nucleus interactions vary with energy.
1 Monte Carlo simulation and selection of events with charmed particles
The SERP-E-184 experiment “Investigation of mechanisms of the production of charmed particles in $p$A-interactions at 70 GeV and their decays” at the U-70 accelerator (IHEP, Protvino) was carried out at the SVD-2 (Spectrometer with Vertex Detector) setup [1]. The setup was built by the SVD collaboration, which includes IHEP (Protvino), JINR (Dubna) and SINP MSU (Moscow), to study charmed particle production in $pp$- and $p$A-interactions. Its main elements are a high-precision micro-strip vertex detector (MSVD) with an active target (AT) and a magnetic spectrometer. The AT contains five silicon detectors, each 300 $\mu$m thick with a 1-mm strip pitch, a Pb plate (220 $\mu$m thick) and a C plate (500 $\mu$m thick), arranged as Si-Si-Pb-Si-C-Si-Si. The tracking part of the MSVD consists of 10 Si detectors: four XY pairs and one XYUV quadruplet, where U and V are the oblique planes. The angular acceptance of the MSVD is $\pm$250 mrad. The spectrometer provides an effective mass resolution of $\sigma$ = 4.4 MeV/$c^{2}$ at the $K^{0}_{s}$ mass and 1.6 MeV/$c^{2}$ at the $\Lambda^{0}_{c}$ mass.
Monte Carlo (MC) events were generated with FRITIOF separately for charm-producing interactions on C, Si, and Pb. Decays of unstable particles were subsequently handled by the GEANT code. Specific decay modes were imposed for the charmed particles ($D^{0}\to K^{-}\pi^{+}$, $\overline{D}^{0}\to K^{+}\pi^{-}$, $D^{+}\to K^{-}\pi^{+}\pi^{+}$, $D^{-}\to K^{+}\pi^{-}\pi^{-}$, $\Lambda_{c}^{+}\to pK^{-}\pi^{+}$). The GEANT3.21 package was used to simulate the registration of $p$A-interactions. We analysed the simulated events in order to work out the selection criteria [2] for $D^{0}\to K^{-}\pi^{+}$ and $\overline{D}^{0}\to K^{+}\pi^{-}$. The effective mass spectra of the $K\pi$ system after applying all criteria were fitted by the sum of a straight line and a Gaussian function. The fit gives 1861 $\pm$ 7 MeV/$c^{2}$ for the $D^{0}$ ($\overline{D}^{0}$) mass and a signal-to-noise ratio of (51$\pm$17)/(38$\pm$13). The detection efficiency for $D^{0}/\overline{D}^{0}$ particles, taking into account the efficiency of visual inspection, is $\epsilon(D^{0}/\overline{D}^{0}$) = 0.036.
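The straight-line-plus-Gaussian parametrisation of the $K\pi$ mass spectrum can be sketched as follows. This is an illustrative reimplementation on synthetic data; the bin range, event counts and starting values are assumptions, not the collaboration's actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(seed=1)

# Synthetic K-pi effective-mass sample (GeV/c^2): a Gaussian signal near
# the D0 mass on top of a roughly flat background (illustrative numbers).
signal = rng.normal(loc=1.861, scale=0.0044, size=500)
background = rng.uniform(1.80, 1.92, size=2000)
counts, edges = np.histogram(np.concatenate([signal, background]),
                             bins=60, range=(1.80, 1.92))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(m, a, b, n_sig, mu, sigma):
    """Straight line plus Gaussian, as used for the D0 mass spectrum."""
    width = edges[1] - edges[0]
    gauss = n_sig * width / (sigma * np.sqrt(2.0 * np.pi)) \
        * np.exp(-0.5 * ((m - mu) / sigma) ** 2)
    return a + b * m + gauss

# Starting values: flat background of ~30 counts/bin, peak near 1.86 GeV.
p0 = [30.0, 0.0, 400.0, 1.86, 0.005]
popt, pcov = curve_fit(model, centers, counts, p0=p0)
mu_fit, sigma_fit = popt[3], abs(popt[4])
print(f"fitted mass = {1000*mu_fit:.1f} MeV/c^2, "
      f"resolution = {1000*sigma_fit:.1f} MeV/c^2")
```

The fitted mean and width recover the injected values within the statistical precision, mirroring how the mass and resolution quoted above are extracted from the real spectrum.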
For the reconstruction of the charged charmed mesons, we analysed the $K\pi\pi$ systems $D^{+}\to K^{-}\pi^{+}\pi^{+}$ and $D^{-}\to K^{+}\pi^{-}\pi^{-}$. The charged charmed mesons were found by analysing events with three-prong secondary vertices (the selection procedure is described in [3]). After parametrising the spectrum as the sum of a Gaussian function and a polynomial, we obtained 15.5 $\pm$ 5.6 (15.0 $\pm$ 4.7) signal events from $D^{+}$ ($D^{-}$) meson decays over a background of 16.6 $\pm$ 6.0 (8.7 $\pm$ 2.7) events. The fitted masses are $M$($D^{+}$) = 1874 $\pm$ 5 MeV/$c^{2}$ (PDG $-$ 1869.6) with signal-extraction efficiency $\epsilon$($D^{+}$) = 0.014, and $M$($D^{-}$) = 1864 $\pm$ 8 MeV/$c^{2}$ with $\epsilon$($D^{-}$) = 0.008.
The charmed $\Lambda_{c}^{+}$ baryon was analysed through the three-prong decay $\Lambda_{c}^{+}\to pK^{-}\pi^{+}$. Applying all the selection criteria [4] resulted in an effective mass spectrum with a signal-to-noise ratio of (21.6 $\pm$ 6.0)/(16.4 $\pm$ 4) and mass $M$($\Lambda_{c}^{+}$) = 2287 $\pm$ 4 MeV/$c^{2}$ (PDG $-$ 2286.5), with $\epsilon$($\Lambda_{c}^{+}$) = 0.011.
2 Cross sections for charmed particle production and their A-dependence
We have calculated the inclusive cross section for each charmed particle $i$ ($i$ = $D^{0}$, $\overline{D}^{0}$, $D^{\pm}$ or $\Lambda_{c}^{+}$) using the relation
$$N_{s}(i)=\frac{N_{0}\,\sigma(i)\,A^{\alpha}}{\sigma_{pp}\,A^{0.7}}\cdot\frac{B(i)\,\epsilon(i)}{K_{\mathrm{tr}}},\qquad\text{or equivalently}\qquad \ln\bigl(N_{s}(i)/C(i)\bigr)=\alpha\,\ln A+\ln\sigma(i),$$
where $C(i)=[N_{0}/(\sigma_{pp}\,A^{0.7})]\times[B(i)\,\epsilon(i)/K_{\mathrm{tr}}]$. Here $N_{s}(i)$ is the number of signal events for the $i$-th charmed particle produced in the given target, $N_{0}$ is the number of inelastic interactions in this target, and $\sigma(i)$ is the cross section for charmed particle production on a single nucleon of the target.
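The linearized form $\ln(N_{s}(i)/C(i)) = \alpha\,\ln A + \ln\sigma(i)$ can be solved for $\alpha$ and $\sigma(i)$ by a two-parameter least-squares fit over the three target materials. The sketch below uses hypothetical placeholder counts, not the measured SVD-2 values; they are chosen so that $N_{s}/C$ grows roughly linearly with A, i.e. $\alpha\approx 1$ as reported.

```python
import numpy as np

# Mass numbers of the active-target materials C, Si, Pb.
A = np.array([12.0, 28.0, 207.0])

# Hypothetical per-target signal counts N_s(i) and normalization
# constants C(i) -- placeholders for illustration only.
N_s = np.array([12.0, 28.0, 200.0])
C = np.array([1.0, 1.0, 1.0])

# Least-squares fit of ln(N_s/C) = alpha*ln(A) + ln(sigma);
# np.polyfit returns [slope, intercept] for deg=1.
alpha, ln_sigma = np.polyfit(np.log(A), np.log(N_s / C), deg=1)
sigma = np.exp(ln_sigma)  # per-nucleon cross section, in units of 1/C(i)
print(f"alpha = {alpha:.3f}, sigma = {sigma:.3f}")
```

With real data, each point carries a statistical error, so a weighted fit would be used; the two-parameter structure of the extraction is the same.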
The A-dependence of charmed particle production in $p$A-interactions on the AT materials (C, Si and Pb) is characterized by $\alpha$ close to 1 for all charmed particles [4], as shown in Fig. 1, $a$. For the most abundantly reconstructed mesons ($D^{0}$/$\overline{D}$${}^{0}$), the dependence of the $\alpha$-parameter on $x_{\mathrm{F}}$ and $p_{\mathrm{T}}^{2}$ is shown in Fig. 1, $b$ and $c$, respectively. The lines describe the MC events (FRITIOF).
Relative yields of the charmed particles are shown in Fig. 1, $d$, where $\bullet$ $-$ $D^{0}$, $\circ$ $-$ $\overline{D}^{0}$, $\blacksquare$ $-$ $D^{+}$, $\square$ $-$ $D^{-}$, $\blacktriangle$ $-$ $\Lambda^{+}_{c}$ [2, 3, 4] are the experimental points; the theoretical curves (labelled by particle) are taken from [6].
The total cross section of charmed particle production in $pp$ collisions at 70 GeV/$c$ is estimated as $\sigma_{\mathrm{tot}}$($c\overline{c}$) = 7.1 $\pm$ 2.3 (stat) $\pm$ 1.4 (syst) $\mu$b/nucleon [4].
3 Conclusion
Our study of charmed particle production yields the following basic results:
$\star$ $\sigma_{\mathrm{tot}}$($c\overline{c}$) = 7.1 $\pm$ 2.3 (stat) $\pm$ 1.4 (syst) $\mu$b/nucleon
at the c.m. energy $\sqrt{s}$ = 11.8 GeV, which is well above the QCD model predictions (Fig. 2, left panel);
$\star$ the contributions of $\sigma$($i$), where $i$ = $D^{0}$, $\overline{D}$${}^{0}$, $D^{+}$, $D^{-}$ and $\Lambda^{+}_{c}$, to the total cross section $\sigma$($c\overline{c}$) vary at lower collision energies (Fig. 1, $d$);
$\star$ the cross section for $\Lambda^{+}_{c}$ production at $\sqrt{s}$ > 30 GeV is inconsistent with $\sigma$($c\overline{c}$) for open charm production (Fig. 2, right panel); the measured values of $\sigma$($\Lambda^{+}_{c}$) are extraordinarily large in this region.
References
[1]
V.V. Avdeichikov et al.,
Multiparticle production processes in $pp$ interactions with high multiplicity at $E_{p}$ = 70 GeV, Proposal “Termalization” (in Russian), JINR Preprint P1-2004-190.
[2]
V.N. Ryadovikov (SVD-2 Collaboration), Detection of the production and decays of neutral charmed mesons in proton-nucleus interactions at 70 GeV with the SVD-2 facility, Phys. Atom. Nucl. 73 (2010) 1539.
[3]
V.N. Ryadovikov (SVD-2 Collaboration), Detection of charged charmed $D^{\pm}$ mesons in proton-nucleus interactions at 70 GeV, Phys. Atom. Nucl. 77 (2014) 716.
[4]
V.N. Ryadovikov (SVD-2 Collaboration), Measurement of the production cross section for charmed baryons in proton-nucleus interactions at 70 GeV, Phys. Atom. Nucl. 79 (2016) 144.
[5]
A. Andronic et al., Charmonium and open charm production in nuclear collisions at SPS/FAIR energies and the possible influence of a hot hadronic medium, Phys. Lett. B 659 (2008) 149.
[6]
A.N. Aleev et al., The $\Lambda_{c}^{+}$ baryon production in interactions of 40 GeV to 70 GeV neutrons with carbon nuclei, Z. Phys. C: Particles and Fields 23 (1984) 333.
Schemic Grothendieck rings and motivic rationality
Hans Schoutens
Department of Mathematics
City University of New York
365 Fifth Avenue
New York, NY 10016 (USA)
hschoutens@citytech.cuny.edu
(Date: November 22, 2020)
Abstract.
We propose a suitable substitute for the classical Grothendieck ring of an algebraically closed field, in which any quasi-projective scheme is represented, while maintaining its non-reduced structure. This yields a more subtle invariant, called the schemic Grothendieck ring, in which we can formulate a form of integration resembling Kontsevich’s motivic integration via arc schemes. In view of its more functorial properties, we can present a characteristic-free proof of the rationality of the geometric Igusa zeta series for certain hypersurfaces, thus generalizing the ground-breaking work on motivic integration by Denef and Loeser. The construction uses first-order formulae, and some infinitary versions, called formularies.
1991 Mathematics Subject Classification:
13D15;14C35;14G10;18F30
1. Introduction
The classical Grothendieck ring $K_{0}(k)$ of an algebraically closed field $k$ is defined as the quotient of the free Abelian group on varieties over $k$, modulo the relations ${\left[X\right]}-{\left[X^{\prime}\right]}$, if $X\cong X^{\prime}$, and
(1)
$${\left[X\right]}={\left[X-Y\right]}+{\left[Y\right]},$$
if $Y$ is a closed subvariety, for $Y,X,X^{\prime}$ varieties (=reduced, separated schemes of finite type over $k$). We will refer to the former relations as isomorphism relations and to the latter as scissor relations, in the sense that we “cut out $Y$ from $X$.” In this way, we can take the class not just of a variety, but of any constructible subset. Multiplication on $K_{0}(k)$ is then induced by the fiber product. In sum, the three main ingredients for building the Grothendieck ring are: an isomorphism relation, scissor relations, and a product. Only the first of these causes problems if one wants to generalize the construction of the Grothendieck ring to include not just classes of varieties, but also of finitely generated schemes (with their nilpotent structure). Put bluntly, we cannot cut a scheme in two, as there is no notion of a scheme-theoretic complement. To describe what this ought to be, we turn to model theory.
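As a standard illustration of the scissor relations at work (a textbook computation, not taken from this paper), write $\mathbb{L}:={\left[\mathbb{A}^{1}\right]}$ for the class of the affine line; cutting the point at infinity out of the projective line gives

```latex
\[
  [\mathbb{P}^1] \;=\; [\mathbb{P}^1\setminus\{\infty\}] + [\{\infty\}]
                 \;=\; [\mathbb{A}^1] + [\mathrm{pt}]
                 \;=\; \mathbb{L} + 1,
\]
% and iterating over the cells of
% \mathbb{P}^n = \mathbb{A}^n \sqcup \mathbb{P}^{n-1} yields
\[
  [\mathbb{P}^n] \;=\; \mathbb{L}^n + \mathbb{L}^{n-1} + \cdots + \mathbb{L} + 1.
\]
```

Note that both computations use only the scissor relations and the isomorphism relations, never the product.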
To model-theorists, constructible subsets are nothing else than definable subsets (in view of quantifier elimination for algebraically closed fields). Moreover, union and intersection correspond to disjunction and conjunction of the corresponding formulae. Therefore, instead of working with the theory of algebraically closed fields, we could repeat the previous construction over any first-order theory $\mathbf{T}$. However, now it is less obvious what it means for two formulae to be isomorphic. The most straightforward way is to introduce the notion of a definable isomorphism. However, even for the theory of algebraically closed fields, this yields a priori a different notion of isomorphism than the geometric one: whereas the former allows for arbitrary quantifier free formulae, the latter is given by polynomials, that is to say, of formulae of the form $y=f(x)$, which we will call explicit formulae. This observation suggests that we should consider not necessarily all first-order formulae, but also some restricted classes. This general construction is discussed in §3.
It is beneficial to develop the theory in a relative setup, so we work over an arbitrary affine, Noetherian scheme $X=\operatorname{Spec}A$, instead of just over an algebraically closed field. To construct a generalized Grothendieck ring for schemes, a so-called schemic Grothendieck ring, we need to settle on a first-order theory $\mathbf{T}$. The classical Grothendieck ring is obtained by taking for $\mathbf{T}$ the theory of algebraically closed fields that are also $A$-algebras. Alternatively, one could also have chosen the theory of all $A$-algebras without zero-divisors, and so, to include all schemes, we could simply replace this by the theory $\mathbf{T}_{A}$ of all $A$-algebras. Refinements lead to more relations and hence more manageable Grothendieck rings, the most important one of which is the theory $\mathbf{Art}_{A}$ of all Artinian local $A$-algebras (=algebras that have finite length as an $A$-module). Since we no longer have quantifier elimination, we also need to make a decision on which formulae we will allow, both for our definition of isomorphism as well as for the classes we want to study. Varieties, and more generally schemes, are given by equations, and so the family of formulae of the form $f_{1}=\dots=f_{s}=0$, with $f_{i}\in A[x]$, will provide the proper candidate for our generalization to schemes; we therefore call such formulae schemic. We show that there is a one-one correspondence between schemic formulae in $n$ free variables up to $\mathbf{T}_{A}$-equivalence (two formulae are equivalent modulo a theory if they define the same subsets in each model of the theory), and closed subschemes of ${\mathbb{A}_{X}^{n}}$ (Theorem 4.1). In fact, this result remains true when working in the theory $\mathbf{Art}_{A}$. As for isomorphisms, we may take either the class of explicit isomorphisms, or the larger class of schemic isomorphisms, both choices leading to the same schemic Grothendieck ring ${\mathbf{Gr}(X^{\operatorname{sch}})}$.
There is an obvious ring homomorphism ${\mathbf{Gr}(X^{\operatorname{sch}})}\to K_{0}(X)$ to the classical Grothendieck ring $K_{0}(X)$ of $X$. The main result, Theorem 5.4, is that two affine schemes of finite type over $X$ are isomorphic if and only if they have the same class in ${\mathbf{Gr}(X^{\operatorname{sch}})}$.
However, if we want more relations to hold in our Grothendieck ring, we need to enlarge the family of formulae, and work in the appropriate theory. In §6, we explain how in order to define the class of a non-affine scheme, we need to work modulo the theory $\mathbf{Art}_{A}$ in the larger class of pp-formulae, that is to say, existentially quantified schemic formulae. This is apparent already when dealing with basic open subsets: if $U=\operatorname{D}(f)$ is the basic open subset of ${\mathbb{A}_{X}^{n}}$, that is to say, $U=\operatorname{Spec}(A[x]_{f})$, then as an abstract affine scheme, it is given by the schemic formula $f(x)z=1$, where $z$ is an extra variable, whereas as an open subset of ${\mathbb{A}_{X}^{n}}$, it is given by the pp-formula $(\exists z)f(x)z=1$; the isomorphism between these two sets is only true modulo $\mathbf{Art}_{A}$, and is given by a (non-explicit) schemic formula. This leads to the pp-Grothendieck ring ${\mathbf{Gr}(X^{\operatorname{pp}})}$ of $X$, where instead of quantifier free formulae, we take Boolean combinations of pp-formulae, up to schemic isomorphisms. To any scheme of finite type over $X$, we can, by taking an open affine covering, associate a unique element in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
Unfortunately, the original scissor relation (1) is no longer valid. Indeed, the complement of an open $U\subseteq Y$ does not carry a unique closed scheme structure anymore. The solution is to take the limit over all these structures, yielding the formal completion $\widehat{Y}_{Z}$, where $Z$ is the underlying variety of $Y-U$. At the level of formulae (for simplicity, we assume $A=F$ is an algebraically closed field henceforth), the negation of the pp-formula defining a basic open subset is equivalent with an infinite disjunction of schemic formulae, having the property that in any Artinian $F$-algebra, the set defined by the disjunction is already definable by one of the disjuncts (but different models may require different disjuncts). Such an infinitary (whence non-first-order) disjunction will be termed a formulary. Replacing formulae by formularies in the definition of the pp-Grothendieck ring, yields the infinitary Grothendieck ring $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$, in which every formal scheme over $F$ is represented by the class of some formulary, resurrecting the old scissor relation into a new one: for any closed immersion $Z\subseteq Y$ of schemes of finite type over $X$, we have ${\left[Y\right]}={\left[Y-Z\right]}+{\left[\widehat{Y}_{Z}\right]}$. All this is explained in §8.
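To make the new scissor relation concrete, here is its minimal instance, for the origin on the affine line (a sketch consistent with the definitions above; the identification of the formal completion with $\operatorname{Spf}F[[x]]$ is a standard fact, not an excerpt from §8):

```latex
% New scissor relation for the closed immersion \{0\} \subseteq \mathbb{A}^1:
% the open complement is the punctured line, and the third term is the
% formal completion of \mathbb{A}^1 along the origin, \operatorname{Spf} F[[x]]:
\[
  [\mathbb{A}^1] \;=\; [\mathbb{A}^1\setminus\{0\}]
                 \;+\; [\widehat{\mathbb{A}^1}_{\{0\}}].
\]
% Only after passing to the formal Grothendieck ring, where
% [\widehat{\mathbb{A}^1}_{\{0\}}] is identified with [\{0\}] = 1, does this
% reduce to the classical relation (1).
```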
There is one more variant that will be considered here, called the formal Grothendieck ring ${\mathbf{Gr}}(F^{\operatorname{form}})$, in which we revert to the reduced situation by factoring out the ideal generated by all ${\left[\widehat{Y}_{Z}\right]}-{\left[Z\right]}$ for all closed immersions $Z\subseteq Y$. However, we will only work in this latter quotient (in which any two schemes with the same underlying variety have the same class) after we have taken arcs (see below). This does make a difference, as can be seen already on easy examples (Table 1). The advantage is that we get back the original scissor relation (1), which makes it easier to invoke inductive arguments when proving rationality of the motivic Igusa zeta series (to be discussed below). The relation between all these Grothendieck rings is given by the following (ring) homomorphisms
(2)
$${\mathbf{Gr}(F^{\operatorname{sch}})}\to{\mathbf{Gr}(F^{\operatorname{pp}})}\to\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})\to{\mathbf{Gr}}(F^{\operatorname{form}})\to{\mathbf{Gr}(F^{\operatorname{var}})}.$$
To discuss our main application, the motivic rationality of the geometric Igusa zeta series, we introduce a weak version of motivic integration in §7. For any Artinian $F$-algebra $R$, and any affine scheme $X$ over $F$, we define the arc scheme $\nabla_{\!R}X$ of $X$ along $\operatorname{Spec}R$ as the scheme whose $S$-rational points correspond to the $R\otimes_{F}S$-rational points of $X$, for any Artinian $F$-algebra $S$. This generalizes the truncated arc space of a variety, which is obtained by taking $R=F[\xi]/\xi^{n}F[\xi]$ and ignoring the nilpotent structure. The arc integral $\int Z\ dX$ is then defined as the class ${\left[\nabla_{\!Z}X\right]}$ in ${\mathbf{Gr}(F^{\operatorname{pp}})}$, and the main result is that it depends only on the classes of $X$ and $Z$ (unlike in the classical case). The geometric Igusa zeta series of $X$ along the germ $(Y,P)$ is then defined as the formal power series
$$\operatorname{Igu}^{(Y,P)}_{X}(t):=\sum_{n}\left(\int J_{P}^{n}Y\ dX\right)t^{n}$$
in ${\mathbf{Gr}(F^{\operatorname{pp}})}[[t]]$, where the $n$-th jet of a germ $(Y,P)$ is defined as the Artinian scheme $J_{P}^{n}Y:=\operatorname{Spec}({\mathcal{O}}_{Y}/\mathfrak{m}^{n}_{P})$, with $\mathfrak{m}_{P}$ the maximal ideal of the closed point $P$. For the remainder of this introduction, I will assume that $(Y,P)$ is the germ of a point on a line, and simply write $\operatorname{Igu}^{\operatorname{geom}}_{X}$ for this zeta series. Under the homomorphism from (2), this power series becomes the Denef-Loeser geometric Igusa zeta series. The aim is to recover within the new framework their result that $\operatorname{Igu}^{\operatorname{geom}}_{X}$ is rational over the localization ${\mathbf{Gr}(F^{\operatorname{var}})}_{\mathbb{L}}$, where ${\mathbb{L}}$ is the class of an affine line, called the Lefschetz class. Their proof relies on Embedded Resolution of Singularities, and hence works in positive characteristic only for surfaces. In §§9 and 10, I will give examples of hypersurfaces, in any characteristic, for which we can derive the rationality of the Igusa zeta series (in fact, over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$), without any appeal to resolution of singularities. The proofs are, moreover, far more elementary and algorithmic in nature because of the functorial properties of our construction.
2. The Grothendieck group of a lattice
The most general setup in which one can define a Grothendieck group is the category of semi-lattices. Recall that a lattice $\Lambda$ is a partially ordered set in which every finite subset has an infimum and a supremum. For any two elements $a,b\in\Lambda$, we let $a\wedge b$ and $a\vee b$ denote respectively the infimum and the supremum of $\{a,b\}$. If only infima exist, then we call $\Lambda$ a semi-lattice. Given a semi-lattice $\Lambda$, we call a finite subset $S\subseteq\Lambda$ admissible if it has a supremum $a$, in which case we call $S$ a covering of $a$.
Scissor relations
For each $n>1$, we define the $n$-th scissor polynomial
$$S_{n}(x):=1-\prod_{i=1}^{n}(1-x_{i})=x_{1}+\dots+x_{n}-\dots+(-1)^{n-1}x_{1}\cdots x_{n}\in\mathbb{Z}[x]$$
where $x=(x_{1},\dots,x_{n})$.
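Since the variables will later be specialized to idempotents, it is instructive to evaluate $S_{n}$ on $0/1$ tuples, where it becomes the indicator of a union, i.e. the inclusion-exclusion polynomial. A quick sketch in Python (the function name is ours, not from the text):

```python
from itertools import product
from math import prod

def scissor(xs):
    """S_n(x) = 1 - prod(1 - x_i), the n-th scissor polynomial."""
    return 1 - prod(1 - x for x in xs)

# On 0/1 tuples (idempotent values), S_n is the indicator of "some x_i = 1",
# i.e. the indicator function of a union of sets.
for xs in product((0, 1), repeat=4):
    assert scissor(xs) == max(xs)
```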
Let $\mathfrak{i}_{n}$ be the generic idempotency ideal, that is to say, the ideal in $\mathbb{Z}[x]$ generated by the relations $x_{i}^{2}-x_{i}$, for $i=1,\dots,n$. The following identities among scissor polynomials will be useful later:
2.1 Lemma.
For each $n$, we have the congruence
(3)
$$S_{n}(x_{1},\dots,x_{n})+S_{n-1}(x_{1}x_{n},\dots,x_{n-1}x_{n})\equiv S_{n-1}(x_{1},\dots,x_{n-1})+x_{n}\mod\mathfrak{i}_{n}.$$
in $\mathbb{Z}[x]$, with $x=(x_{1},\dots,x_{n})$. More generally, for $y=(y_{1},\dots,y_{m})$, we have
(4)
$$\displaystyle S_{n+m}(x_{1},\dots,x_{n},y_{1},\dots,y_{m})+S_{nm}(x_{1}y_{1},x_{1}y_{2},\dots,x_{n}y_{m})\equiv\\
\displaystyle S_{n}(x_{1},\dots,x_{n})+S_{m}(y_{1},\dots,y_{m})\mod\mathfrak{i}_{n+m}.$$
Proof.
Note that (3) is just a special case of (4). Let us first prove that in any ring $D$, we have an identity
(5)
$$\prod_{i=1}^{n}(1-ea_{i})=1-e+e\prod_{i=1}^{n}(1-a_{i})$$
for $a_{i}\in D$ and $e$ an idempotent in $D$. Indeed, write $1-ea_{i}=1-e+e(1-a_{i})$. Since $e(1-e)=0$, the expansion of the product on the left hand side of (5) yields $(1-e)^{n}+e^{n}\prod_{i}(1-a_{i})$, and the claim follows since $e$ and $1-e$ are both idempotents.
To prove (4), we carry our calculations out in $C:=\mathbb{Z}[x,y]/\mathfrak{i}_{n+m}$. To simplify notation, let us write $P:=\prod_{i}(1-x_{i})$ and $Q:=\prod_{j}(1-y_{j})$. Hence the first, third, and fourth term in (4) are respectively $1-PQ$, $1-P$ and $1-Q$. Let us therefore expand the second term. Applying (5), for each $i$, with $x_{i}$ as idempotent in the product indexed by $j$, and then again in the last line, with $1-Q$ as idempotent, we get
$$\displaystyle 1-S_{nm}(x_{1}y_{1},x_{1}y_{2},\dots,x_{n}y_{m})$$
$$\displaystyle=\prod_{i=1}^{n}\left(\prod_{j=1}^{m}(1-x_{i}y_{j})\right)$$
$$\displaystyle=\prod_{i=1}^{n}(1-x_{i}+x_{i}Q)$$
$$\displaystyle=\prod_{i=1}^{n}(1-(1-Q)x_{i})=Q+(1-Q)P$$
From this, (4) now follows immediately.
∎
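Since $\mathbb{Z}[x]/\mathfrak{i}_{n}$ embeds into $\mathbb{Z}^{2^{n}}$ via evaluation at the $0/1$ tuples, the congruences (3) and (4) can also be verified mechanically by checking all Boolean assignments; a brute-force sketch in Python (our own check, not part of the proof):

```python
from itertools import product
from math import prod

def S(xs):
    """The scissor polynomial S_n(x) = 1 - prod(1 - x_i)."""
    return 1 - prod(1 - x for x in xs)

# Congruence (3): S_n(x) + S_{n-1}(x_1 x_n, ..., x_{n-1} x_n)
#               = S_{n-1}(x_1, ..., x_{n-1}) + x_n   (mod i_n)
n = 5
for x in product((0, 1), repeat=n):
    lhs = S(x) + S([xi * x[-1] for xi in x[:-1]])
    rhs = S(x[:-1]) + x[-1]
    assert lhs == rhs

# Congruence (4) for n = 3, m = 2: check all 0/1 assignments of (x, y).
for xy in product((0, 1), repeat=5):
    x, y = xy[:3], xy[3:]
    lhs = S(x + y) + S([xi * yj for xi in x for yj in y])
    rhs = S(x) + S(y)
    assert lhs == rhs
```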
We can write any scissor polynomial $S_{n}$ as the difference $S_{n}^{+}-S_{n}^{-}$ of two polynomials $S_{n}^{+}$ and $S_{n}^{-}$ with positive coefficients, that is to say, $S_{n}^{+}$ is the sum of terms in $S_{n}$ of odd degree, and $S_{n}^{-}$ is minus the sum of all terms of even degree. Put differently, $S_{n}^{+}$ and $S_{n}^{-}$ are the respective sums of all square-free monomials in the variables $(x_{1},\dots,x_{n})$ of respectively odd and (positive) even degree.
The scissor group of a lattice
Given a lattice $\Lambda$, let $\mathbb{Z}[\Lambda]$ be the free Abelian group on $\Lambda$. Using the infimum of $\Lambda$ as multiplication, we get a ring structure on $\mathbb{Z}[\Lambda]$, that is to say,
$$(\sum_{i=1}^{n}a_{i})\cdot(\sum_{j=1}^{m}b_{j}):=\sum_{i=1}^{n}\sum_{j=1}^{m}a_{i}\wedge b_{j}$$
for $a_{i},b_{j}\in\Lambda$.
Let $\mathbf{a}:=(a_{1},\dots,a_{n})$ be a tuple in $\Lambda$. We will write $\bigvee\mathbf{a}$ for $a_{1}\vee\dots\vee a_{n}$. Substitution induces a ring homomorphism $\mathbb{Z}[x]\to\mathbb{Z}[\Lambda]\colon x_{i}\mapsto a_{i}$. In particular, $S_{n}(\mathbf{a})$ is a well-defined element in $\mathbb{Z}[\Lambda]$, and we may abbreviate it as $S(\mathbf{a})$, since the arity is clear from the context. Note that, since any element of $\Lambda$ is idempotent in $\mathbb{Z}[\Lambda]$, the kernel of this homomorphism $\mathbb{Z}[x]\to\mathbb{Z}[\Lambda]$ contains $\mathfrak{i}$, the generic idempotency ideal. We define the scissor relation on $\mathbf{a}$ as the formal sum
(6)
$$(\bigvee\mathbf{a})-S(\mathbf{a}).$$
For instance, if $n=2$, then the (second) scissor relation $(a_{1}\vee a_{2})-S_{2}(a_{1},a_{2})$ is equal to
(7)
$$(a_{1}\vee a_{2})-a_{1}-a_{2}+(a_{1}\wedge a_{2}).$$
Similarly, for $n=3$, we get
$$(a_{1}\vee a_{2}\vee a_{3})-a_{1}-a_{2}-a_{3}+(a_{1}\wedge a_{2})+(a_{1}\wedge a_{3})+(a_{2}\wedge a_{3})-(a_{1}\wedge a_{2}\wedge a_{3}).$$
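In the lattice of finite subsets of a set (with $\wedge=\cap$ and $\vee=\cup$), applying the counting measure to a scissor relation recovers the inclusion-exclusion principle; a small illustrative check in Python (the example sets are ours):

```python
# Three finite sets, viewed as elements of the lattice of subsets.
a1, a2, a3 = {1, 2}, {2, 3}, {3, 4}

# The n = 3 scissor relation, under the counting measure, says that
# |a1 ∪ a2 ∪ a3| equals the alternating sum of intersections.
lhs = len(a1 | a2 | a3)
rhs = (len(a1) + len(a2) + len(a3)
       - len(a1 & a2) - len(a1 & a3) - len(a2 & a3)
       + len(a1 & a2 & a3))
assert lhs == rhs  # both equal 4
```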
We define the scissor group of $\Lambda$, denoted ${\mathbf{Sciss}(\Lambda)}$, as the quotient of $\mathbb{Z}[\Lambda]$ by the subgroup $N$ generated by all second scissor relations (7). Although we will later make a notational distinction between an element and its class in the scissor group, at present no such distinction is needed, and so we continue to write $a$ for the image of $a\in\Lambda$ in ${\mathbf{Sciss}(\Lambda)}$.
2.2 Remark.
The ring structure on $\mathbb{Z}[\Lambda]$, given by $\wedge$, descends to a ring structure on ${\mathbf{Sciss}(\Lambda)}$, since $N$ is in fact an ideal. When we apply this to formulae in the next section, this ring structure on ${\mathbf{Sciss}(\Lambda)}$ will play a minor role, and instead, a different multiplication will be introduced.
2.3 Proposition.
For each tuple $\mathbf{a}$ in $\Lambda$, we have a scissor relation
$$\bigvee\mathbf{a}=S(\mathbf{a})$$
in ${\mathbf{Sciss}(\Lambda)}$.
Proof.
We prove this by induction on the length $n\geq 2$ of $\mathbf{a}=(a_{1},\dots,a_{n})$, where the case $n=2$ is just the definition. Since $\mathfrak{i}$ lies in the kernel of $\mathbb{Z}[x]\to\mathbb{Z}[\Lambda]$, the equivalence (3) in Lemma 2.1 becomes an identity in the latter group, that is to say,
(8)
$$S_{n}(a_{1},\dots,a_{n})=S_{n-1}(a_{1},\dots,a_{n-1})-S_{n-1}(a_{1}\wedge a_{n},\dots,a_{n-1}\wedge a_{n})+a_{n}.$$
Viewing $\bigvee\mathbf{a}$ as the disjunction $b\vee a_{n}$, where $b:=a_{1}\vee\dots\vee a_{n-1}$, the defining (second) scissor relation yields an identity
(9)
$$\bigvee\mathbf{a}=b+a_{n}-(b\wedge a_{n})=(a_{1}\vee\dots\vee a_{n-1})+{a_{n}}-{(a_{1}\wedge a_{n})\vee\dots\vee(a_{n-1}\wedge a_{n})}.$$
Subtracting (8) from (9), the left hand side is $\bigvee\mathbf{a}-S_{n}(\mathbf{a})$, and the right hand side is equal to
$$\displaystyle\big{(}(a_{1}\vee\dots\vee a_{n-1})-{S_{n-1}(a_{1},\dots,a_{n-1})}\big{)}-\\
\displaystyle\big{(}{(a_{1}\wedge a_{n})\vee\dots\vee(a_{n-1}\wedge a_{n})}-{S_{n-1}(a_{1}\wedge a_{n},\dots,a_{n-1}\wedge a_{n})}\big{)}.$$
By induction, both terms are zero in ${\mathbf{Sciss}(\Lambda)}$, and the result follows.
∎
In particular, $N$ is generated by all scissor relations, of any arity. We may generalize this to a semi-lattice $\Lambda$ (with multiplication still given by $\wedge$) as follows. Let $N$ be the subgroup of $\mathbb{Z}[\Lambda]$ generated by all expressions $a-S_{n}(\mathbf{b})$, where $\mathbf{b}=(b_{1},\dots,b_{n})$ (or rather its entries) ranges over all admissible coverings of $a$, that is to say, such that $a$ is the supremum of the $b_{i}$. It is not hard to check that $N$ is in fact an ideal, and the resulting residue ring is the scissor ring ${\mathbf{Sciss}(\Lambda)}:=\mathbb{Z}[\Lambda]/N$.
The Grothendieck group of a graph (semi-)lattice
Let $\Lambda$ be a (semi-)lattice.
By a (directed) graph on $\Lambda$ we simply mean a binary relation $E$ on $\Lambda$ (we do not require it to be compatible with the join or the meet). We define the Grothendieck group of $(\Lambda,E)$, denoted $\mathbf{K}_{E}(\Lambda)$, as the factor group of ${\mathbf{Sciss}(\Lambda)}$ modulo the subgroup generated by all elements of the form $a-b$ such that there is an edge from $a$ to $b$. In other words, $\mathbf{K}_{E}(\Lambda)$ is the quotient of the free Abelian group $\mathbb{Z}[\Lambda]$ on $\Lambda$ modulo the subgroup $N_{\text{sciss}}+N_{E}$, where $N_{\text{sciss}}=N$ is the group of scissor relations and $N_{E}$ the group generated by all $a-b$ with $(a,b)\in E$.
If $\tilde{E}$ is the equivalence relation generated by $E$ (meaning that $a$ is equivalent to $b$ if there is an undirected path from $a$ to $b$), then $N_{E}=N_{\tilde{E}}$, and hence both graphs have the same Grothendieck group, as do all intermediate graphs between $E$ and $\tilde{E}$. Therefore, often, though not always, $E$ will already be an equivalence relation. Although the quotient $\mathbb{Z}[\Lambda]/N_{E}$ is equal to the free Abelian group on the quotient $\Lambda/\tilde{E}$, the latter is no longer a semi-lattice, and so a priori, does not admit a scissor subgroup. We may paraphrase this situation as: cut first, then identify.
3. The Grothendieck ring of a theory
Inspired by the ground-breaking work of Denef and Loeser, model-theorists have recently been interested in the Grothendieck ring of an arbitrary first-order theory, see for instance [CH, CHGrot, HuKaz, KraSca]. The new perspective offered here is that rather than looking at all formulae and all definable isomorphisms, much better behaved objects can be obtained when restricting these classes.
Fix a
language $\mathcal{L}$, by which we mean the collection of all well-formed
formulae in a certain signature, in a fixed countable collection of variables $v_{1},v_{2},\dots$ (we usually start numbering from $1$, as any non-logician would; note also that some authors use the terms language and signature interchangeably). We denote a formula by Greek lower case letters $\phi,\psi,\dots$, and often we give names to their free variables as well, taken from the last letters of the Latin alphabet: $x,y,z$. If $\phi(x_{1},\dots,x_{s})$ is an $\mathcal{L}$-formula, and $M$ an $\mathcal{L}$-structure, then the set defined by $\phi$ in $M$, or the interpretation of $\phi$ in $M$, is the following subset $\phi(M)$. Suppose $x_{i}=v_{n_{i}}$, and let $n$ be the maximum of all $n_{i}$. Then $\phi(M)$ is the subset of $M^{n}$ of all $(a_{1},\dots,a_{n})$ such that $\phi(a_{n_{1}},\dots,a_{n_{s}})$ holds in $M$. Any set of the form $\phi(M)$ will be called a definable subset. Note that the Cartesian power $n$ is not determined by the number of free variables $s$, but by the highest index of a $v_{i}$ occurring in the formula $\phi$; we call $n$ the arity of $\phi$ (which therefore is not to be confused with its number of free variables). For instance, the subset defined by $\phi:=(v_{3}=0)\wedge(v_{7}=1)$ in $\mathbb{Z}$ is the $7$-ary subset $\mathbb{Z}^{2}\times\{0\}\times\mathbb{Z}^{3}\times\{1\}$ of $\mathbb{Z}^{7}$. Also note that this leaves a certain amount of ambiguity: the formula $v_{3}=0$ has, prima facie, arity $3$, but as a conjunct of $\phi$ it behaves as a formula of arity $7$. Notwithstanding all this, the tacit rule will be that if $(x_{1},\dots,x_{n})$ denotes the tuple of free variables of $\phi$, then $(x_{1},\dots,x_{n})$ stands for the tuple $(v_{1},\dots,v_{n})$; more generally, if $x,y,z,\dots$ are tuples of free variables of $\phi$, listed in that order and with total number $n$, then these variables represent the first $n$ variables $v_{i}$, that is to say, $(x_{1},\dots,x_{s},y_{1},\dots,y_{t},z,\dots)=(v_{1},\dots,v_{n})$. Put differently, unless mentioned explicitly, the arity of a formula is its number of free variables.
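The arity convention can be made concrete by interpreting the example formula $\phi:=(v_{3}=0)\wedge(v_{7}=1)$ in a finite structure; a sketch in Python (our own illustration, taking $M=\mathbb{Z}/2\mathbb{Z}$ instead of $\mathbb{Z}$ so that the definable set is finite):

```python
from itertools import product

M = (0, 1)  # the two-element ring Z/2Z

def phi(a):
    """phi := (v_3 = 0) and (v_7 = 1), a formula of arity 7."""
    return a[2] == 0 and a[6] == 1

# phi(M) is a subset of M^7, even though phi has only two free variables:
definable_set = [a for a in product(M, repeat=7) if phi(a)]
assert len(definable_set) == 2 ** 5  # M^2 x {0} x M^3 x {1}
```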
In this respect, it is useful to introduce the primary form of $\phi$, defined as $\phi^{\circ}:=\phi(v_{1},\dots,v_{s})$, where $s$ is the number of free variables of $\phi$. The implicit assumption is that a formula is in primary form, unless the variables are stated explicitly. Furthermore, the arity of the disjunction or conjunction of two formulae is implicitly taken to be the maximum of their arities.
This definition also applies to a sentence $\sigma$, that is to say, a formula without free variables: given an $\mathcal{L}$-structure $M$, we let $\sigma(M)$ be one of the two possible subsets of $M^{0}:=\{\emptyset\}$, namely $\{\emptyset\}$ if $\sigma$ holds in $M$, and $\emptyset$ if it does not. If $\phi$ and $\psi$ are two $\mathcal{L}$-formulae, then $\phi\wedge\psi$ and $\phi\vee\psi$ are again $\mathcal{L}$-formulae, defining in each $\mathcal{L}$-structure $M$ respectively $\phi(M)\cap\psi(M)$ and $\phi(M)\cup\psi(M)$. Moreover, $\neg\phi$ defines the complement of $\phi(M)$ in $M^{n}$, where $n$ is the arity of $\phi$. A trivial yet important formula is the $n$-th Lefschetz formula
$$\lambda_{n}:=(v_{1}=v_{1})\wedge\dots\wedge(v_{n}=v_{n}),$$
defining the full Cartesian power $M^{n}$ in any model. We abbreviate the Lefschetz formula $\lambda_{n}(x):=(x=x)$ with $x$ an $n$-tuple of variables (which, according to our tacit assumptions, stand for the $n$ first variables $v_{i}$), and write $\lambda(x)$ for $\lambda_{1}(x)$.
An $\mathcal{L}$-theory $\mathbf{T}$ is any non-empty collection of consistent $\mathcal{L}$-sentences (it is convenient to assume that $\mathbf{T}$ contains at least one sentence, which we always could assume to be a tautology like $\forall x\lambda(x)$). A model of $\mathbf{T}$ is an $\mathcal{L}$-structure for which all sentences in $\mathbf{T}$ are true. By the compactness theorem, every theory has at least one model. Given a (non-empty) collection $\mathfrak{K}$ of $\mathcal{L}$-structures, we define the $\mathcal{L}$-theory of $\mathfrak{K}$, denoted $\mathbf{T}_{\mathcal{L}}(\mathfrak{K})$, to be the collection of all $\mathcal{L}$-sentences that are true in any structure belonging to $\mathfrak{K}$. The collection $\mathfrak{K}$ is axiomatizable (also called first-order), if it consists precisely of the models of $\mathbf{T}_{\mathcal{L}}(\mathfrak{K})$.
Let $\mathbf{T}$ be a theory in the language $\mathcal{L}$.
We say that two formulae $\phi$ and
$\psi$ are $\mathbf{T}$-equivalent, denoted $\phi\sim_{\mathbf{T}}\psi$,
if they define the same subset in any model of $\mathbf{T}$, that is to say, if
$\phi(M)=\psi(M)$, for any model $M$ of $\mathbf{T}$. By the compactness
theorem, this is equivalent to $\mathbf{T}$ proving that $(\forall x)\phi(x)\leftrightarrow\psi(x)$. In particular, equivalent formulae must have the same free variables $v_{i}$. Note that the logical connectives $\wedge$ and $\vee$, as well as negation $\neg$, respect this equivalence relation. If $\mathbf{T}$ consists entirely of tautologies, then two formulae are $\mathbf{T}$-equivalent if and only if they are logically equivalent. If $\mathbf{T}=\mathbf{T}_{\mathcal{L}}(\mathfrak{K})$ is the theory of a non-empty class $\mathfrak{K}$ of $\mathcal{L}$-structures, then $\phi$ and $\psi$ are $\mathbf{T}$-equivalent if and only if they define the same subset in each structure belonging to $\mathfrak{K}$. In other words, we do not need to check all models, but only those that “generate” the theory, and therefore, we will often make no distinction between theories and collections of structures. (Caveat: when dealing with infinitary formulae, as in §8, this is no longer true.) For instance, instead of calling two formulae $\mathbf{T}_{\mathcal{L}}(\mathfrak{K})$-equivalent, we may simply call them $\mathfrak{K}$-equivalent. When the theory $\mathbf{T}$ is fixed, we will often identify $\mathbf{T}$-equivalent formulae. Formally, we therefore introduce:
The category of definable sets
The category of $\mathbf{T}$-definable sets, $\operatorname{Def}({\mathbf{T}})$, has
as objects the $\mathbf{T}$-equivalence classes of formulae and as morphisms
the definable maps. There are, in fact, a few variant ways of defining the latter notion, and for our purposes, the following will be most suitable. Let $\phi$ and $\psi$ be $\mathcal{L}$-formulae of arity $n$ and $m$ respectively, and let $x=(x_{1},\dots,x_{n})$ and $y=(y_{1},\dots,y_{m})$ be variables. A formula $\theta(x,y)$ (or, rather, its $\mathbf{T}$-equivalence class) is called morphic (on $\phi$), or defines a $\mathbf{T}$-morphism $f_{\theta}\colon\phi\to\psi$, if $\mathbf{T}$ proves the following sentences
(1)
$(\forall x)(\exists y)\phi(x)\Rightarrow\theta(x,y)$
(2)
$(\forall x,y,y^{\prime})\phi(x)\wedge\theta(x,y)\wedge\theta(x,y^{\prime})\Rightarrow y=y^{\prime}$
(3)
$(\forall x,y)\phi(x)\wedge\theta(x,y)\Rightarrow\psi(x)$.
In other words, (1) and (2) express that in each model $M$ of $\mathbf{T}$, the definable subset $\theta(M)\subseteq M^{n}\times M^{m}$, when restricted to $\phi(M)\times M^{m}$, is the graph of a function $f_{M}\colon\phi(M)\to M^{m}$, and, furthermore, (3) ensures that the image of $f_{M}$ lies in $\psi(M)\subseteq M^{m}$. We will therefore denote this map by $f_{M}\colon\phi(M)\to\psi(M)$. Although slightly inaccurate, we will also express this situation by simply saying that $\theta(M)$ is the graph of a function $\phi(M)\to\psi(M)$. Part of the definition of a morphic formula is an (often implicit) division of the free variables into two sets: the source variables, $x$, and the target variables, $y$. The next result shows that these definitions constitute a category.
3.1 Lemma.
If $f_{\theta}\colon\phi(x)\to\psi(y)$ and $g_{\zeta}\colon\psi(y)\to\gamma(z)$ are morphisms defined by $\theta(x,y)$ and $\zeta(y,z)$ respectively, then the formula $(\zeta\circ\theta)(x,z):=(\exists y)\theta(x,y)\wedge\zeta(y,z)$ is again a morphic formula, defining the composition $h\colon\phi(x)\to\gamma(z)$.
Proof.
The proof is straightforward but technical; since we will rely on it heavily, we provide it in some detail. Put $\xi:=\zeta\circ\theta$. We first verify condition (1), and to this end, we may work in a fixed model $M$ of $\mathbf{T}$. So, let $\mathbf{a}\in\phi(M)$. By (1) and (3) applied to $\theta(x,y)$, we get a tuple $\mathbf{b}\in\psi(M)$ such that $(\mathbf{a},\mathbf{b})\in\theta(M)$. By the same argument, applied to $\zeta$, we then get $\mathbf{c}\in\gamma(M)$ such that $(\mathbf{b},\mathbf{c})\in\zeta(M)$. In particular, $\mathbf{b}$ witnesses that $(\mathbf{a},\mathbf{c})\in\xi(M)$, proving (1) for $\xi$. Essentially the same argument also shows that (3) holds. It remains to show (2) for $\xi$. To this end, let $\mathbf{a}\in\phi(M)$ be such that $(\mathbf{a},\mathbf{c})$ and $(\mathbf{a},\mathbf{c}^{\prime})$ both belong to $\xi(M)$. This means that there are tuples $\mathbf{b},\mathbf{b}^{\prime}$ such that $\theta(\mathbf{a},\mathbf{b})\wedge\zeta(\mathbf{b},\mathbf{c})$ and $\theta(\mathbf{a},\mathbf{b}^{\prime})\wedge\zeta(\mathbf{b}^{\prime},\mathbf{c}^{\prime})$ hold in $M$. By condition (2) for $\theta$, we get $\mathbf{b}=\mathbf{b}^{\prime}$, and by (3), this tuple then belongs to $\psi(M)$. So we may repeat this argument for the tuples $\mathbf{b},\mathbf{c},\mathbf{c}^{\prime}$ and $\zeta$, to get $\mathbf{c}=\mathbf{c}^{\prime}$, as we wanted to show.
∎
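Over a finite model, the three conditions for a morphic formula, and the composition of Lemma 3.1, can be checked by brute force; a sketch in Python (the helper names are ours, with $M=\mathbb{Z}/5\mathbb{Z}$ and $\theta(x,y)$ the graph of squaring):

```python
M = range(5)  # the ring Z/5Z

phi   = lambda x: True                 # phi = lambda_1, defining all of M
psi   = lambda y: True
theta = lambda x, y: y == x * x % 5    # graph of the squaring map

def is_morphic(theta, phi, psi):
    """Check conditions (1)-(3) of a morphic formula on the finite model M."""
    for x in M:
        if not phi(x):
            continue
        images = [y for y in M if theta(x, y)]
        if len(images) != 1:           # (1) existence and (2) uniqueness
            return False
        if not psi(images[0]):         # (3) the image lies in psi
            return False
    return True

assert is_morphic(theta, phi, psi)

# Composition as in Lemma 3.1:
# (zeta o theta)(x, z) := (exists y) theta(x, y) and zeta(y, z).
zeta = lambda y, z: z == (y + 1) % 5
comp = lambda x, z: any(theta(x, y) and zeta(y, z) for y in M)
assert is_morphic(comp, phi, psi)
```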
We
call a morphism $f_{\theta}\colon\phi\to\psi$, or the corresponding formula $\theta$,
injective, surjective, or bijective, if all the
corresponding maps $f_{M}\colon\phi(M)\to\psi(M)$ are. The next result shows that any definable bijection is an isomorphism in the category $\operatorname{Def}({\mathbf{T}})$, that is to say, its inverse is also a morphism, which we therefore call a $\mathbf{T}$-isomorphism.
3.2 Lemma.
Let $\theta(x,y)$ be a morphic formula defining a bijection $f_{\theta}\colon\phi(x)\to\psi(y)$. If $\operatorname{inv}(\theta)(x,y):=\theta(y,x)$, then $\operatorname{inv}(\theta)$ defines a morphism $g\colon\psi\to\phi$, which gives the inverse of $f_{M}$ on each model $M$ of $\mathbf{T}$.
Proof.
To show that $\zeta(x,y):=\operatorname{inv}(\theta)(x,y)$ is morphic, yielding a morphism $\psi\to\phi$, we may again check this in a model $M$ of $\mathbf{T}$. Suppose $\mathbf{a}\in\psi(M)$. Since $f_{M}$ is bijective, there is some $\mathbf{b}\in\phi(M)$ such that $f_{M}(\mathbf{b})=\mathbf{a}$. This means that $(\mathbf{b},\mathbf{a})\in\theta(M)$, whence $(\mathbf{a},\mathbf{b})\in\zeta(M)$, proving conditions (1) and (3). Since $f_{M}$ is a bijection, the tuple $\mathbf{b}$ is unique, and this proves (2). It is now easy to see that the map $g_{M}\colon\mathbf{a}\mapsto\mathbf{b}$ is the inverse of $f_{M}$.
∎
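Lemma 3.2 can likewise be checked on a finite model: swapping the roles of the variables in the graph of a bijection yields the graph of its inverse. A small sketch in Python (our own illustration, with the shift $x\mapsto x+2$ on $\mathbb{Z}/5\mathbb{Z}$):

```python
M = range(5)  # the ring Z/5Z

theta = lambda x, y: y == (x + 2) % 5  # graph of the bijection x -> x + 2
inv   = lambda x, y: theta(y, x)       # inv(theta)(x, y) := theta(y, x)

# inv(theta) defines the inverse map on every element of M:
for a in M:
    b = next(y for y in M if theta(a, y))      # b = f(a)
    assert inv(b, a)                           # (b, a) lies in inv(theta)
    assert next(y for y in M if inv(b, y)) == a  # the inverse sends b back to a
```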
3.2.1. $\mathcal{I}$-morphisms
Given a
family $\mathcal{I}\subseteq\mathcal{L}$ of formulae (closed under $\mathbf{T}$-equivalence), by an
$\mathcal{I}$-definable $\mathbf{T}$-map, or simply, an $\mathcal{I}$-morphism between $\phi$ and $\psi$, we mean
a morphic formula $\theta$ belonging to $\mathcal{I}$ which defines a morphism
$f_{\theta}\colon\phi\to\psi$
(without imposing any restriction on $\phi$ and $\psi$). We call $f_{\theta}\colon\phi\to\psi$ an $\mathcal{I}$-isomorphism (modulo $\mathbf{T}$), if $f_{\theta}$ is bijective and its inverse is again an $\mathcal{I}$-morphism. In view of Lemma 3.2, a bijective morphism $f_{\theta}$ is an $\mathcal{I}$-isomorphism, if, for instance, both $\theta$ and $\operatorname{inv}(\theta)$ belong to $\mathcal{I}$. However, in general, a bijective $\mathcal{I}$-morphism need not be an $\mathcal{I}$-isomorphism (see, for instance, Example 4.7).
A note of caution: in general, $\mathcal{I}$-isomorphism, in spite of its name, is not an equivalence relation, since it is not clear that the composition of two $\mathcal{I}$-isomorphisms is again an $\mathcal{I}$-morphism. In view of Lemma 3.1, the collection of $\mathcal{I}$-morphisms is closed under composition, if, for instance, $\mathcal{I}$ is closed under existential quantification, although, as we shall see, this is not the only instance in which this is true.
3.2.2. Explicit formulae
A morphic formula, in general, only defines a partial map, but there is an important
type of morphism that is always global: by an explicit morphism, we mean a formula $\theta(x,y)$ of the form $\bigwedge_{j=1}^{m}y_{j}=t_{j}(x)$, with each $t_{j}$ an $\mathcal{L}$-term, and with source variables $x=(x_{1},\dots,x_{n})$ and target variables $y=(y_{1},\dots,y_{m})$; we will abbreviate this by $y=t(x)$, for $t:=(t_{1},\dots,t_{m})$. Such a formula always defines a global map on each model $M$, given as $t_{M}\colon M^{n}\to M^{m}\colon\mathbf{a}\mapsto t(\mathbf{a})$. In particular, if $f\colon\phi\to\psi$ is explicit, then $f_{M}$ is the restriction of $t_{M}$ to $\phi(M)$. We denote the collection of all explicit formulae by $\mathcal{E}xpl$. Note that $\mathcal{E}xpl$ is compositionally closed, for if $y=t(x)$ and $z=s(y)$ are two explicit formulae, then the explicit formula $z=s(t(x))$ defines their composition. However, as Example 4.7 shows, a bijective $\mathcal{E}xpl$-morphism need not be an $\mathcal{E}xpl$-isomorphism. At any rate, any formula $\phi$ is $\mathcal{E}xpl$-isomorphic to its primary form $\phi^{\circ}$.
We already observed that the logical connectives $\wedge$ and $\vee$ are well-defined on $\operatorname{Def}({\mathbf{T}})$. We introduce two further operations on $\operatorname{Def}({\mathbf{T}})$.
3.2.3. Multiplication on $\operatorname{Def}({\mathbf{T}})$
Let $\phi$ and $\psi$ be
two $\mathcal{L}$-formulae, in $n$ and $m$ free variables respectively (or, more correctly, of arity $n$ and $m$ respectively).
We define their product as
$$(\phi\times\psi)(v_{1},\dots,v_{n+m}):={\phi(v_{1},\dots,v_{n})\wedge\psi(v_{n+1},\dots,v_{n+m})}.$$
A note of caution: it is not always true that $(\phi\times\psi)(M)$ is equal to $\phi(M)\times\psi(M)$ in a model $M$ of $\mathbf{T}$, due to the numbering of the variables. This only holds for primary forms: since always $\phi\times\psi=\phi^{\circ}\times\psi^{\circ}$, the interpretation of this formula in $M$ is equal to $\phi^{\circ}(M)\times\psi^{\circ}(M)$. In particular, although this multiplication is not commutative, it is so up to explicit isomorphism (given by permuting the variables appropriately). For Lefschetz formulae we obviously have $\lambda_{n}\times\lambda_{m}=\lambda_{m+n}$.
We leave it to the reader to verify that if $\phi_{i}$ and $\psi_{i}$ are $\mathbf{T}$-equivalent, for $i=1,2$, then so are the respective products $\phi_{1}\times\phi_{2}$ and $\psi_{1}\times\psi_{2}$. In other words, the multiplication is well-defined modulo $\mathbf{T}$-equivalence, and hence yields a multiplication on $\operatorname{Def}({\mathbf{T}})$. The multiplicative unit in $\operatorname{Def}({\mathbf{T}})$ is the class of any sentence $\sigma$ which is a logical consequence of $\mathbf{T}$, and will be denoted $\top$ (for instance, one may take $\sigma$ to be the tautology $(\forall x)\lambda(x)$). We will also write $\bot$ for the class of $\neg\sigma$, and we have $\bot\times\phi=\bot=\phi\times\bot$, for all
formulae $\phi$ (note that, per convention, the Cartesian product of any set with the empty set is the empty set).
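For primary forms, the product formula indeed defines the Cartesian product of the two definable sets; a quick cardinality check in Python (the example formulae and helper names are ours, with formulae modeled as predicates on tuples):

```python
from itertools import product

M = (0, 1, 2)  # a small finite structure

phi = lambda a: a[0] != 0        # arity 1, primary form
psi = lambda b: b[0] == b[1]     # arity 2, primary form

# (phi x psi)(v1, v2, v3) := phi(v1) and psi(v2, v3)
prod_formula = lambda c: phi(c[:1]) and psi(c[1:])

phi_M  = [a for a in product(M, repeat=1) if phi(a)]
psi_M  = [b for b in product(M, repeat=2) if psi(b)]
prod_M = [c for c in product(M, repeat=3) if prod_formula(c)]

# For primary forms, (phi x psi)(M) = phi(M) x psi(M), so sizes multiply.
assert len(prod_M) == len(phi_M) * len(psi_M)
```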
Given two morphisms $f_{\theta}\colon\phi\to\psi$ and
$f_{\theta^{\prime}}\colon{\phi^{\prime}}\to{\psi^{\prime}}$ of $\mathcal{L}$-formulae, they induce a morphism $f\colon(\phi\times\phi^{\prime})\to(\psi\times\psi^{\prime})$ between the respective products as follows. If $\theta(\mathbf{x},\mathbf{y})$ and $\theta^{\prime}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})$ are the respective defining formulae, then the order of the variables in the product $\theta\times\theta^{\prime}$ is by definition $\mathbf{x},\mathbf{y},\mathbf{x}^{\prime},\mathbf{y}^{\prime}$. The formula obtained from this product by changing this order to $\mathbf{x},\mathbf{x}^{\prime},\mathbf{y},\mathbf{y}^{\prime}$ then defines $f$. Note that the formula defining $f$ is therefore $\mathcal{E}xpl$-isomorphic with $\theta\times\theta^{\prime}$: indeed, this isomorphism is given by
(10)
$$\delta(\mathbf{x},\mathbf{y},\mathbf{x}^{\prime},\mathbf{y}^{\prime},\mathbf{z}_{1},\mathbf{z}_{2},\mathbf{z}_{3},\mathbf{z}_{4}):=(\mathbf{z}_{1}=\mathbf{x})\wedge(\mathbf{z}_{2}=\mathbf{x}^{\prime})\wedge(\mathbf{z}_{3}=\mathbf{y})\wedge(\mathbf{z}_{4}=\mathbf{y}^{\prime})$$
with $\mathbf{z}_{1},\mathbf{z}_{2},\mathbf{z}_{3},\mathbf{z}_{4}$ tuples of variables (of the appropriate length).
3.2.4. Disjoint sum
To define the second operation on $\operatorname{Def}({\mathbf{T}})$, we need to assume that $\mathcal{L}$ contains at least two constant symbols which are interpreted in each model of $\mathbf{T}$ as different elements. Since in all our applications, $\mathbf{T}$ will always be a theory of rings, for which we have the distinct constants $0$ and $1$, we will for simplicity assume that these two constant symbols are denoted $0$ and $1$ (not to be confused with the $\mathbf{T}$-equivalence classes $\bot$ and $\top$).
We define the disjoint sum of two formulae $\phi$ and $\psi$ as the formula
$$\phi\oplus\psi:=\big{(}\phi\wedge(v_{n+1}=0)\big{)}\vee\big{(}\psi\wedge(v_{n+1}=1)\big{)},$$
where $n$ is the maximum of the arities of $\phi$ and $\psi$ (so that $v_{n+1}$ is the “next” free variable). The disjoint sum is a commutative operation, but without identity element: we only have that $\phi\oplus\bot$ is $\mathcal{E}xpl$-isomorphic with $\phi$ via the morphic formula $y=x$ (note that its inverse $\phi\to\phi\oplus\bot$ is given by $(y=x)\wedge(v_{n+1}=0)$). Unlike $\vee$, disjoint sum is not idempotent: $\phi\oplus\phi$ will in general be different from $\phi$. We will assume that the (set-theoretic) disjoint union $V\sqcup W$ of two sets $V$ and $W$ is defined as the union of $V\times\{0\}$ and $W\times\{1\}$, so that we have proved the following characterization of the disjoint sum:
3.3 Lemma.
Given two formulae $\phi$ and $\psi$, their disjoint sum $\phi\oplus\psi$ is, up to $\mathbf{T}$-equivalence, the unique formula $\gamma$ such that
$$\gamma(M)=\phi(M)\sqcup\psi(M),$$
for all models $M$ of $\mathbf{T}$.∎
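Lemma 3.3 can be illustrated on a finite model: tagging with the constants $0$ and $1$ makes overlapping definable sets disjoint, so their cardinalities add. A sketch in Python (the example formulae are ours; order relations are used purely for illustration):

```python
from itertools import product

M = (0, 1, 2)  # a small finite structure

phi = lambda a: a[0] <= 1   # arity 1; phi(M) = {0, 1}
psi = lambda a: a[0] >= 1   # arity 1; psi(M) = {1, 2}, overlapping phi(M) in {1}

# phi + psi := (phi(v1) and v2 = 0) or (psi(v1) and v2 = 1)
oplus = lambda a: (phi(a) and a[1] == 0) or (psi(a) and a[1] == 1)

union  = [a for a in product(M, repeat=1) if phi(a) or psi(a)]
summed = [a for a in product(M, repeat=2) if oplus(a)]

assert len(union) == 3   # the overlap {1} is counted only once in the union
assert len(summed) == 4  # 2 + 2: tagging keeps both copies of the overlap
```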
The Grothendieck ring of a theory
It is useful to construct Grothendieck rings over restricted classes of
formulas, like quantifier free, or pp-formulae. To do this in as general a
setup as possible, fix a language $\mathcal{L}$ with constant symbols for $0$ and $1$ (so that disjoint sums are defined). By a sub-semi-lattice $\mathcal{G}$ of $\operatorname{Def}({\mathbf{T}})$, we mean a collection of formulae closed under conjunction. If $\mathcal{G}$ is also closed under disjunction, we call it a sublattice, and if it is moreover closed under negation, we call it Boolean. (Although only the restrictions $\mathcal{G}\cap\mathcal{L}_{n}$ are then Boolean lattices, this should not cause any confusion.)
We say that a sub-semi-lattice $\mathcal{G}$ is primary if it contains the Lefschetz formula $\lambda$ and all formulae of the form $v_{i}=c$, with $c$ a constant, and, moreover, $\phi$ belongs to $\mathcal{G}$ if and only if its primary form $\phi^{\circ}$ does. It follows that $\mathcal{G}$ is closed under multiplication, and if $\mathcal{G}$ is a lattice, then it is also closed under disjoint sums.
For the remainder of this section, we fix two primary sub-semi-lattices $\mathcal{F}\subseteq\operatorname{Def}({\mathbf{T}})$ (the “formulae”) and $\mathcal{I}\subseteq\operatorname{Def}({\mathbf{T}})$
(the “isomorphisms”). We assume, moreover, that $\mathcal{E}xpl\subseteq\mathcal{I}$; and, more often than not, $\mathcal{F}$ will actually be a lattice. In any case, by §2, we can define the scissor group ${\mathbf{Sciss}(\mathcal{F})}$. The $\mathcal{I}$-isomorphism relation induces a binary relation on $\mathcal{F}$, which we denote by $\cong_{\mathcal{I}}$ (strictly speaking, we should consider the equivalence relation generated by this relation, but this does not matter when working with Grothendieck groups). Recall that the corresponding Grothendieck group $\mathbf{K}_{\cong_{\mathcal{I}}}(\mathcal{F})$ is the quotient of $\mathbb{Z}[\mathcal{F}]$ modulo the subgroup $\mathfrak{N}:=\mathfrak{N}_{\mathcal{I}}+\mathfrak{N}_{\text{sciss}}$, where $\mathfrak{N}_{\mathcal{I}}$ is generated by all expressions ${\langle\phi\rangle}-{\langle\phi^{\prime}\rangle}$, for any pair of $\mathcal{I}$-isomorphic formulae $\phi,\phi^{\prime}\in\mathcal{F}$, and $\mathfrak{N}_{\text{sciss}}$ is generated, in the lattice case, by all second scissor relations ${\langle\phi\vee\psi\rangle}-{\langle\phi\rangle}-{\langle\psi\rangle}+{\langle\phi\wedge\psi\rangle}$, for any pair $\phi,\psi\in\mathcal{F}$, and where for clarity, we have written ${\langle\phi\rangle}$ for the $\mathbf{T}$-equivalence class of $\phi$ (although we will continue our practice of confusing $\mathbf{T}$-equivalence classes with their representatives). If $\mathcal{F}$ is only a semi-lattice, then $\mathfrak{N}_{\text{sciss}}$ is generated by all scissor relations ${\langle\psi\rangle}-{\langle S_{n}(\phi_{1},\dots,\phi_{n})\rangle}$, for all $\phi_{i}\in\mathcal{F}$ such that ${\langle\psi\rangle}={\langle\phi_{1}\vee\dots\vee\phi_{n}\rangle}$.
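As a concrete instance of the second scissor relation (our illustration, not taken from the text): for the unary formulae $\phi:=(v_{1}=0)$ and $\psi:=(v_{1}=1)$, the conjunction $\phi\wedge\psi$ is inconsistent, so its class in the quotient vanishes and the relation reduces to additivity on this disjoint pair:

```latex
{\langle\phi\vee\psi\rangle}
  \;=\; {\langle\phi\rangle}+{\langle\psi\rangle}-{\langle\phi\wedge\psi\rangle}
  \;\equiv\; {\langle\phi\rangle}+{\langle\psi\rangle}
  \pmod{\mathfrak{N}}.
```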
3.4 Lemma.
The subgroup $\mathfrak{N}$ is a two-sided ideal in $\mathbb{Z}[\mathcal{F}]$ with respect to the multiplication $\times$ on formulae, and the quotient ring is commutative.
Proof.
Note that in §2 we used the multiplication given by $\wedge$ to define scissor relations, whereas here multiplication is as defined in §3.2.3, and is not commutative.
Let $\alpha,\phi,\psi$ be formulae in $\mathcal{F}$. If $\phi\cong_{\mathcal{I}}\psi$,
then $\phi\times\alpha\cong_{\mathcal{I}}\psi\times\alpha$ by the discussion of (10), showing that ${\langle\alpha\rangle}\times({\langle\phi\rangle}-{\langle\psi\rangle})\in\mathfrak{N}_{\mathcal{I}}$, and a similar result holds for multiplication from the right. Hence $\mathfrak{N}_{\mathcal{I}}$ is a two-sided ideal. Moreover, since $\alpha\times\phi$ is explicitly isomorphic, whence $\mathcal{I}$-isomorphic, to $\phi\times\alpha$, both formulae have the same image modulo $\mathfrak{N}_{\mathcal{I}}$, showing that multiplication is commutative in the quotient.
For simplicity, we only give the proof that $\mathfrak{N}$ is an ideal in the lattice case, and leave the semi-lattice case to the reader. It suffices to show that any multiple of
$$u:={\langle\phi\vee\psi\rangle}-{\langle\phi\rangle}-{\langle\psi\rangle}+{\langle\phi\wedge\psi\rangle}$$
lies in $\mathfrak{N}$. Choose $\alpha^{\prime}$ to be $\mathcal{I}$-isomorphic to $\alpha$, but having free variables distinct from those of $\phi$ and $\psi$ (this can always be accomplished with an explicit change of variables). It follows from what we just proved that ${\langle\alpha\rangle}\times u-{\langle\alpha^{\prime}\rangle}\times u$ belongs to $\mathfrak{N}$, and so we may assume from the start that $\alpha$ has no free variables in common with $\phi$ and $\psi$. In particular, ${\langle\alpha\rangle}\times{\langle\phi\rangle}={\langle\phi\wedge\alpha\rangle}$, and similarly for any other term in $u$. It follows that
(11)
$${\langle\alpha\rangle}\times u={\langle(\phi\wedge\alpha)\vee(\psi\wedge\alpha)\rangle}-{\langle\phi\wedge\alpha\rangle}-{\langle\psi\wedge\alpha\rangle}+{\langle(\phi\wedge\alpha)\wedge(\psi\wedge\alpha)\rangle}$$
which is none other than the second scissor relation on the two formulae $\phi\wedge\alpha$ and $\psi\wedge\alpha$, and therefore, by definition, lies in $\mathfrak{N}$.
∎
Note that $\mathfrak{N}_{\text{sciss}}$ is in general not an ideal.
In any case, $\mathbf{K}_{\cong_{\mathcal{I}}}(\mathcal{F})$ has the structure of a commutative ring, which we will call the $\mathcal{I}$-Grothendieck ring of $\mathcal{F}$-formulae modulo $\mathbf{T}$ and which we denote, for simplicity, by $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$, or just $\mathbf{Gr}_{{\mathcal{I}}}({\mathcal{F}})$, if the underlying theory $\mathbf{T}$ is understood.
We denote the class of a
formula $\phi$ in $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ by ${\left[\phi\right]}$, or in case we want to emphasize the isomorphism type, by
${\left[\phi\right]}_{\mathcal{I}}$. We denote the class of $\bot$ and $\top$ by $0$ and $1$ respectively; they are the neutral elements for addition and multiplication in $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ respectively. Note that $1$ is equal to the class of $v_{1}=0$ defining a singleton.
The full Grothendieck ring of a theory $\mathbf{T}$ is obtained by taking for $\mathcal{F}$ and $\mathcal{I}$ simply all formulae. It will be denoted ${\mathbf{Gr}(\mathbf{T})}$.
Immediately from the definitions, we have:
3.5 Corollary.
For each pair $\mathcal{F},\mathcal{I}\subseteq\operatorname{Def}({\mathbf{T}})$ as above, there exists a canonical additive epimorphism of groups ${\mathbf{Sciss}({\mathcal{F}})}\twoheadrightarrow\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$.
Moreover,
if $\mathbf{T}^{\prime}$ is a subtheory of $\mathbf{T}$, and $\mathcal{E}xpl\subseteq\mathcal{I}^{\prime}\subseteq\mathcal{I}$ and $\mathcal{F}^{\prime}\subseteq\mathcal{F}$ are primary sub-semi-lattices, then we have a natural homomorphism of Grothendieck rings $\mathbf{Gr}^{\mathbf{T}^{\prime}}_{\mathcal{I}^{\prime}}({\mathcal{F}^{\prime}})\to\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$.∎
Note that even if $\mathbf{T}=\mathbf{T}^{\prime}$ and $\mathcal{I}=\mathcal{I}^{\prime}$, the latter homomorphism need not be injective, as there are potentially more relations when the class of formulae is larger.
The Lefschetz class
We denote the class of the first Lefschetz formula $\lambda:=(v_{1}=v_{1})$ by ${\mathbb{L}}$ (recall that by assumption $\lambda$ belongs to $\mathcal{F}$), and call it the Lefschetz class of $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$. By definition of product, we immediately get:
3.6 Lemma.
For every $n$, we have ${\left[\lambda_{n}\right]}={\mathbb{L}}^{n}$ in $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$.∎
3.7 Lemma.
If $\mathcal{F}$ is a lattice, then ${\left[\phi\oplus\psi\right]}={\left[\phi\right]}+{\left[\psi\right]}$ in $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$, for all $\phi,\psi\in{\mathcal{F}}$. If $\mathcal{F}$ is moreover Boolean, then ${\left[\neg\phi\right]}={\mathbb{L}}^{n}-{\left[\phi\right]}$, where $n$ is the arity of $\phi$.
Proof.
To prove the first assertion, let $\phi^{\prime}(x,z):=\phi(x)\wedge(z=0)$ and $\psi^{\prime}(x,z):=\psi(x)\wedge(z=1)$. By definition, and after possibly taking primary forms, we have ${\left[\phi^{\prime}\vee\psi^{\prime}\right]}={\left[\phi\oplus\psi\right]}$. The claim now follows since $\phi^{\prime}\wedge\psi^{\prime}$ defines the empty set in any model $M$ of $\mathbf{T}$, and hence its class is zero.
To prove the second, observe that ${\langle\phi\wedge\neg\phi\rangle}=0$, whereas $\phi\vee\neg\phi$ is $\mathbf{T}$-equivalent with $\lambda_{n}$. Hence the result follows by Lemma 3.6.∎
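For instance (our illustration, not taken from the text), taking $\phi$ to be the unary formula $v_{1}=0$, whose class is $1$, the second assertion of Lemma 3.7 yields the class of the punctured affine line:

```latex
{\left[\neg\phi\right]} \;=\; {\mathbb{L}}^{1}-{\left[\phi\right]} \;=\; {\mathbb{L}}-1.
```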
3.8 Lemma.
If $\mathcal{F}$ is Boolean, then the image of the ideal $\mathfrak{N}_{\text{sciss}}$ in $Z:=\mathbb{Z}[\mathcal{F}]/\mathfrak{N}_{\mathcal{I}}$
is generated as a group by all formal sums of the form
${\langle\phi\rangle}+{\langle\psi\rangle}-{\langle\phi\oplus\psi\rangle}$, with $\phi$ and $\psi$
formulae in ${\mathcal{F}}$.
Proof.
Let $\mathfrak{N}^{\prime}$ be the set of all formal sums in $Z$ of scissor relations
${\langle\alpha\rangle}+{\langle\beta\rangle}-{\langle\alpha\oplus\beta\rangle}$, with $\alpha$ and
$\beta$ in ${\mathcal{F}}$. Let $\phi$ and $\psi$ be two formulae
in ${\mathcal{F}}$. Then the scissor relation
${\langle\phi\rangle}+{\langle\psi\rangle}-{\langle\phi\wedge\psi\rangle}-{\langle\phi\vee\psi\rangle}$ is equal to the
difference
$$\Big{(}{\langle\phi\rangle}+{\langle\neg\phi\wedge\psi\rangle}-{\langle\phi\vee\psi\rangle}\Big{)}-\Big{(}{\langle\neg\phi\wedge\psi\rangle}+{\langle\phi\wedge\psi\rangle}-{\langle\psi\rangle}\Big{)}$$
whence belongs to $\mathfrak{N}^{\prime}$, since $\phi\vee\psi$ is $\mathcal{I}$-isomorphic with $\phi\oplus(\neg\phi\wedge\psi)$. Using (11), it is clear that $\mathfrak{N}^{\prime}$ is closed under multiples, whence is an ideal of $Z$.
∎
In particular, $\mathfrak{N}$ is generated as a group in $\mathbb{Z}[\mathcal{F}]$ by $\mathfrak{N}_{\mathcal{I}}$ and all formal sums ${\langle\phi\rangle}+{\langle\psi\rangle}-{\langle\phi\oplus\psi\rangle}$, for $\phi,\psi\in\mathcal{F}$.
3.9 Corollary.
Suppose $\mathcal{F}$ is a Boolean lattice. If $\mathcal{G}\subseteq\mathcal{F}$ is a sub-semi-lattice whose Boolean closure is equal to $\mathcal{F}$, then the natural map $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{G}})\to\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ is surjective, and hence $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ is generated as a group by classes of formulae in $\mathcal{G}$. If $\mathcal{G}$ is moreover closed under disjoint sums, then any element in $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ is of the form ${\left[\psi\right]}-{\left[\psi^{\prime}\right]}$, with $\psi,\psi^{\prime}\in\mathcal{G}$.
Proof.
Let $G$ be the image of $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{G}})\to\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ (see Corollary 3.5), that is to say, the subgroup generated by all ${\left[\phi\right]}$ with $\phi\in\mathcal{G}$.
By Lemma 3.7, the class of the negation of a formula in $\mathcal{G}$ lies in $G$. Since every term in ${\left[S_{n}(\phi_{1},\dots,\phi_{n})\right]}$, for $\phi_{i}\in\mathcal{G}$, lies by assumption in $G$, so does the class of any disjunction $\phi_{1}\vee\dots\vee\phi_{n}$ by Proposition 2.3 and Corollary 3.5. This proves the first assertion.
To prove the last, we can write, by what we just proved, any element $u$ of $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ as a sum ${\left[\phi_{1}\right]}+\dots+{\left[\phi_{s}\right]}-{\left[\phi_{s+1}\right]}-\dots-{\left[\phi_{t}\right]}$, with $\phi_{i}\in\mathcal{G}$. Let $\psi$ be the disjoint sum $\phi_{1}\oplus\dots\oplus\phi_{s}$ and let $\psi^{\prime}$ be the disjoint sum $\phi_{s+1}\oplus\dots\oplus\phi_{t}$. By assumption, both $\psi$ and $\psi^{\prime}$ lie in $\mathcal{G}$, and $u={\left[\psi\right]}-{\left[\psi^{\prime}\right]}$ by Lemma 3.8.
∎
We
call two ${\mathcal{F}}$-formulae
$\phi_{1}$ and $\phi_{2}$ stably $\mathcal{I}$-isomorphic in
${\mathcal{F}}$ (modulo $\mathbf{T}$),
if there exists
an ${\mathcal{F}}$-formula $\psi$ such that $\phi_{1}\oplus\psi\cong_{\mathcal{I}}\phi_{2}\oplus\psi$. A priori, this is weaker than being $\mathcal{I}$-isomorphic, but in many cases, as we shall see, it is equivalent to it. In any case, we have:
3.10 Lemma.
Suppose $\mathcal{F}$ is Boolean. If $\phi_{1}\oplus\psi\cong_{\mathcal{I}}\phi_{2}\oplus\psi$ and $\psi\Rightarrow\psi^{\prime}$ modulo $\mathbf{T}$, then $\phi_{1}\oplus\psi^{\prime}\cong_{\mathcal{I}}\phi_{2}\oplus\psi^{\prime}$.
Proof.
By adding a disjoint copy to either side, we get
$$\phi_{1}\oplus\psi\oplus(\psi^{\prime}\wedge\neg\psi)\cong_{\mathcal{I}}\phi_{2}\oplus\psi\oplus(\psi^{\prime}\wedge\neg\psi).$$
Since $\psi\Rightarrow\psi^{\prime}$, the formulae $\psi^{\prime}$ and $\psi\oplus(\psi^{\prime}\wedge\neg\psi)$ are $\mathcal{I}$-isomorphic.
∎
3.11 Theorem.
Suppose $\mathcal{F}$ is Boolean, and $\cong_{\mathcal{I}}$ is an equivalence relation. Two formulae $\phi,\psi\in{\mathcal{F}}$ have the same class in $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})$ if and only if they are stably $\mathcal{I}$-isomorphic in ${\mathcal{F}}$.
Proof.
One direction is easy, so that we only need to verify the direct implication. Let $\bar{\mathcal{F}}$ be the quotient of $\mathcal{F}$ modulo the equivalence $\cong_{\mathcal{I}}$, that is to say, the collection of $\mathcal{I}$-isomorphism classes of formulae in $\mathcal{F}$. Let
$Z:=\mathbb{Z}[\mathcal{F}]/\mathfrak{N}_{\mathcal{I}}$, so that $\mathbf{Gr}^{\mathbf{T}}_{\mathcal{I}}({\mathcal{F}})\cong Z/\mathfrak{N}_{\text{sciss}}Z$. Moreover, the quotient map $\mathcal{F}\to\bar{\mathcal{F}}$ induces an isomorphism $Z\cong\mathbb{Z}[\bar{\mathcal{F}}]$, so that as an Abelian group, $Z$ is freely generated.
By Lemma 3.8, we can find ${\mathcal{F}}$-formulae $\phi_{i}$, $\phi_{i}^{\prime}$, $\psi_{i}$, and $\psi_{i}^{\prime}$ (without loss of generality we may assume that their number is the same),
such that
(12)
$${\langle\phi\rangle}+\sum_{i}{\langle\phi_{i}\rangle}+{\langle\phi_{i}^{\prime}\rangle}+{\langle\psi_{i}\oplus\psi_{i}^{\prime}\rangle}={\langle\psi\rangle}+\sum_{i}{\langle\psi_{i}\rangle}+{\langle\psi_{i}^{\prime}\rangle}+{\langle\phi_{i}\oplus\phi_{i}^{\prime}\rangle}$$
in $Z$.
Let $\sigma$ be the disjoint sum of all the formulae
$\phi_{i},\phi_{i}^{\prime},\psi_{i}$ and $\psi_{i}^{\prime}$. Since $Z$ is freely generated, every formula on the left hand side of equation (12) also appears on its right hand side, and vice versa. Hence both formulae
$\phi\oplus\sigma$ and $\psi\oplus\sigma$ yield the same class in $Z\cong\mathbb{Z}[\bar{\mathcal{F}}]$, whence must be $\mathcal{I}$-isomorphic.
∎
4. Affine schemes of finite type
All schemes are assumed to be Noetherian, even if we do not always mention this. Let $X=\operatorname{Spec}A$ be an affine Noetherian scheme. By an $X$-scheme, we mean a separated scheme $Y$ together with a morphism of finite type $Y\to X$. Hence, affine $X$-schemes are in one-one correspondence with finitely generated $A$-algebras.
We call a scheme $Y$ a variety if it is reduced (but not necessarily irreducible).
Fix an affine Noetherian scheme $X=\operatorname{Spec}A$. Let $\mathcal{L}_{A}$ be the language of $A$-algebras
in the signature consisting of two binary operations $+$ and $\cdot$, plus
constant symbols for each element in $A$. A formula in this language will simply be called an
$A$-algebra formula. By a schemic formula in $\mathcal{L}_{A}$, we
mean a
finite conjunction of formulae $f(x)=0$, where $f\in A[x]$ and
$x$ is a finite tuple of indeterminates. We denote the collection of all
schemic formulae by $\mathcal{S}ch$.
Let $\mathbf{T}_{A}$ be the theory of $A$-algebras, that is to say, the theory
whose models are the $\mathcal{L}_{A}$-structures that carry the structure of an
$A$-algebra. We also consider some extensions of this theory.
As we shall see, and as would not be surprising to an algebraic geometer, it suffices to work with local rings. Being local is a first-order property, as it is equivalent to the statement that the sum of any two non-units is again a non-unit, and hence $\mathbf{T}_{A}^{loc}$ is the theory $\mathbf{T}_{A}$ to which we adjoin the first-order sentence
$$(\forall x,y)\Big{(}(\forall a)(ax\neq 1)\wedge(\forall b)(by\neq 1)\Rightarrow(\forall c)((x+y)c\neq 1)\Big{)}.$$
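The locality sentence can be tested mechanically in small finite rings; the following sketch (our illustration, with a helper `is_local_znz` that is not from the text) checks it for $\mathbb{Z}/n\mathbb{Z}$, where the non-units are precisely the residues not coprime to $n$.

```python
from math import gcd

def is_local_znz(n):
    """Check the first-order locality sentence in Z/nZ: the sum of
    any two non-units must again be a non-unit."""
    nonunits = [a for a in range(n) if gcd(a, n) != 1]
    return all((a + b) % n in nonunits for a in nonunits for b in nonunits)

# Z/8 is local with maximal ideal (2); Z/6 = Z/2 x Z/3 is not local,
# since the non-units 2 and 3 sum to the unit 5.
print(is_local_znz(8), is_local_znz(6))
```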
But not only can we restrict to local rings, we may restrict our theory to zero-dimensional algebras. More precisely, let $\mathbf{Art}_{A}$ be the class of all local $A$-algebras of finite length as an $A$-module, for short, the local $A$-Artinian algebras. Unfortunately, $\mathbf{Art}_{A}$ is not elementary, and hence its theory will have models that are not Artinian local rings. Nonetheless, as observed earlier, whenever we have to verify an equivalence or an isomorphism modulo this theory, it suffices to check this on the rings in $\mathbf{Art}_{A}$. Finally, the “classical” theory is recovered from looking at the theory $\mathbf{ACF}_{A}$, consisting of all algebraically closed fields that are $A$-algebras. Instead of writing $\mathcal{L}_{A}$ and $\mathbf{Art}_{A}$, we also may write $\mathcal{L}_{X}$ or $\mathbf{Art}_{X}$, when we take a more geometrical point of view. Similarly, given an $A$-algebra $B$ and a schemic formula $\phi$, we sometimes call $\phi(B)$ the definable subset of $Y:=\operatorname{Spec}B$ given by $\phi$, and denote it $\phi(Y)$.
4.1 Theorem.
Let $A$ be a Noetherian ring, $x$ an $n$-tuple of indeterminates, and
${\mathbb{A}_{A}^{n}}$ the affine scheme $\operatorname{Spec}(A[x])$.
There is a one-one correspondence between the following four sets:
(1)
the set of $\mathbf{T}_{A}$-equivalence classes of
schemic formulae of arity $n$;
(2)
the set of $\mathbf{Art}_{A}$-equivalence classes of
schemic formulae of arity $n$;
(3)
the set of all ideals in
$A[x]$;
(4)
the set of all closed
subschemes of ${\mathbb{A}_{A}^{n}}$.
Proof.
The one-one correspondence between (3) and (4) is of course
well-known: to an ideal $I\subseteq A[x]$ one associates the closed subscheme
$\operatorname{Spec}(A[x]/I)$. Let $\phi$ be the schemic formula
$f_{1}(x)=\dots=f_{s}(x)=0$, with $f_{i}\in A[x]$, and let $I(\phi):=(f_{1},\dots,f_{s})A[x]$. Suppose $\psi$ is another schemic formula in the free
variables $x$ which is
$\mathbf{T}_{A}$-equivalent to $\phi$.
We need to show that $I(\phi)=I(\psi)$. Let $C_{\psi}:=A[x]/I(\psi)$. Since
$x$ satisfies $\psi$ in the $A$-algebra $C_{\psi}$, it must also satisfy
$\phi$ in $C_{\psi}$, by definition of equivalence. This means that
all $f_{i}(x)$ are zero in $C_{\psi}$, that is to say, $f_{i}\in I(\psi)$. Hence
$I(\phi)\subseteq I(\psi)$. Reversing the argument then shows that both ideals are
equal. Conversely, if both ideals are the same, then writing the $f_{i}$ in terms
of the generators of $I(\psi)$ shows that any solution to $\psi$, in any
$A$-algebra, is also a solution to $\phi$, and vice versa. Hence the
two formulas are equivalent.
It remains to show the equivalence of (2) with the remaining conditions. One direction is trivial, so assume $\phi$ and $\psi$ are $\mathbf{Art}_{A}$-equivalent, that is to say, $\phi(R)=\psi(R)$ for all local $A$-Artinian algebras $R$. Towards a contradiction, assume $I:=I(\phi)$ and $J:=I(\psi)$ are different ideals in $B:=A[x]$. Hence, there exists a maximal ideal $\mathfrak{m}\subseteq B$ such that $IB_{\mathfrak{m}}\neq JB_{\mathfrak{m}}$. Moreover, by Krull’s Intersection Theorem, there exists an $n$ such that $IR\neq JR$, where $R:=B_{\mathfrak{m}}/\mathfrak{m}^{n}B_{\mathfrak{m}}=B/\mathfrak{m}^{n}$. Let $a$ be the image of the $n$-tuple $x$ in $R$. We have $a\in\phi(R/IR)$, since each $f_{i}(a)=0$ in $R/IR$. Since $R/IR$ has finite length as an $A$-module, $\phi(R/IR)=\psi(R/IR)$. Hence, $g(a)=0$ in $R/IR$, for any $g\in J$, showing that $g\in IR$ whence $JR\subseteq IR$. Switching the roles of $I$ and $J$, the latter inclusion is in fact an equality, contradicting the choice of $R$.
∎
From the proof of Theorem 4.1, we see that the ideal $I(\phi)$ associated to a schemic formula $\phi$ only depends on the $\mathbf{Art}_{A}$-equivalence class of $\phi$. We denote the affine scheme corresponding to $\phi$ by $Y_{\phi}$, that is to say, $Y_{\phi}:=\operatorname{Spec}(A[x]/I(\phi))$.
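By Theorem 4.1, deciding $\mathbf{T}_{A}$-equivalence of two schemic formulae amounts to comparing the ideals they generate, which over a field can be done by comparing reduced Gröbner bases. A sketch using sympy (the particular generators are our illustration, not from the text):

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# phi is the schemic formula (x + y = 0) & (x - y = 0), psi is
# (x = 0) & (y = 0).  Over Q they generate the same ideal of Q[x, y],
# hence define the same closed subscheme of the affine plane.
gb_phi = groebner([x + y, x - y], x, y, order='lex')
gb_psi = groebner([x, y], x, y, order='lex')
print(gb_phi.exprs, gb_psi.exprs)
```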
4.1.1. Base change
Let $A^{\prime}$ be an $A$-algebra, that is to say, a morphism $X^{\prime}:=\operatorname{Spec}(A^{\prime})\to X:=\operatorname{Spec}(A)$. We may assign to each $A$-algebra $B$ its scalar extension $A^{\prime}\otimes_{A}B$, or in terms of affine schemes, $Y=\operatorname{Spec}(B)$ yields by base change the affine $X^{\prime}$-scheme $X^{\prime}\times_{X}Y$. In terms of formulae, if $\phi$ is the schemic $\mathcal{L}_{A}$-formula defining $Y$, then we may view $\phi$ also as an $\mathcal{L}_{A^{\prime}}$-formula. As such, it defines the base change $X^{\prime}\times_{X}Y$.
Under the one-one correspondence of Theorem 4.1, schemic sentences correspond to ideals of $A$. More precisely, if $\sigma$ is the schemic sentence $a_{1}=\dots=a_{s}=0$ with $a_{i}\in A$, and $\mathfrak{a}:=(a_{1},\dots,a_{s})A$ the corresponding ideal, then in a model $C$ of $\mathbf{T}_{A}$, the interpretation of $\sigma$ is either the empty set, in case $\mathfrak{a}C\neq 0$, or the singleton $\{\emptyset\}$, in case $\mathfrak{a}C=0$.
Note that the previous result is false for non-schemic formulae. For instance, if $\phi(x):=(x^{2}=x)$ and $\psi(x):=(x=0)\vee(x=1)$, then $\phi\Rightarrow\psi$ in the theory $\mathbf{Art}_{A}$ (since local rings only have trivial idempotents), but not in $\mathbf{T}_{A}$ (take, for instance, as model $C:=A[x]/(x^{2}-x)A[x]$). In fact, the schemic formula $y=x$ defines a morphism $\phi\to\psi$ in $\mathbf{Art}_{A}$, but not in $\mathbf{T}_{A}$ (take again $C$ as the model; see also Example 4.3 below).
Unless stated explicitly otherwise, we will from now on assume that the underlying theory is $\mathbf{Art}_{A}$.
Disjoint sums and unions
Although we have the general construction of a disjoint sum $\oplus$ of two schemic formulae, the result is no longer a schemic formula. To this end, we define the disjoint union of two schemic formulae $\phi$ and $\psi$ as follows. Let $n$ be the maximum of the arities of $\phi$ and $\psi$, and put $z:=v_{n+1}$. Let $I(\phi)$ and $I(\psi)$ be the respective ideals of $\phi$ and $\psi$ in $A[x]$, with $x=(x_{1},\dots,x_{n})$, and put
$$\mathfrak{a}:=((1-z)I(\phi),zI(\psi),z(z-1))A[x,z].$$
Then the disjoint union of $\phi$ and $\psi$ is the schemic formula $\phi\sqcup\psi$ given by the ideal $\mathfrak{a}$, that is to say, the conjunction of all equations $(z-1)f=0$ and $zg=0$, with $f\in I(\phi)$ and $g\in I(\psi)$, together with $z(z-1)=0$.
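The generators of $\mathfrak{a}$ can be written down mechanically; the helper below (our own naming, not from the text) returns them for given generating sets of $I(\phi)$ and $I(\psi)$, and for $I(\lambda)=(0)$ it recovers the single equation $z(z-1)=0$ of Example 4.3 below.

```python
from sympy import expand, symbols

x, z = symbols('x z')

def disjoint_union_gens(I_phi, I_psi):
    """Generators of the ideal a defining phi ⊔ psi: the polynomials
    (1 - z)*f for f in I(phi), z*g for g in I(psi), and z*(z - 1)."""
    return ([expand((1 - z) * f) for f in I_phi]
            + [expand(z * g) for g in I_psi]
            + [expand(z * (z - 1))])

# For the Lefschetz formula, I(lambda) = (0), so lambda ⊔ lambda is
# cut out by z*(z - 1) = 0 alone.
print(disjoint_union_gens([], []))
```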
4.2 Lemma.
Given schemic formulae $\phi$ and $\psi$, their disjoint union $\phi\sqcup\psi$ is $\mathbf{Art}_{A}$-equivalent to their disjoint sum $\phi\oplus\psi$.
Proof.
It suffices to show that both formulae define the same subset in any local $A$-Artinian algebra $R$. Let $\phi^{\prime}$ and $\psi^{\prime}$ be the formulae $\phi(x)\wedge(z=0)$ and $\psi(x)\wedge(z=1)$ respectively, where $(x,z)=(v_{1},\dots,v_{n+1})$ as above. In particular, $\phi\oplus\psi=\phi^{\prime}\vee\psi^{\prime}$.
Suppose $(a,c)\in(\phi\sqcup\psi)(R)$. Since $R$ is local, its only idempotents are $0$ and $1$. Hence, since $c^{2}=c$, we may assume, upon reversing the role of $\phi$ and $\psi$ if necessary, that $c=0$. In particular, $(1-c)f(a)=f(a)=0$ in $R$, for all $f\in I(\phi)$, showing that $a\in\phi(R)$, whence $(a,c)\in(\phi^{\prime}\vee\psi^{\prime})(R)$. Conversely, suppose $(\phi^{\prime}\vee\psi^{\prime})(a,c)$ holds in $R$, so that one of the disjuncts is true in $R$, say, $\phi^{\prime}(a,c)$. In particular, $0=c=c(c-1)$ and $f(a)=0$ for all $f\in I(\phi)$, showing that $(a,c)\in(\phi\sqcup\psi)(R)$.
∎
4.3 Example.
All we needed from the models of $\mathbf{Art}_{A}$ was that they were local. However, the result is false in $\mathbf{T}_{A}$: for instance, let $\lambda$ be the schemic (Lefschetz) formula corresponding to the zero ideal in $A[x]$, with $x$ a single variable. Then $\lambda\oplus\lambda$ is the formula $(z=0)\vee(z=1)$, whereas $\lambda\sqcup\lambda$ is the formula $z^{2}-z=0$ (with the usual primary form assumption that $x=v_{1}$ and $z=v_{2}$). However, in the $A$-algebra $C:=A[t]/(t^{2}-t)A[t]$ these formulae define different subsets: $(\lambda\oplus\lambda)(C)$ is the subset $C\times\{0,1\}$, whereas $(\lambda\sqcup\lambda)(C)$ contains $(0,t)$.
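The role of locality is visible already in $\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}$, which carries the nontrivial idempotents $3$ and $4$; a quick enumeration (our illustration, not from the text):

```python
# In Z/6, the equation z^2 = z (the z-part of lambda ⊔ lambda) has four
# solutions, while the disjunction (z = 0) v (z = 1) coming from
# lambda ⊕ lambda allows only two, so the two formulae differ there.
idempotents = {c for c in range(6) if (c * c - c) % 6 == 0}
print(sorted(idempotents))  # [0, 1, 3, 4]
```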
Immediately from Lemma 4.2, we get:
4.4 Lemma.
If $\phi$ and $\psi$ are schemic formulae with corresponding affine schemes $Y_{\phi}$ and $Y_{\psi}$, then $\phi\sqcup\psi$ corresponds to the disjoint union $Y_{\phi}\sqcup Y_{\psi}$. ∎
4.5 Lemma.
For $X:=\operatorname{Spec}A$ an affine Noetherian scheme, $\phi$ a schemic formula defining an affine $X$-scheme $Y:=Y_{\phi}$, and $B$ an $A$-algebra, the definable subset $\phi(B)$ is in one-one correspondence with $\operatorname{Mor}_{X}(\operatorname{Spec}B,Y)$, the set of $B$-rational points on $Y$ over $X$.
Proof.
Put $Z:=\operatorname{Spec}B$. Recall that $\operatorname{Mor}_{X}(Z,Y)$ consists of all morphisms of schemes $Z\to Y$ over $X$. Such a map is uniquely determined by an $A$-algebra homomorphism $A[x]/I(\phi)\to B$, and this in turn, is uniquely determined by the image $b$ of $x$ in $B$. Since $b$ is therefore a solution of all $f\in I(\phi)$, it satisfies $\phi$. Conversely, any tuple $b\in\phi(B)$ induces an $A$-algebra homomorphism $A[x]/I(\phi)\to B$.
∎
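For a finite ring $B$, the correspondence of Lemma 4.5 can be made explicit by enumeration. A sketch (our choice of example: $A=\mathbb{Z}$, the schemic formula $x^{2}+1=0$, and $B=\mathbb{Z}/5\mathbb{Z}$):

```python
# Solutions of x^2 + 1 = 0 in Z/5 correspond bijectively to the
# Z-algebra homomorphisms Z[x]/(x^2 + 1) -> Z/5, i.e. to the
# Z/5-rational points of Spec(Z[x]/(x^2 + 1)) over Spec(Z).
points = sorted(b for b in range(5) if (b * b + 1) % 5 == 0)
print(points)  # [2, 3]
```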
In fact, we may view $\phi$ as a functor on the category of $A$-algebras and as such it agrees with the functor represented by the scheme $Y_{\phi}$ over $X$. More precisely, if $B\to C$ is an $A$-algebra homomorphism then the induced map $B^{n}\to C^{n}$ on the Cartesian products maps $\phi(B)$ into $\phi(C)$. On the other hand, the associated map of schemes $\operatorname{Spec}C\to\operatorname{Spec}B$ induces, by composition, a map $\operatorname{Mor}_{X}(\operatorname{Spec}B,Y_{\phi})\to\operatorname{Mor}_{X}(\operatorname{Spec}C,Y_{\phi})$. Under the one-one correspondence in Lemma 4.5, we get a commutative diagram
(13)
$$\begin{matrix}\phi(B) & \longrightarrow & \phi(C)\\ \downarrow & & \downarrow\\ \operatorname{Mor}_{X}(\operatorname{Spec}B,Y_{\phi}) & \longrightarrow & \operatorname{Mor}_{X}(\operatorname{Spec}C,Y_{\phi})\end{matrix}$$
Schemic morphisms
By definition, a $\mathcal{S}ch$-morphism $f\colon\phi\to\psi$ between $\mathcal{L}_{A}$-formulae $\phi(x)$ and $\psi(y)$ is given by a schemic formula $\theta(x,y)$, such that $\theta(B)$ is the graph of a function $f_{B}\colon\phi(B)\to\psi(B)$, for every $A$-algebra $B$. In particular, $\theta$ is explicit, that is to say, belongs to $\mathcal{E}xpl$, if it is of the form $\bigwedge_{i=1}^{m}y_{i}=p_{i}(x)$, with $p_{i}\in A[x]$. For each $A$-algebra $B$, we denote the global map defined by this explicit formula by $p_{B}\colon{\mathbb{A}_{B}^{n}}\to{\mathbb{A}_{B}^{m}}$, where $p=(p_{1},\dots,p_{m})$. In particular, $f_{B}\colon\phi(B)\to\psi(B)$ is then the restriction of $p_{B}$.
4.6 Proposition.
Every schemic morphism between schemic formulae is explicit.
Proof.
Let us first prove this modulo $\mathbf{T}_{A}$. Let $f_{\theta}\colon\phi\to\psi$ be a schemic morphism between the schemic formulae $\phi(x)$ and $\psi(y)$, with $x=(x_{1},\dots,x_{n})$ and $y=(y_{1},\dots,y_{m})$. Let $C:=A[x]/I(\phi)$, where $I(\phi)$ is the ideal associated to $\phi$
under the equivalence given in Theorem 4.1, that is to say, the
ideal generated by the equations that make up the schemic formula $\phi$.
Let $a$ denote
the $n$-tuple in $C$ given by the image of the variables $x$ in $C$. In particular,
$a\in\phi(C)$, and hence $f_{C}(a)\in\psi(C)$. Let $p$
be a tuple of polynomials in $A[x]$ whose image is the tuple $f_{C}(a)$ in $C$. Suppose
$\theta$ is the conjunction of formulae $h_{i}(x,y)=0$, with $h_{i}\in A[x,y]$, for $i=1,\dots,s$, and $\psi$ is the conjunction of formulae $g_{j}(y)=0$ with
$g_{j}\in A[y]$, for $j=1,\dots,t$. Put $\tilde{h}_{i}(x):=h_{i}(x,p(x))$
and $\tilde{g}_{j}(x):=g_{j}(p(x))$. Since both $\theta(a,p(a))$ and $\psi(p(a))$ hold in $C$, all $\tilde{h}_{i}$ and $\tilde{g}_{j}$ belong to $I(\phi)$.
Therefore, if $B$ is an arbitrary $A$-algebra, and $b\in\phi(B)$, then all $\tilde{h}_{i}(b)$ and $\tilde{g}_{j}(b)$ are
zero.
In particular, $p(b)$ lies in $\psi(B)$ and $\theta(b,p(b))$ holds in $B$. By condition (2) in the definition of a morphism, $f_{B}(b)$ must be equal to $p(b)$. Hence we showed that $\theta(x,y)$ is isomorphic to the explicit formula $\gamma(x,y)$ given as the conjunction of all $y_{i}=p_{i}(x)$. By Theorem 4.1, the two schemic formulae $\theta$ and $\gamma$ being $\mathbf{T}_{A}$-equivalent, are then also $\mathbf{Art}_{A}$-equivalent, showing that $f$ is explicit modulo the latter theory.
∎
4.7 Example.
In contrast, a schemic morphism between formulae that are not both schemic need not be explicit. For instance, let $\phi(x_{1},x_{2})$ be the formula $x_{1}x_{2}=1$ and let $\psi(x_{1}):=(\exists x_{2})\phi(x_{1},x_{2})$. Then the formula $(x_{1}=y_{1})\wedge(x_{1}y_{2}=1)$ is morphic and yields a schemic morphism $f\colon\psi(x_{1})\to\phi(y_{1},y_{2})$, but this is not explicit, since $1/x_{1}$ is not a term (polynomial). Put differently, for any $A$-algebra $B$, the map $f_{B}\colon\psi(B)\to\phi(B)\colon b\mapsto(b,1/b)$ is not induced by any total map ${\mathbb{A}_{B}^{1}}\to{\mathbb{A}_{B}^{2}}$. Note that $f$ is even a bijection, and its inverse $\phi(x_{1},x_{2})\to\psi(y_{1})$ is given by the explicit formula $\theta(x_{1},x_{2},y_{1}):={\langle y_{1}=x_{1}\rangle}$. In particular, the latter explicit bijection is not an $\mathcal{E}xpl$-isomorphism, since $\operatorname{inv}(\theta)(x_{1},y_{1},y_{2})={\langle y_{1}=x_{1}\rangle}$ is not explicit, as it does not contain a conjunct of the form $y_{2}=g(x_{1})$ with $g$ a polynomial (as pointed out, $1/x_{1}$ is not a term). By Lemma 3.2, however, $f$ is a $\mathcal{S}ch$-isomorphism.
4.8 Corollary.
There is a one-one correspondence between schemic morphisms $\phi\to\psi$ modulo $\mathbf{Art}_{A}$ and morphisms $Y_{\phi}\to Y_{\psi}$ of schemes over $X$.
Proof.
If $\phi\to\psi$ is a schemic morphism, then by Proposition 4.6, there exists a tuple of polynomials $p$ such that $\phi(B)\to\psi(B)$ is given by the base change of the polynomial map $p\colon{\mathbb{A}_{A}^{n}}\to{\mathbb{A}_{A}^{m}}$. By construction, the restriction of $p$ to the closed subscheme $Y_{\phi}$ of ${\mathbb{A}_{A}^{n}}$ has image inside the closed subscheme $Y_{\psi}\subseteq{\mathbb{A}_{A}^{m}}$, and hence induces a morphism $Y_{\phi}\to Y_{\psi}$ over $X$. Conversely, any morphism $Y_{\phi}\to Y_{\psi}$ is easily seen to induce a morphism $\phi\to\psi$ under the one-one correspondence given by Lemma 4.5.
∎
Proposition 4.6 shows that $\mathcal{S}ch$ is compositionally closed relative to $\mathcal{S}ch$.
4.9 Corollary.
For each Noetherian ring $A$, there is a one-one correspondence between $\mathcal{S}ch$-isomorphism classes of schemic formulae modulo $\mathbf{Art}_{A}$, and isomorphism classes of
affine schemes of finite type over $A$.
Proof.
By Theorem 4.1, a schemic formula $\phi$ corresponds to a scheme $Y_{\phi}$ of finite
type over $X:=\operatorname{Spec}A$, and by Corollary 4.8, a schemic definable
map $\phi\to\psi$ induces a morphism $Y_{\phi}\to Y_{\psi}$ of schemes over $X$. Using this correspondence, one checks that an isomorphism of formulae corresponds to an isomorphism of schemes over $X$.
∎
Recall that a pp-formula (positive primitive formula) is the projection of a schemic formula, that is to say, a formula of the form $(\exists y)\psi(x,y)$, where $\psi(x,y)$ is a schemic formula. To emphasize that pp-formulae represent projections, we will also denote a pp-formula as
$$\texttt{Im}(\psi)(x):=(\exists y)\psi(x,y).$$
It follows from Lemma 3.1 that the collection of all pp-formulae is compositionally closed.
4.10 Corollary.
If $\phi$ is a schemic formula and $\theta$ a pp-morphism $f_{\theta}\colon\phi\to\psi$, then $\theta$ is explicit.
Proof.
By the same argument as in the proof of Proposition 4.6, it suffices to show this modulo $\mathbf{T}_{A}$.
Let us first show this result for $\psi$ a pp-formula.
We can repeat the proof of Proposition 4.6, up to the point that we introduced the polynomials $\tilde{h}_{i}$ and $\tilde{g}_{j}$. Instead, $\theta(x,y)$ is of the form $(\exists z)\zeta(x,y,z)$, with $\zeta$ a schemic formula in the variables $x$, $y$, and $z$, and likewise, $\psi$ is of the form $\texttt{Im}(\gamma)=(\exists z)\gamma(x,z)$, with $\gamma$ a schemic formula in the variables $x$ and $z$. Let $I(\zeta)$ and $I(\gamma)$ be generated respectively by polynomials $h_{i}(x,y,z)\in A[x,y,z]$ and $g_{j}(x,z)\in A[x,z]$, for $i=1,\dots,s$ and $j=1,\dots,t$. Since $\theta(a,p(a))$ and $\psi(p(a))$ hold in $C$, we can find tuples of polynomials $g,q\in A[x]$ so that $\zeta(a,p(a),g(a))$ and $\gamma(p(a),q(a))$ hold in $C$, implying that all $\tilde{h}_{i}(x):=h_{i}(x,p(x),g(x))$ and $\tilde{g}_{j}(x):=g_{j}(p(x),q(x))$ lie in $I(\phi)$. Hence in an arbitrary $A$-algebra $B$, we have, for every $b\in\phi(B)$, that $\tilde{h}_{i}(b)$ and $\tilde{g}_{j}(b)$ are all zero. This proves $p(b)\in\psi(B)$ and $(b,p(b))\in\theta(B)$, and we can now finish the proof as in Proposition 4.6, concluding the case that $\psi$ is a pp-formula.
For the general case, let $\psi$ be arbitrary, and define the pp-formula
$$\texttt{Im}(\phi\wedge\theta)(y):=(\exists x)\phi(x)\wedge\theta(x,y).$$
In order to show that $\theta$ defines a morphism $\phi\to\texttt{Im}(\phi\wedge\theta)$, we may verify this in an arbitrary $A$-algebra $B$. Let $c\in\phi(B)$ and put $b:=f_{B}(c)$. Hence $\theta(c,b)$, whence $\texttt{Im}(\phi\wedge\theta)(b)$, holds, showing that $f_{B}$ maps $\phi(B)$ inside $\texttt{Im}(\phi\wedge\theta)(B)$. On the other hand, if $b\in\texttt{Im}(\phi\wedge\theta)(B)$, then there exists $c\in\phi(B)$ such that $\theta(c,b)$ holds, and hence $b=f_{B}(c)\in\psi(B)$. Hence, the implication $\texttt{Im}(\phi\wedge\theta)\Rightarrow\psi$ is explicit. Moreover, by our previous argument, the morphism $\phi\to\texttt{Im}(\phi\wedge\theta)$ is explicit, and therefore, so is the composition $\phi\to\texttt{Im}(\phi\wedge\theta)\to\psi$, as we wanted to show.
∎
4.11 Corollary.
If $\phi$ is a pp-formula and $f\colon\phi\to\psi$ a pp-morphism, then we can lift $f$ to an explicit morphism. More precisely, if $\phi=\texttt{Im}(\gamma)$ with $\gamma$ a schemic formula, then there exist explicit morphisms $p\colon\gamma\to\phi$ and $\tilde{f}\colon\gamma\to\psi$ with $p$ surjective, yielding a commutative diagram
(14)
$$\gamma\ \xrightarrow{\ p\ }\ \phi\ \xrightarrow{\ f\ }\ \psi,\qquad\tilde{f}=f\circ p.$$
Proof.
Let $p\colon\gamma\to\phi$ be the projection map, that is to say, the explicit morphism given by $y=x$. Hence the composition $\tilde{f}:=f\circ p$ is a pp-morphism. By Corollary 4.10, this is an explicit morphism, as we wanted to show.
∎
Zariski closure of a formula
Given a formula $\psi$ in $\mathcal{L}_{A}$, we define its Zariski closure $\bar{\psi}$ as follows. Suppose $\psi$ has arity $n$. Let $I$ be the sum of all $I(\phi)$, with $\phi\in\mathcal{S}ch_{n}$ such that $\psi\Rightarrow\phi$ holds in $\mathbf{T}_{A}$, and let $\bar{\psi}$ be a schemic formula corresponding to the ideal $I$. Hence, by Theorem 4.1, the Zariski closure is defined up to $\mathbf{Art}_{A}$-equivalence. Moreover, it satisfies the following universal property: $\psi\Rightarrow\bar{\psi}$, and if $\phi$ is a schemic formula such that $\psi\Rightarrow\phi$, then $\bar{\psi}\Rightarrow\phi$. From the definition, it follows that $\bar{\psi}$ is equivalent to the infinite conjunction of all $\phi\in\mathcal{S}ch_{n}$ such that $\psi\Rightarrow\phi$.
4.12 Lemma.
Every explicit morphism $f\colon\psi\to\psi^{\prime}$ of $\mathcal{L}_{A}$-formulae extends to an explicit morphism $\bar{f}\colon\bar{\psi}\to\bar{\psi}^{\prime}$.
Proof.
Let $\theta$ be an explicit formula defining the morphism $f_{\theta}\colon\psi\to\psi^{\prime}$, and let $p\colon{\mathbb{A}_{A}^{n}}\to{\mathbb{A}_{A}^{m}}$ be the total map defined by $\theta$. Again, by Theorem 4.1, we may work modulo $\mathbf{T}_{A}$. Hence for any $A$-algebra $B$, the induced map $f_{B}$ is simply the restriction of the base change $p_{B}$ of $p$. It remains to show that $p_{B}$ maps $\bar{\psi}(B)$ inside $\bar{\psi}^{\prime}(B)$. Let $\phi(x)$ be the schemic formula $\bar{\psi}^{\prime}(p(x))$ obtained by substituting $p$ for $y$. It follows that $\psi(B)\subseteq\phi(B)$. Since this holds for all $B$, we get $\psi\Rightarrow\phi$ and hence by the universal property of Zariski closure, $\bar{\psi}\Rightarrow\phi$. It is now easy to see that this means that $p_{B}$ maps $\bar{\psi}(B)$ inside $\bar{\psi}^{\prime}(B)$.
∎
The non-explicit, schemic map from Example 4.7 does not extend to the Zariski closure of $\psi$, as $\bar{\psi}(x_{1})=(x_{1}=x_{1})$.
4.13 Corollary.
If two $\mathcal{L}_{A}$-formulae $\psi$ and $\psi^{\prime}$ are $\mathcal{E}xpl$-isomorphic, then so are their Zariski closures $\bar{\psi}$ and $\bar{\psi}^{\prime}$.
Proof.
Let $\theta$ and $\operatorname{inv}(\theta)$ be explicit formulae defining respectively $f\colon\psi\to\psi^{\prime}$ and its inverse $g\colon\psi^{\prime}\to\psi$. Applying Lemma 4.12 to both morphisms yields explicit morphisms $\bar{f}\colon\bar{\psi}\to\bar{\psi}^{\prime}$ and $\bar{g}\colon\bar{\psi}^{\prime}\to\bar{\psi}$. Moreover, since the compositions $g\circ f\colon\psi\to\psi$ and $f\circ g\colon\psi^{\prime}\to\psi^{\prime}$ are both identity morphisms, so must their Zariski closures be, and it is not hard to see that these are $\bar{g}\circ\bar{f}$ and $\bar{f}\circ\bar{g}$ respectively. Hence, we showed that $\bar{f}$ and $\bar{g}$ are each other's inverse.
∎
The Zariski closure of a formula can in general be hard to calculate. Here is a simple example: if $F$ is an algebraically closed field and $f,g\in F[x]$ are relatively prime polynomials in a single variable, then the Zariski closure of $\psi:=(f=0)\vee(g=0)$ is the formula $fg=0$. It is clear that $\psi\Rightarrow(fg=0)$. To show that it satisfies the universal property for Zariski closures, let $\phi$ be any schemic formula implied by $\psi$. We need to show that $I(\phi)\subseteq fgF[x]$. We may reduce therefore to the case that $\phi$ is the formula $h=0$, and hence we have to show that any root of $f$ or $g$ in $F$ is also a root of $h$ of at least the same multiplicity. By the Nullstellensatz, and after a translation, it suffices to prove that if $0$ is a root of $f$ of multiplicity $e$, then it is also a root of $h$ of multiplicity at least $e$. Write $f(x)=x^{e}\tilde{f}(x)$, for some $\tilde{f}\in F[x]$ with $\tilde{f}(0)\neq 0$. Let $B:=F[x]/fgF[x]$, and put $b:=x\tilde{f}g\in F[x]$. Since $f(b)=x^{e}\tilde{f}^{e}g^{e}\tilde{f}(b)$, it is zero in $B$, that is to say, $b\in\psi(B)$. Hence, by assumption, $b\in\phi(B)$, that is to say, $h(b)=0$ in $B$. In particular, $h(b)$ is divisible by $x^{e}$ in $F[x]$, and writing out $h$ as a polynomial in $x$ then easily implies that $h$ itself must be divisible by $x^{e}$. Hence $x=0$ is a root of $h$ of multiplicity at least $e$, as we wanted to show. We expect this result to hold in far greater generality: is the Zariski closure of a disjunction $\phi_{1}\vee\phi_{2}\vee\dots\vee\phi_{s}$ of schemic formulae $\phi_{i}$ always the schemic formula corresponding to the ideal $I(\phi_{1})\cap I(\phi_{2})\cap\dots\cap I(\phi_{s})$?
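The single-variable example above can be checked mechanically. The following Python sketch (using sympy; an informal aside, not part of the formal development) carries out the witness computation for the concrete choice $f=x^{2}$ and $g=x-1$, so that $e=2$ and $\tilde{f}=1$.

```python
from sympy import symbols, rem, expand

x = symbols('x')
f = x**2          # root 0 with multiplicity e = 2
g = x - 1         # relatively prime to f
ftil = 1          # f = x**e * ftil with e = 2, ftil = 1
b = x * ftil * g  # the witness b := x*ftil*g in B = F[x]/(f*g)

# b satisfies f(b) = 0 in B, i.e. f(b) is divisible by f*g:
assert rem(expand(f.subs(x, b)), expand(f*g), x) == 0

# h = x is NOT implied by (f=0) v (g=0): h(b) = b is nonzero in B
h = x
assert rem(expand(h.subs(x, b)), expand(f*g), x) != 0

# whereas h = f*g vanishes at b in B, as the Zariski closure predicts:
assert rem(expand((f*g).subs(x, b)), expand(f*g), x) == 0
```

Here `rem` computes the remainder modulo $fg$, so a zero remainder means the element vanishes in $B=F[x]/fgF[x]$.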
5. The schemic Grothendieck ring
Let $X:=\operatorname{Spec}A$ be an affine, Noetherian scheme.
The classical Grothendieck ring
Before we discuss our generalization to schemes, let us first study the classical case. To this end, we must work in the theory $\mathbf{ACF}_{X}$, the theory of algebraically closed fields having the structure of an $A$-algebra. We have the following analogue of Theorem 4.1.
5.1 Theorem.
Let $A$ be a Noetherian ring, $x$ an $n$-tuple of indeterminates, and
${\mathbb{A}_{A}^{n}}$ the affine scheme $\operatorname{Spec}(A[x])$.
There is a one-one correspondence between the following three sets:
(1)
the set of $\mathbf{ACF}_{A}$-equivalence classes of
schemic formulae of arity $n$;
(2)
the set of radical ideals in
$A[x]$;
(3)
the set of reduced subschemes of ${\mathbb{A}_{A}^{n}}$.
Proof.
The one-one correspondence between the last two sets is again classical. Let $\phi$ and $\psi$ be schemic formulae in the free
variables $x$. Assume first that $\phi$ and $\psi$ are
$\mathbf{ACF}_{A}$-equivalent.
We need to show that $I(\phi)$ and $I(\psi)$ have the same radical. Suppose not, so that there exists a prime ideal $\mathfrak{p}\subseteq A[x]$ containing exactly one of these ideals, say, $I(\phi)$, but not the other. Let $K$ be the algebraic closure of $A[x]/\mathfrak{p}$, so that $K$ is a model of $\mathbf{ACF}_{A}$, and let $a$ denote the image of $x$ in $K$. By assumption, we can find $f\in I(\psi)$ such that $f\notin\mathfrak{p}$. In particular, since $f(a)\neq 0$ in $K$, the tuple $a$ does not belong to $\psi(K)=\phi(K)$. However, for any $g\in I(\phi)$, we have $g\in\mathfrak{p}$ whence $g(a)=0$ in $K$, contradiction.
Conversely, if both ideals have the same radical, then each $f\in I(\phi)$ has some power $f^{N}$ belonging to $I(\psi)$. In particular, if $K$ is a model of $\mathbf{ACF}_{A}$ and $c\in\psi(K)$, then $f^{N}(c)$ whence also $f(c)$ vanishes in $K$, for all $f\in I(\phi)$, showing that $c\in\phi(K)$. This shows that $\psi(K)\subseteq\phi(K)$, and the reverse inclusion follows by the same argument, proving that $\phi$ and $\psi$ are $\mathbf{ACF}_{A}$-equivalent.
∎
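The contrast with Theorem 4.1 can be made concrete in a two-line computation. The following Python sketch (an informal aside) models the Artinian local ring $F[T]/T^{2}F[T]$ by dual numbers and checks that the schemic formulae $x=0$ and $x^{2}=0$, although $\mathbf{ACF}_{F}$-equivalent since their ideals have the same radical, define different subsets of this ring, and hence are not $\mathbf{Art}_{F}$-equivalent.

```python
# Dual numbers over Q: R = Q[T]/(T^2), an Artinian local Q-algebra.
class Dual:
    def __init__(self, a, b):  # represents a + b*T with T^2 = 0
        self.a, self.b = a, b
    def __mul__(self, other):
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def is_zero(self):
        return self.a == 0 and self.b == 0

t = Dual(0, 1)  # the class of T

# Over an algebraically closed field, x=0 and x^2=0 have the same
# solutions (same radical), but in R they differ: T^2 = 0 while T != 0.
assert (t * t).is_zero()   # T satisfies x^2 = 0 ...
assert not t.is_zero()     # ... but not x = 0
```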
Let $X$ be a Noetherian affine scheme. We define its classical Grothendieck ring to be the Grothendieck ring
$${\mathbf{Gr}(X^{\operatorname{var}})}:=\mathbf{Gr}^{\mathbf{ACF}_{X}}_{\mathcal{E}xpl}({\mathcal{QF}}),$$
that is to say, the ring obtained by killing the ideal of all scissor relations and all $\mathcal{E}xpl$-isomorphism relations in the free Abelian group on all classes of quantifier free formulae modulo $\mathbf{ACF}_{X}$. The analogue of Corollary 4.9 holds, showing that the set of $\mathcal{E}xpl$-isomorphism classes of schemic formulae modulo $\mathbf{ACF}_{X}$ is in one-one correspondence with the set of isomorphism classes of reduced affine $X$-schemes. Moreover, Corollary 3.9 applies with $\mathcal{G}=\mathcal{S}ch$, so that the classes of schemic formulae generate this Grothendieck ring. Hence, we showed the first assertion of:
5.2 Corollary.
If $F$ is an algebraically closed field, then ${\mathbf{Gr}(F^{\operatorname{var}})}$ is the Grothendieck ring $K_{0}(F)$ obtained by taking the free Abelian group on isomorphism classes of varieties and killing all scissor relations. Moreover, if $F$ has characteristic zero, then it is also equal to the full Grothendieck ring ${\mathbf{Gr}(\mathbf{ACF}_{F})}$ of the theory $\mathbf{ACF}_{F}$.
Proof.
To prove the last assertion, observe that $\mathbf{ACF}_{F}$ has quantifier elimination, and therefore the full Grothendieck ring ${\mathbf{Gr}(\mathbf{ACF}_{F})}$ is generated by the classes of quantifier free formulae. The only issue is the nature of isomorphism.
In view of Corollary 3.9, it suffices to show that if $f\colon Y\to X$ is an $\mathbf{ACF}_{F}$-isomorphism of affine $F$-schemes, then ${\left[X\right]}={\left[Y\right]}$ in ${\mathbf{Gr}(F^{\operatorname{var}})}$. By [Mar, ?], we can find a constructible partition $Y=Y_{1}\cup\dots\cup Y_{s}$ of $Y$, such that each restriction $\left.f\right|_{{Y_{i}}}$ is an explicit isomorphism (note that we need characteristic zero to avoid having to take $p$-th roots). Hence ${\left[Y_{i}\right]}={\left[f(Y_{i})\right]}$ in ${\mathbf{Gr}(F^{\operatorname{var}})}$, and the result now follows since ${\left[Y\right]}$ and ${\left[X\right]}$ are the respective sums of all ${\left[Y_{i}\right]}$ and all ${\left[f(Y_{i})\right]}$.
∎
The schemic Grothendieck ring
Let $\mathcal{QF}$ be the Boolean closure of $\mathcal{S}ch$, that is to say, the lattice of quantifier free formulae. We define the
schemic Grothendieck ring of $X$ as
$${\mathbf{Gr}(X^{\operatorname{sch}})}:=\mathbf{Gr}^{\mathbf{Art}_{X}}_{\mathcal{S}ch}({\mathcal{QF}}).$$
As before, we denote the class of a formula by ${\left[\phi\right]}$, and in case $Y$ is an affine $X$-scheme, we also write ${\left[Y\right]}$ for the class of its defining schemic formula given by Corollary 4.9, and henceforth, identify both. In particular, the base scheme $X$ corresponds to the class of the sentence $\top$, which we will denote simply by $1$.
5.3 Lemma.
Let $X$ be an affine, Noetherian scheme. Any element in ${\mathbf{Gr}(X^{\operatorname{sch}})}$ is of the
form $\sum_{i=1}^{s}n_{i}{\left[Y_{i}\right]}$, for some integers $n_{i}$, and some affine $X$-schemes $Y_{i}$. Alternatively, we may write any element as ${\left[Z\right]}-{\left[Z^{\prime}\right]}$, for $Z$ and $Z^{\prime}$ affine $X$-schemes.
Proof.
Both statements follow immediately from Theorem 4.1 and Corollary 3.9, since $\mathcal{S}ch$ is closed, modulo $\mathbf{Art}_{X}$, under conjunctions, and, by Lemma 4.2, disjoint sums.
∎
In particular, the natural morphism $\mathbf{Gr}^{\mathbf{Art}_{X}}_{\mathcal{S}ch}({\mathcal{S}ch})\to{\mathbf{Gr}(X^{\operatorname{sch}})}$ is surjective.
Let us call two $X$-schemes $Y$ and $Y^{\prime}$ stably isomorphic, if there exists an affine $X$-scheme $Z$ such that $Y\sqcup Z$ and $Y^{\prime}\sqcup Z$ are isomorphic over $X$. A priori this is a weaker equivalence relation than the isomorphism relation, but for affine Noetherian schemes, it is the same, as we will discuss in Appendix LABEL:s:app.
5.4 Theorem.
Two affine $X$-schemes $Y$ and $Y^{\prime}$ are isomorphic if and only if their classes ${\left[Y\right]}$ and ${\left[Y^{\prime}\right]}$ in ${\mathbf{Gr}(X^{\operatorname{sch}})}$ are the same.
Proof.
By Theorem 3.11, the defining schemic formulae $\phi$ and $\phi^{\prime}$ of respectively $Y$ and $Y^{\prime}$ are stably $\mathcal{S}ch$-isomorphic, that is to say,
$$\phi\oplus\psi\cong_{\mathcal{S}ch}\phi^{\prime}\oplus\psi,$$
for some quantifier free formula $\psi$. By Lemma 3.10, we may replace $\psi$ by any formula implied by it, whence, in particular, by its Zariski closure. In conclusion, we may assume $\psi$ is schemic. By Lemma 4.2, disjoint sum and union are $\mathbf{Art}_{X}$-equivalent. Hence, if $Z$ denotes the affine scheme defined by $\psi$, then $Y\sqcup Z\cong Y^{\prime}\sqcup Z$ by Corollary 4.8, that is to say, $Y$ and $Y^{\prime}$ are stably isomorphic, whence isomorphic, by Theorem LABEL:T:stabisosch.
∎
By definition of the Lefschetz class, the class of ${\mathbb{A}_{X}^{1}}:=\operatorname{Spec}(A[x])$, with $x$ a single indeterminate, is ${\mathbb{L}}$. In particular, we get the following generalization of Lemma 3.6:
5.5 Lemma.
If $Y$ is an affine $X$-scheme, then ${\left[{\mathbb{A}_{Y}^{n}}\right]}={\mathbb{L}}^{n}\cdot{\left[Y\right]}$ in ${\mathbf{Gr}(X^{\operatorname{sch}})}$.∎
6. The pp-Grothendieck ring of $X$
Theorem 5.4 essentially says that ${\mathbf{Gr}(X^{\operatorname{sch}})}$ creates no further relations among $X$-schemes. To obtain non-trivial relations among classes of $X$-schemes, we will now work in a larger class of formulae. It turns out that pp-formulae are sufficiently general to accomplish this. To obtain the greatest amount of versatility, rather than working within the semi-lattice of pp-formulae, we will work in the Boolean lattice $\mathcal{PP}$, given as the Boolean closure of all pp-formulae, that is to say, all finite disjunctions of pp-formulae and their negations. Recall that we write $\texttt{Im}(\phi)(x)$ to denote the pp-formula $(\exists y)\phi(x,y)$, for $\phi$ a schemic formula. We define the pp-Grothendieck ring of $X=\operatorname{Spec}A$ to be the Grothendieck ring
$${\mathbf{Gr}(X^{\operatorname{pp}})}:=\mathbf{Gr}^{\mathbf{Art}_{X}}_{\mathcal{S}ch}({\mathcal{PP}}).$$
By Corollary 3.5, we have a natural ring homomorphism ${\mathbf{Gr}(X^{\operatorname{sch}})}\to{\mathbf{Gr}(X^{\operatorname{pp}})}$.
The analogue of Lemma 4.2 holds, and in particular, we can describe the elements of ${\mathbf{Gr}(X^{\operatorname{pp}})}$ as in Lemma 5.3:
6.1 Proposition.
The disjoint sum of two pp-formulae is $\mathbf{Art}_{X}$-equivalent to a pp-formula. In particular, every element of ${\mathbf{Gr}(X^{\operatorname{pp}})}$ can be written as a difference ${\left[\psi\right]}-{\left[\psi^{\prime}\right]}$ with $\psi$ and $\psi^{\prime}$ pp-formulae.
Proof.
Let $\phi(x):=\texttt{Im}(\phi_{0})$ and $\psi(x):=\texttt{Im}(\psi_{0})$ be two pp-formulae, with $\phi_{0}(x,y)$ and $\psi_{0}(x,y)$ schemic formulae. As in the proof of Lemma 5.3, one easily shows that the disjoint union $\phi\sqcup\psi$ is $\mathbf{Art}_{X}$-equivalent to $(\exists y)(\phi_{0}\sqcup\psi_{0})$. The second assertion now follows from this and Corollary 3.9.
∎
Assume $Y=\operatorname{Spec}B$ is an affine, Noetherian scheme. Let $\Theta^{\text{aff}}_{Y}$ be the collection of all affine opens of $Y$. We view $\Theta^{\text{aff}}_{Y}$ as a semi-lattice with $\wedge$ given by intersection. In general, the union of affine opens need not be affine, so that we cannot define $\vee$ on $\Theta^{\text{aff}}_{Y}$, and therefore, it is only a sub-semi-lattice of the lattice $\Theta_{Y}$ of all opens of $Y$. Recall that for a finite, open affine covering $\mathcal{U}=\{U_{1},\dots,U_{n}\}$ of $Y$, we have a scissor relation
$$Y=S_{n}(U_{1},\dots,U_{n})$$
in ${\mathbf{Sciss}(\Theta_{Y})}$.
The map $\Theta^{\text{aff}}_{Y}\to{\mathbf{Gr}(X^{\operatorname{pp}})}$, sending an affine open $U\subseteq Y$ to its class ${\left[U\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$, extends to an additive map $\mathbb{Z}[\Theta^{\text{aff}}_{Y}]\to{\mathbf{Gr}(X^{\operatorname{pp}})}$ (note that this map is not multiplicative, since multiplication on the former is different from that on the latter). We will show in Corollary 6.5 below that it in fact induces a ring homomorphism $\mathbf{K}_{\cong_{X}}(\Theta^{\text{aff}}_{Y})\to{\mathbf{Gr}(X^{\operatorname{pp}})}$. Among the members of $\Theta^{\text{aff}}_{Y}$ are the basic open subsets $\operatorname{D}(f)=\operatorname{Spec}(B_{f})$, with $f$ a non-nilpotent element of $B$ (so that, in particular, a basic open subset is never empty). Note that if $\operatorname{D}(f)$ is a basic open and $U\subseteq Y$ an affine open, then $\operatorname{D}(f)\cap U$ is the basic open $\operatorname{D}(\left.f\right|_{{U}})$ in $U$.
6.2 Proposition.
Let $X$ be a Noetherian scheme, and $Y$ an affine $X$-scheme. For every finite covering $\{D_{1},\dots,D_{n}\}$ of $Y$ by basic open subsets, we have an identity ${\left[Y\right]}={\left[S_{n}(D_{1},\dots,D_{n})\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
Proof.
Let $A:={\mathcal{O}}_{X}$, and let $\phi(x)$ be a schemic formula defining $Y$, that is to say, $Y=\operatorname{Spec}B$ with $B:=A[x]/I(\phi)$. By definition of basic subset, there exist $f_{i}\in A[x]$ so that $D_{i}=\operatorname{D}(f_{i})=\operatorname{Spec}(B_{f_{i}})$. The fact that the $D_{i}$ cover $Y$ is equivalent to $(f_{1},\dots,f_{n})B$ being the unit ideal. Hence, we can find $u_{i}\in A[x]$ and $m\in I(\phi)$, such that
(15)
$$\sum_{i=1}^{n}u_{i}f_{i}+m=1.$$
Consider the following formulae in the variables $x$ and $z$:
$$\psi_{i}(x,z):=\phi(x)\wedge(f_{i}(x)z=1),\qquad\tilde{\psi}_{i}(x):=\texttt{Im}(\psi_{i})(x)=(\exists z)\psi_{i}(x,z).$$
The schemic formula $x=y$ defines a schemic isomorphism $\tilde{\psi}_{i}(x)\to\psi_{i}(y,z)$, for each $i$.
Let $\tilde{\psi}$ be the disjunction of all $\tilde{\psi}_{i}$. By Proposition 2.3, we have a scissor identity ${\left[\tilde{\psi}\right]}={\left[S_{n}(\tilde{\psi}_{1},\dots,\tilde{\psi}_{n})\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. Since $\psi_{i}$ and $\tilde{\psi}_{i}$ are $\mathcal{S}ch$-isomorphic, so are any of their conjunctions, and hence ${\left[S_{n}(\tilde{\psi}_{1},\dots,\tilde{\psi}_{n})\right]}={\left[S_{n}(\psi_{1},\dots,\psi_{n})\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. By definition of basic subset, $\psi_{i}$ is the defining schemic formula of $D_{i}$, and hence ${\left[S_{n}(\psi_{1},\dots,\psi_{n})\right]}$ is equal to the class of $S_{n}(D_{1},\dots,D_{n})$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. It remains to show that ${\left[Y\right]}={\left[\phi\right]}={\left[\tilde{\psi}\right]}$.
To this end, we have to show that $\tilde{\psi}(R)=\phi(R)$ in any Artinian local $A$-algebra $R$. The direct inclusion is immediate, so assume $a\in\phi(R)$. From (15) and the fact that $m(a)=0$, we get
$$\sum_{i=1}^{n}u_{i}(a)f_{i}(a)=1.$$
Since $R$ is local, one of these terms must be a unit, say, the first one. Therefore, there exists $b\in R$ such that $bf_{1}(a)=1$, and hence $a\in\tilde{\psi}_{1}(R)\subseteq\tilde{\psi}(R)$, as we needed to show.
∎
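The locality argument at the end of this proof (one summand of a sum equal to $1$ must be a unit) can be illustrated in the simplest Artinian local ring. The sketch below (an informal Python aside) works in $\mathbb{Q}[T]/T^{2}$, where $a+bT$ is a unit if and only if $a\neq 0$, with explicit inverse $a^{-1}-a^{-2}bT$.

```python
from fractions import Fraction

# R = Q[T]/(T^2), encoded as pairs (a, b) meaning a + b*T.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)

def inverse(x):
    a, b = x
    assert a != 0, "not a unit: residue is 0 in the local ring"
    return (1 / a, -b / (a * a))  # (a + bT)^{-1} = a^{-1} - a^{-2} b T

# 1 = f1 + f2 with f1 = T (not a unit) and f2 = 1 - T (a unit):
f1 = (Fraction(0), Fraction(1))
f2 = (Fraction(1), Fraction(-1))
one = (Fraction(1), Fraction(0))

# the two summands add up to 1, and since R is local,
# one of them must be a unit -- here f2:
assert tuple(f1[i] + f2[i] for i in range(2)) == one
assert mul(f2, inverse(f2)) == one
```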
6.3 Remark.
As already observed, this shows that we have an additive map ${\mathbf{Gr}(\Theta^{\text{aff}}_{Y})}\to{\mathbf{Gr}(X^{\operatorname{pp}})}$.
A cautionary note: although one informally states that a basic open subset $D:=\operatorname{D}(f)$ is given by the equation $f\neq 0$ in $\operatorname{Spec}(B)$ (as it is the complement of the closed subset given by the equation $f=0$), this is not correct from the point of view of formulae. As we saw in the above proof, $D$ is defined by the schemic formula $\psi(x,z):={\langle f(x)z=1\rangle}$, or alternatively, by its projection, the pp-formula $\texttt{Im}(\psi)(x):={\langle(\exists z)f(x)z=1\rangle}$. That this is different from the formula ${\langle f\neq 0\rangle}$ is easily checked on an example: let $f(x):=x\in F[x]$, with $x$ a single variable, and compare both formulae in the Artinian local ring $F[T]/T^{2}F[T]$ (we will see below that in the latter model, nonetheless, the basic open $\operatorname{D}(f)$ is given by the quantifier free (non-schemic) formula $f^{2}\neq 0$).
A second issue requiring some care is the difference between conjunctions and intersections. Let $D^{\prime}:=\operatorname{D}(f^{\prime})$ be another basic open in $Y$, and let $\psi^{\prime}:={\langle f^{\prime}z=1\rangle}$ and $\texttt{Im}(\psi^{\prime}):={\langle(\exists z)f^{\prime}z=1)\rangle}$ be its respective schemic and pp defining formula. The intersection $D\cap D^{\prime}$ is again a basic open subset, whence an affine scheme. However, $D\cap D^{\prime}$ is not defined by the schemic formula $\psi\wedge\psi^{\prime}$, but by the schemic formula ${\langle ff^{\prime}z=1\rangle}$. Nonetheless, $D\cap D^{\prime}$ is defined by the pp-formula $\texttt{Im}(\psi)\wedge\texttt{Im}(\psi^{\prime})$.
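The first cautionary point can be verified by brute force in the four-element model mentioned above. The following Python sketch (an informal aside) enumerates $R=\mathbb{F}_{2}[T]/T^{2}$ and compares the pp-formula $(\exists z)f(x)z=1$ with the naive formula $f\neq 0$, for $f(x)=x$.

```python
# R = F2[T]/(T^2), encoded as pairs (a, b) meaning a + b*T, a, b in {0, 1}.
R = [(a, b) for a in (0, 1) for b in (0, 1)]

def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c % 2, (a * d + b * c) % 2)

f = lambda a: a  # the polynomial f(x) = x

# D(f) is defined by the pp-formula (exists z) f(x)z = 1 ...
def basic_open(a):
    return any(mul(f(a), z) == (1, 0) for z in R)

# ... which differs from the naive formula f(x) != 0:
nonzero = [a for a in R if f(a) != (0, 0)]
invertible = [a for a in R if basic_open(a)]
assert (0, 1) in nonzero and (0, 1) not in invertible  # a = T

# but f(a)^2 != 0 does pick out exactly the basic open in this model:
assert invertible == [a for a in R if mul(f(a), f(a)) != (0, 0)]
```

The element $T$ witnesses the difference: it is nonzero, yet not invertible, while $T^{2}=0$ correctly excludes it from $\operatorname{D}(f)$.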
6.4 Theorem.
There exists a well-defined map which assigns to any (isomorphism class of an) $X$-scheme $Y$ an element ${\left[Y\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$ which agrees on affine schemes with the class map. Moreover, if $\{U_{1},\dots,U_{n}\}$ is any open covering of an $X$-scheme $Y$, then ${\left[Y\right]}={\left[S_{n}(U_{1},\dots,U_{n})\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
Proof.
We start by proving that the second assertion holds in case $Y$ is affine. So let $\mathcal{U}:=\{U_{1},\dots,U_{n}\}$ be an open affine covering of $Y$; we need to show that ${\left[Y\right]}={\left[S(U_{1},\dots,U_{n})\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. We induct on the number $e$ of non-basic opens among the $U_{i}$. If $e=0$, then the result holds by Proposition 6.2. So assume $e>0$. Let $U_{n}$ be a non-basic open subset, and let $\{D_{1},\dots,D_{m}\}$ be an open covering of $U_{n}$ by basic opens.
Using Lemma 2.1(3), as in the proof of Proposition 2.3, we have an identity
(16)
$$S(U_{1},\dots,U_{n-1},D_{1},\dots,D_{m},U_{n})=S(U_{1},\dots,U_{n-1},D_{1},\dots,D_{m})-S(U_{1}\cap U_{n},\dots,U_{n-1}\cap U_{n},D_{1},\dots,D_{m})+U_{n}$$
Since $\{U_{1},\dots,U_{n-1},D_{1},\dots,D_{m}\}$ and $\{U_{1}\cap U_{n},\dots,U_{n-1}\cap U_{n},D_{1},\dots,D_{m}\}$ are open affine coverings of $Y$ and $U_{n}$ respectively, both containing less than $e$ non-basic open subsets, our induction hypothesis yields
$${\left[Y\right]}={\left[S(U_{1},\dots,U_{n-1},D_{1},\dots,D_{m})\right]}\quad\text{and}\quad{\left[U_{n}\right]}={\left[S(U_{1}\cap U_{n},\dots,U_{n-1}\cap U_{n},D_{1},\dots,D_{m})\right]},$$
in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. Taking classes of both sides of (16) together with the latter identities, shows that ${\left[Y\right]}={\left[S(U_{1},\dots,U_{n},D_{1},\dots,D_{m})\right]}$ (note that the order in scissor relations is irrelevant). We will now prove by induction on $m$, that $S(U_{1},\dots,U_{n},D_{1},\dots,D_{m})$ and $S(U_{1},\dots,U_{n})$ have the same class in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
By Lemma 2.1(3),
we have an identity
(17)
$$S(U_{1},\dots,U_{n},D_{1},\dots,D_{m})=S(U_{1},\dots,U_{n},D_{1},\dots,D_{m-1})-S(U_{1}\cap D_{m},\dots,U_{n}\cap D_{m},D_{1}\cap D_{m},\dots,D_{m-1}\cap D_{m})+D_{m}$$
in $\mathbb{Z}[\Theta^{\text{aff}}_{Y}]$. By induction on $m$, we have
$${\left[S(U_{1},\dots,U_{n})\right]}={\left[S(U_{1},\dots,U_{n},D_{1},\dots,D_{m-1})\right]}$$
in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. On the other hand, since $\{U_{i}\cap D_{m},D_{j}\cap D_{m}\}$, for $i=1,\dots,n$ and $j=1,\dots,m-1$, is an open affine covering of $D_{m}$ containing less than $e$ non-basic open subsets (note that $U_{n}\cap D_{m}=D_{m}$), our first induction hypothesis (on $e$) yields
$${\left[D_{m}\right]}={\left[S(U_{1}\cap D_{m},\dots,U_{n}\cap D_{m},D_{1}\cap D_{m},\dots,D_{m-1}\cap D_{m})\right]}$$
in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. Hence, taking classes of both sides of (17) together with the previous two identities, yields the desired conclusion
$${\left[Y\right]}={\left[S(U_{1},\dots,U_{n},D_{1},\dots,D_{m})\right]}={\left[S(U_{1},\dots,U_{n})\right]}.$$
We now prove the first assertion for $Y$ an arbitrary $X$-scheme, that is to say, a (not necessarily affine) separated scheme $Y$ of finite type over $X$. Let
$\mathcal{U}:=\{U_{1},\dots,U_{n}\}$ be an open affine covering of $Y$, and define
(18)
$${\left[Y\right]}_{\mathcal{U}}:={\left[S(U_{1},\dots,U_{n})\right]}.$$
Since $Y$ is separated, each intersection $U:=U_{i_{1}}\cap\dots\cap U_{i_{k}}$ is again affine, so that ${\left[S(U_{1},\dots,U_{n})\right]}$ is indeed an element of ${\mathbf{Gr}(X^{\operatorname{pp}})}$. We want to show that ${\left[Y\right]}_{\mathcal{U}}$, as an element of ${\mathbf{Gr}(X^{\operatorname{pp}})}$, does not depend on the open affine covering $\mathcal{U}$. To this end, let $\mathcal{V}$ be a second open affine covering of $Y$, and we seek to show that ${\left[Y\right]}_{\mathcal{U}}={\left[Y\right]}_{\mathcal{V}}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. Replacing $\mathcal{V}$ by $\mathcal{U}\cup\mathcal{V}$ if necessary, we may assume that $\mathcal{U}\subseteq\mathcal{V}$. By induction on the number of members of $\mathcal{V}$, we may then reduce to the case that $\mathcal{V}=\mathcal{U}\cup\{V\}$, for some affine open $V\subseteq Y$. By Lemma 2.1(3), we have, in $\mathbb{Z}[\Theta^{\text{aff}}_{Y}]$, an identity
(19)
$$S(U_{1},\dots,U_{n},V)=S(U_{1},\dots,U_{n})-S(U_{1}\cap V,\dots,U_{n}\cap V)+V.$$
Since $\{U_{1}\cap V,\dots,U_{n}\cap V\}$ is an open affine covering of the affine open $V$, we have
$${\left[V\right]}={\left[S(U_{1}\cap V,\dots,U_{n}\cap V)\right]}$$
in ${\mathbf{Gr}(X^{\operatorname{pp}})}$ by the first part of the proof. Hence, taking classes of both sides of (19) shows that
$${\left[Y\right]}_{\mathcal{V}}={\left[S(U_{1},\dots,U_{n},V)\right]}={\left[S(U_{1},\dots,U_{n})\right]}={\left[Y\right]}_{\mathcal{U}}$$
in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
So, for $Y$ an arbitrary $X$-scheme, we define ${\left[Y\right]}:={\left[Y\right]}_{\mathcal{U}}$, where $\mathcal{U}$ is any finite open affine covering of $Y$. In particular, if $Y$ is affine, we can take for open cover the singleton $\{Y\}$, showing that this new notation coincides with our former. To show that this assignment only depends on the isomorphism class of $Y$, let $\sigma\colon Y\to Y^{\prime}$ be an isomorphism of $X$-schemes. Let $\mathcal{U}^{\prime}$ consist of all $\sigma(U)$ with $U\in\mathcal{U}$. Hence $\mathcal{U}^{\prime}$ is an open covering of $Y^{\prime}$. Moreover, any intersection of members of $\mathcal{U}$ is
isomorphic to the intersection of the corresponding images under $\sigma$, and hence both have the same class in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. Therefore,
$${\left[Y\right]}={\left[Y\right]}_{\mathcal{U}}={\left[Y^{\prime}\right]}_{\mathcal{U}^{\prime}}={\left[Y^{\prime}\right]},$$
showing that the class of $Y$ only depends on its isomorphism type.
Finally, to prove the last assertion, we first show that the additive map ${\left[\cdot\right]}\colon\mathbb{Z}[\Theta_{Y}]\to{\mathbf{Gr}(X^{\operatorname{pp}})}$ factors through an additive map ${\mathbf{Sciss}(\Theta_{Y})}\to{\mathbf{Gr}(X^{\operatorname{pp}})}\colon U\mapsto{\left[U\right]}$, for every $X$-scheme $Y$ (recall that $\Theta_{Y}$ is the lattice of all opens of $Y$). It suffices to show that any second scissor relation $U\cup U^{\prime}-U-U^{\prime}+U\cap U^{\prime}$, with $U,U^{\prime}\in\Theta_{Y}$, lies in the kernel of $\mathbb{Z}[\Theta_{Y}]\to{\mathbf{Gr}(X^{\operatorname{pp}})}$. Let $\mathbf{U}:=(U_{1},\dots,U_{n})$ and $\mathbf{U}^{\prime}:=(U^{\prime}_{1},\dots,U^{\prime}_{n^{\prime}})$ be affine open coverings of $U$ and $U^{\prime}$ respectively. Hence the union of these two coverings is a covering of $U\cup U^{\prime}$, whereas the collection $\mathbf{V}$ of all $U_{i}\cap U_{j}^{\prime}$, for $i=1,\dots,n$, and $j=1,\dots,n^{\prime}$, is an affine covering of $U\cap U^{\prime}$. Therefore, by (18), we have
(20)
$${\left[U\cup U^{\prime}\right]}-{\left[U\right]}-{\left[U^{\prime}\right]}+{\left[U\cap U^{\prime}\right]}={\left[S(\mathbf{U},\mathbf{U}^{\prime})\right]}-{\left[S(\mathbf{U})\right]}-{\left[S(\mathbf{U}^{\prime})\right]}+{\left[S(\mathbf{V})\right]}$$
However, by Lemma 2.1(4), we have an identity
$${S(\mathbf{U},\mathbf{U}^{\prime})}-{S(\mathbf{U})}-{S(\mathbf{U}^{\prime})}+{S(\mathbf{V})}=0$$
in $\mathbb{Z}[\Theta_{Y}]$, showing that the right hand side of (20), whence also the left hand side, is zero. We can now prove the last assertion: let $\{U_{1},\dots,U_{n}\}$ be an arbitrary finite open covering of $Y$. By Proposition 2.3, we have an identity
$$Y=\bigcup_{i=1}^{n}U_{i}=S(U_{1},\dots,U_{n})$$
in ${\mathbf{Sciss}(\Theta_{Y})}$. Applying the additive map ${\mathbf{Sciss}(\Theta_{Y})}\to{\mathbf{Gr}(X^{\operatorname{pp}})}$ then yields the desired identity.
∎
In the course of the proof, we obtained:
6.5 Corollary.
For each $X$-scheme $Y$, we have a homomorphism $\mathbf{K}_{\cong_{X}}(\Theta_{Y})\to{\mathbf{Gr}(X^{\operatorname{pp}})}$ of Grothendieck rings, where $\Theta_{Y}$ is the lattice of opens of $Y$, and $\cong_{X}$ denotes isomorphism as $X$-schemes.∎
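Under counting measure on finite sets, the scissor element $S(U_{1},\dots,U_{n})$ becomes the inclusion-exclusion sum, and identities such as (19) reduce to elementary set arithmetic. The Python sketch below (an informal aside, not part of the formal development) verifies this on a small example.

```python
from itertools import combinations

# S(U_1,...,U_n) evaluated under counting measure is the
# inclusion-exclusion sum, which equals |U_1 u ... u U_n|:
def S(*sets):
    total = 0
    for k in range(1, len(sets) + 1):
        for c in combinations(sets, k):
            total += (-1) ** (k + 1) * len(set.intersection(*c))
    return total

U1, U2 = {1, 2, 3}, {3, 4}
V = {2, 4, 5}

# the counting-measure analogue of identity (19):
# S(U1, U2, V) = S(U1, U2) - S(U1 & V, U2 & V) + |V|
assert S(U1, U2, V) == S(U1, U2) - S(U1 & V, U2 & V) + len(V)
assert S(U1, U2, V) == len(U1 | U2 | V)
```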
6.6 Corollary.
If $U$ is an open in an affine $X$-scheme $Y$, then there exists a disjunction $\psi$ of schemic formulae such that ${\left[U\right]}={\left[\psi\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
Proof.
Let $\{D_{1},\dots,D_{n}\}$ be a covering of $U$ by basic open subsets, and, as in the proof of Proposition 6.2, let $\psi_{i}$ be the schemic formula defining the basic open $D_{i}$, that is to say, $\psi_{i}(x,z):={\langle\phi(x)\wedge(f_{i}(x)z=1)\rangle}$, where $\phi$ is the defining formula of $Y$, and $D_{i}$ is the basic open $\operatorname{Spec}({\mathcal{O}}_{Y,f_{i}})$. By Proposition 6.2, the basic open $D_{i}$ is also defined by the pp-formula $\texttt{Im}(\psi_{i})$.
Let $\tilde{\psi}:=\texttt{Im}(\psi_{1})\vee\dots\vee\texttt{Im}(\psi_{n})$. By Proposition 2.3, we have identities $U=S(D_{1},\dots,D_{n})$
in ${\mathbf{Sciss}(\Theta_{Y})}$, and $\tilde{\psi}=S(\texttt{Im}(\psi_{1}),\dots,\texttt{Im}(\psi_{n}))$ in ${\mathbf{Sciss}(\mathcal{PP})}$. By Corollaries 6.5 and 3.5 respectively, we get ${\left[U\right]}={\left[S(D_{1},\dots,D_{n})\right]}$ and ${\left[\tilde{\psi}\right]}={\left[S(\texttt{Im}(\psi_{1}),\dots,\texttt{Im}(\psi_{n}))\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$. One easily verifies that any conjunction $\bigwedge_{i\in I}\texttt{Im}(\psi_{i})$ is the defining pp-formula for the corresponding intersection of the $D_{i}$, for $I\subseteq\{1,\dots,n\}$ (but see Remark 6.3 for why we cannot work with the schemic formulae $\psi_{i}$ instead).
Hence
$${\left[U\right]}={\left[S(D_{1},\dots,D_{n})\right]}={\left[S(\texttt{Im}(\psi_{1}),\dots,\texttt{Im}(\psi_{n}))\right]}={\left[\tilde{\psi}\right]}$$
in ${\mathbf{Gr}(X^{\operatorname{pp}})}$.
To obtain a disjunction of schemic formulae, let
$$\psi(x,z_{1},\dots,z_{n}):=\bigvee_{i=1}^{n}\psi_{i}(x,z_{i}).$$
As in the proof of Proposition 6.2, one can show that the projection onto the $x$-coordinates yields a (schemic) isomorphism between $\psi$ and $\tilde{\psi}$ modulo $\mathbf{Art}_{X}$, and hence ${\left[\psi\right]}={\left[\tilde{\psi}\right]}$ in ${\mathbf{Gr}(X^{\operatorname{pp}})}$, completing the proof of the assertion.
∎
6.7 Remark.
Theorem 6.4 allows us to calculate the class of a non-affine scheme in terms of classes of affine schemes. For instance, the class of the projective line $\mathbb{P}_{X}^{1}$ is equal to $2{\mathbb{L}}-{\mathbb{L}}^{*}$, where, as before, ${\mathbb{L}}$ is the Lefschetz class, that is to say, the class of the affine line ${\mathbb{A}_{X}^{1}}$, and where ${\mathbb{L}}^{*}$ denotes the class of the affine line without the origin. One would be tempted to think that ${\mathbb{L}}^{*}={\mathbb{L}}-1$, but this is false, for the reason given at the end of Remark 6.3. We will give a correct version of this formula in (33) below.
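For the record, here is the chart computation behind this class, using the scissor relation from Theorem 6.4 for the standard affine covering of $\mathbb{P}_{X}^{1}$ by two copies of the affine line:

```latex
\begin{align*}
\mathbb{P}_{X}^{1} &= U\cup V,\qquad
  U\cong{\mathbb{A}_{X}^{1}},\quad
  V\cong{\mathbb{A}_{X}^{1}},\quad
  U\cap V\cong{\mathbb{A}_{X}^{1}}\setminus\{0\},\\
{\left[\mathbb{P}_{X}^{1}\right]}
  &= {\left[U\right]}+{\left[V\right]}-{\left[U\cap V\right]}
   = 2{\mathbb{L}}-{\mathbb{L}}^{*}.
\end{align*}
```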
7. Arc integrals
Let $X=\operatorname{Spec}A$ be an affine Noetherian scheme, and let $\phi$ be a schemic formula in $\mathcal{L}_{A}$ with corresponding ideal
$I(\phi)$, and associated affine scheme $Y_{\phi}:=\operatorname{Spec}(A[x]/I(\phi))$.
We call $\phi$
Artinian if the corresponding affine scheme $Y_{\phi}$ is Artinian, that is to say, has (Krull) dimension zero. We denote the Boolean closure of the collection of Artinian formulae by $\mathcal{A}rt$ (not to be confused with the theory $\mathbf{Art}_{A}$).
We say that $\phi$ is a closed point formula, if $I(\phi)$ is a maximal
ideal; we say that $\phi$ is a point formula, if the radical of
$I(\phi)$ is a maximal ideal. Closed point formulae and point formulae are
Artinian. Let $\phi$ be a point formula. There is a unique closed point formula $\bar{\phi}$ implying $\phi$, namely, the one corresponding to the radical of $I(\phi)$. To a point formula corresponds an Artinian local $A$-scheme $Y_{\phi}$, and the closed point of $Y_{\phi}$ then corresponds to $\bar{\phi}$. If
$A=F$ is an
algebraically closed field, then any two point formulae
are $\mathbf{T}_{F}$-equivalent by the Nullstellensatz, but this might fail in
general. For instance, over $A=\mathbb{Q}$, the formulae $x^{2}+1=0$ and
$x=0$ are not isomorphic. Another example, with $A=F[[t]]$, is given by the point formulae $tx-1=0$ and $t=x=0$, which cannot be isomorphic, not even after a base change. Let
us denote the Grothendieck ring of Artinian
formulae modulo $\mathbf{Art}_{X}$ by
$${\mathbf{Gr}_{0}(X^{\operatorname{sch}})}:=\mathbf{Gr}^{\mathbf{Art}_{X}}_{\mathcal{S}ch}({\mathcal{A}rt}).$$
Given an Artinian $X$-scheme $Y$, we define its length $\ell(Y)$ to be the length of its coordinate ring ${\mathcal{O}}_{Y}$ viewed as an $A$-module.
7.1 Corollary.
Any element of ${\mathbf{Gr}_{0}(X^{\operatorname{sch}})}$ can be written as a difference ${\left[Y\right]}-{\left[Y^{\prime}\right]}$, with $Y$ and $Y^{\prime}$ Artinian $X$-schemes.
Proof.
By Lemma 4.2, Artinian formulae are closed under disjoint sums, and the claim follows from Corollary 3.9.
∎
7.2 Lemma.
Any Artinian formula is a disjoint union of finitely many point formulae.
Proof.
Immediate from the fact that an Artinian ring is a direct sum of Artinian
local rings.
∎
For simplicity, we will for the remainder of this section work over an algebraically closed field $F$.
7.3 Lemma.
Let $F$ be an algebraically closed field. There exists a ring homomorphism $\ell\colon{\mathbf{Gr}_{0}(F^{\operatorname{sch}})}\to\mathbb{Z}$ such that $\ell({\left[Y\right]})=\ell(Y)$ for any Artinian $F$-scheme $Y$.
Proof.
Since $F$ is algebraically closed, $\ell(Y)$ is equal to the $F$-vector
space dimension of ${\mathcal{O}}_{Y}$.
Let $\phi$ be a formula in ${\mathcal{A}rt}$. By Corollary 7.1,
we can find Artinian
$F$-schemes $Y$ and $Y^{\prime}$
such that ${\left[\phi\right]}={\left[Y\right]}-{\left[Y^{\prime}\right]}$. Put $\ell({\left[\phi\right]}):=\ell(Y)-\ell({Y^{\prime}})$. To prove that this is independent from the choice of
affine schemes, suppose that also ${\left[\phi\right]}={\left[Z\right]}-{\left[Z^{\prime}\right]}$ for some
Artinian
$F$-schemes $Z$ and $Z^{\prime}$. It follows that ${\left[Y\sqcup Z^{\prime}\right]}={\left[Y^{\prime}\sqcup Z\right]}$, and hence by Theorem 3.11, that the
schemes $Y\sqcup Z^{\prime}$ and $Y^{\prime}\sqcup Z$ are
stably $\mathcal{S}ch$-isomorphic in $\mathcal{A}rt$. This means that $Y\sqcup Z^{\prime}\sqcup T$
and $Y^{\prime}\sqcup Z\sqcup T$ are isomorphic, for some Artinian $F$-scheme $T$.
Since
length is additive on disjoint unions, $\ell(Y)+\ell(Z^{\prime})+\ell(T)=\ell(Y^{\prime})+\ell(Z)+\ell(T)$, showing that $\ell({\left[\phi\right]})$ is
well-defined. Linearity follows immediately from this. Furthermore, given two Artinian $F$-schemes $Y$ and $Z$, we have $\ell({\left[Y\right]}\cdot{\left[Z\right]})=\ell({\left[Y\times_{F}Z\right]})$. Let $l$ and $m$ be the lengths of $Y$ and $Z$ respectively, so that as $F$-vector spaces ${\mathcal{O}}_{Y}\cong F^{l}$ and ${\mathcal{O}}_{Z}\cong F^{m}$. Since the coordinate ring of $Y\times_{F}Z$ is equal to ${\mathcal{O}}_{Y}\otimes_{F}{\mathcal{O}}_{Z}$, its length is equal to $lm$, showing that $\ell$ is also multiplicative.
∎
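The bookkeeping in this proof — lengths add on disjoint unions, multiply on fiber products, and $\ell({\left[Y\right]}-{\left[Y^{\prime}\right]})$ depends only on the class — can be mirrored in a small computation. The following Python sketch models a class in ${\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$ by a pair of lengths; all names are ours, not the paper's.

```python
class ArtinianClass:
    """A class [Y] - [Y'] of Artinian F-schemes, stored as the pair of lengths."""
    def __init__(self, plus, minus=0):
        self.plus, self.minus = plus, minus   # lengths of Y and Y'

    def length(self):
        # l([Y] - [Y']) := l(Y) - l(Y'), well-defined by Lemma 7.3
        return self.plus - self.minus

    def __add__(self, other):
        # disjoint union: lengths are additive
        return ArtinianClass(self.plus + other.plus, self.minus + other.minus)

    def __mul__(self, other):
        # ([Y]-[Y'])([Z]-[Z']) = [YxZ] + [Y'xZ'] - [YxZ'] - [Y'xZ],
        # and dim(O_Y (x)_F O_Z) is the product of the dimensions
        return ArtinianClass(self.plus * other.plus + self.minus * other.minus,
                             self.plus * other.minus + self.minus * other.plus)

# F[x]/x^2 has length 2, F[y]/y^3 has length 3:
Y, Z = ArtinianClass(2), ArtinianClass(3)
assert (Y + Z).length() == 5      # additive on disjoint unions
assert (Y * Z).length() == 6      # multiplicative on fiber products
```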
Jets
Given a closed subscheme $Z$ of an affine scheme $Y:=\operatorname{Spec}B$, we define the
$n$-th jet of $Y$ along $Z$ to be the closed subscheme
$$J_{Z}^{n}Y:=\operatorname{Spec}(B/I^{n}),$$
where $I$ is the ideal of the closed subscheme $Z$. Note that $Z$ and any of the jets $J_{Z}^{n}Y$ have the same underlying topological space, and we have an ascending chain of closed subschemes
(21)
$$\emptyset=J_{Z}^{0}Y\subseteq Z=J_{Z}^{1}Y\subseteq J_{Z}^{2}Y\subseteq\dots\subseteq J_{Z}^{n}Y\subseteq\dots$$
In most cases, this will be a proper chain by Nakayama’s Lemma (for instance, if $Z$ is a proper closed subscheme of a variety $Y$, or more generally, if $I$ contains a non-zero divisor).
We can generalize the notion of a jet to formulae: let $\phi$ be an arbitrary formula and $\zeta$ a schemic formula. We define the
$n$-th jet of $\phi$ along $\zeta$ to be the formula
$$J_{\zeta}^{n}\phi:=\phi\wedge\bigwedge_{f_{1},\dots,f_{n}\in I(\zeta)}(f_{1}\cdot f_{2}\cdots f_{n}=0).$$
In other words, if $\zeta^{(n)}$ is the formula with defining ideal $I(\zeta)^{n}$, then
$J_{\zeta}^{n}\phi=\phi\wedge\zeta^{(n)}$. In particular, if $\phi$ is also schemic, defining an affine variety $Y:=Y_{\phi}$, and if $Z$ is the closed subscheme defined by $\phi\wedge\zeta$, then $J_{\zeta}^{n}\phi$ is the defining schemic formula of $J_{Z}^{n}Y$.
Formal Hilbert series
Let $\phi$ be a schemic formula, and $\tau$ a closed point
formula implying $\phi$. Each jet $J_{\tau}^{i}\phi$ is an Artinian formula, and hence its class belongs to ${\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$. In particular, if $X$ is the $F$-scheme defined by $\phi$, and $P$ the closed point defined by $\tau$, then $J_{\tau}^{i}\phi=J_{P}^{i}X$.
For $T$ a single variable we can therefore define the
formal Hilbert series of an $F$-scheme $X$ at a closed point $P$ as the series
$$\operatorname{Hilb}_{P}(X):=\sum_{i=0}^{\infty}{\left[J_{P}^{i}X\right]}T^{i}$$
in ${\mathbf{Gr}_{0}(F^{\operatorname{sch}})}[[T]]$. If we
extend the homomorphism $\ell$ to ${\mathbf{Gr}_{0}(F^{\operatorname{sch}})}[[T]]$ by letting it act
on the coefficients of a power series,
then $\ell(\operatorname{Hilb}_{P}(X))$ is a rational function in $\mathbb{Z}[[T]]$
by the Hilbert-Samuel theory (it is the first difference of the classical Hilbert series
of $X$ at $P$).
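As a concrete illustration, take for $X$ the coordinate cross $V(xy)\subseteq\mathbb{A}_{F}^{2}$ and for $P$ the origin. Since the ideal is monomial, $\ell(J_{P}^{i}X)$ is simply the number of standard monomials of $F[x,y]/((xy)+(x,y)^{i})$, and one finds $\ell(J_{P}^{i}X)=2i-1$ for $i\geq 1$, consistent with the two branches through $P$. A Python sketch of this count (the example and names are ours, not from the paper):

```python
from itertools import product

def jet_length(i):
    """F-dimension of F[x,y]/((xy) + (x,y)^i): count the standard monomials
    x^a y^b with a + b < i that are not divisible by xy."""
    return sum(1 for a, b in product(range(i), repeat=2)
               if a + b < i and not (a >= 1 and b >= 1))

# The i-th jet of the coordinate cross at the origin has length 2i - 1:
assert [jet_length(i) for i in range(1, 6)] == [1, 3, 5, 7, 9]
```

The generating series $\sum_{i\geq 1}(2i-1)T^{i}=T(1+T)/(1-T)^{2}$ is indeed rational, as the Hilbert-Samuel theory predicts.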
Arcs
Let $R$ be an Artinian algebra of dimension $l$ over $F$, and fix some basis $\Delta$ of $R$ over $F$. For each $\alpha\in\Delta$, we define the $\alpha$-th coordinate map $\pi_{\alpha}\colon R\to F$ by the rule
$$r=\sum_{\alpha\in\Delta}\pi_{\alpha}(r)\cdot\alpha.$$
We write $\pi_{R}(r)$, or just $\pi(r)$, for the tuple of all $\pi_{\alpha}(r)$, where we fix once and for all an order of $\Delta$. In particular, $\pi$ gives an ($F$-linear) bijection between $R$ and $F^{l}$. We also extend this notation to arbitrary tuples. More generally, if $A$ is an $F$-algebra, then $R\otimes_{F}A$ is a free $A$-module generated by $\Delta$ and the base change of $\pi$ yields an $A$-linear isomorphism $R\otimes_{F}A\cong A^{l}$, which we continue to denote by $\pi$. In this section, we will also fix the following notation. Given an $n$-tuple of variables $x$, we let $\tilde{x}_{\alpha}$, for each $\alpha\in\Delta$, be another $n$-tuple of variables, and we denote the $ln$-tuple consisting of all $\tilde{x}_{\alpha}$ by $\tilde{x}$, referring to them as arc variables. We also associate to each $n$-tuple of variables $x$, an $n$-tuple of generic arcs
$$\dot{x}:=\sum_{\alpha\in\Delta}\tilde{x}_{\alpha}\alpha$$
viewed as a tuple in $R[\tilde{x}]$. In particular, $\pi_{\alpha}(\dot{x})=\tilde{x}_{\alpha}$.
7.4 Proposition.
For each $\mathcal{L}_{F}$-formula $\phi$ of arity $n$, and for each finite $F$-algebra $R$ of dimension $l$, there exists an $\mathcal{L}_{F}$-formula $\nabla_{\!R}\phi$ of arity $ln$ with the following property: if $A$ is an $F$-algebra and $a$ an $n$-tuple in $R\otimes_{F}A$, then $a\in\phi(R\otimes_{F}A)$ if and only if $\pi(a)\in\nabla_{\!R}\phi(A)$. Moreover, if $\phi$ is schemic or pp, then so is $\nabla_{\!R}\phi$.
Proof.
Let $\Delta$ be a basis of $R$ as a vector space over $F$. For each polynomial $f\in F[x]$, define polynomials $\nabla_{\!\alpha}f\in F[\tilde{x}]$ by the rule
(22)
$$f(\dot{x})=f(\sum_{\alpha}\tilde{x}_{\alpha}\alpha)=\sum_{\alpha\in\Delta}\nabla_{\!\alpha}f(\tilde{x})\alpha.$$
In particular, for $A$ an $F$-algebra, we have $\pi_{\alpha}(f(a))=(\nabla_{\!\alpha}f)(\pi(a))$, for all $a$ in $R\otimes_{F}A$ and all $\alpha\in\Delta$.
To define $\nabla_{\!R}\phi({\tilde{x}})$, we induct on the complexity of the formula $\phi$. If $\phi$ is the schemic formula $f(x)=0$, then $\nabla_{\!R}\phi$ is the schemic formula
$$\nabla_{\!R}\phi({\tilde{x}}):={\langle\bigwedge_{\alpha\in\Delta}(\nabla_{\!\alpha}f({\tilde{x}})=0)\rangle}.$$
If $\phi$ and $\psi$ are formulae for which we already defined $\nabla_{\!R}\phi$ and $\nabla_{\!R}\psi$, then $\nabla_{\!R}(\phi\vee\psi):=\nabla_{\!R}\phi\vee\nabla_{\!R}\psi$, and $\nabla_{\!R}(\neg\phi):=\neg\nabla_{\!R}\phi$. Finally, if $\phi(x)$ is the formula $(\exists y)\psi(x,y)$, then we define $\nabla_{\!R}\phi$ as the formula
$$\nabla_{\!R}\phi(x):={\langle(\exists\tilde{y})\nabla_{\!R}\psi(\tilde{x},\tilde{y})\rangle}$$
where, similarly, $\tilde{y}$ is a tuple of $l$ copies of $y$. This concludes the proof of the existence of $\nabla_{\!R}\phi$. That it satisfies the desired one-one correspondence between definable sets is clear from (22), and the last assertion is immediate as well.
∎
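The key identity underlying the proof — that taking coordinates commutes with evaluating $f$, i.e. $\pi_{\alpha}(f(a))=(\nabla_{\!\alpha}f)(\pi(a))$ — can be spot-checked numerically. A minimal Python sketch for the dual numbers $R=F[\xi]/\xi^{2}F[\xi]$ and $f(x)=x^{2}+x$, with the hand-computed components $\nabla_{\!0}f=\tilde{x}_{0}^{2}+\tilde{x}_{0}$ and $\nabla_{\!1}f=2\tilde{x}_{0}\tilde{x}_{1}+\tilde{x}_{1}$ hardcoded (all names ours):

```python
# Elements of R = F[xi]/(xi^2) as coefficient pairs (a0, a1) = a0 + a1*xi.
def r_mul(a, b):
    # (a0 + a1 xi)(b0 + b1 xi) = a0 b0 + (a0 b1 + a1 b0) xi, since xi^2 = 0
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def r_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def f(a):                      # f(x) = x^2 + x, evaluated in R
    return r_add(r_mul(a, a), a)

# Hand-computed arc components of f along R, per equation (22):
nabla0 = lambda x0, x1: x0**2 + x0
nabla1 = lambda x0, x1: 2*x0*x1 + x1

a = (3.0, 5.0)                 # the element 3 + 5*xi
# coordinates of f(a) agree with (nabla_0 f, nabla_1 f) at pi(a):
assert f(a) == (nabla0(*a), nabla1(*a))
```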
We will refer to $\nabla_{\!R}\phi$ as the arc formula of $\phi$ along $R$. Instead of using $R$ as a subscript, we may also use its defining schemic formula, or the Artinian scheme $Z:=\operatorname{Spec}R$ it determines, or even leave out reference to it altogether, whenever it is clear from the context. If $\phi$ is a schemic formula defining an affine scheme $Y$, then we will write $\nabla_{\!Z}Y$ for the affine scheme determined by $\nabla_{\!R}\phi$, and call it the arc scheme of $Y$ along $Z$. The following shows that arcs along an Artinian scheme are generalizations of truncated arcs.
7.5 Proposition.
Let $Z$ be an Artinian $F$-scheme, and let $X$ and $Y$ be affine $F$-schemes. There is a one-one correspondence between $Z\times_{F}X$-rational points on $Y\times_{F}X$ over $X$, and $X$-rational points on the corresponding arc scheme $\nabla_{\!Z}Y$ over $F$, that is to say, we have a one-one correspondence
$$\operatorname{Mor}_{X}(Z\times_{F}X,Y\times_{F}X)\cong\operatorname{Mor}_{F}(X,\nabla_{\!Z}Y).$$
Proof.
Let $\phi$ be the schemic formula defining $Y$, and let $R$ and $A$ be the respective coordinate rings of $Z$ and $X$. Viewing $\phi$ in the language $\mathcal{L}_{A}$, it is the schemic formula defining the affine $X$-scheme $Y\times X$, by §4.1.1. By Lemma 4.5, we may identify $\phi(R\otimes_{F}A)$ with $\operatorname{Mor}_{X}(Z\times X,Y\times X)$, and similarly, $\nabla_{\!R}\phi(A)$ with $\operatorname{Mor}_{F}(X,\nabla_{\!Z}Y)$. The result then follows from Proposition 7.4.
∎
In particular, $\operatorname{Mor}_{F}(Z,Y)=\nabla_{\!Z}Y(F)$. For instance, if $Z_{n}:=\operatorname{Spec}(F[\xi]/\xi^{n}F[\xi])$, then $(\nabla_{\!Z_{n}}Y)^{\text{red}}$ is the truncated arc space $\mathcal{L}_{n}(Y)$ as defined in [BLR, p. 276] or [DLArcs].
7.6 Remark.
Given a morphism of schemes $Z\to Y$ over an arbitrary base scheme $S$, we may view $\operatorname{Mor}_{S}(Z,Y)$ as a contravariant functor on the category of $S$-schemes through base change, that is to say,
$$\operatorname{Mor}_{S}(Z,Y)(X):=\operatorname{Mor}_{X}(Z\times_{S}X,Y\times_{S}X),$$
for any $S$-scheme $X$. The content of Proposition 7.5 is then that for any Artinian scheme $Z$ over an algebraically closed field $F$, and any affine $F$-scheme $Y$, the functor $\operatorname{Mor}_{F}(Z,Y)$ is representable. Indeed, in the definition of representability, it suffices to consider only affine $F$-schemes $X$, since $\operatorname{Mor}_{F}(Z,Y)$ is compatible with limits, yielding that the affine $F$-scheme $\nabla_{\!Z}Y$ represents $\operatorname{Mor}_{F}(Z,Y)$.
7.7 Example.
Before we proceed, some simple examples are in order. It is clear from the definitions that $\nabla_{\!R}\lambda=\lambda_{l}\cong\lambda^{l}$, for $\lambda$ the Lefschetz formula $x=x$.
Let us next calculate the arc scheme of the curve given by the formula $\phi:={\langle x^{2}=y^{3}\rangle}$ along the four-dimensional algebra $R:=F[\xi,\zeta]/(\xi^{2},\zeta^{2})F[\xi,\zeta]$, using the basis $\Delta:=\{1,\xi,\zeta,\xi\zeta\}$ (in the order listed), with corresponding arc variables the quadruples $\tilde{x}=(\tilde{x}_{(0,0)},\tilde{x}_{(1,0)},\tilde{x}_{(0,1)},\tilde{x}_{(1,1)})$ and $\tilde{y}=(\tilde{y}_{(0,0)},\tilde{y}_{(1,0)},\tilde{y}_{(0,1)},\tilde{y}_{(1,1)})$. One easily calculates that $\nabla_{\!R}\phi$ is the schemic formula
$$\begin{aligned}\tilde{x}_{(0,0)}^{2}&=\tilde{y}_{(0,0)}^{3}\\
2\tilde{x}_{(0,0)}\tilde{x}_{(1,0)}&=3\tilde{y}_{(0,0)}^{2}\tilde{y}_{(1,0)}\\
2\tilde{x}_{(0,0)}\tilde{x}_{(0,1)}&=3\tilde{y}_{(0,0)}^{2}\tilde{y}_{(0,1)}\\
2\tilde{x}_{(0,0)}\tilde{x}_{(1,1)}+2\tilde{x}_{(1,0)}\tilde{x}_{(0,1)}&=3\tilde{y}_{(0,0)}^{2}\tilde{y}_{(1,1)}+6\tilde{y}_{(0,0)}\tilde{y}_{(1,0)}\tilde{y}_{(0,1)}.\end{aligned}$$
Note that the first equation is $\phi(\tilde{x}_{(0,0)},\tilde{y}_{(0,0)})$, and that above the singular point $\tilde{x}_{(0,0)}=0=\tilde{y}_{(0,0)}$, the equations reduce to $2\tilde{x}_{(1,0)}\tilde{x}_{(0,1)}=0$, so that the fiber consists of two $5$-dimensional planes.
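The equations of Example 7.7 can be spot-checked numerically: multiplying out $\dot{x}^{2}$ and $\dot{y}^{3}$ in $R=F[\xi,\zeta]/(\xi^{2},\zeta^{2})$ and reading off the four coordinates must reproduce the polynomials $\nabla_{\!u}(x^{2}-y^{3})$. A Python sketch (the dictionary representation and names are ours):

```python
from itertools import product

# Elements of R = F[xi, zeta]/(xi^2, zeta^2), written on the basis
# {1, xi, zeta, xi*zeta} of Example 7.7, as dicts over exponent pairs.
BASIS = [(0, 0), (1, 0), (0, 1), (1, 1)]

def mul(r, s):
    out = {k: 0.0 for k in BASIS}
    for (a, b), (c, d) in product(BASIS, BASIS):
        if a + c <= 1 and b + d <= 1:          # xi^2 = zeta^2 = 0
            out[(a + c, b + d)] += r[(a, b)] * s[(c, d)]
    return out

x = dict(zip(BASIS, [2.0, 5.0, 7.0, 11.0]))    # coordinates of the arc x-dot
y = dict(zip(BASIS, [3.0, 1.0, 4.0, 6.0]))     # coordinates of the arc y-dot

lhs, rhs = mul(x, x), mul(mul(y, y), y)        # x-dot^2 and y-dot^3

# The coordinates of x-dot^2 - y-dot^3 are the polynomials nabla_u(x^2 - y^3):
x00, x10, x01, x11 = (x[k] for k in BASIS)
y00, y10, y01, y11 = (y[k] for k in BASIS)
expected = {
    (0, 0): x00**2 - y00**3,
    (1, 0): 2*x00*x10 - 3*y00**2*y10,
    (0, 1): 2*x00*x01 - 3*y00**2*y01,
    (1, 1): 2*x00*x11 + 2*x10*x01 - 3*y00**2*y11 - 6*y00*y10*y01,
}
for k in BASIS:
    assert abs(lhs[k] - rhs[k] - expected[k]) < 1e-9
```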
7.8 Example.
Another example is classical: let $R=F[\xi]/\xi^{2}F[\xi]$ be the ring of dual numbers. Then one verifies that an $F$-rational point on $\nabla_{\!R}Y$ is given by an $F$-rational point $P$ on $Y$ together with a tangent vector $\mathbf{v}$ to $Y$ at $P$, that is to say, an element of the kernel of the Jacobian matrix $\operatorname{Jac}_{\phi}(P)$.
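This classical description is easy to verify computationally: evaluating the defining polynomial at $P+\varepsilon\mathbf{v}$ in the dual numbers yields $f(P)+\varepsilon\,\operatorname{Jac}_{\phi}(P)\mathbf{v}$, which vanishes exactly when $P$ lies on $Y$ and $\mathbf{v}$ is tangent. A Python sketch using the curve $x^{2}=y^{3}$ of Example 7.7 (the class and names are ours):

```python
class Dual:
    """Dual numbers F[eps]/(eps^2): the value a + b*eps."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __sub__(self, o):
        return Dual(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __pow__(self, n):
        out = Dual(1.0)
        for _ in range(n):
            out = out * self
        return out

def phi(x, y):            # the curve x^2 = y^3 of Example 7.7
    return x**2 - y**3

# P = (1, 1) lies on the curve; Jac_phi(P) = (2, -3), so v = (3, 2) is tangent:
v = phi(Dual(1.0, 3.0), Dual(1.0, 2.0))
assert v.a == 0.0 and v.b == 0.0          # an F-rational point of the arc scheme

# w = (1, 1) is not in the kernel (2*1 - 3*1 = -1), so the eps-part survives:
w = phi(Dual(1.0, 1.0), Dual(1.0, 1.0))
assert w.b != 0.0
```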
7.9 Example.
As a last example, we calculate $\nabla_{\!Z_{n}}Z_{m}$, where $Z_{n}:=\operatorname{Spec}(F[\xi]/\xi^{n}F[\xi])$. With $\dot{x}=\tilde{x}_{0}+\tilde{x}_{1}\xi+\dots+\tilde{x}_{n-1}\xi^{n-1}$, we will expand $\dot{x}^{m}$ in the basis $\{1,\xi,\dots,\xi^{n-1}\}$ of $F[\xi]/\xi^{n}F[\xi]$ (see Lemma 7.10 below for why the choice of basis is not important); the coefficients of this expansion then generate the ideal of definition of $\nabla_{\!Z_{n}}Z_{m}$. A quick calculation shows that these generators are the polynomials
$$g_{s}(\tilde{x}_{0},\dots,\tilde{x}_{n-1}):=\sum_{i_{1}+\dots+i_{m}=s}\tilde{x}_{i_{1}}\tilde{x}_{i_{2}}\cdots\tilde{x}_{i_{m}}$$
for $s=0,\dots,n-1$, where the $i_{j}$ run over $\{0,\dots,n-1\}$. Note that $g_{0}=\tilde{x}_{0}^{m}$. One shows by induction that $(\tilde{x}_{0},\dots,\tilde{x}_{s-1})F[\tilde{x}]$ is the unique minimal prime ideal of $\nabla_{\!Z_{n}}Z_{m}$, where $s=\lceil\frac{n}{m}\rceil$ is the round-up of $n/m$, that is to say, the least integer greater than or equal to $n/m$. In particular, $\nabla_{\!Z_{n}}Z_{m}$ is irreducible of dimension $n-\lceil\frac{n}{m}\rceil$.
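The generators $g_{s}$ are straightforward to compute by expanding $\dot{x}^{m}$ and discarding all terms of degree $\geq n$ in $\xi$. A Python sketch of this expansion (the function name is ours):

```python
from itertools import product

def arc_generators(n, m):
    """Coefficients g_s of (x_0 + x_1*t + ... + x_{n-1}*t^{n-1})^m mod t^n,
    returned as dicts {sorted exponent tuple: coefficient} for s = 0..n-1."""
    gens = [{} for _ in range(n)]
    for idx in product(range(n), repeat=m):      # the indices i_1, ..., i_m
        s = sum(idx)
        if s < n:                                # discard t^n and higher
            mono = tuple(sorted(idx))            # commutative monomial
            gens[s][mono] = gens[s].get(mono, 0) + 1
    return gens

# nabla_{Z_3} Z_2: x-dot^2 = x0^2 + 2*x0*x1*t + (x1^2 + 2*x0*x2)*t^2
g = arc_generators(3, 2)
assert g[0] == {(0, 0): 1}                       # x0^2
assert g[1] == {(0, 1): 2}                       # 2*x0*x1
assert g[2] == {(1, 1): 1, (0, 2): 2}            # x1^2 + 2*x0*x2
```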
Although the arc scheme depends on the choice of basis, we have:
7.10 Lemma.
For each finite dimensional $F$-algebra $R$, and each $\mathcal{L}_{F}$-formula $\phi$, the arc formula $\nabla_{\!R}\phi$ along $R$ is unique up to an explicit isomorphism modulo $\mathbf{Art}_{F}$.
Proof.
Suppose $R$ has dimension $l$ over $F$, and let $\Delta$ and $\Delta^{*}$ be two bases of $R$, with corresponding isomorphisms $\pi$ and $\pi_{*}$ between $R$ and $F^{l}$, and corresponding arc maps $\nabla$ and $\nabla_{\!*}$ on $\mathcal{L}_{F}$. There exists an $F$-linear automorphism $\sigma$ of $R$ sending $\Delta$ to $\Delta^{*}$. Applying $\sigma$ to $r=\sum\pi_{\alpha}(r)\alpha$ yields $\sigma(r)=\sum\pi_{\alpha}(r)\sigma(\alpha)$, showing that
(23)
$$\pi_{*}(\sigma(r))=\pi(r),$$
for any $r\in R$.
Define $\tau$ as the automorphism $\pi\circ{\sigma^{-1}}\circ{\pi^{-1}}$ of $F^{l}$. I claim that the explicit formula $\tilde{y}=\tau(\tilde{x})$ induces an isomorphism between $\nabla\phi(\tilde{x})$ and $\nabla_{\!*}\phi(\tilde{y})$ modulo $\mathbf{T}_{F}$ (whence also modulo $\mathbf{Art}_{F}$), for any formula $\phi$. Indeed, let $A$ be a finitely generated $F$-algebra, and let
$u\in\nabla\phi(A)$. Put $a:=\pi^{-1}(u)$, where we continue to write $\pi$ for the base change $A^{\prime}:=R\otimes_{F}A\to A^{l}$. Applying Proposition 7.4 twice, we get $a\in\phi(A^{\prime})$ whence $\pi_{*}(a)\in\nabla_{\!*}\phi(A)$. Since $\tau(u)=\pi({\sigma^{-1}}(a))=\pi_{*}(a)$ by a component-wise application of (23), we showed that $\tau$ induces the desired isomorphism between $\nabla\phi(A)$ and $\nabla_{\!*}\phi(A)$.
∎
7.11 Remark.
So, from now on, we may choose a basis $\Delta=\{\alpha_{0},\dots,\alpha_{l-1}\}$ of $(R,\mathfrak{m})$ with some additional properties. In particular, unless noted explicitly, we will always assume that the first basis element is $1$ and that the remaining ones belong to $\mathfrak{m}$. Moreover, once the basis is fixed, we let $\tilde{x}$ be the $l$-tuple of arc variables $(\tilde{x}_{0},\dots,\tilde{x}_{l-1})$, so that $\dot{x}=\tilde{x}_{0}+\tilde{x}_{1}\alpha_{1}+\dots+\tilde{x}_{l-1}\alpha_{l-1}$ is the corresponding generic arc. It follows from (22) that $\nabla_{\!0}f=f(\tilde{x}_{0})$, for any $f\in F[x]$, where henceforth we simply write $\nabla_{\!j}f$ for $\nabla_{\!\alpha_{j}}f$. By [SchEC, §2.1], we may choose $\Delta$ so that, with $\mathfrak{a}_{i}:=(\alpha_{i},\dots,\alpha_{l-1})R$, we have a Jordan-Hölder composition series (writing $R$ as a homomorphic image of $F[y]$ so that $y:=(y_{1},\dots,y_{e})$ generates $\mathfrak{m}$, and letting $\mathfrak{a}(\alpha)$, for $\alpha\in\mathbb{Z}^{e}_{\geq 0}$, be the ideal in $R$ generated by all $y^{\beta}$ with $\beta$ lexicographically larger than $\alpha$, we may take $\Delta$ to consist of all monomials $y^{\alpha}$ such that $y^{\alpha}\notin\mathfrak{a}(\alpha)$, ordered lexicographically)
$$\mathfrak{a}_{l}=0\varsubsetneq\mathfrak{a}_{l-1}\varsubsetneq\mathfrak{a}_{l-2}\varsubsetneq\dots\varsubsetneq\mathfrak{a}_{1}=\mathfrak{m}\varsubsetneq\mathfrak{a}_{0}=R.$$
I claim that $\pi_{j}$ vanishes on each element of $\mathfrak{a}_{i}$, for all $j<i$. Indeed, if not, let $j<i$ be minimal such that there exists a counterexample $r\in\mathfrak{a}_{i}$ with $r_{j}:=\pi_{j}(r)\neq 0$. By minimality, $r=r_{j}\alpha_{j}+r_{j+1}\alpha_{j+1}+\dots\in\mathfrak{a}_{i}$, showing that $\alpha_{j}\in\mathfrak{a}_{j+1}$, since $r_{j}$ is invertible. However, this implies that $\mathfrak{a}_{j}=\mathfrak{a}_{j+1}$, contradiction.
From this, it is now easy to see that the first $m$ basis elements of $\Delta$ form a basis of $R_{m}:=R/\mathfrak{a}_{m+1}$. Put differently, if $r\in R$, then the $m$-tuple $\pi_{R_{m}}(r)$ is the initial part of the $l$-tuple $\pi_{R}(r)$. Therefore, calculating $\nabla_{\!m}f$ for $f\in F[x]$ does not depend on whether we work with $\pi_{R}$ or with $\pi_{R_{m}}$, and hence, in particular, $\nabla_{\!m}f\in F[\tilde{x}_{0},\dots,\tilde{x}_{m}]$ for every $m<l$.
The next result together with Corollary 7.20 below shows that arc schemes are functorial fibrations:
7.12 Theorem.
Let $Z$ be a local Artinian $F$-scheme of length $l$. For each affine $F$-scheme $X\subseteq{\mathbb{A}_{F}^{n}}$, the projection ${\mathbb{A}_{F}^{ln}}\to{\mathbb{A}_{F}^{n}}$ onto the first $n$ coordinates induces a split surjective map $\nabla_{\!Z}X\to X$, which is smooth above the regular locus of $X$. If $h\colon Y\to X$ is a morphism of affine $F$-schemes, then we have an induced morphism $\nabla_{\!Z}h\colon\nabla_{\!Z}Y\to\nabla_{\!Z}X$ making the diagram
(24)
$$\begin{array}{ccc}\nabla_{\!Z}Y&\xrightarrow{\;\nabla_{\!Z}h\;}&\nabla_{\!Z}X\\ \downarrow&&\downarrow\\ Y&\xrightarrow{\;\;h\;\;}&X\end{array}$$
commute. Moreover, if $h$ is a closed immersion, then so is $\nabla_{\!Z}h$. If $Y\subseteq X$ is an open immersion, then we even have an isomorphism
(25)
$$\nabla_{\!Z}Y\cong\nabla_{\!Z}X\times_{X}Y.$$
Proof.
Let $(R,\mathfrak{m})$ be the Artinian local ring ${\mathcal{O}}_{Z}$ corresponding to $Z$, and calculate $\pi:=\pi_{R}$ with the basis given as in Remark 7.11.
The projection ${\mathbb{A}_{F}^{ln}}\to{\mathbb{A}_{F}^{n}}$ is given by the embedding $F[x]\to F[\tilde{x}]\colon x\mapsto\tilde{x}_{0}$. Let $\phi$ be the schemic formula defining $X$, let $I:=I(\phi)$ be the corresponding ideal, and let $A:=F[x]/I$ be its coordinate ring. Furthermore, let $\tilde{I}:=I(\nabla X)$ be the ideal defining $\nabla X:=\nabla_{\!Z}X$, and let $\tilde{A}:=F[\tilde{x}]/\tilde{I}$ be its coordinate ring.
The existence of the map $\nabla X\to X$ follows from our observation in Remark 7.11 that $\nabla_{\!0}f=f(\tilde{x}_{0})$. Namely, applying this to every equation in $\phi$, we see that the explicit formula $\tilde{x}_{0}=x$ defining the projection ${\mathbb{A}_{F}^{ln}}\to{\mathbb{A}_{F}^{n}}$ induces a morphism $\phi\to\nabla\phi$, whence a homomorphism $A\to\tilde{A}$, that is to say, a morphism $\nabla_{\!Z}X\to X$. Let $\mathfrak{b}$ be the ideal in $F[\tilde{x}]$ generated by all $\tilde{x}_{i}$ with $0<i<l$. Since $\dot{x}\equiv\tilde{x}_{0}\mod\mathfrak{b}R[\tilde{x}]$, equation (22) yields that all $\nabla_{\!u}f$ belong to $\mathfrak{b}$, for $u>0$. Hence $\tilde{A}/\mathfrak{b}\tilde{A}\cong A$, showing that $\nabla X\to X$ has a section, whence is split surjective.
If $Y\to X$ is a morphism of affine $F$-schemes, then this corresponds by Corollary 4.8 to an explicit morphism $\phi\to\psi$, where $\psi$ is the schemic formula defining $Y$. We leave it to the reader to verify that this induces an explicit morphism $\nabla\phi\to\nabla\psi$, leading to a commutative diagram (24). Suppose $Y\to X$ is a closed immersion. We may assume that $Y$ is a closed subscheme of $X$, and hence the schemic formula of $Y$ can be taken to be a conjunction of the form $\phi\wedge\psi$, with $\psi$ some schemic formula. Therefore, $\nabla(\phi\wedge\psi)=\nabla\phi\wedge\nabla\psi$ is the schemic formula for $\nabla Y$, showing that it is a closed subscheme of $\nabla X$. Next suppose $Y\to X$ is an open immersion. We may reduce to the case that $Y$ is a basic open subset of $X$, since $\nabla$ is compatible with disjuncts/unions. By the case of a closed immersion just proved and the fact that $\nabla$ also preserves intersections, we may furthermore reduce to the case that $Y=\operatorname{D}(f)\subseteq X={\mathbb{A}_{F}^{n}}$ for some non-zero $f\in F[x]$. Hence $Y$ is defined, as a closed subscheme of ${\mathbb{A}_{F}^{n+1}}$, by $g(x,y):=yf(x)-1$. Let $\tilde{y}=(\tilde{y}_{0},\dots,\tilde{y}_{l-1})$ be arc variables with corresponding generic arc $\dot{y}:=\tilde{y}_{0}+\dots+\tilde{y}_{l-1}\alpha_{l-1}$. Let $\tilde{J}\subseteq F[\tilde{x},\tilde{y}]$ be the ideal defining $\nabla Y$, that is to say, the ideal generated by all $\nabla_{\!u}g$, and let $\tilde{B}:=F[\tilde{x},\tilde{y}]/\tilde{J}$ be the coordinate ring of $\nabla Y$. Our aim is to show that $\nabla Y$ is isomorphic to $\operatorname{D}(\nabla_{\!0}f)\subseteq{\mathbb{A}_{F}^{ln}}$. Using (22), we get
$$\sum_{u=0}^{l-1}\nabla_{\!u}g(\tilde{x},\tilde{y})\alpha_{u}=g(\dot{x},\dot{y})=\dot{y}\cdot\Big{(}\sum_{u=0}^{l-1}\nabla_{\!u}f(\tilde{x})\alpha_{u}\Big{)}-1.$$
Expansion yields $\nabla_{\!0}f=f(\tilde{x}_{0})$ and $\nabla_{\!0}g=\tilde{y}_{0}\nabla_{\!0}f-1$. Hence under the canonical homomorphism $F[x]\to\tilde{B}$ given by $x\mapsto\tilde{x}_{0}$, we get
$\tilde{y}_{0}f=1$ in $\tilde{B}$. By Remark 7.11, in order to calculate $\nabla_{\!u}g$ for $u>0$, we may ignore all terms containing some $\alpha_{i}$ with $i>u$, that is to say,
$$\nabla_{\!u}g=\nabla_{\!u}\Big{(}(\tilde{y}_{0}+\dots+\tilde{y}_{u}\alpha_{u})(\nabla_{\!0}f+\dots+(\nabla_{\!u}f)\alpha_{u})\Big{)}=\tilde{y}_{u}\nabla_{\!0}f+\nabla_{\!u}\Big{(}(\tilde{y}_{0}+\dots+\tilde{y}_{u-1}\alpha_{u-1})(\nabla_{\!0}f+\dots+(\nabla_{\!u}f)\alpha_{u})\Big{)}.$$
Note that the second term lies in $F[\tilde{x},\tilde{y}_{0},\dots,\tilde{y}_{u-1}]$. Since $\nabla_{\!0}f=f$ and since $\tilde{y}_{0}f=1$ and $\nabla_{\!u}g=0$ in $\tilde{B}$, we obtain, after multiplying this sum by $\tilde{y}_{0}$, that $\tilde{y}_{u}$ lies in the $F$-subalgebra of $\tilde{B}$ generated by all $\tilde{x}$ and all $\tilde{y}_{j}$ with $j<u$. Hence, by downward induction on $u$, we get
$$\tilde{B}\cong F[\tilde{x},\tilde{y}_{0}]/(\tilde{y}_{0}f-1)F[\tilde{x},\tilde{y}_{0}],$$
showing that $\nabla Y=\operatorname{D}(\nabla_{\!0}f)$, as claimed. In particular, $\nabla Y$ is the pull-back of $Y$ under the map $\nabla X\to X$, that is to say, (25) holds.
It remains to show that if $P$ is a closed point in the regular locus of $X$, then the fiber of $\nabla X\to X$ at $P$ is non-singular. By the Nullstellensatz, we may, after a change of variables (translation), assume that $P$ corresponds to the maximal ideal $\mathfrak{n}:=(x_{1},\dots,x_{n})F[x]$. Since ${\mathcal{O}}_{X,P}=A_{\mathfrak{n}A}$ is a regular local ring, $IF[x]_{\mathfrak{n}}$ is generated by a regular system of parameters ([Mats, Theorem 14.2]), say, of length $h$. Hence, by Nakayama’s Lemma, we can find an open $U\subseteq{\mathbb{A}_{F}^{n}}$ containing $P$, such that $I{\mathcal{O}}_{U}$ is generated by $h$ elements whose images in ${\mathcal{O}}_{U,P}$ are part of a generating system of $\mathfrak{n}$. Since $\nabla(U\cap X)$ is just the pull-back of $U\cap X$ by (25), and since the present question is not affected by such a pull-back, we may take $X=U$, and assume that $I=(f_{1},\dots,f_{h})F[x]$, with the $f_{u}$ part of a minimal system of generators of $\mathfrak{n}$. In particular, the linear parts of the $f_{u}$ must be linearly independent over $F$. Hence after a linear change of variables (rotation), we may assume that $f_{u}=x_{u}+g_{u}$, with each $g_{u}\in\mathfrak{n}^{2}$, for $u=1,\dots,h$. Using (22), we get
$$f_{v}(\dot{x})=\tilde{x}_{0,v}+\tilde{x}_{1,v}\alpha_{1}+\dots+\tilde{x}_{l-1,v}\alpha_{l-1}+\sum_{u=0}^{l-1}\nabla_{\!u}g_{v}(\tilde{x})\alpha_{u}$$
for $v=1,\dots,h$, showing that $\nabla\phi$ is the conjunction of the $lh$ equations $\nabla_{\!u}f_{v}=\tilde{x}_{u,v}+\nabla_{\!u}g_{v}=0$, with $u=0,\dots,l-1$ and $v=1,\dots,h$.
Let $J(\tilde{x})$ be the $lh\times lh$-submatrix
$$J(\tilde{x}):=\left(\frac{\partial(\tilde{x}_{u,v}+\nabla_{\!u}g_{v})}{\partial\tilde{x}_{i,j}}\right)_{u,i=0,\dots,l-1;\;v,j=1,\dots,h}$$
of the Jacobian $\operatorname{Jac}_{\nabla\phi}(\tilde{x})$, with rows indexed by the pairs $(u,v)$ and columns by the pairs $(i,j)$, and, for each $1\leq v,j\leq h$, let $J_{v,j}$ be the $l\times l$ submatrix $(\partial(\tilde{x}_{u,v}+\nabla_{\!u}g_{v})/\partial\tilde{x}_{i,j})_{u,i=0,\dots,l-1}$ of $J$. I claim that each $J_{v,j}$ is an upper-triangular matrix with diagonal entries all equal to $\partial f_{v}/\partial x_{j}$ (under the canonical embedding $F[x]\to F[\tilde{x}]$). Assuming the claim, let $\tilde{P}$ be an arbitrary closed point in the fiber above $P$. In particular, if $\tilde{\mathfrak{n}}\subseteq F[\tilde{x}]$ is the maximal ideal corresponding to $\tilde{P}$, then $\mathfrak{n}=\tilde{\mathfrak{n}}\cap F[x]$. Hence, the determinant of $J_{v,j}$ evaluated at $\tilde{P}$ is equal to $(\partial f_{v}/\partial x_{j})^{l}(P)$, which is the Kronecker delta $\delta_{v,j}$. This shows that, after ordering rows and columns by $u$ and $i$ first, $J(\tilde{P})$ is block triangular with identity blocks on the diagonal, whence has determinant one, from which it follows that
$\operatorname{Jac}_{\nabla\phi}(\tilde{P})$ has rank at least $lh$. Since $I(\nabla\phi)$ has height at most $lh$ by Krull’s Principal Ideal theorem, we conclude, using the Jacobian criterion for smoothness ([Eis, Theorem 16.19]), that $\nabla X$ is smooth at $\tilde{P}$.
It remains to prove the claim. For $f\in F[x]$, we apply the $(i,j)$-th partial derivative to both sides of (22). On the left hand side, we get, using the chain rule,
$$\frac{\partial\big{(}f(\dot{x})\big{)}}{\partial\tilde{x}_{i,j}}=\frac{\partial f}{\partial x_{j}}(\dot{x})\cdot\frac{\partial(\dot{x}_{j})}{\partial\tilde{x}_{i,j}}=\frac{\partial f}{\partial x_{j}}(\dot{x})\cdot\alpha_{i}.$$
Doing the same to the right hand side of (22), we get
(26)
$$\frac{\partial f}{\partial x_{j}}(\dot{x})\cdot\alpha_{i}=\sum_{u=0}^{l-1}\frac{\partial(\nabla_{\!u}f)}{\partial\tilde{x}_{i,j}}\alpha_{u}.$$
In view of (22) for $\partial f/\partial x_{j}$,
the left hand side of (26) becomes
$$\Big{(}\frac{\partial f}{\partial x_{j}}+\nabla_{\!1}(\frac{\partial f}{%
\partial x_{j}})\alpha_{1}+\dots+\nabla_{\!l-1}(\frac{\partial f}{\partial x_{%
j}})\alpha_{l-1}\Big{)}\cdot\alpha_{i}.$$
By the choice of basis (which remains a basis for the base change $R[\tilde{x}]$ over $F[\tilde{x}]$), the coefficient of $\alpha_{i}$ in this product is $\partial f/\partial x_{j}$. Hence comparing this with the right hand side of (26), we obtain
$$\frac{\partial f}{\partial x_{j}}=\frac{\partial(\nabla_{\!i}f)}{\partial\tilde{x}_{i,j}}.$$
Furthermore, since the left hand side of (26) belongs to $\mathfrak{a}_{i}R[\tilde{x}]$, all $\partial(\nabla_{\!u}f)/\partial\tilde{x}_{i,j}$ must be zero for $u<i$, by the choice of basis (see Remark 7.11). Applied with $f=f_{v}=x_{v}+g_{v}$, this proves the claim.
∎
7.13 Remark.
The section of $\nabla_{\!Z}X\to X$ given in the proof of the theorem is induced by $\nabla_{\!0}$, and will be called the canonical section of $\nabla_{\!Z}X\to X$. We will identify $X$ as a closed subscheme of $\nabla_{\!Z}X$ via this canonical section.
7.14 Corollary.
If $X$ is a $d$-dimensional affine variety and $Z$ an Artinian local scheme of length $l$, then $\nabla_{\!Z}X$ is an $ld$-dimensional variety.
Proof.
Since $X$ is irreducible, it contains a dense open subset $U$ which is non-singular. By Theorem 7.12, the pull-back $\nabla U=U\times_{X}\nabla X$ is a dense open subset of $\nabla X$. Moreover, Theorem 7.12 also yields that $\nabla U\to U$ is smooth. Since $U$ is non-singular, so is therefore $\nabla U$. In particular, $\nabla U$ is irreducible, whence so is $\nabla X$. Moreover, by the proof of the theorem, $U$ is defined by an ideal of height $n-d$, and $\nabla U$ by an ideal of height $l(n-d)$. Hence $\nabla U$ has dimension $ln-l(n-d)=ld$, whence so does $\nabla X$.
∎
7.15 Corollary.
If $\bar{Z}\subseteq Z$ is a closed immersion of Artinian local $F$-schemes, then for any affine $F$-scheme $X$, we have a split surjection $\nabla_{\!Z}X\to\nabla_{\!\bar{Z}}X$, making the diagram
(27)
$$\begin{array}{ccc}\nabla_{\!Z}X&\longrightarrow&\nabla_{\!\bar{Z}}X\\ &\searrow&\downarrow\\ &&X\end{array}$$
commute.
Proof.
Let $\phi$ be the schemic formula defining $X$, and let $A:=F[x]/I(\phi)$ be its coordinate ring, with $x$ an $n$-tuple of variables. Let $R$ and $\bar{R}$ be the corresponding Artinian local rings, and let $l$ and $\bar{l}$ be their respective lengths. Let $\mathfrak{a}$ be the kernel of the epimorphism $R\to\bar{R}$. We may choose a basis $\Delta=\{\alpha_{0},\dots,\alpha_{l-1}\}$ of $R$ such that $\alpha_{j}\in\mathfrak{a}$ for $j\geq\bar{l}$. Hence the images of the first $\bar{l}$ elements of $\Delta$ form a basis $\bar{\Delta}$ of $\bar{R}$. In view of Lemma 7.10, we may use these two bases to calculate $\nabla_{\!R}\phi$ and $\nabla_{\!\bar{R}}\phi$. Let $\tilde{x}$ be the $ln$-tuple of arc variables. Let us denote the $\bar{l}n$ first variables in $\tilde{x}$ by $\bar{x}$, and let $J$ be the ideal generated by the remaining variables. Hence $\tilde{A}:=F[\tilde{x}]/I(\nabla_{\!R}\phi)$ and $\bar{A}:=F[\bar{x}]/I(\nabla_{\!\bar{R}}\phi)$ are the respective coordinate rings of $\nabla_{\!Z}X$ and $\nabla_{\!\bar{Z}}X$. For $f\in F[x]$, equation (22) shows that $\nabla_{\!i}f$ belongs to $F[\bar{x}]$ for $i<\bar{l}$, and belongs to $J$ for $i\geq\bar{l}$. Hence the embedding $F[\bar{x}]\to F[\tilde{x}]$ induces a homomorphism $\bar{A}\to\tilde{A}$. Moreover, the isomorphism $\tilde{A}/J\tilde{A}\cong\bar{A}$ shows that this homomorphism is split.
∎
Note that if we let $\bar{Z}=\operatorname{Spec}F$ be the closed subscheme given by the residue field, then $\nabla_{\!\bar{Z}}X=X$, showing that $\nabla_{\!Z}X\to X$ is induced by the residue map.
7.16 Corollary.
Each Artinian $F$-scheme $Z$ induces an endomorphism $\nabla_{\!Z}$, called the arc map along $Z$, on ${\mathbf{Gr}(F^{\operatorname{sch}})}$ (respectively, on ${\mathbf{Gr}(F^{\operatorname{pp}})}$), by sending a class ${\left[\phi\right]}$ to the class ${\left[\nabla_{\!Z}\phi\right]}$.
Proof.
Let us show in general that if $\theta$ is a morphic formula giving a morphism $\phi\to\psi$, then $\nabla\theta$ is also morphic and induces a morphism $\nabla\phi\to\nabla\psi$. We verify this on an arbitrary $F$-algebra $A$. Let $a\in\nabla\phi(A)$ and put $\tilde{a}:={\pi^{-1}(a)}$, so that $\tilde{a}\in\phi(\tilde{A})$, where $\tilde{A}:=R\otimes_{F}A$ and $R$ is the Artinian coordinate ring of $Z$. Since $\theta$ is morphic, it satisfies the morphic conditions (1)–(3), and hence we can find $\tilde{b}\in\psi(\tilde{A})$, such that $(\tilde{a},\tilde{b})\in\theta(\tilde{A})$. Let $b:=\pi(\tilde{b})$ so that $b\in\nabla\psi(A)$ and $(a,b)\in\nabla\theta(A)$. In particular, $\nabla\theta$ satisfies the morphic conditions (1) and (3), and by a similar, easy argument we also verify (2), showing that $\nabla\theta$ defines a morphism $\nabla\phi\to\nabla\psi$.
Since $\nabla$ preserves explicit and pp-formulae by Lemma 7.10 and its proof, our previous argument then shows that it also preserves $\mathcal{S}ch$-isomorphisms.
It remains to verify that $\nabla$ also preserves scissor relations. This is clear, however, since $\nabla(\phi\vee\psi)=\nabla\phi\vee\nabla\psi$ and $\nabla(\phi\wedge\psi)=\nabla\phi\wedge\nabla\psi$ by the proof of Proposition 7.4. This also shows that $\nabla$ preserves multiplication, showing that it is a ring endomorphism on either Grothendieck ring.
∎
Integration
Integration is derived from the arc map as follows. Let $X$ and $Z$ be affine $F$-schemes, with $Z$ Artinian. We define their arc-integral as the class
$$\int Z\ dX:={\left[\nabla_{\!Z}X\right]}$$
in ${\mathbf{Gr}(F^{\operatorname{sch}})}$ (or in ${\mathbf{Gr}(F^{\operatorname{pp}})}$). Corollary 7.16 shows that this is well-defined.
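To make the definition concrete, consider the following example (our own illustration, not taken from the text): let $Z=\operatorname{Spec}F[\epsilon]/(\epsilon^{2})$ be the scheme of dual numbers, of length $2$, and let $X$ be the hypersurface $f=0$ in ${\mathbb{A}_{F}^{n}}$. Writing an arc variable as $x+\epsilon y$ and expanding $f(x+\epsilon y)=f(x)+\epsilon\sum_{i}(\partial f/\partial x_{i})(x)\,y_{i}$ (as $\epsilon^{2}=0$), the arc scheme $\nabla_{\!Z}X$ is the total tangent scheme of $X$:

```latex
\nabla_{\!Z}X \;=\; \operatorname{Spec}\,
  F[x,y]\Big/\Big(f(x),\ \sum_{i=1}^{n}\tfrac{\partial f}{\partial x_{i}}(x)\,y_{i}\Big),
\qquad
\int Z\ dX \;=\; \bigl[\nabla_{\!Z}X\bigr].
```

In particular, for $X={\mathbb{A}_{F}^{1}}$ (no equations), this gives $\nabla_{\!Z}X={\mathbb{A}_{F}^{2}}$ and $\int Z\ d{\mathbb{L}}={\mathbb{L}}^{2}$, in accordance with Proposition 7.19 below, since $\ell(Z)=2$.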
7.17 Proposition.
The arc-integral $\int Z\ dX$ only depends on the class of the Artinian scheme $Z$ in ${\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$ and the class of the $F$-scheme $X$ in ${\mathbf{Gr}(F^{\operatorname{sch}})}$. In particular, for any $p\in{\mathbf{Gr}(F^{\operatorname{sch}})}$ and $q\in{\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$, the arc-integral $\int q\ dp$ is a well-defined element of ${\mathbf{Gr}(F^{\operatorname{sch}})}$.
The same result holds upon replacing ${\mathbf{Gr}(F^{\operatorname{sch}})}$ by ${\mathbf{Gr}(F^{\operatorname{pp}})}$ everywhere.
Proof.
The dependence on the class of $X$ follows from Corollary 7.16. Using Corollary 4.9, one easily shows that $\int Z\ dX$ only depends on the isomorphism class of $Z$. To show that it even depends on the class of $Z$ in ${\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$ only, we need to show that it vanishes on any scissor relation. By Lemma 3.8 (using Lemmas 4.2 and 4.4), it suffices to verify this on the scissor relation $Z+Z^{\prime}-Z\sqcup Z^{\prime}$, with $Z$ and $Z^{\prime}$ Artinian schemes. We need to show that
(28)
$$\int Z\ dX+\int Z^{\prime}\ dX=\int Z\sqcup Z^{\prime}\ dX.$$
Let $R$ and $R^{\prime}$ be the corresponding Artinian $F$-algebras, and let $\phi$ be the schemic formula defining $X$. Hence $R\oplus R^{\prime}$ is the Artinian algebra corresponding to $Z\sqcup Z^{\prime}$. Therefore, (28) is equivalent with showing that ${\left[\nabla_{\!R}\phi\right]}+{\left[\nabla_{\!R^{\prime}}\phi\right]}={\left[\nabla_{\!R\oplus R^{\prime}}\phi\right]}$, and this, in turn, will follow if we can show that the disjoint sum $\nabla_{\!R}\phi\oplus\nabla_{\!R^{\prime}}\phi$ is isomorphic with $\nabla_{\!R\oplus R^{\prime}}\phi$ modulo $\mathbf{Art}_{F}$. We verify this on an arbitrary model $S$ of $\mathbf{Art}_{F}$. Using Lemma 3.3 and Proposition 7.4 both twice, we have
$$\displaystyle\nabla_{\!R\oplus R^{\prime}}\phi(S)$$
$$\displaystyle=\phi((R\oplus R^{\prime})\otimes_{F}S)$$
$$\displaystyle=\phi((R\otimes_{F}S)\oplus(R^{\prime}\otimes_{F}S))$$
$$\displaystyle\cong\phi(R\otimes_{F}S)\sqcup\phi(R^{\prime}\otimes_{F}S)$$
$$\displaystyle=(\nabla_{\!R}\phi)(S)\sqcup(\nabla_{\!R^{\prime}}\phi)(S)$$
$$\displaystyle=(\nabla_{\!R}\phi\oplus\nabla_{\!R^{\prime}}\phi)(S),$$
where the middle bijection is induced by the canonical bijection between $(R\oplus R^{\prime})^{n}$ and $R^{n}\oplus(R^{\prime})^{n}$. Since this bijection is easily seen to be induced by an (explicit) isomorphism, this completes the proof of (28).
The same argument shows that $\int\cdot\ dX$ is additive, and hence can be defined on any element $q\in{\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$.
By Lemma 5.3, we can write an arbitrary element $p\in{\mathbf{Gr}(F^{\operatorname{sch}})}$ as a difference ${\left[X\right]}-{\left[X^{\prime}\right]}$. We let
$$\int q\ dp:=\int q\ dX-\int q\ dX^{\prime}.$$
This is well-defined, as each arc map $\nabla_{\!Z}$ is a ring homomorphism, whence in particular additive.
∎
Since ${\left[\operatorname{Spec}F\right]}=1$ and the arc map $\nabla_{\!\operatorname{Spec}F}$ is the identity, we get the suggestive formula
$$\int\ dX={\left[X\right]}.$$
Since arc maps are also multiplicative, we get the following Fubini-type formula:
7.18 Proposition.
For $Z$ an Artinian $F$-scheme and $X$ and $Y$ affine $F$-schemes, we have
$$\int Z\ d(X\times_{F}Y)=\int Z\ dX\cdot\int Z\ dY$$
in ${\mathbf{Gr}(F^{\operatorname{sch}})}$.∎
The same product formula also holds in ${\mathbf{Gr}(F^{\operatorname{pp}})}$, where we may even drop the affineness assumption in view of Theorem 6.4. Since the arc of the Lefschetz formula in one variable is the Lefschetz formula in $\ell(R)$ variables, we immediately obtain from Lemma 7.3 that:
7.19 Proposition.
For any element $q\in{\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$, we have $\int q\ d{\mathbb{L}}={\mathbb{L}}^{\ell(q)}$.∎
The next result generalizes this to an arbitrary smooth scheme, provided we work in the Grothendieck ring ${\mathbf{Gr}(F^{\operatorname{pp}})}$; this is needed since we need the covering properties proven in Theorem 6.4. We call a morphism $Y\to X$ of $F$-schemes a locally trivial fibration with fiber $W$ if for each (closed) point $P\in X$, we can find an open $U\subseteq X$ containing $P$ such that the restriction of $Y\to X$ to $U$ is isomorphic with the projection $U\times_{F}W\to U$.
7.20 Corollary.
If $\bar{Z}\subseteq Z$ is a closed immersion of Artinian local $F$-schemes, and $X$ is a smooth $d$-dimensional affine $F$-scheme, then $\nabla_{\!Z}X\to\nabla_{\!\bar{Z}}X$ is a locally trivial fibration with fiber ${\mathbb{A}_{F}^{dm}}$, where $m=\ell(Z)-\ell(\bar{Z})$. In particular,
$$\int Z\ dX={\left[X\right]}\cdot{\mathbb{L}}^{d\ell(Z)-d}$$
in ${\mathbf{Gr}(F^{\operatorname{pp}})}$.
Proof.
Let $R$ and $\bar{R}$ be the Artinian local coordinate rings of $Z$ and $\bar{Z}$ respectively, and let $\phi$ be the schemic formula $f_{1}=\dots=f_{s}=0$ defining $X\subseteq{\mathbb{A}_{F}^{m}}$. We will write $\nabla$ and $\bar{\nabla}{}$ for the respective arc maps $\nabla_{\!Z}$ and $\nabla_{\!\bar{Z}}$, and similarly, $\pi$ and $\bar{\pi}$ for the isomorphisms $R^{m}\cong F^{\ell(R)m}$ and $\bar{R}^{m}\cong F^{\ell(\bar{R})m}$. Since the composition of locally trivial fibrations is again a locally trivial fibration, with general fiber the product of the fibers, we may reduce to the case that $\bar{R}=R/\alpha R$ with $\alpha$ an element in the socle of $R$, that is to say, such that $\alpha\mathfrak{m}=0$, where $\mathfrak{m}$ is the maximal ideal of $R$. Let $l$ be the length of $R$, and let $\Delta$ be a basis of $R$ as in Remark 7.11, with $\alpha_{l-1}=\alpha$ (since $\alpha$ is a socle element, such a basis always exists). In particular, $\Delta-\{\alpha\}$ is a basis of $\bar{R}$. We will use these bases to calculate both arc maps.
We start with calculating a general fiber of the map $\nabla X\to\bar{\nabla}{X}$. By Corollary 4.9, it suffices to do this in an arbitrary model of $\mathbf{Art}_{F}$, that is to say, to calculate the fiber of an $S$-rational point on $\bar{\nabla}{X}$, where $S$ is any Artinian local $F$-algebra. Let $\bar{a}$ be an $m$-tuple in $\phi(\bar{R}\otimes_{F}S)$. By Proposition 7.5, its image $\bar{\pi}(\bar{a})$ is an $(l-1)m$-tuple in $\bar{\nabla}{\phi}(S)$, corresponding by Lemma 4.5, therefore, to an $S$-rational point $\bar{Q}$ of $\bar{\nabla}{X}$, and any $S$-rational point of $\bar{\nabla}{X}$ is obtained in this way. Let $P$ be the $S$-rational point on $X$ given as the image of $\bar{Q}$ under the canonical morphism $\bar{\nabla}{X}\to X$. (The image of an $S$-rational point $\operatorname{Spec}S\to Y$ under a morphism $Y\to X$ is simply the composition of these two morphisms, yielding an $S$-rational point on $X$.) Hence $a:=\bar{\pi}_{0}(\bar{a})$ is the $m$-tuple in $\phi(S)$ corresponding to $P$.
The surjection $R\to\bar{R}$ induces a surjection $R\otimes_{F}S\to\bar{R}\otimes_{F}S$ and hence a map $\phi(R\otimes_{F}S)\to\phi(\bar{R}\otimes_{F}S)$. The fiber above $\bar{a}$ is therefore defined by the equations $f_{j}(\bar{a}+\tilde{x}_{l-1}\alpha)=0$, for $j=1,\dots,s$. By Taylor expansion, this becomes
(29)
$$0=f_{j}(\bar{a}+\tilde{x}_{l-1}\alpha)=\Big{(}\sum_{i=1}^{m}\frac{\partial f_{j}}{\partial x_{i}}(\bar{a})\tilde{x}_{l-1,i}\Big{)}\alpha$$
since $f_{j}(\bar{a})=0$ and $\alpha^{2}=0$ in $R\otimes_{F}S$. In fact, since $\bar{a}\equiv a\mod\mathfrak{m}(R\otimes_{F}S)$ and $\alpha\mathfrak{m}=0$, we may replace each $\partial f_{j}/\partial x_{i}(\bar{a})$ in (29) by $\partial f_{j}/\partial x_{i}(a)$. In terms of $S$-rational points, therefore, the fiber above $\bar{Q}$ is the linear subspace of $S^{m}$ defined as the kernel of the Jacobian $(s\times m)$-matrix $\operatorname{Jac}_{\phi}(P)$.
Since $X$ is non-singular at $P$, this matrix has rank $m-d$, and hence its kernel is a $d$-dimensional linear subspace. This proves that the fiber is equal to ${\mathbb{A}_{S}^{d}}$. More precisely, we may choose $\phi$ so that the first $(m-d)\times(m-d)$-minor in $\operatorname{Jac}_{\phi}(P)$ is invertible. Therefore, by Cramer's rule, we may express each $\tilde{x}_{l-1,i}$ with $i\leq m-d$ as a linear combination of the $\tilde{x}_{l-1,j}$ with $j>m-d$ on an open neighborhood of $\bar{Q}$. Since this holds in any $S$-rational point $\bar{Q}$ of $\bar{\nabla}{X}$, we have shown that $\nabla X\to\bar{\nabla}{X}$ is a locally trivial fibration with fiber ${\mathbb{A}_{F}^{d}}$.
Applying this to $\nabla X\to X$ (note that $X=\nabla_{\!F}X$), we get a locally trivial fibration with fiber equal to ${\mathbb{A}_{F}^{d(l-1)}}$. The last assertion then follows from Lemma 7.21 below.
∎
7.21 Lemma.
If $f\colon Y\to X$ is a locally trivial fibration of $F$-schemes with fiber $Z$, then ${\left[Y\right]}={\left[X\right]}\cdot{\left[Z\right]}$ in ${\mathbf{Gr}(F^{\operatorname{pp}})}$.
Proof.
By definition and compactness, there exists a finite open covering $\{U_{1},\dots,U_{m}\}$ of $X$, so that
$${f^{-1}(U_{i})}\cong U_{i}\times_{F}Z,$$
for $i=1,\dots,m$. Taking classes in ${\mathbf{Gr}(F^{\operatorname{pp}})}$, Lemma 5.5 yields ${\left[{f^{-1}(U_{i})}\right]}={\left[U_{i}\right]}\cdot{\left[Z\right]}$. Since the ${f^{-1}(U_{i})}$ form an open affine covering of $Y$, Theorem 6.4 yields, after taking the $m$-th scissor polynomial on both sides,
$${\left[Y\right]}={\left[S({f^{-1}(U_{1})},\dots,{f^{-1}(U_{m})})\right]}={\left[S(U_{1},\dots,U_{m})\right]}\cdot{\left[Z\right]}={\left[X\right]}\cdot{\left[Z\right]}$$
in ${\mathbf{Gr}(F^{\operatorname{pp}})}$.
∎
7.22 Remark.
Example 7.7 shows that over a singular point, the dimension of the fiber may increase.
Igusa-zeta series
By a germ, we mean a pair $(X,P)$ with $X$ an $F$-scheme and $P$ a closed point on $X$; if $X$ is a closed subscheme of $X^{*}$, then we also say that $(X,P)$ is a germ in $X^{*}$. For any $F$-scheme $Y$, we can define the (geometric) Igusa-zeta series of $Y$ along the germ $(X,P)$ as the formal power series
$$\operatorname{Igu}^{(X,P)}_{Y}(t):=\int\operatorname{Hilb}_{P}(X)\ dY=\sum_{n}\left(\int J_{P}^{n}X\ dY\right)t^{n}=\sum_{n}{\left[\nabla_{\!J_{P}^{n}X}Y\right]}\cdot t^{n}$$
in ${\mathbf{Gr}(F^{\operatorname{sch}})}[[t]]$. Note that this is well-defined since each jet is Artinian. This definition generalizes the one in [DLIgu] or [DLDwork, §4]:
7.23 Proposition.
The Igusa-zeta series $\operatorname{Igu}^{({\mathbb{A}_{F}^{1}},O)}_{Y}$ of $Y$ along the germ of the origin on the affine line is sent under the canonical homomorphism
$${\mathbf{Gr}(F^{\operatorname{sch}})}[[t]]\to{\mathbf{Gr}(F^{\operatorname{var}})}[[t]]$$
to the geometric Igusa-zeta function $\operatorname{Igu}^{\operatorname{geom}}_{Y}$ of $Y$. If $F$ has characteristic zero, then this image is a rational function.
Proof.
Let $P$ be the origin on the affine line. By our discussion preceding Example 7.7, the arc-integral $\int J_{P}^{n}{\mathbb{L}}\ dY$ is equal to the class of the $n$-th truncated arc space $\mathcal{L}_{n}(Y)$, and hence the first assertion follows from the definition of the geometric Igusa-zeta function in [DLDwork, §4]. Rationality over the classical Grothendieck ring ${\mathbf{Gr}(F^{\operatorname{var}})}$ is proven in [DLDwork, Theorem 4.2.1].
∎
For curves, we can give an explicit formula for the Igusa-zeta series of the Lefschetz class:
7.24 Proposition.
If $(C,P)$ is a germ of a point of multiplicity $e$ on a curve $C$ over $F$, then
$$\operatorname{Igu}^{(C,P)}_{{\mathbb{A}_{F}^{1}}}=\frac{p(t)}{1-{\mathbb{L}}^{e}t}$$
for some polynomial $p\in{\mathbf{Gr}(F^{\operatorname{sch}})}[t]$.
Proof.
By definition, the multiplicity of the germ $(C,P)$ is the multiplicity of the local ring ${\mathcal{O}}_{C,P}$. By Hilbert theory, there exist $b,N\in\mathbb{Z}$ such that the length of ${J_{P}^{n}C}$ for $n\geq N$ is equal to $en+b$. Using Proposition 7.19, we get
$$\int{J_{P}^{n}C}\ d{\mathbb{L}}={\mathbb{L}}^{en+b}$$
for $n\geq N$. Hence $\operatorname{Igu}^{(C,P)}_{{\mathbb{A}_{F}^{1}}}$ is the sum of a polynomial of degree less than $N$ and the power series
$$\sum_{n}{\mathbb{L}}^{en+b}t^{n}=\frac{{\mathbb{L}}^{b}}{1-{\mathbb{L}}^{e}t},$$
from which the assertion easily follows.
∎
The above proof shows that
(30)
$$\operatorname{Igu}^{(X,P)}_{{\mathbb{A}_{F}^{1}}}=\sum_{n}{\mathbb{L}}^{j^{n}_{P}(X)}t^{n}$$
for any germ $(X,P)$, where $j^{n}_{P}(X):=\ell(J_{P}^{n}X)$.
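As a worked instance of (30) (the choice of germ is ours): take the germ of the origin $O$ on the affine line itself. Then $J_{O}^{n}{\mathbb{A}_{F}^{1}}=\operatorname{Spec}F[x]/(x^{n})$ has length $n$, so $j^{n}_{O}({\mathbb{A}_{F}^{1}})=n$ and

```latex
\operatorname{Igu}^{({\mathbb{A}_{F}^{1}},O)}_{{\mathbb{A}_{F}^{1}}}
  \;=\; \sum_{n}{\mathbb{L}}^{n}t^{n}
  \;=\; \frac{1}{1-{\mathbb{L}}\,t},
```

which is the case $e=1$ (a smooth point has multiplicity one) and $p=1$ of Proposition 7.24.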
For a smooth scheme, we have the following rationality result:
7.25 Proposition.
Let $(C,P)$ be a germ of multiplicity $e$ on a curve. For any $d$-dimensional smooth affine scheme $Y$ over $F$, the Igusa-zeta series of $Y$ along the germ $(C,P)$ is a rational function over ${\mathbf{Gr}(F^{\operatorname{pp}})}$. More precisely,
(31)
$$\operatorname{Igu}^{(C,P)}_{Y}=\frac{p(t)}{1-{\mathbb{L}}^{de}t}$$
for some polynomial $p\in{\mathbf{Gr}(F^{\operatorname{pp}})}[t]$.
Proof.
By Hilbert theory, there exist $b,N\in\mathbb{Z}$ such that $j_{P}^{n}(C)=en+b$ for $n\geq N$. By Corollary 7.20, the coefficient of the $n$-th term in $\operatorname{Igu}^{(C,P)}_{Y}$ is therefore equal to ${\left[Y\right]}\cdot{\mathbb{L}}^{d(en+b-1)}$ for $n\geq N$, and the result follows as in the proof of Proposition 7.24.
∎
Motivic integrals
If we integrate over a higher-dimensional scheme, then (30) suggests that we should calibrate the Igusa-zeta series to maintain rationality. To this end, we will define a normalized integration, which more closely resembles the motivic integration of Kontsevich, Denef and Loeser ([CrawMot, DLArcs, LooMot]). More precisely, let ${\mathbf{Gr}(F^{\operatorname{sch}})}_{\mathbb{L}}$ and $\mathbf{Gr}(F^{\operatorname{pp}})_{{\mathbb{L}}}$ be the respective localizations of ${\mathbf{Gr}(F^{\operatorname{sch}})}$ and ${\mathbf{Gr}(F^{\operatorname{pp}})}$ at ${\mathbb{L}}$, that is to say, ${\mathbf{Gr}(F^{\operatorname{sch}})}_{\mathbb{L}}={\mathbf{Gr}(F^{\operatorname{sch}})}[{{\mathbb{L}}^{-1}}]$ and $\mathbf{Gr}(F^{\operatorname{pp}})_{{\mathbb{L}}}={\mathbf{Gr}(F^{\operatorname{pp}})}[{{\mathbb{L}}^{-1}}]$. Let $X$ and $Z$ be affine $F$-schemes with $Z$ Artinian. We define the motivic integral of $X$ along $Z$ to be
$$\int^{\operatorname{mot}}Z\ dX:={\mathbb{L}}^{-dl}\cdot\int Z\ dX={\mathbb{L}}^{-dl}\cdot{\left[\nabla_{\!Z}X\right]},$$
with $d$ the dimension of $X$ and $l$ the length of $Z$, viewed either as an element in ${\mathbf{Gr}(F^{\operatorname{sch}})}_{\mathbb{L}}$ or in $\mathbf{Gr}(F^{\operatorname{pp}})_{{\mathbb{L}}}$.
We define the motivic Igusa-zeta series of $Y$ along a germ $(X,P)$ as the formal power series
$$\operatorname{Igu}^{(X,P)}_{Y^{\operatorname{mot}}}(t):=\int^{\operatorname{mot}}\operatorname{Hilb}_{P}(X)\ dY=\sum_{n}{\mathbb{L}}^{-d\cdot j^{n}_{P}(X)}\left(\int{J_{P}^{n}X}\ dY\right)t^{n}$$
in ${\mathbf{Gr}(F^{\operatorname{sch}})}_{\mathbb{L}}[[t]]$ (respectively, in $\mathbf{Gr}(F^{\operatorname{pp}})_{{\mathbb{L}}}[[t]]$), where $d$ is the dimension of $Y$. In particular, by the same argument as in the proof of Proposition 7.25, we get
$$\operatorname{Igu}^{(X,P)}_{Y^{\operatorname{mot}}}=\frac{{\left[Y\right]}\cdot{\mathbb{L}}^{-d}}{1-t},$$
over $\mathbf{Gr}(F^{\operatorname{pp}})_{{\mathbb{L}}}$, for any germ $(X,P)$, and any smooth affine $F$-scheme $Y$. This raises the following question:
7.26 Conjecture.
If $F$ is an algebraically closed field of characteristic zero, then, for any affine $F$-scheme $Y$, the motivic Igusa-zeta series $\operatorname{Igu}^{(X,P)}_{Y^{\operatorname{mot}}}$ of $Y$ along an arbitrary germ $(X,P)$ is rational over $\mathbf{Gr}(F^{\operatorname{pp}})_{{\mathbb{L}}}$.
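Returning to the smooth case, here is a quick sanity check (our own) of the rationality formula for $\operatorname{Igu}^{(X,P)}_{Y^{\operatorname{mot}}}$ displayed above: take $Y={\mathbb{A}_{F}^{d}}$, so that ${\left[Y\right]}={\mathbb{L}}^{d}$. By Propositions 7.18 and 7.19, $\int{J_{P}^{n}X}\ dY={\mathbb{L}}^{d\cdot j^{n}_{P}(X)}$, so every coefficient of the motivic series equals ${\mathbb{L}}^{-d\cdot j^{n}_{P}(X)}\cdot{\mathbb{L}}^{d\cdot j^{n}_{P}(X)}=1$ and

```latex
\operatorname{Igu}^{(X,P)}_{({\mathbb{A}_{F}^{d}})^{\operatorname{mot}}}
  \;=\; \sum_{n} t^{n}
  \;=\; \frac{1}{1-t}
  \;=\; \frac{{\mathbb{L}}^{d}\cdot{\mathbb{L}}^{-d}}{1-t},
```

in agreement with the formula $\operatorname{Igu}^{(X,P)}_{Y^{\operatorname{mot}}}={\left[Y\right]}{\mathbb{L}}^{-d}/(1-t)$ for smooth $Y$.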
8. Infinitary Grothendieck rings
In this section, we will extend the previous definitions to include infinitary formulae. This turns out to be necessary when dealing with the complement of an open subscheme, as we mentioned already in the introduction.
Formularies
Let $\mathcal{L}$ be an arbitrary first-order language, and let $\Phi$ be a collection of $\mathcal{L}$-formulae of some fixed arity $n$, which we then call the arity of $\Phi$. For an $\mathcal{L}$-structure $M$, let $\Phi(M)$ be the subset in $M^{n}$ given as the union of all $\phi(M)$ with $\phi\in\Phi$. In other words, we view $\Phi$ as an infinitary disjunction, defining in each structure a subset $\Phi(M)$ which is in general only infinitary definable (and its complement is type-definable). We call $\Phi(M)$ the interpretation of $\Phi$ in $M$.
Since elementary classes no longer behave the same as their theories on infinitary formulae, we must shift our attention from the latter to the former. So, let $\mathfrak{K}$ be a class of $\mathcal{L}$-structures, and let $\Phi$ and $\Psi$ be two collections of $\mathcal{L}$-formulae of the same arity $n$. We say that $\Phi$ and $\Psi$ are $\mathfrak{K}$-equivalent if $\Phi(M)=\Psi(M)$ for all $M\in\mathfrak{K}$. In particular, if we let $\Phi^{\vee}$ be the collection of all finite disjuncts of formulae in $\Phi$, then $\Phi^{\vee}$ is $\mathfrak{K}$-equivalent with $\Phi$, and so without loss of generality, we may always assume, up to equivalence, that a collection is closed under finite disjunctions.
We say that $\Phi$ is a formulary with respect to $\mathfrak{K}$ if for each structure $M\in\mathfrak{K}$, there is some $\phi\in\Phi$ such that $\phi(M)=\Phi(M)$. Note that $\phi$ will in general depend on the structure $M$. Put differently, although in each $\mathfrak{K}$-structure $M$, the subset $\Phi(M)$ is definable, its definition depends on the given structure $M$. We say that $\Phi$ is first-order if it is $\mathfrak{K}$-equivalent with a first-order formula $\phi$. Although we do not insist in this definition on $\phi$ being part of $\Phi$, there is no loss of generality in including it, since adding it yields a $\mathfrak{K}$-equivalent formulary. In particular, we may view formulae as first-order formularies and we will henceforth identify both (up to $\mathfrak{K}$-equivalence).
Most of the logical operations generalize to this infinitary setting. Namely, given $\Phi\subseteq\mathcal{L}_{n}$ and $\Psi\subseteq\mathcal{L}_{m}$, let $\Phi\wedge\Psi$ be the collection of all $\phi\wedge\psi$ with $\phi\in\Phi$ and $\psi\in\Psi$, and let $\Phi\vee\Psi:=\Phi\cup\Psi$. Similarly, we define $\Phi\times\Psi\subseteq\mathcal{L}_{n+m}$ as the collection of all $\phi\times\psi$. In case $\mathcal{L}$ contains the two constant symbols $0$ and $1$, we also define $\Phi\oplus\Psi$ as $\Phi^{\prime}\cup\Psi^{\prime}$, where $\Phi^{\prime}$ consists of all $\phi\wedge(v_{n+1}=0)$ for $\phi\in\Phi$, and $\Psi^{\prime}$ of all $\psi\wedge(v_{n+1}=1)$ for $\psi\in\Psi$ (where we assume $m\leq n$). We leave it to the reader to verify that all these operations preserve formularies. If a formulary is not first-order, then we cannot, however, define its negation (in model-theoretic terms, the negation of a formulary is a type).
From now on, $\mathcal{L}:=\mathcal{L}_{F}$, the language of $F$-algebras, for $F$ an algebraically closed field, and $\mathfrak{K}=\mathbf{Art}_{F}$ the collection of all Artinian local $F$-algebras.
We call a formulary $\Phi$ respectively schemic or pp if all formulae in $\Phi$ are of that kind.
Total jets and formal schemes
Let $\phi$ be an arbitrary formula and $\zeta$ a schemic formula. We define the total jet of $\phi$ along $\zeta$ as the collection $J_{\zeta}\phi$ of all $n$-jets $J_{\zeta}^{n}\phi$. Let us show that $J_{\zeta}\phi$ is a formulary. Let $\mathfrak{a}:=I(\zeta)$ be the ideal of $\zeta$, and let $(R,\mathfrak{m})$ be an Artinian local $F$-algebra of length $l$. I claim that $(J_{\zeta}\phi)(R)$ is equal to $(J_{\zeta}^{l}\phi)(R)$. To prove this, it suffices to show that $(J_{\zeta}^{n}\phi)(R)=(J_{\zeta}^{n+1}\phi)(R)$, for all $n\geq l$. One direction is clear, so assume that $a\in(J_{\zeta}^{n+1}\phi)(R)$. Hence, for each $f\in\mathfrak{a}$, we have $f^{n+1}(a)=0$, whence $f(a)\in\mathfrak{m}$. Since this holds for all $f\in\mathfrak{a}$, we see that $g(a)=0$ in $R$ for every $g\in\mathfrak{a}^{n}$, as $\mathfrak{m}^{n}=0$. In conclusion, $a\in(J_{\zeta}^{n}\phi)(R)$.
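For example (our own illustration): let $\zeta$ be the formula $x=0$ defining the origin in ${\mathbb{A}_{F}^{1}}$ and let $\phi$ be trivial, so that $J_{\zeta}^{n}\phi$ is the formula $x^{n}=0$. In an Artinian local $F$-algebra $(R,\mathfrak{m})$ of length $l$, the interpretation stabilizes exactly as claimed:

```latex
(J_{\zeta}\phi)(R) \;=\; (J_{\zeta}^{l}\phi)(R)
  \;=\; \{a\in R \mid a^{l}=0\}
  \;=\; \mathfrak{m},
```

since every element of $\mathfrak{m}$ is nilpotent of order at most $l$, while no unit is nilpotent. Thus the total jet of the origin in the affine line picks out exactly the maximal ideal, in accordance with Proposition 8.1 below.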
If $\phi$ and $\zeta$ are schemic with $\zeta\Rightarrow\phi$, therefore corresponding to a closed immersion $Z\subseteq X$, then we also will write $J_{Z}X$ for $J_{\zeta}\phi$. We can give the following geometric interpretation of total jets:
8.1 Proposition.
For $Y\subseteq X$ a closed immersion of affine $F$-schemes, there is, for every Artinian local $F$-algebra $R$, a one-one correspondence between the $R$-rational points of the formal scheme $\widehat{X}_{Y}$ and the interpretation of the formulary $J_{Y}X$ in $R$.
Proof.
Let $A:={\mathcal{O}}_{X}(X)$ be the coordinate ring of $X$, and $I$ the ideal defining $Y$, that is to say, ${\mathcal{O}}_{Y}(Y)=A/I$. Hence $J_{Y}^{n}X=\operatorname{Spec}(A/I^{n})$. By [Hart, II.§9], the formal completion $\widehat{X}_{Y}$ is the ringed space with underlying set equal to the underlying set of $Y$ and with sheaf of rings the inverse limit of the sheaves ${\mathcal{O}}_{J_{Y}^{n}X}$. In particular, the ring of global sections is equal to the $I$-adic completion $\widehat{A}$ of $A$.
Let $(R,\mathfrak{m})$ be an Artinian local $F$-algebra, and let $\operatorname{Spec}R\to\widehat{X}_{Y}$ be an $R$-rational point of $\widehat{X}_{Y}$ over $F$.
Taking global sections, we get an $F$-algebra homomorphism $\widehat{A}\to R$. Since the closed point of $\operatorname{Spec}(R)$ is sent to a point in the underlying set of $Y$, we have $IR\subseteq\mathfrak{m}$. If $R$ has length $l$, then $I^{l}$ lies in the kernel of $\widehat{A}\to R$, and so we get a factorization $\widehat{A}\to\widehat{A}/I^{l}\cong A/I^{l}\to R$, that is to say, an $R$-rational point $\operatorname{Spec}R\to J_{Y}^{l}X$. By Lemma 4.5, this corresponds to a tuple in $(J_{Y}^{l}X)(R)$ whence in $(J_{Y}X)(R)$, where, as before, we identify jets with the schemic formulae defining them. Conversely, a tuple in $(J_{Y}X)(R)$ lies in some $(J_{Y}^{n}X)(R)$, and hence, by Lemma 4.5, yields an $F$-algebra homomorphism $A/I^{n}\to R$. Composition with the canonical surjection $\widehat{A}\to A/I^{n}$ then induces an $R$-rational point on $\widehat{X}_{Y}$.
∎
This proposition allows us to identify the total jet $J_{Y}X$ with the formal scheme $\widehat{X}_{Y}$, which we henceforth will do.
8.2 Lemma.
If $Y_{i}\subseteq X_{i}$ for $i=1,2$ are two closed immersions, then we have the following product formula
$$(J_{Y_{1}}X_{1})\times_{F}(J_{Y_{2}}X_{2})\cong J_{Y_{1}\times_{F}Y_{2}}(X_{1}\times_{F}X_{2}).$$
Proof.
Let $A_{i}$ be the coordinate ring of $X_{i}$, and $I_{i}$ the ideal defining $Y_{i}$. Hence $X_{1}\times X_{2}$ has coordinate ring $A:=A_{1}\otimes_{F}A_{2}$ and $Y_{1}\times Y_{2}$ is defined by the ideal $I:=I_{1}A+I_{2}A$. The ideals corresponding to the schemic formulae in the total jets $J_{Y_{i}}X_{i}$ and $J_{Y_{1}\times Y_{2}}(X_{1}\times X_{2})$ are respectively the powers of $I_{i}$ and of $I$. Since $I^{2n}\subseteq I_{1}^{n}A+I_{2}^{n}A\subseteq I^{n}$, the formula follows readily.
∎
Although one could give a more general notion of morphism based on morphic formularies, we define them only using (first-order) formulae. Namely, let $\mathcal{I}$ be a family of (first-order) formulae, and let $\Phi\subseteq\mathcal{L}_{n}$ and $\Psi\subseteq\mathcal{L}_{m}$ be formularies. By an $\mathcal{I}$-morphism $f\colon\Phi\to\Psi$, we mean a morphic formula $\theta\in\mathcal{I}$ such that for each Artinian local $F$-algebra $R$, the definable subset $\theta(R)\subseteq R^{n+m}$ restricted to $\Phi(R)$ is the graph of a map $f_{R}\colon\Phi(R)\to\Psi(R)$. Here we have to replace the morphic conditions (1)–(3) by their appropriate counterparts (more precisely, replace $\phi$ and $\psi$ in these sentences respectively by the infinite disjunctions $\bigvee\Phi$ and $\bigvee\Psi$, and then require the resulting (non-first-order) sentences to hold in any Artinian local $F$-algebra). As before, an $\mathcal{I}$-isomorphism is an $\mathcal{I}$-morphism which is a bijection on each model, and whose inverse is also an $\mathcal{I}$-morphism.
The infinitary pp-Grothendieck ring
Let $\mathcal{PP}^{\infty}$
be the lattice of all formularies consisting of formulae in $\mathcal{PP}$, with $\wedge$ and $\vee$ as defined above. On this lattice, we have an isomorphism relation $\cong_{\mathcal{S}ch}$, given by schemic isomorphisms modulo $\mathbf{Art}_{F}$.
We define the infinitary pp-Grothendieck ring as
$$\mathbf{Gr}^{\infty}(F^{\operatorname{pp}}):=\mathbf{K}_{\cong_{\mathcal{S}ch}}(\mathcal{PP}^{\infty}),$$
where the multiplication is induced by the same argument as in Lemma 3.4 by the multiplication on formularies. Recall that $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$ is the quotient of the free Abelian group $\mathbb{Z}[\mathcal{PP}^{\infty}]$ modulo the subgroup generated by all ${\langle\Phi\rangle}-{\langle\Psi\rangle}$ for $\mathcal{S}ch$-isomorphic formularies $\Phi$ and $\Psi$, and by all ${\langle\Phi\vee\Psi\rangle}+{\langle\Phi\wedge\Psi\rangle}-{\langle\Phi\rangle}-{\langle\Psi\rangle}$.
Note that $\mathcal{PP}^{\infty}$ is not Boolean, since the negation of a formulary does not exist. In particular, we no longer have the analogue of the second property in Lemma 3.7. I do not know whether the analogue of Corollary 3.9 holds (the proof of the corollary relies on the negation property, and hence is not admissible here).
Since pp-formulae are just first-order pp-formularies, we get a canonical homomorphism
$${\mathbf{Gr}(F^{\operatorname{pp}})}\to\mathbf{Gr}^{\infty}(F^{\operatorname{pp}}).$$
This homomorphism, however, is not an embedding, as can be seen from the following relation.
8.3 Theorem.
For $Y\subseteq X$ a closed immersion of affine $F$-schemes, we have a relation
$${\left[X\right]}={\left[X-Y\right]}+{\left[J_{Y}X\right]}$$
in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$.
Proof.
Let $\phi$ be the schemic formula of $X$ and $A$ its coordinate ring. Let $Y$ be defined by $\phi\wedge(f_{1}=\dots=f_{s}=0)$, so that $I:=(f_{1},\dots,f_{s})A$ is its ideal of definition in $X$. Let $\psi_{i}$ be the pp-formula defining the basic open $D_{i}:=\operatorname{Spec}A_{f_{i}}$ of $U:=X-Y$, that is to say, $\psi_{i}:=\phi\wedge(\exists y)(yf_{i}=1)$. It follows that $\{D_{1},\dots,D_{s}\}$ is an open covering of $U$, and hence by Corollary 6.6, we have
$${\left[U\right]}={\left[\psi_{1}\vee\dots\vee\psi_{s}\right]}$$
in ${\mathbf{Gr}(F^{\operatorname{pp}})}$, whence also in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$.
It remains to show that $\phi\wedge\neg\psi_{1}\wedge\dots\wedge\neg\psi_{s}$ is $\mathbf{Art}_{F}$-equivalent to the formulary $J_{Y}X$.
One direction is obvious, and to verify the other, we check this in an arbitrary Artinian local $F$-algebra $(R,\mathfrak{m})$. Let $a$ be a tuple in $R$ satisfying $\phi\wedge\neg\psi_{1}\wedge\dots\wedge\neg\psi_{s}$. In particular, $f_{i}(a)\in\mathfrak{m}$, for all $i$. For $l$ at least the length of $R$, we therefore get $g(a)=0$ for every $g\in I^{l}$, from which it follows that $a\in(J_{Y}X)(R)$.
∎
Using Proposition 8.1, we get the following more suggestive version of Theorem 8.3: for any closed immersion $Y\subseteq X$ of affine $F$-schemes, we have
(32)
$${\left[X\right]}={\left[X-Y\right]}+{\left[\widehat{X}_{Y}\right]}$$
in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$.
Formal Lefschetz class
We define the formal Lefschetz class, denoted $\widehat{{\mathbb{L}}}$, as the class of the formal completion of the affine line at the origin $O$, that is to say,
$$\widehat{{\mathbb{L}}}:={\left[J_{O}{\mathbb{A}_{F}^{1}}\right]}={\left[\widehat{({\mathbb{A}_{F}^{1}})}_{O}\right]}.$$
By (32), we may now give the correct decomposition formula for the Lefschetz class discussed at the end of Remark 6.7, namely, in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$ we have
(33)
$${\mathbb{L}}={\mathbb{L}}^{*}+\widehat{{\mathbb{L}}}.$$
By Lemma 8.2 and the Nullstellensatz, we have
(34)
$${\left[J_{P}{\mathbb{A}_{F}^{n}}\right]}={\left[\widehat{({\mathbb{A}_{F}^{n}})}_{P}\right]}=\widehat{{\mathbb{L}}}^{n},$$
for any closed point $P$ in ${\mathbb{A}_{F}^{n}}$. We next calculate the class of projective space.
Using the standard affine covering by the basic opens $\operatorname{D}(x_{i})\subseteq\mathbb{P}_{F}^{n}$, one easily verifies that the morphism $({\mathbb{A}_{F}^{n+1}}-O)\to\mathbb{P}_{F}^{n}$, given by sending the affine coordinates $(x_{0},\dots,x_{n})$ to the projective ones $(x_{0}:\cdots:x_{n})$, is a locally trivial fibration with fiber ${\mathbb{A}_{F}^{1}}-O$, where $O$ is the origin. By Theorem 8.3 and (34), the class of ${\mathbb{A}_{F}^{i}}-O$ in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$ is equal to ${\mathbb{L}}^{i}-\widehat{{\mathbb{L}}}^{i}$, for every $i$. By Lemma 7.21 applied to this locally trivial fibration ${\mathbb{A}_{F}^{n+1}}-O\to\mathbb{P}_{F}^{n}$, we get
$${\mathbb{L}}^{n+1}-\widehat{{\mathbb{L}}}^{n+1}={\left[\mathbb{P}_{F}^{n}\right]}\cdot({\mathbb{L}}-\widehat{{\mathbb{L}}}).$$
We would like to divide both sides by ${\mathbb{L}}-\widehat{{\mathbb{L}}}$, but it is not clear a priori that this element is not a zero-divisor in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$. The resulting formula does hold nonetheless, as we now show by a different method:
8.4 Proposition.
For each $n$, the class of projective $n$-space in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$ is given by the formula
$${\left[\mathbb{P}_{F}^{n}\right]}=\sum_{m=0}^{n}{\mathbb{L}}^{m}\cdot\widehat{{\mathbb{L}}}^{n-m}.$$
Proof.
Let $(x_{0}:\dots:x_{n})$ be the homogeneous coordinates of $\mathbb{P}_{F}^{n}$, and let $U_{i}:=D_{+}(x_{i})$ be the basic open given as the complement of the $x_{i}$-hyperplane. Hence each $U_{i}$ is isomorphic with ${\mathbb{A}_{F}^{n}}$ and their union is equal to $\mathbb{P}_{F}^{n}$. Therefore,
(35)
$${\left[\mathbb{P}_{F}^{n}\right]}={\left[S(U_{0},\dots,U_{n})\right]}$$
in ${\mathbf{Gr}(F^{\operatorname{pp}})}$ whence in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$.
So we need to calculate the class of each intersection occurring in the right-hand side scissor relation. One easily verifies that, for $m\geq 0$, any intersection of $m+1$ different opens $U_{i}$ is isomorphic to the open ${\mathbb{A}_{F}^{n-m}}\times({\mathbb{A}_{F}^{*}})^{m}$, where ${\mathbb{A}_{F}^{*}}$ is the affine line minus a point. Since ${\left[{\mathbb{A}_{F}^{*}}\right]}={\mathbb{L}}^{*}={\mathbb{L}}-\widehat{{\mathbb{L}}}$ by (33), the class of such an intersection is equal to the product ${\mathbb{L}}^{n-m}({\mathbb{L}}-\widehat{{\mathbb{L}}})^{m}$. Since there are $\binom{n+1}{m+1}$ such intersections, occurring with sign $(-1)^{m}$ in the scissor polynomial $S_{n+1}$, the class of ${\mathbb{P}_{F}^{n}}$ is equal to $g({\mathbb{L}},\widehat{{\mathbb{L}}})$ by (35) and the previous discussion, with
$$g(t,u):=\sum_{m=0}^{n}(-1)^{m}\binom{n+1}{m+1}t^{n-m}(t-u)^{m}.$$
By the binomial theorem, $t^{n+1}-(t-u)g(t,u)=(t-(t-u))^{n+1}=u^{n+1}$, and hence
$$g(t,u)=\frac{t^{n+1}-u^{n+1}}{t-u}=\sum_{m=0}^{n}t^{m}u^{n-m},$$
as we wanted to show.
∎
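The division step that the proposition circumvents can also be checked mechanically. The following Python sketch (outside the formalism, treating ${\mathbb{L}}$ and $\widehat{{\mathbb{L}}}$ as integer stand-ins $t$ and $u$) verifies that the claimed class of $\mathbb{P}_{F}^{n}$ multiplies back to ${\mathbb{L}}^{n+1}-\widehat{{\mathbb{L}}}^{n+1}$, and that specializing $\widehat{{\mathbb{L}}}\mapsto 1$ recovers the classical class $\mathbb{L}^{n}+\dots+\mathbb{L}+1$:

```python
# Numeric sanity check of the identity behind Proposition 8.4:
# (t - u) * sum_{m=0}^{n} t^m u^(n-m) == t^(n+1) - u^(n+1).
def proj_class(t, u, n):
    """Evaluate the claimed class of P^n at numeric stand-ins for L, Lhat."""
    return sum(t**m * u**(n - m) for m in range(n + 1))

for n in range(6):
    for t in range(2, 7):
        for u in range(2, 7):
            assert (t - u) * proj_class(t, u, n) == t**(n + 1) - u**(n + 1)

# Specializing Lhat -> 1 recovers the classical class L^n + ... + L + 1.
assert proj_class(5, 1, 3) == 5**3 + 5**2 + 5 + 1
```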
Although a priori an infinitary object, the infinitary pp-Grothendieck ring still specializes to the classical Grothendieck ring:
8.5 Proposition.
There exists a canonical homomorphism $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})\to{\mathbf{Gr}(F^{\operatorname{var}})}$.
Proof.
We use the following observation: if $\mathfrak{K}$ is a finite collection of $F$-algebras, then any formulary is first-order modulo $\mathfrak{K}$. Indeed, let $\Phi$ be a formulary. As observed above, we may assume that it is closed under finite disjunctions. Let $\mathfrak{K}=\{R_{1},\dots,R_{s}\}$, and, for each $i$, let $\phi_{i}\in\Phi$ be such that $\Phi(R_{i})=\phi_{i}(R_{i})$. Hence $\Phi$ is $\mathfrak{K}$-equivalent with the (first-order) disjunction $\phi_{1}\vee\dots\vee\phi_{s}$. We can apply this observation to the singleton $\{F\}$. Since $\mathbf{ACF}_{F}$, the theory of $F$, admits elimination of quantifiers, each class of a formulary in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$ is equal to a class in ${\mathbf{Gr}(F^{\operatorname{var}})}$.
∎
8.6 Remark.
From the proof it follows that the image of the formal Lefschetz class $\widehat{{\mathbb{L}}}$ under the homomorphism $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})\to{\mathbf{Gr}(F^{\operatorname{var}})}$ is equal to $1$.
Arc formularies
Given a formulary $\Phi\subseteq\mathcal{L}_{n}$, and an Artinian $F$-scheme $Z$, let us define $\nabla_{\!Z}\Phi$ as the formulary of all $\nabla_{\!Z}\phi$ with $\phi\in\Phi$. As the next result shows, in the formulary case, we are justified to call $\nabla_{\!Z}\Phi$ the arc formulary of $\Phi$ along $Z$.
8.7 Proposition.
If $Z$ is a local Artinian $F$-scheme of length $l$ with coordinate ring $(R,\mathfrak{m})$, and $\Phi\subseteq\mathcal{L}_{n}$ an arbitrary formulary, then $\nabla_{\!Z}\Phi$ is also a formulary, and for any Artinian local $F$-algebra $S$, there is a one-one correspondence between $\Phi(R\otimes_{F}S)$ and $\nabla_{\!Z}\Phi(S)$ induced by the canonical isomorphism $\pi\colon R\otimes_{F}S\to S^{l}$.
Moreover, this induces an arc map on the infinitary pp-Grothendieck ring $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$, and as before, we will write $\int Z\ d\Phi$ for the class of $\nabla_{\!Z}\Phi$.
Proof.
We will prove the first two assertions simultaneously. Let $(S,\mathfrak{n})$ be an Artinian local $F$-algebra, put $\bar{S}:=R\otimes_{F}S$, and let $\bar{\mathfrak{m}}:=\mathfrak{m}\bar{S}+\mathfrak{n}\bar{S}$. Since $\bar{S}/\bar{\mathfrak{m}}\cong R/\mathfrak{m}\otimes_{F}S/\mathfrak{n}\cong F\otimes_{F}F=F$, as $F$ is algebraically closed, $\bar{S}$ is local with maximal ideal $\bar{\mathfrak{m}}$. In particular, $\bar{S}$ is an Artinian local $F$-algebra, and hence there is some $\phi_{0}\in\Phi$ such that $\phi_{0}(\bar{S})=\Phi(\bar{S})$. Let $a$ be an $ln$-tuple in $\nabla\Phi(S)$, and let $\bar{a}$ be the $n$-tuple over $\bar{S}$ such that $\pi(\bar{a})=a$. By construction, $a\in\nabla\phi(S)$ for some $\phi\in\Phi$, and hence $\bar{a}\in\phi(\bar{S})\subseteq\Phi(\bar{S})=\phi_{0}(\bar{S})$ by definition of arc formulae. This in turn implies that $a\in\nabla\phi_{0}(S)$, showing that $\nabla\Phi(S)=\nabla\phi_{0}(S)$. The second assertion is then immediate by Proposition 7.4. The last assertion follows by the exact same argument as for Corollary 7.16, and its proof is left to the reader.
∎
8.8 Corollary.
For $(X,P)$ a germ in ${\mathbb{A}_{F}^{n}}$, and $Z$ an Artinian local $F$-scheme of length $l$, we have
$$\nabla_{\!Z}(J_{P}X)=(J_{P}X\times{\mathbb{A}_{F}^{(l-1)n}})\cap\nabla_{\!Z}X,$$
where we view $J_{P}X$ as a closed subscheme of $\nabla_{\!Z}(J_{P}X)$ via the canonical section defined in Remark 7.13.
Proof.
Suppose $X$ is a closed subscheme of ${\mathbb{A}_{F}^{n}}$. An easy calculation shows that
(36)
$$J_{P}X=J_{P}{\mathbb{A}_{F}^{n}}\cap X.$$
By the Nullstellensatz, we may assume, without loss of generality, that $P$ is the origin, and hence the ideals in $F[x]$ corresponding to the schemic formulae in $J_{P}{\mathbb{A}_{F}^{n}}$ are simply all the powers of the maximal ideal $(x_{1},\dots,x_{n})F[x]$. Let $(R,\mathfrak{m})$ be the Artinian local coordinate ring of $Z$, and let $(S,\mathfrak{n})$ be an arbitrary Artinian local $F$-algebra. As already remarked previously, $\bar{S}:=R\otimes_{F}S$ is an Artinian local $F$-algebra with maximal ideal $\bar{\mathfrak{m}}:=\mathfrak{m}\bar{S}+\mathfrak{n}\bar{S}$. An $n$-tuple $\bar{a}$ over $\bar{S}$ belongs to $J_{P}{\mathbb{A}_{F}^{n}}(\bar{S})$ if and only if all its entries are nilpotent, that is to say, if and only if $\bar{a}\in\bar{\mathfrak{m}}\bar{S}^{n}$. Since $\bar{a}\equiv\pi_{0}({\bar{a}})\mod\mathfrak{m}\bar{S}$, the latter condition is equivalent with $\pi_{0}(\bar{a})\in\mathfrak{n}S^{n}$, which in turn is equivalent with $\pi_{0}(\bar{a})\in J_{P}{\mathbb{A}_{F}^{n}}(S)$, showing that
$$\nabla_{\!Z}(J_{P}{\mathbb{A}_{F}^{n}})=J_{P}{\mathbb{A}_{F}^{n}}\times{\mathbb{A}_{F}^{(l-1)n}}$$
under the identification from Remark 7.13. Taken together with (36) and the fact that the arc map preserves intersections, we get the desired equality.
∎
8.9 Remark.
It follows from the proof and (32) that we in fact have an equality
$$\nabla_{\!Z}(\widehat{X}_{P})=(\widehat{({\mathbb{A}_{F}^{n}})}_{P}\times{\mathbb{A}_{F}^{(l-1)n}})\cap\nabla_{\!Z}X,$$
if $(X,P)$ is a germ in ${\mathbb{A}_{F}^{n}}$.
We have the following analogue of Proposition 7.19 for the formal Lefschetz class:
8.10 Corollary.
For any element $q\in{\mathbf{Gr}_{0}(F^{\operatorname{sch}})}$, we have $\int q\ d\widehat{{\mathbb{L}}}=\widehat{{\mathbb{L}}}\cdot{\mathbb{L}}^{\ell(q)-1}$.
Proof.
By additivity, it suffices to show this for $q$ equal to an Artinian local scheme $Z$ of length $l$. By Corollary 8.8, we have
$$\nabla_{\!Z}(J_{O}{\mathbb{A}_{F}^{1}})=(J_{O}{\mathbb{A}_{F}^{1}}\times{\mathbb{A}_{F}^{l-1}})\cap\nabla_{\!Z}{\mathbb{A}_{F}^{1}},$$
where $O$ is the origin. Since $\nabla_{\!Z}{\mathbb{A}_{F}^{1}}={\mathbb{A}_{F}^{l}}$, taking classes therefore yields the asserted formula.
∎
If instead we work in the Grothendieck ring, we may generalize the previous result to higher dimensional fibers:
8.11 Corollary.
Let $Y\subseteq X$ be a closed immersion of $F$-schemes, $Z$ an Artinian $F$-scheme, and $\rho\colon\nabla_{\!Z}X\to X$ the canonical split projection. Then we have an equality
$${\left[\nabla_{\!Z}(J_{Y}X)\right]}={\left[J_{{\rho^{-1}(Y)}}(\nabla_{\!Z}X)\right]}$$
in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$.
Proof.
By Theorem 8.3, we have an equality
$${\left[X\right]}={\left[X-Y\right]}+{\left[J_{Y}X\right]}$$
in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$. Since $\nabla_{\!Z}\cdot$ is an endomorphism, we get
(37)
$${\left[\nabla_{\!Z}X\right]}={\left[\nabla_{\!Z}(X-Y)\right]}+{\left[\nabla_{%
\!Z}(J_{Y}X)\right]}.$$
Since $X-Y\subseteq X$ is an open immersion, we have
$$\nabla_{\!Z}(X-Y)={\rho^{-1}(X-Y)}=\nabla_{\!Z}X-{\rho^{-1}(Y)}$$
by (25) in Theorem 7.12. On the other hand, by another application of Theorem 8.3, we get
$${\left[\nabla_{\!Z}X\right]}={\left[\nabla_{\!Z}X-{\rho^{-1}(Y)}\right]}+{\left[J_{{\rho^{-1}(Y)}}\nabla_{\!Z}X\right]},$$
from which the assertion now follows immediately in view of (37).
∎
9. Geometric Igusa-zeta series over linear arcs
For various schemes $X$, we will calculate $\operatorname{Igu}^{(\mathbb{A},O)}_{X}$, that is to say, we want to calculate $\sum_{n}{\left[\nabla_{\!Z_{n}}X\right]}t^{n}$, where, for the remainder of this section $Z_{n}:=\operatorname{Spec}(F[\xi]/\xi^{n}F[\xi])$. To simplify notation, we simply write $\nabla_{\!n}X$ for the $n$-th linear arc scheme $\nabla_{\!Z_{n}}X$. We let $\rho_{n}\colon\nabla_{\!n}X\to X$ be the canonical split projection with section $X\hookrightarrow\nabla_{\!n}X$, and we view closed subschemes of $X$ as closed subschemes of $\nabla_{\!n}X$ via the latter embedding.
The formal Grothendieck ring
In $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$, we define the formal ideal $\mathfrak{N}$ as the ideal generated by the relations ${\left[J_{Y}X\right]}-{\left[Y\right]}$, for all closed immersions $Y\subseteq X$ of affine schemes. It follows that if $Y,Y^{\prime}\subseteq X$ are two closed subschemes of $X$ with the same underlying set, then ${\left[Y\right]}\equiv{\left[Y^{\prime}\right]}\mod\mathfrak{N}$, since they have the same total jets. Recall from Proposition 8.5 that we have a canonical homomorphism $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})\to{\mathbf{Gr}(F^{\operatorname{var}})}$. We prove below that $\mathfrak{N}$ belongs to its kernel, and so we introduce the formal Grothendieck ring ${\mathbf{Gr}}(F^{\operatorname{form}})$ as the quotient $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})/\mathfrak{N}$.
9.1 Proposition.
We have a sequence of natural homomorphisms of Grothendieck rings $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})\to{\mathbf{Gr}}(F^{\operatorname{form}})\to{\mathbf{Gr}(F^{\operatorname{var}})}$.
Proof.
Fix a closed immersion $Y\subseteq X$. By definition of formulary, there exists a schemic formula $\phi$ in $J_{Y}X$ such that the $F$-rational points of $J_{Y}X$ are given by $\phi(F)$. In particular, $\phi(F)$ is equal to $Y(F)$, showing that ${\left[\phi\right]}={\left[Y\right]}$ in ${\mathbf{Gr}(F^{\operatorname{var}})}$. By the argument in Proposition 8.5, the image of ${\left[J_{Y}X\right]}$ in ${\mathbf{Gr}(F^{\operatorname{var}})}$ is equal to ${\left[\phi\right]}$. This shows that $\mathfrak{N}$ lies in the kernel of $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})\to{\mathbf{Gr}(F^{\operatorname{var}})}$.
∎
In particular, to prove the rationality of the geometric Igusa-zeta series over ${\mathbf{Gr}(F^{\operatorname{var}})}_{\mathbb{L}}$, it will suffice to show that its image as a power series over ${\mathbf{Gr}}(F^{\operatorname{form}})$ is rational over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$. We will simplify our notation and write $\operatorname{Igu}^{\operatorname{geom}}_{X}(t)$ for $\operatorname{Igu}^{(\mathbb{A},O)}_{X}$, viewed as a power series over ${\mathbf{Gr}}(F^{\operatorname{form}})$.
9.2 Lemma.
If $\{Y_{i}\}_{i}$ is a constructible partition of a scheme $X$, then ${\left[X\right]}=\sum_{i}{\left[Y_{i}\right]}$ in ${\mathbf{Gr}}(F^{\operatorname{form}})$.
Proof.
Note that this partition is finite, since $X$ is Noetherian. Moreover, at least one part must be open, say $Y_{1}$, and let $X_{1}:=X-Y_{1}$ be its complement. By Theorem 8.3, we have an equality ${\left[X\right]}={\left[Y_{1}\right]}+{\left[J_{X_{1}}X\right]}$ in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$. By definition of formal ideal, ${\left[J_{X_{1}}X\right]}\equiv{\left[X_{1}\right]}$ modulo $\mathfrak{N}$. Putting these two together, we get an identity ${\left[X\right]}={\left[Y_{1}\right]}+{\left[X_{1}\right]}$ in ${\mathbf{Gr}}(F^{\operatorname{form}})$. Moreover, $\{Y_{2},Y_{3},\dots\}$ is a constructible partition of $X_{1}$, and so we are done by Noetherian induction.
∎
9.3 Theorem.
Let $X$ be a $d$-dimensional scheme, $Y$ a closed subscheme containing the singular locus of $X$, and $Z$ an Artinian scheme of length $l$. If $\rho\colon\nabla_{\!Z}X\to X$ denotes the canonical projection, then we have an equality
$${\left[\nabla_{\!Z}X\right]}={\left[X-Y\right]}\cdot{\mathbb{L}}^{d(l-1)}+{\left[{\rho^{-1}(Y)}\right]}$$
in ${\mathbf{Gr}}(F^{\operatorname{form}})$.
Proof.
This follows immediately from Corollaries 7.20 and 8.11, but here is the argument in more detail: let us put $W:=\nabla_{\!Z}X$ and $V:={\rho^{-1}(Y)}$. By Lemma 9.2, we have an equality ${\left[W\right]}={\left[W-V\right]}+{\left[V\right]}$ in ${\mathbf{Gr}}(F^{\operatorname{form}})$. Moreover, by Theorem 7.12, we have an isomorphism $W-V\cong\nabla_{\!Z}(X-Y)$. Since $X-Y$ is smooth by the choice of $Y$, ${\left[\nabla_{\!Z}(X-Y)\right]}={\left[X-Y\right]}\cdot{\mathbb{L}}^{d(l-1)}$ by Corollary 7.20, and the assertion follows.
∎
In order to simplify our notation, we will henceforth write $X_{s}$ for the basic subset $\operatorname{D}(s)$ in a scheme $X$, where $s$ is a global section on $X$. Likewise, we write $X(s_{1},\dots,s_{n})$ for the intersection $X\cap\operatorname{V}(s_{1},\dots,s_{n})$, for given global sections $s_{i}$. In this notation, we have, by Lemma 9.2, the following useful equality
(38)
$${\left[X\right]}={\left[S_{n}(X_{s_{1}},\dots,X_{s_{n}})\right]}+{\left[X(s_{1},\dots,s_{n})\right]}$$
in ${\mathbf{Gr}}(F^{\operatorname{form}})$. Note that products in the scissor polynomial are actually given by intersection, as in §2. For instance, for $n=2$, (38) becomes the identity
$${\left[X\right]}={\left[X_{s}\right]}+{\left[X_{t}\right]}-{\left[X_{st}\right]}+{\left[X(s,t)\right]}.$$
To derive the next identity, we introduce some further notation. Fix a scheme $X$ and an $n$-tuple of global sections $(s_{1},\dots,s_{n})$. Given a binary vector $\delta$ of length $n$, that is to say, an $n$-tuple in $\{0,1\}^{n}$, we will write $X_{\delta}$ for $X_{s^{\delta}}$ where $s^{\delta}$ is the product of all $s_{i}^{\delta_{i}}$, and we will write $\bar{X}_{\delta}$ for $X_{\delta}(s_{\delta})$, where $s_{\delta}$ is the tuple of all $s_{i}$ for which $\delta_{i}=0$. In other words, $\bar{X}_{\delta}$ is defined by the formula expressing that each $s_{i}$ is either a unit or zero, depending on whether $\delta_{i}$ is one or zero. One easily verifies that
$X=\bigsqcup_{\delta}\bar{X}_{\delta},$
where $\delta$ runs over all binary vectors of length $n$. Applying Lemma 9.2 to this constructible partition, we get
(39)
$${\left[X\right]}=\sum_{\delta}{\left[\bar{X}_{\delta}\right]}$$
in ${\mathbf{Gr}}(F^{\operatorname{form}})$, where the sum runs over all binary vectors. Let us again illustrate this for $n=2$, yielding the identity
$${\left[X\right]}={\left[X_{st}\right]}+{\left[X_{s}(t)\right]}+{\left[X_{t}(s)\right]}+{\left[X(s,t)\right]}.$$
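The partition underlying this identity can be illustrated pointwise. The following Python sketch (a toy check on rational points of a hypothetical finite grid, with $s$ and $t$ chosen arbitrarily, not a computation of classes in the Grothendieck ring) verifies that every point lies in exactly one of the four pieces, according to which of $s$, $t$ vanish at it:

```python
# Pointwise illustration of the n = 2 case of the partition in (39):
# X is partitioned by X_st, X_s(t), X_t(s), X(s,t).
X = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
s = lambda p: p[0]          # sample global section s = x (an assumption)
t = lambda p: p[0] + p[1]   # sample global section t = x + y (an assumption)

X_st  = [p for p in X if s(p) != 0 and t(p) != 0]  # s, t both units
X_s_t = [p for p in X if s(p) != 0 and t(p) == 0]  # X_s(t)
X_t_s = [p for p in X if t(p) != 0 and s(p) == 0]  # X_t(s)
X_0   = [p for p in X if s(p) == 0 and t(p) == 0]  # X(s, t)

pieces = [X_st, X_s_t, X_t_s, X_0]
assert sum(len(p) for p in pieces) == len(X)          # the pieces cover X ...
assert {q for p in pieces for q in p} == set(X)       # ... without overlap
```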
It is important to note that this equation is false in $\mathbf{Gr}^{\infty}(F^{\operatorname{pp}})$, whence, in particular, we may not apply $\nabla_{\!Z}$ to it.
Before we turn to a proof of the rationality of the geometric Igusa-zeta series, we should mention that this method is different from working with classical arcs in the classical Grothendieck ring. Although we eventually take classes in ${\mathbf{Gr}}(F^{\operatorname{form}})$, thus collapsing nilpotents, we will do this only after taking arcs. Put differently, although arcs will be reduced, the base schemes will not be. That this makes a difference (even in regards to dimension!) can be seen by comparing, for small lengths, the defining equations of arcs and their reductions for three different closed subschemes with the same underlying set, such as the union of two lines in the plane.
Tagged and formal equations
In the sequel, we will only invert variables, and to this end, we simplify our notation even further.
To any natural number $a$, we associate its tagged version $a^{*}$, and we call $v(a^{*}):=a$ the underlying value (or untagged version) of $a^{*}$. We can add tagged and/or untagged numbers by the rule that the underlying value of the sum is the sum of the underlying values of the terms, where the sum is tagged if and only if at least one term is tagged (e.g., $2+3^{*}=5^{*}$). Fix $m\geq 1$, and let $\Gamma_{m}$ be the collection of $m$-tuples with entries natural numbers or their tagged versions. We extend $v$ component-wise to get a map $v\colon\Gamma_{m}\to\mathbb{N}^{m}$, sending a tuple $\theta\in\Gamma_{m}$ to its underlying value $v\theta$. We define a partial order on $\Gamma_{m}$ by $\alpha\preceq\beta$ if and only if $\alpha_{j}$ is untagged and $\alpha_{j}\leq\beta_{j}$, or $\alpha_{j}$ is tagged and $\alpha_{j}=\beta_{j}$, for all $j=1,\dots,m$.
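These conventions are easily modeled in code. The following Python sketch (an illustration of the stated rules, with the assumption that a mixed comparison of an untagged entry against a tagged one is made via underlying values) implements tagged addition and the partial order $\preceq$ on $\Gamma_{m}$:

```python
# A minimal model of tagged numbers: a sum is tagged iff at least one term
# is, and its underlying value is the sum of the underlying values.
class Tagged:
    def __init__(self, value, tagged=False):
        self.value, self.tagged = value, tagged
    def __add__(self, other):
        o = other if isinstance(other, Tagged) else Tagged(other)
        return Tagged(self.value + o.value, self.tagged or o.tagged)
    __radd__ = __add__
    def __eq__(self, other):
        return (self.value, self.tagged) == (other.value, other.tagged)
    def __repr__(self):
        return f"{self.value}{'*' if self.tagged else ''}"

def leq(alpha, beta):
    """alpha <= beta in Gamma_m: value comparison on untagged entries,
    exact equality on tagged ones."""
    return all((not a.tagged and a.value <= b.value) or (a.tagged and a == b)
               for a, b in zip(alpha, beta))

assert 2 + Tagged(3, True) == Tagged(5, True)              # 2 + 3* = 5*
assert leq([Tagged(1), Tagged(3, True)], [Tagged(4), Tagged(3, True)])
assert not leq([Tagged(3, True)], [Tagged(4, True)])       # 3* vs 4*: no
```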
We will introduce two equational conventions in this section that are useful for discussing arc equations.
To each variable $x$, we associate its tagged version $x_{*}$, which we will treat as an invertible variable. Given a tagged number $a^{*}$, we let $x^{a^{*}}$ be the same as $x_{*}^{a^{*}}$, and simply write $x_{*}^{a}$. Hence, we may associate to a polynomial $f\in F[x]$, the polynomial $f(x_{*})$, which is just $f(x)$ but viewed in the Laurent polynomial ring $F[x,\frac{1}{x}]$. Therefore, we interpret the equation $f(x_{*})=0$ as the conjunction $f(x)=0$ and $x$ is invertible, that is to say, the pp-formula $(\exists x^{\prime})f(x)=0\wedge xx^{\prime}=1$. We may extend this practice to several variables, tagging some of them and leaving the others unchanged. For instance, the tagged equation $x^{2}_{*}+x_{*}y^{3}+z^{3}_{*}=0$ should be considered as an element of the mixed Laurent polynomial ring $F[x,y,z,\frac{1}{x},\frac{1}{z}]$, and is equivalent with the conditions $x^{2}+xy^{3}+z^{3}=0$ together with $x$ and $z$ are invertible.
Our second convention is the use of a formal variable $\xi$, fixed once and for all. Given a power series $f(x,\xi)\in F[x][[\xi]]$ (or, at times, a Laurent series) with coefficients in a polynomial ring $F[x]$ (or a mixed Laurent polynomial ring), we interpret the (formal) equation $f=0$ as the condition on the $x$-variables that $f$ be identical zero as a power series (Laurent series) in $\xi$. In other words, if $f(x,\xi)=f_{0}(x)+f_{1}(x)\xi+f_{2}(x)\xi^{2}+\dots$, then $f=0$ stands for the (infinite) conjunction $f_{0}=f_{1}=f_{2}=\dots=0$ (as $f=0$ and $\xi^{i}f=0$ yield equivalent systems of equations, we may reduce the Laurent series case to the power series case). Similarly, for each $n$, the equivalence $f(x,\xi)\equiv 0\mod\xi^{n}$ stands for the conjunction $f_{0}=f_{1}=\dots=f_{n-1}=0$. An example of a combination of both conventions is
$$0=(x+y_{*}\xi)^{2}+(z_{*}+w\xi)^{3}$$
which is equivalent to the pp-formula
$$\begin{aligned}
x^{2}+z^{3}&=0\\
2xy+3z^{2}w&=0\\
y^{2}+3zw^{2}&=0\\
w^{3}&=0\\
(\exists y^{\prime},z^{\prime})\quad yy^{\prime}&=1\wedge zz^{\prime}=1
\end{aligned}$$
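The coefficient bookkeeping in this example can be verified mechanically. The following Python sketch (which checks only the $\xi$-expansion, not the invertibility conditions on $y$ and $z$) models power series in $\xi$ as coefficient lists and confirms that $(x+y\xi)^{2}+(z+w\xi)^{3}$ has exactly the listed coefficients:

```python
# Verify the xi-coefficients of (x + y*xi)^2 + (z + w*xi)^3 numerically.
def mul(f, g):
    """Multiply two polynomials in xi given as coefficient lists."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def add(f, g):
    """Add two coefficient lists, padding the shorter one with zeros."""
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

x, y, z, w = 2, 3, 5, 7          # arbitrary sample values
series = add(mul([x, y], [x, y]), mul(mul([z, w], [z, w]), [z, w]))
expected = [x**2 + z**3, 2*x*y + 3*z**2*w, y**2 + 3*z*w**2, w**3]
assert series == expected
```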
To any $m$-tuple of variables $x=(x_{1},\dots,x_{m})$, we associate the corresponding (countably many) arc variables $\tilde{x}=(\tilde{x}_{0},\tilde{x}_{1},\dots)$, where each $\tilde{x}_{i}$ is an $m$-tuple $(\tilde{x}_{i,1},\dots,\tilde{x}_{i,m})$. For each $i$, we let
$$\dot{x}_{i}=\tilde{x}_{0,i}+\tilde{x}_{1,i}\xi+\tilde{x}_{2,i}\xi^{2}+\dots$$
be the generic arc series in $\xi$, and we write $\dot{x}$ for the tuple $(\dot{x}_{1},\dots,\dot{x}_{m})$. Given $\theta\in\Gamma_{m}$, we define $\dot{x}({\theta})$ to be the $m$-tuple of twisted power series with $i$-th entry equal to
$$\dot{x}_{i}(\theta):=\tilde{x}_{\theta_{i},i}+\tilde{x}_{\theta_{i}+1,i}\xi+\tilde{x}_{\theta_{i}+2,i}\xi^{2}+\dots$$
if $\theta_{i}$ is untagged, and
$$(\dot{x}_{i})_{*}(\theta):=(\tilde{x}_{v\theta_{i},i})_{*}+\tilde{x}_{v\theta_{i}+1,i}\xi+\tilde{x}_{v\theta_{i}+2,i}\xi^{2}+\dots$$
if $\theta_{i}$ is tagged (note that according to this convention, only the constant term is actually tagged, which accords with the fact that a power series is invertible if and only if its constant term is). For each $\theta\in\Gamma_{m}$, define a change of variables $\tau_{\theta}$ sending, for each $j=1,\dots,m$, the variable $\tilde{x}_{i,j}$ to $\tilde{x}_{i-v\theta_{j},j}$ and $(\tilde{x}_{i,j})_{*}$ to $(\tilde{x}_{i-v\theta_{j},j})_{*}$ for $i\geq v\theta_{j}$, and leaving the remaining variables and their tagged versions unchanged. In particular, $\tau_{\theta}$ only depends on the underlying value of $\theta$, and $\tau_{\theta}(\dot{x}_{i}({\theta}))$ is equal to $\dot{x}_{i}$ if $\theta_{i}$ is untagged and to $(\dot{x}_{i})_{*}$ if $\theta_{i}$ is tagged.
Directed arcs
With these conventions, we can now write down the equations of an arc scheme more succinctly. If $X\subseteq{\mathbb{A}_{F}^{m}}$ is the closed subscheme defined by the schemic formula $g_{1}=\dots=g_{s}=0$, then $\nabla_{\!n}X$ is defined by the conditions
$$g_{1}(\dot{x})\equiv g_{2}(\dot{x})\equiv\dots\equiv g_{s}(\dot{x})\equiv 0\mod\xi^{n}$$
and $\dot{x}({\mathbf{n}})=0$ (recall that $\mathbf{n}$ is the tuple all of whose entries are equal to $n$). Note that the latter condition simply means that $\tilde{x}_{i,j}=0$ for all $i\geq n$ and all $j=1,\dots,m$.
We extend the notion of arc scheme, by considering certain (initial) linear subspaces of arc schemes. Given $\theta\in\Gamma_{m}$, we define the $n$-th directed arc scheme along $\theta$, denoted $\nabla_{\!n}^{\theta}X$, as the locally closed subscheme of $\nabla_{\!n}X$ defined by the conditions $\tilde{x}_{i,j}=0$ for $i<\theta_{j}$, and $\tilde{x}_{v\theta_{j},j}$ is invertible if $\theta_{j}$ is tagged, for $j=1,\dots,m$. We may also refer to $\nabla_{\!n}^{\theta}X$ as the subscheme of all arcs along, or with initial direction $\theta$. Writing out these conditions in more detail, the defining equations of $\nabla_{\!n}^{\theta}X$ become
$$g_{1}(\dot{x}({\theta}))\equiv\dots\equiv g_{s}(\dot{x}({\theta}))\equiv 0\mod\xi^{n}$$
and $\tilde{x}_{i,j}=0$ for $i<\theta_{j}$ or $i\geq n$, and for $j=1,\dots,m$. By the change of variables $\tau_{\theta}$, we can rewrite these equations as
(40)
$$\tau_{\theta}(g_{1}(\dot{x}({\theta})))\equiv\dots\equiv\tau_{\theta}(g_{s}(\dot{x}({\theta})))\equiv 0\mod\xi^{n}\qquad\wedge\qquad\dot{x}({\mathbf{n}-\theta})=0.$$
This form will be easier to work with, as we can now compare arcs along different directions; we call the first set of equations in (40) the arc equations, and the second set the initial conditions. We will use the arc equations as follows: given $\theta\in\Gamma_{m}$, let $z$ be a new $m$-tuple of variables with corresponding arc variables $\tilde{z}$, called the $\theta$-tagging of $x$, where $z_{j}$ is equal to $(x_{j})_{*}$ or $x_{j}$ depending on whether $\theta_{j}$ is tagged or not, and similarly, $\tilde{z}_{i,j}=\tilde{x}_{i,j}$ unless $i=0$ and $\theta_{j}$ is tagged, in which case $\tilde{z}_{0,j}=(\tilde{x}_{0,j})_{*}$. If $f=\sum_{\mu}c_{\mu}x^{\mu}$, then
(41)
$$\tau_{\theta}(f(\dot{x}({\theta})))=\sum_{\mu}c_{\mu}\xi^{\mu\theta}\dot{z}^{\mu}.$$
9.4 Example (Fibers).
Recall that $\rho_{n}\colon\nabla_{\!n}X\to X$ is the canonical projection of the arc scheme onto the base scheme. Let us calculate the fiber ${\rho_{n}^{-1}(O)}$ of the origin. If $f_{1}=\dots=f_{s}=0$ is the schemic formula defining $X$, then $\nabla_{\!n}X$ is given by the equations $f_{i}(\dot{x})\equiv 0\mod\xi^{n}$, and ${\rho_{n}^{-1}(O)}$ is the closed subscheme given by $\tilde{x}_{0}=0$, that is to say,
(42)
$${\rho_{n}^{-1}(O)}=\nabla_{\!n}^{\mathbf{1}}X.$$
9.5 Definition (Twisted geometric Igusa-zeta series).
Given $\theta\in\Gamma_{m}$, define the $\theta$-twisted geometric Igusa-zeta series of $X$ to be
$$\operatorname{Igu}^{\theta}_{X}(t):=\sum_{n=0}^{\infty}{\left[\nabla_{\!n}^{\theta}X\right]}t^{n}.$$
Hence, $\operatorname{Igu}^{\operatorname{geom}}_{X}$ is just the case in which the twist is zero.
At times, it is convenient, especially in inductive arguments, to prove that all twisted geometric Igusa-zeta series are rational.
Twisted initial forms
For the remainder of the section, we restrict to the case of a hypersurface $X$ defined by a single equation $f=0$, with $f:=\sum_{\nu}c_{\nu}x^{\nu}$. If $f$ is not homogeneous, we can no longer expect such a simple relation between the arc scheme and the fiber above the singular locus. As we will shortly see, the following hypersurfaces derived from $X$ will play an important role: for every $\theta\in\Gamma_{m}$, let $\tilde{X}^{\theta}$ be defined as follows. View $F[x]$ as a graded ring by giving the variable $x_{i}$ weight $v\theta_{i}$. Let $\operatorname{ord}_{\theta}(f)$, or $\operatorname{ord}_{X}(\theta)$, be the order of $f$ in this grading, that is to say, the minimum of all $v\theta\cdot\nu$ with $c_{\nu}\neq 0$, and let $\tilde{X}^{\theta}$ be the hypersurface with defining equation
$$\tilde{f}^{\theta}:=\sum_{v\theta\cdot\nu=\operatorname{ord}_{\theta}(X)}c_{\nu}x^{\nu}.$$
In particular, $X=\tilde{X}^{\mathbf{0}}$. We call $\tilde{X}^{\theta}$, or rather, $\tilde{f}^{\theta}$, the $\theta$-twisted initial form of $X$.
Here is an example showing the previous conventions and definitions at work:
9.6 Example.
Let $f=x^{9}+x^{2}y^{4}+z^{4}$ and $\theta=(2,3^{*},5)$. Hence $\nabla_{\!n}^{(2,3^{*},5)}X$ is the locally closed subscheme of $\nabla_{\!n}X$ given by the conditions $\tilde{x}_{0}=\tilde{x}_{1}=\tilde{y}_{0}=\tilde{y}_{1}=\tilde{y}_{2}=\tilde{z}_{0}=\tilde{z}_{1}=\tilde{z}_{2}=\tilde{z}_{3}=\tilde{z}_{4}=0$ and $\tilde{y}_{3}$ is invertible.
Using (40) and (41), its equations are
$$\tau_{(2,3^{*},5)}(f(\dot{x}({2,3^{*},5}),\dot{y}({2,3^{*},5}),\dot{z}({2,3^{*},5})))=\xi^{18}\dot{x}^{9}+\xi^{16}\dot{x}^{2}\dot{y}^{4}_{*}+\xi^{20}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
and $\dot{x}({n-2})=\dot{y}({n-3})=\dot{z}({n-5})=0$. Hence, $\operatorname{ord}_{(2,3^{*},5)}(X)=16$ and the twisted initial form $\tilde{X}^{(2,3^{*},5)}$ is given by $\tilde{f}^{(2,3^{*},5)}=x^{2}y^{4}_{*}$, that is to say, by the two conditions $x^{2}y^{4}=0$ and $y$ is a unit.
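The weighted-order computation in this example is easy to reproduce mechanically. The following Python sketch (outside the formalism, encoding $f$ by its exponent tuples and ignoring the tagging of $y$) recomputes $\operatorname{ord}_{(2,3^{*},5)}(X)=16$ and the monomial support of the twisted initial form:

```python
# Recompute the weighted order and initial form of Example 9.6:
# f = x^9 + x^2 y^4 + z^4 with weights v(theta) = (2, 3, 5).
f = {(9, 0, 0): 1, (2, 4, 0): 1, (0, 0, 4): 1}   # exponent tuple -> coefficient
weights = (2, 3, 5)

def wdeg(nu):
    """Weighted degree of a monomial exponent tuple."""
    return sum(w * e for w, e in zip(weights, nu))

order = min(wdeg(nu) for nu in f)                       # ord_theta(X)
initial = {nu: c for nu, c in f.items() if wdeg(nu) == order}

assert order == 16
assert initial == {(2, 4, 0): 1}                        # initial form x^2 y^4
```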
Regular base
We will deduce rationality by splitting off regular pieces of various twisted initial forms, until we arrive at a recursive relation involving the arc scheme of the original hypersurface. To this end, we introduce the following definition: we say that $\theta\in\Gamma_{m}$ is $X$-regular if $\tilde{X}^{\theta}$ is smooth.
As with arcs, directed arcs above a regular base have a locally trivial fibration, a fact which will allow us to determine their contribution to the Igusa-zeta series:
9.7 Proposition.
Let $X\subseteq\mathbb{A}^{m}$ be a hypersurface. For each $X$-regular tuple $\theta\in\Gamma_{m}$, we have locally (i.e., on an open affine covering), an isomorphism
$$\nabla_{\!n}^{\theta}X\cong\nabla_{\!n-\operatorname{ord}_{X}(\theta)}(\tilde{X}^{\theta})\times\mathbb{A}^{m\cdot\operatorname{ord}_{X}(\theta)-\left|\theta\right|},$$
for each $n>\operatorname{ord}_{X}(\theta)$.
Proof.
Let $f=\sum_{\nu}c_{\nu}x^{\nu}$ be the defining equation of $X$. Let us put $a:=\operatorname{ord}_{X}(\theta)$; recall that it is the minimum of all $v\theta\cdot\nu$ with $c_{\nu}\neq 0$. Instead of $x$, we will use the $m$-tuple of variables $z$ whose $i$-th variable is tagged precisely if $\theta_{i}$ is. For each $k$, let $f^{\theta}_{k}:=\sum_{v\theta\cdot\nu=k}c_{\nu}z^{\nu}$, so that $\tilde{f}^{\theta}=f_{a}^{\theta}$ is the defining equation of $\tilde{X}^{\theta}$. By (40) and (41), the arc equation of $\nabla_{\!n}^{\theta}X$ is
$$\tau_{\theta}(f(\dot{z}({\theta})))=\sum_{k=a}^{n-1}\xi^{k}f^{\theta}_{k}(\dot{z})\equiv 0\mod\xi^{n}$$
whereas the initial condition is $\dot{z}(\mathbf{n}-\theta)=0$. Factoring out $\xi^{a}$ yields the arc equation
(43)
$$\sum_{k=0}^{n-a-1}\xi^{k}f^{\theta}_{a+k}(\dot{z})\equiv 0\mod\xi^{n-a}.$$
On the other hand, the arc equation of $\nabla_{\!n-a}\tilde{X}^{\theta}$ is
(44)
$$\tilde{f}^{\theta}(\dot{z})\equiv 0\mod\xi^{n-a}.$$
Note that expansions (43) and (44) have the same constant term $\tilde{f}^{\theta}=f^{\theta}_{a}$. Expand
$$f^{\theta}_{k}(\dot{z})=\sum_{l}\xi^{l}f^{\theta}_{k,l}(\tilde{z}),$$
with each $f_{k,l}^{\theta}$ only depending on $\tilde{z}_{0},\dots,\tilde{z}_{l-1}$. If $k=a$, we will write $\tilde{f}^{\theta}_{l}(\tilde{z})$ for $f^{\theta}_{a,l}(\tilde{z})$, so that $\tilde{f}^{\theta}(\dot{z})=\sum_{l}\xi^{l}\tilde{f}^{\theta}_{l}(\tilde{z})$. Substituting in (43), we get an expansion
$$\sum_{k,l=0}^{n-a-1}\xi^{k+l}f^{\theta}_{a+k,l}(\tilde{z})$$
showing that the defining equations of $\nabla_{\!n}^{\theta}X$ are $g_{0}=\dots=g_{n-a-1}=0$ together with $\tilde{z}_{i,j}=0$ if $i\geq n-\theta_{j}$, where
(45)
$$g_{l}(\tilde{z}):=\sum_{k=0}^{l}f^{\theta}_{a+k,l-k}=\tilde{f}^{\theta}_{l}+\sum_{k=1}^{l}f^{\theta}_{a+k,l-k}.$$
Since $\tilde{X}^{\theta}$ is smooth, the proof of Corollary 7.20 shows that locally $\tilde{f}^{\theta}_{l}$, for $l>0$, is linear in the $\tilde{z}_{l}$-variables, and smoothness allows us to solve for one of the $\tilde{z}_{l}$-variables in terms of the others. Restricting to a basic open, we may assume that we can do this globally (we leave the details to the reader; but see also the proof of Corollary 7.20). However, the same is then true for $g_{l}$, since the difference $g_{l}-\tilde{f}^{\theta}_{l}$ only depends on the variables $\tilde{z}_{0},\dots,\tilde{z}_{l-1}$ by (45). This shows that the closed subscheme defined by $g_{0},\dots,g_{n-a-1}$, when viewed in the variables $\tilde{z}_{0},\dots,\tilde{z}_{n-a-1}$, is the same as $\nabla_{\!n-a}\tilde{X}^{\theta}$. As for the remaining $ma$ variables $\tilde{z}_{n-a},\dots,\tilde{z}_{n-1}$: among these, $\left|\theta\right|$ many are put equal to zero, whereas the rest remain free, proving the assertion.
∎
9.8 Corollary.
If $\theta$ is $X$-regular, then
$${\left[\nabla_{\!n}^{\theta}X\right]}={\left[\tilde{X}^{\theta}\right]}\cdot{\mathbb{L}}^{(m-1)(n-1)+\operatorname{ord}_{X}(\theta)-\left|\theta\right|}$$
in ${\mathbf{Gr}(F^{\operatorname{pp}})}$.
Proof.
This follows immediately from Corollary 7.20 and Proposition 9.7, by the same argument as for Lemma 7.21, noting that $\tilde{X}^{\theta}$ has dimension $m-1$.
∎
Recursion
Given $\alpha,\beta\in\Gamma_{m}$, we will write $\alpha\lhd_{X}\beta$, if $\alpha\preceq\beta$ and there exists some $s>0$ such that
$$\tau_{\beta}(f(\dot{z}({\beta})))=\xi^{s}\tau_{\alpha}(f(\dot{z}({\alpha}))).$$
An easy calculation shows that necessarily $s=\operatorname{ord}_{X}(\beta)-\operatorname{ord}_{X}(\alpha)$. Note that $f$ is homogeneous in the classical sense if and only if $\mathbf{0}\lhd_{X}\mathbf{1}$.
9.9 Lemma.
If $\alpha\lhd_{X}\beta$, then
$${\left[\nabla_{\!n}^{\beta}X\right]}={\left[\nabla_{\!n-s}^{\alpha}X\right]}\cdot{\mathbb{L}}^{sm-\left|\beta\right|+\left|\alpha\right|}$$
in ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$, for all $n>s$, with $s=\operatorname{ord}_{X}(\beta)-\operatorname{ord}_{X}(\alpha)$.
Proof.
By (40), the defining equations of $\nabla_{\!n}^{\beta}X$ are
$$\tau_{\beta}(f(\dot{z}({\beta})))\equiv 0\mod\xi^{n}\quad\text{and}\quad\dot{x}({\mathbf{n}-\beta})=0.$$
By assumption, the power series in the arc equation equals $\xi^{s}\tau_{\alpha}(f(\dot{z}({\alpha})))$, and so yields the arc equation
(46)
$$\tau_{\alpha}(f(\dot{z}({\alpha})))\equiv 0\mod\xi^{n-s}.$$
However, (46) is also the arc equation of $\nabla_{\!n-s}^{\alpha}X$. As the initial condition for $\nabla_{\!n-s}^{\alpha}X$ is given by $\dot{x}({\mathbf{n}-\mathbf{s}-\alpha})$, the difference between the two directed arc schemes lies in the number of free variables not covered by the respective initial conditions, a number which is easily seen to be $\left|\mathbf{s}-\beta+\alpha\right|=sm-\left|\beta\right|+\left|\alpha\right|$, whence the assertion.
∎
Rationalizing trees
We are interested in subtrees of $\Gamma_{m}$, and will use the following terminology: by a tree we mean a finite, connected partially ordered subset of (nodes from) $\Gamma_{m}$ such that any initial segment is totally ordered. The unique minimum is called the root of the tree, and any maximal element is called a leaf. By a branch, we will mean a chain $[\alpha,\beta]$ from a node $\alpha$ to a leaf $\beta$.
Let $\delta\leq\eta$ be binary vectors, that is to say, tuples with entries $0$ or $1$. We define a transformation $e^{\eta}_{\delta}$ on $\Gamma_{m}$ as follows. For each $i$, let $e_{i}$ simply be addition with the basis vector $e_{i}$ on $\Gamma_{m}$ (note that per our addition convention, each entry stays in whichever state, tagged or untagged, it was). On the other hand, we let $e_{i}^{*}$ be the transformation which tags the $i$-th entry but leaves the remaining entries unchanged. Given a binary vector $\varepsilon$, we let $e_{\varepsilon}$ (respectively, $e^{*}_{\varepsilon}$) be the composition of all $e_{i}$ (respectively, all $e_{i}^{*}$) for which $\varepsilon_{i}=1$. Note that all these transformations commute with each other. Finally, we let $e^{\eta}_{\delta}$ be the composition of $e_{\delta}$ and $e^{*}_{\eta-\delta}$. For instance,
$$e^{(1,1,0,1,0)}_{(0,0,0,1,0)}(2,3^{*},1,4,1)=e_{(1,1,0,0,0)}^{*}e_{(0,0,0,1,0)}(2,3^{*},1,4,1)=e_{1}^{*}e_{2}^{*}e_{4}(2,3^{*},1,4,1)=(2^{*},3^{*},1,5,1).$$
Note that $e^{\eta}_{\delta}(\theta)$ has underlying value equal to $v\theta+\delta$. More precisely, taking into account our addition convention, we have
$$e^{\eta}_{\delta}(\theta)=e^{*}_{\eta-\delta}(\theta+\delta).$$
Note that $e^{\eta}_{\delta}$ can fail to be an increasing function (if in the above example we replace $(0,0,0,1,0)$ by $(0,1,0,1,0)$, the resulting tuple is $(2^{*},4^{*},1,5,1)$, which is not comparable with $(2,3^{*},1,4,1)$ because of the second entry). A sufficient condition is
(47)
$$(\forall i)[\text{if }\theta_{i}\text{ tagged then }\eta_{i}=0]\Rightarrow\theta\preceq e^{\eta}_{\delta}(\theta).$$
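As a quick sanity check of these conventions, the transformations $e^{\eta}_{\delta}$ are easy to implement. The Python sketch below is our own illustration (not part of the text); it represents a node of $\Gamma_{m}$ as a list of (value, tagged) pairs.

```python
def e_transform(eta, delta, theta):
    """Apply e^eta_delta to a node theta of Gamma_m, given as a list of
    (value, tagged) pairs.  e_delta adds delta entrywise (the tag state of
    each entry is preserved, per the addition convention), and e*_{eta-delta}
    tags every entry i with (eta - delta)_i = 1."""
    return [(v + d, tagged or e - d == 1)
            for (v, tagged), e, d in zip(theta, eta, delta)]

# The worked example: e^{(1,1,0,1,0)}_{(0,0,0,1,0)}(2,3*,1,4,1) = (2*,3*,1,5,1)
theta = [(2, False), (3, True), (1, False), (4, False), (1, False)]
result = e_transform((1, 1, 0, 1, 0), (0, 0, 0, 1, 0), theta)
```

Note that the transformations indeed commute: the order in which the individual $e_{i}$ and $e_{i}^{*}$ are applied does not affect the result.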
We will use these transformations mainly through the following result:
9.10 Lemma.
Let $X\subseteq{\mathbb{A}_{F}^{m}}$ be a closed subscheme.
For every $\theta\in\Gamma_{m}$, and every binary vector $\eta$, we have an identity
$${\left[\nabla_{\!n}^{\theta}X\right]}=\sum_{\delta\leq\eta}{\left[\nabla_{\!n}^{e^{\eta}_{\delta}(\theta)}X\right]}$$
in ${\mathbf{Gr}}(F^{\operatorname{form}})$, for all $n$.
Proof.
We apply (39) to $Y:=\nabla_{\!n}^{\theta}X$ with respect to the variables $\tilde{x}_{\theta_{i},i}$ such that $\eta_{i}=1$, yielding
$${\left[Y\right]}=\sum_{\delta\leq\eta}{\left[\bar{Y}_{\delta}\right]}.$$
However, $\bar{Y}_{\delta}$ is obtained by inverting, that is to say, tagging, $\tilde{x}_{\theta_{i},i}$ if $\delta_{i}=1$, and equating it to zero if $\delta_{i}=0$. Since the defining arc equations for $\nabla_{\!n}^{\theta}X$ are $f_{i}(\dot{x}({\theta}))\equiv 0\mod\xi^{n}$, for $i=1,\dots,s$, where $f_{1}=\dots=f_{s}=0$ is the defining schemic formula of $X$, the arc equations of $\bar{Y}_{\delta}$ are $f_{i}(\dot{x}({\theta+\eta-\delta}))\equiv 0\mod\xi^{n}$, for $i=1,\dots,s$, together with inverting all $\tilde{x}_{\theta_{i},i}$ for which $\delta_{i}=1$. As these are the defining arc equations for $\nabla_{\!n}^{e^{\eta}_{\eta-\delta}(\theta)}X$, this proves the assertion (note that summing over all $\delta$ is the same as summing over all $\eta-\delta$).
∎
We define, by induction on the height of a tree in $\Gamma_{m}$, the notion of a resolution tree as follows: any singleton is a resolution tree; if $T$ is a resolution tree, then so is the tree $T^{\prime}$ obtained from $T$ by first choosing a leaf $\gamma$ of $T$ and a binary vector $\eta$ such that whenever an entry $\gamma_{i}$ is tagged, the corresponding entry $\eta_{i}$ is zero, and then adding to $T$ all the $e^{\eta}_{\delta}(\gamma)$ as new leaves, for $\delta\leq\eta$. By (47), the new subset is indeed a tree. In particular, if every entry of some node $\theta\in T$ is tagged and $T$ is a resolution tree, then $\theta$ is necessarily a leaf of $T$. Moreover, if $T^{\prime}$ is a subtree of $T$, that is to say, the set of all nodes of $T$ greater than or equal to a fixed node, and $T$ is a resolution tree, then so is $T^{\prime}$.
9.11 Lemma.
Let $X\subseteq\mathbb{A}^{m}$ be a closed subscheme and let $T\subseteq\Gamma_{m}$ be a subtree with root $\theta$. If $T$ is a resolution tree, then
$${\left[\nabla_{\!n}^{\theta}X\right]}=\sum_{\gamma\in T\text{ leaf}}{\left[\nabla_{\!n}^{\gamma}X\right]}$$
in ${\mathbf{Gr}}(F^{\operatorname{form}})$, for all $n$.
Proof.
By induction on the height of a node, immediate from Lemma 9.10.
∎
For a tree $T\subseteq\Gamma_{m}$ with root $\theta$, we define recursively what it means for it to be $X$-rationalizing: if all leaves of $T$ but one, say $\gamma$, are $X$-regular and $\theta\lhd_{X}\gamma$, then $T$ is $X$-rationalizing. Furthermore, if $T$ is $X$-rationalizing, $\gamma$ a leaf of $T$, and $T^{\prime}$ an $X$-rationalizing tree with root $\gamma$, then the composite tree obtained by replacing $\gamma$ in $T$ by $T^{\prime}$ is again $X$-rationalizing.
9.12 Theorem.
If $T$ is an $X$-rationalizing resolution tree with root $\theta\in\Gamma_{m}$, then the geometric Igusa-zeta series $\operatorname{Igu}^{\operatorname{geom}}_{X}$ is rational over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$.
Proof.
Let us first show that if $T$ is a resolution tree with root $\theta$ and for each leaf $\gamma$, the twisted geometric Igusa-zeta series $\operatorname{Igu}^{\gamma}_{X}$ is rational, then so is $\operatorname{Igu}^{\theta}_{X}$. Indeed, by Lemma 9.11, we have an identity
(48)
$${\left[\nabla_{\!n}^{\theta}X\right]}=\sum_{\gamma\in T\text{ leaf}}{\left[\nabla_{\!n}^{\gamma}X\right]}$$
Multiplying with $t^{n}$, and summing over all $n$ then yields
$$\operatorname{Igu}^{\theta}_{X}=\sum_{\gamma\in T\text{ leaf}}\operatorname{Igu}^{\gamma}_{X},$$
proving the claim. Hence, we may use the recursive definition of a rationalizing tree and the previous result to reduce to the case that all leaves of $T$ but one, $\bar{\gamma}$, are $X$-regular, and $\theta\lhd_{X}\bar{\gamma}$. Assume first that the characteristic is zero. Again (48) holds, and by Corollary 9.8, the directed arc classes along all leaves $\gamma\neq\bar{\gamma}$ are certain multiples of ${\mathbb{L}}^{(m-1)n}$, where the multiple is independent of $n$, whereas by Lemma 9.9, the directed arc class along $\bar{\gamma}$ is a multiple of the class along $\theta$. More precisely, there exists an element $w\in{\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$ such that
$${\left[\nabla_{\!n}^{\theta}X\right]}=w\cdot{\mathbb{L}}^{(m-1)n}+{\left[\nabla_{\!n-s}^{\theta}X\right]}\cdot{\mathbb{L}}^{r}$$
for all $n$, where $s=\operatorname{ord}_{X}(\bar{\gamma})-\operatorname{ord}_{X}(\theta)$ and $r=sm-\left|\bar{\gamma}\right|+\left|\theta\right|$.
Multiplying with $t^{n}$ and summing over all sufficiently large $n$, we get an identity
$$\operatorname{Igu}^{\theta}_{X}=\frac{Q}{1-{\mathbb{L}}^{m-1}t}+{\mathbb{L}}^{r}t^{s}\operatorname{Igu}^{\theta}_{X}$$
for some polynomial $Q$ over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$. Solving for $\operatorname{Igu}^{\theta}_{X}$ then proves the claim, as $s>0$.
∎
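The last step of the proof can be illustrated numerically. The Python sketch below is our own illustration; the integer values standing in for $w$, ${\mathbb{L}}^{m-1}$ and ${\mathbb{L}}^{r}$ are arbitrary specializations (we cannot compute in the Grothendieck ring itself), chosen only to exhibit the mechanism: a sequence satisfying $a_{n}=w\cdot B^{n}+R\cdot a_{n-s}$ has a generating series that becomes a polynomial after multiplication by $(1-Bt)(1-Rt^{s})$.

```python
def check_rationality(w=5, B=3, R=7, s=4, init=(1, 2, 3, 4), N=40):
    """Specialize the recursion a_n = w*B^n + R*a_{n-s} (for n >= s) with
    integer stand-ins for the Grothendieck-ring classes, and verify that
    A(t) = sum a_n t^n times (1 - B t)(1 - R t^s) has vanishing
    coefficients in every degree above s, i.e. A(t) is rational with the
    predicted denominator."""
    a = list(init)                      # len(init) must equal s
    for n in range(s, N):
        a.append(w * B ** n + R * a[n - s])
    # coefficient of t^n in A(t) * (1 - B t) * (1 - R t^s), for n > s
    coeffs = [a[n] - B * a[n - 1] - R * a[n - s] + B * R * a[n - s - 1]
              for n in range(s + 1, N)]
    return all(c == 0 for c in coeffs)
```

Solving the displayed identity for $\operatorname{Igu}^{\theta}_{X}$ corresponds exactly to this cancellation; the condition $s>0$ guarantees that $1-{\mathbb{L}}^{r}t^{s}$ is invertible as a formal power series.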
Linear rationalization algorithm
The algorithm that we will use here to construct an $X$-rationalizing resolution tree with root $\mathbf{0}$, thus establishing the rationality of the geometric Igusa-zeta series of a hypersurface $X$ by Theorem 9.12, relies on the simple form the singular locus takes. Namely, we say that a hypersurface $X$ containing the origin has linear singularities, if its singular locus is a union of coordinate subspaces, where a coordinate subspace is a closed subscheme given by equations $x_{i_{1}}=\dots=x_{i_{s}}=0$ for some subset $x_{i_{j}}$ of the variables. We will apply the algorithm to hypersurfaces all of whose twisted initial forms have linear singularities.
Single-branch linear rationalization algorithm for power hypersurfaces with an isolated singularity
In its simplest form, the algorithm works as follows: assume that for every twisted initial form $\tilde{X}^{\theta}$ of $X$, there exists a variable $x_{i}$ such that the basic subset $(\tilde{X}^{\theta})_{x_{i}}$ is smooth. We then apply Lemma 9.10 with $\eta=e_{i}$, thus building a binary tree with, at each stage, exactly one untagged and one tagged leaf, the latter moreover being $X$-regular, and hence requiring no further action. We continue this process (on the remaining untagged leaf) until we reach an untagged leaf $\gamma$ with $\mathbf{0}\lhd_{X}\gamma$, at which point we can invoke Theorem 9.12. If such a leaf $\gamma$ can be found, we say that the algorithm stops.
This algorithm will stop on any hypersurface $X$ with an equation of the form
$$f:=r_{1}x_{1}^{a_{1}}+\dots+r_{m}x_{m}^{a_{m}}$$
with $a_{i}>0$ and $r_{i}\in F$; we will refer to such an $X$ as a power hypersurface. In characteristic zero, the origin is an isolated singularity, but in positive characteristic, this is only the case if at most one of the powers $a_{i}$ is divisible by the characteristic. In the isolated singularity case, the algorithm as described above does apply: any twisted initial form is again a power hypersurface; if it is one of the powers $x_{i}^{a_{i}}$, its regular locus, although empty, is obtained by inverting $x_{i}$, even if $a_{i}$ is divisible by the characteristic; in the remaining case, we can always invert one variable whose power is not divisible by the characteristic, yielding a smooth twisted initial form. It remains to show that this algorithm stops, that is to say, eventually produces a leaf $\gamma$ such that $\mathbf{0}\lhd_{X}\gamma$. To see this, note that the set of all $\operatorname{ord}_{X}(\theta)$, with $\theta$ running over all untagged nodes in the tree, equals the union of the semi-groups $a_{i}\mathbb{N}$, for $i=1,\dots,m$. Therefore, if $e$ is the least common multiple of all $a_{i}$, it will occur as $\operatorname{ord}_{X}(\gamma)$ for some untagged leaf $\gamma$ in this algorithm. However, it is easy to see that then $\tilde{X}^{\gamma}=X$, and hence we showed:
9.13 Theorem.
The geometric Igusa zeta-series $\operatorname{Igu}^{\operatorname{geom}}_{X}$ of a power hypersurface $X$ with an isolated singularity is rational over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$.∎
In the next section, we will work out in complete detail the implementation of this algorithm for the power surface $x^{2}+y^{3}+z^{4}=0$. Generalizing these calculations, we will derive the following formula:
9.14 Corollary.
If $r_{1}x_{1}^{a_{1}}+\dots+r_{m}x_{m}^{a_{m}}=0$ is the equation of the power hypersurface $X$ with an isolated singularity, then there exists a polynomial $Q_{X}(t)\in{\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}[t]$ such that
$$\operatorname{Igu}^{\operatorname{geom}}_{X}=\frac{Q_{X}(t)}{(1-{\mathbb{L}}^{m-1}t)(1-{\mathbb{L}}^{N}t^{e})}$$
where $e$ is the least common multiple of $a_{1},\dots,a_{m}$, and where
(49)
$$N=e(\frac{a_{1}-1}{a_{1}}+\dots+\frac{a_{m}-1}{a_{m}}).$$
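The single-branch algorithm, and with it the exponents $e$ and $N$, can be replayed mechanically. The Python sketch below is our own illustration; it tracks only the untagged leaf, using the observation that for a power hypersurface the twisted initial form at $\theta$ consists of the terms minimizing $a_{i}\theta_{i}$, and that the algorithm increments exactly those entries until the twisted initial form is $X$ itself.

```python
from math import lcm

def single_branch(a):
    """Run the single-branch algorithm for the power hypersurface
    r_1 x_1^{a_1} + ... + r_m x_m^{a_m}, starting from the leaf (1,...,1)
    (the fibre over the origin).  Returns the final untagged leaf gamma
    and ord_X(gamma); the algorithm halts when all xi-orders a_i*gamma_i
    agree, i.e. when the twisted initial form is X itself."""
    gamma = [1] * len(a)
    while True:
        orders = [ai * gi for ai, gi in zip(a, gamma)]
        o = min(orders)
        if all(x == o for x in orders):
            return gamma, o
        for i, x in enumerate(orders):
            if x == o:
                gamma[i] += 1   # branch e_i; the tagged branch is X-regular

# E6 example x^2 + y^3 + z^4: stops at (6,4,3) with ord 12 = lcm(2,3,4),
# and N = e*m - |gamma| = 36 - 13 = 23, matching (49).
gamma, e = single_branch((2, 3, 4))
N = e * 3 - sum(gamma)
```

Since each order $a_{i}\gamma_{i}$ climbs through the multiples of $a_{i}$ and never overshoots a common multiple, the first moment at which all orders agree is precisely $e=\operatorname{lcm}(a_{1},\dots,a_{m})$, and the final leaf is $(e/a_{1},\dots,e/a_{m})$, which recovers formula (49).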
10. Rationality of the Igusa-zeta series for Du Val surfaces
In this final section, we apply the previous rationalization algorithm to the geometric Igusa zeta series of Du Val surfaces, which, over a field $F$ of characteristic $p\neq 2$, are precisely the isolated canonical singularities (at the origin $O$). Over $\mathbb{C}$, they can be realized, up to analytic isomorphism, as the quotients $\mathbb{A}^{2}/\Gamma$, where $\Gamma\subseteq\operatorname{SL}_{2}(\mathbb{C})$ is a finite subgroup. A complete invariant is the dual resolution graph, viewed as one of the following Dynkin diagrams: $A_{k}$, $D_{k}$, $E_{6}$, $E_{7}$, or $E_{8}$, and we will therefore denote the surfaces simply by the latter letters. The main result of this section is the rationality of their geometric Igusa-zeta series over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$, summarized by the following table, where we list in the last column only the relevant factor in the denominator (the other factor being $(1-{\mathbb{L}}^{2}t)$).
The $E_{6}$-surface
Let us work step-by-step through the rationalization algorithm for the Du Val surface $E_{6}$ with equation $x^{2}+y^{3}+z^{4}$. We take a ‘short-cut’ by observing that the origin $O$ is an isolated singularity, so that we only need to calculate the class of $\nabla_{\!n}^{(1,1,1)}E_{6}={\rho_{n}^{-1}(O)}$ by Theorem 9.3. By (41), its arc equations are
$$\xi^{2}\dot{x}^{2}+\xi^{3}\dot{y}^{3}+\xi^{4}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
together with the initial conditions $\tilde{x}_{i}=\tilde{y}_{i}=\tilde{z}_{i}=0$ for $i\geq n-1$.
The twisted initial form is $x^{2}$. According to the algorithm, we have a single branching given by the transformations $e_{1}^{*}$ and $e_{1}$. The twisted initial form of $e_{1}^{*}(1,1,1)=(1^{*},1,1)$ is defined by $x_{*}^{2}=0$ and hence is empty. So remains the untagged leaf $e_{1}(1,1,1)=(2,1,1)$, with arc equations
$$\xi^{4}\dot{x}^{2}+\xi^{3}\dot{y}^{3}+\xi^{4}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
and in addition to the previous initial conditions, also $\tilde{x}_{n-2}=0$. As the twisted initial form is $y^{3}$, we branch with $e_{2}^{*}$ and $e_{2}$. The twisted initial form of $e_{2}^{*}(2,1,1)=(2,1^{*},1)$ is $y_{*}^{3}=0$, whence empty, leaving us with $e_{2}(2,1,1)=(2,2,1)$, whose arc equations are
(50)
$$\xi^{4}\dot{x}^{2}+\xi^{6}\dot{y}^{3}+\xi^{4}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
and an additional initial condition $\tilde{y}_{n-2}=0$. The new twisted initial form is $x^{2}+z^{4}$. At this point, inverting either variable $x$ or $z$ yields a regular surface. However, instead of choosing one, we may perform a multi-branching step, in which we consider all four possibilities $e_{1}e_{3}$, $e_{1}^{*}e_{3}$, $e_{1}e_{3}^{*}$, or $e_{1}^{*}e_{3}^{*}$, when applying (39), yielding the four leaves $(3,2,2)$, $(2^{*},2,2)$, $(3,2,1^{*})$, and $(2^{*},2,1^{*})$, respectively. The corresponding initial forms are given by $x^{2}+y^{3}=0$, $x_{*}^{2}=0$, $z^{4}_{*}=0$, and $x^{2}_{*}+z_{*}^{4}=0$. The middle two are clearly empty, and, as the last is smooth, we may invoke Corollary 9.8 to get
$${\left[\nabla_{\!n}^{(2^{*},2,1^{*})}E_{6}\right]}={\left[\tilde{E_{6}}^{(2^{*},2,1^{*})}\right]}\cdot{\mathbb{L}}^{2n-2+4-5}={\left[x^{2}_{*}+z_{*}^{4}\right]}\cdot{\mathbb{L}}^{2n-3}$$
as $\operatorname{ord}_{(2^{*},2,1^{*})}(E_{6})=4$. This leaves the first leaf, $(3,2,2)$, with arc equations
$$\xi^{6}\dot{x}^{2}+\xi^{6}\dot{y}^{3}+\xi^{8}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
and the two additional initial conditions $\tilde{x}_{n-3}=\tilde{z}_{n-2}=0$. Its twisted initial form $x^{2}+y^{3}$ becomes non-singular if we invert $x$ or $y$, suggesting another multi-branching step. Inverting one and equating the other to zero leads once more to contradictory equations,
so we only have to deal with the two leaves $(3^{*},2^{*},2)$ and $(4,3,2)$. For the former, we may once more invoke Corollary 9.8, yielding the class
$${\left[x^{2}_{*}+y_{*}^{3}\right]}\cdot{\mathbb{L}}^{2n-2+6-7},$$
as $\operatorname{ord}_{(3^{*},2^{*},2)}(E_{6})=6$. The latter has arc equations
$$\xi^{8}\dot{x}^{2}+\xi^{9}\dot{y}^{3}+\xi^{8}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
together with the vanishing of $\tilde{x}_{i}$, $\tilde{y}_{i}$ and $\tilde{z}_{i}$ for $i$ greater than or equal to respectively $n-4$, $n-3$, and $n-2$. Since $(4,3,2)$ has the same twisted initial form as $(2,2,1)$,
we may repeat our previous argument. Tagging both variables gives the leaf $(4^{*},3,2^{*})$ and
$${\left[x^{2}_{*}+z_{*}^{4}\right]}\cdot{\mathbb{L}}^{2n-2+8-9}$$
as $\operatorname{ord}_{(4^{*},3,2^{*})}(E_{6})=8$. The latter leaf is $(5,3,3)$, with arc equations
$$\xi^{10}\dot{x}^{2}+\xi^{9}\dot{y}^{3}+\xi^{12}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
together with $\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i}=0$ for $i\geq n-5,n-3,n-3$ respectively. As the twisted initial form is $y^{3}$, we again branch over $e_{2}^{*}$ and $e_{2}$ leading to the leaf $(5,4,3)$, with arc equations
$$\xi^{10}\dot{x}^{2}+\xi^{12}\dot{y}^{3}+\xi^{12}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
together with $\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i}=0$, for $i\geq n-5,n-4,n-3$ respectively. As $x^{2}$ is the new twisted initial form, we branch over $e_{1}^{*}$ and $e_{1}$, yielding the leaf $(6,4,3)$, with arc equations
$$\xi^{12}\dot{x}^{2}+\xi^{12}\dot{y}^{3}+\xi^{12}\dot{z}^{4}\equiv 0\mod\xi^{n}$$
together with $\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i}=0$ for $i\geq n-6,n-4,n-3$ respectively. As $X$ itself is the twisted initial form of this leaf, that is to say, $\mathbf{0}\lhd_{X}(6,4,3)$, our algorithm has come to a halt. Indeed, if we factor out $\xi^{12}$ in the last equation, we get the $(n-12)$-th arc equations. Since we have $\left|(6,4,3)\right|=13$ additional initial conditions, we are left with $3\cdot 12-13=23$ free variables $\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i}$ for $n-12\leq i<n-6,n-4,n-3$ respectively, as predicted by Lemma 9.9. Putting everything together, we showed that ${\left[\nabla_{\!n}E_{6}\right]}$ is equal to
(51)
$${\left[E_{6}-O\right]}{\mathbb{L}}^{2n-2}+2{\left[\tilde{E_{6}}^{(2^{*},2,1^{*})}\right]}{\mathbb{L}}^{2n-3}+{\left[\tilde{E_{6}}^{(3^{*},2^{*},2)}\right]}{\mathbb{L}}^{2n-3}+{\left[\nabla_{\!n-12}E_{6}\right]}{\mathbb{L}}^{23}.$$
Multiplying with $t^{n}$, summing over all $n$, and solving for the zeta series yields
$$\operatorname{Igu}^{\operatorname{geom}}_{E_{6}}=\frac{Q_{E_{6}}}{(1-{\mathbb{L}}^{2}t)(1-{\mathbb{L}}^{23}t^{12})}$$
for some polynomial $Q_{E_{6}}$ over ${\mathbf{Gr}}(F^{\operatorname{form}})_{\mathbb{L}}$. A schematic representation of these calculations is given by the following rationalization tree, in which we equated, for brevity, a leaf to the class of the corresponding directed arc scheme (giving only its defining polynomial):
Sequences of labeled trees related to Gelfand-Tsetlin patterns
Ilse Fischer
Abstract.
By rewriting the famous hook-content formula it easily follows that there are $\prod\limits_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}$ semistandard tableaux of shape $(k_{n},k_{n-1},\ldots,k_{1})$ with entries in $\{1,2,\ldots,n\}$ or, equivalently, Gelfand-Tsetlin patterns with bottom row $(k_{1},\ldots,k_{n})$. In this article we introduce certain sequences of labeled trees, the signed enumeration of which is also given by this formula. In these trees, vertices as well as edges are labeled, the crucial condition being that each edge label lies between the vertex labels of the two endpoints of the edge. This notion enables us to give combinatorial explanations of the shifted antisymmetry of the formula
and its polynomiality. Furthermore, we propose to develop an analogous approach of combinatorial reasoning for monotone triangles and explain how this may lead to a combinatorial understanding of the alternating sign matrix theorem.
Supported by the Austrian Science Foundation
FWF, START grant Y463 and NFN grant S9607–N13.
1. Introduction
One way to see that the expression
$$\prod\limits_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}$$
(1.1)
is an integer for any choice of $(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$
is to find combinatorial
objects that are enumerated by this quantity. This is, for instance, accomplished by Gelfand-Tsetlin patterns
with prescribed bottom row $k_{1},k_{2},\ldots,k_{n}$. A Gelfand-Tsetlin pattern (see [11, p. 313] or [7, (3)] for the first appearance) is a triangular array of
integers with $n$ rows of the following shape
$$\begin{array}{ccccccccccc}
&&&&&a_{1,1}&&&&&\\
&&&&a_{2,1}&&a_{2,2}&&&&\\
&&&\dots&&\dots&&\dots&&&\\
&&a_{n-2,1}&&\dots&&\dots&&a_{n-2,n-2}&&\\
&a_{n-1,1}&&a_{n-1,2}&&\dots&&\dots&&a_{n-1,n-1}&\\
a_{n,1}&&a_{n,2}&&a_{n,3}&&\dots&&\dots&&a_{n,n}
\end{array},$$
that is monotone increasing along northeast diagonals and southeast
diagonals, i.e. $a_{i,j}\leq a_{i-1,j}$ for $1\leq j<i\leq n$ and $a_{i,j}\leq a_{i+1,j+1}$ for
$1\leq j\leq i<n$. There is no loss of generality in assuming that
$(k_{1},\ldots,k_{n})\in\mathbb{Z}_{\geq 0}^{n}$, as Gelfand-Tsetlin patterns with bottom row $(k_{1},\ldots,k_{n})$ are obviously in bijective correspondence with Gelfand-Tsetlin patterns with bottom row $(k_{1}+t,\ldots,k_{n}+t)$ for any $t\in\mathbb{Z}$. Under this assumption, they are equivalent to
semistandard tableaux of shape $(k_{n},k_{n-1},\ldots,k_{1})$ with entries in $\{1,2,\ldots,n\}$, the latter
being fillings of the Ferrers diagram associated with the integer partition $(k_{n},k_{n-1},\ldots,k_{1})$ that are weakly increasing
along rows and strictly increasing along columns. (Note that there is actually no dependency between the number of feasible values for the entries of the semistandard tableaux and the number of parts in the integer partition: semistandard tableaux of shape $(k_{m},k_{m-1},\ldots,k_{1})$ with entries in $\{1,2,\ldots,n\}$ are equivalent to semistandard tableaux of shape $(k_{m},k_{m-1},\ldots,k_{1},0^{n-m})$ with entries in $\{1,2,\ldots,n\}$ if $n\geq m$, and there exists no semistandard tableau otherwise.) Next we give an example of a Gelfand-Tsetlin pattern and the corresponding semistandard tableau.
$$\begin{array}{ccccccccccc}
&&&&&2&&&&&\\
&&&&2&&2&&&&\\
&&&1&&2&&4&&&\\
&&1&&1&&3&&4&&\\
&0&&1&&3&&3&&5&\\
0&&0&&2&&3&&5&&6
\end{array}
\qquad\qquad
\begin{array}{cccccc}
1&1&3&3&5&6\\
2&2&4&6&6\\
3&5&5&&&\\
4&6&&&&
\end{array}$$
In general, given a Gelfand-Tsetlin pattern $(a_{i,j})_{1\leq j\leq i\leq n}$, the corresponding semistandard tableau is constructed by placing the integer $i$ in the cells of the skew shape
$$(a_{i,i},a_{i,i-1},\ldots,a_{i,1})/(a_{i-1,i-1},a_{i-1,i-2},\ldots,a_{i-1,1}).$$
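This construction is easy to automate. The Python sketch below is our own illustration (not part of the text); the pattern is entered row by row from top to bottom, and the code reproduces the example pattern and tableau given above.

```python
def gt_to_ssyt(pattern):
    """Convert a Gelfand-Tsetlin pattern, given as rows [row 1, ..., row n]
    with row i = (a_{i,1}, ..., a_{i,i}), into its semistandard tableau by
    filling the cells of each skew shape with the entry i."""
    n = len(pattern)
    tableau = [[] for _ in range(n)]
    prev = []
    for i, row in enumerate(pattern, start=1):
        lam = list(reversed(row))          # partition (a_{i,i}, ..., a_{i,1})
        for r, part in enumerate(lam):
            mu = prev[r] if r < len(prev) else 0
            tableau[r].extend([i] * (part - mu))   # new cells in row r+1
        prev = lam
    return [row for row in tableau if row]

example = [[2], [2, 2], [1, 2, 4], [1, 1, 3, 4],
           [0, 1, 3, 3, 5], [0, 0, 2, 3, 5, 6]]
```

Running `gt_to_ssyt(example)` yields the tableau with rows $1\,1\,3\,3\,5\,6$, $2\,2\,4\,6\,6$, $3\,5\,5$, $4\,6$ from the example above.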
Semistandard tableaux of fixed shape (and thus Gelfand-Tsetlin patterns) are known to be enumerated by the hook-content formula [11, Corollary 7.21.4], which is easily seen to be equivalent to (1.1), see also [11, Lemma 7.21.1]. A common way to prove this formula is to translate the problem into the enumeration of families of non-intersecting lattice paths with a certain set of fixed starting points and end points. To complement the treatment given in this article, we sketch this point of view in Appendix A.
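For small bottom rows, the agreement with (1.1) can also be checked by brute force. The following Python sketch is our own illustration: it enumerates Gelfand-Tsetlin patterns row by row using the interlacing conditions (the row above a weakly increasing row is automatically weakly increasing) and compares the count with the product formula.

```python
from fractions import Fraction
from itertools import product

def gt_count(bottom):
    """Number of Gelfand-Tsetlin patterns with the given weakly increasing
    bottom row, enumerating the next row up recursively: the j-th entry of
    the row above ranges over [bottom[j], bottom[j+1]]."""
    if len(bottom) == 1:
        return 1
    ranges = [range(bottom[j], bottom[j + 1] + 1)
              for j in range(len(bottom) - 1)]
    return sum(gt_count(row) for row in product(*ranges))

def product_formula(k):
    """The right-hand side of (1.1), computed exactly with fractions."""
    n = len(k)
    p = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            p *= Fraction(k[j] - k[i] + j - i, j - i)
    return p
```

For instance, `gt_count((0, 1, 3))` and `product_formula((0, 1, 3))` both give $15$.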
A direct proof of the fact that Gelfand-Tsetlin patterns with bottom row
$k_{1},k_{2},\ldots,k_{n}$ are enumerated by (1.1) can be found in [2, Section 5]. There we have actually proven a more general result, which
we describe in the following paragraph.
The reader will have noticed that the combinatorial interpretations that we have given so far only provide an explanation for the integrality of (1.1) if the sequence $k_{1},k_{2},\ldots,k_{n}$ is weakly increasing. This can be overcome (alternatively, one may choose a permutation $\sigma\in{\mathcal{S}}_{n}$ with $k_{\sigma_{1}}+\sigma_{1}\leq k_{\sigma_{2}}+\sigma_{2}\leq\ldots\leq k_{\sigma_{n}}+\sigma_{n}$ and observe that $\prod\limits_{1\leq i<j\leq n}\frac{k_{\sigma_{j}}-k_{\sigma_{i}}+\sigma_{j}-\sigma_{i}}{j-i}=\operatorname{sgn}\sigma\prod\limits_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}$ is the number of Gelfand-Tsetlin patterns with bottom row $(k_{\sigma_{1}}+\sigma_{1}-1,k_{\sigma_{2}}+\sigma_{2}-2,\ldots,k_{\sigma_{n}}+\sigma_{n}-n)$) by extending the combinatorial interpretation of Gelfand-Tsetlin patterns with bottom row $(k_{1},\dots,k_{n})$ to all $n$-tuples of integers
$(k_{1},\ldots,k_{n})$ and working with a signed enumeration as follows: a (generalized) Gelfand-Tsetlin pattern is an array of integers $(a_{i,j})_{1\leq j\leq i\leq n}$ such that the following condition is fulfilled: for any $a_{i,j}$ with $1\leq j\leq i\leq n-1$ we have $a_{i+1,j}\leq a_{i,j}\leq a_{i+1,j+1}$ if $a_{i+1,j}\leq a_{i+1,j+1}$ and $a_{i+1,j}>a_{i,j}>a_{i+1,j+1}$
if $a_{i+1,j}>a_{i+1,j+1}$. (In particular, there exists no generalized Gelfand-Tsetlin pattern with $a_{i+1,j}=a_{i+1,j+1}+1$.) In the latter case we say that $a_{i,j}$ is an inversion.
The weight (or sign) of a given Gelfand-Tsetlin pattern is $(-1)^{\text{$\#$ of inversions}}$. With this, (1.1) is the signed enumeration of all Gelfand-Tsetlin patterns with bottom row $k_{1},k_{2},\ldots,k_{n}$.
The main task of the present paper is to provide a whole family of sets of objects that come along with a rather canonical notion of a sign, the signed enumeration of each of these sets is given by (1.1). We call these objects Gelfand-Tsetlin tree sequences as Gelfand-Tsetlin patterns are one special member of this family. The definition of these objects is given in Section 2.
This enables us to give a combinatorial proof of the fact that Gelfand-Tsetlin patterns are enumerated by (1.1). Interestingly, this combinatorial proof is not based on a bijection between Gelfand-Tsetlin patterns and a second type of objects which are more easily seen to be enumerated by (1.1). Instead, we give combinatorial proofs of two facts: that the replacement $(k_{i},k_{j})\to(k_{j}+j-i,k_{i}+i-j)$ in the enumeration formula for the number of Gelfand-Tsetlin patterns with prescribed bottom row merely inverts the sign (Section 3), and that the enumeration formula must be a polynomial in $(k_{1},\ldots,k_{n})$ of degree no greater than $n-1$ in every $k_{i}$ (Section 4). For each of these properties, this is accomplished by providing an appropriate member of the family for which the respective property is almost obvious. Then, it is not hard to see that these properties essentially determine the enumeration formula, which is the only algebraic part of the proof.
Note that the first property can obviously only be understood combinatorially after having extended the combinatorial interpretation of Gelfand-Tsetlin patterns with bottom row $k_{1},k_{2},\ldots,k_{n}$ to arbitrary $(k_{1},k_{2},\ldots,k_{n})\in\mathbb{Z}^{n}$, as the sequence $k_{1},\ldots,k_{i-1},k_{j}+j-i,k_{i+1},\ldots,k_{j-1},k_{i}+i-j,k_{j+1},\ldots,k_{n}$ cannot be weakly increasing if $k_{1},k_{2},\ldots,k_{n}$ is weakly increasing. Also, the inversion of the sign surely indicates that a signed enumeration must be involved.
However, the original motivation for this paper is the intention to translate some of the research we have done on monotone triangles into more combinatorial reasoning. Monotone triangles are Gelfand-Tsetlin patterns with strictly increasing rows, and their significance is due to the fact that they are in bijective correspondence with alternating sign matrices when prescribing $1,2,\ldots,n$ as bottom row.
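This correspondence can be checked for small $n$ by brute force. In the Python sketch below (our own illustration), monotone triangles are enumerated exactly like Gelfand-Tsetlin patterns, with the additional requirement of strictly increasing rows; the counts for bottom row $1,2,\ldots,n$ are then the well-known alternating sign matrix numbers $1,2,7,42,\ldots$

```python
from itertools import product

def monotone_triangles(bottom):
    """Count monotone triangles (Gelfand-Tsetlin patterns with strictly
    increasing rows) with the given strictly increasing bottom row."""
    if len(bottom) == 1:
        return 1
    ranges = [range(bottom[j], bottom[j + 1] + 1)
              for j in range(len(bottom) - 1)]
    return sum(monotone_triangles(row) for row in product(*ranges)
               if all(row[j] < row[j + 1] for j in range(len(row) - 1)))

# bottom rows (1), (1,2), (1,2,3), (1,2,3,4): counts of n x n ASMs
asm_counts = [monotone_triangles(tuple(range(1, n + 1))) for n in range(1, 5)]
```

For $n=3$, the $7$ monotone triangles correspond to the $7$ alternating sign matrices of size $3\times 3$.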
It took a lot of effort to enumerate $n\times n$ alternating sign matrices, and none of the proofs known so far can be considered a combinatorial proof, as they usually involve heavy algebraic manipulations, see [1]. Also the long-standing “Gog-Magog conjecture” [9], which is a generalization of the fact that $n\times n$ alternating sign matrices are in bijective correspondence with $2n\times 2n\times 2n$ totally symmetric self-complementary plane partitions, is still unsolved, which is another indication that alternating sign matrices (as well as plane partitions) are combinatorial objects that are rather resistant to combinatorial reasoning.
Our own proof of the alternating sign matrix theorem [4] makes us believe that it could be helpful to work with signed enumerations: let $\alpha(n;k_{1},\ldots,k_{n})$ denote the number of monotone triangles with bottom row $k_{1},\ldots,k_{n}$. The key identity in this
proof is the following.
$$\alpha(n;k_{1},\ldots,k_{n})=(-1)^{n-1}\alpha(n;k_{2},\ldots,k_{n},k_{1}-n)$$
(1.2)
Obviously, this identity does not make any sense at first, as $k_{2},k_{3},\ldots,k_{n},k_{1}-n$ is not strictly increasing if $k_{1},k_{2},\ldots,k_{n}$ is strictly increasing. However, it is not hard to see that, for fixed $n$, the quantity
$\alpha(n;k_{1},\ldots,k_{n})$ can in fact be represented by a (unique) polynomial in $k_{1},\ldots,k_{n}$ and so (1.2)
can be understood as an identity for this polynomial. On the other hand, it is also possible to give $\alpha(n;k_{1},\ldots,k_{n})$ a combinatorial interpretation for all
$(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$ in terms of a signed enumeration.
We have provided such an interpretation in [5] and give three additional but related extensions in the concluding section of this article. These extensions provide combinatorial interpretations of (1.2), and a combinatorial proof of this identity could be an important step towards a combinatorial understanding of the alternating sign matrix theorem, as we explain in Section 5. It is hoped that a combinatorial proof of this identity, as well as of other interesting identities involving monotone triangles, follows the same lines as the combinatorial reasoning we present in this article for Gelfand-Tsetlin patterns.
2. Definition of Gelfand-Tsetlin tree sequences
In
this paper, an $n$–tree is a directed tree with $n$ vertices such that the vertices are identified with integers in
$\{1,2,\ldots,n\}$ and the edges are identified with primed integers
in $\{1^{\prime},2^{\prime},\ldots,(n-1)^{\prime}\}$. In Figure 1, we give an example of an $8$–tree. We consider sequences of trees: a tree sequence of order
$n$ is a sequence of trees ${\mathcal{T}}=(T_{1},T_{2},\ldots,T_{n})$
such that $T_{i}$ is an $i$-tree for each $i$, see Figure 4 for an example of order
$5$. Each member of the family, the signed enumeration of which is given by (1.1), will have a fixed underlying tree sequence of order $n$. The actual objects will be certain admissible labelings (vertices and edges are labeled; the labels must not be confused with the “names” of the vertices and edges) of the underlying tree sequence. Gelfand-Tsetlin patterns will be one member of this family; in the underlying tree sequence ${\mathcal{B}}=(B_{1},B_{2},\ldots,B_{n})$,
the $i$-trees $B_{i}$ are paths with the canonical labeling, i.e.
$j^{\prime}=(j,j+1)\in E(B_{i})$ for $j=1,2,\ldots,i-1$. In the following, the
tree $B_{i}$ will be referred to as the basic $i$-tree. In Figure 3, we display the respective tree sequence of order
$6$ (left figure) and the admissible labeling (a notion to be defined below) that corresponds to the Gelfand-Tsetlin pattern given in the introduction (right figure). In the right figure, we suppress the “names” of the vertices and edges in order to avoid a confusion with the labelings. However, these “names” are just the second summands of the labelings, whereas the first summand corresponds to the respective entry of the Gelfand-Tsetlin pattern given in the introduction.
We work towards defining admissible labelings of tree sequences.
Definition 1.
Let $T$ be an $n$–tree and ${\bf k}=(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$.
A vector ${\bf l}=(l_{1},\ldots,l_{n-1})\in\mathbb{Z}^{n-1}$ is said to be
admissible for the pair $(T,{\bf k})$ if for each edge $j^{\prime}=(p,q)$ of $T$ the following is fulfilled: if
$k_{p}+p<k_{q}+q$ then $k_{p}+p\leq l_{j}+j<k_{q}+q$ and otherwise $k_{q}+q\leq l_{j}+j<k_{p}+p$. In the latter case we say that the edge $j^{\prime}$ is an inversion of the pair $(T,{\bf k})$.
Phrased differently, if we label vertex $i$ with $k_{i}+i$ and edge $j^{\prime}$ with $l_{j}+j$ for all $i$ and $j$, then, for each edge, the edge label is greater than or equal to the minimum of the two vertex labels at the endpoints of the edge, but smaller than the maximum. The edge is an inversion if it is directed from the maximum vertex label to the minimum vertex label. If, for an edge, the label of the tail coincides with the label of the head, then there exists no vector $\bf l$ that is admissible for the pair $(T,{\bf k})$. In the following, we address the vectors ${\bf k}+(1,2,\ldots,n)$ and ${\bf l}+(1,2,\ldots,n-1)$ as the vertex labeling, respectively edge labeling, of the tree, and the vectors $\bf k$ and $\bf l$ as the shifted labelings.
For instance, consider the $8$-tree $T$ in Figure 1 and the vector ${\bf k}=(4,1,7,2,4,2,6,1)\in\mathbb{Z}^{8}$. Then the vector
${\bf l}=(6,3,9,5,1,2,1)$ is admissible for $(T,{\bf k})$, see Figure 4. The
inversions are $2^{\prime},3^{\prime},6^{\prime}$. Also observe that
there is no admissible shifted labeling $\bf l$ if ${\bf k}=(4,1,7,2,4,2,6,2)$ as there is no $l_{4}$ with $2+8\leq l_{4}+4<7+3$.
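Definition 1 is easy to mechanize. The following sketch checks admissibility and collects the inversions; it runs on a small hypothetical 4-tree of our own (the full edge list of the tree in Figure 1 is not reproduced in the text, so nothing here is taken from the figures).

```python
# Hypothetical 4-tree for illustration: edge j' is stored as edges[j] = (p, q),
# meaning that j' is directed from vertex p to vertex q.
edges = {1: (1, 2), 2: (3, 2), 3: (2, 4)}

def admissible_inversions(edges, k, l):
    """Return the set of inversions of (T, k) if the shifted edge labeling l
    is admissible for (T, k) in the sense of Definition 1, else None."""
    inversions = set()
    for j, (p, q) in edges.items():
        lo = min(k[p] + p, k[q] + q)
        hi = max(k[p] + p, k[q] + q)
        if not (lo <= l[j] + j < hi):   # edge label must lie in [min, max)
            return None
        if k[p] + p > k[q] + q:         # edge directed from maximum to minimum
            inversions.add(j)
    return inversions

k = {1: 0, 2: 3, 3: 5, 4: 0}   # vertex labels k_i + i: 1, 5, 8, 4
l = {1: 1, 2: 4, 3: 1}         # edge labels  l_j + j: 2, 6, 4

print(sorted(admissible_inversions(edges, k, l)))   # [2, 3]
```

Here the edges $2^{\prime}$ and $3^{\prime}$ are directed from their larger to their smaller vertex label, so both are inversions; raising $l_{3}$ by one would push the label of edge $3^{\prime}$ out of its half-open interval and destroy admissibility.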
Now we are in a position to define Gelfand-Tsetlin tree sequences.
Definition 2.
A Gelfand-Tsetlin tree sequence
associated with a tree sequence ${\mathcal{T}}=(T_{1},\ldots,T_{n})$ of order $n$ and a shifted labeling ${\bf k}\in\mathbb{Z}^{n}$ of the vertices of $T_{n}$ is a sequence $({\bf l_{1},l_{2},\ldots,l_{n}})$ of vectors ${\bf l_{i}}\in\mathbb{Z}^{i}$ with ${\bf l_{n}}={\bf k}$ such that $\bf l_{i-1}$ is
admissible for the pair $(T_{i},{\bf l_{i}})$ if $i=2,3,\ldots,n$. We let ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ denote the set of these Gelfand-Tsetlin tree sequences.
In Figure 5, we give an example of a Gelfand-Tsetlin tree sequence
associated with the tree sequence displayed in Figure 2. Observe that
${\bf k}=(5,6,3,-3,0)$ in this case. An edge label is displayed in italic type if the corresponding edge is an inversion. In Figure 3, we represent the Gelfand-Tsetlin pattern from Section 1
as a Gelfand-Tsetlin tree sequence associated with $(B_{1},B_{2},B_{3},B_{4},B_{5},B_{6})$.
We give a preliminary definition of the sign of a Gelfand-Tsetlin
tree sequence:
the inversions of a Gelfand-Tsetlin tree sequence are the inversions
of the pairs $(T_{i},{\bf l_{i}})$ for $i=2,3,\ldots,n$ and the sign is defined as
$(-1)^{\text{$\#$ of inversions}}$.
The (preliminary) sign of the Gelfand-Tsetlin tree sequence given in Figure 5 is $-1$ as there are $7$ inversions. We will see that the signed enumeration of Gelfand-Tsetlin tree sequences associated with
a fixed tree sequence ${\mathcal{T}}=(T_{1},\ldots,T_{n})$ of order $n$ and a fixed shifted labeling ${\bf k}=(k_{1},\ldots,k_{n})$ of the vertices of $T_{n}$ is, up to a sign, equal to (1.1).
This sign only depends on the underlying unlabeled tree sequence ${\mathcal{T}}$ and will be defined next. After that we adjust the definition of the sign of a Gelfand-Tstelin tree sequence by multiplying this global sign.
For this purpose, we define the sign of an $n$–tree $T$: fix a
root vertex $r$ of the tree. The standard orientation with
respect to this root is the orientation in which each edge is
oriented away from the root. An edge in $T$ is said to be a reversed edge if its orientation does not coincide with the
standard orientation. If, in our example in Figure 1, we choose $2$ to be the root then the reversed edges are $3^{\prime}$, $4^{\prime}$ and $7^{\prime}$. Except for the root, each vertex is the head
of a unique edge with respect to the standard orientation. We
obtain a permutation $\pi$ of $\{1,2,\ldots,n\}$ if we order the
head vertices of the edges in accordance with their edge names (i.e.
for the edges $i^{\prime}=(a,b)$ and $j^{\prime}=(c,d)$ with $i<j$, the vertex $b$ comes before
vertex $d$ in the permutation) and prepend the root $r$ at the
beginning of the permutation. In our running example, we obtain the
permutation $\pi=2\,3\,1\,7\,8\,5\,4\,6$. Then the sign of $T$ is defined as
follows.
$$\operatorname{sgn}T=(-1)^{\text{$\#$ of reversed edges}}\operatorname{sgn}\pi$$
(2.1)
The sign of the tree in Figure 1 is $1$ as there are $3$ reversed edges and
$\operatorname{sgn}\pi=-1$.
We need to show that the sign does not depend on the choice of the root:
suppose $s$ is
a vertex adjacent to the root $r$. If we change from root $r$ to
root $s$, we have to interchange $r$ and $s$ in the permutation
$\pi$, which reverses the sign of $\pi$. This is because the
standard orientation with respect to the root $s$ coincides with
the standard orientation with respect to the root $r$ except for
the edge incident with $r$ and $s$, where the orientation is
reversed. For the same reason, shifting the root from $r$ to $s$ either
increases or decreases the number of reversed edges by $1$.
Consequently, the product in (2.1) remains unaffected.
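The root-independence just argued can be confirmed mechanically. The sketch below computes (2.1) for every choice of root on a made-up 5-tree (again not one of the trees from the figures): orient all edges away from the chosen root by a breadth-first search, count the stored edges that disagree with this standard orientation, and read off the permutation $\pi$ of head vertices.

```python
def tree_sign(edges, n, root):
    """sgn T of (2.1): edges[j] = (a, b) means edge j' is directed a -> b."""
    adj = {v: [] for v in range(1, n + 1)}
    for j, (a, b) in edges.items():
        adj[a].append((j, b))
        adj[b].append((j, a))
    # BFS from the root: for each edge, its head under the standard
    # orientation is the endpoint farther from the root
    head, seen, queue = {}, {root}, [root]
    while queue:
        v = queue.pop(0)
        for j, w in adj[v]:
            if w not in seen:
                seen.add(w)
                head[j] = w
                queue.append(w)
    reversed_edges = sum(1 for j, (a, b) in edges.items() if head[j] != b)
    # prepend the root, then the heads ordered by edge name
    pi = [root] + [head[j] for j in sorted(head)]
    sign, visited = 1, set()
    for s in range(n):                 # sign of pi via cycle decomposition
        if s in visited:
            continue
        length, t = 0, s
        while t not in visited:
            visited.add(t)
            t = pi[t] - 1
            length += 1
        if length % 2 == 0:
            sign = -sign
    return (-1) ** reversed_edges * sign

# hypothetical 5-tree: 1' = (2,1), 2' = (1,3), 3' = (4,3), 4' = (3,5)
edges = {1: (2, 1), 2: (1, 3), 3: (4, 3), 4: (3, 5)}
print([tree_sign(edges, 5, r) for r in range(1, 6)])  # same value for every root
```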
The sign of a tree sequence ${\mathcal{T}}=(T_{1},T_{2},\ldots,T_{n})$ is defined as the product of the signs of
the $i$-trees in the sequence, i.e.
$$\operatorname{sgn}{\mathcal{T}}=\operatorname{sgn}T_{1}\cdot\operatorname{sgn}T_{2}\cdots\operatorname{sgn}T_{n}.$$
The sign of the tree sequence in Figure 2 is $-1$ as $\operatorname{sgn}T_{1}=1,\operatorname{sgn}T_{2}=1,\operatorname{sgn}T_{3}=-1,\operatorname{sgn}T_{4}=1,$ and $\operatorname{sgn}T_{5}=1$. Concerning Gelfand-Tsetlin patterns we
obviously have $\operatorname{sgn}B_{i}=1$, which implies $\operatorname{sgn}{\mathcal{B}}=1$.
Here is the final definition of the sign of a Gelfand-Tsetlin tree sequence
${\bf L}=({\bf l_{1},l_{2},\ldots,l_{n}})\in{\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$:
$$\operatorname{sgn}{\bf L}=(-1)^{\text{$\#$ of inversions of ${\bf L}$}}\cdot\operatorname{sgn}{\mathcal{T}}$$
The signed enumeration of elements in ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ is denoted by $L_{n}({\mathcal{T}},{\bf k})$. The sign of the Gelfand-Tsetlin tree sequence given in Figure 5 is $1$ as there are $7$ inversions and the sign of the underlying unlabeled tree sequence is $-1$. We are now in a position to state an important result of this paper.
Theorem 1.
The signed enumeration of
Gelfand-Tsetlin tree sequences associated with a fixed underlying unlabeled tree sequence
${\mathcal{T}}=(T_{1},\ldots,T_{n})$ of order $n$ and a shifted labeling ${\bf k}=(k_{1},\ldots,k_{n})$ of the vertices of $T_{n}$ is given by
$$\prod_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}.$$
Before we turn our attention to searching for properties of $L_{n}({\mathcal{T}},{\bf k})$ that determine this quantity uniquely, we want to mention an obvious generalization of Gelfand-Tsetlin tree sequences, which we do not consider in this article, but might be interesting to look at: the notion of admissibility makes perfect sense if the tree $T$ is replaced by any other graph. Are there any nice assertions to be made on “Gelfand-Tsetlin graph sequences”?
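Theorem 1 can be sanity-checked by brute force in the Gelfand-Tsetlin case: for the basic tree sequence $(B_{1},\ldots,B_{n})$ every sign is $+1$, so the signed enumeration is a plain count. The sketch below assumes, as in the introduction, that the basic $i$-trees are oriented paths so that admissibility reduces to the interleaving conditions $k_{j}\leq l_{j}\leq k_{j+1}$ of ordinary Gelfand-Tsetlin patterns; the bottom row is an arbitrary sample.

```python
# Count Gelfand-Tsetlin patterns with weakly increasing bottom row k by brute
# force and compare with the product formula of Theorem 1.
from fractions import Fraction
from itertools import product

def count_gt(k):
    """Number of Gelfand-Tsetlin patterns with bottom row k (k weakly increasing)."""
    n = len(k)
    if n == 1:
        return 1
    # admissible row above k: k[j] <= l_j <= k[j+1] for each j
    rows = [range(k[j], k[j + 1] + 1) for j in range(n - 1)]
    return sum(count_gt(list(l)) for l in product(*rows))

def product_formula(k):
    n = len(k)
    result = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            result *= Fraction(k[j] - k[i] + j - i, j - i)
    return result

k = [1, 3, 3, 6]
print(count_gt(k), product_formula(k))   # both give 160
```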
3. Properties of $L_{n}({\mathcal{T}},{\bf k})$: independency and shift-antisymmetry
We say that a function $f(k_{1},\ldots,k_{n})$ on $\mathbb{Z}^{n}$ is shift-antisymmetric iff
$$f(k_{1},\ldots,k_{n})=-f(k_{1},\ldots,k_{i-1},k_{j}+j-i,k_{i+1},\ldots,k_{j-1},k_{i}+i-j,k_{j+1},\ldots,k_{n})$$
for all $i,j$ with $1\leq i<j\leq n$ and all $(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$. In this section we prove by induction with respect to $n$ that
the signed
enumeration $L_{n}({\mathcal{T}},{\bf k})$ has the following two properties.
•
Independency: $L_{n}({\mathcal{T}},{\bf k})$
does not depend on the tree sequence ${\mathcal{T}}$.
•
Shift-antisymmetry:
$L_{n}({\mathcal{T}},{\bf k})$ is
shift-antisymmetric in ${\bf k}=(k_{1},\ldots,k_{n})$. In fact, we prove the following stronger result: fix $i,j$ with $1\leq i<j\leq n$. We construct a tree sequence of order $n$, denoted by
${\mathcal{S}}_{n}^{i,j}$, and an associated sign reversing involution on the set of Gelfand-Tsetlin tree sequences of the tree sequence ${\mathcal{S}}^{i,j}_{n}$ such that the shifted vertex labeling ${\bf k}\in\mathbb{Z}^{n}$ of
the largest tree is transformed into
$$E^{j-i}_{k_{j}}E^{i-j}_{k_{i}}S_{k_{i},k_{j}}{\bf k}=(k_{1},\ldots,k_{i-1},k_{j}+j-i,k_{i+1},\ldots,k_{j-1},k_{i}+i-j,k_{j+1},\ldots,k_{n}),$$
where $S_{x,y}f(x,y)=f(y,x)$ and $E_{x}p(x)=p(x+1)$.
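Granting Theorem 1, the shift-antisymmetry is easy to verify numerically: in the shifted variables $x_{i}=k_{i}+i$ the operator $E^{j-i}_{k_{j}}E^{i-j}_{k_{i}}S_{k_{i},k_{j}}$ simply swaps $x_{i}$ and $x_{j}$, while the numerator of the product formula is a Vandermonde-type polynomial in the $x_{i}$. A quick check in Python, on a sample vector of our own choosing:

```python
# Shift-antisymmetry of the product formula: applying
# E^{j-i}_{k_j} E^{i-j}_{k_i} S_{k_i,k_j} flips the sign.
from fractions import Fraction

def L(k):
    n = len(k)
    result = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            result *= Fraction(k[j] - k[i] + j - i, j - i)
    return result

def shift_swap(k, i, j):
    """(k_1,...,k_n) -> (..., k_j + j - i, ..., k_i + i - j, ...), 1-based i < j."""
    k = list(k)
    k[i - 1], k[j - 1] = k[j - 1] + (j - i), k[i - 1] + (i - j)
    return k

k = [4, 1, 7, 2, 4]   # sample vector; shifted labels k_i + i are 5, 3, 10, 6, 9
for i in range(1, 5):
    for j in range(i + 1, 6):
        assert L(shift_swap(k, i, j)) == -L(k)
print("shift-antisymmetry holds at", k)
```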
The proofs are combinatorial in the following sense: suppose we are given two sets $A$ and $B$ and a signed enumeration $|.|_{-}$ on each of the sets such that
$|A|_{-}=|B|_{-}$. Then we find decompositions of $A$ and $B$ into two
sets $A_{1},A_{2}$ and $B_{1},B_{2}$, respectively, such that there is a
sign preserving bijection between
$A_{1}$ and $B_{1}$ and $|A_{2}|_{-}=|B_{2}|_{-}=0$, where the latter identities are proven by giving
sign reversing involutions on $A_{2}$ and $B_{2}$. However, if we have $|A|_{-}=-|B|_{-}$ then
the bijection between $A_{1}$ and $B_{1}$ is sign reversing.
Observe that there is nothing to prove for $n=1$.
We deal with the independency first.
Lemma 1.
The independency and shift-antisymmetry for order $n-1$ imply the independency for order $n$.
Proof. For a tree sequence ${\mathcal{T}}=(T_{1},T_{2},\ldots,T_{n})$ of order
$n$ we have
$$L_{n}({\mathcal{T}},{\bf k})=\operatorname{sgn}T_{n}\cdot(-1)^{\text{$\#$ of inversions of $(T_{n},{\bf k})$}}\sum_{\text{${\bf l}\in\mathbb{Z}^{n-1}$ is admissible for $(T_{n},{\bf k})$}}L_{n-1}({\mathcal{T}}_{<n},{\bf l}),$$
where ${\mathcal{T}}_{<n}=(T_{1},T_{2},\ldots,T_{n-1})$.
The independency for $n-1$ implies that $L_{n}({\mathcal{T}},{\bf k})$
is invariant under the replacement of ${\mathcal{T}}_{<n}$ by any other
tree sequence of order $n-1$. We have to show that it is also
invariant under the replacement of $T_{n}$ by any other $n$-tree.
The strategy is as follows: we first show that
$L_{n}({\mathcal{T}},{\bf k})$ is invariant under certain tree
operations on $T_{n}$ and then verify that every tree can be obtained from every other by means of these operations. To prove this invariance, we often
replace ${\mathcal{T}}_{<n}$ by a particularly convenient tree sequence.
We define the first tree operation: let $T_{n}$ be an $n$-tree and $T^{\prime}_{n}$ be an $n$-tree which is obtained from $T_{n}$ by reversing the orientation of a single edge. Then
$\operatorname{sgn}T_{n}=-\operatorname{sgn}T^{\prime}_{n}$, the number of inversions of $(T_{n},{\bf k})$ differs from the number of inversions of $(T^{\prime}_{n},{\bf k})$ by $1$ and
${\bf l}\in\mathbb{Z}^{n-1}$ is admissible for $(T_{n},{\bf k})$ if and only if ${\bf l}$ is admissible for $(T^{\prime}_{n},{\bf k})$. This implies that
$L_{n}({\mathcal{T}},{\bf k})$ is invariant under the replacement of
$T_{n}$ by $T^{\prime}_{n}$.
For the second operation we assume $n\geq 3$. It is illustrated in Figure 6 and defined as follows: suppose that $i^{\prime}$ and $j^{\prime}$ are two edges in the $n$-tree $T_{n}$ that have a vertex $q$ in common. Let $T^{\prime}_{n}$ be the tree we obtain from $T_{n}$ by replacing vertex $q$ in $i^{\prime}$ by the vertex of $j^{\prime}$ which is different from $q$. Then we say that $T^{\prime}_{n}$ is obtained from $T_{n}$ by sliding edge $i^{\prime}$ along edge $j^{\prime}$. In the following argument, we let $p$ be the vertex of $i^{\prime}$ in $T_{n}$ that is different from $q$ and $r$ be the vertex of $j^{\prime}$ that is different from $q$.
We show $\operatorname{sgn}T_{n}=\operatorname{sgn}T^{\prime}_{n}$: let $q$ be the root. The head of the old edge $i^{\prime}$ (i.e. in $T_{n}$) as well as of the new edge $i^{\prime}$ (i.e. in $T^{\prime}_{n}$) is $p$ with respect to the standard orientation. Moreover, the edge $i^{\prime}$ is reversed in $T_{n}$ if and only if it is reversed in $T^{\prime}_{n}$.
There is no change for the remaining edges, since the standard orientation
does not change there. Hence, neither the permutation $\pi$ nor the set of
reversed edges is changed.
In order to show that $L_{n}({\mathcal{T}},{\bf k})$ is invariant under the replacement of $T_{n}$ by $T^{\prime}_{n}$, we have to distinguish between the six possibilities for the relative positions of
$k_{p}+p,k_{q}+q,k_{r}+r$. As we have a symmetry between vertex $q$ and vertex $r$ we may assume without loss of generality that $k_{q}+q\leq k_{r}+r$. We let
${\mathcal{T}}^{\prime}$ denote the tree sequence that we obtain from $\mathcal{T}$
by replacing $T_{n}$ by $T^{\prime}_{n}$.
Case 1. $k_{p}+p\leq k_{q}+q\leq k_{r}+r$: we decompose
${\mathcal{L}}_{n}({\mathcal{T}}^{\prime},{\bf k})$ into two sets as follows. Let
${\bf l}\in\mathbb{Z}^{n-1}$ be an admissible shifted edge labeling
of $T^{\prime}_{n}$. The first set contains the Gelfand-Tsetlin tree sequences where
the label of edge $i^{\prime}$ fulfills
$k_{p}+p\leq l_{i}+i<k_{q}+q$, whereas for the second set we have
$k_{q}+q\leq l_{i}+i\leq k_{r}+r$. The signed
enumeration of the first set is obviously equal to
$L_{n}({\mathcal{T}},{\bf k})$, since the edge $i^{\prime}$ is an inversion
of
$T_{n}$ if and only if it is an inversion of $T^{\prime}_{n}$. We have to show that the signed enumeration
of the second set reduces to zero: we replace ${\mathcal{T}}_{<n}$ by
${\mathcal{S}}^{i,j}_{n-1}$. As $k_{q}+q\leq l_{i}+i<k_{r}+r$ and
$k_{q}+q\leq l_{j}+j<k_{r}+r$, the sign reversing involution on
the set of all Gelfand-Tsetlin tree sequence associated with
${\mathcal{S}}^{i,j}_{n-1}$ induces a sign reversing involution
on the second subset of ${\mathcal{L}}_{n}({\mathcal{T}}^{\prime},{\bf k})$.
Case 2. $k_{q}+q\leq k_{p}+p\leq k_{r}+r$: if
${\bf l}\in\mathbb{Z}^{n-1}$ is an admissible shifted edge labeling
of $T_{n}$ for an element of ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ then we
have $k_{q}+q\leq l_{i}+i<k_{p}+p$; in ${\mathcal{L}}_{n}({\mathcal{T}}^{\prime},{\bf k})$ we have $k_{p}+p\leq l_{i}+i<k_{r}+r$. The edge $i^{\prime}$ is an inversion for the pair $(T_{n},{\bf k})$ if and only if it is not an inversion for the pair
$(T^{\prime}_{n},{\bf k})$. We decompose both sets into two sets according to the edge label of $j^{\prime}$: in the
first set we have
$k_{q}+q\leq l_{j}+j<k_{p}+p$ and in the second set we have $k_{p}+p\leq l_{j}+j<k_{r}+r$. If we replace ${\mathcal{T}}_{<n}$ by ${\mathcal{S}}_{n-1}^{i,j}$, we see that in case of ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ the signed enumeration of the first set is zero, while for ${\mathcal{L}}_{n}({\mathcal{T}}^{\prime},{\bf k})$ the signed enumeration of the second set is zero. For the two other sets, the replacement of $(l_{i},l_{j})\to(l_{j}+j-i,l_{i}+i-j)$ of the shifted edge labels of the largest tree and performing the sign reversing involution on ${\mathcal{S}}_{n-1}^{i,j}$ is a sign preserving involution.
Case 3. $k_{q}+q\leq k_{r}+r\leq k_{p}+p$: for the edge label of $i^{\prime}$ in
$T_{n}$ we have $k_{q}+q\leq l_{i}+i<k_{p}+p$. We decompose ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ into two sets, where we have $k_{q}+q\leq l_{i}+i<k_{r}+r$ and
$k_{r}+r\leq l_{i}+i<k_{p}+p$, respectively. As $k_{q}+q\leq l_{j}+j<k_{r}+r$, the signed enumeration of the first set is zero, while the signed enumeration of the
second set coincides with the signed enumeration of the elements in
${\mathcal{L}}_{n}({\mathcal{T}}^{\prime},{\bf k})$.
In order to conclude the proof of Lemma 1, it suffices to show that every $n$-tree can be transformed
into every other by means of the two operations “sliding an edge along another edge” and “reversing the orientation of an edge”. As both operations are in fact involutions, it suffices to show that every $n$-tree can be transformed into the basic $n$-tree $B_{n}$. First of all, it is obvious that sliding and reversing can be used to transform a given $n$-tree into a directed path. Hence, it suffices to show that it is possible to interchange
vertices as well as edges. In both cases, it suffices to consider
adjacent vertices, respectively
edges. Concerning edges, suppose $x^{\prime}$ and $y^{\prime}$ are adjacent edges. By possibly reversing the
orientation of one edge, we may assume without loss of generality that $x^{\prime}=(a,b)$ and $y^{\prime}=(b,c)$. Then
the following sequence of operations interchanges the edges:
$$x^{\prime}=(a,b),y^{\prime}=(b,c)\rightarrow x^{\prime}=(a,c),y^{\prime}=(b,c)\rightarrow x^{\prime}=(a,c),y^{\prime}=(b,a)\rightarrow x^{\prime}=(b,c),y^{\prime}=(b,a)\rightarrow x^{\prime}=(b,c),y^{\prime}=(a,b)$$
(Note that all operations except for the last are slides, which implies that
interchanging edges reverses the sign of the $n$-tree.)
Concerning swapping vertices, assume that we want to interchange vertex $a$ and $b$ and that $x^{\prime}=(a,b)$ is an
edge. We reverse the orientation of $x^{\prime}$ and slide all edges incident with $a$ but different from $x^{\prime}$ along $x^{\prime}$ to $b$ as well as all edges incident with $b$ but different from $x^{\prime}$ along $x^{\prime}$ to $a$. (Again we see that swapping vertices reverses the sign.)
∎
Now we turn to the shift-antisymmetry.
Lemma 2.
The independency for order $n$ implies the shift-antisymmetry for
order $n$.
Proof. Fix $i,j$ with $1\leq i<j\leq n$. We define a tree sequence
${\mathcal{S}}^{i,j}_{n}=(T_{1},\ldots,T_{n})$ of order $n$: let $S_{m}$ be the directed tree with $m$ vertices sketched in Figure 7 and, for $3\leq m\leq n$,
let this be the underlying tree for $T_{m}$. (Note that there is no choice for the underlying tree if $m=1,2$.) There are no restrictions on the names of the vertices and edges except that the two sinks in $T_{n}$ are $i$ and $j$, the two sinks in $T_{n-1}$ are the unprimed versions of the
edges incident with $i$ and $j$ in $T_{n}$, the two sinks in $T_{n-2}$ are the unprimed versions of the edges incident with the two sinks in $T_{n-1}$, etc. Let ${\bf k}=(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$ and ${\bf k^{\prime}}=E^{j-i}_{k_{j}}E^{i-j}_{k_{i}}S_{k_{i},k_{j}}{\bf k}$. Then the following is a sign reversing involution between the Gelfand-Tsetlin tree sequences associated with ${\mathcal{S}}^{i,j}_{n}$ and fixed shifted vertex labeling $\bf k$ of $T_{n}$ and those where the shifted vertex labeling of $T_{n}$ is given by $\bf k^{\prime}$: for $m\geq 3$, we interchange in $T_{m}$ the labels of the two sink vertices as well as the labels of the two edges incident with the sinks; in $T_{2}$ we interchange the two vertex labels. This either produces or resolves an inversion in $T_{2}$ and concludes the proof of Lemma 2.
Alternatively, we can also argue as follows: let $T^{\prime}_{n}$ be the tree which we obtain from $T_{n}$ by interchanging vertex $i$ and vertex $j$ (the underlying tree remains unaffected) and ${\mathcal{T}}^{\prime}=(T_{1},\ldots,T_{n-1},T^{\prime}_{n})$. As $\operatorname{sgn}T_{n}=-\operatorname{sgn}T^{\prime}_{n}$, we obviously have
$$L_{n}({\mathcal{T}},{\bf k})=-E^{j-i}_{k_{j}}E^{i-j}_{k_{i}}S_{k_{i},k_{j}}L_{n}({\mathcal{T}}^{\prime},{\bf k}).$$
The assertion follows from Lemma 1 since $L_{n}({\mathcal{T}}^{\prime},{\bf k})=L_{n}({\mathcal{T}},{\bf k})$. ∎
In Appendix B, a direct combinatorial proof of the shift-antisymmetry of the enumeration formula for Gelfand-Tsetlin patterns is sketched, which does not make use of the notion of Gelfand-Tsetlin tree sequences.
4. Taking differences –
${L}_{n}(\mathcal{T},{\bf k})$ is a polynomial
The quantity ${L}_{n}(\mathcal{T},{\bf k})$ is not characterized by the properties we have derived so far. Next, we show that ${L}_{n}(\mathcal{T},{\bf k})$ is a polynomial of
degree no greater than $n-1$ in every $k_{i}$, which is the last ingredient to finally see that
it is equal to (1.1).
In order to show that a function $p(x)$ on $\mathbb{Z}$ is a polynomial in $x$ of degree no greater than $n-1$, it suffices to prove that $\Delta^{n}_{x}p(x)=0$, where $\Delta_{x}:=E_{x}-\operatorname{id}$ is the difference operator.
Thus it suffices to show the following.
Lemma 3.
For $i\in\{1,2,\ldots,n\}$ we have $\Delta^{n}_{k_{i}}{L}_{n}({\mathcal{T}},{\bf k})=0$ .
Proof.
We define a convenient tree sequence ${\mathcal{R}_{n,i}}=(R_{1},\ldots,R_{n})$ (see Figure 8) and find a combinatorial interpretation for $\Delta^{j}_{k_{i}}{L}_{n}({\mathcal{R}_{n,i}},{\bf k})$ if $j\in\{0,1,\ldots,n-1\}$: in $R_{n}$, we require $i=:i_{n}$ to be a leaf, in $R_{n-1}$ we require the unprimed version $i_{n-1}$ of the edge incident with $i_{n}$ in $R_{n}$ to be a leaf, in $R_{n-2}$ we require the unprimed version $i_{n-2}$ of the edge incident with $i_{n-1}$ in
$R_{n-1}$ to be a leaf etc. As for the orientations of the edges $i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{\prime}_{n-1}$, we choose the vertices $i_{2},i_{3},\ldots,i_{n}$ to be sinks. By $l_{i_{1}}+i_{1},l_{i_{2}}+i_{2},\ldots,l_{i_{n-1}}+i_{n-1}$, we denote the respective edge labels (which are of course also vertex labels in the next level).
We define $\Delta^{j}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$: it is
the set of labeled tree sequences on the unlabeled tree sequence ${\mathcal{R}}_{n,i}$ such that the conditions
on the edge labels are as for Gelfand-Tsetlin tree sequence in
${\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$, except
for the edges $i^{\prime}_{n-j},i^{\prime}_{n-j+1},\ldots,i^{\prime}_{n-1}$ in $R_{n-j+1},R_{n-j+2},\ldots,R_{n}$, respectively, where we require
$l_{i_{n-j}}+i_{n-j}=l_{i_{n-j+1}}+i_{n-j+1}=\ldots=l_{i_{n-1}}+i_{n-1}=k_{i}+i$.
As for the sign, we compute it as usual, except that we ignore the contributions of the edges $i^{\prime}_{n-j}\in E(R_{n-j+1}),i^{\prime}_{n-j+1}\in E(R_{n-j+2}),\ldots,i^{\prime}_{n-1}\in E(R_{n})$.
Then, by induction with respect to $j$, the signed enumeration of these labeled tree sequences on ${\mathcal{R}}_{n,i}$ is equal to $\Delta^{j}_{k_{i}}{L}_{n}({\mathcal{R}_{n,i}},{\bf k})$: for $j=0$ this is obvious. It suffices to show that
$$\Delta_{k_{i}}|\Delta_{k_{i}}^{j}{\mathcal{L}}_{n}({\mathcal{R}}_{n,i},{\bf k})|_{-}=|\Delta_{k_{i}}^{j+1}{\mathcal{L}}_{n}({\mathcal{R}}_{n,i},{\bf k})|_{-}.$$
Consider an element from $E_{k_{i}}\Delta^{j}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$ such that the vertex label of the sink $i_{n-j}$ of the edge $i^{\prime}_{n-j-1}$ in $R_{n-j}$ (which is $l_{i_{n-j}}+i_{n-j}=k_{i}+i+1$) is greater than the vertex label of the other endpoint of the edge. Then, by decreasing the labels $l_{i_{n-j}}+i_{n-j},l_{i_{n-j+1}}+i_{n-j+1},\ldots,l_{i_{n-1}}+i_{n-1},k_{i}+i+1$ (which are all equal) by $1$, we obtain a corresponding element in $\Delta^{j}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$, except for the case when
$l_{i_{n-j-1}}+i_{n-j-1}=k_{i}+i$. In such a tree sequence, we also decrease the labels $l_{i_{n-j}}+i_{n-j},l_{i_{n-j+1}}+i_{n-j+1},\ldots,l_{i_{n-1}}+i_{n-1},k_{i}+i+1$ by $1$ to obtain an element of $\Delta^{j+1}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$. This way, we obtain exactly the elements of $\Delta^{j+1}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$ such that the edge $i^{\prime}_{n-j-1}$ is not an inversion in $R_{n-j}$.
On the other hand, if the edge $i^{\prime}_{n-j-1}$ is an inversion for an element of
$\Delta^{j}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$, then, by increasing the labels $l_{i_{n-j}}+i_{n-j},l_{i_{n-j+1}}+i_{n-j+1},\ldots,l_{i_{n-1}}+i_{n-1},k_{i}+i$ by $1$, we obtain a corresponding element in
$E_{k_{i}}\Delta^{j}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$, except for the case when $l_{i_{n-j-1}}+i_{n-j-1}=k_{i}+i$. This way, we obtain exactly the elements of $\Delta^{j+1}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$ such that the edge $i^{\prime}_{n-j-1}$ is an inversion in $R_{n-j}$. The sign that comes from the inversion $i^{\prime}_{n-j-1}$ in $R_{n-j}$ accounts for the fact that we “subtract” the greater set from the smaller set in this case.
Now observe that in fact $\Delta^{n-1}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$ does not depend on $k_{i}$ and, consequently, $\Delta^{n}_{k_{i}}{L}_{n}({\mathcal{R}_{n,i}},{\bf k})$ must be zero. ∎
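Granting Theorem 1, Lemma 3 can also be confirmed numerically: the product formula has degree $n-1$ in each $k_{i}$, so $n$ applications of $\Delta_{k_{i}}$ annihilate it while $n-1$ applications generally do not. A small sketch (the sample point is arbitrary):

```python
# The n-th difference of the product formula in a single k_i vanishes.
from fractions import Fraction

def L(k):
    n = len(k)
    result = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            result *= Fraction(k[j] - k[i] + j - i, j - i)
    return result

def delta(f, i):
    """Difference operator Delta_{k_i} (1-based i) acting on functions of k."""
    def g(k):
        shifted = list(k)
        shifted[i - 1] += 1
        return f(shifted) - f(k)
    return g

k = [3, 1, 4, 1]
f = L
for step in range(1, 5):
    f = delta(f, 2)            # apply Delta_{k_2} repeatedly
    print(step, f(k))
# the third difference is a nonzero constant in k_2, the fourth is 0
```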
We are finally in a position to prove Theorem 1.
Proof of Theorem 1. By the shift-antisymmetry (Lemma 2), we conclude that the polynomial (Lemma 3) ${L}_{n}({\mathcal{T}},{\bf k})$ vanishes if $k_{i}+i=k_{j}+j$ for distinct $i,j\in\{1,2,\ldots,n\}$. This implies that
the expression in (1.1) has to be a factor of ${L}_{n}({\mathcal{T}},{\bf k})$. Again by Lemma 3, we know that it is a polynomial of degree no greater than $n-1$ and since (1.1) is of degree $n-1$ in every $k_{i}$, this implies that
$${L}_{n}({\mathcal{T}},{\bf k})=C\cdot\prod\limits_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i},$$
where $C\in\mathbb{Q}$.
As there is only one Gelfand-Tsetlin pattern with bottom row $(1,1,\ldots,1)\in\mathbb{Z}^{n}$, we can conclude that $C=1$. ∎
The combinatorial interpretation of $\Delta^{j}_{k_{i}}{\mathcal{L}}_{n}({\mathcal{R}_{n,i}},{\bf k})$ was surely the main ingredient in the proof of Lemma 3. The remainder of this section is devoted to using essentially the same idea to give a combinatorial proof of the identity
$$e_{\rho}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}}){L}_{n}({\mathcal{T}},{\bf k})=0,$$
(4.1)
which holds for ${\rho}\geq 1$ and where
$$e_{\rho}(X_{1},\ldots,X_{n})=\sum_{1\leq i_{1}<i_{2}<\ldots<i_{\rho}\leq n}X_{i_{1}}X_{i_{2}}\cdots X_{i_{\rho}}$$
is the $\rho$-th elementary symmetric function. (An algebraic proof, which already uses the fact that
$L_{n}({\mathcal{T}},{\bf k})$ is equal to (1.1) as well as the presentation of
(1.1) in terms of a determinant (see (A.1)), can be found in [4, Lemma 1].) This identity is of interest as it is the crucial fact in the proof of (1.2) given
in [4].
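Before entering the combinatorial proof, identity (4.1) can likewise be spot-checked against the product formula. In the sketch below, a composition of difference operators is expanded by inclusion-exclusion, $\prod_{i\in I}\Delta_{k_{i}}f({\bf k})=\sum_{S\subseteq I}(-1)^{|I|-|S|}f({\bf k}+{\bf e}_{S})$; the sample point is chosen arbitrarily.

```python
# Spot check of (4.1): e_rho(Delta_{k_1},...,Delta_{k_n}) annihilates the
# product formula for every rho >= 1.
from fractions import Fraction
from itertools import combinations

def L(k):
    n = len(k)
    result = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            result *= Fraction(k[j] - k[i] + j - i, j - i)
    return result

def apply_deltas(f, indices):
    """Compose Delta_{k_i} over the given 1-based indices (inclusion-exclusion)."""
    def g(k):
        total = Fraction(0)
        m = len(indices)
        for r in range(m + 1):
            for subset in combinations(indices, r):
                shifted = list(k)
                for i in subset:
                    shifted[i - 1] += 1
                total += (-1) ** (m - r) * f(shifted)
        return total
    return g

k = [2, 0, 5, 1]
n = len(k)
for rho in range(1, n + 1):
    value = sum(apply_deltas(L, S)(k) for S in combinations(range(1, n + 1), rho))
    print(rho, value)   # 0 for every rho
```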
Even though the ideas are straightforward, this combinatorial proof of (4.1) is a bit elaborate. (However, nothing else is to be expected when a statement is related to alternating sign matrix counting.) In fact, the benefit of
this exercise is not primarily the proof of (4.1) but an improvement of the understanding of how to
interpret the application of difference operators to enumerative quantities such as
${L}_{n}({\mathcal{T}},{\bf k})$ combinatorially. To give a hint as to why such an understanding could be of interest, observe that the proof of (4.1) relies on a combinatorial interpretation of
$$\Delta_{k_{i_{1}}}\Delta_{k_{i_{2}}}\ldots\Delta_{k_{i_{\rho}}}{L}_{n}({\mathcal{T}},{\bf k})$$
(4.2)
for subsets $\{i_{1},\ldots,i_{\rho}\}\subseteq[n]$. As the number of monotone triangles with bottom row
$(k_{1},\ldots,k_{n})$ is given by
$$\alpha(n;k_{1},\ldots,k_{n})=\left(\prod_{1\leq p<q\leq n}(\operatorname{id}+\Delta_{k_{p}}\delta_{k_{q}})\right){L}_{n}({\mathcal{T}},{\bf k}),$$
(4.3)
where $\delta_{x}=\operatorname{id}-E^{-1}_{x}$ is a second type of difference operator (see
Section 5), ideas along these lines might also lead to a combinatorial proof of this formula.
We need a more general notion of
admissibility. The idea is simple and very roughly as follows: we require each vertex of a fixed vertex set $R$ of the tree $T$ to have an associated edge incident with it such that the edge label takes on the extreme label given by the vertex label.
Definition 3.
Given an $n$-tree $T$, an $n$-tuple ${\bf k}=(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$ and a subset $R\subseteq[n]:=\{1,2,\ldots,n\}$ of vertices of $T$, we define a vector ${\bf l}=(l_{1},\ldots,l_{n-1})\in\mathbb{Z}^{n-1}$ to be
weakly $R$-admissible for the pair $(T,{\bf k})$ as follows.
•
For each vertex $r\in R$ of $T$, there exists a unique edge $i(r)^{\prime}$ of $T$ incident with $r$ such that $k_{r}+r=l_{i(r)}+i(r)$.
•
For the edges $j^{\prime}=(p,q)$ that do not appear in the image $i(R)^{\prime}$ we have
$\min(k_{p}+p,k_{q}+q)\leq l_{j}+j<\max(k_{p}+p,k_{q}+q)$. (Note that
for those edges we do not allow $l_{j}+j=k_{p}+p$ or $l_{j}+j=k_{q}+q$ if $p\in R$ or
$q\in R$, respectively.)
The vector $\bf l$ is said to be $R$-admissible if the function $i:R\to[n-1]$ is injective. If the function is not injective then we choose for each pair of distinct vertices $r,s\in R$ that share an edge $i(r)^{\prime}=i(s)^{\prime}$ one endpoint to be the dominating endpoint.
An example is given in Figure 9.
For the extreme cases concerning $R$, we have the following: the weak $\emptyset$-admissibility coincides
with the ordinary admissibility and there exists no
$[n]$-admissible vector as there is no injective function $i:[n]\to[n-1]$. If
$n=1$ then there exists an $R$-admissible vector if and only if $R=\emptyset$, namely the empty vector.
We introduce the sign which we associate with $(T,{\bf k})$, $i:R\to[n-1]$ and a choice of dominating vertices (if necessary). The following manner of speaking will turn out to be useful: if we refer to the minimum of an edge then we mean the minimum of the two labels of the endpoints of the edge or, by abuse of language, the respective vertex where this minimum is attained; similarly for
the maximum. If, for an edge $j^{\prime}$, the labels on the two
endpoints coincide then the edge must be in the image $i(R)^{\prime}$. If $i^{-1}(j)$ contains a unique vertex then
we define this to be the “maximum” of the edge and if $i^{-1}(j)$ contains both endpoints then the dominating vertex is defined as the “maximum”; in both cases the other endpoint is defined as the minimum. As for the sign, we let each edge that is an inversion contribute a $-1$ (which is the case when it is directed from its maximum to its minimum) as well as each $r\in R$ that is the minimum of the edge $i(r)^{\prime}$.
We define $(n,m,R)$-Gelfand-Tsetlin tree sequences as follows.
Definition 4.
Let $m\leq n$ be positive integers,
${\mathcal{T}}=(T_{1},\ldots,T_{n})$ be a tree sequence, $R\subseteq[m]$ be
a set of vertices of $T_{m}$ and ${\bf k}\in\mathbb{Z}^{n}$ be a shifted labeling of the tree $T_{n}$. An $(n,m,R)$-Gelfand-Tsetlin tree sequence associated with ${\mathcal{T}}$ and
${\bf k}$ is a sequence
${\bf L}=({\bf l}_{1},{\bf l}_{2},\ldots,{\bf l}_{n})$ with ${\bf l}_{i}\in\mathbb{Z}^{i}$ and
${\bf l}_{n}={\bf k}$ which has the following properties.
•
The shifted labeling ${\bf l}_{i-1}$ is admissible for the pair $(T_{i},{\bf l}_{i})$ if $i\in\{2,3,\ldots,n\}\setminus\{m\}$.
•
The shifted labeling ${\bf l}_{m-1}$ is weakly $R$-admissible for the pair
$(T_{m},{\bf l}_{m})$.
If the function $i:R\to[m-1]$ that manifests the weak $R$-admissibility is not injective, then the $(n,m,R)$-Gelfand-Tsetlin tree sequence comes
along with a set of dominating vertices as described in Definition 3; all choices are possible.
We let ${\mathcal{L}}_{n,m,R}({\mathcal{T}},{\bf k})$ denote the set of these sequences. For an integer $\rho\leq m$, we denote by ${\mathcal{L}}_{n,m,\rho}({\mathcal{T}},{\bf k})$ the union over all
$\rho$-subsets $R$ of $[m]$.
Concerning
the sign, we define
$$\operatorname{sgn}{\bf L}=(-1)^{\text{$\#$ of inversions of $\bf L$}}\cdot(-1)^{\text{$\#$ of vertices $r\in R$ s.t. $r$ is the minimum of $i(r)^{\prime}$}}\cdot\operatorname{sgn}{\mathcal{T}}.$$
We let $L_{n,m,R}({\mathcal{T}},{\bf k})$, respectively $L_{n,m,\rho}({\mathcal{T}},{\bf k})$, denote the signed enumeration
of these objects.
The following is obvious but crucial: the quantity $L_{n,m,R}({\mathcal{T}},{\bf k})$ does not change if we pass in Definition 4 from
weak $R$-admissibility to $R$-admissibility as changing the dominating vertex from one endpoint of a shared edge to the other is a sign-reversing involution.
We are now in a position to give the combinatorial interpretation for the expression in (4.2). In order to state the result, we introduce a convenient notation: if $R=\{i_{1},\ldots,i_{\rho}\}\subseteq[n]$ then
$$\Delta_{{\bf k}_{R}}f({\bf k}):=\Delta_{k_{i_{1}}}\cdots\Delta_{k_{i_{\rho}}}f(k_{1},\ldots,k_{n}).$$
(The analogous convention for $E_{{\bf k}_{R}}f({\bf k})$ will be used below.)
Proposition 1.
Let $R\subseteq[n]$. Then
$\Delta_{{\bf k}_{R}}{L}_{n}({\mathcal{T}},{\bf k})=L_{n,n,R}({\mathcal{T}},{\bf k})$.
This immediately implies the following combinatorial interpretation for the left-hand side of (4.1).
Corollary 1.
Let $\rho\in\{0,1,\ldots,n\}$. Then
$e_{\rho}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}}){L}_{n}({\mathcal{T}},{\bf k})=L_{n,n,\rho}({\mathcal{T}},{\bf k}).$
The following lemma is used in several places of our proofs in the remainder of this section.
Lemma 4.
For an integer $t<m$, we fix a set $P$ of pairs of edges of $T_{t+1}$ and
let ${\mathcal{L}}_{n,m,R,P}({\mathcal{T}},{\bf k})$ denote the subset of labeled tree sequences in
${\mathcal{L}}_{n,m,R}({\mathcal{T}},{\bf k})$ such that for each pair in
$P$ the edge labels of the respective edges of $T_{t+1}$ are distinct.
Then the signed
enumeration of this subset is equal to the signed enumeration of the whole set.
Proof. We consider the complement
of ${\mathcal{L}}_{n,m,R,P}({\mathcal{T}},{\bf k})$ and suppose that for $(i,j)\in P$ the edge labeling
${\bf l}_{t}+(1,2,\ldots,t)$ of $T_{t+1}$ is equal in the coordinates
$i$ and $j$. If there is more than one such pair then we choose the pair that is minimal with respect to a fixed order on $P$. Then, we may replace the tree $T_{t}$ in ${\mathcal{T}}$ by a tree in which the vertices $i$ and $j$ are adjacent. The assertion follows as such a tree does not possess an admissible edge labeling. ∎
Proof of Proposition 1. We consider subsets of ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ indexed by two disjoint subsets $P,Q\subseteq[n]$ of vertices of $T_{n}$: let ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k},P,Q)$ denote the set of Gelfand-Tsetlin tree sequences in ${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$ such that
for the edge labeling ${\bf l}\in\mathbb{Z}^{n-1}$ of the largest tree $T_{n}$ in the tree sequence $\mathcal{T}$ the following is fulfilled:
•
For each $p\in P$, there exists an edge $i(p)^{\prime}$ of $T_{n}$ incident with $p$ such
that $k_{p}+p$ is the minimum of $i(p)^{\prime}$ and
$l_{i(p)}+i(p)=k_{p}+p$.
•
For each $q\in Q$, there exists an edge $i(q)^{\prime}$ in $T_{n}$ incident with $q$ such that $k_{q}+q$ is the maximum of $i(q)^{\prime}$ and $l_{i(q)}+i(q)=k_{q}+q-1$.
We denote the respective signed enumeration by ${L}_{n}({\mathcal{T}},{\bf k},P,Q)$. Suppose $r\notin P,Q$. Then
$$\Delta_{k_{r}}{L}_{n}({\mathcal{T}},{\bf k},P,Q)=E_{k_{r}}{L}_{n}({\mathcal{T}%
},{\bf k},P,Q\cup\{r\})-{L}_{n}({\mathcal{T}},{\bf k},P\cup\{r\},Q).$$
In order to see this, consider an element of $E_{k_{r}}{\mathcal{L}}_{n}({\mathcal{T}},{\bf k},P,Q)$ with the
following property: for each edge $i^{\prime}$ of $T_{n}$ that is incident with vertex $r$ of $T_{n}$ and such that the
vertex label of the other endpoint of $i^{\prime}$ is smaller than $k_{r}+r+1$ we have that the respective
edge label $l_{i}+i$ is smaller than $k_{r}+r$. In this case, we may change the vertex label of $r$ to $k_{r}+r$ to obtain an element of
${\mathcal{L}}_{n}({\mathcal{T}},{\bf k},P,Q)\setminus{\mathcal{L}}_{n}({%
\mathcal{T}},{\bf k},P\cup\{r\},Q)$. Thus, these elements cancel in the difference on the left-hand side and we are left with the elements on the right-hand side.
This implies by induction with respect to the size of $R\subseteq[n]$ that
$$\Delta_{{\bf k}_{R}}{L}_{n}({\mathcal{T}},{\bf k})=\sum_{Q\subseteq R}(-1)^{|R%
|+|Q|}E_{{\bf k}_{Q}}{L}_{n}({\mathcal{T}},{\bf k},R\setminus Q,Q).$$
(4.4)
The right-hand side is in fact equal to the signed enumeration of
${\mathcal{L}}_{n,n,R}({\mathcal{T}},{\bf k})$: in order to see this, we may assume by
Lemma 4 that the edge labels of $T_{n}$ are distinct, both
in ${\mathcal{L}}_{n,n,R}({\mathcal{T}},{\bf k})$ and in $E_{{\bf k}_{Q}}{\mathcal{L}}_{n}({\mathcal{T}},{\bf k},R\setminus Q,Q)$. This implies that for each tree sequence in $E_{{\bf k}_{Q}}{\mathcal{L}}_{n}({\mathcal{T}},{\bf k},R\setminus Q,Q)$ and each $r\in R$, there is a unique edge $i(r)^{\prime}$ of $T_{n}$ with $l_{i(r)}+i(r)=k_{r}+r$. Now, we may convert elements of $E_{{\bf k}_{Q}}{\mathcal{L}}_{n}({\mathcal{T}},{\bf k},R\setminus Q,Q)$ into elements of
${\mathcal{L}}_{n,n,R}({\mathcal{T}},{\bf k})$ by decreasing the
labels of the vertices in $Q$ by $1$.
We obtain elements, where for $r\in Q$,
the vertex label $k_{r}+r$ is the maximum of $i(r)^{\prime}$ and, for $r\in R\setminus Q$, the vertex label
$k_{r}+r$ is the minimum of $i(r)^{\prime}$ – attached
with a sign according to the number of cases where $k_{r}+r$ is the minimum of the edge $i(r)^{\prime}$. The facts that the edge labels are distinct and that there always exists an edge label equal to $k_{r}+r$ imply that it is irrelevant that the intervals for the possible labels of the edges incident with $r$ were slightly changed when passing from $E_{{\bf k}_{Q}}{\mathcal{L}}_{n}({\mathcal{T}},{\bf k},R\setminus Q,Q)$ to ${\mathcal{L}}_{n,n,R}({\mathcal{T}},{\bf k})$.
However, by decreasing the vertex label of a vertex $q\in Q$ of
an element in $E_{{\bf k}_{Q}}{\mathcal{L}}_{n}({\mathcal{T}},{\bf k},R\setminus Q,Q)$ by $1$ to $k_{q}+q$, this value may reach the vertex label $k_{p}+p$ of a vertex $p$ that is adjacent to $q$; in this case we have to guarantee that $k_{q}+q$ can still be identified as the maximum of the edge $j^{\prime}$ connecting $p$ and $q$. The assumption implies $i(q)=j$.
If $p\notin R$ then, when considering the labeled tree sequence as an element of ${\mathcal{L}}_{n,n,R}({\mathcal{T}},{\bf k})$,
the vertex $q$ is the maximum of $j^{\prime}$ by definition. If, on the other hand, $p\in R$, then we also have $i(p)=j$
and we let $q$ be the dominating vertex of the edge to remember that it used to be the maximum of the edge $j^{\prime}$. Thus it is clear how to reverse the procedure.
∎
In the definition of the $R$-admissibility, we have fixed a set $R$ of vertices of $T$. However, we may as well fix the image $i(R)=:R^{\prime}$ of the injective function $i:R\to[n-1]$, which corresponds to a set of edges of $T$.
Definition 5.
Let $T$ be an $n$-tree, ${\bf k}\in\mathbb{Z}^{n}$ and $R^{\prime}\subseteq[n-1]$. A vector ${\bf l}\in\mathbb{Z}^{n-1}$ together with a function $t:R^{\prime}\to[n]$ is said to be
$R^{\prime}$-edge-admissible for the pair $(T,{\bf k})$ if
${\bf l}$ is $t(R^{\prime})$-admissible for the pair $(T,{\bf k})$, where $t^{-1}:t(R^{\prime})\to[n-1]$ is the function that proves the $t(R^{\prime})$-admissibility.
In analogy to Definition 4, it is also clear how to define Gelfand-Tsetlin tree sequences associated with a triple $(n,m,R^{\prime})$, where $m\leq n$ are positive integers and $R^{\prime}\subseteq[m-1]$ corresponds to a subset of edges of $T_{m}$. We denote this set by ${\mathcal{L}}_{n,m}^{R^{\prime}}({\mathcal{T}},{\bf k})$ and by
${L}_{n,m}^{R^{\prime}}({\mathcal{T}},{\bf k})$ its signed enumeration. Note that ${\mathcal{L}}_{n,m,\rho}({\mathcal{T}},{\bf k})$ is also the union of the sets ${\mathcal{L}}_{n,m}^{R^{\prime}}({\mathcal{T}},{\bf k})$, where $R^{\prime}$ ranges over the
$\rho$-subsets of $[m-1]$.
In the proof of the next proposition, it will be helpful to replace the $R^{\prime}$-edge-admissibility in the definition of ${L}_{n,m}^{R^{\prime}}({\mathcal{T}},{\bf k})$
by a more general notion, which we call
weak $R^{\prime}$-edge-admissibility and define as follows.
Definition 6.
Let $T$ be an $n$-tree, ${\bf k}\in\mathbb{Z}^{n}$ and $R^{\prime}\subseteq[n-1]$. A vector ${\bf l}\in\mathbb{Z}^{n-1}$ is said to be
weakly $R^{\prime}$-edge-admissible for the pair $(T,{\bf k})$ if there exists a function $t:R^{\prime}\to[n]$ such that the following conditions are fulfilled.
•
For all $r\in R^{\prime}$, the edge $r^{\prime}$ of $T$ is incident with the vertex $t(r)$ of $T$ and
$l_{r}+r=k_{t(r)}+t(r)$.
•
For all $r\in[n-1]\setminus R^{\prime}$, we have $\min(k_{p}+p,k_{q}+q)\leq l_{r}+r<\max(k_{p}+p,k_{q}+q)$, where $r^{\prime}=(p,q)$ in T.
The sign we associate is defined as follows: each inversion contributes
a $-1$ as well as each edge $r^{\prime}$ of $R^{\prime}$ such that $t(r)$ is the minimum of the edge. (If the two vertex labels of an edge $r^{\prime}$ coincide then it must be an
element of $R^{\prime}$ and we define $t(r)$ as the “maximum” of the edge.)
To obtain the ordinary edge-admissibility we have to require in addition that for all $r\in R^{\prime}$ the following is fulfilled: suppose $s^{\prime}$ is an edge of $T$ incident with vertex $t(r)$ such that $l_{s}+s=k_{t(r)}+t(r)$ then we have $r=s$. However,
the violation of this condition would require two edges of $T$ to have the same label, which can be avoided for an element of ${\mathcal{L}}_{n,m}^{R^{\prime}}({\mathcal{T}},{\bf k})$ by the argument given in Lemma 4.
The following proposition will finally imply (4.1).
Proposition 2.
Let $R\subseteq[m-1]$. Then
$L_{n,m}^{R}({\mathcal{T}},{\bf k})=L_{n,m-1,R}({\mathcal{T}},{\bf k})$.
An immediate consequence is the following.
Corollary 2.
Let $\rho$ be a non-negative integer. Then
$L_{n,m,\rho}({\mathcal{T}},{\bf k})=L_{n,m-1,\rho}({\mathcal{T}},{\bf k})$.
The corollary implies
$L_{n,m,\rho}({\mathcal{T}},{\bf k})=0$
whenever $\rho$ is non-zero: iterating the corollary reduces $m$ until $\rho\geq m$, and in that case ${\mathcal{L}}_{n,m,\rho}({\mathcal{T}},{\bf k})=\emptyset$, since there is no injective function from $[\rho]$ to $[m-1]$.
By Corollary 1, (4.1) finally follows.
Proof of Proposition 2. We restrict our considerations to the case $m=n$ as the general case is analogous. By
Lemma 4, we assume that the edge labels of $T_{n-1}$ are distinct, both in
${\mathcal{L}}_{n,n}^{R}({\mathcal{T}},{\bf k})$ and in
${\mathcal{L}}_{n,n-1,R}({\mathcal{T}},{\bf k})$.
We consider an element
of ${\mathcal{L}}_{n,n}^{R}({\mathcal{T}},{\bf k})$,
denote by ${\bf l}\in\mathbb{Z}^{n-1}$ the respective shifted edge labeling of $T_{n}$ and by
$t:R\to[n]$ the function that proves the weak $R$-edge-admissibility of the vector $\bf l$ for the pair
$(T_{n},{\bf k})$. Suppose $r\in R$ and that $p,q$ are the vertices of the edge $r^{\prime}$ in $T_{n}$; then
we have either $t(r)=p$ or $t(r)=q$. We denote the first subset of
${\mathcal{L}}_{n,n}^{R}({\mathcal{T}},{\bf k})$ by $M_{r,p}$ and the second subset
by $M_{r,q}$. The situation is sketched in Figure 10.
Assuming w.l.o.g. that $k_{p}+p\leq k_{q}+q$, we first observe that we can restrict our attention to the case that there is at least one edge incident with vertex $r$ in $T_{n-1}$ whose label lies in the interval $[k_{p}+p,k_{q}+q)$. This is because for the other elements, $l_{r}+r\to k_{q}+q$ and $t(r)\to q$ induces a sign-reversing bijection from $M_{r,p}$ to $M_{r,q}$. In the following, we address these edges as the relevant edges of $r$.
In order to construct an element of ${\mathcal{L}}_{n,n-1,R}({\mathcal{T}},{\bf k})$ we perform the following shifts to the labels $l_{r}+r$ for all $r\in R$: if the fixed element of ${\mathcal{L}}_{n,n}^{R}({\mathcal{T}},{\bf k})$ is an element of $M_{r,p}$, we shift $l_{r}+r$ to the minimum of incident edge labels in $T_{n-1}$ no smaller than $k_{p}+p$
and let $j(r)^{\prime}$
be the respective edge, while for elements of $M_{r,q}$ we shift $l_{r}+r$ to the maximum of incident edge labels in $T_{n-1}$ smaller than $k_{q}+q$ and let $j(r)^{\prime}$ be the respective edge. These edges $j(r)$ are unique as the edge labels are assumed to be distinct. The contribution of $-1$ to the sign of the elements in $M_{r,p}$ that comes from
the fact that the edge label of $r^{\prime}$ in $T_{n}$ is equal to the minimum of the edge translates in the new element into the contribution of $-1$ of the edge $j(r)^{\prime}$ in $T_{n-1}$ as
its edge label is also equal to the minimum of the edge. If this procedure causes two distinct vertices $r,s\in R$ to share an edge $j(r)^{\prime}=j(s)^{\prime}$ then we let the dominating vertex be the maximum of the respective edge in the
original element.
The precise description of the elements in ${\mathcal{L}}_{n,n-1,R}({\mathcal{T}},{\bf k})$ that appear as a result of
this procedure is the following. For each $r\in R$, one of the following two possibilities applies: suppose $p,q$ are the endpoints of $r^{\prime}$ in $T_{n}$ and w.l.o.g. $k_{p}+p\leq k_{q}+q$ then either
•
the vertex $r$ is the minimum of the edge $j(r)^{\prime}$ and the edge label of $j(r)^{\prime}$
is the minimum among all relevant edges of $r$, or
•
the vertex $r$ is the maximum of the edge $j(r)^{\prime}$ and
the edge label of $j(r)^{\prime}$
is the maximum among all relevant edges of $r$.
For such an element it is also clear how to invert the procedure to reobtain
an element of ${\mathcal{L}}_{n,n}^{R}({\mathcal{T}},{\bf k})$.
Finally, we define a sign-reversing involution on the set of elements of ${\mathcal{L}}_{n,n-1,R}({\mathcal{T}},{\bf k})$
that do not fulfill this requirement: suppose that $r\in R$ is minimal such that the requirement is not met and that $r$ is the minimum of the edge $j(r)^{\prime}$.
Let $i^{\prime}$ be the relevant edge of $r$, the edge label of which is maximal with the property that it is smaller than $l_{r}+r$. We shift $l_{r}+r$ to this edge
label and set $j(r)=i$. If necessary we choose the dominating vertices such that the set of inversions remains unaffected. Then, $r$ is the maximum of the edge $j(r)^{\prime}$. Likewise when $r$ is the maximum of the edge. The fact that we only work with relevant edges guarantees that we are able to perform the shift accordingly for the edge label of $r^{\prime}$ in $T_{n}$.
∎
To conclude this section, we demonstrate that (4.1) also implies that ${L}_{n}({\mathcal{T}},{\bf k})$ is a polynomial in $k_{1},\ldots,k_{n}$ of degree no greater than $n-1$ in every $k_{i}$.
Lemma 5.
Suppose that $A(k_{1},\ldots,k_{n})$ is a function with
$$e_{\rho}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}})A(k_{1},\ldots,k_{n})=0$$
for all $\rho>0$. Then $\Delta^{n}_{k_{i}}A(k_{1},\ldots,k_{n})=0$ for all $i\in\{1,2,\ldots,n\}$.
Proof. We define
$$A_{\rho,i}(k_{1},\ldots,k_{n})=e_{\rho}(\Delta_{k_{1}},\ldots,\widehat{\Delta_%
{k_{i}}},\ldots,\Delta_{k_{n}})A(k_{1},\ldots,k_{n}),$$
where $\widehat{\Delta_{k_{i}}}$ indicates that $\Delta_{k_{i}}$ does not appear in the argument.
We use the identity
$$e_{\rho}(X_{1},\ldots,X_{n})=e_{\rho}(X_{1},\ldots,\widehat{X_{i}},\ldots,X_{n%
})+X_{i}e_{\rho-1}(X_{1},\ldots,\widehat{X_{i}},\ldots,X_{n})$$
and the assumption to see that
$$A_{\rho,i}(k_{1},\ldots,k_{n})=-\Delta_{k_{i}}A_{\rho-1,i}(k_{1},\ldots,k_{n}).$$
This implies
$$A_{\rho,i}(k_{1},\ldots,k_{n})=(-1)^{\rho}\Delta^{\rho}_{k_{i}}A(k_{1},\ldots,%
k_{n})$$
by induction with respect to $\rho$. As $A_{n,i}(k_{1},\ldots,k_{n})=0$, the assertion follows. ∎
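The elementary symmetric function identity invoked in this proof is easy to sanity-check numerically for commuting scalar arguments (which is the relevant situation here, since the operators $\Delta_{k_{i}}$ commute). A minimal sketch, with helper names of our choosing:

```python
# Numerical check, for commuting scalar arguments, of the recursion
# e_rho(X_1,...,X_n) = e_rho(..., X_i omitted, ...) + X_i * e_{rho-1}(..., X_i omitted, ...)
# used in the proof of Lemma 5.
from itertools import combinations
from math import prod

def e(rho, xs):
    """Elementary symmetric polynomial e_rho evaluated at the tuple xs (e_0 = 1)."""
    return sum(prod(c) for c in combinations(xs, rho))

xs = (3, -1, 4, 1, 5)
for i in range(len(xs)):
    rest = xs[:i] + xs[i + 1:]
    for rho in range(1, len(xs) + 1):
        assert e(rho, xs) == e(rho, rest) + xs[i] * e(rho - 1, rest)
print("identity verified")
```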
5. Monotone triangles
I would like to see an analogous “theory” for monotone triangles (Gelfand-Tsetlin patterns with strictly increasing rows), which seems conceivable as there are several properties of the unrestricted patterns for which we have a corresponding (though in some cases more complicated) property of monotone triangles. For instance, it is known [3] that the number $\alpha(n;k_{1},\ldots,k_{n})$ of monotone
triangles with bottom row $k_{1},k_{2},\ldots,k_{n}$ is given by
$$\prod_{1\leq p<q\leq n}(E_{k_{p}}+E_{k_{q}}^{-1}-E_{k_{p}}E^{-1}_{k_{q}})\prod%
_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}=\prod_{1\leq p<q\leq n}(%
\operatorname{id}+\Delta_{k_{p}}\delta_{k_{q}})\prod_{1\leq i<j\leq n}\frac{k_%
{j}-k_{i}+j-i}{j-i},$$
(5.1)
where $\delta_{x}:=\operatorname{id}-E^{-1}_{x}$. To start with, we give four different (but related)
combinatorial extensions of $\alpha(n;k_{1},\ldots,k_{n})$ to all $(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$ in this section, and
then present certain other properties of $\alpha(n;k_{1},\ldots,k_{n})$, for which it would be nice to have
combinatorial proofs of the type as we have presented them in this article for Gelfand-Tsetlin patterns. This is because these properties imply, on the one hand, (5.1) and, on the other hand, the refined alternating sign matrix theorem. The latter will be explained at the end of this section.
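Parenthetically, the equality of the two operator products in (5.1) is a one-line computation, assuming the convention $\Delta_{x}:=E_{x}-\operatorname{id}$ (which matches the definition $\delta_{x}=\operatorname{id}-E_{x}^{-1}$):
$$\operatorname{id}+\Delta_{k_{p}}\delta_{k_{q}}=\operatorname{id}+(E_{k_{p}}-\operatorname{id})(\operatorname{id}-E_{k_{q}}^{-1})=E_{k_{p}}+E_{k_{q}}^{-1}-E_{k_{p}}E_{k_{q}}^{-1},$$
since the two terms $\pm\operatorname{id}$ cancel after expanding the product.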
5.1. Four combinatorial extensions of $\alpha(n;k_{1},\ldots,k_{n})$ to all $(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$
The quantity $\alpha(n;k_{1},\ldots,k_{n})$ obviously satisfies the following recursion for any sequence $(k_{1},k_{2},\ldots,k_{n})$ of strictly increasing integers.
$$\alpha(n;k_{1},\ldots,k_{n})=\sum_{(l_{1},\ldots,l_{n-1})\in\mathbb{Z}^{n-1}%
\atop k_{1}\leq l_{1}\leq k_{2}\leq l_{2}\leq k_{3}\leq\ldots\leq k_{n-1}\leq l%
_{n-1}\leq k_{n},l_{i}\not=l_{i+1}}\alpha(n-1;l_{1},\ldots,l_{n-1})$$
(5.2)
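Recursion (5.2) can be turned directly into a brute-force count for strictly increasing bottom rows. The following sketch (plain Python written for this note, not code from the literature) reproduces, for example, the well-known values $\alpha(n;1,2,\ldots,n)=1,2,7,42,\ldots$, i.e. the numbers of $n\times n$ alternating sign matrices:

```python
from itertools import product

def alpha(k):
    """Number of monotone triangles with strictly increasing bottom row k,
    computed via recursion (5.2)."""
    n = len(k)
    if n == 1:
        return 1
    total = 0
    ranges = [range(k[i], k[i + 1] + 1) for i in range(n - 1)]
    for l in product(*ranges):
        # For strictly increasing k, the condition l_i != l_{i+1} forces l to be
        # strictly increasing, so the recursive call is again of the same kind.
        if all(l[i] != l[i + 1] for i in range(n - 2)):
            total += alpha(l)
    return total

print([alpha(tuple(range(1, n + 1))) for n in range(1, 5)])  # -> [1, 2, 7, 42]
```

One can also check, e.g., that $\alpha(3;2,3,6)+\alpha(3;1,4,6)+\alpha(3;1,3,7)=3\,\alpha(3;1,3,6)$, a numerical instance of property (4) of Section 5.1.5 with $\rho=1$.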
To obtain an extension of the combinatorial interpretation of $\alpha(n;k_{1},\ldots,k_{n})$ to all
$(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$, it is convenient to write this summation in terms of “simple” summations
$\sum\limits_{i=a}^{b}f(i)$, i.e. summations over intervals. This is because we can then use the extended definition of the summation, i.e.
$\sum\limits_{i=a}^{a-1}f(i)=0$ and $\sum\limits_{i=a}^{b}f(i)=-\sum\limits_{i=b+1}^{a-1}f(i)$ if $b+1\leq a-1$. Note that if $p(i)$ is a polynomial in $i$ then there exists a polynomial $q(i)$ with $\Delta_{i}q(i)=p(i)$, which implies $\sum\limits_{i=a}^{b}p(i)=q(b+1)-q(a)$ if $a\leq b$ and, consequently, that this sum is a polynomial in $a$ and $b$. The extension of the simple summation we have just introduced was chosen such that the latter identity is true for all $a,b\in\mathbb{Z}$. After we have given at least one representation of the summation in (5.2) in terms of simple summations, this shows that $\alpha(n;k_{1},\ldots,k_{n})$ can be represented by a polynomial in $k_{1},k_{2},\ldots,k_{n}$ if $k_{1}<k_{2}<\ldots<k_{n}$. (This polynomial is in fact unique as a polynomial in $k_{1},k_{2},\ldots,k_{n}$ is uniquely determined by its values on the set of $n$-tuples $(k_{1},k_{2},\ldots,k_{n})\in\mathbb{Z}^{n}$ with $k_{1}<k_{2}<\ldots<k_{n}$.) The extended monotone triangles with prescribed bottom row $k_{1},k_{2},\ldots,k_{n}$ will be chosen such that these objects are enumerated by this polynomial for all $(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$. In particular, it will certainly not be the naive extension, which sets $\alpha(n;k_{1},\ldots,k_{n})=0$ if $k_{1},k_{2},\ldots,k_{n}$ is not strictly increasing.
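The extended simple summation just introduced can be sketched as follows (the helper name `ext_sum` is ours); the point is precisely that the identity $\sum_{i=a}^{b}p(i)=q(b+1)-q(a)$ survives for all $a,b\in\mathbb{Z}$:

```python
def ext_sum(f, a, b):
    """Extended simple summation: the usual sum for a <= b,
    0 for b == a - 1, and -sum_{i=b+1}^{a-1} f(i) for b + 1 <= a - 1."""
    if a <= b:
        return sum(f(i) for i in range(a, b + 1))
    if b == a - 1:
        return 0
    return -sum(f(i) for i in range(b + 1, a))

# p(i) = i has the discrete antiderivative q(i) = i*(i-1)/2, i.e. Delta_i q = p.
p = lambda i: i
q = lambda i: i * (i - 1) // 2
for a in range(-4, 5):
    for b in range(-4, 5):
        assert ext_sum(p, a, b) == q(b + 1) - q(a)
print("polynomial summation identity holds for all a, b")
```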
5.1.1. First extension
If we assume that $k_{1}<k_{2}<\ldots<k_{n}$, then one possibility to write the summation in (5.2) in terms of simple summations is the following: we choose a
subset $\{l_{i_{1}},l_{i_{2}},\ldots,l_{i_{p}}\}\subseteq\{l_{1},\ldots,l_{n-1}\}$ for which we have $l_{i_{j}}=k_{i_{j}}$. For all other $l_{q}$ we have $k_{q}<l_{q}\leq k_{q+1}$, except for the case that $q+1=i_{j}$ for some $j$, where we have $k_{q}<l_{q}<k_{q+1}$. More formally,
$$\sum_{p\geq 0}\sum_{1\leq i_{1}<i_{2}<\ldots<i_{p}\leq n-1}\sum_{l_{1}=k_{1}+1%
}^{k_{2}}\sum_{l_{2}=k_{2}+1}^{k_{3}}\ldots\sum_{l_{i_{1}-1}=k_{i_{1}-1}+1}^{k%
_{i_{1}}-1}\sum_{l_{i_{1}}=k_{i_{1}}}^{k_{i_{1}}}\ldots\sum_{l_{i_{p}-1}=k_{i_%
{p}-1}+1}^{k_{i_{p}}-1}\sum_{l_{i_{p}}=k_{i_{p}}}^{k_{i_{p}}}\ldots\sum_{l_{n-%
1}=k_{n-1}+1}^{k_{n}}$$
where in the exceptional case that $i_{j}=i_{j-1}+1$ the expression
$\sum\limits_{l_{i_{j}-1}=k_{i_{j}-1}+1}^{k_{i_{j}}-1}\sum\limits_{l_{i_{j}}=k_%
{i_{j}}}^{k_{i_{j}}}$ is replaced by
$\sum\limits_{l_{i_{j-1}}=k_{i_{j-1}}}^{k_{i_{j-1}}}\sum\limits_{l_{i_{j}}=k_{i%
_{j}}}^{k_{i_{j}}}$.
This leads to the following extension: a monotone triangle of order $n$ is a triangular array $(a_{i,j})_{1\leq j\leq i\leq n}$ of integers such that the following conditions are fulfilled.
•
There is a subset of special entries $a_{i,j}$ with $i<n$ for which we require $a_{i,j}=a_{i+1,j}$. We mark these entries with a star on the left.
•
If $a_{i,j}$ is not a special entry then we have to distinguish whether or not it is the left neighbour of a special entry.
–
If $a_{i,j+1}$ is not special (which includes also the case that $a_{i,j+1}$ does not exist) then
$a_{i+1,j}<a_{i,j}\leq a_{i+1,j+1}$ in case that $a_{i+1,j}<a_{i+1,j+1}$ and $a_{i+1,j+1}<a_{i,j}\leq a_{i+1,j}$ otherwise. (There exists no pattern with $a_{i+1,j}=a_{i+1,j+1}$.) In the latter case we have an inversion.
–
If $a_{i,j+1}$ is special then $a_{i+1,j}<a_{i,j}<a_{i+1,j+1}$ or
$a_{i+1,j+1}\leq a_{i,j}\leq a_{i+1,j}$. (There exists no pattern with $a_{i+1,j+1}=a_{i+1,j}+1$.) In the latter case we have an inversion.
The sign of a monotone triangle is $-1$ to the number of inversions. Then $\alpha(n;k_{1},\ldots,k_{n})$ is the signed enumeration of monotone triangles with $a_{n,i}=k_{i}$. Here is an example of such an array.
$$\begin{array}{ccccccccc}
&&&&3&&&&\\
&&&{}^{*}2&&6&&&\\
&&2&&4&&{}^{*}6&&\\
&3&&{}^{*}1&&6&&7&\\
3&&1&&7&&5&&8
\end{array}$$
5.1.2. Second extension
The summation can also be written in the following more symmetric manner: we choose a subset $I\subseteq[n-1]$ such that $l_{i}=k_{i}$ if $i\in I$ and a subset $J\subseteq[n-1]$ such that $l_{j}=k_{j+1}$ if $j\in J$. The sets $I,J$ have to be disjoint and, moreover, $i\in I$ implies $i-1\notin J$ (which is equivalent to $(I-1)\cap J=\emptyset$). On the other hand, if $h\in[n-1]\setminus(I\cup J)$ then $k_{h}<l_{h}<k_{h+1}$. Equivalently,
$$\sum_{p,q\geq 0}\sum_{I=\{i_{1},\ldots,i_{p}\},J=\{j_{1},\ldots,j_{q}\}%
\subseteq[n-1]\atop I\cap J=\emptyset,(I-1)\cap J=\emptyset}\sum_{l_{i_{1}}=k_%
{i_{1}}}^{k_{i_{1}}}\ldots\sum_{l_{i_{p}}=k_{i_{p}}}^{k_{i_{p}}}\sum_{l_{j_{1}%
}=k_{j_{1}+1}}^{k_{j_{1}+1}}\ldots\sum_{l_{j_{q}}=k_{j_{q}+1}}^{k_{j_{q}+1}}%
\sum_{l_{h_{1}}=k_{h_{1}}+1}^{k_{h_{1}+1}-1}\ldots\sum_{l_{h_{r}}=k_{h_{r}}+1}%
^{k_{h_{r}+1}-1},$$
where $[n-1]\setminus(I\cup J)=\{h_{1},\ldots,h_{r}\}$. Using this representation, we can deduce the following extension: a monotone triangle of order $n$ is a triangular array $(a_{i,j})_{1\leq j\leq i\leq n}$ of integers such that the following conditions are fulfilled.
•
There is a subset of “left-special” entries $a_{i,j}$ with $i<n$ for which we require $a_{i,j}=a_{i+1,j}$ and we mark them
with a star on the left as well as a subset of “right-special” entries $a_{i,j}$ with $i<n$ for which we require $a_{i,j}=a_{i+1,j+1}$ and mark them with a star on the right.
•
An entry cannot be both a left-special and a right-special entry. If a right-special entry and a left-special entry happen to be in the same row then the right-special entry may not be situated immediately to the left of the left-special entry.
•
If $a_{i,j}$ is not a special entry then we have $a_{i+1,j}<a_{i,j}<a_{i+1,j+1}$ in case that $a_{i+1,j}<a_{i+1,j+1}$, and $a_{i+1,j+1}\leq a_{i,j}\leq a_{i+1,j}$ otherwise. In the latter case we have an inversion.
Next we give an example of such an array.
$$\begin{array}{ccccccccc}
&&&&3&&&&\\
&&&2&&4&&&\\
&&3&&{}^{*}2&&6&&\\
&{}^{*}3&&2&&5^{*}&&7&\\
3&&1&&7&&5&&8
\end{array}$$
The sign of a monotone triangle is again $-1$ to the number of inversions and $\alpha(n;k_{1},\ldots,k_{n})$ is the signed enumeration of these extended monotone triangles with prescribed $a_{n,i}=k_{i}$.
Although we think that the fourth extension is probably the nicest, the first two extensions are the only ones where, in case that $k_{1}<k_{2}<\ldots<k_{n}$, the removal of all stars leads to a monotone triangle in the original sense and no array is assigned a minus sign, i.e. we have a plain enumeration.
5.1.3. Third extension
Another possibility to write the summation in (5.2) in terms of simple summations is the following.
$$\sum_{p\geq 0}(-1)^{p}\sum_{2\leq i_{1}<i_{2}<\ldots<i_{p}\leq n-1\atop i_{j+1%
}\not=i_{j}+1}\sum_{l_{1}=k_{1}}^{k_{2}}\sum_{l_{2}=k_{2}}^{k_{3}}\ldots\sum_{%
l_{i_{1}-1}=k_{i_{1}}}^{k_{i_{1}}}\sum_{l_{i_{1}}=k_{i_{1}}}^{k_{i_{1}}}\ldots%
\sum_{l_{i_{p}-1}=k_{i_{p}}}^{k_{i_{p}}}\sum_{l_{i_{p}}=k_{i_{p}}}^{k_{i_{p}}}%
\ldots\sum_{l_{n-1}=k_{n-1}}^{k_{n}}$$
This leads to the following extension: a monotone triangle of order $n$ is a triangular array $(a_{i,j})_{1\leq j\leq i\leq n}$ of integers such that the following conditions are fulfilled. The entries
$a_{i-1,j-1}$ and $a_{i-1,j}$ are said to be the parents of $a_{i,j}$.
•
Among the entries $(a_{i,j})_{1<j<i\leq n}$ we may have special entries such that if two of them happen to be in the same row they must not be adjacent. We mark these entries with a star. For the parents of a special entry $a_{i,j}$ we require $a_{i-1,j-1}=a_{i,j}=a_{i-1,j}$.
•
If $a_{i,j}$ is not the parent of a special entry then $a_{i+1,j}\leq a_{i,j}\leq a_{i+1,j+1}$ or $a_{i+1,j+1}<a_{i,j}<a_{i+1,j}$, respectively. In the latter case we have an inversion.
In this case, the sign of a monotone triangle is $-1$ to the number of inversions plus the number of special entries. Then $\alpha(n;k_{1},\ldots,k_{n})$ is the signed enumeration of monotone triangles with $a_{n,i}=k_{i}$. Next we give an example of such an array.
$$\begin{array}{ccccccccc}
&&&&4&&&&\\
&&&5&&3&&&\\
&&5&&5&&2&&\\
&3&&{*\atop 5}&&2&&2&\\
4&&1&&7&&{*\atop 2}&&5
\end{array}$$
This is the extension that has already appeared in [5]. There we have indicated that the non-adjacency requirement for special entries can also be ignored: suppose that $(a_{i,j})_{1\leq j\leq i\leq n}$ is an array with the properties given above except that we allow special entries to be adjacent, and suppose $a_{i,j}$ and $a_{i,j+1}$ are two adjacent special entries such that $i+j$ is maximal with this property.
Then we have
$a_{i-1,j-1}=a_{i,j}=a_{i-1,j}=a_{i,j+1}=a_{i-1,j+1}$. This
implies that $a_{i-2,j-1}=a_{i-1,j}=a_{i-2,j}$ whether or not $a_{i-1,j}$ is a special entry, which implies that changing the status of the entry $a_{i-1,j}$ is a sign-reversing involution.
5.1.4. Fourth extension
In order to explain the representation of (5.2) in terms of simple summations which is used for the fourth extension, it is convenient to
use the operator
$V_{x,y}:=E_{x}^{-1}+E_{y}-E^{-1}_{x}E_{y}$. Then
$$\sum_{k_{1}\leq l_{1}\leq k_{2}\leq l_{2}\leq\ldots\leq k_{n-1}\leq l_{n-1}%
\leq k_{n},\atop l_{i}\not=l_{i+1}}a(l_{1},\ldots,l_{n-1})=\left.V_{k_{1},k^{%
\prime}_{1}}V_{k_{2},k^{\prime}_{2}}\cdots V_{k_{n},k^{\prime}_{n}}\sum_{l_{1}%
=k^{\prime}_{1}}^{k_{2}}\sum_{l_{2}=k^{\prime}_{2}}^{k_{3}}\ldots\sum_{l_{n-1}%
=k^{\prime}_{n-1}}^{k_{n}}a(l_{1},\ldots,l_{n-1})\right|_{k^{\prime}_{i}=k_{i}},$$
if $k_{1}<k_{2}<\ldots<k_{n}$. (Note that $V_{k_{1},k^{\prime}_{1}}$ as well as $V_{k_{n},k^{\prime}_{n}}$ can also be removed as the application of $V_{x,y}$
to a function which does not depend on $x$ and $y$ acts as the identity. In order to convince oneself that this is indeed a valid representation of the summation in (5.2), one can use induction with respect to $n$ to transform it into the representation of
the first extension.)
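For the smallest non-trivial case $n=3$, this $V$-operator representation can be checked numerically; the sketch below (written for this note, with helper names of our choosing) compares both sides of the displayed identity for an arbitrary test function $a$.

```python
# Numerical check of the V-operator representation for n = 3, assuming the
# conventions E_x f = f(x + 1) and V_{x,y} = E_x^{-1} + E_y - E_x^{-1} E_y.

def G(k, kp, a):
    """The nested simple summation: sum_{l1 = k'_1}^{k_2} sum_{l2 = k'_2}^{k_3} a(l1, l2)."""
    return sum(a(l1, l2)
               for l1 in range(kp[0], k[1] + 1)
               for l2 in range(kp[1], k[2] + 1))

def V(f, i):
    """Apply V_{k_i, k'_i} to a function f(k, kp, a)."""
    def g(k, kp, a):
        km = list(k); km[i] -= 1     # E_{k_i}^{-1}
        kpp = list(kp); kpp[i] += 1  # E_{k'_i}
        return (f(tuple(km), kp, a) + f(k, tuple(kpp), a)
                - f(tuple(km), tuple(kpp), a))
    return g

a = lambda l1, l2: 100 * l1 + l2  # arbitrary test function
k = (1, 4, 7)                     # strictly increasing bottom row
lhs = sum(a(l1, l2)
          for l1 in range(k[0], k[1] + 1)
          for l2 in range(k[1], k[2] + 1) if l1 != l2)
rhs = V(V(V(G, 0), 1), 2)(k, k, a)  # apply all V's, then set k' = k
print(lhs == rhs)  # -> True
```

As the note above predicts, the operators $V_{k_{1},k^{\prime}_{1}}$ and $V_{k_{3},k^{\prime}_{3}}$ act as the identity here; only $V_{k_{2},k^{\prime}_{2}}$ contributes.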
This leads to the following extension, which we think is the nicest: a monotone triangle of order $n$ is an integer array $(a_{i,j})_{1\leq j\leq i\leq n}$
together with
a function $f$ which assigns to each $a_{i,j}$ an element of
$\{\leftarrow,\rightarrow,\leftrightarrow\}$ such that the following conditions are fulfilled for any element $a_{i,j}$ with $i<n$: we have to
distinguish cases depending on the assignment of the arrows to the elements
$a_{i+1,j}$ and $a_{i+1,j+1}$.
(1)
$f(a_{i+1,j})=\leftarrow$, $f(a_{i+1,j+1})=\leftarrow,\leftrightarrow$ : $a_{i+1,j}\leq a_{i,j}<a_{i+1,j+1}$ or
$a_{i+1,j+1}\leq a_{i,j}<a_{i+1,j}$
(2)
$f(a_{i+1,j})=\leftarrow$, $f(a_{i+1,j+1})=\rightarrow$ : $a_{i+1,j}\leq a_{i,j}\leq a_{i+1,j+1}$ or
$a_{i+1,j+1}<a_{i,j}<a_{i+1,j}$
(3)
$f(a_{i+1,j})=\leftrightarrow,\rightarrow$, $f(a_{i+1,j+1})=\leftarrow,\leftrightarrow$ : $a_{i+1,j}<a_{i,j}<a_{i+1,j+1}$ or
$a_{i+1,j+1}\leq a_{i,j}\leq a_{i+1,j}$
(4)
$f(a_{i+1,j})=\leftrightarrow,\rightarrow$, $f(a_{i+1,j+1})=\rightarrow$ : $a_{i+1,j}<a_{i,j}\leq a_{i+1,j+1}$ or
$a_{i+1,j+1}<a_{i,j}\leq a_{i+1,j}$.
In Case 1 and Case 4, there exists no pattern if $a_{i+1,j}=a_{i+1,j+1}$, in Case 2, we have no pattern if $a_{i+1,j}=a_{i+1,j+1}+1$ and, in Case 3, there is no pattern if $a_{i+1,j+1}=a_{i+1,j}+1$. In each case, we say that $a_{i,j}$ is an inversion if the second possibility applies. We define the sign of
a monotone triangle to be $-1$ to the number of inversions plus the number of elements that are assigned the element “$\leftrightarrow$”. Then
$\alpha(n;k_{1},\ldots,k_{n})$ is the signed enumeration of monotone triangles $(a_{i,j})_{1\leq j\leq i\leq n}$ of order $n$ with
$a_{n,i}=k_{i}$. Here is an example.
$$\begin{array}{ccccccccc}
&&&&{\leftrightarrow\atop 5}&&&&\\
&&&{\leftarrow\atop 5}&&{\rightarrow\atop 6}&&&\\
&&{\leftarrow\atop 4}&&{\leftarrow\atop 6}&&{\leftrightarrow\atop 7}&&\\
&{\leftarrow\atop 2}&&{\leftarrow\atop 6}&&{\rightarrow\atop 7}&&{\rightarrow\atop 5}&\\
{\rightarrow\atop 3}&&{\leftrightarrow\atop 1}&&{\rightarrow\atop 8}&&{\rightarrow\atop 5}&&{\rightarrow\atop 4}
\end{array}$$
In order to see that this extension comes from the presentation given above, note that, when expanding
$$V_{k_{1},k^{\prime}_{1}}V_{k_{2},k^{\prime}_{2}}\cdots V_{k_{n},k^{\prime}_{n}%
}=(E^{-1}_{k_{1}}+E_{k^{\prime}_{1}}-E^{-1}_{k_{1}}E_{k^{\prime}_{1}})(E^{-1}_%
{k_{2}}+E_{k^{\prime}_{2}}-E^{-1}_{k_{2}}E_{k^{\prime}_{2}})\cdots(E^{-1}_{k_{%
n}}+E_{k^{\prime}_{n}}-E^{-1}_{k_{n}}E_{k^{\prime}_{n}}),$$
the assignment of “$\leftarrow$” to the entry $k_{i}$ in the bottom row corresponds to choosing $E^{-1}_{k_{i}}$ from the operator $V_{k_{i},k^{\prime}_{i}}$, while the assignment of “$\rightarrow$” to $k_{i}$ corresponds to choosing $E_{k^{\prime}_{i}}$ and the assignment of “$\leftrightarrow$” corresponds to choosing $E^{-1}_{k_{i}}E_{k^{\prime}_{i}}$.
In all cases, the combinatorial extension of $\alpha(n;k_{1},\ldots,k_{n})$ is, generally speaking, a signed enumeration, which reduces to a plain enumeration in the first and in the second case if $k_{1},k_{2},\ldots,k_{n}$ is strictly increasing. This can be generalized as
follows.
Proposition 3.
Suppose $k_{1},k_{2},\ldots,k_{n}$ is a weakly increasing sequence of integers. Then $\alpha(n;k_{1},\ldots,k_{n})$ is the number of Gelfand-Tsetlin patterns with
prescribed bottom row $k_{1},\ldots,k_{n}$ and where all other rows are strictly increasing.
Proof. In order to see this, we use the first extension. Suppose $k_{j}=k_{j+1}$ and $(a_{i,j})_{1\leq j\leq i\leq n}$ is a respective pattern. As
$a_{n,j}=a_{n,j+1}$, it follows that $a_{n-1,j}$ is equal to this quantity as well, and at least one of $a_{n-1,j}$ and $a_{n-1,j+1}$ must be special. We can exclude the latter possibility by the following sign-reversing involution on the extended monotone triangles where $a_{n-1,j+1}$ is special in such a situation: let $j$ be maximal with this property. Then, changing the status of $a_{n-1,j}$ (from special to not special or vice versa) is a sign-reversing involution. Thus we can assume that
$a_{n-1,j+1}$ is not special (and, consequently, $a_{n-1,j}$ must be special) whenever we have $a_{n,j}=a_{n,j+1}$.
This can be used to show that $\alpha(n;k_{1},\ldots,k_{n})=0$ if there are $p,q$ with $1\leq p<q\leq n-1$ such that $k_{p}=k_{p+1}$,
$k_{q}=k_{q+1}$ and $k_{j}+1=k_{j+1}$ for $p<j<q$, which is one special case of the statement: as $a_{n-1,p+1}$ can be assumed not to be special (which already settles the case $q=p+1$), we can deduce that $a_{n-1,p+2}$ is not special (otherwise we would have no choice for $a_{n-1,p+1}$) and, by iterating this argument, we see that $a_{n-1,j}$ is not special for $p+1\leq j\leq q-1$. This implies that $a_{n-1,p+1}=a_{n,p+2},a_{n-1,p+2}=a_{n,p+3},\ldots,a_{n-1,q-1}=a_{n,q}$.
On the other hand, the fact that $a_{n-1,q}$ is special implies $a_{n-1,q-1}=a_{n,q-1}$, which is a contradiction.
Thus we may assume that such $p,q$ do not exist for our sequence $k_{1},k_{2},\ldots,k_{n}$. Consequently, if $k_{j}=k_{j+1}$ then $k_{j-1}<k_{j}$ and
$k_{j+1}<k_{j+2}$. As $a_{n-1,j}$ is special and $a_{n-1,j+1}$ is not, we have $a_{n-1,j-1}<a_{n-1,j}<a_{n-1,j+1}$. ∎
It should be remarked that the signed enumeration in the first and in the second extension is in general not a plain enumeration if $k_{1},\ldots,k_{n}$ is
weakly increasing but not strictly increasing. Also note that the proposition is equivalent to the fact that, for weakly increasing sequences $k_{1},k_{2},\ldots,k_{n}$, the application of the summation in (5.2) to $\alpha(n-1;l_{1},\ldots,l_{n-1})$ is equivalent to the application of the representation of this summation in terms of simple summations to $\alpha(n-1;l_{1},\ldots,l_{n-1})$. (If the sequence is not weakly increasing then the summation in (5.2) is over the empty set and therefore zero.) As a next step, it would be interesting to figure out whether there is a notion analogous to that of Gelfand-Tsetlin tree sequences for monotone triangles. This could be helpful in understanding the properties of $\alpha(n;k_{1},\ldots,k_{n})$, which we list next.
5.1.5. Properties of $\alpha(n;k_{1},\ldots,k_{n})$
In previous papers we have shown that $\alpha(n;k_{1},\ldots,k_{n})$ has the following properties.
(1)
For $n\geq 1$ and $i\in\{1,2,\ldots,n-1\}$, we have
$$(\operatorname{id}+E_{k_{i+1}}E^{-1}_{k_{i}}S_{k_{i},k_{i+1}})V_{k_{i},k_{i+1}%
}\alpha(n;k_{1},\ldots,k_{n})=0.$$
(This is proved in [3].)
(2)
For $n\geq 1$ and $i\in\{1,2,\ldots,n\}$, we have $\deg_{k_{i}}\alpha(n;k_{1},\ldots,k_{n})\leq n-1$. (See [3].)
(3)
For $n\geq 1$, we have $\alpha(n;k_{1},\ldots,k_{n})=(-1)^{n-1}\alpha(n;k_{2},\ldots,k_{n},k_{1}-n)$. (A proof can be found in [4].)
(4)
For $n\geq 1$ and $p\geq 1$, we have
$$e_{p}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}})\alpha(n;k_{1},\ldots,k_{n})=0.$$
(See Lemma 1 in [4].)
The first property is obviously the analog of the shift-antisymmetry of ${L}_{n}({\mathcal{T}},{\bf k})$, as the latter can be formulated as follows.
$$(\operatorname{id}+E_{k_{i+1}}E^{-1}_{k_{i}}S_{k_{i},k_{i+1}}){L}_{n}({\mathcal{T}},{\bf k})=0$$
It is interesting to note that a special case of this property for
$\alpha(n;k_{1},\ldots,k_{n})$
follows from Proposition 3: if we specialize $k_{i+1}=k_{i}-1$ then
the first property simplifies to
$$\displaystyle\alpha(n;k_{1},\ldots,k_{i-1},k_{i}-1,k_{i}-1,k_{i+2},\ldots,k_{n})+\alpha(n;k_{1},\ldots,k_{i-1},k_{i},k_{i},k_{i+2},\ldots,k_{n})\\
\displaystyle-\alpha(n;k_{1},\ldots,k_{i-1},k_{i}-1,k_{i},k_{i+2},\ldots,k_{n})=0.$$
However, for integers $k_{1},k_{2},\ldots,k_{i},k_{i+2},\ldots,k_{n}$ with
$k_{1}<k_{2}<\ldots<k_{i-1}<k_{i}-1$ and
$k_{i}<k_{i+2}<\ldots<k_{n-1}<k_{n}$, Proposition 3 implies this identity: in a monotone triangle $(a_{i,j})_{1\leq j\leq i\leq n}$ with bottom row $k_{1},\ldots,k_{i-1},k_{i}-1,k_{i},k_{i+2},\ldots,k_{n}$ we have either $a_{n-1,i}=k_{i}-1$, which corresponds to the case that we have $k_{1},\ldots,k_{i-1},k_{i}-1,k_{i}-1,k_{i+2},\ldots,k_{n}$ as bottom row, or $a_{n-1,i}=k_{i}$, which corresponds to the case that $k_{1},\ldots,k_{i-1},k_{i},k_{i},k_{i+2},\ldots,k_{n}$ is the bottom row.
As a polynomial in $k_{1},k_{2},\ldots,k_{i},k_{i+2},\ldots,k_{n}$ is
uniquely determined by its values on the set of all such tuples $(k_{1},k_{2},\ldots,k_{i},k_{i+2},\ldots,k_{n})\in\mathbb{Z}^{n-1}$, the identity follows.
Concerning the second property, we have seen that it also holds for
${\mathcal{L}}_{n}({\mathcal{T}},{\bf k})$. Both properties together actually
imply (5.2), see [3], and thus it would be interesting to give combinatorial proofs of these properties.
5.1.6. Property (3) implies the refined alternating sign matrix theorem
The third property is interesting as it holds also for
Gelfand-Tsetlin patterns where it can easily be deduced from the shift-antisymmetry. However, it is
a mystery that it also holds for monotone triangles, as we do not see how it can be deduced from the first property. Quite remarkably, it can be used to deduce the refined alternating sign matrix theorem as we explain next.
The number $A_{n,i}$ of $n\times n$ alternating sign matrices, where the unique $1$ in the first row is located in the
$i$-th column is equal to the number of monotone triangles with bottom row $1,2,\ldots,n$ and
$i$ appearances of $1$ in the first NE-diagonal, or, equivalently, the number of monotone triangles with
bottom row $1,2,\ldots,n$ and $i$ appearances of $n$ in the last SE-diagonal. (This follows immediately from the standard bijection between alternating sign matrices and monotone triangles.)
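This bijective description makes the refined counts easy to verify by brute force for small $n$. The following sketch is our own illustration (not part of the original text); it enumerates monotone triangles with bottom row $1,2,\ldots,n$ and classifies them by the number of $1$s on the left border:

```python
from itertools import product

def monotone_triangles(bottom):
    # yield all monotone triangles (rows strictly increasing, consecutive
    # rows weakly interlacing) with the given bottom row, as lists of rows
    if len(bottom) == 1:
        yield [list(bottom)]
        return
    n = len(bottom)
    ranges = [range(bottom[j], bottom[j + 1] + 1) for j in range(n - 1)]
    for row in product(*ranges):
        if all(row[j] < row[j + 1] for j in range(n - 2)):
            for tri in monotone_triangles(row):
                yield tri + [list(bottom)]

def refined_counts(n):
    # A_{n,i}: monotone triangles with bottom row 1..n and i entries equal
    # to 1 in the first NE-diagonal (the left border of the triangle)
    counts = [0] * n
    for tri in monotone_triangles(tuple(range(1, n + 1))):
        ones = sum(1 for row in tri if row[0] == 1)
        counts[ones - 1] += 1
    return counts
```

For example, `refined_counts(3)` returns $(2,3,2)$, the refined counts $A_{3,i}$, summing to the total ASM number $A_{3}=7$.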
If we assume that
$k_{1}\leq k_{2}<\ldots<k_{n}$, then the number of “partial” monotone triangles
with $n$ rows, where the entries $a_{n,1},a_{n-1,1},\ldots,a_{n-i+1,1}$ are removed, no entry is smaller than $k_{1}$ and $a_{n,m}=k_{m}$ for $m=2,3,\ldots,n$ is equal to
$$\left.(-1)^{i-1}\Delta_{k_{1}}^{i-1}\alpha(n;k_{1},\dots,k_{n})\right|_{(k_{1},\ldots,k_{n})=(1,1,2,\ldots,n-1)}.$$
(A proof is given in [6].)
In fact, it follows quite easily by induction with respect to $i$ as
$$-\Delta_{k_{1}}\left(\sum_{(l_{1},\ldots,l_{n-1})\in\mathbb{Z}^{n-1}\atop k_{1}\leq l_{1}\leq k_{2}\leq l_{2}\leq k_{3}\leq\ldots\leq k_{n-1}\leq l_{n-1}\leq k_{n},l_{i}\not=l_{i+1}}a(l_{1},\ldots,l_{n-1})\right)=\sum_{(l_{2},\ldots,l_{n-1})\in\mathbb{Z}^{n-2}\atop k_{2}\leq l_{2}\leq k_{3}\leq\ldots\leq k_{n-1}\leq l_{n-1}\leq k_{n},l_{i}\not=l_{i+1}}a(k_{1},l_{2},\ldots,l_{n-1}).$$
This implies the first identity in
$$A_{n,i}=\left.(-1)^{i-1}\Delta_{k_{1}}^{i-1}\alpha(n;k_{1},\dots,k_{n})\right|_{(k_{1},\ldots,k_{n})=(1,1,2,\ldots,n-1)}=\left.\delta_{k_{n}}^{i-1}\alpha(n;k_{1},\dots,k_{n})\right|_{(k_{1},\ldots,k_{n})=(1,2,\ldots,n-1,n-1)}.$$
The proof of the fact that the first expression is also equal to the last expression is similar.
Therefore, by Property (3),
$$\displaystyle A_{n,i}=\left.(-1)^{i+n}\Delta_{k_{1}}^{i-1}\alpha(n;k_{2},\dots,k_{n},k_{1}-n)\right|_{(k_{1},\ldots,k_{n})=(1,1,2,\ldots,n-1)}\\
\displaystyle=\left.(-1)^{i+n}\delta_{k_{1}}^{i-1}E_{k_{1}}^{-2n+1+i}\alpha(n;k_{2},\dots,k_{n},k_{1})\right|_{(k_{2},\ldots,k_{n},k_{1})=(1,2,\ldots,n-1,n-1)}.$$
We use
$E_{x}^{-m}=(\operatorname{id}-\delta_{x})^{m}=\sum\limits_{j=0}^{m}\binom{m}{j}(-1)^{j}\delta_{x}^{j}$
to see that this is equal to
$$\displaystyle\left.(-1)^{i+n}\delta_{k_{1}}^{i-1}\sum_{j=0}^{2n-1-i}\binom{2n-1-i}{j}(-1)^{j}\delta_{k_{1}}^{j}\alpha(n;k_{2},\dots,k_{n},k_{1})\right|_{(k_{2},\ldots,k_{n},k_{1})=(1,2,\ldots,n-1,n-1)}\\
\displaystyle=\sum_{j=0}^{2n-1-i}\binom{2n-1-i}{j}(-1)^{i+j+n}A_{n,i+j}.$$
This shows that the refined alternating sign matrix numbers $A_{n,i}$ are a solution of the following system of linear equations.
$$A_{n,i}=\sum_{k=1}^{n}\binom{2n-1-i}{k-i}(-1)^{k+n}A_{n,k},\qquad 1\leq i\leq n$$
In [4], it was shown that this system of linear equations together with the obvious symmetry $A_{n,i}=A_{n,n+1-i}$ determines the numbers $A_{n,i}$ inductively with respect to $n$.
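For small $n$ this linear system is easy to check numerically. In the sketch below (our own illustration) the refined counts for $n=3,4$ are hard-coded; they agree with brute-force enumeration of monotone triangles:

```python
from math import comb

# refined ASM counts A_{n,i} for n = 3, 4, e.g. from brute-force enumeration
A = {3: [2, 3, 2], 4: [7, 14, 14, 7]}

for n, row in A.items():
    for i in range(1, n + 1):
        # terms with k < i vanish since binom(2n-1-i, k-i) = 0 there
        rhs = sum(comb(2 * n - 1 - i, k - i) * (-1) ** (k + n) * row[k - 1]
                  for k in range(i, n + 1))
        assert rhs == row[i - 1]
    # the obvious symmetry used alongside the system
    assert row == row[::-1]
```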
It is worth mentioning that a similar reasoning can be applied to the doubly refined enumeration $\overline{\underline{A}}_{n,i,j}$ of $n\times n$ alternating sign matrices with respect to the position $i$ of the $1$ in the first row and the position $j$ of the $1$ in the last row. This number is equal to the number of monotone triangles with bottom row $1,2,\ldots,n$ and $i$ appearances of $1$ in the first NE-diagonal and $j$ appearances of $n$ in the last SE-diagonal, which implies (see
[6]) that
$$\overline{\underline{A}}_{n,i,j}=\left.(-1)^{i-1}\Delta^{i-1}_{k_{1}}\delta^{j-1}_{k_{n}}\alpha(n;k_{1},\ldots,k_{n})\right|_{(k_{1},\ldots,k_{n})=(2,2,\ldots,n-1,n-1)}.$$
Using the first and the third property of $\alpha(n;k_{1},\ldots,k_{n})$ displayed above we deduce
the following identity.
$$\displaystyle(\operatorname{id}+E^{n-1}_{k_{n}}E^{-n+1}_{k_{1}}S_{k_{1},k_{n}})V_{k_{n},k_{1}}\alpha(n;k_{1},\ldots,k_{n})\\
\displaystyle=(-1)^{n-1}(\operatorname{id}+E^{n-1}_{k_{n}}E^{-n+1}_{k_{1}}S_{k_{1},k_{n}})V_{k_{n},k_{1}}\alpha(n;k_{2},\ldots,k_{n},k_{1}-n)=0$$
We apply $(-1)^{i-1}\Delta^{i-1}_{k_{1}}\delta^{j-1}_{k_{n}}$ to the equivalent identity
$$\displaystyle 0=\alpha(n;k_{1},\ldots,k_{n})+\Delta_{k_{1}}\delta_{k_{n}}\alpha(n;k_{1},\ldots,k_{n})\\
\displaystyle+E^{-2n+4}_{k_{1}}E^{2n-4}_{k_{n}}\alpha(n;k_{n}-n+3,k_{2},\ldots,k_{n-1},k_{1}+n-3)+E^{-2n+4}_{k_{1}}E^{2n-4}_{k_{n}}\delta_{k_{1}}\Delta_{k_{n}}\alpha(n;k_{n}-n+3,k_{2},\ldots,k_{n-1},k_{1}+n-3)$$
to see that
$$\displaystyle 0=(-1)^{i-1}\Delta^{i-1}_{k_{1}}\delta^{j-1}_{k_{n}}\alpha(n;k_{1},\ldots,k_{n})-(-1)^{i}\Delta^{i}_{k_{1}}\delta^{j}_{k_{n}}\alpha(n;k_{1},\ldots,k_{n})\\
\displaystyle+E^{-2n+3+i}_{k_{1}}E^{2n-3-j}_{k_{n}}(-1)^{i-1}\delta^{i-1}_{k_{1}}\Delta^{j-1}_{k_{n}}\alpha(n;k_{n}-n+3,k_{2},\ldots,k_{n-1},k_{1}+n-3)\\
\displaystyle+E^{-2n+3+i}_{k_{1}}E^{2n-3-j}_{k_{n}}(-1)^{i-1}\delta^{i}_{k_{1}}\Delta^{j}_{k_{n}}\alpha(n;k_{n}-n+3,k_{2},\ldots,k_{n-1},k_{1}+n-3).$$
Now we use the expansions
$$E^{-2n+3+i}_{k_{1}}=(\operatorname{id}-\delta_{k_{1}})^{2n-3-i}=\sum\limits_{p=0}^{2n-3-i}\binom{2n-3-i}{p}(-1)^{p}\delta^{p}_{k_{1}}$$
and
$$E^{2n-3-j}_{k_{n}}=(\operatorname{id}+\Delta_{k_{n}})^{2n-3-j}=\sum\limits_{q=0}^{2n-3-j}\binom{2n-3-j}{q}\Delta^{q}_{k_{n}}$$
to see that
$$\displaystyle 0=(-1)^{i-1}\Delta^{i-1}_{k_{1}}\delta^{j-1}_{k_{n}}\alpha(n;k_{1},\ldots,k_{n})-(-1)^{i}\Delta^{i}_{k_{1}}\delta^{j}_{k_{n}}\alpha(n;k_{1},\ldots,k_{n})\\
\displaystyle+\sum_{p=0}^{2n-3-i}\sum_{q=0}^{2n-3-j}\binom{2n-3-i}{p}\binom{2n-3-j}{q}(-1)^{i-1+p}\delta^{p+i-1}_{k_{1}}\Delta^{q+j-1}_{k_{n}}\alpha(n;k_{n}-n+3,k_{2},\ldots,k_{n-1},k_{1}+n-3)\\
\displaystyle+\sum_{p=0}^{2n-3-i}\sum_{q=0}^{2n-3-j}\binom{2n-3-i}{p}\binom{2n-3-j}{q}(-1)^{i-1+p}\delta^{i+p}_{k_{1}}\Delta^{j+q}_{k_{n}}\alpha(n;k_{n}-n+3,k_{2},\ldots,k_{n-1},k_{1}+n-3).$$
We evaluate at $(k_{1},k_{2},\ldots,k_{n-1},k_{n})=(2,2,3,\ldots,n-2,n-1,n-1)$ to arrive at
$$\overline{\underline{A}}_{n,i+1,j+1}-\overline{\underline{A}}_{n,i,j}=\sum_{p=0}^{2n-3-i}\sum_{q=0}^{2n-3-j}\binom{2n-3-i}{p}\binom{2n-3-j}{q}(-1)^{i+j+p+q}\left(\overline{\underline{A}}_{n,q+j,p+i}-\overline{\underline{A}}_{n,q+j+1,p+i+1}\right).$$
Computer experiments led us to the conjecture that this identity together with the obvious relations $\overline{\underline{A}}_{n,i,j}=\overline{\underline{A}}_{n,j,i}$ and $\overline{\underline{A}}_{n,i,j}=\overline{\underline{A}}_{n,n+1-i,n+1-j}$ determine the doubly refined enumeration numbers $\overline{\underline{A}}_{n,i,j}$ uniquely inductively with respect to $n$.
5.1.7. Properties (1) and (4) imply Property (3)
The analog of the fourth property is true for Gelfand-Tsetlin tree sequences, see (4.1), for which we gave a combinatorial proof in Section 4.
The significance of this property is that it can be used to deduce the third property from the first property. Since every symmetric polynomial in $X_{1},X_{2},\ldots,X_{n}$ can be written as a polynomial in the elementary symmetric functions, this property implies that
$$p(E_{k_{1}},\ldots,E_{k_{n}})\alpha(n;k_{1},\ldots,k_{n})=p(1,1,\ldots,1)\alpha(n;k_{1},\ldots,k_{n})$$
for every symmetric polynomial $p(X_{1},\ldots,X_{n})$ in $X_{1},\ldots,X_{n}$. This extends to symmetric polynomials in $X_{1},X_{1}^{-1},\ldots,X_{n},X^{-1}_{n}$: let $p(X_{1},\ldots,X_{n})$ be such a polynomial and $t\in\mathbb{Z}$ such that $p(X_{1},\ldots,X_{n})X^{t}_{1}\cdots X^{t}_{n}=:q(X_{1},\ldots,X_{n})$ is a symmetric polynomial in $X_{1},\ldots,X_{n}$ then
$$\displaystyle p(E_{k_{1}},\ldots,E_{k_{n}})\alpha(n;k_{1},\ldots,k_{n})=E^{t}_{k_{1}}\cdots E^{t}_{k_{n}}p(E_{k_{1}},\ldots,E_{k_{n}})\alpha(n;k_{1}-t,\ldots,k_{n}-t)\\
\displaystyle=E^{t}_{k_{1}}\cdots E^{t}_{k_{n}}p(E_{k_{1}},\ldots,E_{k_{n}})\alpha(n;k_{1},\ldots,k_{n})=q(1,1,\ldots,1)\alpha(n;k_{1},\ldots,k_{n})=p(1,1,\ldots,1)\alpha(n;k_{1},\ldots,k_{n}).$$
In particular, this shows that (4.1) is also true if all “$\Delta$”s are replaced by “$\delta$”s. Now we are ready to deduce Property (3) from Properties (1) and (4). Note that the operator $V_{x,y}$ is invertible as an operator on polynomials in $x$ and $y$; this follows as $V_{x,y}=\operatorname{id}+\delta_{x}\Delta_{y}$ and
$$V^{-1}_{x,y}=\sum_{i=0}^{\infty}(-1)^{i}\delta_{x}^{i}\Delta^{i}_{y}.$$
(The sum is finite when applied to polynomials.)
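Since each application of $\delta_{x}\Delta_{y}$ lowers the degree in both variables, the series indeed terminates on polynomials, and the inverse can be checked numerically. The following sketch is our own illustration (the sample polynomial is arbitrary):

```python
def D_y(f):
    # forward difference in y: (Delta_y f)(x, y) = f(x, y+1) - f(x, y)
    return lambda x, y: f(x, y + 1) - f(x, y)

def d_x(f):
    # backward difference in x: (delta_x f)(x, y) = f(x, y) - f(x-1, y)
    return lambda x, y: f(x, y) - f(x - 1, y)

def V(f):
    # V_{x,y} = id + delta_x Delta_y
    g = d_x(D_y(f))
    return lambda x, y: f(x, y) + g(x, y)

def V_inv(f, n_terms):
    # truncated series sum_i (-1)^i delta_x^i Delta_y^i; exact once n_terms
    # exceeds min(deg_x f, deg_y f), since each delta_x Delta_y lowers both
    terms, h = [], f
    for _ in range(n_terms):
        terms.append(h)
        h = d_x(D_y(h))
    return lambda x, y: sum((-1) ** i * t(x, y) for i, t in enumerate(terms))

f = lambda x, y: x**3 * y + 2 * x * y**2 + 5   # sample polynomial
g = V_inv(V(f), 6)
assert all(g(x, y) == f(x, y) for x in range(-3, 4) for y in range(-3, 4))
```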
Property (1) is obviously equivalent to
$$\alpha(n;k_{1},\ldots,k_{i-1},k_{i+1}+1,k_{i}-1,k_{i+2},\ldots,k_{n})=-V_{k_{i},k_{i+1}}V^{-1}_{k_{i+1},k_{i}}\alpha(n;k_{1},\ldots,k_{n}).$$
This implies
$$\displaystyle(-1)^{n-1}\alpha(n;k_{2},\ldots,k_{n},k_{1}-n)=(-1)^{n-1}\alpha(n;k_{2}+1,\ldots,k_{n}+1,k_{1}-n+1)\\
\displaystyle=\prod_{i=2}^{n}V_{k_{1},k_{i}}V^{-1}_{k_{i},k_{1}}\alpha(n;k_{1},\ldots,k_{n}).$$
Therefore, in order to show the third property, we have
to prove that
$$\left(\prod_{i=2}^{n}V_{k_{1},k_{i}}-\prod_{i=2}^{n}V_{k_{i},k_{1}}\right)\alpha(n;k_{1},\ldots,k_{n})=0.$$
This follows from the fourth property as
$$\displaystyle\prod_{i=2}^{n}V_{k_{1},k_{i}}-\prod_{i=2}^{n}V_{k_{i},k_{1}}=\prod_{i=2}^{n}(\operatorname{id}+\delta_{k_{1}}\Delta_{k_{i}})-\prod_{i=2}^{n}(\operatorname{id}+\Delta_{k_{1}}\delta_{k_{i}})=\sum_{r=0}^{n-1}\delta^{r}_{k_{1}}e_{r}(\Delta_{k_{2}},\ldots,\Delta_{k_{n}})-\sum_{r=0}^{n-1}\Delta^{r}_{k_{1}}e_{r}(\delta_{k_{2}},\ldots,\delta_{k_{n}})\\
\displaystyle=\sum_{r=0}^{n-1}\left(\delta^{r}_{k_{1}}\left(e_{r}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}})-\Delta_{k_{1}}e_{r-1}(\Delta_{k_{2}},\ldots,\Delta_{k_{n}})\right)-\Delta^{r}_{k_{1}}\left(e_{r}(\delta_{k_{1}},\ldots,\delta_{k_{n}})-\delta_{k_{1}}e_{r-1}(\delta_{k_{2}},\ldots,\delta_{k_{n}})\right)\right)\\
\displaystyle=\sum_{r=0}^{n-1}\left(\delta^{r}_{k_{1}}e_{r}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}})-\Delta^{r}_{k_{1}}e_{r}(\delta_{k_{1}},\ldots,\delta_{k_{n}})\right)\\
\displaystyle-\sum_{r=1}^{n-1}\left(\delta^{r}_{k_{1}}\Delta_{k_{1}}e_{r-1}(\Delta_{k_{2}},\ldots,\Delta_{k_{n}})-\Delta^{r}_{k_{1}}\delta_{k_{1}}e_{r-1}(\delta_{k_{2}},\ldots,\delta_{k_{n}})\right)=\ldots\\
\displaystyle=\sum_{s=1}^{n}\sum_{r=1}^{n-s}(-1)^{s}\left(\Delta^{r+s-1}_{k_{1}}\delta^{s-1}_{k_{1}}e_{r}(\delta_{k_{1}},\ldots,\delta_{k_{n}})-\delta^{r+s-1}_{k_{1}}\Delta^{s-1}_{k_{1}}e_{r}(\Delta_{k_{1}},\ldots,\Delta_{k_{n}})\right).$$
Appendix A The non-intersecting lattice paths point of view
In Figure 11, the family of non-intersecting lattice paths that corresponds to the Gelfand-Tsetlin pattern given in the introduction is displayed: in general, the lattice paths join the starting points $(0,0),(-1,1),\ldots,(-n+1,n-1)$ to
the end points $(1,k_{1}),(1,k_{2}+1),\ldots,(1,k_{n}+n-1)$, where the lattice paths can take east and north steps of length $1$ and end with a step to the east.
As indicated in the drawing, the heights of the horizontal steps of the $i$-th
path, counted from the bottom, can be obtained from the $i$-th southeast
diagonal of the Gelfand-Tsetlin pattern, counted from the left, by adding $i$ to the entries in the respective diagonal of the Gelfand-Tsetlin pattern. By a well-known result on the enumeration of non-intersecting lattice paths of Lindström [10, Lemma 1] and of Gessel and Viennot
[8, Theorem 1], this number is equal to
$$\det_{1\leq i,j\leq n}\binom{k_{j}+j-1}{i-1},$$
(A.1)
which is, by the Vandermonde determinant evaluation, equal to (1.1).
(Note that $\binom{k_{j}+j-1}{i-1}$ is a polynomial in $k_{j}$ of degree $i-1$.)
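For small non-negative, weakly increasing bottom rows, the equality between the Gelfand-Tsetlin pattern count and determinant (A.1) is easy to confirm by direct enumeration. A sketch of our own (the test rows are arbitrary):

```python
from math import comb
from itertools import product

def gt_count(bottom):
    # number of Gelfand-Tsetlin patterns with the given weakly increasing
    # bottom row, by enumerating the possible next row up and recursing
    if len(bottom) == 1:
        return 1
    ranges = [range(bottom[j], bottom[j + 1] + 1)
              for j in range(len(bottom) - 1)]
    return sum(gt_count(up) for up in product(*ranges))

def det(m):
    # determinant by Laplace expansion along the first row (small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def lgv_det(k):
    # determinant (A.1): det( binom(k_j + j - 1, i - 1) ), 1 <= i, j <= n,
    # written here with 0-based indices
    n = len(k)
    return det([[comb(k[j] + j, i) for j in range(n)] for i in range(n)])

for k in [(2, 2), (0, 1, 2), (0, 2, 4), (0, 1, 2, 3)]:
    assert gt_count(k) == lgv_det(k)
```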
Interestingly, another possibility to extend the combinatorial interpretation of (1.1) to all $(k_{1},\ldots,k_{n})\in\mathbb{Z}_{\geq 0}^{n}$
is related to this interpretation in terms of families of non-intersecting lattice paths: for arbitrary non-negative integers $k_{1},k_{2},\ldots,k_{n}$,
consider families of $n$ lattice paths with unit steps to the north and to the east (for the moment, these families may be intersecting) that connect the starting points $(0,0),(-1,1),\ldots,(-n+1,n-1)$ to the end points $(0,k_{1}),(0,k_{2}+1),\ldots,(0,k_{n}+n-1)$, in any order. (Now we omit the vertical steps at the end of the paths.) If the $i$-th starting point $(-i+1,i-1)$ is connected to the $\pi_{i}$-th end point $(0,k_{\pi_{i}}+\pi_{i}-1)$, then the sign of the family is defined as the sign of the permutation
$(\pi_{1},\pi_{2},\ldots,\pi_{n})=\pi$. Then, (1.1) is the
signed enumeration of families of lattice paths with these starting points and end points. The merit of the theorem of Lindström and of Gessel and Viennot is the definition of a sign reversion involution on the families of intersecting lattice paths, which shows that only the non-intersecting families remain in the signed enumeration. Depending on the relative positions of the numbers $k_{1},k_{2}+1,\ldots,k_{n}+n-1$, there is only one permutation $\pi$ for which a family of non-intersecting lattice paths exists at all. This implies that the signed enumeration of families of lattice paths reduces essentially (i.e. up to the sign of $\pi$) to the plain enumeration of families of non-intersecting lattice paths.
Finally, it is worth mentioning (without proof) that the requirement that all $k_{i}$ are non-negative can be avoided. A close look at the proof shows that this requirement is needed in the first place to guarantee that the location of the end points is not “too far” to the south of the starting points. If an end point is south-east of a starting point then there is obviously no lattice path connecting them which only uses steps of the form $(1,0)$ and $(0,1)$. However, in such a case it is convenient to allow steps of the form $(1,-1)$ and $(0,-1)$. Moreover, if we require these paths to start with a step of the form $(0,-1)$ and let each step of the form $(1,-1)$ contribute a minus sign, we obtain an interpretation of (1.1) for all $(k_{1},\ldots,k_{n})\in\mathbb{Z}^{n}$. A typical situation is sketched in Figure 12.
Appendix B Another proof of the shift-antisymmetry
We sketch a
(sort of) combinatorial proof of the shift-antisymmetry of the signed enumeration of
Gelfand-Tsetlin patterns with prescribed bottom row which does not rely on
the notion of Gelfand-Tsetlin tree sequences. The argument is a bit involved and thus shows the
merit of the notion of Gelfand-Tsetlin tree sequences. On the other hand,
it could be helpful for proving the analogous property for monotone triangles, as we have not yet established a notion
analogous to that of Gelfand-Tsetlin tree sequences for monotone triangles,
see Section 5.
The following notion turns out to be extremely useful in order to avoid case distinctions: we define
$[x,y]:=\{z\in\mathbb{Z}|x\leq z\leq y\}$ if $x\leq y$ as usual, $[x,x-1]:=\emptyset$
and $[x,y]:=[y+1,x-1]$ if $y+1\leq x-1$. The latter situation is said to be an inversion. By considering all possible relative positions of $x,y,z$, it is not hard to see that
$$[x,y]\triangle[x,z+1]=[y+1,z+1],$$
where $A\triangle B:=(A\setminus B)\cup(B\setminus A)$ is the symmetric difference.
In fact, concerning this symmetric difference, the following can
be observed: either one set is contained in the other or the sets are disjoint. The latter situation
occurs iff exactly one of $[x,y]$ and $[x,z+1]$ is an inversion.
On the other hand,
$$[z,x]\triangle[y-1,x]=[x+1,z-1]\triangle[x+1,y-2]=[z,y-2]=[y-1,z-1]$$
and we have $[z,x]\setminus[y-1,x]\not=\emptyset$ and $[y-1,x]\setminus[z,x]\not=\emptyset$ (which implies that the two sets are disjoint) iff exactly one of $[z,x]$ and $[y-1,x]$ is an inversion.
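Since this convention invites sign errors, an exhaustive check over a small range is reassuring. The sketch below (our own, not part of the original text) encodes generalized intervals as sets and verifies both displayed identities:

```python
def iv(x, y):
    # generalized interval: [x, y] as usual for x <= y, the empty set for
    # y = x - 1, and the "inversion" [y+1, x-1] for y + 1 <= x - 1
    if x <= y:
        return set(range(x, y + 1))
    return set(range(y + 1, x))   # empty when y = x - 1

R = range(-4, 5)
for x in R:
    for y in R:
        for z in R:
            # [x,y] triangle [x,z+1] = [y+1,z+1]
            assert iv(x, y) ^ iv(x, z + 1) == iv(y + 1, z + 1)
            # [z,x] triangle [y-1,x] = [y-1,z-1]
            assert iv(z, x) ^ iv(y - 1, x) == iv(y - 1, z - 1)
```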
Let
${\mathcal{L}}_{n}(k_{1},\ldots,k_{n}):={\mathcal{L}}_{n}({\mathcal{B}},{\bf k})$ denote the set of Gelfand-Tsetlin
patterns with bottom row $k_{1},k_{2},\ldots,k_{n}$ and ${L}_{n}(k_{1},\ldots,k_{n}):={L}_{n}({\mathcal{B}},{\bf k})$
the corresponding signed enumeration. The proof is by induction with respect to $n$. Nothing is to be done for $n=1$. Otherwise, it suffices to consider the case $j=i+1$. We fix $i\in\{1,2,\ldots,n-1\}$ and decompose ${\mathcal{L}}_{n}(k_{1},\ldots,k_{n})$ into four
sets: let ${\mathcal{L}}^{1}_{n,i}(k_{1},\ldots,k_{n})$ denote the subset of patterns
$(a_{p,q})_{1\leq q\leq p\leq n}\in{\mathcal{L}}_{n}(k_{1},\ldots,k_{n})$
for which the replacement $a_{n,i}\to k_{i+1}+1$ and $a_{n,i+1}\to k_{i}-1$
produces another Gelfand-Tsetlin pattern (which is obviously an element of
${\mathcal{L}}_{n}(k_{1},\ldots,k_{i-1},k_{i+1}+1,k_{i}-1,k_{i+2},\ldots,k_{n})$ then). If
we perform this replacement we can either have a contradiction concerning the requirement for $l_{i-1}:=a_{n-1,i-1}$ or for $l_{i+1}:=a_{n-1,i+1}$. (There can not be a contradiction for
$l_{i}:=a_{n-1,i}$ as $l_{i}\in[k_{i},k_{i+1}]$ if and only if $l_{i}\in[k_{i+1}+1,k_{i}-1]$.) We let ${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{n})$ denote the set of patterns, where we have a contradiction
for $l_{i-1}$ but not for $l_{i+1}$, ${\mathcal{L}}^{3}_{n,i}(k_{1},\ldots,k_{n})$ denote
the set of patterns, where we have a contradiction for $l_{i+1}$ but not for $l_{i-1}$ and ${\mathcal{L}}^{4}_{n,i}(k_{1},\ldots,k_{n})$ denote the set of patterns, where we have a contradiction for both $l_{i-1}$ and $l_{i+1}$. Finally, we let ${L}^{j}_{n,i}(k_{1},\ldots,k_{n})$ denote the respective signed enumerations. We aim to show that
$${L}^{j}_{n,i}(k_{1},\ldots,k_{n})=-{L}^{j}_{n,i}(k_{1},\ldots,k_{i-1},k_{i+1}+1,k_{i}-1,k_{i+2},\ldots,k_{n})$$
(B.1)
if $j\in\{1,2,3,4\}$.
The case $j=1$ is almost obvious, only the sign requires the following thoughts:
having no contradiction for both $l_{i-1}$ and $l_{i+1}$ means that
$l_{i-1}\in[k_{i-1},k_{i}]\cap[k_{i-1},k_{i+1}+1]$ and $l_{i+1}\in[k_{i+1},k_{i+2}]\cap[k_{i}-1,k_{i+2}]$. This is in fact true for patterns in ${\mathcal{L}}^{1}_{n,i}(k_{1},\ldots,k_{n})$ as well as for patterns in ${\mathcal{L}}^{1}_{n,i}(k_{1},\ldots,k_{i+1}+1,k_{i}-1,\ldots,k_{n})$. The intersection
$[k_{i-1},k_{i}]\cap[k_{i-1},k_{i+1}+1]$ is empty if exactly one of the intervals is an inversion.
Thus we may assume that they are either both inversions or both not inversions.
This implies that $l_{i-1}$ is an inversion for the patterns on the left if and only if it is an inversion for the patterns on the right. The same is true for $l_{i+1}$. On the other hand,
$l_{i}$ is obviously an inversion on the left if and only if it is no inversion on the right,
which takes care of the minus sign.
We show (B.1) for $j=2$ (the case $j=3$ is analogous by symmetry):
given an element of ${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{n})$, we have $l_{i-1}\in[k_{i-1},k_{i}]\setminus[k_{i-1},k_{i+1}+1]$, whereas for an element of
${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{i+1}+1,k_{i}-1,\ldots,k_{n})$, we
have $l_{i-1}\in[k_{i-1},k_{i+1}+1]\setminus[k_{i-1},k_{i}]$. The conditions for the other elements are the same. (In particular, $l_{i+1}\in[k_{i+1},k_{i+2}]\cap[k_{i}-1,k_{i+2}]$.) If we are in the case that either both sets $[k_{i-1},k_{i}]$ and $[k_{i-1},k_{i+1}+1]$ are no inversions or both sets are inversions then one set is contained in the other, which implies that one of the conditions for $l_{i-1}$ can not be met. However, then the condition for $l_{i-1}$ in the other set is that it lies in $[k_{i}+1,k_{i+1}+1]$. As the condition for $l_{i}$ is that it is contained in $[k_{i},k_{i+1}]$ it follows, by the shift-antisymmetry for $n-1$, that the signed enumeration of the patterns in
this set must be zero.
If, however, exactly one set of $[k_{i-1},k_{i}]$ and $[k_{i-1},k_{i+1}+1]$ is an inversion then the sets are disjoint and their union is $[k_{i}+1,k_{i+1}+1]$.
We decompose the two sets ${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{n})$ and ${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{i+1}+1,k_{i}-1,\ldots,k_{n})$ further according to whether $l_{i}\in[k_{i-1}-1,k_{i}-1]$ or
$l_{i}\in[k_{i-1}-1,k_{i+1}]$. (Observe that also $[k_{i},k_{i+1}]$ is the disjoint union of
$[k_{i-1}-1,k_{i}-1]$ and $[k_{i-1}-1,k_{i+1}]$.)
By the shift-antisymmetry for $n-1$, the signed enumeration
of the elements in ${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{n})$ which satisfy $l_{i}\in[k_{i-1}-1,k_{i}-1]$ is zero as the requirement for $l_{i-1}$ is that it is contained in $[k_{i-1},k_{i}]$.
Similarly, the signed enumeration of the elements in ${\mathcal{L}}^{2}_{n,i}(k_{1},\ldots,k_{i+1}+1,k_{i}-1,\ldots,k_{n})$ with $l_{i}\in[k_{i-1}-1,k_{i+1}]$ is zero. Thus,
for the first set, we are left with the patterns that satisfy
$l_{i-1}\in[k_{i-1},k_{i}]$ and $l_{i}\in[k_{i-1}-1,k_{i+1}]$ and, for the second set,
the patterns with $l_{i-1}\in[k_{i-1},k_{i+1}+1]$ and $l_{i}\in[k_{i-1}-1,k_{i}-1]$ remain.
By the symmetry of these conditions and the shift-antisymmetry for $n-1$, we see that the signed enumeration of the first set
is the negative of the signed enumeration of the second set: as for the sign observe that $l_{i-1}$ is an inversion on the left (which is the case iff $[k_{i-1},k_{i}]$ is an inversion) if and only if it is no inversion on the right (which is the case iff $[k_{i-1},k_{i+1}+1]$ is no inversion). The analog assertion is true for $l_{i}$ as it is an inversion on the left iff $[k_{i},k_{i+1}]$ is an inversion and it is an inversion on the right iff $[k_{i+1}+1,k_{i}-1]$ is an inversion. Finally, for $l_{i+1}$ we have the situation that it is an inversion on the left iff it is an inversion on the right or the condition $l_{i+1}\in[k_{i+1},k_{i+2}]\cap[k_{i}-1,k_{i+2}]$ can not be met.
The case $j=4$ is similar though a bit more complicated and left to the interested reader.
References
[1]
D. M. Bressoud, Proofs and Confirmations, The Story of the
Alternating Sign Matrix Conjecture, Cambridge University Press,
Cambridge, 1999.
[2]
I. Fischer, A method for proving polynomial enumeration formulas,
J. Combin. Theory Ser. A 111 (2005), 37–58.
[3]
I. Fischer, The number of monotone triangles with prescribed bottom row,
Adv. in Appl. Math. 37 (2006), no. 2, 249–267.
[4]
I. Fischer, A new proof of the refined alternating sign matrix theorem, J. Comb. Theory Ser. A 114 (2007), 253–264.
[5]
I. Fischer, The operator formula for monotone triangles – simplified proof and three generalizations, J. Comb. Theory Ser. A 117 (2010), 1143–1157.
[6]
I. Fischer,
Refined enumerations of alternating sign matrices: monotone $(d,m)$-trapezoids with prescribed top and bottom row, J. Alg. Combin. 33 (2011), 239–257.
[7]
I. M. Gelfand and M. L. Tsetlin, Finite-dimensional representations of the group of unimodular
matrices (in Russian), Doklady Akad. Nauk. SSSR (N. S.) 71 (1950), 825–828.
[8]
I. M. Gessel and X. Viennot, Determinants, paths and plane partitions, preprint, 1989.
[9]
C. Krattenthaler, A gog-magog conjecture, unpublished, http://www.mat.univie.ac.at/~kratt/.
[10]
B. Lindström, On the vector representations of induced matroids, Bull. London Math. Soc. 5
(1973), 85–90.
[11]
R. P. Stanley, Enumerative combinatorics, vol. 2, Cambridge
University Press, Cambridge 1999. |
On a very steep version of the standard map.
M. Arnold
University of Texas at Dallas, Richardson, TX,
USA
T. Dauer
Indiana University, Bloomington, IN, USA
M. Doucette
University of Chicago, Chicago, IL, USA
S.-C. Wolf
Université Paris-Saclay, Paris, France
We consider the long time behavior of the trajectories of the
discontinuous analog of the standard Chirikov map. We prove that for some
values of the parameters all trajectories remain bounded for all time. For
another set of parameters we provide an estimate of the escape rate of the
trajectories and present a numerically supported conjecture for the actual
escape rate.
1 Introduction.
We consider the area-preserving transformation of the cylinder
$[0,1)\times\mathbb{R}$ defined by $f(x,y)=(x^{\prime},y^{\prime})$ where
$$\begin{cases}x^{\prime}&=x+\alpha y\ (\text{mod}\ 1)\\
y^{\prime}&=y+\operatorname{\mathrm{sgn}}\left(x^{\prime}-\frac{1}{2}\right).\end{cases}$$
(1)
The parameter $\alpha\in\mathbb{R}$ is called the twist parameter. A point
at position $(x,y)$ on the cylinder moves at constant height $y$ around
the cylinder a distance $\alpha y$, and then moves up one unit if it is on
the right half of the cylinder ($x^{\prime}\in(1/2,1)$), down one unit if it is on
the left half ($x^{\prime}\in(0,1/2)$), and stays at the same vertical position if it
is at the singular lines $x^{\prime}=1/2$ or $0$.
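For numerical experiments it is convenient to iterate the map in exact rational arithmetic, so that points landing exactly on the singular lines are classified correctly. A minimal sketch (our own illustration, not code from the paper):

```python
from fractions import Fraction

def step(x, y, alpha):
    # one iteration of map (1); exact rationals avoid misclassifying
    # points that land exactly on the singular lines x' = 0, 1/2
    x2 = (x + alpha * y) % 1
    if x2 > Fraction(1, 2):
        y2 = y + 1                # right half: move up
    elif 0 < x2 < Fraction(1, 2):
        y2 = y - 1                # left half: move down
    else:
        y2 = y                    # singular lines: stay at the same height
    return x2, y2

# with alpha = 1 the vertical line x = 3/4 climbs one unit per iteration
x, y = Fraction(3, 4), Fraction(0)
for _ in range(5):
    x, y = step(x, y, Fraction(1))
assert (x, y) == (Fraction(3, 4), Fraction(5))
```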
Such a system can be regarded as a discontinuous analog of the standard
Chirikov map (see [3]), where the smooth function $\sin(x^{\prime})$
is replaced by the discontinuous $\operatorname{\mathrm{sgn}}(x^{\prime})$. This system can also be
obtained from the Fermi-Ulam accelerator model with the sawtooth-like
wall movement regime (see [1] for details). For the
smooth variants of the described problems the KAM technique can be used to
establish the existence of invariant curves separating the phase space, so no
unbounded orbits exist for such systems. Since transformation (1)
is discontinuous, KAM theory is not applicable and new methods are
needed for the analysis. Such systems, having many interesting
dynamical properties, have attracted a lot of attention in the past few years
(see e.g. [5], [4]).
In this note we study the asymptotic properties of the orbits of system
(1) in terms of the growth rate of the height $y_{n}$ of
the iterates $(x_{n},y_{n})=f^{n}(x_{0},y_{0})$. We will focus on the rational
values of the twist parameter $\alpha$. The case of irrational values of $\alpha$ is
more difficult and will be the subject of future work.
In the next section, we collect preliminary results on the structure of the
set of orbits of system (1) and relate our system to
a transformation on a finite lattice. In Section 3 we present
our main results and state some conjectures based on numerical simulations.
Section 4 is devoted
to the numerical study of the periodic orbits.
Acknowledgments.
The present work was done during the
Summer@ICERM research program in 2015. The authors are deeply grateful
to ICERM and Brown University for the hospitality and highly encouraging
atmosphere. The authors also want to thank
Vadim Zharnitsky and Stefan Klajbor-Goderich for deep and fruitful
discussions.
2 Preliminaries.
We will use the following notation.
The integer part of $x$ is denoted by $\lfloor x\rfloor$; therefore for the
fractional part of $x$ we have $\{x\}=x-\lfloor x\rfloor$.
$\mathbb{Z}_{q}=\{0,1,\dots,q-1\}$ will denote the ring of residues modulo $q$.
Lemma 1.
Map (1) is symmetric with respect to
the point
$(1/2,0)$. In detail, for any
two points $(x,y)$ and $(\tilde{x},\tilde{y})$ such that
$(x,y)+(\tilde{x},\tilde{y})=(1,0)$ one has
$f(x,y)+f(\tilde{x},\tilde{y})=(1,0)$ (See Fig. 1).
Proof.
Let $(\tilde{x},\tilde{y})=(1-x,-y)$; then $x^{\prime}=x+\alpha y$ and
$\tilde{x}^{\prime}=1-x-\alpha y=1-x^{\prime}$. Hence $\tilde{y}^{\prime}-\tilde{y}=-(y^{\prime}-y)$.
∎
Next we notice that the transformation (1) preserves
the lattice $y\in\{y_{0}+\mathbb{Z}\}$ in the second coordinate. Let
$\alpha=\dfrac{p}{q}$, where $p,q\in\mathbb{Z}$ and $\gcd(p,q)=1$. Then for two
points $(x,y)$ and
$(x,y\pm q)$ first
components of their images coincide and thus the increments in the
second
components are equal. Therefore we can restrict our attention to the set
$(x,y)\in[0,1)\times\{y_{0}+\mathbb{Z}_{q}\}$. Transformation (1) can be regarded as an interval exchange transformation on the union
of $2q$ intervals $\bigcup_{j=1}^{q}(I_{j,+}\cup I_{j,-})$, where
$I_{j,-}=\{(x,y):qx+py\ (\text{mod}\ q)<q/2,\ y=y_{0}+j\}$ and $I_{j,+}=\{(x,y):qx+py\ (\text{mod}\ q)>q/2,\ y=y_{0}+j\}$ (see Fig. 1).
If one considers the special case $q=1$, the dynamics of system
(1) degenerates to
$$\begin{cases}x^{\prime}=x+py_{0}\ (\text{mod}\ 1)\\
j^{\prime}=j+\operatorname{\mathrm{sgn}}\left(x^{\prime}-\frac{1}{2}\right)\end{cases}$$
In this particular case the dynamics of the
first coordinate becomes independent of the second coordinate.
Therefore one can identify all the intervals $I_{j,+}=I_{+}$ and all the
intervals $I_{j,-}=I_{-}$.
For $py_{0}=1$ one immediately obtains the linearly
growing trajectory $f^{n}(3/4,y_{0})=(3/4,n+y_{0})$. On the other hand, for
$py_{0}=1/2$ any trajectory remains bounded, since
for $x\in I_{+}$ it follows from Lemma 1 that $x^{\prime}\in I_{-}$.
The case of $y_{0}$
being irrational has been extensively studied (see
[9, 7, 8, 2]); it provides random-like behavior of
the trajectories depending on the arithmetic properties of the initial
condition $y_{0}$.
In this paper we address the case $q>1$ and consider rational initial
conditions $y_{0}=\dfrac{a}{b}$. Using the substitution $y=y_{0}+j$, we rewrite
transformation (1) as
$$\begin{cases}x^{\prime}=x+\frac{p(a+bj)}{bq}\ (\text{mod}\ 1)\\
j^{\prime}=j+\operatorname{\mathrm{sgn}}\left(x^{\prime}-\frac{1}{2}\right)\end{cases}$$
(2)
where $j^{\prime}$ is defined by the expression $y^{\prime}=y_{0}+j^{\prime}$.
Lemma 2.
Trajectories of the system (2) are organized in
bands: for $(x,j)$ and $(\tilde{x},j)$ such that
$\lfloor bqx\rfloor=\lfloor bq\tilde{x}\rfloor$ it
follows that $\lfloor bqx^{\prime}\rfloor=\lfloor bq\tilde{x}^{\prime}\rfloor$.
Proof.
Obviously, the integer parts of $xbq$ and $\tilde{x}bq$ are changed by
transformation (2) by the same amount
$p(a+bj)$.
∎
From Lemma 2 it follows that we can restrict our attention to
single representatives of the classes of equivalent trajectories and
consider our transformation on the discrete torus $(x,y)\in\mathbb{Z}_{bq}\times\mathbb{Z}_{q}$. For the sake of simplicity we will use the following lattices:
$$L_{e}=\left\{\left(\frac{2+4r}{4bq},\frac{a}{b}+j\right),~{}j\in\mathbb{Z}_{q},~{}r\in\mathbb{Z}_{bq}\right\}$$
$$L_{o}=\left\{\left(\frac{3+4r}{4bq},\frac{a}{b}+j\right),~{}j\in\mathbb{Z}_{q},~{}r\in\mathbb{Z}_{bq}\right\}$$
We refer to $L_{e}$ and $L_{o}$ as the even and the odd lattice,
respectively. $L_{o}$ can be obtained from $L_{e}$ by shifting to the right
by $\dfrac{1}{4bq}$ (see Fig. 2).
Thanks to lemma 2 these lattices are invariant
under the action of $f$.
To simplify the notations, henceforth when $bq$ is even we consider $f:L_{e}\rightarrow L_{e}$, and when $bq$ is odd we consider $f:L_{o}\rightarrow L_{o}$. For purposes of calculation we will think of $f$ as acting on $(r,j)$
instead of $(x,y)$.
Explicitly, for $r\in\mathbb{Z}_{bq}$ and $j\in\mathbb{Z}_{q}$ we
have
$$\begin{cases}r^{\prime}=r+p(a+bj)&\ (\text{mod}\ bq)\\
j^{\prime}=j+\operatorname{\mathrm{sgn}}(2r^{\prime}-bq+1+\delta)&\ (\text{mod%
}\ q),\end{cases}$$
(3)
where $\delta=bq\ (\text{mod}\ 2)$ refers to our choice of the lattice
$L_{e}$ or
$L_{o}$.
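For concreteness, the lattice map (3) can be implemented directly. The sketch below (our own code; the function and variable names are ours, not from the paper) iterates the map and records the total increment in the second coordinate over one period of the finite orbit.

```python
# The lattice map (3): state (r, j) in Z_bq x Z_q, parameters alpha = p/q and
# y0 = a/b; delta = bq mod 2 selects the lattice L_e (even bq) or L_o (odd bq).
def make_f(p, q, a, b):
    n, delta = b * q, (b * q) % 2
    def f(r, j):
        r2 = (r + p * (a + b * j)) % n
        step = 1 if 2 * r2 - n + 1 + delta > 0 else -1  # sgn(2r' - bq + 1 + delta)
        return r2, (j + step) % q, step                  # step = increment of y
    return f

# Total y-increment and period of the orbit through (r0, j0); a nonzero
# increment means the corresponding lifted trajectory of (1) escapes.
def orbit_increment(p, q, a, b, r0, j0):
    f = make_f(p, q, a, b)
    r, j, total, period = r0, j0, 0, 0
    while True:
        r, j, s = f(r, j)
        total, period = total + s, period + 1
        if (r, j) == (r0, j0):
            return total, period

# Example: alpha = 1/3, y0 = 0 (odd bq = 3): the orbit through (1, 0) escapes,
# while the orbit through (0, 0) is bounded.
print(orbit_increment(1, 3, 0, 1, 1, 0))   # → (3, 3)
print(orbit_increment(1, 3, 0, 1, 0, 0))   # → (0, 4)
```

Note that the total increment of the escaping orbit is $3=q$, in line with the observation that increments are proportional to $q$.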
3 Main Results.
Since the lattices $L_{e}$, $L_{o}$ are finite, all the trajectories of the system
(3) are periodic. The total increment in the
second coordinate of any periodic trajectory has to be proportional to
$q$. If the total increment of a trajectory is zero, we will call such a
trajectory bounded or periodic. Otherwise the trajectory will be called
escaping.
3.1 Existence of escaping trajectories.
Theorem 1.
Let
$\alpha=\frac{p}{q}$
and
$y_{0}=\frac{a}{b}$ be two rational numbers satisfying the condition
$\lfloor bq/2\rfloor=pa\ (\text{mod}\ b)$. Then
1.
For $bq$ even, any orbit of the transformation
(1) starting at the level $y=y_{0}$ is
bounded.
2.
For $bq$ odd there exists a unique class of equivalent
trajectories of the system (1) starting at the
level $y=y_{0}$ and growing without bounds.
Proof.
Thanks to the above discussion, every
unbounded trajectory of the system (1) corresponds
to an escaping trajectory of the system
(3). Thus, one has to show that the system (3)
either does not have escaping trajectories (for even $bq$) or
has exactly one escaping trajectory (for odd $bq$).
The transformation (3) can be considered as continuous
with respect to the second coordinate, since each
iteration changes $j$ by $\pm 1$. We will construct a
critical level $j=j_{*}$ such that no trajectory can cross it in the case of
even $bq$. As will be clear from the construction, in the case of
odd $bq$ there is only one trajectory which can cross this level.
We will look for $j_{*}$ such that
$$p(a+bj_{*})=\lfloor bq/2\rfloor\ (\text{mod}\ bq).$$
(4)
Note that $p/q$ is irreducible and so
$\gcd(pb,bq)=b$. Since by the assumption of the theorem $\lfloor bq/2\rfloor-pa$ is divisible by $b$ we conclude that the congruence
(4) has exactly $b$ solutions in the form
$j_{*}+kq$, $k=0,1,\dots(b-1)$ (see [6]). Thus all these
solutions correspond to the same equivalence class in $\mathbb{Z}_{q}$.
We will show that in the case of even $bq$
no trajectories may cross the level $j=j_{*}$, and for odd $bq$ there
is only one such trajectory.
Indeed, assume for definiteness that for $(r,j_{*}-1)$ we have
$f(r,j_{*}-1)=(r^{\prime},j_{*})$. This means that $r^{\prime}$ belongs to the right half of
the cylinder. But for even $bq$ it follows that $p(a+bj_{*})=bq/2$, and
so $r^{\prime}$ is shifted exactly by $bq/2$; thus a trajectory coming to the critical
level from below will go down at the next step. From lemma
1 it follows that no trajectory can cross this level
from above.
For the case of odd $bq$ one gets $p(a+bj_{*})=(bq-1)/2$, and so only $r=(bq-1)/2$ together with its image $r^{\prime}=bq-1$ belong to the right half
of the cylinder. Since there is a unique point at which a trajectory may
pass the level $j_{*}$,
such a trajectory necessarily has to be escaping (see Fig.
5). ∎
Remark 1.
In the case of an integral initial condition $y_{0}=a$ one can set $b=1$,
and so the congruence (4) always has a solution. Thus
theorem 1 states that for $\alpha=1/(2k)$
all the trajectories of the system (1) remain bounded,
while for $\alpha=1/(2k+1)$ there is only one equivalence class of
unbounded trajectories.
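Remark 1 is easy to confirm numerically. The following sketch (our own code, assuming $p=1$, $a=0$, $b=1$, so the phase space of (3) is $\mathbb{Z}_{q}\times\mathbb{Z}_{q}$) counts the equivalence classes of escaping orbits for small $q$.

```python
# Count escaping orbit classes of map (3) for alpha = 1/q, y0 = 0
# (p = 1, a = 0, b = 1, so the lattice is Z_q x Z_q).
def escaping_classes(q):
    delta = q % 2                      # lattice choice: L_e for even q, L_o for odd q
    seen, classes = set(), 0
    for start in ((r, j) for r in range(q) for j in range(q)):
        if start in seen:
            continue
        (r, j), total = start, 0
        while True:
            r = (r + j) % q            # r' = r + p*(a + b*j) mod bq
            step = 1 if 2 * r - q + 1 + delta > 0 else -1
            j = (j + step) % q
            total += step
            seen.add((r, j))
            if (r, j) == start:
                break
        if total != 0:                 # nonzero lift over one period: escaping
            classes += 1
    return classes

# Theorem 1 predicts: no escaping class for even q, exactly one for odd q.
print([escaping_classes(q) for q in range(2, 10)])  # → [0, 1, 0, 1, 0, 1, 0, 1]
```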
From numerical simulations the following statement is evident.
Conjecture 1.
For every $\alpha=p/q$, there exists $y_{0}=a/b$ such that there is
an escaping orbit of (1) starting at the level
$y=y_{0}$.
For odd $q$ it follows from theorem 1
that $a=0$, $b=1$ provides the desired result. When the technical
condition of theorem 1 is not satisfied, the
congruence (4) has
no solutions, and we cannot construct the bottleneck level whose
crossing would guarantee that a trajectory is escaping. However, one
particular case
seems to be tractable. From here on we fix $p=1$.
Theorem 2.
For $q=4k+2$ there exists an escaping
trajectory of the transformation $f$, starting at the level $y_{0}=1/2$.
Letting $a=1$, $b=2$, we will show that the orbit of the
point
$(r_{0},j_{0})=(4k-1,2k+1)$ is unbounded.
From (3) we get the following system
$$\begin{cases}r^{\prime}=r+1+2j\ (\text{mod}\ 2q)\\
j^{\prime}=j+\textrm{sgn}(1+2r^{\prime}-2q)\ (\text{mod}\ q)\end{cases}$$
The next lemma provides some control over the sub-lattice to which
the orbit of $(r_{0},j_{0})$ belongs.
Lemma 3.
Let $q=4k+2$. Denote by $(r_{m},j_{m})$ the
$m$-th
iterate of the point $(r_{0},j_{0})=(4k~{}-~{}1,2k~{}+~{}1)$.
Then
$r_{m}+(m\ \text{mod}\ 2)=3\ (\text{mod}\ 4)$.
Proof.
At first we observe that the parity of the second coordinate of
the point always differs from the parity of $m$. Indeed, $j_{0}$ is
odd and at each step the trajectory gains or loses $1$.
We proceed by induction.
First let us consider $m~{}=~{}0$. We have $j_{0}=2k+1=1\ (\text{mod}\ 2)$ and $r_{0}=4k-1=3\ (\text{mod}\ 4)$. Then for
$m=1$ we get $r_{1}=r_{0}+2j_{0}+1=3+2+1=2\ (\text{mod}\ 4)$.
Now assume that for some even $m$ the assumption of the
lemma holds true. Then since $j_{m}=1\ (\text{mod}\ 2)$ it follows
$r_{m+1}=r_{m}+2j_{m}+1=2\ (\text{mod}\ 4)$.
Finally, if the assumption holds for some odd $m$ we get
$j_{m}~{}=~{}0\ (\text{mod}\ 2)$ and therefore
$r_{m+1}=r_{m}+2j_{m}+1=2+0+1\ (\text{mod}\ 4)=3\ (\text{mod}\ 4)$.
∎
Proof of theorem 2.
Consider the orbit of the point
$(r_{0},j_{0})=(4k-1,2k+1)$.
One can easily calculate that $j_{1}=j_{0}+1$ and $j_{2}=j_{1}+1$,
that is, there are immediately two consecutive increases. It
then
suffices to show that there is no point at level $j=2k+3$
from which there are two consecutive decreases.
To have a decrease to the
$j=2k+1$ level from $(r,2k+2)$ we need to have
$1+2r^{\prime}-2q<0$, i.e.,
$$r+q+3\ (\text{mod}\ 2q)<q-\frac{1}{2}$$
(5)
For $r<q-3$ we have $r+q+3\ (\text{mod}\ 2q)=r+q+3$, so for the
inequality (5) to hold we need $r<-\frac{7}{2}$,
which is impossible. For $r\geq q-3$ we have $r+q+3\ (\text{mod}\ 2q)=r-q+3$, so for (5) to hold we need
$r<2q-\frac{7}{2}$.
By lemma 3, $r=2\ (\text{mod}\ 4)$ for
any
point in our desired orbit at level $j=2k+2$, and so
the
only $r$’s that are possibly in the orbit and result in a decrease
from this level are $r=4k+2,4k+6,\ldots,8k-2$.
In the $j=2k+3$ level, to have a decrease to the $j=2k+2$ level we need to have $r^{\prime}<4k+\frac{3}{2}$. To have two consecutive decreases, this $r^{\prime}$
must be
one of the $r$’s we found in the previous paragraph. But the
smallest such $r$ is $4k+2$, so this cannot happen.
∎
For $q={4k}$, we searched
for $y_{0}=a/b$ that give an escaping orbit for some $(x_{0},y_{0})\in L_{e}$.
We present, for each $k$, the fraction $a/b$ with the smallest $b$:
$$\begin{array}{c|cccccccc}
k&1&2&3&4&5&6&7&8\\
\hline
a&1&4&4&1&26&36&67&63\\
b&3&13&11&45&57&103&144&205
\end{array}$$
$$\begin{array}{c|cccccc}
k&9&10&11&12&13&14\\
\hline
a&77&19&23&360&243&23\\
b&227&337&223&1043&1264&505
\end{array}$$
One can see that the $b$ required increases rather quickly with $k$. It
also appears that one cannot simply narrow the search by taking
$a=1$. For example, for $k=3$ we searched for escaping orbits with
$y_{0}=1/b$ and found none for $b\leq 5000$.
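The search just described can be reproduced by a short brute-force program. The sketch below enumerates denominators $b$ in increasing order (the tie-breaking order over $a$ is our assumption, not specified in the text).

```python
from math import gcd

# Does any orbit of the lattice map (3) escape for alpha = 1/q, y0 = a/b?
def has_escaping(q, a, b, p=1):
    n, delta = b * q, (b * q) % 2
    seen = set()
    for start in ((r, j) for r in range(n) for j in range(q)):
        if start in seen:
            continue
        (r, j), total = start, 0
        while True:
            r = (r + p * (a + b * j)) % n
            step = 1 if 2 * r - n + 1 + delta > 0 else -1
            j = (j + step) % q
            total += step
            seen.add((r, j))
            if (r, j) == start:
                break
        if total != 0:
            return True
    return False

# Smallest denominator b (with some numerator a) giving an escaping orbit.
def smallest_ab(q):
    b = 1
    while True:
        for a in range(b):
            if gcd(a, b) == 1 and has_escaping(q, a, b):
                return a, b
        b += 1

print(smallest_ab(4))   # k = 1 entry of the table above: (a, b) = (1, 3)
```

Theorem 2 can be checked the same way: `has_escaping(6, 1, 2)` returns `True` for $q=6=4\cdot 1+2$, $y_{0}=1/2$.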
3.2 Length of the escaping orbit.
Now we will investigate the growth rate of the escaping orbit. The
fastest possible rate of the transformation (1) is
linear, i.e. the trajectory may gain as much as $O(N)$ in the second
coordinate after $N$ iterations. In fact, escaping trajectories grow
much slower. Since the phase space of the transformation (3) is finite, and thus so are all the trajectories, we will consider the
lengths of the trajectories instead of their growth rates. Let
$\alpha=1/q$ and $y_{0}=0$. Theorem 1
provides a unique
escaping trajectory for each odd $q$.
Definition 1.
Define $\ell(q)$ as the length of the unique escaping trajectory of $f$ on
$L_{o}$.
The quantity $\ell(q)$ describes the portion of the phase space
$\mathbb{Z}_{q}\times\mathbb{Z}_{q}$ swept by the escaping trajectory. Thus a linearly
growing trajectory would have $\ell(q)=O(q)$, since such a trajectory
should visit every level of the lattice an $O(1)$ number of times. In fact,
numerical experiments show that the escaping trajectory has the
slowest possible growth rate, $\ell(q)=O(q^{2})$ (see Fig. 4).
Conjecture 2.
The length of the escaping orbit $\ell(q)$ grows as
$O(q^{2})$.
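Conjecture 2 can be probed directly by measuring $\ell(q)$; the sketch below (our code, with $p=1$, $a=0$, $b=1$) finds the escaping orbit of (3) for odd $q$ and returns its period.

```python
# ell(q): period of the unique escaping orbit of map (3) for odd q,
# alpha = 1/q, y0 = 0 (phase space Z_q x Z_q, odd lattice L_o, delta = 1).
def ell(q):
    seen = set()
    for start in ((r, j) for r in range(q) for j in range(q)):
        if start in seen:
            continue
        (r, j), total, length = start, 0, 0
        while True:
            r = (r + j) % q
            step = 1 if 2 * r - q + 2 > 0 else -1   # sgn(2r' - q + 2)
            j = (j + step) % q
            total, length = total + step, length + 1
            seen.add((r, j))
            if (r, j) == start:
                break
        if total != 0:          # the escaping orbit, unique by theorem 1
            return length

# ell(3) = 3; Conjecture 2 suggests ell(q)/q**2 tends to a constant.
print([ell(q) for q in (3, 5, 7, 9, 11)])
```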
At this moment we can only provide a much milder estimate.
Theorem 3.
Consider transformation
(1) with $\alpha=\frac{1}{q}$ with odd
$q$ and $y_{0}=0$. Let $\{x_{n},y_{n}\}$ denote the unbounded
trajectory provided
by theorem 1. Then for any $n$ and
$n^{\prime}$ such that $|n^{\prime}-n|<q\log q$ it follows that $|y_{n^{\prime}}-y_{n}|<q$.
We will prove this theorem by providing an a priori bound on
the length
$\ell(q)$.
The idea of the proof consists in estimating the time it takes the
escaping trajectory to pass the levels near the bottleneck level
$j=(q+1)/2$. We will use two lemmas. The first lemma states that
two consecutive vertical
increases near $j=(q+1)/2+m$ (for $m\geq 0$ reasonably small)
cause the resulting iterate to be at most about $m$ units to the right
of $x=1/2$ (a line of discontinuity for $f$). The second lemma uses this
information about the first coordinate of the trajectory to estimate the
time the trajectory spends on the prescribed level (see
Fig. 5).
Lemma 4.
Let $q\geq 1$ be odd. Suppose $(r_{0},j_{0})\in L_{o}$,
$(r_{1},j_{1})=f(r_{0},j_{0})$, and $(r_{2},j_{2})=f(r_{1},j_{1})$ are such that
$j_{0}=\frac{q+1}{2}+m-2$, $j_{1}=j_{0}+1$, and $j_{2}=j_{0}+2$ for some $m\in\{1,2,\ldots,\frac{q-1}{2}\}$.
Then
$r_{2}\in[\frac{q-1}{2},\frac{q-1}{2}+~{}m~{}-~{}1]$.
Proof.
Let $j_{0}$, $j_{1}$, and $j_{2}$ be as in the statement of the lemma.
That $j_{1}>j_{0}$ means $r_{1}>q/2$ (since we need
$\operatorname{\mathrm{sgn}}(r_{1}-\frac{q}{2})=1$ in order for this to happen), and
in the same way $j_{2}>j_{1}\implies r_{2}>q/2$. Therefore
$\frac{q-1}{2}\leq r_{i}\leq q-1$ for $i\in\{1,2\}$. These
inequalities make sense, despite $\mathbb{Z}_{q}$ not being ordered,
because everything is in the interval $[0,q)$.
Each value of $r_{1}$ satisfying these inequalities can be written as
$r_{1}=\frac{q-1}{2}+n$ for some $n\in\{0,1,\ldots,\frac{q-1}{2}\}$. We have
$$\begin{split}\displaystyle r_{2}&\displaystyle=r_{1}+j_{1}\ (\text{mod}\ q)=%
\left(\frac{q-1}{2}+n\right)+\\
&\displaystyle+\left(\frac{q+1}{2}+m-1\right)\ (\text{mod}\ q)=n+m-1\end{split}$$
Using our definitions of $n$ and $m$, we obtain
$$\frac{q-1}{2}\leq r_{2}=n+m-1\leq\frac{q-1}{2}+m-1$$
as desired.
∎
The next lemma roughly states that if we start at a point within $m$
units horizontally of $r=\frac{q+1}{2}$ (for $m>0$ reasonably small)
and at vertical level $j=\frac{q+1}{2}+m$, then the iterates of the
point
bounce at least about $q/2m$ times between $j=\frac{q+1}{2}+m$
and
$j=\frac{q+1}{2}+m-1$.
Lemma 5.
Let $q\geq 9$ be an odd integer. Let
$j_{0}=\frac{q+1}{2}+m$
and
$\frac{q+1}{2}\leq r_{0}\leq\frac{q+1}{2}+m-1$ for $m\in\{1,2,\ldots,\lfloor q/9\rfloor\}$. Let $(r_{0},j_{0})\in L_{o}$ and
take $N_{m}$ to be the greatest integer such that $j_{n}\in\{\frac{q+1}{2}+m,~{}\frac{q+1}{2}+m-1\}$ for all $n\leq N_{m}$. Then
$N_{m}\geq\left\lfloor\frac{q-1}{2m}\right\rfloor-1$.
Proof.
Let $r_{0}=\frac{q+1}{2}+s$, where $s$ is in $\{0,1,\ldots,m-1\}$.
We
have
$$\begin{split}\displaystyle r_{1}&\displaystyle=r_{0}+j_{0}\ (\text{mod}\ q)=s+%
m+1<\frac{q}{2},\\
\displaystyle j_{1}&\displaystyle=j_{0}-1=\frac{q+1}{2}+m-1\end{split}$$
and
$$\begin{split}\displaystyle r_{2}&\displaystyle=r_{1}+j_{1}\ (\text{mod}\ q)=%
\frac{q+1}{2}+s+2m>\frac{q}{2},\\
\displaystyle j_{2}&\displaystyle=j_{1}+1=\frac{q+1}{2}+m.\end{split}$$
Every two iterations of $f$, the value of $r$ increases by the amount
$\left(\frac{q+1}{2}+m\right)+\left(\frac{q+1}{2}+m-1\right)\ (\text{mod}\ q)=2m$
until $r$ increases past $q$. Thus,
$$(r_{2n-1},\,j_{2n-1})=\left(s+(2n-1)m+1,\,\frac{q+1}{2}+m-1\right)$$
and
$$(r_{2n},\,j_{2n})=\left(\frac{q+1}{2}+s+2nm,\,\frac{q+1}{2}+m\right)$$
for all integers $n$ with $0\leq n\leq n^{*}$, where
$n^{*}$ is such that
$s+(2n^{*}-1)m+1<\frac{q}{2}$ and
$\frac{q+1}{2}+s+2mn^{*}<q$.
We claim that $n^{*}=\left\lfloor\dfrac{q-1}{4m}\right\rfloor-1$
satisfies these inequalities. We have
$$s+(2n^{*}-1)m+1<m-1+\left(\frac{q-1}{2m}-1\right)m+1=\frac{q-1}{2}$$
And on the other hand
$$\frac{q+1}{2}+s+2mn^{*}\leq\frac{q+1}{2}+m-1+2\left(\frac{q-1}{4m}-1\right)m=q-m-1<q,$$
as claimed. Note also that $n^{*}\geq 0$, since
$$\left\lfloor\frac{q-1}{4m}\right\rfloor-1\geq\frac{q-1}{4m}-2\geq\frac{9(q-1)}{4q}-2=\frac{1}{4}-\frac{9}{4q}\geq\frac{1}{4}-\frac{1}{4}=0$$
Therefore the total number of points $N_{m}$ with $j$ in
$\{\frac{q+1}{2}+m,\frac{q+1}{2}+m-1\}$ satisfies
$$N_{m}\geq 2n^{*}+1=\left\lfloor\frac{q-1}{2m}\right\rfloor-1$$
∎
Proof of theorem 3.
From the proof of theorem 1 it follows
that the escaping orbit passes through the point $(x_{0},(q-1)/2)$,
where $x_{0}=\frac{1}{4}+\frac{q-1}{2}$. There exist
positive integers $n_{k}$ for $k\in\{1,2,\ldots,\lfloor q/9\rfloor\}$
such that
$$j_{n_{k}-2}=\frac{q+1}{2}+k-2,\;j_{n_{k}-1}=\frac{q+1}{2}+k-1$$
$$j_{n_{k}}=\frac{q+1}{2}+k$$
since the orbit must pass through at least one point at each height.
By lemma 4, $\frac{q+1}{2}\leq r_{n_{k}}\leq\frac{q+1}{2}+k-1$. Define
$$A_{k}=\left\{(r_{n_{k}+m},j_{n_{k}+m}):m=0,1,...,\left\lfloor\frac{q-1}{2k}%
\right\rfloor-2\right\}$$
By lemma 5, $|A_{k}|=\left\lfloor\frac{q-1}{2k}\right\rfloor-1$. Since
$$\frac{q}{2}<x_{n_{k}+m}<q,~{}y_{n_{k}+m}=\frac{q+1}{2}+k~{}\textrm{for~{}}m%
\textrm{ even and}$$
$$0<x_{n_{k}+m}<\frac{q}{2},~{}y_{n_{k}+m}=\frac{q+1}{2}+k-1~{}\textrm{for~{}}m%
\textrm{ odd,}$$
the $A_{k}$ are disjoint. Therefore we have
$$\ell(q)\geq\sum_{k=1}^{\lfloor q/9\rfloor}\left(\left\lfloor\frac{q-1}{2k}%
\right\rfloor-1\right)=O(q\log q)$$
as desired.
∎
4 Periodic orbits.
We conclude our discussion with the numerical investigation of the
distribution of
the periodic orbits.
From theorem 1 it follows
that the whole phase space of the system (3) is divided
into
the set of periodic orbits of various periods. If $q$ is odd then there
exists a unique orbit of enormously large period which sweeps almost
half of the phase space. It turns out that all the other periods are
distributed in the range of $O(q)$. For even $q$ all the periods belong to
this range. What is spectacular is that we observe some similarity in the
distribution of these periods for even and odd values of $q$. The collection
of periodic orbits represents a partition of the number
$q^{2}$
into the sum of the periods of the trajectories. We present here the
Young
diagrams for these partitions scaled by the factor of $q$ in both
directions. For the case of odd $q$ we present the diagram
corresponding to the partition of the set of bounded trajectories. It turns
out that the Young diagrams constructed for the cases of
even $q$ and for the bounded part of the phase space for the odd $q$
are similar (see Fig. 5(b)).
Conjecture 3.
The maximum length of the bounded trajectories of the transformation
(3) is of magnitude $O(q)$.
Looking at the portrait of the escaping trajectory (Fig. 2(b))
one can notice well-defined lacunae corresponding to the levels
$j=q/(2n+1)$. These lacunae represent the islands of stability around the
corresponding periodic points for the transformation $f$.
However, these islands do not exhaust the whole phase space, since
every island consists of $O(q)$ bounded trajectories while every such
trajectory has period of order $O(q^{1/2})$ (see Fig. 7).
Nevertheless we observe that
Conjecture 4.
Distributions of large periods of the periodic trajectories for even values
of $q$ coincide.
On the other hand, for small periods we have observed some differences. It
turns out that for
$q=2\ (\text{mod}\ 4)$ the number of periodic orbits of small periods does not depend
on $q$, while for $q=4k$ there are exactly $(2k-1)$ periodic orbits
of period $4$. Indeed, one can easily check by direct computation that
the trajectory of every point $(r,k)$, $r\in[2k,3k-1)$, is
$4$-periodic.
Combined with lemma 1, this observation
provides $2k-2$ points of period $4$. Since the point $(0,0)$ is clearly
$4$-periodic for any $q$, the total
number of $4$-periodic orbits equals $2k-1$.
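The direct computation mentioned above is straightforward to reproduce; a sketch, using our implementation of the map (3) with $p=1$, $a=0$, $b=1$:

```python
# Map (3) with p = 1, a = 0, b = 1 (phase space Z_q x Z_q).
def f(state, q):
    r, j = state
    r = (r + j) % q
    step = 1 if 2 * r - q + 1 + q % 2 > 0 else -1
    return r, (j + step) % q

def is_4_periodic(state, q):
    s = state
    for _ in range(4):
        s = f(s, q)
    return s == state

def count_4_periodic_orbits(q):
    orbits = set()
    for r in range(q):
        for j in range(q):
            if is_4_periodic((r, j), q):
                orbit, s = [], (r, j)
                for _ in range(4):
                    orbit.append(s)
                    s = f(s, q)
                orbits.add(frozenset(orbit))
    return len(orbits)

# (0, 0) is 4-periodic for any q; for q = 8 (k = 2) the point (4, 2) from the
# range r in [2k, 3k-1) is 4-periodic as claimed.
print(is_4_periodic((0, 0), 8), is_4_periodic((4, 2), 8))   # → True True
print(count_4_periodic_orbits(4))                            # → 1 (= 2k-1 for k = 1)
```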
References
[1]
M. Arnold and V. Zharnitsky.
Pinball dynamics: unlimited energy growth in switching Hamiltonian systems.
Comm. Math. Phys., 338(2):501–521, 2015.
[2]
A. Avila, D. Dolgopyat, E. Duryev, and O. Sarig.
The visits to zero of a random walk driven by an irrational rotation.
Israel J. Math., 207(2):653–717, 2015.
[3]
B. Chirikov.
A universal instability of many-dimensional oscillator systems.
Phys. Rep., 52(5):263–379, 1979.
[4]
J. de Simoi and D. Dolgopyat.
Dynamics of some piecewise smooth Fermi-Ulam models.
Chaos, 22(2):026124, 2012.
[5]
D. Dolgopyat.
Piecewise smooth perturbations of integrable systems.
In XVIIth International Congress of Mathematical Physics, pages
52–67.
[6]
G. H. Hardy and E. M. Wright.
An introduction to the theory of numbers.
Oxford University Press, 2008.
[7]
H. Kesten.
On a conjecture of Erdös and Szüsz related to uniform
distribution mod 1.
Acta Arith., 12:193–212, 1966.
[8]
D. Ralston.
Substitutions and 1/2-discrepancy of $\{n\theta+x\}$.
Acta Arith., 154(1):1–28, 2012.
[9]
L. Rocadas and J. Schloissengeiger.
On the local discrepancy of $(n\alpha)$-sequences.
J. Number Theory, 131(8):1492–1497, 2011. |
Three-Way Serpentine Slow-Wave Structures with Stationary Inflection
Point and Enhanced Interaction Impedance
Robert Marosi, Tarek Mealy, Alexander Figotin, and Filippo Capolino
Robert Marosi, Tarek Mealy, and Filippo Capolino are with the
Department of Electrical Engineering and Computer Science, University
of California, Irvine, Irvine, California, e-mail: rmarosi@uci.edu, tmealy@uci.edu, f.capolino@uci.edu. Alexander Figotin is with the Department of Mathematics, University
of California, Irvine, Irvine, California, e-mail: afigotin@uci.edu.
Abstract
We introduce two novel variants of the serpentine waveguide slow-wave
structure (SWS), often utilized in millimeter-wave traveling-wave
tubes (TWTs), with an enhanced interaction impedance. Using dispersion
engineering in conjunction with transfer matrix methods, we tune the
guided wavenumber dispersion relation to exhibit stationary inflection
points (SIPs), and also non-stationary, or “tilted” inflection
points (TIPs), within the dominant $\mathrm{TE_{10}}$ mode of a rectangular
waveguide. The degeneracy is found below the first upper band-edge
associated with the bandgap where neighboring spatial harmonics meet
in the dispersion of the serpentine waveguide (SWG) which is threaded
by a beam tunnel.
The structure geometries are optimized to be able to achieve an SIP
which allows for three-mode synchronism with an electron beam over
a specified wavenumber interval in the desired Brillouin zone. Full-wave
simulations are used to obtain and verify the existence of the SIP
in the three-way coupled waveguide and fine-tune the geometry such
that a beam would be in synchronism at or near the SIP. The three-way
waveguide SWS exhibits a moderately high Pierce impedance in the vicinity
of a nearly-stationary inflection point, making the SWS geometry potentially
useful for improving the power gain and basic extraction efficiency
of millimeter-wave TWTs. Additionally, the introduced SWS geometries
have directional coupler-like behavior, which enables distributed
power extraction at frequencies near the SIP frequency.
Index Terms:
serpentine waveguide (SWG), serpentine ladder waveguide (SLWG), three-coupled
serpentine waveguide (TCSWG), traveling-wave tube (TWT), millimeter-wave,
dispersion, stationary inflection point (SIP), distributed power extraction
(DPE), interaction impedance.
I Introduction
Traveling-wave tubes (TWTs) are able to perform
broadband, high-power amplification due to the distributed transfer
of energy from a beam of electrons to guided electromagnetic fields.
There are two primary factors that control the strength of the interaction
between the beam and the electromagnetic field: velocity synchronization
between the beamline and guided modes, and the beam-wave interaction impedance,
also called Pierce impedance, in the TWT slow wave structure (SWS)
[1]. In S-band helix TWTs, Pierce impedance is
typically on the order of 100 Ω, allowing for efficient
basic beam-wave power conversion. However, at millimeter-wave frequencies,
microfabricated slow-wave structures such as the serpentine waveguide
(SWG) typically exhibit Pierce (or interaction) impedance on the order
of 10 Ω or smaller, which drastically reduces their basic
conversion efficiency. This is partly due to the fact that serpentine-type
TWTs are typically synchronized to an electron beam in the first or
second Brillouin zone rather than the fundamental Brillouin zone to
avoid the use of relativistic electron beam velocities which require
much larger voltages to accelerate electrons close to the speed of
light [2]. Due to structure geometry that
scales inversely with operating frequency, helix-type TWTs become
extremely difficult to fabricate at millimeter-wave frequencies, making
microfabricated structures like the SWG much more attractive. In this
paper, we propose two new types of dispersion-engineered SWSs based
on the SWG geometry that are capable of exhibiting moderately high
Pierce impedance over narrow bandwidths and can be microfabricated.
These two structure variants, the serpentine ladder waveguide (SLWG)
and the three-coupled serpentine waveguide (TCSWG), with their longitudinal
cross-sections illustrated in Figs. 1a and
1b, respectively, can have their geometries
easily designed to exhibit stationary inflection points (SIPs) or
nearly-stationary tilted inflection points (TIPs), sometimes referred
to as tilted SIPs, at specific frequencies, similarly to the kind
shown in Fig. 1c. Such structures with SIPs are
capable of exhibiting moderately high to very high Pierce (interaction)
impedance comparable to the impedance observed near the band-edge
of SWG SWS, where the group velocity and power flow also vanish. Therefore,
at the frequencies that the electron beam is in synchronism with the
SIP, we expect the proposed SIP SWS to also have a high Pierce gain
parameter, as $C^{3}=Z_{\mathrm{Pierce}}I_{0}/(4V_{0})$, where $I_{0}$
and $V_{0}$ are the average current and equivalent kinetic voltage
of the electron beam, respectively. These structures have potential
use in the design of compact, high efficiency, millimeter-wave TWTs
and backward-wave oscillators (BWOs).
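As a numerical illustration of the Pierce gain parameter expression above, the following sketch evaluates $C$ for illustrative beam parameters; the values of $Z_{\mathrm{Pierce}}$, $I_{0}$, and $V_{0}$ are our assumptions, not values from this paper.

```python
# Pierce gain parameter C from C^3 = Z_Pierce * I0 / (4 * V0).
# All numbers below are illustrative stand-ins.
Z_pierce = 10.0   # interaction impedance [ohm], typical mm-wave SWG order
I0 = 0.1          # average beam current [A]
V0 = 20e3         # equivalent kinetic beam voltage [V]

C = (Z_pierce * I0 / (4.0 * V0)) ** (1.0 / 3.0)
print(round(C, 4))   # → 0.0232
```

A larger interaction impedance raises $C$ only as the cube root, which is why an order-of-magnitude impedance enhancement near an SIP still yields a meaningful, but not proportional, gain improvement.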
The SIP is a class of modal degeneracy, whereby three eigenmodes coalesce
in both their wavenumbers and eigenvectors (polarization states).
Such modal degeneracies of orders 2, 3, and 4 were originally investigated
by Figotin and Vitebskiy in [3, 4, 5, 6, 7, 8].
The SIP is a particular type of exceptional point of degeneracy (EPD),
and is sometimes called a “frozen mode” in literature. Exceptional
points of degeneracy of various orders have been previously investigated
theoretically in gainless and lossless structures operating at both
radio and optical frequencies in [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22],
and have also been experimentally demonstrated at radio frequency
in [23, 24, 25, 26, 27, 28].
In particular, the first experimental demonstration of SIPs at radio
frequencies in a reciprocal three-way waveguide SWS has been performed
in [28]. The slow wave structures we introduce here
are designed to operate with an electron beam synchronized to three
degenerate “cold” eigenmodes. That is, the SIP is made to exist
in the “cold” dispersion relation, i.e., before the introduction
of an electron beam, which will perturb the dispersion relation. Note
that this regime of operation is different from the regime of “exceptional
synchronization” or “degenerate synchronization” studied in
[29, 30, 31], where an
exceptional point of degeneracy is designed to occur in the “hot”
system, i.e., it is visible only in the hot modal dispersion relation.
These exceptionally synchronized modes of the structure become degenerate
when a synchronized electron beam is coupled to the electromagnetic
modes.
We define a “waveguide way” as an individual waveguide component
(rectangular waveguide, SWG, etc.) which is not in cutoff over the
designed operating frequency and can support two electromagnetic modes
(one forward and one backward). A three-way waveguide supports six
modes, when considering propagation in both the $+z$ and $-z$ directions,
i.e., three modes in each direction. The three-way microstrip structure
with SIP in [28] is what inspired the design of
the structures in this paper.
The concept of “three-mode synchronization regime” using SIPs
in the cold SWS dispersion of linear-beam tubes was initially proposed
in [22] and was based on ideal transmission lines,
following a multi-transmission line generalization of the Pierce model.
Here, we show for the first time how one can design a realistic serpentine-like
SWS to exhibit a cold SIP at millimeter-wave frequencies.
In Section II, we explain the concept
of the SIP, smooth-TIP, and alternating-TIP. In Section III,
we introduce the two proposed waveguide structures and we describe
our design methodology to introduce SIPs in the three-way coupled
waveguides. In Section IV, we show the dispersion,
scattering parameters, and Pierce (interaction) impedance enhancement
for our three-way coupled waveguides. All dimensions used in this
paper are in SI units unless otherwise stated.
II Cold Stationary Inflection Points
Stationary inflection points are a special case of eigenmode degeneracy,
whereby three eigenmodes coalesce at a single frequency point in the
dispersion diagram for the modes of a periodic structure, as has been
explored in [32, 33, 34, 35, 10, 36, 37, 38, 39, 22, 12, 40].
On the other hand, a non-stationary, or “tilted” inflection
point is a single frequency point in the structure’s modal dispersion
diagram where three eigenmodes are nearly coalescing, but not perfectly
so. In the structure’s modal dispersion diagram, the dispersion relation
local to an SIP or TIP will be cubic in shape. In general, these cubic-shaped
dispersion relations will have an inflection point that occurs at
a frequency-wavenumber combination where the second derivative of
the $\omega-k$ dispersion relation vanishes (i.e. $d^{2}\omega/dk^{2}=0$).
These inflection points are classified here into two kinds: the SIP,
and the non-stationary inflection point, or TIP. The SIP occurs where
both the first and second derivative of the $\omega-k$ dispersion
relation vanish at the same wavenumber, i.e., $d\omega/dk=0$ and
$d^{2}\omega/dk^{2}=0$. For the case of the TIP, only the second
derivative of the $\omega-k$ dispersion relation vanishes at the
inflection point. In other words, the SIP is a special case of TIP.
Because the modal dispersion diagrams for our lossless, reciprocal
structures are symmetric in each Brillouin zone, the usual classification
of non-stationary inflection points as rising or falling is ambiguous.
Every reciprocal waveguide with a TIP in its dispersion relation will
always exhibit two kinds of TIPs, rising and falling, for opposite
signs of $k_{\mathrm{TIP}}$. To remedy this, we further classify TIPs
into two sub-categories: smooth-TIPs which have a non-vanishing
group velocity that does not change sign for wavenumbers slightly
above and below the inflection point, and alternating-TIPs
which have a group velocity that alternates in sign as the wavenumber
is swept near the inflection point, as illustrated in Fig. 1c.
We were able to design both the SLWG and TCSWG structures to exhibit
SIPs in their cold dispersion, that is, without an electron beam present.
We stress that this form of modal degeneracy is different from the
hot eigenmode degeneracy, or “exceptional synchronization” studied
in [29, 30]. The coalescence of the cold
eigenmodes at an SIP results in a perfect cubic dispersion relation
local to the SIP in the dispersion diagram,
$$\left(f-f_{\mathrm{SIP}}\right)\simeq h\left(k-k_{\mathrm{SIP}}\right)^{3}.$$
(1)
When the cold eigenmodes of a structure are nearly-coalescing
at a single wavenumber, and the dispersion relation exhibits a TIP,
the dispersion relation local to the inflection point may be represented
by a depressed cubic function (the quadratic term is suppressed due
to the shift in $k$ by the cubic function’s inflection point),
$$\left(f-f_{\mathrm{TIP}}\right)\simeq h\left(k-k_{\mathrm{TIP}}\right)^{3}+s\left(k-k_{\mathrm{TIP}}\right).$$
(2)
The parameters $f_{\mathrm{SIP}}\simeq f_{\mathrm{TIP}}$ and $k_{\mathrm{SIP}}\simeq k_{\mathrm{TIP}}$
are the frequency and Floquet-Bloch wavenumber, respectively, at which
the three eigenmodes coalesce or are nearly-coalescing to form an
SIP or a TIP, respectively. We also note that, due to Floquet-Bloch
spatial harmonics, the wavenumbers $k$, $k_{\mathrm{TIP}}$, and
$k_{\mathrm{SIP}}$ in (1) and (2)
have a periodicity of $2\pi/d$, where the pitch of the unit cell
is $d$. That is, $k$, $k_{\mathrm{TIP}}$, and $k_{\mathrm{SIP}}$
in these formulae do not necessarily need to be within the fundamental
Brillouin zone. The parameter $h$ is a scalar flatness coefficient
that depends on the strength of eigenmode coupling and $s$ is a scalar
coefficient that affects the “tilt” of the TIP, as demonstrated
in Fig. 1c. Tilted inflection points and their
properties, such as improved gain-bandwidth products and power efficiency,
have also been explored in [22], using a multi-transmission
line generalization of the Pierce model [41, 42].
At an SIP, the dispersion relation local to the inflection point is
a cubic function similar to Eqn. (2)
with $s=0$, as shown in solid black in Fig. 1c.
At, and very close to, an SIP or smooth-TIP, the eigenwaves all propagate
in the same direction. That is, the group velocities of the eigenwaves
do not change sign at frequencies slightly lower and higher than $f_{\mathrm{TIP}}$.
At the inflection point of an SIP, the group velocity becomes zero.
In a smooth-TIP with $s>0$, the group velocity ($d\omega/dk$) at
the inflection point is no longer zero (blue dash-dotted line in Fig.
1c). Having the electron beam interact with an
TIP with a near-zero positive group velocity (nearly-stationary TIP)
instead of a perfectly-zero group velocity SIP may be preferable for
TWT designs, since the Pierce (interaction) impedance at the frequency
of the inflection point will be sufficiently large but the device
will not become absolutely unstable when the electron beam is introduced.
On the other hand, having an electron beam interact with a TIP that
has negative group velocity is useful for the design of BWOs. We call
this interaction between an electron beam and a cold SIP, a “three-mode
synchronization” (see [22] for more details). Beamline
interactions at points of zero group velocity, like the band-edge
[43, 44, 45, 46]
or the degenerate band-edge (DBE) [17, 47],
are to be avoided in the design of TWT amplifiers, as they are considered
a source of instability. The alternating-TIP (magenta dotted line
in Fig. 1c) is the second kind of TIP studied
with $s<0$, in which the group velocities of the eigenwaves will
change sign at frequencies slightly lower and higher than $f_{\mathrm{TIP}}$.
If the geometry of a structure can be tuned to exhibit smooth-TIPs
for one set of dimensions and alternating-TIPs for another set of
dimensions, it is expected that such a structure can be made to exhibit
an SIP.
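The classification of inflection points described in this section can be condensed into a small sketch based on the local dispersion model of Eq. (2); the coefficient values below are illustrative assumptions, not fitted to any structure in this paper.

```python
import math

# Local dispersion of Eq. (2): f(k) = f_TIP + h*(k - k_TIP)**3 + s*(k - k_TIP),
# so the slope df/dk (proportional to the group velocity) is 3*h*dk**2 + s.
def slope(h, s, dk):
    return 3.0 * h * dk ** 2 + s

def classify_ip(h, s):
    if s == 0:
        return "SIP"              # df/dk and d2f/dk2 both vanish at k_TIP
    if (s > 0) == (h > 0):
        return "smooth-TIP"       # df/dk never changes sign near k_TIP
    return "alternating-TIP"      # df/dk vanishes at dk = +-sqrt(-s/(3h))

print(classify_ip(1.0, 0.0), classify_ip(1.0, 0.5), classify_ip(1.0, -0.5))
# For the alternating case the slope crosses zero at |dk| = sqrt(0.5/3) ~ 0.41:
print(slope(1.0, -0.5, math.sqrt(0.5 / 3.0)))   # → 0 up to rounding
```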
From Pierce theory, the Pierce (interaction) impedance for a specific
Floquet-Bloch spatial harmonic $p$ and specific wavenumber corresponding
to the frequency of interest is defined as
$$Z_{\mathrm{Pierce}}(k_{p})=\frac{\left|E_{z,p}(k)\right|^{2}}{2\left[\mathrm{Re}(k_{p})\right]^{2}P(k)}$$
(3)
where $k_{p}=k+2\pi p/d$ is the wavenumber corresponding to the
appropriate $p$th Floquet-Bloch spatial harmonic, $p=0,\pm 1,\pm 2,\ldots$,
and the wavenumber $k$ is in the fundamental Brillouin zone defined
here as $kd/\pi\in[-1,1]$, i.e., with $p=0$. Furthermore, $\left|E_{z,p}(k)\right|$
is the magnitude of the phasor of the electric field component along
the center of the beam tunnel in the z direction for a given
wavenumber and $p$th Floquet-Bloch spatial harmonic, and $P(k)$
is the time-average power flux at the fundamental wavenumber corresponding
to the frequency of interest (the time average power flux is the sum
of power contributions from all spatial harmonics) [48].
To obtain the magnitude of the axial electric field phasor, corresponding
to the appropriate spatial harmonic, the complex axial electric field
along the beam tunnel axis is decomposed into Floquet-Bloch spatial
harmonics as $E_{z}(z,k)=\sum_{p=-\infty}^{\infty}E_{z,p}(k)e^{-jk_{p}z}$,
where the harmonic weights are computed as $E_{z,p}(k)=\frac{1}{d}\intop_{0}^{d}E_{z}(z,k)e^{jk_{p}z}dz$
[49]. Both the complex axial electric field $E_{z}(z,k)$
and the time-average power flux $P(k)$ through the cross section
of the unit cell are calculated for the cold structure (i.e. without
the electron beam) using the eigenmode solver in CST Studio Suite.
However, one must pay careful attention to how the phase across the
periodic boundaries is defined in the CST model to correctly compute
the interaction impedance. Since the $\exp(j\omega t)$ time convention
is used by CST, the formula for calculating $E_{z,p}(k)$ requires
a delaying phase from the lower periodic boundary to the upper periodic
boundary of the simulated unit cell, i.e. phase of $E_{z}(z,k)$ must
decrease from $z_{min}$ to $z_{max}=z_{min}+d$ for a positive value
of $k$. Conveniently, the electromagnetic energy simulated within
the enclosed vacuum space of the unit cell between periodic boundaries
in the eigenmode solver, which is based on the finite element method
(FEM) implemented in the software CST Studio Suite, is always assumed
to be 1 Joule for each eigenmode solution. Therefore, the power flux
is calculated using the formula $P=(1\,\mathrm{Joule})v_{g}/d$, where
$d$ is the unit cell pitch, and the group velocity $v_{g}=d\omega/dk$
is determined directly from the dispersion diagram via numerical differentiation
(The group velocity is the same at every spatial harmonic).
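The post-processing chain just described (harmonic decomposition, numerical differentiation for $v_{g}$, the 1-Joule power normalization, and Eq. (3)) can be sketched in a few lines. All field samples, dimensions, and the dispersion branch below are hypothetical placeholders rather than CST output; the structure of the computation is the point:

```python
import numpy as np

def trapezoid(f, x):
    # simple trapezoidal integration
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def harmonic_weight(z, Ez, k, d, p):
    # E_{z,p}(k) = (1/d) * int_0^d E_z(z,k) exp(+j*k_p*z) dz
    kp = k + 2.0 * np.pi * p / d
    return trapezoid(Ez * np.exp(1j * kp * z), z) / d

def pierce_impedance(Ezp, kp, P):
    # Eq. (3): Z = |E_{z,p}|^2 / (2 * Re(k_p)^2 * P)
    return np.abs(Ezp) ** 2 / (2.0 * np.real(kp) ** 2 * P)

# --- hypothetical cold-structure data for one eigenmode branch ---
d = 1.0e-3                                  # unit-cell pitch (m), placeholder
k = np.linspace(0.1, 3.0, 50) * np.pi / d   # Bloch wavenumbers
omega = 2*np.pi*90e9 + 1.0e7 * (k * d)**2   # toy dispersion branch (rad/s)

vg = np.gradient(omega, k)                  # v_g = d(omega)/dk, numerically
P = 1.0 * vg / d                            # W; solver stores 1 J per cell

z = np.linspace(0.0, d, 400)                # axial samples over one cell
Ez = np.exp(-1j * k[25] * z) * (1.0 + 0.3 * np.cos(2 * np.pi * z / d))

Ezp = harmonic_weight(z, Ez, k[25], d, p=0)
Z = pierce_impedance(Ezp, k[25], P[25])
print(abs(Ezp), Z > 0)
```

Note the $+jk_{p}z$ kernel in `harmonic_weight`, consistent with the $\exp(j\omega t)$ convention and phase-delay requirement discussed above.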
For the interaction impedance to be large, the ratio in Eqn.
(3), $\left|E_{z,p}(k)\right|^{2}/P(k)$, must become
large in magnitude, or the wavenumber in the denominator must become
very small (i.e., operating closer to the fundamental spatial harmonic).
At a nearly-stationary TIP, which is close to becoming an SIP, the
power flow at the inflection point is indeed smaller than the power
flow of a conventional SWG at the same wavenumber of the inflection
point, because the power flow is proportional to the group velocity.
Assuming that the magnitude of the axial electric field component
is comparable for both cases, one concludes that the Pierce impedance
will be larger for the structure with an inflection point than in
a conventional SWG, at the wavenumber corresponding to the inflection
point.
Due to this phenomenon, it is possible to obtain a moderately high,
narrowband Pierce impedance at an SIP or nearly-stationary TIP, which
is several times larger than the Pierce impedance observed in a conventional
serpentine waveguide, as demonstrated in Sec. IV.
Conventional TWT SWSs exhibit higher symmetries, such as glide symmetry
in the serpentine-type TWT or screw symmetry in the helix-type TWT
[50, 51]. Nearly parallel
dispersion curves are formed by breaking glide symmetry. Glide symmetry
can be broken in our structures by introducing minor dimensional differences
between two similar waveguide sections. This allows us to readily
tune the tilt of TIPs in our dispersion relation by simply varying
one or more of our structure’s dimensions.
Next, we show the design methodology for two kinds of SWS geometries,
whose unit cells are shown in Fig. 2,
that can be dispersion engineered to exhibit SIPs or nearly-stationary
TIPs.
III Proposed Waveguides exhibiting SIP
III-A Serpentine Ladder Waveguide (SLWG)
The SLWG was our first attempt at obtaining an SWG-like structure
capable of exhibiting an SIP; its geometry is shown in Fig.
2a. As we will show in our dispersion diagrams in
Section IV, the SLWG structure can be potentially
designed to operate as a BWO due to the backward waves that are exhibited
where the beamline interacts with an SIP or smooth-TIP. This behavior
is also illustrated by the intersection of the inflection point (solid
brown curve) and the beamline (dashed red line) in the dispersion
diagram of Fig. 3a. Furthermore, since
the guided electromagnetic modes of the SLWG structure are distributed
over a larger cross section than a conventional SWG due to the two
lateral waveguides that couple to the middle SWG, the power handling
capability of this structure may be enhanced. The SLWG structure is
a serpentine waveguide SWS which is sandwiched between two straight,
parallel rectangular waveguides with similar broad wall dimensions.
A single beam tunnel passes through the center of the SWG structure.
The structure resembles a ladder due to the rung-like appearance of
transverse serpentine sections running between the parallel straight
waveguides. The parallel waveguides are each weakly coupled periodically
to the SWG at the middle of each bend by small rectangular slots,
as shown in the longitudinal cross section of Fig. 1a
and in the dimensional markup of the unit cell, Fig. 2a.
This waveguide structure was inspired by a similar microstrip three-way
periodic structure which also exhibits SIPs [28].
However, at present our structures are only able to exhibit SIPs in
the fundamental $\mathrm{TE_{10}}$ rectangular waveguide mode by breaking
glide symmetry.
The dispersion curves corresponding to the $\mathrm{TE_{10}}$ mode
of the individual parallel rectangular waveguides must be bent and
vertically shifted in order to obtain an SIP or TIP of a desired tilt
and frequency in the SLWG dispersion relation, as will be further
described in Sec. III-C.
Introducing small dimensional differences between the top and bottom
parallel waveguide ways (i.e., changing the broad wall dimensions
$a_{t}$ and $a_{b}$ as shown in Fig. 2a) or introducing
metallic obstacles of different dimensions in the top and bottom parallel
waveguides breaks glide symmetry. Breaking glide-symmetry provides
a simple route to achieve a cold SIP in the structure’s
dispersion relation once periodic coupling is introduced between individual-waveguide
modes. These differences between the top and bottom waveguides directly
control the tilt of the TIP. In general, breaking glide symmetry is
not a necessary condition to have an SIP, as three-way waveguide structures
with unbroken glide-symmetry have been previously shown to exhibit
SIPs [28]. However, here it has been found
convenient to achieve and manipulate SIPs in structures with broken
glide-symmetry, as the SIP can occur within the fundamental $\mathrm{TE_{10}}$
mode of the SWG, below the upper band-edge associated with the bandgap
of the SWG that occurs at $k=2\pi p/d$.
III-B Three Coupled Serpentine Waveguide (TCSWG)
The three-coupled serpentine waveguide (TCSWG) structure is our second
example of a serpentine-type SWS that is capable of exhibiting SIPs
in its dispersion relation, and has its geometry shown in Fig. 2b.
The TCSWG seems to be better suited for use in a TWT than the SLWG
structure, since the example we show tends to exhibit forward waves
in proximity to the synchronization point where the beamline interacts
with an SIP or a smooth-TIP, as we will show in our dispersion diagrams
in Section IV. The TCSWG structure is constructed
similarly to the SLWG structure. However, the top and bottom rectangular
waveguides which sandwich the center SWG are also made to be serpentine
in shape, giving them a longer path length and similar dispersion
shape to that of the center SWG, as shown in the longitudinal cross-section
of Fig. 1b and in the dimensional markup of
the unit cell in Fig. 2b. No periodic obstacles
or broad wall dimension variations are required, as the mean path
lengths of each individual waveguide way may be altered to break glide-symmetry
and obtain an SIP. In our TCSWG structure, the top, bottom, and middle
SWGs each have the same pitch and broad wall dimension. The straight
section height of each SWG way ($H_{t}$, $H_{m}$, $H_{b}$) is varied
to alter the shape of its respective dispersion curve, allowing the
aforementioned SIP conditions to occur. The tilt of the TIP is controlled
by the size of the coupling slots placed between bends of adjacent
SWSs, as well as minor path length differences (between the serpentine
height dimensions $H_{t}$ and $H_{b}$) introduced to break glide
symmetry. We have also found that if the path length of the middle
SWG, mainly controlled by $H_{m}$, is made to be a scalar multiple
of the top or bottom path length that is greater than two, it is possible
to have multiple SIPs or TIPs in synchronism with the beamline at
different frequencies, though we do not show it in this paper.
Instead of inserting a beam tunnel only in the center SWG, it is also
possible to add beam tunnels to the center of the top and bottom SWGs.
This makes the structure operate with two beams propagating parallel
to each other, provided that the two beams are not too close together
and an external magnetic field can be used to confine both beams.
This dual-beam structure can potentially benefit from increased power
output at the SIP/TIP due to beamline synchronism in both the top
and bottom SWG sections.
III-C Design methodology
A minimum of three ways is required to obtain an SIP in a reciprocal,
lossless, cold structure. This is because the SIP is a synchronous
coalescence of three eigenmodes. In order to design three-way serpentine
waveguide SWSs that are both consistently synchronized to a beamline
and exhibit SIPs, we utilize a design methodology based on the work
of [52]. We use a dispersion approximation for
the initial design of both the individual straight rectangular waveguide
and serpentine waveguide ways. The design process begins by selecting
a fixed center operating frequency, spatial harmonic number $p$,
cell pitch $d$, and average electron beam velocity $u_{0}$ determined
from the cathode-anode voltage of an electron gun. The full-cell pitch
$d$ in our work is chosen to be equal to $\lambda_{g}/4$, where
$\lambda_{g}=2\pi/\left(k_{0}\sqrt{1-\left(c/(2af_{\mathrm{center}})\right)^{2}}\right)$
is the guided wavelength at the center operating frequency within
the SWG containing the beam tunnel, where $k_{0}$ is the free space
wavenumber, $c$ is the velocity of light in free space, $a$ is the
broad-wall dimension of the individual waveguide cross-section as
shown in Fig. 2, and $f_{\mathrm{center}}$
is the center operating frequency.
Starting with the average beam velocity, $u_{0}$, the beamline’s
linear relation (neglecting space charge effects) between the frequency
and average electronic phase constant, $\beta_{0}$, is
$$\beta_{0}=\frac{2\pi f}{u_{0}}.$$
(4)
Then, the dispersion for the $p$th spatial harmonic
of the individual serpentine and/or straight waveguide sections is
calculated from the relation found in [52]
$$f=f_{c}\sqrt{1+\left(\frac{a}{L}\right)^{2}\left(\frac{kd}{\pi}-2p\right)^{2}},$$
(5)
where, $f_{c}=c/(2a)$ is the cutoff frequency of the rectangular
waveguide cross-section and $L=2H+\pi d/2$ is the mean path length
of the individual uncoupled waveguide section within the unit cell,
as can be observed in the serpentine waveguide sections of Fig. 2.
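Eqs. (4) and (5) together give a quick way to screen candidate dimensions for beamline synchronism before any full-wave simulation. The sketch below uses placeholder trial dimensions of our own choosing (not the designs of this paper) to evaluate both curves over one spatial-harmonic interval:

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum (m/s)

def beamline_f(beta0, u0):
    # Eq. (4) solved for frequency: f = beta0 * u0 / (2*pi)
    return beta0 * u0 / (2.0 * np.pi)

def waveguide_f(k, d, a, L, p):
    # Eq. (5) with f_c = c/(2a) and mean path length L per unit cell
    fc = C / (2.0 * a)
    return fc * np.sqrt(1.0 + (a / L)**2 * (k * d / np.pi - 2.0 * p)**2)

# --- placeholder trial dimensions ---
d  = 0.9e-3                        # full-cell pitch (m)
a  = 1.6e-3                        # broad-wall dimension (m)
H  = 1.2e-3                        # straight-section height (m)
L  = 2.0 * H + np.pi * d / 2.0     # serpentine mean path length per cell
u0 = 0.2 * C                       # average beam velocity

p = 1
k = np.linspace(2.0 * p, 2.0 * (p + 1), 400) * np.pi / d
f_wg   = waveguide_f(k, d, a, L, p)
f_beam = beamline_f(k, u0)
err = np.abs(f_wg - f_beam)        # mismatch the design loop tries to reduce
print(err.mean() / 1e9)            # mean mismatch in GHz for this trial
```

On the dispersion diagram the beamline is evaluated at $\beta_{0}=k$, so the overlap of `f_wg` and `f_beam` directly indicates velocity synchronism over the chosen harmonic interval.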
To model the dispersion of the straight rectangular waveguide sections
of Fig. 2a, the path length simply becomes equal to
the pitch of the unit cell, $L=d$. We describe our structure using
a full unit cell notation (of period $d$) rather than the half
unit cell notation (of period $d/2$) commonly used in the literature.
The half-cell notation is often used because the beam “sees” two
beam tunnel intersections per geometric period of the full unit cell,
which only differ by a sense-inversion of $E_{z}$ fields at each
consecutive beam tunnel intersection. The primary difference is that
the path lengths of each individual waveguide of our full unit cell
are twice as long as they would be in the half-cell notation. Much
of the fundamental spatial harmonic of the full-cell dispersion diagram
lies above the light-line of $k_{0}=\pm\omega\sqrt{\mu_{0}\epsilon_{0}}$
and cannot be utilized for amplification without the use of a relativistic
beam velocity [2]. Because the full-cell
notation is being used in this paper rather than the half-cell notation,
the additional $\pi$ phase shift considered in some other papers
such as [52, 53, 54, 55]
is no longer needed, so the term $2p+1$ in [52]
has been replaced with $2p$ in (5).
As long as the coupling between waveguide sections in each unit cell
is weak, this dispersion relation will serve as a reasonably accurate
approximation of actual dispersion below the first $k=2\pi p/d$ bandgap,
which occurs at the intersection of two neighboring spatial harmonic
curves corresponding to the same individual waveguide way which contains
the beam tunnel.
The dimensions $a$ and $H$ of the SWG sections containing beam tunnels,
as shown in Fig. 2, are selected using
an optimization algorithm which minimizes the integrated frequency
error between the beamline and SWG dispersion curves over the wavenumber
interval of $kd/\pi\in\left[2p,2(p+1)\right]$. This wavenumber interval
corresponds to the frequency range over which amplification is desired
to occur in our paper, as shown in Fig. 3a
and Fig. 3b. Once suitable $a$
and $H$ dimensions are determined for the SWG with a beam tunnel,
the narrow wall $b$ dimension of our serpentine waveguide way was
chosen to be $b=a/6$ to provide adequate spacing between the SWG
beam tunnel intersections. Of course, in most SWG structures, the
$a$ and $b$ dimensions are rarely close to standard waveguide sizes due
to the need for synchronism with a specific beamline, so waveguide
transitions are needed at the input and output ports to allow connections
for standard waveguide sizes, in addition to radio frequency (RF)
windows to maintain vacuum within the tube. However, for simplicity,
we do not consider such waveguide transitions or windows in our study,
and we only focus on the beam-wave interaction region. The beam tunnel
diameter $d_{t}$ is selected based on the empirical formula from
[52],
$$d_{t}=L\alpha\left(1+\left(\frac{L}{2a}\right)^{2}\right)^{-\frac{1}{2}},$$
(6)
which minimizes the width of the bandgap caused by the beam tunnel.
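The dimension-selection and tunnel-sizing steps above can be sketched as a small search loop. The grid search below is only a stand-in for whatever optimizer is actually used, the mean absolute mismatch stands in for the integrated frequency error, and all trial dimensions are hypothetical; Eq. (6) is then applied to the winning $(a,H)$ pair with the ratio $\alpha\simeq 0.115$ quoted below:

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def sync_error(a, H, d, u0, p=1, n=200):
    # mean |f_wg - f_beam| over kd/pi in [2p, 2(p+1)]; a stand-in for the
    # integrated frequency error minimized in the design loop (Sec. III-C)
    L = 2.0 * H + np.pi * d / 2.0
    k = np.linspace(2.0 * p, 2.0 * (p + 1), n) * np.pi / d
    f_wg = (C / (2.0 * a)) * np.sqrt(1.0 + (a / L)**2
                                     * (k * d / np.pi - 2.0 * p)**2)
    f_beam = k * u0 / (2.0 * np.pi)
    return float(np.mean(np.abs(f_wg - f_beam)))

d, u0 = 0.9e-3, 0.2 * C                 # placeholder pitch and beam velocity
a_grid = np.linspace(1.2e-3, 2.0e-3, 17)
H_grid = np.linspace(0.8e-3, 1.6e-3, 17)
_, a_opt, H_opt = min((sync_error(a, H, d, u0), a, H)
                      for a in a_grid for H in H_grid)

# Eq. (6): beam-tunnel diameter minimizing the tunnel-induced bandgap
alpha = 0.115
L_opt = 2.0 * H_opt + np.pi * d / 2.0
d_t = L_opt * alpha / np.sqrt(1.0 + (L_opt / (2.0 * a_opt))**2)
print(a_opt, H_opt, d_t)
```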
The ratio of the beam tunnel radius to the free space wavelength at
the $2\pi$ frequency, $\alpha\simeq 0.115$, was used in the design
of our structures as well. The bandgap normally caused by the beam
tunnel is significantly widened due to the additional periodic reactive
loading introduced by coupling slots. Increasing the size of the coupling
slots introduces stronger coupling between the waveguide sections,
but enlarges the bandgap. Therefore, simply having a beam tunnel which
is sufficiently in cutoff appears to be adequate for these kinds of
structures. A square-shaped beam tunnel with side length $d_{t}$
may also be used in place of a conventional cylindrical beam tunnel,
as shown in Fig. 2, to make the structures
more compatible with multi-step LIGA (lithographie, galvanoformung,
abformung; German for lithography, electroplating, and molding) processes
[56]. Traveling wave tube amplifiers with
square beam tunnels may be fabricated using two-step LIGA processes
like in [57, 58, 59, 60, 61, 62, 63].
In the two-step LIGA process, the SWSs are electroformed out of two
symmetric halves which are later bonded together. However, more than
two steps will likely be necessary for our structures due to the coupling
slot lengths differing from the beam tunnel width or differing broad
wall dimensions in each waveguide way. Additionally, the use of a
square beam tunnel may slightly degrade the hot operation of TWTs,
as mentioned in [64, 61, 60].
While it is potentially challenging to fabricate such structures using
LIGA fabrication, it should not be significantly more challenging
than it already is for conventional serpentine waveguides fabricated
by two-step LIGA. For example, each additional LIGA step required
for the SLWG and TCSWG structures corresponds to a repetition of procedures
7-13 (lapping/polishing, photoresist attachment, mask, $n$th
layer alignment, exposure, development, and electroplating) after
procedure 13 shown in [61].
The dispersion condition of the waveguides (when uncoupled to each
other) utilized by our group to consistently obtain SIPs is to have
two nearly parallel uncoupled (individual-waveguide) modes cross over
a third (individual-waveguide) mode which is nearly perpendicular
to the other two modes on the dispersion diagram, as shown in Fig.
3. If a periodic coupling is introduced
between all three of the individual-waveguide modes and the nearly
parallel individual-waveguide modes are in close proximity in the
dispersion diagram, then two phase- and frequency-shifted bandgaps
will form at the intersection points. If the top band-edge of one
bandgap tangentially touches the bottom band-edge of another bandgap,
an inflection point is able to form between these band-edges, as illustrated
in the inset of Fig. 3a and Fig. 3b.
Varying the proximity of near-parallel individual-waveguide modes
for a given coupling strength directly controls the tilt of the TIP.
Near-parallel individual-waveguide modes which are close to each other
tend to form smooth-TIPs, whereas near-parallel individual-waveguide
modes which are further from each other will typically form alternating-TIPs.
Between smooth-TIP and alternating-TIP conditions, an SIP condition
is expected to exist.
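This qualitative picture can be reproduced with a toy coupled-mode model. The $3\times 3$ matrix below is purely illustrative (our own slopes, offsets, and coupling strength $\kappa$, not a model of the actual waveguide fields): two near-parallel modes cross a near-perpendicular third, and weak pairwise coupling opens two frequency-shifted avoided crossings, between which an inflection point can form.

```python
import numpy as np

def coupled_bands(k, kappa=0.02):
    # Eigen-frequencies of a toy three-mode model: modes 1 and 2 are
    # near-parallel (slope +1, small offset), mode 3 is near-perpendicular
    # (slope -1); kappa is a weak pairwise coupling (arbitrary units).
    bands = np.empty((k.size, 3))
    for i, kk in enumerate(k):
        f1 = kk                       # near-parallel mode 1
        f2 = kk + 0.05                # near-parallel mode 2, slightly offset
        f3 = 1.0 - kk                 # near-perpendicular mode
        M = np.array([[f1, kappa, kappa],
                      [kappa, f2, kappa],
                      [kappa, kappa, f3]])
        bands[i] = np.linalg.eigvalsh(M)   # sorted, real (symmetric model)
    return bands

k = np.linspace(0.3, 0.7, 801)
bands = coupled_bands(k)
# Two shifted gaps open near kk = 0.5 (where f1 = f3) and kk = 0.475
# (where f2 = f3); the band between them can carry an inflection point.
print(bands.shape, float(np.min(np.diff(bands, axis=1))))
```

Varying the offset between `f1` and `f2` in this toy model plays the role of the glide-symmetry-breaking geometric difference that controls the tilt of the TIP.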
Once the basic dimensions of the SWGs with beam tunnels are established,
coupling slots are positioned between the bends of adjacent waveguide
ways to periodically couple the individual-waveguide modes. The coupling
slots have a vertical thickness $t$ in the y-direction, width $w$
in the z-direction, and length $l$ in the x-direction, as shown in
Fig. 2. The length of the coupling slot,
which also lies along the same axis as the broad wall dimension of the waveguide
ways, is the dimension that strongly controls the evanescent coupling
of modes between waveguide ways. The width of the coupling slot controls
the wave impedance within the slot, and the slot thickness primarily
controls the extent of evanescent decay and phase delay for waves
which are below the slot cutoff frequency. In this paper, we use a
coupling slot length equal to half the $a$ dimension of the SWG section
containing the beam tunnel. The slot width and thickness are arbitrarily
chosen in this paper to demonstrate that SIPs can be attained in our
structures. Larger slot lengths will strengthen mode coupling, but
the dispersion relation of the actual structure will be strongly dissimilar
to the dispersion relation of the individual uncoupled waveguide modes
from before. While large slot lengths and widths enhance mode coupling,
the reflections introduced by the periodic slot reactance in a finite-length
structure may also make the hot structure more susceptible to regenerative
oscillations.
Finally, a small geometric difference is introduced between the waveguide
sections corresponding to near-parallel dispersion curves to control
the frequency and wavenumber spacing between near-parallel dispersion
curves. This directly controls the tilt of the TIP. Large geometric
differences typically result in an alternating-TIP, whereas small
geometric differences make the TIP smooth. For the TCSWG structure,
geometric differences may be introduced as a height difference $\Delta H=\left|H_{t}-H_{b}\right|$
between the top and bottom serpentine sections ($H_{t}$ and $H_{b}$
in Fig. 2b, respectively). For the SLWG structure,
a broad wall dimension difference $\Delta a=\left|a_{t}-a_{b}\right|$
may be introduced between the top and bottom straight waveguide sections
($a_{t}$ and $a_{b}$ in Fig. 2a, respectively).
Alternatively, geometric differences may be introduced in periodic
capacitive obstacles loading the top and bottom straight waveguides
of the SLWG structure to achieve the same effect, reducing the number
of steps required for LIGA fabrication. However, it may not be desirable
to use periodic obstacles due to the large reflections they introduce
in the top and bottom waveguide sections of the SLWG structure. Conversely,
the use of unloaded parallel waveguides also reduces the complexity
and reflection coefficients of SWS at the cost of more steps with
LIGA fabrication.
Once all initial structure dimensions are determined, the full unit
cell geometry may be simulated in an eigenmode solver to obtain the
Pierce (interaction) impedance and $\omega-k$ dispersion relation
with real-valued wavenumber, $k$. In this paper, we use the software
CST Studio Suite to obtain the modal dispersion for each periodic
structure. Using simulated dispersion data, the $a$ and $H$ dimensions
of SWG ways containing beam tunnels are then further tuned to recover
beamline synchronism. Adjusting the $a$ dimensions of each serpentine
waveguide shown in Fig. 2a ($a_{m}$, for the middle
SWG of the SLWG structure; $a_{t}$ and $a_{b}$ for the top and bottom
SWGs of the TCSWG structure) primarily serves to shift each respective
SWG dispersion curve up or down. Adjusting the $H$ dimension primarily
controls the slope of the SWG dispersion curve, which may have changed
due to periodic loading from the slots. When the modal dispersion
curves and TIP are satisfactorily synchronized to the beamline, the
geometric difference between waveguide sections corresponding to parallel
dispersion curves (for example, $\Delta a$ in the SLWG structure
or $\Delta H$ in the TCSWG structure) may then be tuned to adjust
the tilt of the TIP to make it close to an SIP.
IV Results
Following the aforementioned design procedures of the previous section,
we obtain the real part of the modal dispersion relation (for the
lossless and cold structures shown, imaginary parts of the dispersion
relation correspond to evanescent modes, e.g. below the cutoff frequency
of the waveguide or in bandgaps where neighboring spatial harmonics
meet on the dispersion diagram) for both SLWG and two-beam TCSWG structures,
as shown in Figs. 4a and 4b,
obtained using the methods shown in Appendix B
and verified using the eigenmode solver in CST Studio Suite. In the
insets of these figures, we also demonstrate how it is possible to
vary the tilt of the inflection point for three different cases simply
by altering the difference in straight waveguide widths ($a_{t}$
and $a_{b}$ for top and bottom rectangular waveguides, respectively)
for the SLWG structure, or the serpentine waveguide heights ($H_{t}$
and $H_{b}$) of the top and bottom waveguides. The dimensions of each
case are provided in Appendix A. From
the dispersion relation shown for the SLWG in Fig. 4a,
the point where the beamline intersects an SIP or smooth-TIP (solid
black and dashed blue curves, respectively) is a backward-wave interaction,
making the SLWG design better suited for use in a BWO rather than
a TWT. While an alternating TIP (magenta dotted curve in the inset
of Fig. 4a) might enable forward-wave interactions
in the SLWG structure, the upper and lower band-edges on either side
of the inflection point still pose a significant risk for oscillations.
Backward wave oscillators constructed with the SLWG structure may
also benefit from improved power handling capability compared to a
conventional SWG BWO due to the guided electromagnetic mode being
distributed over a larger cross section in the two lateral waveguides.
From the dispersion relation for the TCSWG structure in Fig. 4b,
the point where the beamline intersects the inflection point is a
forward wave for the SIP and smooth-TIP (solid black and dashed blue
curves, respectively), making the TCSWG structure a better choice
for TWT designs. While the TCSWG structure shown is designed for velocity-synchronism
with an electron beam at the inflection point, which is inherently
a narrowband phenomenon, this does not mean that the bandwidth of
TWTs built using the TCSWG structure is severely limited. Because
we initially designed these structures with broadband beam-wave synchronization
in mind, there are still broad frequency ranges along dispersion branches
above and below the designed inflection points, where the beamline
is velocity-synchronized to the guided waves. It is still possible
to amplify waves over broad bandwidths as long as the TWT is stable
when the beam is introduced.
Additionally, metal losses and manufacturing errors may perturb the
dispersion relation of an SIP or TIP from its intended design, so
such effects need to be considered in the fabrication and testing
of SLWG and TCSWG structures. Waveguide structures exhibiting EPDs
are known to be quite sensitive to losses and fabrication tolerances [23, 26, 27, 28].
For the example SLWG structure in this paper, the geometric difference
in $\Delta a$ needed to go from an SIP to a smooth/alternating TIP
in the inset of Fig. 4a is $30\ \mathrm{\mu m}.$
For the example TCSWG structure, a difference in $\Delta H$ of $50\ \mathrm{\mu m}$
is needed to go from an SIP to a smooth/alternating TIP in the inset
of Fig. 4b. The work of Li et
al. [58] shows that it is possible to fabricate
serpentine waveguide structures with $6\ \mathrm{\mu m}$ depth tolerance
(for the broad-wall dimension $a$ of our structures) and $2\ \mathrm{\mu m}$
width tolerance between sidewalls using ultraviolet LIGA (UV-LIGA)
fabrication techniques. A fabrication tolerance of $6\ \mathrm{\mu m}$
(20% of the difference in $\Delta a$ needed to go from an SIP to
a smooth/alternating TIP shown in Fig. 4a
for the SLWG structure) is acceptable to obtain a SLWG structure that
exhibits a nearly-stationary inflection point. For the TCSWG structure,
a tolerance of $2\ \mathrm{\mu m}$ for sidewall widths (4% of the
difference in $\Delta H$ needed to go from an SIP to a smooth/alternating
TIP shown in Fig. 4b for the TCSWG
structure) is even better. Though, of course, these tolerances may
be relaxed for designs that have an SIP/TIP at lower frequencies.
Due to the neighboring upper and lower band edges in proximity to
the inflection point, one must also carefully “aim” the beamline,
which is directly controlled by the accelerating voltage of the electron
gun, to avoid striking dispersion branches that have zero group velocity,
such as the band edge, as it can lead to instability [43, 44, 45, 46].
Because there are upper and lower band edges at frequencies close
to the inflection point, the tuneability of the beam voltage is limited.
For instance, in Fig. 4a, neglecting space
charge (i.e. at low beam currents), the average beam velocity can
only be varied within approximately $u_{0}=0.200c\pm 0.001c$ to avoid
striking the neighboring upper or lower band edges on other dispersion
branches. This beam velocity range corresponds to an approximate kinetic
equivalent voltage tuneable range of $V_{0}=10.54\pm 0.11\ \mathrm{kV}$
from the relativistic relation $V_{0}=\left(c^{2}/\eta\right)\left\{\left[1-\left(u_{0}/c\right)^{2}\right]^{-1/2}-1\right\}$ in
[65], where $\eta$ is the charge-to-mass
ratio of an electron at rest. For the case of the TCSWG structure
we show, the tuneable range of the beam velocity equivalent kinetic
voltage is better due to the neighbouring upper/lower band edges near
the inflection point being separated at higher/lower frequencies,
respectively, as can be observed in Fig. 4b.
However, there is still a risk of oscillations at the lower band edge
corresponding to a frequency of approx. 133 GHz for the structure
shown, so the tuneable range of beam velocity for the TCSWG structure
is $u_{0}=0.300c\pm 0.002c$, which corresponds to an approximate beam
voltage range of $V_{0}=24.67\pm 0.36\ \mathrm{kV}$.
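The quoted voltages can be checked directly from the beam velocities, assuming the standard relativistic relation $V_{0}=(c^{2}/\eta)\{[1-(u_{0}/c)^{2}]^{-1/2}-1\}$ with the electron rest energy $c^{2}/\eta\approx 511.0\ \mathrm{kV}$:

```python
REST_VOLTAGE = 510998.95  # c^2/eta for an electron, in volts (m_e c^2 / e)

def kinetic_voltage(u0_over_c):
    # V0 = (c^2/eta) * ([1 - (u0/c)^2]^(-1/2) - 1): relativistic
    # kinetic voltage for a given normalized beam velocity
    gamma = (1.0 - u0_over_c**2) ** -0.5
    return REST_VOLTAGE * (gamma - 1.0)

print(round(kinetic_voltage(0.200) / 1e3, 2))  # 10.54 kV (SLWG beam center)
print(round(kinetic_voltage(0.300) / 1e3, 2))  # 24.67 kV (TCSWG beam center)
```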
One must also consider how the electron beam perturbs the inflection
point in the hot dispersion relation for the three-mode synchronization
regime, as was explained in [22]. When an electron
beam is introduced to the system, the SIP or TIP which existed in
the cold dispersion relation will be deformed in the hot dispersion
relation. The amount that the electron beam perturbs the inflection
point depends primarily on the dc beam current density, with higher
currents causing larger perturbations to the inflection point in the
hot dispersion relation.
Similar to other three-way waveguide power dividers and combiners
explored in papers such as [66, 67, 68],
our structures also exhibit directional coupler-like behavior at their
SIP/TIP frequencies, as demonstrated with a finite-length structure
of 32 unit cells, shown in Figs. 5a
and 5b. The S parameters of
the finite length structure were calculated using the methods explained
in Appendix B. The port numbering scheme for
our structures is that the input ports at the electron-gun end of
the structure (on the left) are odd-numbered from top to bottom, and
output ports at the collector end of our structure (on the right)
are even-numbered from top to bottom. While it is highly important
to consider the effect of waveguide transitions and RF windows, our
study focuses primarily on the interaction region of linear beam tubes,
so for brevity we do not consider the effect of input/output coupling
structures; i.e. we only consider the S-parameters at reference planes
between the SWS and where an RF window would be placed in a fabricated
device. This directional coupler-like behavior enables distributed
power extraction (DPE) which can be directed either backward toward
the cathode-end of the structure or forward toward the collector-end
of the structure. However, only forward-directive DPE may be desired
for amplification, due to the potential risk of regenerative oscillations
introduced by amplified waves returning to the electron gun-end of
the structure, like in [69]. The introduction
of distributed power extraction for linear beam tubes was necessary
to conceive the degenerate (or exceptional) synchronization in the
hot systems studied in [31, 30, 29].
For the TCSWG structure, increasing the path length of the middle
SWG can allow forward-directive DPE to occur at certain frequencies
and for power to be extracted in the top, middle, and bottom SWG outputs.
However, there may still be SIP/TIP frequencies where backward-directive
DPE continues to occur. Shifting the frequency above or below the
SIP/TIP in the vicinity of the inflection point directly controls
whether the top or bottom SWG section contributes more power to the
output of the middle SWG, as can be seen from the scattering parameters.
This dual beam TCSWG structure may also be excited either from the
middle SWG input or top and bottom SWG inputs to achieve amplification
and DPE at the SIP/TIP frequency if coupling is sufficient. Increasing
the size of coupling slots enhances DPE, however this also exacerbates
reflections in the finite-length structure and increases risk of BWO.
Because these structures can still be designed to be well-matched
with small coupling slots, longer finite-length structures and higher
beam currents may potentially be used before unstable BWO occurs in
PIC simulations and hot testing.
Finally, we compute the Pierce impedance as discussed in Section II
for a fourth case of the SLWG structure in the vicinity of a nearly-stationary
smooth-TIP, whose dispersion relation is similar to the black curve
but with an inflection point not as tilted as that of the blue curve,
shown in Fig. 4a, with dimensions provided
in Appendix A. We demonstrate the benefit
of using nearly-stationary TIPs to enhance the Pierce impedance of
serpentine-like structures, as shown in Fig. 6.
We also compare the Pierce impedance of the SLWG (solid blue line)
to the Pierce impedance of a conventional individual SWG (red
dotted line) (i.e., the serpentine of the SLWG structure with the
coupling slots and straight waveguide ways removed). We find that the Pierce
impedance of the full SLWG structure is several times higher than
a conventional simple serpentine waveguide at the frequency corresponding
to a nearly-stationary inflection point. We also note that, while
the interaction impedance of the SLWG appears quite small relative
to the simple SWG at frequencies beyond the inflection point, the
interaction impedance is comparable to that of an SWG on other higher/lower
frequency branches of the SLWG’s dispersion diagram, which are not
shown.
Similarly, we compute the Pierce impedance for a fourth case of
the dual-beam TCSWG structure in the vicinity of the nearly-stationary
smooth-TIP, whose dispersion is similar to the black curve but
not as tilted as the blue curve shown in the inset of Fig. 4b,
with dimensions provided in Appendix A.
We demonstrate that the nearly-stationary TIP can be used to enhance
the Pierce impedance in both beam tunnels, as shown in Fig. 7.
We compare the Pierce impedance of the TCSWG structure (solid blue
line) to the Pierce impedance of a conventional individual SWG (i.e.
with coupling slots and adjacent waveguide sections removed) for each
respective beam tunnel (dashed red for the top SWG, dotted magenta
for the bottom SWG). We find that, just like with the SLWG structure,
the Pierce impedance is several times higher than a conventional SWG
at the frequency corresponding to the nearly-stationary inflection
point. Interestingly, below the frequency of the TIP, the interaction
impedance in the lower beam tunnel is higher than in the upper beam
tunnel, whereas the opposite occurs at frequencies above the TIP.
Since glide symmetry is slightly broken due to the top and bottom
SWGs having different $H_{t}$ and $H_{b}$ dimensions, respectively,
the dispersion branches of the individual SWGs (red dashed line for
the top SWG and magenta dotted line for the bottom SWG) are dissimilar.
Due to broken glide symmetry, the peak values of interaction impedance
in the top and bottom tunnels are also different at the inflection
point. If an electron beam is introduced to the SLWG or TCSWG structures
and the beam is velocity synchronized to the SIP/TIP, we say that
the electron beam is synchronized to three degenerate modes, i.e.,
we have three-mode synchronization, as was described in [22].
Under the three-mode synchronization regime, the Pierce gain parameter
$C$ will also become larger than that of a conventional SWG TWT due
to the enhanced interaction impedance.
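The step from interaction impedance to gain can be sketched numerically. The formula $C=(Z_{P}I_{0}/4V_{0})^{1/3}$ is the classical Pierce definition; the beam voltage, beam current, and impedance values below are illustrative assumptions, not figures from this paper.

```python
# Sketch: how an enhanced Pierce (interaction) impedance Z_P translates into
# the classical Pierce gain parameter C = (Z_P * I0 / (4 * V0))**(1/3).
# All numeric values are illustrative, not taken from this paper.

def pierce_gain_parameter(z_pierce_ohm, i0_amp, v0_volt):
    """Classical Pierce gain parameter C (dimensionless)."""
    return (z_pierce_ohm * i0_amp / (4.0 * v0_volt)) ** (1.0 / 3.0)

# Illustrative beam: 100 mA at 12 kV.
c_swg = pierce_gain_parameter(2.0, 0.1, 12e3)    # conventional SWG impedance
c_tip = pierce_gain_parameter(10.0, 0.1, 12e3)   # 5x impedance near the TIP

# Because C scales as the cube root of Z_P, a 5x impedance enhancement
# raises C by a factor of 5**(1/3), i.e. about 1.71.
print(c_swg, c_tip, c_tip / c_swg)
```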
V Conclusion
We have showcased two novel dispersion-engineered three-way SWSs for
use in linear electron beam devices: the SLWG and TCSWG geometries.
Such geometries are capable of exhibiting SIPs or TIPs in their dispersion
relations, and larger Pierce (interaction) impedance than that of
a conventional serpentine waveguide at the frequency corresponding
to the inflection point. Using our design methodology, we were able
to demonstrate simple conditions which enable one or more SIP/TIP
to occur in a three-way waveguide periodic structure once weak periodic
coupling is introduced between individual waveguides. We have shown
the first known example of a millimeter-wave SWS for linear beam tubes
which exhibits stationary or nearly-stationary inflection points in
its dispersion relation. A previous example of a waveguide which exhibits
an SIP at radio frequencies was demonstrated using microstrip technology
in [28], and was the inspiration for this paper.
What is of interest in both of our introduced structures is that the
group velocity in the vicinity of the SIP/TIP may be easily controlled
by slightly breaking glide-symmetry in our geometries. With weak coupling
between waveguides, the dispersion relation of the introduced structures
will not be significantly different from the dispersion relations
of the individual (uncoupled) waveguides. Due to the “three-mode
synchronization” regime, the Pierce impedance, and consequently
the Pierce gain parameter, can be drastically enhanced over narrow
bandwidths near the SIP/TIP, when compared to a conventional serpentine
waveguide commonly used for millimeter-wave TWTs and BWOs. We believe
that the introduction of the three-mode synchronization regime in
such structures may enable the design of more efficient, compact linear
beam tubes, as it was speculated in [22]. There is
a lot of room for improvement if one wishes to focus on improving
the interaction impedance or bandwidth further using SIPs. However,
we believe that the design methodology shown in this paper is still
useful for designing realistic millimeter-wave SWSs with inflection
points in their dispersion relation for use in linear beam tubes. Additionally,
we have shown how to obtain TIPs with either backward or forward mode
interactions simply by varying how much glide-symmetry is broken
in our structures, potentially enabling the design of either BWOs
or TWTs from the same initial geometry. We have also showcased the
directional coupler-like properties of both the TCSWG and SLWG structures
at frequencies near the SIP/TIP. This property may be exploited
in the design of specialized TWTs or BWOs which can be simultaneously
excited from multiple ports or simultaneously drive multiple loads.
Appendix A Dimensions used in Figures
For the dispersion of the individual-waveguide ways of the SLWG structure
in Fig. 3a (i.e., without coupling), the
dimensions are identical to those in Table I
with $\Delta a=0.1$ mm, with the exception of $H=0.843$ mm,
which differs from the final $H$ dimension shown in Fig.
4a. Note that, unlike the TCSWG structure,
the SLWG structure only has one $H$ dimension.
For the dispersion of the individual waveguide ways of the TCSWG structure
in Fig. 3b, the dimensions are
identical to those in Table I with the exception
of the normalized beam velocity being $u_{0}/c=0.20$ and the top,
middle, and bottom SWG heights being $H_{t}=0.518$ mm, $H_{m}=1.161$ mm,
and $H_{b}=0.418$ mm, respectively.
Table I reports the dimensions for Fig. 4a
and Fig. 4b as follows. In the
inset of Fig. 4a, we have the dimensions
used in the SLWG column of Table I , with
$\Delta a=0.21$ mm for the SIP (black solid lines), $\Delta a=0.24$ mm
for the alternating-TIP (group velocity alternates in sign at frequencies
slightly above or below TIP frequency) (magenta dotted line), and
$\Delta a=0.18$ mm for the smooth-TIP (group velocity does not
change sign at frequencies slightly above or below TIP frequency)
(blue dashed line).
In the inset of Fig. 4b, we have
the dimensions used in the TCSWG column of Table I,
with $\Delta H=0.15$ mm for the SIP (black solid line), $\Delta H=0.10$ mm
for the smooth-TIP (blue dashed line), and $\Delta H=0.20$ mm for
the alternating-TIP (magenta dotted line).
For the Pierce impedance plot of Fig. 6,
we use the dimensions of the SLWG column of Table I,
with $\Delta a=0.20$ mm, which provides a smooth-TIP that is almost
stationary (i.e., it is very close to an SIP). For the Pierce impedance
plot of Fig. 7, we use the dimensions
of the TCSWG column of Table I, with $\Delta H=0.125$ mm,
which provides a smooth-TIP that is almost stationary. We simulated
the SLWG and TCSWG structures in the eigenmode solver of CST Studio
Suite 2019 with a step size of $0.5^{\circ}$ in the boundary phase
and a tetrahedral mesh setting of 50 cells per max box edge. We found
that the peak value of the interaction impedance at an SIP or nearly-stationary
TIP for both structures depends greatly on mesh density and step size
of boundary phase.
Appendix B T Parameters
Using frequency-domain simulations, it is possible to rapidly and
accurately compute the complex dispersion and approximate finite-length
S-parameters of a periodic structure without an eigenmode solver using
the frequency-dependent scattering parameters (S parameters) of a
single unit cell having separable pairs of ports for each way of the
multi-way waveguide structure. The benefit of computing dispersion
in this manner is that it is possible to obtain both real and imaginary
solutions for the Bloch wavenumber, $k$, whereas it is only possible
to obtain the real part of the Bloch wavenumber using the eigenmode
solver of CST Studio Suite. For lossless and cold structures, the
imaginary part of the Bloch wavenumber corresponds to evanescent modes,
e.g. below the cutoff frequency of the waveguide or in bandgaps that
form where neighboring spatial harmonics meet. It is also notably
faster to obtain the unit cell S parameters using the frequency domain
solver than it is to directly obtain the modal dispersion from a phase
sweep of the periodic boundary in the eigenmode solver. Using this
method, it was possible for us to tune the geometry of our structures
to obtain beamline synchronism and desired inflection point tilt in
a reasonable time frame. The real dispersion obtained by this method
was found to be in excellent agreement with CST Studio Suite’s eigenmode
solutions.
The method we use to obtain the complex dispersion and approximate
finite-length S parameters of our periodic structures involves converting
between S parameters and scattering transmission matrices (T parameters)
in intermediate steps. T parameters may be directly obtained through
algebraic manipulation of each S parameter, as both are defined in
terms of the same $a$- and $b$- waves.
$$\left[\begin{array}[]{c}b_{1}\\
b_{2}\\
b_{3}\\
b_{4}\\
b_{5}\\
b_{6}\end{array}\right]=\underline{\mathrm{\mathbf{S}}}\left[\begin{array}[]{c}a_{1}\\
a_{2}\\
a_{3}\\
a_{4}\\
a_{5}\\
a_{6}\end{array}\right]\iff\left[\begin{array}[]{c}b_{2}\\
a_{2}\\
b_{4}\\
a_{4}\\
b_{6}\\
a_{6}\end{array}\right]=\underline{\mathrm{\mathbf{T}}}\left[\begin{array}[]{c}a_{1}\\
b_{1}\\
a_{3}\\
b_{3}\\
a_{5}\\
b_{5}\end{array}\right]$$
(7)
By converting our frequency-dependent S parameters to T parameters
at each frequency for the unit cell, it is possible to solve the following
Floquet-Bloch eigenvalue problem at each frequency to obtain the complex
dispersion diagram for the modes of the periodic structure [70, 71, 42],
$$\underline{\mathrm{\mathbf{T}}}(\mathit{z+d,z})\boldsymbol{\Psi}(\mathit{z})=e^{-jkd}\boldsymbol{\Psi}(\mathit{z}),$$
where, $\boldsymbol{\Psi}$ is the complex state vector for the unit
cell composed of $a$ and $b$ waves at each port, as shown in Eqn.
7 and Fig. 5a
and Fig. 5b, and $d$ is the
unit cell pitch. In other words, the wavenumbers of the fundamental
spatial harmonic may be evaluated directly from the eigenvalues $\lambda=\exp(-jkd)$
of the T matrix through the relation
$$k=\frac{-\ln\left(\lambda\right)}{jd}.$$
The modal dispersion diagrams in Fig. 4a
and Fig. 4b (only showing the branches
with purely real $k$) are calculated from the eigenvalue problem
shown above and are verified using the eigenmode solver of CST Studio
Suite. It is also possible to estimate the S parameters of our periodic
structure with finite length by cascading T matrices and converting
the resultant parameters back into S parameters using the same algebraic
manipulation as before [72]. This is how
the finite-length S parameters shown in Fig. 5a
and Fig. 5b were computed.
This method may be readily generalized for 2N-port periodic
structures. It is also possible to solve the eigenvalue problem for spatial
harmonics other than the fundamental, as the solutions $k_{n}$ are
periodic as $k_{n}=k+2\pi n/d$, with $n=0,\pm 1,\pm 2,...$.
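The procedure above can be sketched for the simplest (single-way, 2-port) case; the 6-port SLWG/TCSWG analysis follows the same pattern with $6\times 6$ block matrices. The matched-line S matrix and unit-cell pitch below are toy assumptions standing in for a simulated unit cell, not data from this paper.

```python
import numpy as np

# Sketch of the S-to-T Bloch analysis for a 2-port unit cell. The T matrix
# is defined, as in Eqn. 7, by [b2, a2] = T [a1, b1], and the Bloch
# wavenumber follows from the eigenvalues lambda = exp(-j k d).

def t_from_s_2port(S):
    """Scattering-transmission matrix T such that [b2, a2] = T [a1, b1]."""
    S11, S12 = S[0, 0], S[0, 1]
    S21, S22 = S[1, 0], S[1, 1]
    return (1.0 / S12) * np.array([[S21 * S12 - S11 * S22, S22],
                                   [-S11, 1.0]], dtype=complex)

d = 1e-3                       # unit-cell pitch (m), illustrative
theta = 0.7                    # electrical length k*d of the cell (rad)
S = np.array([[0, np.exp(-1j * theta)],
              [np.exp(-1j * theta), 0]])   # matched, lossless line section

T = t_from_s_2port(S)
lam = np.linalg.eigvals(T)
k = 1j * np.log(lam) / d       # k = -ln(lambda)/(j d) = j ln(lambda)/d

# For this toy cell the two Bloch solutions are k*d = +/- theta, with
# zero imaginary part (no evanescent modes for a lossless matched line).
print(sorted(np.round(k.real * d, 6)))
```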
Due to the presence of periodic coupling slots in both of our unit
cell designs, our single-cell frequency domain model for our structures
must be slightly modified in order to avoid placing ports at coupling
slots or in waveguide bends. Our unit cells were modified for the
frequency domain solver by horizontally shifting the reference planes
of all ports by $d/4$ along each waveguide path and adding de-embedding
to the ports to account for small reflections caused by transitions
between straight waveguide sections and waveguide bends and slots,
as shown in Fig. 8a and Fig. 8b.
This modification does not appear to significantly affect the dispersion
relation of the periodic structure under study.
Acknowledgments
This material is based upon work supported by the GAANN Fellowship,
by the Air Force Office of Scientific Research under Award FA9550-18-1-0355
and under the MURI Award FA9550- 20-1-0409 administered through the
University of New Mexico. We thank Dassault Systèmes for providing
CST Studio Suite, which has been instrumental in this work. The authors
would also like to thank our colleague, Mr. Miguel Saavedra, for his
assistance in fixing our interaction impedance calculation.
References
[1]
J. Pierce, “Waves in electron streams and circuits,” Bell System
Technical Journal, vol. 30, no. 3, pp. 626–651, 1951.
[2]
C. Paoloni, D. Gamzina, R. Letizia, Y. Zheng, and N. C. Luhmann Jr,
“Millimeter wave traveling wave tubes for the 21st century,” Journal
of Electromagnetic Waves and Applications, vol. 35, no. 5, pp. 567–603,
2021.
[3]
A. Figotin and I. Vitebskiy, “Electromagnetic unidirectionality in magnetic
photonic crystals,” Physical Review B, vol. 67, no. 16, p. 165210,
2003.
[4]
——, “Gigantic transmission band-edge resonance in periodic stacks of
anisotropic layers,” Physical review E, vol. 72, no. 3, p. 036619,
2005.
[5]
——, “Frozen light in photonic crystals with degenerate band edge,”
Physical Review E, vol. 74, no. 6, p. 066613, 2006.
[6]
——, “Slow-wave resonance in periodic stacks of anisotropic layers,”
Physical Review A, vol. 76, no. 5, p. 053839, 2007.
[7]
——, “Slow light in photonic crystals,” Waves in Random and Complex
Media, vol. 16, no. 3, pp. 293–382, 2006.
[8]
——, “Slow wave phenomena in photonic crystals,” Laser & Photonics
Reviews, vol. 5, no. 2, pp. 201–213, 2011.
[9]
H. Ramezani, S. Kalish, I. Vitebskiy, and T. Kottos, “Unidirectional lasing
emerging from frozen light in nonreciprocal cavities,” Physical review
letters, vol. 112, no. 4, p. 043904, 2014.
[10]
M. B. Stephanson, K. Sertel, and J. L. Volakis, “Frozen modes in coupled
microstrip lines printed on ferromagnetic substrates,” IEEE microwave
and wireless components letters, vol. 18, no. 5, pp. 305–307, 2008.
[11]
M. A. Othman, F. Yazdi, A. Figotin, and F. Capolino, “Giant gain enhancement
in photonic crystals with a degenerate band edge,” Physical Review B,
vol. 93, no. 2, p. 024301, 2016.
[12]
R. Almhmadi and K. Sertel, “Frozen-light modes in 3-way coupled silicon ridge
waveguides,” in 2019 United States National Committee of URSI National
Radio Science Meeting (USNC-URSI NRSM). IEEE, 2019, pp. 1–2.
[13]
B. Paul, N. K. Nahar, and K. Sertel, “Frozen mode in coupled silicon ridge
waveguides for optical true time delay applications,” JOSA B,
vol. 38, no. 5, pp. 1435–1441, 2021.
[14]
G. Mumcu, K. Sertel, and J. L. Volakis, “Lumped circuit models for degenerate
band edge and magnetic photonic crystals,” IEEE microwave and wireless
components letters, vol. 20, no. 1, pp. 4–6, 2009.
[15]
M. A. Othman, V. A. Tamma, and F. Capolino, “Theory and new amplification
regime in periodic multimodal slow wave structures with degeneracy
interacting with an electron beam,” IEEE Transactions on Plasma
Science, vol. 44, no. 4, pp. 594–611, 2016.
[16]
J. R. Burr, N. Gutman, C. M. de Sterke, I. Vitebskiy, and R. M. Reano,
“Degenerate band edge resonances in coupled periodic silicon optical
waveguides,” Optics express, vol. 21, no. 7, pp. 8736–8745, 2013.
[17]
M. A. Othman, M. Veysi, A. Figotin, and F. Capolino, “Low starting electron
beam current in degenerate band edge oscillators,” IEEE Transactions
on Plasma Science, vol. 44, no. 6, pp. 918–929, 2016.
[18]
M. Y. Nada, M. A. Othman, and F. Capolino, “Theory of coupled resonator
optical waveguides exhibiting high-order exceptional points of degeneracy,”
Physical Review B, vol. 96, no. 18, p. 184304, 2017.
[19]
D. Oshmarin, F. Yazdi, M. A. Othman, J. Sloan, M. Radfar, M. M. Green, and
F. Capolino, “New oscillator concept based on band edge degeneracy in lumped
double-ladder circuits,” IET Circuits, Devices & Systems, vol. 13,
no. 7, pp. 950–957, 2019.
[20]
A. F. Abdelshafy, D. Oshmarin, M. A. Othman, M. M. Green, and F. Capolino,
“Distributed degenerate band edge oscillator,” IEEE Transactions on
Antennas and Propagation, vol. 69, no. 3, pp. 1821–1824, 2020.
[21]
A. M. Zuboraj, B. K. Sertel, and C. J. L. Volakis, “Propagation of degenerate
band-edge modes using dual nonidentical coupled transmission lines,”
Physical Review Applied, vol. 7, no. 6, p. 064030, 2017.
[22]
F. Yazdi, M. A. Othman, M. Veysi, A. Figotin, and F. Capolino, “A new
amplification regime for traveling wave tubes with third-order modal
degeneracy,” IEEE Transactions on Plasma Science, vol. 46, no. 1, pp.
43–56, 2017.
[23]
M. A. Othman, X. Pan, G. Atmatzakis, C. G. Christodoulou, and F. Capolino,
“Experimental demonstration of degenerate band edge in metallic periodically
loaded circular waveguide,” IEEE Transactions on Microwave Theory and
Techniques, vol. 65, no. 11, pp. 4037–4045, 2017.
[24]
A. Chabanov, “Strongly resonant transmission of electromagnetic radiation in
periodic anisotropic layered media,” Physical Review A, vol. 77,
no. 3, p. 033811, 2008.
[25]
T. Mealy and F. Capolino, “General conditions to realize exceptional points of
degeneracy in two uniform coupled transmission lines,” IEEE
Transactions on Microwave Theory and Techniques, vol. 68, no. 8, pp.
3342–3354, 2020.
[26]
A. F. Abdelshafy, M. A. Othman, D. Oshmarin, A. T. Almutawa, and F. Capolino,
“Exceptional points of degeneracy in periodic coupled waveguides and the
interplay of gain and radiation loss: Theoretical and experimental
demonstration,” IEEE Transactions on Antennas and Propagation,
vol. 67, no. 11, pp. 6909–6923, 2019.
[27]
D. Oshmarin, A. F. Abdelshafy, A. Nikzamir, M. M. Green, and F. Capolino,
“Experimental demonstration of a new oscillator concept based on degenerate
band edge in microstrip circuit,” arXiv preprint arXiv:2109.07002,
2021.
[28]
M. Y. Nada, T. Mealy, and F. Capolino, “Frozen mode in three-way periodic
microstrip coupled waveguide,” IEEE Microwave and Wireless Components
Letters, vol. 31, no. 3, pp. 229–232, 2020.
[29]
T. Mealy, A. F. Abdelshafy, and F. Capolino, “High-power backward-wave
oscillator using folded waveguide with distributed power extraction operating
at an exceptional point,” IEEE Transactions on Electron Devices,
vol. 68, no. 7, pp. 3588–3595, 2021.
[30]
——, “High-power x-band relativistic backward-wave oscillator with
exceptional synchronous regime operating at an exceptional point,”
Physical Review Applied, vol. 15, no. 6, p. 064021, 2021.
[31]
——, “Exceptional point of degeneracy in a backward-wave oscillator with
distributed power extraction,” Physical Review Applied, vol. 14,
no. 1, p. 014078, 2020.
[32]
A. Figotin and I. Vitebsky, “Nonreciprocal magnetic photonic crystals,”
Physical Review E, vol. 63, no. 6, p. 066609, 2001.
[33]
A. Figotin and I. Vitebskiy, “Oblique frozen modes in periodic layered
media,” Physical Review E, vol. 68, no. 3, p. 036609, 2003.
[34]
G. Mumcu, K. Sertel, J. L. Volakis, I. Vitebskiy, and A. Figotin, “Rf
propagation in finite thickness unidirectional magnetic photonic crystals,”
IEEE transactions on antennas and propagation, vol. 53, no. 12, pp.
4026–4034, 2005.
[35]
M. Sumetsky, “Uniform coil optical resonator and waveguide: transmission
spectrum, eigenmodes, and dispersion relation,” Optics Express,
vol. 13, no. 11, pp. 4331–4340, 2005.
[36]
J. Scheuer and M. Sumetsky, “Optical-fiber microcoil waveguides and resonators
and their applications for interferometry and sensing,” Laser &
Photonics Reviews, vol. 5, no. 4, pp. 465–478, 2011.
[37]
N. Gutman, C. M. de Sterke, A. A. Sukhorukov, and L. C. Botten, “Slow and
frozen light in optical waveguides with multiple gratings: Degenerate band
edges and stationary inflection points,” Physical Review A, vol. 85,
no. 3, p. 033804, 2012.
[38]
N. Apaydin, L. Zhang, K. Sertel, and J. L. Volakis, “Experimental validation
of frozen modes guided on printed coupled transmission lines,” IEEE
transactions on microwave theory and techniques, vol. 60, no. 6, pp.
1513–1519, 2012.
[39]
H. Li, I. Vitebskiy, and T. Kottos, “Frozen mode regime in finite periodic
structures,” Physical Review B, vol. 96, no. 18, p. 180301, 2017.
[40]
Z. M. Gan, H. Li, and T. Kottos, “Effects of disorder in frozen-mode light,”
Optics Letters, vol. 44, no. 11, pp. 2891–2894, 2019.
[41]
A. Figotin and G. Reyes, “Multi-transmission-line-beam interactive system,”
Journal of Mathematical Physics, vol. 54, no. 11, p. 111901, 2013.
[42]
V. A. Tamma and F. Capolino, “Extension of the pierce model to multiple
transmission lines interacting with an electron beam,” IEEE
Transactions on Plasma Science, vol. 42, no. 4, pp. 899–910, 2014.
[43]
D. Hung, I. Rittersdorf, P. Zhang, D. Chernin, Y. Lau, T. Antonsen Jr,
J. Luginsland, D. Simon, and R. Gilgenbach, “Absolute instability near the
band edge of traveling-wave amplifiers,” Physical review letters,
vol. 115, no. 12, p. 124801, 2015.
[44]
P. Zhang, D. Hung, Y. Y. Lau, D. Chernin, B. Hoff, P. Wong, D. H. Simon, and
R. M. Gilgenbach, “Absolute instability near twt band edges,” in 2016
IEEE International Vacuum Electronics Conference (IVEC). IEEE, 2016, pp. 1–2.
[45]
F. Antoulinakis, P. Wong, A. Jassem, and Y. Lau, “Absolute instability and
transient growth near the band edges of a traveling wave tube,”
Physics of Plasmas, vol. 25, no. 7, p. 072102, 2018.
[46]
L. Ang and Y. Lau, “Absolute instability in a traveling wave tube model,”
Physics of Plasmas, vol. 5, no. 12, pp. 4408–4410, 1998.
[47]
M. A. Othman, M. Veysi, A. Figotin, and F. Capolino, “Giant amplification in
degenerate band edge slow-wave structures interacting with an electron
beam,” Physics of Plasmas, vol. 23, no. 3, p. 033112, 2016.
[48]
J. W. Gewartowski and H. A. Watson, “Traveling-wave amplifiers,” in
Principles of electron tubes: including grid-controlled tubes,
microwave tubes, and gas tubes. Van
Nostrand, 1965, ch. 10, p. 357.
[49]
A. K. MM, S. Aditya, and C. Chua, “Interaction impedance for space harmonics
of circular helix using simulations,” IEEE Transactions on Electron
Devices, vol. 64, no. 4, pp. 1868–1872, 2017.
[50]
A. Hessel, M. H. Chen, R. C. Li, and A. A. Oliner, “Propagation in
periodically loaded waveguides with higher symmetries,” Proceedings of
the IEEE, vol. 61, no. 2, pp. 183–195, 1973.
[51]
M. Bagheriasl, O. Quevedo-Teruel, and G. Valerio, “Bloch analysis of
artificial lines and surfaces exhibiting glide symmetry,” IEEE
Transactions on Microwave Theory and Techniques, vol. 67, no. 7, pp.
2618–2628, 2019.
[52]
K. T. Nguyen, A. N. Vlasov, L. Ludeking, C. D. Joye, A. M. Cook, J. P. Calame,
J. A. Pasour, D. E. Pershing, E. L. Wright, S. J. Cooke et al.,
“Design methodology and experimental verification of
serpentine/folded-waveguide twts,” IEEE Transactions on Electron
Devices, vol. 61, no. 6, pp. 1679–1686, 2014.
[53]
J. H. Booske, M. C. Converse, C. L. Kory, C. T. Chevalier, D. A. Gallagher,
K. E. Kreischer, V. O. Heinen, and S. Bhattacharjee, “Accurate parametric
modeling of folded waveguide circuits for millimeter-wave traveling wave
tubes,” IEEE Transactions on Electron Devices, vol. 52, no. 5, pp.
685–694, 2005.
[54]
D. Xu, S. Wang, Z. Wang, W. Shao, T. He, H. Wang, T. Tang, H. Gong, Z. Lu,
Z. Duan et al., “Theory and experiment of high-gain modified angular
log-periodic folded waveguide slow wave structure,” IEEE Electron
Device Letters, vol. 41, no. 8, pp. 1237–1240, 2020.
[55]
C. Robertson, A. Cross, C. Gilmour, D. Dyson, P. Huggard, F. Cahill,
M. Beardsley, R. Dionisio, and K. Ronald, “71-76 ghz folded waveguide twt
for satellite communications,” in 2019 International Vacuum
Electronics Conference (IVEC). IEEE,
2019, pp. 1–2.
[56]
E. Backer, W. Ehrfeld, D. Münchmeyer, H. Betz, A. Heuberger, S. Pongratz,
W. Glashauser, H. Michel, and R. v. Siemens, “Production of
separation-nozzle systems for uranium enrichment by a combination of x-ray
lithography and galvanoplastics,” Naturwissenschaften, vol. 69,
no. 11, pp. 520–523, 1982.
[57]
H. Li and J. Feng, “Microfabrication of w band folded waveguide slow wave
structure using two-step uv-liga technology,” in 2012 IEEE
International Vacuum Electronics Conference (IVEC). IEEE, 2012, pp. 387–388.
[58]
H. Li, Y. Li, and J. Feng, “Fabrication of 340-ghz folded waveguides using
kmpr photoresist,” IEEE electron device letters, vol. 34, no. 3, pp.
462–464, 2013.
[59]
C. D. Joye, J. P. Calame, M. Garven, and B. Levush, “Uv-liga microfabrication
of 220 ghz sheet beam amplifier gratings with su-8 photoresists,”
Journal of Micromechanics and Microengineering, vol. 20, no. 12, p.
125016, 2010.
[60]
S.-T. Han, S.-G. Jeon, Y.-M. Shin, K.-H. Jang, J.-K. So, J.-H. Kim, S.-S.
Chang, and G.-S. Park, “Experimental investigations on miniaturized vacuum
electron devices,” in IVESC 2004. The 5th International Vacuum
Electron Sources Conference Proceedings (IEEE Cat. No. 04EX839). IEEE, 2004, pp. 404–405.
[61]
G. Park, Y. Shin, J. So, S. Han, K. Jang, J. Kim, and S. Chang, “Feasibility
study of two-step liga-fabricated circuits applicable to
millimeter/submillimeter wave sources,” in AIP Conference
Proceedings, vol. 807, no. 1. American Institute of Physics, 2006, pp. 299–308.
[62]
Y.-M. Shin, G.-S. Park, G. P. Scheitrum, and B. Arfin, “Novel coupled-cavity
twt structure using two-step liga fabrication,” IEEE transactions on
plasma science, vol. 31, no. 6, pp. 1317–1324, 2003.
[63]
H. Li, J. Cai, Y. Du, X. Li, and J. Feng, “Uv-liga microfabrication for high
frequency structures of a y-band twt 2nd harmonic amplifier,” in 2015
IEEE International Vacuum Electronics Conference (IVEC). IEEE, 2015, pp. 1–2.
[64]
C. D. Joye, J. P. Calame, D. K. Abe, K. T. Nguyen, E. L. Wright, D. E.
Pershing, M. Garven, and B. Levush, “3d uv-liga microfabricated circuits for
a wideband 50w g-band serpentine waveguide amplifier,” in 2011
International Conference on Infrared, Millimeter, and Terahertz Waves. IEEE, 2011, pp. 1–3.
[65]
A. Gilmour, “Electron motion in static electric fields,” in Principles
of traveling wave tubes. Norwood, MA,
USA: Artech House, 1994, ch. 3, p. 20.
[66]
N. J. Fonseca, D. Petrolati, and P. Angeletti, “Design of a waveguide
dual-mode three-way power divider for dual-polarization beam forming networks
at ka-band,” in 2013 IEEE Antennas and Propagation Society
International Symposium (APSURSI). IEEE, 2013, pp. 1096–1097.
[67]
P. Gardner and B. Ong, “Mode matching design of three-way waveguide power
dividers,” in IEE Colloquium on Advances in Passive Microwave
Components (Digest No: 1997/154). IET, 1997, pp. 5–1.
[68]
G. A. Kumar, B. Biswas, and D. Poddar, “A compact broadband riblet-type
three-way power divider in rectangular waveguide,” IEEE Microwave and
Wireless Components Letters, vol. 27, no. 2, pp. 141–143, 2017.
[69]
S. Bhattacharjee, J. H. Booske, C. L. Kory, D. W. Van Der Weide, S. Limbach,
S. Gallagher, J. D. Welter, M. R. Lopez, R. M. Gilgenbach, R. L. Ives
et al., “Folded waveguide traveling-wave tube sources for terahertz
radiation,” IEEE transactions on plasma science, vol. 32, no. 3, pp.
1002–1014, 2004.
[70]
A. F. Abdelshafy, M. A. Othman, F. Yazdi, M. Veysi, A. Figotin, and
F. Capolino, “Electron-beam-driven devices with synchronous multiple
degenerate eigenmodes,” IEEE Transactions on Plasma Science, vol. 46,
no. 8, pp. 3126–3138, 2018.
[71]
M. A. Othman and F. Capolino, “Theory of exceptional points of degeneracy in
uniform coupled waveguides and balance of gain and loss,” IEEE
Transactions on Antennas and Propagation, vol. 65, no. 10, pp. 5289–5302,
2017.
[72]
W. F. Egan, “Gain,” in Practical RF system design. John Wiley & Sons, 2004, ch. 2, pp. 7–45. |
Answering UCQs under updates
and in the presence of integrity constraints
(Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SCHW 837/5-1.)
Christoph Berkholz,
Jens Keppeler,
Nicole Schweikardt
Humboldt-Universität zu Berlin
{berkholz,keppelej,schweika}@informatik.hu-berlin.de
Abstract
We investigate the query evaluation problem for fixed queries over
fully dynamic databases where tuples can be inserted or deleted.
The task is to design a dynamic data structure that can immediately
report the new result of a fixed query after every database update.
We consider unions of conjunctive queries (UCQs) and focus on the
query evaluation tasks
testing (decide whether an input tuple $\overline{a}$ belongs to the
query result),
enumeration (enumerate, without repetition,
all tuples in the query result),
and counting (output the number of tuples in the
query result).
We identify three increasingly restrictive classes of UCQs which we
call t-hierarchical, q-hierarchical, and
exhaustively q-hierarchical UCQs.
Our main results provide the following dichotomies:
If the query’s homomorphic core is t-hierarchical (q-hierarchical,
exhaustively q-hierarchical), then the testing (enumeration, counting)
problem can be solved
with constant update time and constant testing time (delay, counting time). Otherwise, it
cannot be solved with sublinear update time and sublinear testing
time (delay, counting time).${}^{\ast}$
We also study the complexity of query evaluation in the dynamic setting
in the presence of integrity constraints, and we obtain according
dichotomy results
for the special case of small domain constraints (i.e., constraints which state that
all values in a particular column of
a relation belong to a fixed domain of constant size).
${}^{\ast})$ To be precise:
our lower bound for the enumeration problem
is obtained only for queries that are self-join free,
by sublinear we mean $O(n^{1-\varepsilon})$ for $\varepsilon>0$ and
where $n$ is the size of the active domain of the current database,
and all our
lower bounds rely on the OV-conjecture
and/or the OMv-conjecture,
two algorithmic conjectures on the hardness of the Boolean orthogonal vectors
problem and the Boolean online matrix-vector multiplication problem.
1 Introduction
Dynamic query evaluation refers to a setting where a fixed
query $q$ has to be evaluated against a database that is constantly
updated [19].
In this paper, we study dynamic query evaluation for
unions of conjunctive queries (UCQs)
on relational databases that may be updated by inserting or deleting tuples.
A dynamic algorithm for evaluating a query $q$
receives an initial database and performs a preprocessing
phase which builds a data structure that contains a suitable
representation of the database and the result of $q$ on this database.
After every database update, the data structure is updated so that it
suitably represents the new database $D$ and the result $q(D)$ of $q$ on
this database.
To solve the counting problem, such an algorithm is required to quickly
report the number $|q(D)|$ of tuples in the current query result,
and the counting time is the time used to compute this number.
To solve the testing problem, the algorithm has to be able to check for an arbitrary
input tuple $\overline{a}$ if $\overline{a}$ belongs to the current query
result, and the testing time is the time used to perform this check.
To solve the enumeration problem, the algorithm has to enumerate $q(D)$
without repetition and with a bounded delay between the output tuples.
The update time is the time used for updating the data
structure after having received a database update.
We regard the counting (testing, enumeration) problem of a query $q$
to be tractable under updates if it can be solved by a dynamic
algorithm with linear preprocessing time, constant update time, and
constant counting time (testing time, delay).
This setting has been studied for conjunctive queries (CQs)
in our previous paper [5], which identified
a class of CQs called q-hierarchical that precisely characterises the
tractability frontier of the counting problem and the enumeration
problem for CQs under updates:
For every q-hierarchical CQ,
the counting problem and the enumeration problem can be solved with
linear preprocessing time, constant update time, constant counting
time, and constant delay.
And for every CQ that is not equivalent to a q-hierarchical CQ,
the counting problem (and for the case of self-join free queries, the enumeration problem)
cannot be solved with
sublinear update time and sublinear counting time (delay),
unless the OMv-conjecture or the OV-conjecture (the OMv-conjecture)
fails.
The latter are well-known algorithmic conjectures on the hardness of the
Boolean online matrix-vector multiplication problem (OMv) and the
Boolean orthogonal vectors problem (OV)
[18, 1],
and “sublinear” means $O(n^{1-\epsilon})$, where $\epsilon>0$
and $n$ is the
size of the active domain of the current database.
Our contribution.
We identify a new subclass of CQs
which we call t-hierarchical,
which contains and properly extends the class of q-hierarchical CQs,
and
which precisely characterises the tractability frontier of the testing
problem for CQs under updates
(see Theorem 3.4):
For every t-hierarchical CQ, the testing problem
can be solved by a dynamic algorithm with linear preprocessing time,
constant update time, and constant testing time.
And for every CQ that is not equivalent to a t-hierarchical CQ,
the testing problem cannot be solved with arbitrary preprocessing
time, sublinear update time, and sublinear testing time, unless the
OMv-conjecture fails.
Furthermore, we transfer the notions of t-hierarchical and
q-hierarchical queries to unions of conjunctive queries (UCQs)
and identify a further class of UCQs
which we call exhaustively q-hierarchical, yielding three increasingly
restricted subclasses of UCQs.
In a nutshell, our main contribution concerning UCQs shows that these
notions precisely characterise the tractability frontiers of the
testing problem, the enumeration problem, and the counting problem for
UCQs under updates
(see Theorems 4.1, 4.2, and 4.4):
For every t-hierarchical (q-hierarchical,
exhaustively q-hierarchical) UCQ, the testing (enumeration, counting)
problem can be solved with linear preprocessing time,
constant update time, and constant testing time (delay, counting
time).
And for every UCQ that is not equivalent to a t-hierarchical (q-hierarchical,
exhaustively q-hierarchical) UCQ, the testing (enumeration, counting)
problem cannot be solved with sublinear update time and sublinear testing
time (delay, counting time); to be precise, the lower bound for
enumeration is obtained only for self-join free queries,
the lower bounds for testing and enumeration are conditioned on
the OMv-conjecture,
and the lower bound for counting is conditioned on the OMv-conjecture and the
OV-conjecture.
Finally, we transfer our results to a scenario where
databases are required to satisfy a set of small domain constraints
(i.e., constraints stating that all values which occur in a particular column of
a relation belong to a fixed domain of constant size), leading to a
precise characterisation of the UCQs for which the testing
(enumeration, counting) problem under updates is tractable in this scenario (see
Theorem 5.3).
Further related work.
The complexity of evaluating CQs and UCQs in the static setting
(i.e., without database updates)
is well-studied. In particular, there are characterisations of
“tractable” queries known for Boolean queries
[16, 15, 23] as well as for
the task of counting the result tuples
[11, 7, 12, 14, 8].
In [3],
the fragment of self-join
free CQs that can be enumerated with constant delay after linear
preprocessing time has been identified, but almost nothing
is known about the complexity of the enumeration problem for UCQs on static databases.
Very recent papers also studied the complexity of CQs with respect to
a given set of integrity constraints [13, 20, 4].
The dynamic query evaluation problem has been
considered from different angles, including descriptive dynamic
complexity [26, 27, 28]
and, somewhat closer to what we are aiming for, incremental
view maintenance
[17, 9, 21, 22, 25].
In [19], the
enumeration and testing problem under updates has been studied
for q-hierarchical and (more general) acyclic CQs in
a setting that is very similar to our setting and the setting of [5];
the Dynamic Constant-delay Linear Representations (DCLR) of [19] are
data structures that use at most linear update time and solve
the enumeration problem and the testing problem with constant delay and constant testing time.
Outline.
The rest of the paper is structured as follows.
Section 2 provides basic notations
concerning databases, queries, and dynamic algorithms for query
evaluation.
Section 3 is devoted to CQs and proves
our dichotomy result concerning the testing problem for CQs.
Section 4 focuses on UCQs and proves our dichotomies
concerning the testing, enumeration, and
counting problem for UCQs.
Section 5 is devoted to
the setting in which integrity constraints may cause a query whose
evaluation under updates is hard in general to be tractable on
databases that satisfy the constraints.
2 Preliminaries
Basic notation.
We write $\mathbb{N}$ for the set of non-negative integers and let
$\mathbb{N}_{\scriptscriptstyle\geqslant 1}:=\mathbb{N}\setminus\{0\}$ and $[n]:=\{1,\ldots,n\}$ for
all $n\in\mathbb{N}_{\scriptscriptstyle\geqslant 1}$.
By $2^{S}$ we denote the power set of a set $S$.
We write $\vec{v}_{i}$ to denote the $i$-th component of an
$n$-dimensional vector $\vec{v}$,
and we write $M_{i,j}$ for the entry in row $i$ and column $j$ of a matrix
$M$.
By $()$ we denote the empty tuple, i.e., the unique tuple of arity 0.
For an $r$-tuple $t=(t_{1},\ldots,t_{r})$ and indices $i_{1},\ldots,i_{m}\in\{1,\ldots,r\}$ we write
$\pi_{i_{1},\ldots,i_{m}}(t)$ to denote the projection of $t$ to the components $i_{1},\ldots,i_{m}$, i.e.,
the $m$-tuple $(t_{i_{1}},\ldots,t_{i_{m}})$, and in case that $m=1$ we identify the $1$-tuple $(t_{i_{1}})$ with the
element $t_{i_{1}}$.
For a set $T$ of $r$-tuples we let $\pi_{i_{1},\ldots,i_{m}}(T):=\{\pi_{i_{1},\ldots,i_{m}}(t)\ :\ t\in T\}$.
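To illustrate the projection notation, the following is a minimal Python sketch (the function names `project` and `project_set` are our own and not part of the formal development; for simplicity we keep $1$-tuples as tuples rather than identifying them with their element):

```python
def project(t, indices):
    """pi_{i1,...,im}(t): project tuple t to its 1-based components i1,...,im."""
    return tuple(t[i - 1] for i in indices)

def project_set(T, indices):
    """Apply the projection pointwise to a set of r-tuples."""
    return {project(t, indices) for t in T}

# pi_{3,1} of (a, b, c) is (c, a); components may repeat and reorder.
assert project(("a", "b", "c"), [3, 1]) == ("c", "a")
assert project_set({(1, 2), (3, 2), (3, 4)}, [2]) == {(2,), (4,)}
```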
Databases.
We fix a countably infinite set dom, the domain of potential
database entries. Elements in dom are called constants.
A schema is a finite set $\sigma$ of relation symbols, where
each $R\in\sigma$ is equipped with a fixed arity $\operatorname{ar}(R)\in\mathbb{N}$ (note that here we explicitly allow
relation symbols of arity 0).
Let us fix a schema $\sigma=\{R_{1},\ldots,R_{s}\}$, and let
$r_{i}:=\operatorname{ar}(R_{i})$ for $i\in[s]$.
A database $D$ of schema $\sigma$ ($\sigma$-db, for short),
is of the form $D=(R_{1}^{D},\ldots,R_{s}^{D})$, where $R_{i}^{D}$ is a finite subset of
$\textbf{dom}^{r_{i}}$.
The active domain $\textrm{adom}(D)$ of $D$ is the smallest subset
$A$ of dom such that $R_{i}^{D}\subseteq A^{r_{i}}$ for all $i\in[s]$.
Queries.
We fix a countably infinite set var of variables.
We allow queries to use variables as well as constants.
An atomic formula (for short: atom) $\psi$ of schema $\sigma$ is of the form
$Rv_{1}\cdots v_{r}$ with $R\in\sigma$, $r=\operatorname{ar}(R)$, and
$v_{1},\ldots,v_{r}\in\textbf{var}\cup\textbf{dom}$.
A conjunctive formula of schema $\sigma$ is of the form
$$\exists y_{1}\,\cdots\,\exists y_{\ell}\;\big{(}\,\psi_{1}\,\wedge\,\cdots\,\wedge\,\psi_{d}\,\big{)}$$
($*$)
where $\ell\geqslant 0$, $d\geqslant 1$, $\psi_{j}$ is an atomic
formula of schema $\sigma$ for every $j\in[d]$, and
$y_{1},\ldots,y_{\ell}$ are pairwise distinct elements in var.
For a conjunctive formula $\varphi$ of the form ($*$) we let
$\textrm{vars}(\varphi)$ (and $\textrm{cons}(\varphi)$, respectively) be the set of all variables (and constants, respectively) occurring in
$\varphi$.
The set of free variables of $\varphi$ is
$\textrm{free}(\varphi):=\textrm{vars}(\varphi)\setminus\{y_{1},\ldots,y_{\ell}\}$.
For every variable $x\in\textrm{vars}(\varphi)$ we let $\textrm{atoms}_{\varphi}(x)$ (or
$\textrm{atoms}(x)$, if $\varphi$ is clear from the context) be the set of all atoms $\psi_{j}$ of $\varphi$ such that
$x\in\textrm{vars}(\psi_{j})$. The formula $\varphi$ is called quantifier-free
if $\ell=0$, and it is called self-join free if no relation
symbol occurs more than once in $\varphi$.
For $k\geqslant 0$, a $k$-ary conjunctive query ($k$-ary CQ, for
short) is of the form
$$\{\ (u_{1},\ldots,u_{k})\ :\ \varphi\ \}$$
($**$)
where $\varphi$ is a conjunctive formula of schema $\sigma$,
$u_{1},\ldots,u_{k}\in\textrm{free}(\varphi)\cup\textbf{dom}$, and $\{u_{1},\ldots,u_{k}\}\cap\textbf{var}=\textrm{free}(\varphi)$.
We often write $q_{\varphi}(\overline{u})$ for $\overline{u}=(u_{1},\ldots,u_{k})$ (or $q_{\varphi}$ if $\overline{u}$ is clear from the context)
to denote such a query.
We let $\textrm{vars}(q_{\varphi}):=\textrm{vars}(\varphi)$,
$\textrm{free}(q_{\varphi}):=\textrm{free}(\varphi)$, and $\textrm{cons}(q_{\varphi}):=\textrm{cons}(\varphi)\cup(\{u_{1},\ldots,u_{k}\}\cap\textbf{dom})$.
For every $x\in\textrm{vars}(q_{\varphi})$ we let $\textrm{atoms}_{q_{\varphi}}(x):=\textrm{atoms}_{\varphi}(x)$, and if $q_{\varphi}$ is clear from the context, we omit
the subscript and simply write
$\textrm{atoms}(x)$. The CQ $q_{\varphi}$ is called quantifier-free
(self-join free) if $\varphi$ is quantifier-free (self-join free).
The semantics are defined as usual: A valuation is a mapping $\beta:\textrm{vars}(q_{\varphi})\cup\textbf{dom}\to\textbf{dom}$
with $\beta(a)=a$ for every $a\in\textbf{dom}$.
A valuation $\beta$ is a homomorphism from $q_{\varphi}$ to a $\sigma$-db
$D$
if for every atom $Rv_{1}\cdots v_{r}$ in $q_{\varphi}$ we have
$\big{(}\beta(v_{1}),\ldots,\beta(v_{r})\big{)}\in R^{D}$.
We sometimes write $\beta:q_{\varphi}\to D$ to
indicate that $\beta$ is a homomorphism from $q_{\varphi}$ to $D$.
The query result $q_{\varphi}(D)$ of
a $k$-ary CQ $q_{\varphi}(u_{1},\ldots,u_{k})$ on the $\sigma$-db $D$ is
defined as the set
$\{\,\big{(}\beta(u_{1}),\ldots,\beta(u_{k})\big{)}\ :\ \text{$\beta$ is a homomorphism from $q_{\varphi}$ to $D$}\,\}$.
If $\overline{x}=(x_{1},\ldots,x_{k})$ is a list of the free variables of
$\varphi$ and $\overline{a}\in\textbf{dom}^{k}$, we sometimes write $D\models\varphi[\overline{a}]$ to indicate that
there is a homomorphism $\beta:q\to D$ with
$\overline{a}=\big{(}\beta(x_{1}),\ldots,\beta(x_{k})\big{)}$, for the query $q=q_{\varphi}(x_{1},\ldots,x_{k})$.
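The homomorphism semantics above can be made concrete by a brute-force evaluator. The following Python sketch is only an illustration of the definition (exponential in the number of variables, not the paper's dynamic algorithm); our encoding represents an atom as a pair of a relation name and a tuple of terms, with variables as strings and constants as integers:

```python
from itertools import product

def evaluate_cq(atoms, head, db):
    """Return q(D): all images of the head tuple under homomorphisms q -> D."""
    variables = sorted({t for _, terms in atoms for t in terms
                        if isinstance(t, str)})
    adom = {a for rel in db.values() for tup in rel for a in tup}
    result = set()
    for values in product(adom, repeat=len(variables)):
        beta = dict(zip(variables, values))
        val = lambda t: beta[t] if isinstance(t, str) else t  # beta fixes dom
        if all(tuple(val(t) for t in terms) in db[rel]
               for rel, terms in atoms):
            result.add(tuple(val(t) for t in head))
    return result

db = {"S": {(1,), (2,)}, "E": {(1, 3), (2, 4)}, "T": {(3,)}}
q_atoms = [("S", ("x",)), ("E", ("x", "y")), ("T", ("y",))]
assert evaluate_cq(q_atoms, ("x", "y"), db) == {(1, 3)}
```

With the empty head the same function answers the Boolean query: `evaluate_cq(q_atoms, (), db)` returns `{()}` (i.e., yes) exactly when some homomorphism exists.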
A $k$-ary union of conjunctive queries ($k$-ary UCQ)
is of the form
$q_{1}(\overline{u}_{1})\cup\cdots\cup q_{d}(\overline{u}_{d})$ where $d\geqslant 1$ and $q_{i}(\overline{u}_{i})$ is a $k$-ary CQ of schema $\sigma$ for every $i\in[d]$.
The query result of such a $k$-ary UCQ $q$ on a $\sigma$-db $D$ is
$q(D):=\bigcup_{i=1}^{d}q_{i}(D)$.
For a $k$-ary query $q$ we write $\textrm{vars}(q)$ (and $\textrm{cons}(q)$) to denote the set of all
variables (and constants) that occur in $q$.
Clearly,
$q(D)\subseteq(\textrm{adom}(D)\cup\textrm{cons}(q))^{k}$.
A Boolean query is a query of arity $k=0$.
As usual, for Boolean queries $q$ we will write $q(D)=\texttt{yes}$
instead of $q(D)\neq\emptyset$, and $q(D)=\texttt{no}$ instead of
$q(D)=\emptyset$.
Two $k$-ary queries $q$ and $q^{\prime}$ are
equivalent ($q\equiv q^{\prime}$, for short)
if $q(D)=q^{\prime}(D)$ for every $\sigma$-db $D$.
Homomorphisms.
We use standard notation concerning homomorphisms
(cf., e.g. [2]).
The notion of a homomorphism $\beta:q\to D$ from a CQ $q$ to a
database $D$ has already been defined above.
A homomorphism $g:D\to q$ from a database $D$ to a CQ $q$ is a
mapping from $\textrm{adom}(D)\to\textrm{vars}(q)\cup\textrm{cons}(q)$ such that whenever
$(a_{1},\ldots,a_{r})$ is a tuple in some relation $R^{D}$ of $D$, then
$Rg(a_{1})\cdots g(a_{r})$ is an atom of $q$.
Let $q(u_{1},\ldots,u_{k})$ and $q^{\prime}(v_{1},\ldots,v_{k})$ be two
$k$-ary CQs. A homomorphism from $q$ to $q^{\prime}$ is a
mapping $h\colon\textrm{vars}(q)\cup\textbf{dom}\to\textrm{vars}(q^{\prime})\cup\textbf{dom}$ with
$h(a)=a$ for all $a\in\textbf{dom}$ and $h(u_{i})=v_{i}$ for all $i\in[k]$ such
that for every atom $Rw_{1}\cdots w_{r}$ in $q$ there is an atom
$Rh(w_{1})\cdots h(w_{r})$ in $q^{\prime}$. We sometimes write $h:q\to q^{\prime}$ to
indicate that $h$ is a homomorphism from $q$ to $q^{\prime}$.
Note that by [6]
there is a homomorphism from $q$
to $q^{\prime}$ if and only if for every database $D$ it holds that
$q(D)\supseteq q^{\prime}(D)$.
A CQ $q$ is a homomorphic core if there is no homomorphism
from $q$ into a proper subquery of $q$.
Here, a subquery of a CQ $q_{\varphi}(\overline{u})$ where $\varphi$ is of
the form ($*$) is a CQ $q_{\varphi^{\prime}}(\overline{u})$ where
$\varphi^{\prime}$ is of the form $\exists y_{i_{1}}\cdots\exists y_{i_{m}}\;(\psi_{j_{1}}\,\wedge\,\cdots\,\wedge\,\psi_{j_{n}})$ with
$i_{1},\ldots,i_{m}\in[\ell]$, $j_{1},\ldots,j_{n}\in[d]$,
and $\textrm{free}(\varphi^{\prime})=\textrm{free}(\varphi)$.
We say that a UCQ is
a homomorphic core if every CQ in the union is a homomorphic core and there is no
homomorphism between any two distinct CQs of the union.
It is well-known that every CQ and every UCQ is equivalent to a unique
(up to renaming of variables) homomorphic core, which is therefore
called the core of the query (cf., e.g., [2]).
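The homomorphism test between CQs admits a straightforward brute-force implementation; the sketch below follows the definition above (the encoding of atoms as relation-name/term-tuple pairs is ours, and the search is exponential in $|\textrm{vars}(q)|$, which suffices here since queries are fixed):

```python
from itertools import product

def has_homomorphism(atoms_q, head_q, atoms_p, head_p):
    """Brute-force search for a homomorphism h: q -> q' as defined above."""
    vars_q = sorted({t for _, ts in atoms_q for t in ts if isinstance(t, str)})
    targets = sorted({t for _, ts in atoms_p for t in ts}, key=str)
    p_atoms = {(rel, tuple(ts)) for rel, ts in atoms_p}
    for images in product(targets, repeat=len(vars_q)):
        h = dict(zip(vars_q, images))
        hm = lambda t: h.get(t, t)          # h is the identity on dom
        if tuple(hm(u) for u in head_q) != tuple(head_p):
            continue                        # h must map u_i to v_i
        if all((rel, tuple(hm(t) for t in ts)) in p_atoms
               for rel, ts in atoms_q):
            return True
    return False

# q(x) = {x : E x y} maps into q'(x) = {x : E x x} via y -> x ...
assert has_homomorphism([("E", ("x", "y"))], ("x",),
                        [("E", ("x", "x"))], ("x",))
# ... but not conversely, so q(D) contains q'(D) while q'(D) need not contain q(D).
assert not has_homomorphism([("E", ("x", "x"))], ("x",),
                            [("E", ("x", "y"))], ("x",))
```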
Sizes and Cardinalities.
The size $|\!|\sigma|\!|$ of a schema $\sigma$ is
$|\sigma|+\sum_{R\in\sigma}\operatorname{ar}(R)$.
The size $|\!|q|\!|$ of a query $q$ of schema $\sigma$ is
the length of $q$ when viewed as a word over the alphabet
$\sigma\cup\textbf{var}\cup\textbf{dom}\cup\{\,\wedge\,,\exists\,,(\,,)\,,\{\,,\}\,,:\,,\cup\,\}\cup\{\,,\}$.
For a $k$-ary query $q$ and a $\sigma$-db $D$, the
cardinality of the query result is the number $|q(D)|$ of
tuples in $q(D)$.
The cardinality $|D|$ of a $\sigma$-db $D$ is defined
as the number of tuples stored in $D$, i.e.,
$|D|:=\sum_{R\in\sigma}|R^{D}|$.
The size $|\!|D|\!|$ of $D$ is defined as
$|\!|\sigma|\!|+|\textrm{adom}(D)|+\sum_{R\in\sigma}\operatorname{ar}(R)\cdot|R^{D}|$ and corresponds to the size of a reasonable encoding of $D$.
The following notions concerning updates, dynamic algorithms for query
evaluation, and algorithmic conjectures are taken almost verbatim from
[5].
Updates.
We allow a given database of schema $\sigma$ to be updated by inserting or deleting
tuples as follows. An insertion command is of the form
insert $R(a_{1},\ldots,a_{r})$
for $R\in\sigma$, $r=\operatorname{ar}(R)$, and $a_{1},\ldots,a_{r}\in\textbf{dom}$. When
applied to a $\sigma$-db $D$, it results in the updated $\sigma$-db
$D^{\prime}$ with $R^{D^{\prime}}:=R^{D}\cup\{(a_{1},\ldots,a_{r})\}$ and
$S^{D^{\prime}}:=S^{D}$ for all $S\in\sigma\setminus\{R\}$.
A deletion command is of the form
delete $R(a_{1},\ldots,a_{r})$
for $R\in\sigma$, $r=\operatorname{ar}(R)$, and $a_{1},\ldots,a_{r}\in\textbf{dom}$. When
applied to a $\sigma$-db $D$, it results in the updated $\sigma$-db
$D^{\prime}$ with $R^{D^{\prime}}:=R^{D}\setminus\{(a_{1},\ldots,a_{r})\}$ and
$S^{D^{\prime}}:=S^{D}$ for all $S\in\sigma\setminus\{R\}$.
Note that both types of commands may change the database’s active domain.
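A direct transcription of this update model, representing a $\sigma$-db as a dictionary from relation names to sets of tuples (the function names are ours, for illustration only):

```python
def apply_update(db, command, rel, tup):
    """Apply an insert/delete command to the relation rel of db."""
    if command == "insert":
        db[rel] = db.get(rel, set()) | {tup}
    elif command == "delete":
        db[rel] = db.get(rel, set()) - {tup}
    else:
        raise ValueError(command)

def adom(db):
    """Active domain: all constants occurring in some stored tuple."""
    return {a for rel in db.values() for t in rel for a in t}

db = {"R": set()}
apply_update(db, "insert", "R", (1, 2))
apply_update(db, "insert", "R", (2, 3))
apply_update(db, "delete", "R", (1, 2))
assert db["R"] == {(2, 3)}
assert adom(db) == {2, 3}   # the deletion shrank the active domain
```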
Dynamic algorithms for query evaluation.
Following [10], we use Random Access Machines (RAMs)
with $O(\log n)$ word-size and a uniform cost
measure to analyse our algorithms.
We will assume that the RAM’s memory is initialised to $0$. In
particular, if an algorithm uses an array, we will assume
that all array entries are initialised to $0$, and this initialisation
comes at no cost (in real-world computers this can be achieved by using the
lazy array initialisation technique, cf. e.g. [24]).
A further assumption is that for every fixed
dimension $k\in\mathbb{N}_{\scriptscriptstyle\geqslant 1}$ we have available an unbounded number of
$k$-ary arrays A such that for given $(n_{1},\ldots,n_{k})\in\mathbb{N}^{k}$
the entry $\texttt{A}[n_{1},\ldots,n_{k}]$
at position $(n_{1},\ldots,n_{k})$ can be accessed in constant
time. (While this can be accomplished easily in the RAM-model,
for an implementation on real-world
computers one would probably have to resort to replacing our use of
arrays by suitably designed hash functions.)
For our purposes it will be convenient to assume that $\textbf{dom}=\mathbb{N}_{\scriptscriptstyle\geqslant 1}$.
Our algorithms will take as input
a $k$-ary query $q$
and a $\sigma$-db ${D_{0}}$.
For all query evaluation problems considered in this paper, we aim at
routines preprocess and update which achieve the following.
Upon input of $q$
and ${D_{0}}$, the preprocess routine builds a data
structure $\mathtt{D}$ which represents ${D_{0}}$ (and which is designed in
such a way that it supports the evaluation of $q$ on ${D_{0}}$).
Upon input of a command $\textsf{update}\ R(a_{1},\ldots,a_{r})$ (with
$\textsf{update}\in\{\textsf{insert},\textsf{delete}\}$),
calling update modifies the data structure $\mathtt{D}$ such that it
represents the updated database $D$.
The preprocessing time $t_{p}$ is the
time used for performing preprocess.
The update time $t_{u}$ is the time used for performing
an update, and in this paper we aim at algorithms where $t_{u}$ is independent of the size
of the current database $D$.
By init we denote the particular case of the routine preprocess
upon input of a query $q$
and the empty database
${D_{\emptyset}}$, where $R^{{D_{\emptyset}}}=\emptyset$ for all $R\in\sigma$.
The initialisation time $t_{i}$
is the time used for performing init.
In all algorithms presented in this paper, the preprocess routine
for input of $q$ and ${D_{0}}$
will carry out the init routine for $q$
and then perform a sequence of $|{D_{0}}|$ update operations to
insert all the tuples of ${D_{0}}$ into the data structure.
Consequently, $t_{p}=t_{i}+|{D_{0}}|\cdot t_{u}$.
In the following, $D$ will always denote the database that is
currently represented by the data structure $\mathtt{D}$.
To solve the enumeration problem under updates,
apart from the routines preprocess and update,
we aim at a routine enumerate such that
calling enumerate invokes an enumeration of all tuples,
without repetition, that belong to the query result $q(D)$.
The delay $t_{d}$ is the maximum time
used during a call of enumerate
•
until the output of the first tuple (or the end-of-enumeration
message EOE,
if $q(D)=\emptyset$),
•
between the output of two consecutive tuples, and
•
between the output of the last tuple and the end-of-enumeration
message EOE.
To test if a given tuple belongs to the query result,
instead of enumerate we aim at a routine test which
upon input of a tuple $\overline{a}\in\textbf{dom}^{k}$ checks whether $\overline{a}\in q(D)$.
The testing time $t_{t}$ is the time used for
performing a test.
To solve the counting problem under updates,
we aim at a routine count which outputs the cardinality
$|q(D)|$ of the query result.
The counting time $t_{c}$ is the time used for
performing a count.
To answer a Boolean query under updates,
we aim at a routine answer
that produces the answer yes or no of $q$ on $D$.
The answer time $t_{a}$ is the time used for
performing answer.
Whenever speaking of a dynamic algorithm, we mean an algorithm
that has routines preprocess and update and, depending on the
problem at hand, at least one of the routines
answer,
test,
count,
and enumerate.
Throughout the paper, we often adopt the view of data complexity
and suppress factors that may depend on the query $q$
but not on the database $D$.
E.g., “linear preprocessing time” means
$t_{p}\leqslant f(q)\cdot|\!|{D_{0}}|\!|$ and
“constant update time” means $t_{u}\leqslant f(q)$, for a
function $f$ with codomain $\mathbb{N}$.
When writing $\operatorname{\textit{poly}}(n)$ we mean $n^{O(1)}$, and
for a query $q$ we often write $\operatorname{\textit{poly}}(q)$ instead of $\operatorname{\textit{poly}}(|\!|q|\!|)$.
Algorithmic conjectures.
Similarly as in [5] we obtain hardness
results that are conditioned on algorithmic conjectures concerning the
hardness of the following problems. These problems deal with
Boolean matrices and vectors, i.e., matrices and vectors over
$\{0,1\}$, and all the arithmetic is done over the Boolean semiring,
where multiplication means conjunction and addition means disjunction.
The orthogonal vectors
problem (OV-problem) is the following decision problem. Given two sets $U$
and $V$ of $n$ Boolean vectors of dimension $d$, decide whether there
are vectors $\vec{u}\in U$ and $\vec{v}\in V$ such that
$\vec{u}^{\,\mskip-1.5mu \mathsf{T}}\vec{v}=0$.
The OV-conjecture states that there is no $\epsilon>0$ such that
the OV-problem for $d=\lceil\log^{2}n\rceil$ can be solved in time
$O(n^{2-\epsilon})$, see [1].
The online matrix-vector multiplication problem (OMv-problem) is the
following algorithmic task. At first, the algorithm gets a Boolean
$n\times n$ matrix $M$ and is allowed to do some
preprocessing. Afterwards, the algorithm receives $n$ vectors
$\vec{v}^{\,1},\ldots,\vec{v}^{\,n}$ one by one and has to output
$M\vec{v}^{\,t}$ before it has access to $\vec{v}^{\,t+1}$ (for each
$t<n$).
The running time is the overall time the algorithm needs to produce
the output $M\vec{v}^{\,1},\ldots,M\vec{v}^{\,n}$.
The OMv-conjecture [18] states that there is no
$\epsilon>0$ such that the OMv-problem can be solved in time $O(n^{3-\epsilon})$.
A related problem is the OuMv-problem where the
algorithm, again, is given a Boolean $n\times n$ matrix $M$ and is
allowed to do some preprocessing. Afterwards, the algorithm receives a
sequence of pairs of $n$-dimensional Boolean vectors
$\vec{u}^{\,t},\vec{v}^{\,t}$ for each $t\in[n]$, and the task is to
compute $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$ before accessing
$\vec{u}^{\,t+1},\vec{v}^{\,t+1}$.
The OuMv-conjecture states that there is no $\epsilon>0$ such
that the OuMv-problem can be solved in time
$O(n^{3-\epsilon})$.
It was shown in [18] that the OuMv-conjecture is
equivalent to the OMv-conjecture, i.e.,
the OuMv-conjecture fails if, and only if, the OMv-conjecture fails.
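For intuition, the obvious baseline for the OuMv-problem spends $O(n^{2})$ time per round and hence $O(n^{3})$ overall; the conjectures state that no algorithm beats this by a polynomial factor. A Python sketch of one round (illustrative only):

```python
def oumv_round(M, u, v):
    """Compute u^T M v over the Boolean semiring (and = product, or = sum)."""
    n = len(M)
    return any(u[i] and M[i][j] and v[j]
               for i in range(n) for j in range(n))

M = [[0, 1],
     [0, 0]]
# u picks row 1, v picks column 2, and M_{1,2} = 1:
assert oumv_round(M, [1, 0], [0, 1])
# no selected entry of M is 1 here:
assert not oumv_round(M, [0, 1], [0, 1])
```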
3 Conjunctive queries
This section’s aim is twofold: Firstly, we observe that the notions and
results of [5] generalise to
CQs with constants in a straightforward way.
Secondly, we identify a new subclass of CQs which precisely
characterises the CQs for which testing can be done efficiently
under updates.
The definition of q-hierarchical CQs can be taken verbatim from
[5]:
Definition 3.1.
A CQ $q$ is q-hierarchical if for any two variables
$x,y\in\textrm{vars}(q)$ we have
(i)
$\textrm{atoms}(x)\subseteq\textrm{atoms}(y)$
or $\textrm{atoms}(y)\subseteq\textrm{atoms}(x)$
or
$\textrm{atoms}(x)\cap\textrm{atoms}(y)=\emptyset$,
and
(ii)
if $\textrm{atoms}(x)\varsubsetneq\textrm{atoms}(y)$ and $x\in\textrm{free}(q)$, then $y\in\textrm{free}(q)$.
Obviously, it can be checked in time $\operatorname{\textit{poly}}(q)$ whether a given CQ $q$
is q-hierarchical.
It is straightforward to see that if a CQ is q-hierarchical, then so
is its homomorphic core.
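The $\operatorname{\textit{poly}}(q)$ check of Definition 3.1 can be sketched as follows (our encoding: atoms are indexed by position, so that in a self-join the sets $\textrm{atoms}(x)$ distinguish repeated relation symbols; variables are strings, constants are integers):

```python
def atoms_of(atoms, x):
    """Indices of the atoms in which variable x occurs."""
    return {i for i, (_, terms) in enumerate(atoms) if x in terms}

def is_q_hierarchical(atoms, free):
    """Check conditions (i) and (ii) of Definition 3.1 for all variable pairs."""
    vs = {t for _, terms in atoms for t in terms if isinstance(t, str)}
    for x in vs:
        for y in vs:
            ax, ay = atoms_of(atoms, x), atoms_of(atoms, y)
            if not (ax <= ay or ay <= ax or not (ax & ay)):
                return False                     # condition (i) violated
            if ax < ay and x in free and y not in free:
                return False                     # condition (ii) violated
    return True

# S x /\ E x y /\ T y is not q-hierarchical: atoms(x) and atoms(y) overlap
# in E x y but neither contains the other ...
set_q = [("S", ("x",)), ("E", ("x", "y")), ("T", ("y",))]
assert not is_q_hierarchical(set_q, {"x", "y"})
# ... while S x /\ E x y with both variables free is q-hierarchical.
assert is_q_hierarchical([("S", ("x",)), ("E", ("x", "y"))], {"x", "y"})
```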
Using the main results of [5], it is not
difficult to show the following.
Theorem 3.2.
(a)
There is a dynamic algorithm that receives a q-hierarchical $k$-ary CQ $q$ and a $\sigma$-db ${D_{0}}$, and computes
within $t_{p}=\operatorname{\textit{poly}}({q})\cdot O(|\!|{D_{0}}|\!|)$ preprocessing time a data structure that can be
updated in time $t_{u}=\operatorname{\textit{poly}}({q})$ and allows to
(i)
compute the cardinality $|q(D)|$ in time $t_{c}=O(1)$,
(ii)
enumerate $q(D)$ with delay $t_{d}=\operatorname{\textit{poly}}({q})$,
(iii)
test for an input tuple $\overline{a}\in\textbf{dom}^{k}$ if $\overline{a}\in q(D)$ within time
$t_{t}=\operatorname{\textit{poly}}({q})$,
(iv)
and when given a tuple $\overline{a}\in q(D)$, the tuple $\overline{a}^{\prime}$ (or
the message EOE) that the enumeration procedure of
(aii) would output directly after
having output $\overline{a}$, can be computed within time
$\operatorname{\textit{poly}}({q})$.
(b)
Let $\varepsilon>0$ and let $q$ be a CQ whose homomorphic core is not
q-hierarchical
(note that this is the case if, and only if, $q$ is not equivalent to a q-hierarchical CQ).
(i)
If $q$ is Boolean, then there is no dynamic algorithm with arbitrary
preprocessing time and $t_{u}=O(n^{1-\varepsilon})$
update time that
answers
$q(D)$ in time
$t_{a}=O(n^{2-\varepsilon})$, unless the OMv-conjecture fails.
(ii)
There is no dynamic algorithm with arbitrary
preprocessing time and $t_{u}=O(n^{1-\varepsilon})$
update time that
computes the cardinality $|q(D)|$ in time
$t_{c}=O(n^{1-\varepsilon})$, unless the OMv-conjecture or the OV-conjecture fails.
(iii)
If $q$ is self-join free, then there is no dynamic algorithm with
arbitrary
preprocessing time and $t_{u}=O(n^{1-\varepsilon})$
update time that
enumerates
$q(D)$
with delay $t_{d}=O(n^{1-\varepsilon})$, unless the OMv-conjecture fails.
All lower bounds remain true, if we restrict ourselves to the class of
databases that map homomorphically into $q$.
Proof.
From [5] we already know that the theorem’s
statements (ai) and
(aii)
and
(bi)–(biii)
are true for all CQs $q$ with $\textrm{cons}(q)=\emptyset$,
and a close look at the dynamic algorithm provided in
[5] shows that also the statements
(aiii) and (aiv) are true for
all CQs $q$ with $\textrm{cons}(q)=\emptyset$.
Furthermore, a close inspection of the proofs provided in
[5] for the statements
(bi)–(biii)
for constant-free CQs $q$ shows that with only very minor
modifications these proofs carry over to the case of CQs $q$ with
$\textrm{cons}(q)\neq\emptyset$.
All that remains to be done is to transfer the results
(ai)–(aiv)
from constant-free CQs to CQs $q$ with $\textrm{cons}(q)\neq\emptyset$.
To establish this,
let us consider an arbitrary CQ $q$ of schema $\sigma$ with
$\textrm{cons}(q)\neq\emptyset$.
Without loss of generality we can assume that
$$q\ =\ \ \{\ (x_{1},\ldots,x_{k},b_{1},\ldots,b_{\ell})\ :\ \varphi\ \}$$
(1)
where $\varphi$ is a conjunctive formula of schema $\sigma$,
$\textrm{free}(\varphi)=\{x_{1},\ldots,x_{k}\}$, and
$b_{1},\ldots,b_{\ell}\in\textbf{dom}$. Let $\overline{x}:=(x_{1},\ldots,x_{k})$ and
$\overline{b}:=(b_{1},\ldots,b_{\ell})$.
In the following, we construct a new schema $\hat{\sigma}$ (that
depends on $q$) and a
constant-free CQ $\hat{q}$ of schema $\hat{\sigma}$ and of size
$\operatorname{\textit{poly}}(|\!|q|\!|)$ such that the
following is true:
1.
$\hat{q}$ is q-hierarchical $\iff$ $q$ is q-hierarchical.
2.
A dynamic algorithm for evaluating $\hat{q}$ on
$\hat{\sigma}$-dbs with initialisation time
$\hat{t}_{i}$, update time $\hat{t}_{u}$, counting time
$\hat{t}_{c}$ (delay $\hat{t}_{d}$, testing time $\hat{t}_{t}$)
can be used to obtain a dynamic algorithm for
evaluating $q$ on $\sigma$-dbs with
initialisation time
$\hat{t}_{i}$, update time $\hat{t}_{u}{\cdot}\operatorname{\textit{poly}}(|\!|q|\!|)$, counting time
$\hat{t}_{c}$ (delay $O(\hat{t}_{d})+\operatorname{\textit{poly}}(|\!|q|\!|)$,
testing time $O(\hat{t}_{t})+\operatorname{\textit{poly}}(|\!|q|\!|)$).
For each atom $\psi$ of $q$ we introduce a new relation symbol
$R_{\psi}$ of arity $|\textrm{vars}(\psi)|$, and we let
$\hat{\sigma}:=\{R_{\psi}\ :\ \psi$ is an atom in $\varphi\}$.
For each atom $\psi$ of $q$ let us fix a tuple
$\overline{v}^{\psi}=(v_{1},\ldots,v_{m})$ of pairwise
distinct variables such that $\textrm{vars}(\psi)=\{v_{1},\ldots,v_{m}\}$.
The CQ $\hat{q}$ is defined as
$$\hat{q}\ :=\ \ \{\ (x_{1},\ldots,x_{k})\ :\ \hat{\varphi}\ \}\,,$$
where the conjunctive formula $\hat{\varphi}$ is obtained from $\varphi$ by
replacing every atom $\psi$ with the atom $R_{\psi}(\overline{v}^{\psi})$.
Obviously, $\hat{q}$ is a CQ of schema $\hat{\sigma}$,
$\textrm{cons}(\hat{q})=\emptyset$, $\textrm{free}(\hat{q})=\textrm{free}(q)$, and
$\textrm{vars}(\hat{q})=\textrm{vars}(q)$.
Furthermore, for every variable $y\in\textrm{vars}(q)$ we
have
$\textrm{atoms}_{\hat{q}}(y)=\{R_{\psi}\ :\ \psi\in\textrm{atoms}_{q}(y)\}$ (and,
equivalently,
$\textrm{atoms}_{q}(y)=\{\psi\ :\ R_{\psi}\in\textrm{atoms}_{\hat{q}}(y)\}$).
Therefore, $\hat{q}$ is q-hierarchical if and only if $q$ is q-hierarchical.
With every $\sigma$-db $D$ we associate a $\hat{\sigma}$-db
$\hat{D}$ as follows:
Consider an atom $\psi$ of $q$ and let $\psi$ be of the form
$Sw_{1}\cdots w_{s}$. Thus, $\{w_{1},\ldots,w_{s}\}\cap\textbf{var}=\textrm{vars}(\psi)=\{v_{1},\ldots,v_{m}\}$ for
$(v_{1},\ldots,v_{m}):=\overline{v}^{\psi}$. Then, the relation symbol
$R_{\psi}$ is interpreted in $\hat{D}$ by the relation
$$(R_{\psi})^{\hat{D}}\ :=\ \ \{\;\big{(}\beta(v_{1}),\ldots,\beta(v_{m})\big{)}\ :\ \text{$\beta$ is a valuation with }\big{(}\beta(w_{1}),\ldots,\beta(w_{s})\big{)}\in S^{D}\;\}\,.$$
It is straightforward to verify that for every $\sigma$-db $D$ we
have
$$q(D)\quad=\quad\{\;(\overline{a},\overline{b})\ :\ \overline{a}\in\hat{q}(\hat{D})\;\}\,.$$
(2)
Now, assume we have available a
dynamic algorithm $\mathcal{A}$ for evaluating $\hat{q}$ on
$\hat{\sigma}$-dbs with preprocessing time
$\hat{t}_{p}$, update time $\hat{t}_{u}$, counting time
$\hat{t}_{c}$ (delay $\hat{t}_{d}$, testing time
$\hat{t}_{t}$).
We can use this algorithm to obtain a dynamic algorithm
$\mathcal{B}$ for
evaluating $q$ on $\sigma$-dbs as follows.
The init routine of $\mathcal{B}$ performs the
init routine of $\mathcal{A}$.
The update routine of $\mathcal{B}$ proceeds as follows.
Upon input of an update command of the form
$\textsf{update}\;S(c_{1},\ldots,c_{s})$ for some $S\in\sigma$,
we consider all atoms $\psi$ of $q$ of the form
$Sw_{1}\cdots w_{s}$.
For each such atom we check if
•
for all $i\in[s]$ with $w_{i}\in\textbf{dom}$ we have $w_{i}=c_{i}$, and
•
for all $i,j\in[s]$ with $w_{i}=w_{j}$ we have $c_{i}=c_{j}$.
If this is true, we carry out the update routine of $\mathcal{A}$ for
the command $\textsf{update}\,R_{\psi}(c_{j_{1}},\ldots,c_{j_{m}})$, where
$(w_{j_{1}},\ldots,w_{j_{m}})=(v_{1},\ldots,v_{m})=\overline{v}^{\psi}$.
Thus, one call of the update routine of $\mathcal{B}$ performs
$\operatorname{\textit{poly}}(|\!|q|\!|)$ calls of the update routine of
$\mathcal{A}$. This takes time $\hat{t}_{u}{\cdot}\operatorname{\textit{poly}}(|\!|q|\!|)$
and ensures that afterwards, the data structure of $\mathcal{B}$ has
stored the information concerning the $\hat{\sigma}$-db $\hat{D}$
associated with the updated $\sigma$-db $D$.
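The update translation just described can be sketched in Python as follows (illustrative only: atoms are pairs of a relation name and a term tuple, variables are strings, constants are integers, and we identify $R_{\psi}$ by the index of $\psi$):

```python
def translate_update(atoms, command, S, c):
    """Translate `command S(c)` into the updates on the relations R_psi."""
    out = []
    for idx, (rel, w) in enumerate(atoms):
        if rel != S or len(w) != len(c):
            continue
        # constants occurring in the atom must agree with the update ...
        if any(isinstance(wi, int) and wi != ci for wi, ci in zip(w, c)):
            continue
        # ... and repeated variables must receive equal constants
        binding = {}
        if any(binding.setdefault(wi, ci) != ci
               for wi, ci in zip(w, c) if isinstance(wi, str)):
            continue
        vbar = sorted(binding)            # a fixed ordering of vars(psi)
        out.append((command, ("R", idx), tuple(binding[v] for v in vbar)))
    return out

atoms = [("S", ("x", "x")), ("S", ("x", "y"))]
# S(1,2) only matches the second atom (the first needs equal components):
assert translate_update(atoms, "insert", "S", (1, 2)) == \
    [("insert", ("R", 1), (1, 2))]
# S(1,1) matches both atoms:
assert len(translate_update(atoms, "insert", "S", (1, 1))) == 2
```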
The count routine of $\mathcal{B}$ simply calls the count
routine of $\mathcal{A}$, and we know that the result is correct since
$|q(D)|=|\hat{q}(\hat{D})|$ due to (2).
For the same reason, the enumerate routine of $\mathcal{B}$ can
call the enumerate routine of $\mathcal{A}$ and output the tuple
$(\overline{a},\overline{b})$ for each output tuple $\overline{a}$ of $\mathcal{A}$.
The test routine of $\mathcal{B}$ upon input of a tuple
$(c_{1},\ldots,c_{k+\ell})\in\textbf{dom}^{k+\ell}$ outputs yes if
$(c_{k+1},\ldots,c_{k+\ell})=\overline{b}$ and the test routine of
$\mathcal{A}$ returns yes upon input of the tuple $(c_{1},\ldots,c_{k})$.
For statement (aiv) of Theorem 3.2,
when given a tuple $(\overline{a},\overline{b})\in q(D)$ we know that
$\overline{a}\in\hat{q}(\hat{D})$. Thus, we can use $\overline{a}$ and call the corresponding routine of
$\mathcal{A}$ for (aiv) and obtain a tuple
$\overline{a}^{\prime}\in\hat{q}(\hat{D})$ (or the message EOE) and know that $(\overline{a}^{\prime},\overline{b})$ is
the next tuple that the enumerate routine of $\mathcal{B}$ will output
after having output the tuple $(\overline{a},\overline{b})$ (or that there is no
such tuple).
Note that this suffices to transfer the statements
(ai)–(aiv) from a
q-hierarchical CQ $\hat{q}$ with $\textrm{cons}(\hat{q})=\emptyset$ to
the q-hierarchical CQ $q$ with $\textrm{cons}(q)\neq\emptyset$.
This completes the proof of Theorem 3.2.
∎
Note that neither the results of [5] nor
Theorem 3.2 provide a precise
characterisation of the CQs for which testing can be done efficiently
under updates.
Of course, according to Theorem 3.2 (aiii), the testing
problem can be solved with constant update time and constant testing
time for every
q-hierarchical CQ. But the same holds true, for example, for
the non-q-hierarchical CQ
$p_{\textit{S-E-T}}:=\{\,(x,y)\,:\,Sx\,\wedge\,Exy\,\wedge\,Ty\,\}\,$.
The corresponding dynamic algorithm simply uses 1-dimensional arrays
$\texttt{A}_{S}$ and
$\texttt{A}_{T}$ and a 2-dimensional array $\texttt{A}_{E}$, and ensures that
for all $a,b\in\textbf{dom}$ we have
$\texttt{A}_{S}[a]=1$ if $a\in S^{D}$, and $\texttt{A}_{S}[a]=0$ otherwise,
$\texttt{A}_{T}[a]=1$ if $a\in T^{D}$, and $\texttt{A}_{T}[a]=0$
otherwise, and
$\texttt{A}_{E}[a,b]=1$ if $(a,b)\in E^{D}$, and $\texttt{A}_{E}[a,b]=0$ otherwise.
When given an update command, the arrays can be updated within
constant time. And when given a tuple $(a,b)\in\textbf{dom}^{2}$, the test
routine simply looks up the array entries $\texttt{A}_{S}[a]$,
$\texttt{A}_{E}[a,b]$, $\texttt{A}_{T}[b]$ and returns the correct
query result accordingly.
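The array-based scheme just described can be written out in a few lines. The following is a minimal Python sketch (not code from the paper): dictionaries stand in for the constant-access arrays $\texttt{A}_{S}$, $\texttt{A}_{T}$, $\texttt{A}_{E}$, and the class name is invented for the illustration.

```python
# Minimal sketch of the array-based dynamic testing algorithm for
# p_{S-E-T} = {(x,y) : Sx ∧ Exy ∧ Ty}. Python dicts stand in for the
# constant-access arrays A_S, A_T, A_E; all names here are invented.

class TestSET:
    def __init__(self):
        self.A_S, self.A_T, self.A_E = {}, {}, {}

    def update(self, rel, tup, insert=True):
        # constant-time update of the array for relation rel
        arr = {"S": self.A_S, "T": self.A_T, "E": self.A_E}[rel]
        if insert:
            arr[tup] = 1
        else:
            arr.pop(tup, None)

    def test(self, a, b):
        # three constant-time lookups decide whether (a,b) ∈ p_{S-E-T}(D)
        return (self.A_S.get(a, 0) == 1
                and self.A_E.get((a, b), 0) == 1
                and self.A_T.get(b, 0) == 1)
```

Both `update` and `test` clearly run in constant time per call, mirroring the constant update and testing time available for this non-q-hierarchical query.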
To characterise the conjunctive queries for which testing can be done
efficiently under updates, we introduce the following notion of
t-hierarchical CQs.
Definition 3.3.
A CQ $q$ is t-hierarchical if the following is satisfied:
(i)
for all $x,y\in\textrm{vars}(q)\setminus\textrm{free}(q)$, we have
$\textrm{atoms}(x)\subseteq\textrm{atoms}(y)$
or $\textrm{atoms}(y)\subseteq\textrm{atoms}(x)$
or
$\textrm{atoms}(x)\cap\textrm{atoms}(y)=\emptyset$,
and
(ii)
for all $x\in\textrm{free}(q)$ and all $y\in\textrm{vars}(q)\setminus\textrm{free}(q)$, we
have
$\textrm{atoms}(x)\cap\textrm{atoms}(y)=\emptyset$ or $\textrm{atoms}(y)\subseteq\textrm{atoms}(x)$.
Obviously, it can be checked in time $\operatorname{\textit{poly}}(q)$ whether a given CQ $q$
is t-hierarchical.
Note that every q-hierarchical CQ is t-hierarchical, and
a Boolean query is t-hierarchical if and only if it is q-hierarchical.
The queries $p_{\textit{S-E-T}}$ and
$p_{\textit{E-E-R}}:=\{\,(x,y)\,:\,\exists v_{1}\exists v_{2}\exists v_{3}\,(\,Exv_{1}\,\wedge\,Eyv_{2}\,\wedge\,Rxyv_{3}\,)\,\}$
are examples for queries that are t-hierarchical but not
q-hierarchical.
It is straightforward to verify that if a CQ is t-hierarchical, then so
is its homomorphic core.
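The two conditions of Definition 3.3 can be checked with a direct double loop. The following Python sketch is our own illustration (not the paper's algorithm); it assumes a CQ is given as a list of atoms, each represented by the set of variables it contains, together with its set of free variables.

```python
# Direct poly-time check of Definition 3.3. A CQ is modelled as a list
# of atoms (each a set of variable names) plus its set of free
# variables; at(x) is the set of indices of atoms containing x.

def is_t_hierarchical(atoms, free):
    def at(x):
        return {i for i, a in enumerate(atoms) if x in a}
    quantified = set().union(*atoms) - free
    # condition (i): quantified variables must be pairwise hierarchical
    for x in quantified:
        for y in quantified:
            ax, ay = at(x), at(y)
            if not (ax <= ay or ay <= ax or not (ax & ay)):
                return False
    # condition (ii): a free x and a quantified y may share atoms
    # only if atoms(y) ⊆ atoms(x)
    for x in free:
        for y in quantified:
            ax, ay = at(x), at(y)
            if (ax & ay) and not (ay <= ax):
                return False
    return True
```

On the examples above, $p_{\textit{S-E-T}}$ (no quantified variables) and $p_{\textit{E-E-R}}$ pass the check, while a query such as $\{\,(x)\,:\,\exists y\,(\,Exy\,\wedge\,Ty\,)\,\}$ fails condition (ii).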
This section’s main result shows that the
t-hierarchical CQs
precisely characterise the CQs for which the testing problem
can be solved efficiently under updates:
Theorem 3.4.
(a)
There is a dynamic algorithm that receives a t-hierarchical
$k$-ary CQ $q$ and a $\sigma$-db ${D_{0}}$, and computes within
$t_{p}=\operatorname{\textit{poly}}({q})\cdot O(|\!|{D_{0}}|\!|)$ preprocessing time a data structure that can be
updated in time $t_{u}=\operatorname{\textit{poly}}({q})$ and allows to
test for an input tuple $\overline{a}\in\textbf{dom}^{k}$ if $\overline{a}\in q(D)$ within
time $t_{t}=\operatorname{\textit{poly}}({q})$.
(b)
Let $\epsilon>0$ and let $q$ be a $k$-ary CQ whose homomorphic core
is not t-hierarchical (note that this is the case if, and only
if, $q$ is not equivalent to a t-hierarchical CQ).
There is no dynamic algorithm with arbitrary preprocessing time and $t_{u}=O(n^{1-\epsilon})$
update time that can test for any input tuple $\overline{a}\in\textbf{dom}^{k}$ if
$\overline{a}\in q(D)$ within testing time
$t_{t}=O(n^{1-\epsilon})$, unless the OMv-conjecture
fails.
The lower bound remains true even if we restrict ourselves to the class of
databases that map homomorphically into $q$.
Proof.
To avoid notational clutter, and without loss of generality, we
restrict attention
to queries $q_{\varphi}(u_{1},\ldots,u_{k})$ where
$(u_{1},\ldots,u_{k})$ is of the form $(z_{1},\ldots,z_{k})$ for pairwise
distinct variables $z_{1},\ldots,z_{k}$.
For the proof of (a), we combine the
array construction described above for the example query $p_{\textit{S-E-T}}$ with
the dynamic algorithm provided by
Theorem 3.2 (a)
and the
following Lemma 3.5.
To formulate the lemma, we need the following notation.
A $k$-ary generalised CQ is of the form
$\{\,(z_{1},\ldots,z_{k})\,:\,\varphi_{1}\,\wedge\,\cdots\,\wedge\,\varphi_{m}\,\}$ where $k\geqslant 0$, $z_{1},\ldots,z_{k}$ are pairwise distinct variables, $m\geqslant 1$, $\varphi_{j}$ is a
conjunctive formula for each $j\in[m]$,
$\textrm{free}(\varphi_{1})\cup\cdots\cup\textrm{free}(\varphi_{m})=\{z_{1},\ldots,z_{k}\}$, and the quantified
variables of $\varphi_{j}$ and $\varphi_{j^{\prime}}$ are pairwise disjoint for all
$j,j^{\prime}\in[m]$ with $j\neq j^{\prime}$ and disjoint from
$\{z_{1},\ldots,z_{k}\}$.
For each $j\in[m]$ let $\overline{z}^{(j)}$ be the
sublist of $\overline{z}:=(z_{1},\ldots,z_{k})$ that only contains the variables in $\textrm{free}(\varphi_{j})$.
I.e., $\overline{z}^{(j)}$ is obtained from $\overline{z}$ by deleting all variables
that do not belong to $\textrm{free}(\varphi_{j})$. Accordingly, for a tuple
$\overline{a}=(a_{1},\ldots,a_{k})\in\textbf{dom}^{k}$ we write $\overline{a}^{(j)}$ for the tuple
that contains exactly those $a_{i}$ where $z_{i}$ belongs to $\overline{z}^{(j)}$.
The query result of $q$ on a $\sigma$-db $D$ is the set
$$q(D)\ :=\ \ \{\ \overline{a}\in\textbf{dom}^{k}\ :\ D\models\varphi_{j}[\overline{a}^{(j)}]\text{ \ for each $j\in[m]$}\ \}\,,$$
where $D\models\varphi_{j}[\overline{a}^{(j)}]$ means that
there is a homomorphism
$\beta_{j}:q_{j}\to D$ for the query
$q_{j}:=\{\,\overline{z}^{(j)}\;:\;\varphi_{j}\,\}$, with $\beta_{j}(z_{i})=a_{i}$
for every $i$ with $z_{i}\in\textrm{free}(\varphi_{j})$.
For example,
$p^{\prime}_{\textit{E-E-R}}:=\{\;(x,y)\;:\;\exists v_{1}\,Exv_{1}\ \wedge\ \exists v_{2}\,Eyv_{2}\ \wedge\ \exists v_{3}\,Rxyv_{3}\;\}$ is a generalised CQ that is equivalent to the CQ $p_{\textit{E-E-R}}$.
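The semantics of a generalised CQ factorises over its conjuncts: a tuple belongs to the result if and only if every conjunct accepts its subtuple. The following Python sketch makes this concrete; the encoding of a conjunct as a pair of free-variable positions and a satisfaction predicate is our own, purely for illustration.

```python
# A tuple ā is in q(D) iff each conjunct φ_j is satisfied by the
# subtuple ā^(j). Each conjunct is modelled as (positions, predicate),
# where positions lists the indices of its free variables within z̄.

def in_result(a, conjuncts):
    return all(pred(tuple(a[i] for i in idx)) for idx, pred in conjuncts)
```

With conjuncts built for $p^{\prime}_{\textit{E-E-R}}$, this reproduces the decomposition of the query into one independently evaluable subquery per conjunct.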
Lemma 3.5.
Every t-hierarchical CQ $q_{\varphi}(z_{1},\ldots,z_{k})$ is equivalent to a
generalised CQ
$q^{\prime}=\{\,(z_{1},\ldots,z_{k})\,:\,\varphi_{1}\,\wedge\,\cdots\,\wedge\,\varphi_{m}\,\}$ such that for each $j\in[m]$ the CQ
$q_{j}:=\{\;\overline{z}^{(j)}\,:\,\varphi_{j}\;\}$ is q-hierarchical or quantifier-free.
Furthermore, there is an algorithm which decides in time
$\operatorname{\textit{poly}}({q_{\varphi}})$ whether $q_{\varphi}$ is t-hierarchical, and if so,
outputs a corresponding $q^{\prime}$.
Proof.
Following Definition 3.3, it is straightforward to
construct an algorithm which decides in time $\operatorname{\textit{poly}}({q})$ whether
a given CQ $q$ is t-hierarchical.
Let $q:=q_{\varphi}(z_{1},\ldots,z_{k})$ be a given t-hierarchical CQ.
Let $A_{0}$ be the set of all atoms $\psi$ of $q$ with
$\textrm{vars}(\psi)\subseteq\textrm{free}(q)$, and let $\varphi_{0}$ be the
quantifier-free conjunctive formula
$$\varphi_{0}\ :=\ \ \bigwedge_{\psi\in A_{0}}\psi\,.$$
For each $Z\subseteq\textrm{free}(q)$ let $A_{Z}$ be the set of all atoms
$\psi$ of $q$ such that $\textrm{vars}(\psi)\varsupsetneq\textrm{vars}(\psi)\cap\textrm{free}(q)=Z$.
Let $Z_{1},\ldots,Z_{n}$ (for $n\geqslant 0$) be a list of all those
$Z\subseteq\textrm{free}(q)$ with $A_{Z}\neq\emptyset$.
For each $j\in[n]$ let $A_{j}:=A_{Z_{j}}$ and let
$Y_{j}:=\big{(}\bigcup_{\psi\in A_{j}}\textrm{vars}(\psi)\big{)}\setminus Z_{j}$.
Claim 3.6.
$Y_{j}\cap Y_{j^{\prime}}=\emptyset$ for all $j,j^{\prime}\in[n]$ with $j\neq j^{\prime}$.
Proof.
We know that $Z_{j}\neq Z_{j^{\prime}}$. W.l.o.g. there is a $z\in Z_{j}$ with
$z\not\in Z_{j^{\prime}}$.
For contradiction, assume that $Y_{j}\cap Y_{j^{\prime}}$ contains some variable
$y$.
Then, $y\in\textrm{vars}(\psi)$ for some $\psi\in A_{j}$ and
$y\in\textrm{vars}(\psi^{\prime})$ for some $\psi^{\prime}\in A_{j^{\prime}}$.
By definition of $A_{j}$ we know that $\textrm{vars}(\psi)\cap\textrm{free}(q)=Z_{j}$, and
hence $z\in\textrm{vars}(\psi)$.
By definition of $A_{j^{\prime}}$ we know that $\textrm{vars}(\psi^{\prime})\cap\textrm{free}(q)=Z_{j^{\prime}}$, and
hence $z\not\in\textrm{vars}(\psi^{\prime})$.
Hence, $\psi\in\textrm{atoms}(z)$ and $\psi^{\prime}\not\in\textrm{atoms}(z)$.
Since $\psi\in\textrm{atoms}(y)$ and $\psi^{\prime}\in\textrm{atoms}(y)$, we obtain that
$\textrm{atoms}(z)\cap\textrm{atoms}(y)\neq\emptyset$ and
$\textrm{atoms}(y)\not\subseteq\textrm{atoms}(z)$.
But by assumption, $q$ is t-hierarchical, and this contradicts
condition (ii) of
Definition 3.3.
∎
For each $j\in[n]$ consider the
conjunctive formula
$$\varphi_{j}\ :=\ \ \exists y_{1}^{(j)}\cdots\exists y_{\ell_{j}}^{(j)}\ \bigwedge_{\psi\in A_{j}}\psi\,,$$
where
$\ell_{j}:=|Y_{j}|$ and
$(y_{1}^{(j)},\ldots,y_{\ell_{j}}^{(j)})$ is a list of all variables in $Y_{j}$.
Using Claim 3.6, it is straightforward
to see that
$$q^{\prime}\ :=\ \ \{\ (z_{1},\ldots,z_{k})\ :\ \varphi_{0}\,\wedge\,\bigwedge_{j\in[n]}\varphi_{j}\ \}$$
is a generalised CQ that is equivalent to $q$.
Furthermore, $q^{\prime}$ can be constructed in time $\operatorname{\textit{poly}}({q})$.
To complete the proof of Lemma 3.5 we
consider for
each $j\in[n]$ the CQ
$$q_{j}\ :=\ \ \{\ \overline{z}^{(j)}\ :\ \varphi_{j}\ \}\,,$$
where $\overline{z}^{(j)}$ is a tuple of length $|Z_{j}|$
consisting of all the variables in $Z_{j}$.
Claim 3.7.
$q_{j}$ is q-hierarchical, for each $j\in[n]$.
Proof.
First of all, note that $q_{j}$ satisfies condition (ii) of
Definition 3.1, since
$\textrm{free}(q_{j})=Z_{j}$, $\textrm{atoms}_{q_{j}}(z)=A_{j}$ for every $z\in Z_{j}$, and
$\textrm{atoms}_{q_{j}}(y)\subseteq A_{j}$ for every $y\in Y_{j}=\textrm{vars}(q_{j})\setminus\textrm{free}(q_{j})$.
For contradiction, assume that $q_{j}$ is not q-hierarchical.
Then, $q_{j}$ violates condition (i) of
Definition 3.1. I.e.,
there are variables $x,x^{\prime}\in Z_{j}\cup Y_{j}$ and atoms $\psi_{1},\psi_{2},\psi_{3}\in A_{j}$ such that
$\textrm{vars}(\psi_{1})\cap\{x,x^{\prime}\}=\{x\}$,
$\textrm{vars}(\psi_{2})\cap\{x,x^{\prime}\}=\{x^{\prime}\}$, and
$\textrm{vars}(\psi_{3})\cap\{x,x^{\prime}\}=\{x,x^{\prime}\}$.
Since $\textrm{vars}(\psi)\cap\textrm{free}(q)=Z_{j}$ for all $\psi\in A_{j}$, we
know that $x,x^{\prime}\not\in\textrm{free}(q)$. Therefore,
$x,x^{\prime}\in\textrm{vars}(q)\setminus\textrm{free}(q)$, and hence $\psi_{1}$, $\psi_{2}$,
$\psi_{3}$ are atoms of $q$ which witness that condition (i) of
Definition 3.3 is violated. This contradicts the
assumption that $q$ is t-hierarchical.
∎
This completes the proof of Lemma 3.5.
∎
The proof of
Theorem 3.4 (a) now
follows easily: When given a t-hierarchical CQ
$q_{\varphi}(z_{1},\ldots,z_{k})$, use the algorithm provided by
Lemma 3.5 to compute an equivalent generalised CQ $q^{\prime}$ of
the form $\{(z_{1},\ldots,z_{k})\ :\ \varphi_{1}\wedge\cdots\wedge\varphi_{m}\}$
and let $q_{j}:=\{\overline{z}^{(j)}\ :\ \varphi_{j}\}$ for
each $j\in[m]$.
W.l.o.g. assume that there is an $m^{\prime}\in\{0,\ldots,m\}$ such that
$q_{j}$ is q-hierarchical for each $j\leqslant m^{\prime}$ and $q_{j}$ is
quantifier-free for each $j>m^{\prime}$.
We use in parallel, for each $j\leqslant m^{\prime}$, the data structures provided
by Theorem 3.2 (a) for the
q-hierarchical CQ $q_{j}$.
In addition to this, we use an $r$-dimensional array
$\texttt{A}_{R}$ for each relation symbol $R\in\sigma$ of arity
$r:=\operatorname{ar}(R)$, and we ensure
that for all $\overline{b}\in\textbf{dom}^{r}$ we have $\texttt{A}_{R}[\overline{b}]=1$
if $\overline{b}\in R^{D}$, and $\texttt{A}_{R}[\overline{b}]=0$ otherwise.
When receiving an update command $\textsf{update}\,R(\overline{b})$,
we let $\texttt{A}_{R}[\overline{b}]:=1$ if $\textsf{update}=\textsf{insert}$, and
$\texttt{A}_{R}[\overline{b}]:=0$ if $\textsf{update}=\textsf{delete}$, and in addition to
this, we call
the update routines of the data structure for $q_{j}$ for each $j\leqslant m^{\prime}$.
Upon input of a tuple $\overline{a}\in\textbf{dom}^{k}$, the test routine
proceeds as follows.
For each $j\leqslant m^{\prime}$, it calls the test routine of the data
structure for $q_{j}$ upon input $\overline{a}^{(j)}$. Additionally,
it uses the arrays $\texttt{A}_{R}$ for all $R\in\sigma$ to
check for each $j>m^{\prime}$ whether the quantifier-free query $q_{j}$ is
satisfied by the tuple $\overline{a}^{(j)}$. All this is done within time
$\operatorname{\textit{poly}}({q})$, and we know that $\overline{a}\in q(D)$ if, and only
if, all these tests succeed.
This completes the proof of part (a) of Theorem 3.4.
Let us now turn to the proof of part (b) of Theorem 3.4.
We are given a query $q:=q_{\varphi}(z_{1},\ldots,z_{k})$ and without loss
of generality we assume that $q$ is a homomorphic
core that is not t-hierarchical.
Thus, $q$ violates condition (i) or
(ii) of Definition 3.3.
In case that it violates condition (i), the proof
is virtually identical to the proof of Theorem 3.4 in
[5]; for the reader’s convenience, the proof
details are given in Appendix A.
Let us consider the case where $q$ violates
condition (ii) of Definition 3.3.
In this case, there are two variables $x\in\textrm{free}(q)$ and
$y\in\textrm{vars}(q)\setminus\textrm{free}(q)$
and two atoms
$\psi^{x,y}$ and $\psi^{y}$ of $q$
with
$\textrm{vars}(\psi^{x,y})\cap\{x,y\}=\{x,y\}$ and
$\textrm{vars}(\psi^{y})\cap\{x,y\}=\{y\}$.
The easiest example of a query for which this is true is
$q_{\textit{E-T}}:=\{\,(x)\,:\,\exists y\,(\,Exy\,\wedge\,Ty\,)\,\}\,.$ Here, we illustrate the proof idea for the particular query
$q_{\textit{E-T}}$; a proof for the general case is given in Appendix A.
Assume that there is a dynamic algorithm that solves
the testing problem for $q_{\textit{E-T}}$ with update time
$t_{u}=O(n^{1-\epsilon})$ and testing time
$t_{t}=O(n^{1-\epsilon})$ on databases whose active domain
is of size $O(n)$.
We show how this algorithm can be used to
solve the OuMv-problem.
For the OuMv-problem, we receive as input
an $n\times n$ matrix $M$. We start the
preprocessing phase of our testing algorithm for $q_{\textit{E-T}}$ with the
empty database $D=(E^{D},T^{D})$ where
$E^{D}=T^{D}=\emptyset$.
As this database has constant size, the preprocessing is finished in
constant time.
We then apply $O(n^{2})$ update steps to ensure that
$E^{D}=\{(i,j)\ :\ M_{i,j}=1\}$.
All this takes time at most $O(n^{2})\cdot t_{u}=O(n^{3-\epsilon})$.
Throughout the remainder of the construction, we will never change
$E^{D}$, and we will always ensure that
$T^{D}\subseteq[n]$.
When we receive two vectors $\vec{u}^{\,t}$ and $\vec{v}^{\,t}$ in the
dynamic phase of the OuMv-problem, we proceed as follows. First,
we perform the update commands $\textsf{delete}\,T(j)$ for each $j\in[n]$
with $\vec{v}_{j}^{\,t}=0$, and
the update commands $\textsf{insert}\,T(j)$ for each $j\in[n]$
with $\vec{v}_{j}^{\,t}=1$.
This is done within time $n\cdot t_{u}=O(n^{2-\epsilon})$.
By construction of $D$ we know that for every $i\in[n]$ we have
$$i\ \in\ q_{\textit{E-T}}(D)\quad\iff\quad\text{there is a $j\in[n]$ such that $M_{i,j}=1$ and $\vec{v}_{j}^{\,t}=1$}\,.$$
Thus, $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}=1$ if, and only if, there is an $i\in[n]$ with $\vec{u}_{i}^{\,t}=1$ and $i\in q_{\textit{E-T}}(D)$.
Therefore, after having called the test routine for $q_{\textit{E-T}}$ for each
$i\in[n]$ with $\vec{u}_{i}^{\,t}=1$, we can output the correct result
of $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$.
This takes time at most $n\cdot t_{t}=O(n^{2-\epsilon})$.
I.e., for each $t\in[n]$ after receiving the vectors $\vec{u}^{\,t}$
and $\vec{v}^{\,t}$, we can output $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$ within time $O(n^{2-\epsilon})$.
Consequently, the overall running time for solving the OuMv-problem is bounded by $O(n^{3-\epsilon})$.
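The round structure of this reduction can be made concrete in a few lines. In the sketch below, the class `TestET` is a deliberately naive stand-in for the assumed sublinear dynamic testing algorithm (its name and implementation are ours, for illustration only); the reduction logic around it is the one described above.

```python
# Sketch of the reduction from OuMv to dynamic testing of
# q_{E-T} = {(x) : ∃y (Exy ∧ Ty)}. TestET is a naive stand-in for the
# assumed dynamic algorithm; a real one would be sublinear per call.

class TestET:
    def __init__(self):
        self.E, self.T = set(), set()

    def test(self, i):
        # i ∈ q_{E-T}(D) iff (i,j) ∈ E and j ∈ T for some j
        return any((i, j) in self.E for j in self.T)

def oumv_round(ds, n, u, v):
    # encode the vector v in the unary relation T (n update commands)
    ds.T = {j for j in range(n) if v[j]}
    # u^T M v = 1 iff some i with u_i = 1 satisfies i ∈ q_{E-T}(D)
    return any(u[i] and ds.test(i) for i in range(n))
```

In the preprocessing phase, the matrix $M$ is encoded once into `ds.E` via $O(n^{2})$ insertions; each round then costs $O(n)$ update and test calls.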
By using the technical machinery of [5],
this approach can be generalised from $q_{\textit{E-T}}$ to all queries $q$ that
violate condition (ii) of
Definition 3.3; see
Appendix A for details.
This completes the proof of Theorem 3.4.
∎
4 Unions of conjunctive queries
In this section we consider dynamic query evaluation for UCQs.
To transfer our notions of hierarchical queries from CQs to
UCQs, we say that a UCQ $q(\overline{u})$ of the form $q_{1}(\overline{u}_{1})\,\cup\,\cdots\,\cup\,q_{d}(\overline{u}_{d})$
is q-hierarchical (t-hierarchical) if every CQ $q_{i}(\overline{u}_{i})$ in
the union is q-hierarchical (t-hierarchical).
Note that for Boolean queries (CQs as well as UCQs) the notions
of being q-hierarchical and being t-hierarchical coincide,
and for a $k$-ary UCQ $q$ it can be checked in time $\operatorname{\textit{poly}}(q)$ if
$q$ is q-hierarchical or t-hierarchical.
The following theorem generalises the statement of
Theorem 3.4 from CQs to UCQs.
Its proof follows
easily from Theorems 3.4 and 3.2.
Theorem 4.1.
(a)
There is a dynamic algorithm that receives a t-hierarchical $k$-ary
UCQ $q$ and a $\sigma$-db ${D_{0}}$, and computes
within $t_{p}=\operatorname{\textit{poly}}({q})\cdot O(|\!|{D_{0}}|\!|)$ preprocessing time a data structure that can be
updated in time $t_{u}=\operatorname{\textit{poly}}({q})$ and allows to test for an
input tuple $\overline{a}\in\textbf{dom}^{k}$ if $\overline{a}\in q(D)$ within time
$t_{t}=\operatorname{\textit{poly}}({q})$. Furthermore, the algorithm allows to
answer a q-hierarchical Boolean UCQ within time $t_{a}=O(1)$.
(b)
Let $\epsilon>0$ and let $q$ be a $k$-ary UCQ whose
homomorphic core is not t-hierarchical
(note that this is the case if, and only
if, $q$ is not equivalent to a t-hierarchical UCQ).
There is no dynamic algorithm with arbitrary preprocessing time and $t_{u}=O(n^{1-\epsilon})$
update time that can test for any input tuple $\overline{a}\in\textbf{dom}^{k}$ if
$\overline{a}\in q(D)$ within testing time
$t_{t}=O(n^{1-\epsilon})$, unless the OMv-conjecture fails.
Furthermore, if $k=0$ (i.e., $q$ is a Boolean UCQ), then there
is no dynamic algorithm with arbitrary preprocessing time and
$t_{u}=O(n^{1-\varepsilon})$
update time that
answers
$q(D)$ in time
$t_{a}=O(n^{2-\varepsilon})$, unless the OMv-conjecture fails.
Proof.
Part (a)
follows immediately from
Theorem 3.4 (a) (and
Theorem 3.2 (ai) for the statement on
Boolean UCQs),
as we
can maintain all CQs in the union in parallel and then decide
whether at least one of them is satisfied by the current database
and the given tuple.
For the proof of (b) let $q^{\prime}$ be
the homomorphic core of $q$, and let $q^{\prime}$ be of the form
$q_{1}\cup\cdots\cup q_{m}$ for $k$-ary CQs $q_{1},\ldots,q_{m}$.
We first consider the case that $q$ is a Boolean UCQ.
Then, by assumption, $q^{\prime}$ is not q-hierarchical, and hence there
exists an $i\in[m]$ such that the Boolean CQ $q_{i}$
is not q-hierarchical. Suppose for
contradiction that there exists a dynamic algorithm that evaluates
$q$ with $t_{u}=O(n^{1-\varepsilon})$
update time and
$t_{a}=O(n^{2-\varepsilon})$ answer time.
Since $q^{\prime}$ is equivalent to $q$, it
follows that the algorithm also evaluates $q^{\prime}$ with the same time
bounds. Now consider the class of databases that map homomorphically
into $q_{i}$.
For every database $D$ in this class it holds that $q^{\prime}(D)=\texttt{yes}$ if, and
only if, $q_{i}(D)=\texttt{yes}$. This is because every other CQ $q_{j}$ in
$q^{\prime}$ is not satisfied by $D$, since otherwise there would be a
homomorphism from $q_{j}$ to $D$ and therefore, since there is a homomorphism
from $D$ to $q_{i}$, also from $q_{j}$ to $q_{i}$, contradicting that
$q^{\prime}$ is a core.
Hence, the dynamic algorithm evaluates the non-q-hierarchical CQ
$q_{i}$ (which is a homomorphic core), contradicting
Theorem 3.2 (bi).
The statement of (b) concerning
non-Boolean UCQs and the testing problem follows along the same
lines when using
Theorem 3.4 (b)
instead of
Theorem 3.2 (bi).
∎
It turns out that, similarly to q-hierarchical CQs,
q-hierarchical UCQs also allow for efficient enumeration under
updates. This, together with the corresponding lower bound, is stated in the
following Theorem 4.2, which will be proven
at the end of this section.
In contrast to Theorem 4.1, the result does not follow
immediately from the tractability of the enumeration problem for q-hierarchical CQs, because one has
to ensure that tuples from result sets of two different CQs are not
reported twice while enumerating their union.
Theorem 4.2.
(a)
There is a dynamic algorithm that receives a q-hierarchical $k$-ary
UCQ $q$
and a $\sigma$-db ${D_{0}}$, and computes within
$t_{p}=\operatorname{\textit{poly}}(q)\cdot O(|\!|{D_{0}}|\!|)$
preprocessing time a data structure that can be updated in time
$t_{u}=\operatorname{\textit{poly}}(q)$ and allows to
enumerate $q(D)$ with delay $t_{d}=\operatorname{\textit{poly}}(q)$.
(b)
Let $\epsilon>0$ and let $q$ be a $k$-ary UCQ whose homomorphic core
is not q-hierarchical and
is a
union of self-join free CQs.
There is no dynamic
algorithm with arbitrary preprocessing time and
$t_{u}=O(n^{1-\varepsilon})$ update time that enumerates
$q(D)$ with
delay $t_{d}=O(n^{1-\varepsilon})$, unless the OMv-conjecture fails.
Note that according to Theorem 3.2, for CQs the enumeration
problem as well as the counting problem can be solved by efficient
dynamic algorithms if, and (modulo algorithmic conjectures) only if, the query is q-hierarchical.
In contrast to this, it turns out that for UCQs computing the number
of output tuples can be much harder than enumerating the query
result.
To characterise the UCQs that allow for efficient dynamic counting algorithms, we
use the following notation.
For two $k$-ary CQs $q_{\varphi}(u_{1},\ldots,u_{k})$ and
$q_{\psi}(v_{1},\ldots,v_{k})$ we define the intersection $q:=q_{\varphi}\cap q_{\psi}$ to be the following $k$-ary query. If
there is an $i\in[k]$ such that $u_{i}$ and $v_{i}$ are distinct elements
from dom, then $q:=\emptyset$ (and this query is q-hierarchical
by definition). Otherwise, we let $w_{1},\ldots,w_{k}$ be elements from
$\textbf{var}\cup\textbf{dom}$ which satisfy the following for all $i,j\in[k]$ and all
$a\in\textbf{dom}$:
$$\big(\,w_{i}=a\ \iff\ u_{i}=a\text{ or }v_{i}=a\,\big)\quad\text{and}\quad\big(\,w_{i}=w_{j}\ \iff\ u_{i}=u_{j}\text{ or }v_{i}=v_{j}\,\big)\,.$$
We obtain $\varphi^{\prime}$ from $\varphi$ (and $\psi^{\prime}$ from $\psi$) by replacing every
$u_{i}\in\{u_{1},\ldots,u_{k}\}\cap\textrm{free}(\varphi)$ (and $v_{i}\in\{v_{1},\ldots,v_{k}\}\cap\textrm{free}(\psi)$) by $w_{i}$.
Finally, we let $q=\{\ (w_{1},\ldots,w_{k})\ :\ \varphi^{\prime}\,\wedge\,\psi^{\prime}\ \}$, where we can assume w.l.o.g. that
$\varphi^{\prime}\,\wedge\,\psi^{\prime}$ is a conjunctive formula of the form ($*$ ‣ 2).
Note that for every database $D$ it holds that $q(D)=q_{\varphi}(D)\cap q_{\psi}(D)$.
Definition 4.3.
A UCQ $q$ of the form
$\bigcup_{i\in[d]}q_{i}(\overline{u}_{i})$
is exhaustively q-hierarchical if for every
$I\subseteq[d]$ the intersection
$q_{I}=\bigcap_{i\in I}q_{i}$ is equivalent to a q-hierarchical CQ.
It is not difficult to see that
a Boolean UCQ is exhaustively q-hierarchical if and only if its
homomorphic core is
q-hierarchical.
In the non-Boolean case, being exhaustively q-hierarchical is a stronger
requirement than being q-hierarchical as the following example shows: the UCQ
$\{\,(x,y)\,:\,Sx\,\wedge\,Exy\,\}\ \cup\ \{\,(x,y)\,:\,Exy\,\wedge\,Ty\,\}$ is q-hierarchical, but not exhaustively q-hierarchical.
In contrast to the q-hierarchical property, the straightforward way of deciding whether a UCQ $q$ is exhaustively
q-hierarchical requires time $2^{\operatorname{\textit{poly}}(q)}$, and it is open whether this can be improved.
The next theorem shows
that the exhaustively q-hierarchical queries are precisely those UCQs that allow for efficient dynamic
counting algorithms.
Theorem 4.4.
(a)
There is a dynamic algorithm that receives an exhaustively
q-hierarchical UCQ $q$
and a $\sigma$-db ${D_{0}}$, and computes within
$t_{p}=2^{\operatorname{\textit{poly}}(q)}\cdot O(|\!|{D_{0}}|\!|)$
preprocessing time a data structure that can be updated in time
$t_{u}=2^{\operatorname{\textit{poly}}(q)}$
and computes $|q(D)|$ in time $t_{c}=O(1)$.
(b)
Let $\epsilon>0$ and let $q$ be a UCQ
whose homomorphic core is not exhaustively q-hierarchical.
There is no dynamic algorithm with arbitrary preprocessing time and
$t_{u}=O(n^{1-\varepsilon})$ update time that computes
$|q(D)|$
in time $t_{c}=O(n^{1-\varepsilon})$,
unless the
OMv-conjecture or the OV-conjecture fails.
Proof.
To prove part (a) we use the principle of
inclusion-exclusion along with the upper bound of
Theorem 3.2 (ai).
Let $q=\bigcup_{i\in[d]}q_{i}(\overline{u}_{i})$ be an
exhaustively q-hierarchical UCQ.
Our dynamic algorithm for solving the counting problem for $q$
proceeds as follows.
In the preprocessing phase we first compute for every non-empty
$I\subseteq[d]$ the homomorphic core $\widetilde{q_{I}}$ of the CQ
$q_{I}:=\bigcap_{i\in I}q_{i}$. This can be done in time
$2^{\operatorname{\textit{poly}}(q_{I})}$.
Since $q$ is exhaustively q-hierarchical, every $\widetilde{q_{I}}$ is q-hierarchical and we can apply Theorem 3.2 (ai) to
determine the number of result tuples $|\widetilde{q_{I}}(D)|=|q_{I}(D)|$ for
every $I\subseteq[d]$
with $\sum_{I\subseteq[d]}\operatorname{\textit{poly}}(q_{I})=2^{\operatorname{\textit{poly}}(q)}$ update time.
By the principle of inclusion-exclusion we have
$$|q(D)|\ \ =\ \ |\bigcup_{i\in[d]}q_{i}(D)|\ \ =\ \ \sum_{\emptyset\neq I\subseteq[d]}(-1)^{|I|+1}\,{\cdot}\,|\bigcap_{i\in I}q_{i}(D)|\ \ =\ \ \sum_{\emptyset\neq I\subseteq[d]}(-1)^{|I|+1}\,{\cdot}\,|q_{I}(D)|.$$
Therefore, we can compute the number of result tuples in
$q(D)$ by maintaining all $2^{d}-1$ numbers
$|\widetilde{q_{I}}(D)|$ in parallel (for all non-empty
$I\subseteq[d]$).
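The final combination step can be sketched as follows, assuming the maintained counts $|q_{I}(D)|$ are available in a dictionary keyed by the index set $I$ (this interface is an assumption made for the illustration).

```python
# Combine maintained counts |q_I(D)| for all non-empty I ⊆ [d] into
# |q(D)| by inclusion-exclusion; counts maps frozensets to integers.

from itertools import combinations

def union_count(d, counts):
    total = 0
    for r in range(1, d + 1):
        for I in combinations(range(d), r):
            total += (-1) ** (r + 1) * counts[frozenset(I)]
    return total
```

For $d=2$ this is just $|q_{1}(D)|+|q_{2}(D)|-|q_{\{1,2\}}(D)|$; the loop touches all $2^{d}-1$ maintained numbers.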
For the proof of part (b)
let $\widetilde{q}=\bigcup_{i\in[d]}q_{i}(\overline{u}_{i})$ be the
homomorphic core of $q$. Consider the CQs $q_{I}:=\bigcap_{i\in I}q_{i}$ and their homomorphic cores $\widetilde{q_{I}}$ for all non-empty $I\subseteq[d]$. First we take care of equivalent queries and write $I\cong J$ if $q_{I}\equiv q_{J}$.
Let ${\mathcal{I}}$ be a set of index sets $I$ that contains one
representative from each equivalence class
$I/_{\cong}$. By the principle of inclusion-exclusion we have
$$|q(D)|\ \ =\ \ |\widetilde{q}(D)|\ \ =\ \ \sum_{\emptyset\neq{I}\subseteq[d]}(%
-1)^{|{I}|+1}\,{\cdot}\,|q_{{I}}(D)|\ \ =\ \ \sum_{{I}\in{\mathcal{I}}}a_{{I}}%
\,{\cdot}\,|q_{{I}}(D)|\,,$$
where
$a_{{I}}:=\sum_{{J}\colon{J}\cong{I}}(-1)^{|{J}|+1}$. Because $q$ is
not exhaustively q-hierarchical, we can
choose a set ${I}\in{\mathcal{I}}$ such that $\widetilde{q}_{{I}}$ is a
non-q-hierarchical query, which is minimal in the sense that for
every ${J}\in{\mathcal{I}}\setminus\{{I}\}$ there is no
homomorphism from ${q}_{J}$ to ${q}_{I}$. Note that such a
minimal set $I$ exists since otherwise we could find two distinct
$J,J^{\prime}\in{\mathcal{I}}$ such that
$q_{J}\equiv q_{{J}^{\prime}}$.
Now suppose that $D$ is a database from the class of databases
that map homomorphically into $q_{I}$ and let $h\colon D\to q_{I}$
be a homomorphism. For every
${J}\in{\mathcal{I}}\setminus\{{I}\}$ it holds that there is no
homomorphism $h^{\prime}\colon q_{J}\to D$, since otherwise $h\circ h^{\prime}$ would be a homomorphism from $q_{J}$ to $q_{I}$.
Hence, $q_{J}(D)=\emptyset$ for all
${J}\in{\mathcal{I}}\setminus\{{I}\}$ and thus
$|q(D)|=a_{I}\cdot|q_{I}(D)|$. It
follows that we can compute
$|q_{I}(D)|=|\widetilde{q}_{I}(D)|$
by maintaining the value for
$|q(D)|$ and dividing it by $a_{I}$.
Since $\widetilde{q}_{I}$ is a non-q-hierarchical homomorphic core, the lower bound for
maintaining $|q(D)|$ follows from
Theorem 3.2 (bii).
This completes the proof of Theorem 4.4.
∎
The remainder of this section is devoted to the proof of Theorem 4.2.
To prove
Theorem 4.2 (a), we first develop a general
method for enumerating the union of sets.
We say that a data structure for a set $T$ allows to skip if it is
possible to test whether $t\in T$ in constant time and for some
ordering $t_{1},\ldots,t_{n}$ of the elements in $T$ there is
•
a function $\mathsf{start}$, which returns $t_{1}$ in constant time
and
•
a function $\mathsf{next}(t_{i})$, which returns $t_{i+1}$ (if
$i<n$) or EOE (if $i=n$) in constant time.
Note that a data structure that allows to skip enables constant
delay enumeration of $t_{i}$, $t_{i+1}$, …, $t_{n}$ starting from an
arbitrary element $t_{i}\in T$ (but we do
not have control over the underlying order).
An example of such a data structure is an explicit
representation of the elements of $T$ in a linked list with constant access.
Another example is the data structure of the enumeration
algorithm for the result $T:=q(D)$ of a q-hierarchical CQ $q$, provided by Theorem 3.2 (aii)&(aiv).
The next lemma states that we can use these data structures for sets
$T_{j}$ to enumerate the union $\bigcup_{j}T_{j}$ with constant delay and
without repetition.
Lemma 4.5.
Let $\ell\geqslant 1$ and
let $T_{1},\ldots,T_{\ell}$ be sets such that for each
$j\in[\ell]$ there is a data structure for $T_{j}$ that allows to skip.
Then there is an algorithm that enumerates, without repetition, all
elements in
$T_{1}\cup\cdots\cup T_{\ell}$ with $O(\ell)$ delay.
Proof.
For each $i\in[\ell]$ let $\mathsf{start}^{i}$ and $\mathsf{next}^{i}$ be the start element and
the iterator for the set $T_{i}$. The main idea for enumerating the
union $T_{1}\cup\cdots\cup T_{\ell}$ is to first enumerate all
elements in $T_{1}$, and then $T_{2}\setminus T_{1}$, $T_{3}\setminus(T_{1}\cup T_{2})$, …, $T_{\ell}\setminus(T_{1}\cup\cdots\cup T_{\ell-1})$. In order to do
this we have to exclude all elements that have already been
reported from all subsequent sets. As we want to ensure constant
delay enumeration, we cannot just ignore the elements in $T_{i}\cap(T_{1}\cup\cdots\cup T_{i-1})$ while enumerating $T_{i}$. As a remedy, we
use an additional
pointer to jump from an element that has already been reported
to the least element that needs to be reported next.
To do this we use
arrays $\mathsf{skip}^{i}$ (for all $i\in[\ell]$) to jump over excluded elements: if
$t_{r},\ldots,t_{s}$ is a maximal interval of elements in $T_{i}$ that
have already been reported, then
$\mathsf{skip}^{i}[t_{r}]=t_{s+1}$ (if
$t_{s}$ is the last element in $T_{i}$, then $t_{s+1}:=\texttt{EOE}$). For technical reasons we also need the array
$\mathsf{skipback}^{i}$ which represents the inverse pointer, i.e., $\mathsf{skipback}^{i}[t_{s+1}]=t_{r}$.
The enumeration algorithm is stated in
Algorithm 1. It uses the procedure
exclude${}^{j}$ described in Algorithm 2 to update the
arrays whenever an element $t$ has been reported.
It is straightforward to verify that these algorithms provide the
desired functionality within the claimed time bounds.
∎
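Since Algorithms 1 and 2 are not reproduced here, the following Python sketch (our own reconstruction, under the simplifying assumption that each set is given as an ordered list with constant-time membership) illustrates the skip/skipback mechanism and the resulting repetition-free union enumeration.

```python
# Reconstruction (not the paper's Algorithms 1 and 2) of the
# skip-pointer idea: exclude(t) merges t into the adjacent runs of
# already-reported elements, so start/next jump over whole runs in
# constant time.

EOE = object()  # end-of-enumeration marker

class Skippable:
    def __init__(self, elems):
        self.elems = list(elems)
        self.pos = {t: i for i, t in enumerate(self.elems)}
        self.skip = {}      # run start -> first element after the run
        self.skipback = {}  # first element after a run -> run start

    def _succ(self, i):
        return self.elems[i + 1] if i + 1 < len(self.elems) else EOE

    def start(self):
        if not self.elems:
            return EOE
        t = self.elems[0]
        return self.skip.get(t, t)

    def next(self, t):
        s = self._succ(self.pos[t])
        if s is EOE:
            return EOE
        return self.skip.get(s, s)

    def __contains__(self, t):
        return t in self.pos

    def exclude(self, t):
        # merge t with the reported runs directly left and right of it
        left = self.skipback.pop(t, t)
        s = self._succ(self.pos[t])
        right = self.skip.pop(s, s) if s is not EOE else EOE
        self.skip[left] = right
        if right is not EOE:
            self.skipback[right] = left

def enumerate_union(structs):
    out = []
    for i, S in enumerate(structs):
        t = S.start()
        while t is not EOE:
            out.append(t)                  # report t exactly once
            for R in structs[i + 1:]:
                if t in R:
                    R.exclude(t)           # skip t in all later sets
            t = S.next(t)
    return out
```

Because `exclude` merges adjacent reported runs, every call to `next` crosses at most one skip pointer, which is what bounds the delay by $O(\ell)$ per reported element.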
Lemma 4.5 enables us to prove the upper bound of
Theorem 4.2, and the lower bound is proved by using
Theorem 3.2 (biii).
Proof of Theorem 4.2.
The upper bound follows immediately from combining Lemma 4.5 with
Theorem 3.2 (aiv).
For the lower bound let $q_{i}$ be a self-join free non-q-hierarchical CQ in
the homomorphic core $q^{\prime}$ of the UCQ $q$. For every database $D$ that maps
homomorphically into $q_{i}$ it holds that $q_{j}(D)=\emptyset$ for
every other CQ $q_{j}$ in $q^{\prime}$ (with $j\neq i$), since otherwise there
would be a homomorphism from $q_{j}$ to $D$ and hence to $q_{i}$,
contradicting that $q^{\prime}$ is a homomorphic core.
It follows that every dynamic algorithm that enumerates the result
of $q$ on a
database $D$ which maps homomorphically into $q_{i}$ also enumerates
$q_{i}(D)=q(D)$, contradicting Theorem 3.2 (biii).
∎
5 CQs and UCQs with integrity constraints
In the presence of integrity constraints, the characterisation of
tractable queries changes and depends on the query as well as on the
set of constraints.
When considering a scenario where databases are required to satisfy a
set $\Sigma$ of constraints,
we allow a given update command to be executed only if the resulting
database still satisfies all constraints in $\Sigma$.
When speaking of $(\sigma,\Sigma)$-dbs we mean $\sigma$-dbs
$D$ that satisfy all constraints in $\Sigma$.
Two queries $q$ and $q^{\prime}$ are
$\Sigma$-equivalent (for short: $q\equiv_{\Sigma}q^{\prime}$) if
$q(D)=q^{\prime}(D)$ for every
$(\sigma,\Sigma)$-db $D$.
We first consider small domain constraints, i.e.,
constraints $\delta$ of the form
$R[i]\subseteq C$
where $R\in\sigma$, $i\in\{1,\ldots,\operatorname{ar}(R)\}$, and $C\subseteq\textbf{dom}$
is a finite set.
A $\sigma$-db $D$ satisfies $\delta$ if
$\pi_{i}(R^{D})\subseteq C$.
For these constraints we are able to give a clear picture of
the tractability landscape by reducing CQs and UCQs with small domain
constraints to UCQs without integrity constraints and applying the
characterisations for UCQs achieved in Section 4.
We start with an example that illustrates how
a query can be simplified in the presence of
small domain constraints.
Example 5.1.
Consider the Boolean query
$q_{\textit{S-E-T}}:=\{\,()\,:\,\exists x\exists y\,(\,Sx\,\wedge\,Exy\,\wedge\,Ty\,)\,\}\,,$ which is not q-hierarchical. By Theorem 3.2
it cannot be answered by a dynamic algorithm with sublinear update
time and sublinear answer time, unless the OMv-conjecture fails.
But in the presence of the small domain constraint
$\delta_{\textit{sd}}:=S[1]\subseteq C$
for a set $C\subseteq\textbf{dom}$ of the form $C=\{a_{1},\ldots,a_{c}\}$, the
query $q_{\textit{S-E-T}}$ is $\{\delta_{\textit{sd}}\}$-equivalent to the q-hierarchical UCQ
$$q^{\prime}\ :=\ \ \bigcup_{a_{i}\in C}\ \{\ ()\ :\ \exists y\ (\;Sa_{i}\,\wedge\,Ea_{i}y\,\wedge\,Ty\;)\ \}\,.$$
Therefore, by Theorem 4.1, $q^{\prime}$ and hence $q_{\textit{S-E-T}}$ can be answered with constant
update time and constant answer time on all databases that satisfy $\delta_{\textit{sd}}$.
For handling the general case, assume we are given a
set $\Sigma$ of small domain constraints and
an arbitrary $k$-ary CQ $q$
of the form ($**$ ‣ 2) where $\varphi$ is of the form ($*$ ‣ 2).
We define a function $\textit{Dom}_{q,\Sigma}$ that maps each $x\in\textrm{vars}(q)$ to a set
$\textit{Dom}_{q,\Sigma}(x)\subseteq\textbf{dom}$ as follows.
As an initialisation let $f(x)=\textbf{dom}$ for each $x\in\textrm{vars}(q)$. Consider each constraint $\delta$ in $\Sigma$ and
let $S[i]\subseteq C$ be the form of $\delta$. Consider each atom $\psi_{j}$ of $\varphi$ and let
$Rv_{1}\cdots v_{r}$ be the form of $\psi_{j}$. If $R=S$ and $v_{i}\in\textbf{var}$, then let
$f(v_{i}):=f(v_{i})\cap C$. Let $\textit{Dom}_{q,\Sigma}$ be the mapping $f$ obtained at the end of this process.
Note that $\textit{rvars}_{\Sigma}(q):=\{x\in\textrm{vars}(q)\ :\ \textit{Dom}_{q,\Sigma}(x)\neq\textbf{dom}\}$
consists of the variables of $q$ that are restricted by $\Sigma$.
Let $M_{q,\Sigma}$ be the set of all mappings $\alpha:V\to\textbf{dom}$ with $V=\textit{rvars}_{\Sigma}(q)$
and $\alpha(x)\in\textit{Dom}_{q,\Sigma}(x)$ for each $x\in V$. Note that $M_{q,\Sigma}$ is finite; and it is
empty if, and only if, $\textit{Dom}_{q,\Sigma}(x)=\emptyset$ for some $x\in\textrm{vars}(q)$.
For an arbitrary mapping $\alpha:V\to\textbf{dom}$ with $V\subseteq\textbf{var}$ we let
$q_{\alpha}$ be the $k$-ary CQ obtained from $q$ as follows:
for each $x\in V$, if present in $q$, the existential quantifier
“$\exists x$” is omitted, and afterwards
every occurrence of $x$ in $q$ is replaced with the constant
$\alpha(x)$.
It is straightforward to check that $q_{\alpha}(D)\subseteq q(D)$
for every $\sigma$-db $D$.
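A minimal sketch of this construction follows. The representation is hypothetical: atoms are given as (relation, argument-tuple) pairs, a constraint $R[i]\subseteq C$ as a map entry from $(R,i)$ to the finite set $C$, and `None` stands in for the unrestricted full domain **dom**.

```python
from itertools import product

def restricted_domains(atoms, variables, constraints):
    """Compute Dom_{q,Sigma}. `atoms` is a list of (R, args); `constraints`
    maps (R, i) to a finite set C encoding R[i] subseteq C. A value of None
    means the variable is unrestricted (its domain is all of dom)."""
    dom = {x: None for x in variables}
    for (R, i), C in constraints.items():
        for rel, args in atoms:
            if rel == R and i <= len(args) and args[i - 1] in dom:
                v = args[i - 1]
                dom[v] = set(C) if dom[v] is None else dom[v] & set(C)
    return dom

def groundings(dom):
    """Enumerate M_{q,Sigma}: all assignments of the restricted variables
    to values in their (finite) restricted domains."""
    rvars = sorted(x for x, d in dom.items() if d is not None)
    for vals in product(*(sorted(dom[x]) for x in rvars)):
        yield dict(zip(rvars, vals))
```

Each mapping $\alpha$ produced by `groundings` then yields $q_{\alpha}$ by substituting $\alpha(x)$ for $x$ and dropping the corresponding quantifiers.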
With these notations, we obtain the following lemma.
Lemma 5.2.
Let $q$ be a CQ and let $\Sigma$ be a set of small domain constraints.
Let $M:=M_{q,\Sigma}$.
If $M=\emptyset$, then $q(D)=\emptyset$ for every $(\sigma,\Sigma)$-db
$D$.
Otherwise, $q$ is $\Sigma$-equivalent to the UCQ
$q_{\Sigma}\ :=\ \bigcup_{\alpha\in M}\,q_{\alpha}$ .
Proof.
For a set $\Sigma$ of constraints and
a $\sigma$-db $D$ we write $D\models\Sigma$ to indicate that
$D$ satisfies every constraint in $\Sigma$.
Let $V:=\textit{rvars}_{\Sigma}(q)$ and $M:=M_{q,\Sigma}$.
If $V=\emptyset$, then $M=\{\alpha_{\emptyset}\}$ where
$\alpha_{\emptyset}$ is the unique mapping with empty domain.
Thus, $q_{\Sigma}=q_{\alpha_{\emptyset}}=q$ and we are done.
It remains to consider the case where $V\neq\emptyset$.
Consider an arbitrary $\sigma$-db $D$ with $D\models\Sigma$ and
let $q$ be of the form ($**$ ‣ 2).
Consider an arbitrary tuple $\overline{b}=(b_{1},\ldots,b_{k})\in q(D)$. By definition of the
semantics of CQs, there is a
valuation $\beta:\textbf{var}\to\textbf{dom}$ such that
$(b_{1},\ldots,b_{k})=\big{(}\beta(u_{1}),\ldots,\beta(u_{k})\big{)}$ and
for every atomic formula $Rv_{1}\cdots v_{r}$ in $q$ we have
$\big{(}\beta(v_{1}),\ldots,\beta(v_{r})\big{)}\in R^{D}$.
Now consider a variable $x$ occurring at position $i$ of such an atom, i.e., $x=v_{i}$. Then $\beta(x)\in\pi_{i}(R^{D})$; and if $\Sigma$ contains a
constraint of the form $R[i]\subseteq C$ then, since
$D\models\Sigma$, we have $\pi_{i}(R^{D})\subseteq C$, and hence
$\beta(x)\in C$. This holds true for every occurrence of $x$ in
an atom of $q$, and hence $\beta(x)\in\textit{Dom}_{q,\Sigma}(x)$
for every $x\in\textrm{vars}(q)$. In other words, the restriction $\beta_{|V}$
of $\beta$ to $V$ belongs to $M$, and
$\overline{b}\in q_{\beta_{|V}}(D)\subseteq q_{\Sigma}(D)$.
In particular, this implies that the following is true.
1.
If $q(D)\neq\emptyset$ for some $\sigma$-db $D$ with
$D\models\Sigma$, then $M\neq\emptyset$. Hence, by contraposition,
if $M=\emptyset$ then $q(D)=\emptyset$ for every $\sigma$-db $D$
with $D\models\Sigma$.
2.
If $M\neq\emptyset$, then $q(D)\subseteq q_{\Sigma}(D)$ for every
$\sigma$-db $D$ with $D\models\Sigma$.
On the other hand,
since $q_{\alpha}(D)\subseteq q(D)$ for every $\alpha$ and every
$\sigma$-db $D$, we have $q_{\Sigma}(D)\subseteq q(D)$ for
every $\sigma$-db $D$. Hence, $q$ is $\Sigma$-equivalent to $q_{\Sigma}$.
This completes the proof of Lemma 5.2.
∎
This reduction from a CQ $q$ to a UCQ $q_{\Sigma}$ directly
translates to UCQs: if $q$ is a union of the CQs
$q_{1},\ldots,q_{d}$, then we define the UCQ
$q_{\Sigma}:=\bigcup_{i\in[d]}\,(q_{i})_{\Sigma}$. It is not
hard to verify that if the UCQ $q$ is a homomorphic core, then so is
$q_{\Sigma}$. Therefore, the following dichotomy theorem for UCQs under small
domain constraints is a direct consequence of
Lemma 5.2 and the
Theorems 4.1, 4.2, and 4.4.
Theorem 5.3.
Let $q$ be a UCQ that is a homomorphic core
and $\Sigma$ a set of small domain constraints
with $M_{q,\Sigma}\neq\emptyset$.
Suppose that the OMv-conjecture and the OV-conjecture hold.
(1a)
If $q_{\Sigma}$ is t-hierarchical, then $q$ can be tested
on $(\sigma,\Sigma)$-dbs
in constant time
with linear preprocessing time and constant update time.
(1b)
If $q_{\Sigma}$ is not t-hierarchical, then
on the class of $(\sigma,\Sigma)$-dbs
testing in time $O(n^{1-\epsilon})$ is not possible
with $O(n^{1-\epsilon})$ update time.
(2a)
If $q_{\Sigma}$ is q-hierarchical, then there is a data structure with linear
preprocessing time and constant update time that allows to enumerate $q(D)$ with constant
delay
on $(\sigma,\Sigma)$-dbs.
(2b)
If $q_{\Sigma}$ is not q-hierarchical and in addition self-join free, then
$q(D)$ cannot be enumerated with
$O(n^{1-\epsilon})$ delay and $O(n^{1-\epsilon})$ update time
on $(\sigma,\Sigma)$-dbs.
(3a)
If $q_{\Sigma}$ is exhaustively q-hierarchical, then there is a data structure with linear
preprocessing time and constant update time that allows to compute $|q(D)|$ in constant
time
on $(\sigma,\Sigma)$-dbs.
(3b)
If $q_{\Sigma}$ is not exhaustively q-hierarchical, then computing
$|q(D)|$
on $(\sigma,\Sigma)$-dbs
in time
$O(n^{1-\epsilon})$ is not possible
with $O(n^{1-\epsilon})$ update time.
In particular, this shows that the tractability of a UCQ $q$ on
$(\sigma,\Sigma)$-dbs only depends on the
structure of the query $q_{\Sigma}$. Note that while the size of
$q_{\Sigma}$ might be $c^{O(|q|)}$, where $c$ is the largest number of
constants occurring in a small domain constraint, it can be checked in time $\operatorname{\textit{poly}}(q)$
whether $q_{\Sigma}$ is t-hierarchical or q-hierarchical.
Let us take a brief look at two other kinds of constraints: inclusion
dependencies and functional dependencies, which both can also cause a hard
query to become tractable.
An inclusion dependency $\delta$ is of the form
$R[i_{1},\ldots,i_{m}]\subseteq S[j_{1},\ldots,j_{m}]$
where $R,S\in\sigma$, $m\geqslant 1$, $i_{1},\ldots,i_{m}\in\{1,\ldots,\operatorname{ar}(R)\}$, and $j_{1},\ldots,j_{m}\in\{1,\ldots,\operatorname{ar}(S)\}$.
A $\sigma$-db $D$ satisfies $\delta$ if
$\pi_{i_{1},\ldots,i_{m}}(R^{D})\subseteq\pi_{j_{1},\ldots,j_{m}}(S^{D})$.
As an example consider the query $q_{\textit{S-E-T}}$
from Example 5.1 and the inclusion
dependency $\delta_{\textit{ind}}\ :=\ E[2]\subseteq T[1]$. Obviously, $q_{\textit{S-E-T}}$ is
$\{\delta_{\textit{ind}}\}$-equivalent to the q-hierarchical (and hence easy) CQ
$q^{\prime}:=\{\ ()\ :\ \exists x\exists y\ (\;Sx\,\wedge\,Exy\;)\ \}$.
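Checking whether a database satisfies such a dependency is a direct comparison of projections. A sketch, under the assumed encoding of a database as a dict from relation names to sets of tuples:

```python
def satisfies_ind(db, dep):
    """Check R[i1,...,im] subseteq S[j1,...,jm] on a database given as
    {relation_name: set_of_tuples}; positions are 1-based as in the text."""
    R, ipos, S, jpos = dep
    proj_R = {tuple(t[i - 1] for i in ipos) for t in db.get(R, set())}
    proj_S = {tuple(t[j - 1] for j in jpos) for t in db.get(S, set())}
    return proj_R <= proj_S
```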
To turn this into a
general principle, we say that an inclusion dependency $\delta$
of the form
$R[i_{1},\ldots,i_{m}]\subseteq S[j_{1},\ldots,j_{m}]$
can be applied to a CQ $q$
if $q$ contains an atom $\psi_{1}$ of the form $Rv_{1}\cdots v_{r}$ and an
atom $\psi_{2}$ of the form $Sw_{1}\cdots w_{s}$ such that
1.
$(v_{i_{1}},\ldots,v_{i_{m}})=(w_{j_{1}},\ldots,w_{j_{m}})$,
2.
for all
$j\in[s]\setminus\{j_{1},\ldots,j_{m}\}$ we
have
$w_{j}\in\textbf{var}$,
$w_{j}\not\in\textrm{free}(q)$, $\textrm{atoms}(w_{j})=\{\psi_{2}\}$, and
3.
for all $j,j^{\prime}\in[s]\setminus\{j_{1},\ldots,j_{m}\}$ with $j\neq j^{\prime}$ we
have $w_{j}\neq w_{j^{\prime}}$;
and applying
$\delta$ to $q$ at $(\psi_{1},\psi_{2})$ then yields the CQ $q^{\prime}$ which is obtained from $q$ by
omitting the atom $\psi_{2}$ and omitting the quantifiers $\exists z$
for all $z\in\textrm{vars}(\psi_{2})\setminus\{w_{j_{1}},\ldots,w_{j_{m}}\}$.
By this construction we have
$\textrm{vars}(q^{\prime})=\textrm{vars}(q)\setminus\{w_{j}\ :\ j\in[s]\setminus\{j_{1},\ldots,j_{m}\}\}$.
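The three applicability conditions and the resulting reduction can be sketched as follows. Atoms are again (relation, argument-tuple) pairs; for simplicity the sketch assumes all arguments are variables, so the var-membership part of condition 2 is not modelled, and the quantifier dropping is implicit in this representation.

```python
def apply_ind(atoms, free_vars, dep):
    """Try to apply R[i1..im] subseteq S[j1..jm] to a CQ given by its atom
    list; return the reduced atom list (psi_2 removed) or None if the
    dependency cannot be applied."""
    R, ipos, S, jpos = dep
    for psi1 in atoms:
        if psi1[0] != R:
            continue
        for psi2 in atoms:
            if psi2 is psi1 or psi2[0] != S:
                continue
            v, w = psi1[1], psi2[1]
            # (1) matched positions carry identical terms
            if tuple(v[i - 1] for i in ipos) != tuple(w[j - 1] for j in jpos):
                continue
            rest = [j for j in range(1, len(w) + 1) if j not in jpos]
            # (2) remaining variables are quantified and occur only in psi2
            ok = all(w[j - 1] not in free_vars
                     and sum(w[j - 1] in a[1] for a in atoms) == 1
                     for j in rest)
            # (3) remaining variables are pairwise distinct
            ok = ok and len({w[j - 1] for j in rest}) == len(rest)
            if ok:
                return [a for a in atoms if a is not psi2]
    return None
```

On the example above, applying $\delta_{\textit{ind}}=E[2]\subseteq T[1]$ to $q_{\textit{S-E-T}}$ removes the $T$-atom, as expected.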
Claim 5.4.
$q^{\prime}\equiv_{\{\delta\}}q$, and if $q$ is q-hierarchical, then so is
$q^{\prime}$.
Proof.
For a set $\Sigma$ of constraints and
a $\sigma$-db $D$ we write $D\models\Sigma$ to indicate that
$D$ satisfies every constraint in $\Sigma$.
For a constraint $\delta$
we write $D\models\delta$ instead of $D\models\{\delta\}$.
Obviously, $q(D)\subseteq q^{\prime}(D)$ for every $\sigma$-db $D$.
For the opposite direction, let $q$ be of the form ($**$ ‣ 2),
and consider a $\sigma$-db $D$ with
$D\models\delta$ and a tuple $t\in q^{\prime}(D)$. Our goal is to show that $t\in q(D)$.
Since $t\in q^{\prime}(D)$, there is a valuation $\beta^{\prime}$ such that
$t=\big{(}\beta^{\prime}(u_{1}),\ldots,\beta^{\prime}(u_{k})\big{)}$ and
$(D,\beta^{\prime})\models\psi$ for each atom $\psi$ of $q^{\prime}$. In particular,
$(D,\beta^{\prime})\models Rv_{1}\cdots v_{r}$, i.e.,
$\big{(}\beta^{\prime}(v_{1}),\ldots,\beta^{\prime}(v_{r})\big{)}\in R^{D}$.
To show that $t\in q(D)$ it suffices to modify $\beta^{\prime}$ into a
valuation $\beta$ which coincides with $\beta^{\prime}$ on all variables in
$\textrm{vars}(q^{\prime})$ and which also ensures that
$(D,\beta)\models Sw_{1}\cdots w_{s}$, i.e., that
$\big{(}\beta(w_{1}),\ldots,\beta(w_{s})\big{)}\in S^{D}$.
Since $D\models\delta$ we obtain from $\big{(}\beta^{\prime}(v_{1}),\ldots,\beta^{\prime}(v_{r})\big{)}\in R^{D}$
that $\big{(}\beta^{\prime}(v_{i_{1}}),\ldots,\beta^{\prime}(v_{i_{m}})\big{)}\in\pi_{i_{1},\ldots,i_{m}}(R^{D})\subseteq\pi_{j_{1},\ldots,j_{m}}(S^{D})$.
Since $(v_{i_{1}},\ldots,v_{i_{m}})=(w_{j_{1}},\ldots,w_{j_{m}})$, this
implies that
$\big{(}\beta^{\prime}(w_{j_{1}}),\ldots,\beta^{\prime}(w_{j_{m}})\big{)}\in\pi_{j_{1},\ldots,j_{m}}(S^{D})$.
Hence, there exists a tuple $(a_{1},\ldots,a_{s})\in S^{D}$ such that
$\big{(}\beta^{\prime}(w_{j_{1}}),\ldots,\beta^{\prime}(w_{j_{m}})\big{)}=(a_{j_{1}},\ldots,a_{j_{m}})$.
We let $\beta$ be the valuation obtained from $\beta^{\prime}$ by letting
$\beta(w_{j}):=a_{j}$ for every $j\in[s]\setminus\{j_{1},\ldots,j_{m}\}$.
With this choice we have $(D,\beta)\models Sw_{1}\cdots w_{s}$.
Note that $\beta$ differs from $\beta^{\prime}$ only in variables $w_{j}$ for
which we know that
$\textrm{atoms}(w_{j})=\{\psi_{2}\}=\{Sw_{1}\cdots w_{s}\}$, i.e., variables
that occur in no other atom of $q$ than the atom $Sw_{1}\cdots w_{s}$.
Therefore, $(D,\beta)\models\psi$ for each atom $\psi$ of $q$, and
hence
$t=\big{(}\beta(u_{1}),\ldots,\beta(u_{k})\big{)}\in q(D)$.
In summary, we obtain that $q^{\prime}(D)\subseteq q(D)$ for every
$\sigma$-db $D$ with $D\models\delta$. This completes the
proof that $q^{\prime}\equiv_{\{\delta\}}q$.
To verify the claim’s second statement,
let $W:=\{w_{j}\ :\ j\in[s]\setminus\{j_{1},\ldots,j_{m}\}\}$ and
note that $\textrm{vars}(q^{\prime})=\textrm{vars}(q)\setminus W$.
For all $x\in W$ we have $\textrm{atoms}_{q}(x)=\{\psi_{2}\}$,
and for all $x\in\textrm{vars}(q^{\prime})$ we have
$\textrm{atoms}_{q^{\prime}}(x)=\textrm{atoms}_{q}(x)\setminus\{\psi_{2}\}$.
Using this, we obtain that if $q$ satisfies
condition (i) of
Definition 3.1 then so does $q^{\prime}$.
It remains to show that if $q$ is
q-hierarchical, then $q^{\prime}$ also satisfies condition (ii) of
Definition 3.1.
Assume for contradiction that $q^{\prime}$ does not satisfy this
condition. Then, there are $x\in\textrm{free}(q^{\prime})$ and
$y\in\textrm{vars}(q^{\prime})\setminus\textrm{free}(q^{\prime})$ with
$\textrm{atoms}_{q^{\prime}}(x)\varsubsetneq\textrm{atoms}_{q^{\prime}}(y)$.
Case 1: $\textrm{atoms}_{q}(x)=\textrm{atoms}_{q^{\prime}}(x)$.
Then, $\textrm{atoms}_{q}(x)\varsubsetneq\textrm{atoms}_{q}(y)$, and thus $q$ does not
satisfy condition (ii) of
Definition 3.1 and hence is not q-hierarchical.
Case 2: $\textrm{atoms}_{q}(x)=\textrm{atoms}_{q^{\prime}}(x)\cup\{\psi_{2}\}$.
If $\textrm{atoms}_{q}(y)=\textrm{atoms}_{q^{\prime}}(y)\cup\{\psi_{2}\}$, then we are done by the
same reasoning as in Case 1.
On the other hand, if $\textrm{atoms}_{q}(y)=\textrm{atoms}_{q^{\prime}}(y)$, then
$\psi_{2}\in\textrm{atoms}_{q}(x)\setminus\textrm{atoms}_{q}(y)$.
Furthermore, since $\textrm{atoms}_{q^{\prime}}(x)\varsubsetneq\textrm{atoms}_{q^{\prime}}(y)$, there are atoms
$\psi$ and $\psi^{\prime}$ such that
$\psi\in\textrm{atoms}_{q}(x)\cap\textrm{atoms}_{q}(y)$ and
$\psi^{\prime}\in\textrm{atoms}_{q}(y)\setminus\textrm{atoms}_{q}(x)$.
Thus, $q$ violates condition (i) of
Definition 3.1 and hence is not q-hierarchical.
∎
From the claim it follows that we can simplify a given query by
iteratively applying inclusion dependencies to pairs of atoms of the query.
In some cases, this transforms queries
that are hard in general into $\Sigma$-equivalent queries that are
q-hierarchical and hence easy for dynamic evaluation.
For example, an iterated application of $\delta_{\textit{ind}}:=E[2]\subseteq E[1]$
transforms the non-t-hierarchical query
$\{\,(x,y)\,:\,\exists z_{1}\exists z_{2}\;(\,Exy\,\wedge\,Eyz_{1}\,\wedge\,Ez_{1}z_{2}\,)\,\}$ into the q-hierarchical query
$\{\,(x,y)\,:\,Exy\,\}.$
However, the limitations of this approach are documented by the query
$q:=\{\,()\,:\,\exists x\exists y\exists z\exists z^{\prime}\,(\,Sx\,\wedge\,Exy\,\wedge\,Ty\,\wedge\,Rzz^{\prime}\,)\,\},$
which is $\Sigma$-equivalent to the q-hierarchical query
$q^{\prime}:=\{\,()\,:\,\exists z\exists z^{\prime}\;Rzz^{\prime}\,\},$
for $\Sigma:=\{\,R[1,2]\subseteq E[1,2]\;,\;R[1]\subseteq S[1]\;,\;R[2]\subseteq T[1]\,\}$, but where $q^{\prime}$ cannot be obtained
by iteratively applying dependencies of $\Sigma$ to $q$.
The presence of
functional dependencies can also cause a hard query to become
tractable:
Consider the functional dependency
$\delta_{\textit{fd}}:=E[1\to 2]$,
which is satisfied by a database $D$ iff for every $a\in\textbf{dom}$ there
is at most one $b\in\textbf{dom}$ such that $(a,b)\in E^{D}$.
On databases that satisfy $\delta_{\textit{fd}}$,
the query $q_{\textit{S-E-T}}$ from Example 5.1
can be evaluated with constant
answer time and constant update time as follows:
One can
store, for every $b$, the number $m_{b}$ of tuples $(a,b)\in E^{D}$ with
$a\in S^{D}$, and in addition the number $m=\sum_{b\in T^{D}}m_{b}$,
which is non-zero if and only if
$q_{\textit{S-E-T}}(D)=\texttt{yes}$.
The functional
dependency guarantees that every update affects at most one number
$m_{b}$ and one summand of
$m$. Using constant access data structures, the query result can
therefore be maintained with constant update time.
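A minimal dynamic sketch of this counting scheme for $q_{\textit{S-E-T}}$ under the functional dependency $E[1\to 2]$ follows. The class layout and method names are illustrative; for brevity only the updates needed to exercise the counters are modelled.

```python
from collections import defaultdict

class SETCounter:
    """Maintain q_SET under updates, assuming the FD E[1->2] holds:
    m_b counts pairs (a,b) in E with a in S; m sums m_b over b in T."""

    def __init__(self):
        self.S, self.T = set(), set()
        self.E = {}                      # a -> b (at most one b per a, by the FD)
        self.m_b = defaultdict(int)
        self.m = 0

    def _add(self, b, delta):
        if b in self.T:                  # b contributes a summand of m
            self.m += delta
        self.m_b[b] += delta

    def insert_E(self, a, b):
        assert a not in self.E           # guaranteed by the FD
        self.E[a] = b
        if a in self.S:
            self._add(b, 1)

    def delete_E(self, a):
        b = self.E.pop(a)
        if a in self.S:
            self._add(b, -1)

    def insert_S(self, a):
        self.S.add(a)
        if a in self.E:
            self._add(self.E[a], 1)

    def insert_T(self, b):
        if b not in self.T:
            self.T.add(b)
            self.m += self.m_b[b]

    def answer(self):
        return self.m > 0
```

The FD is what makes this constant-time: each $E$-update touches a single counter $m_{b}$ and a single summand of $m$.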
The nature of this example is somewhat different from the
approaches for small
domain constraints or inclusion dependencies described above: We can show that
the query becomes tractable, but we are not aware of any $\{\delta_{\textit{fd}}\}$-equivalent
q-hierarchical CQ or UCQ that would explain its tractability via a
reduction to the setting without integrity constraints.
To exploit the full power of functional dependencies for improving
dynamic query evaluation, it seems therefore necessary to come up with
new algorithmic approaches that go beyond the techniques we have for
(q- or t-)hierarchical queries.
APPENDIX
Appendix A Full proof of Theorem 3.4 (b)
Proof of Theorem 3.4 (b) for the case that
$q$ violates condition (i)
of Definition 3.3
Assume we are given a query $q:=q_{\varphi}(z_{1},\ldots,z_{k})$ that is a homomorphic
core and that is not t-hierarchical because it
violates condition (i)
of Definition 3.3.
Thus, there
are two variables $x,y\in\textrm{vars}(q)\setminus\textrm{free}(q)=\textrm{vars}(q)\setminus\{z_{1},\ldots,z_{k}\}$ and three atoms
$\psi^{x},\psi^{x,y},\psi^{y}$ of $q$
with
$\textrm{vars}(\psi^{x})\cap\{x,y\}=\{x\}$,
$\textrm{vars}(\psi^{x,y})\cap\{x,y\}=\{x,y\}$, and
$\textrm{vars}(\psi^{y})\cap\{x,y\}=\{y\}$.
We show how a dynamic algorithm that solves the testing problem for $q$ can be used to
solve the OuMv-problem.
Without loss of
generality we assume that
$\textrm{vars}(q)=\{x,y,z_{1},\ldots,z_{\ell}\}$ for some $\ell\geqslant k$, and
$|\textrm{vars}(q)|=\ell+2$.
For a
given $n\times n$ matrix $M$
we fix a domain $\textup{dom}_{n}$ that consists of $2n+\ell$ elements
$\{a_{i},b_{i}\ :\ i\in[n]\}\cup\{c_{s}\ :\ s\in[\ell]\}$
from $\textbf{dom}\setminus\textrm{cons}(q)$.
For $i,j\in[n]$ we let
$\iota_{i,j}$
be the injective mapping
from $\textrm{vars}(q)\cup\textrm{cons}(q)$ to $\textup{dom}_{n}\cup\textrm{cons}(q)$
with
•
$\iota_{i,j}(x)=a_{i}$,
•
$\iota_{i,j}(y)=b_{j}$,
•
$\iota_{i,j}(z_{s})=c_{s}$ for all $s\in[\ell]$, and
•
$\iota_{i,j}(d)=d$ for all $d\in\textrm{cons}(q)$.
We tacitly extend $\iota_{i,j}$ to a mapping from $\textrm{vars}(q)\cup\textbf{dom}$ to $\textbf{dom}$ by letting
$\iota_{i,j}(d)=d$ for every $d\in\textbf{dom}$.
For the matrix $M$ and for
$n$-dimensional vectors $\vec{u}$ and $\vec{v}$,
we define a $\sigma$-db
$D=D(q,M,\vec{u},\vec{v})$ with
$\textrm{adom}(D)\subseteq\textup{dom}_{n}\cup\textrm{cons}(q)$ as follows (recall our notational
convention that $\vec{u}_{i}$ denotes the $i$-th entry of a vector
$\vec{u}$).
For every atom
$\psi=Rw_{1}\cdots w_{r}$ in $q$ we include
in $R^{D}$ the tuple
$\big{(}\iota_{i,j}(w_{1}),\ldots,\iota_{i,j}(w_{r})\big{)}$
•
for all $i,j\in[n]$ such that
$\vec{u}_{i}=1$, if $\psi=\psi^{x}$,
•
for all $i,j\in[n]$ such that
$\vec{v}_{j}=1$,
if $\psi=\psi^{y}$,
•
for all $i,j\in[n]$ such that $M_{i,j}=1$, if
$\psi=\psi^{x,y}$, and
•
for all $i,j\in[n]$, if $\psi\notin\{\,\psi^{x},\,\psi^{x,y},\,\psi^{y}\,\}$.
Note that the relations in the atoms $\psi^{x}$, $\psi^{y}$, and
$\psi^{x,y}$ are used to encode $\vec{u}$, $\vec{v}$, and $M$,
respectively.
Moreover,
since $\psi^{x}$ ($\psi^{y}$) does not contain the variable $y$ ($x$),
two databases $D=D(q,M,\vec{u},\vec{v})$ and
$D^{\prime}=D(q,M,\mbox{$\vec{u}\,{}^{\prime}$},\mbox{$\vec{v}\,{}^{\prime}$})$ differ only in at most
$2n$ tuples.
Therefore, $D^{\prime}$ can be obtained from $D$ by $2n$ update steps.
It follows from the definitions that $\iota_{i,j}$ is a
homomorphism from $q$ to $D$ if and only if $\vec{u}_{i}=1$,
$\vec{v}_{j}=1$, and $M_{i,j}=1$. Therefore,
$\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$ if and only if there are
$i,j\in[n]$ such that $\iota_{i,j}$ is a
homomorphism from $q$ to $D$.
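For the concrete query $q_{\textit{S-E-T}}$ from Example 5.1 (where $\psi^{x}$, $\psi^{x,y}$, $\psi^{y}$ are the $S$-, $E$-, and $T$-atoms), the encoding specialises to the following sketch. Evaluating $\vec{u}^{\,\mathsf{T}}M\vec{v}$ through the database is a hypothetical illustration of the correspondence, not the dynamic algorithm itself.

```python
def oumv_via_database(M, u, v):
    """Encode u, v, M into S^D, T^D, E^D as in the reduction and evaluate
    the Boolean query q_SET; the result is True iff u^T M v = 1 over the
    Boolean semiring."""
    n = len(u)
    S = {i for i in range(n) if u[i]}
    T = {j for j in range(n) if v[j]}
    E = {(i, j) for i in range(n) for j in range(n) if M[i][j]}
    return any(i in S and j in T for (i, j) in E)
```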
We let
$g$ be the (surjective) mapping from $\textup{dom}_{n}\cup\textrm{cons}(q)$ to
$\textrm{vars}(q)\cup\textrm{cons}(q)$
defined by
$g(d)=d$ for all $d\in\textrm{cons}(q)$ and
$g(c_{s}):=z_{s}$,
$g(a_{i}):=x$,
$g(b_{j}):=y$ for all $i,j\in[n]$
and $s\in[\ell]$.
Clearly, $g$ is a
homomorphism from $D$ to $q$.
Obviously, the following is true for every mapping
$h$ from $\textrm{vars}(q)$ to $\textrm{adom}(D)$ and for all $w\in\textrm{vars}(q)$:
•
if $h(w)=c_{s}$ for some $s\in[\ell]$,
then $(g\circ h)(w)=z_{s}$,
•
if $h(w)=a_{i}$ for some $i\in[n]$,
then $(g\circ h)(w)=x$,
•
if $h(w)=b_{j}$ for some $j\in[n]$,
then $(g\circ h)(w)=y$,
•
if $h(w)=d$ for some $d\in\textrm{cons}(q)$,
then $(g\circ h)(w)=d$.
We define the partition
$\mathcal{P}=\big{\{}\{c_{1}\},\ldots,\{c_{\ell}\},\{a_{i}\ :\ i\in[n]\},\{b_{j}\ :\ j\in[n]\}\big{\}}$
of $\textup{dom}_{n}$ and say that a mapping
$h$ from $\textrm{vars}(q)\cup\textbf{dom}$ to $\textbf{dom}$
respects $\mathcal{P}$
if the image of $\textrm{vars}(q)$ under $h$, i.e., the set
$\{h(w)\ :\ w\in\textrm{vars}(q)\}$, contains exactly one element
from each set of the partition.
Claim A.1.
$\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$ $\iff$ There exists a homomorphism $h\colon q\to D$ that
respects $\mathcal{P}$.
Proof.
For one direction assume that $\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$. Then
there are $i,j\in[n]$ such that $\iota_{i,j}$ is a homomorphism
from $q$ to $D$
that respects $\mathcal{P}$. For the other direction assume that
$h\colon q\to D$ is a homomorphism that respects
$\mathcal{P}$.
Thus, there are elements $w_{a},w_{b},w_{1},\ldots,w_{\ell}$ in $\textrm{vars}(q)$ such that
$h(w_{a})\in\{a_{i}\ :\ i\in[n]\}$, $h(w_{b})\in\{b_{j}\ :\ j\in[n]\}$, and $h(w_{s})=c_{s}$ for each $s\in[\ell]$.
It follows that $(g\circ h)$ is a
bijective homomorphism from $q_{\varphi}(z_{1},\ldots,z_{k})$ to $q_{\varphi}((g\circ h)(z_{1}),\ldots,(g\circ h)(z_{k}))$.
Therefore,
it can easily be verified that
$h\circ(g\circ h)^{-1}$ is a homomorphism from
$q$ to $D$ which equals $\iota_{i,j}$ for some
$i,j\in[n]$. This implies that $\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$.
∎
Claim A.2.
If $q$ is a homomorphic core, then every homomorphism $h\colon q\to D$ respects $\mathcal{P}$.
Proof.
Assume for contradiction that $h\colon q\to D$ is a
homomorphism that does not respect $\mathcal{P}$. Then
$(g\circ h)$ is a homomorphism from $q$ into a
proper subquery of $q$, contradicting that $q$
is a homomorphic core.
∎
Claim A.3.
If $q$ is a homomorphic core, then $\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$
$\iff$
$(c_{1},\ldots,c_{k})\in q(D)$ .
Proof.
We already know that $\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$ if and only if there are $i,j\in[n]$ such that
$\iota_{i,j}$ is a homomorphism from $q$ to $D$.
Furthermore, $\iota_{i,j}(z_{s})=c_{s}$ for all $s\in[\ell]$ and all $i,j\in[n]$.
Thus, if
$\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$, then there exist $i,j\in[n]$ such that
$\big{(}\iota_{i,j}(z_{1}),\ldots,\iota_{i,j}(z_{k})\big{)}=(c_{1},\ldots,c_{k})\in q(D)$.
This proves direction “$\Longrightarrow$” of the claim.
For the opposite direction, note that if $(c_{1},\ldots,c_{k})\in q(D)$, then there is
a homomorphism $h$ from $q$ to $D$. By Claim A.2, $h$ respects
$\mathcal{P}$, and hence by Claim A.1, $\vec{u}^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}=1$.
∎
We are now ready to prove Theorem 3.4 (b) for the case that
$q$ violates condition (i)
of Definition 3.3.
Assume for contradiction that the
testing problem
for $q$ can be solved with update time
$t_{u}=O(n^{1-\varepsilon})$
and testing time $t_{t}=O(n^{2-\varepsilon})$.
We can use this
algorithm to solve the OuMv-problem in time
$O(n^{3-\varepsilon})$ as follows.
In the
preprocessing phase, we are given the $n\times n$ matrix
$M$ and let $\vec{u}^{\,0}$, $\vec{v}^{\,0}$ be the all-zero vectors
of dimension $n$.
We start the preprocessing phase of our testing algorithm for
$q$ with the empty database. As this database has constant size,
the preprocessing phase finishes in constant time.
Afterwards, we use $O(n^{2})$ insert operations to build the
database $D(q,M,\vec{u}^{\,0},\vec{v}^{\,0})$.
All this is done within time $O(n^{2}t_{u})=O(n^{3-\varepsilon})$.
When a pair
of vectors $\vec{u}^{\,t}$, $\vec{v}^{\,t}$ (for $t\in[n]$) arrives, we change the
current database $D(q,M,\vec{u}^{\,t-1},\vec{v}^{\,t-1})$ into
$D(q,M,\vec{u}^{\,t},\vec{v}^{\,t})$ by using
at most $2n$ update steps.
By Claim A.3
we know that
$(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}=1$ if, and only if,
$(c_{1},\ldots,c_{k})\in q(D)$, for
$D:=D(q,M,\vec{u}^{\,t},\vec{v}^{\,t})$.
Hence, after running the test routine with input $(c_{1},\ldots,c_{k})$
in time $t_{t}=O(|\textrm{adom}(D)|^{2-\varepsilon})\allowbreak=O(n^{2-\varepsilon})$
we can output the value of $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$.
The time we spend for each $t\in[n]$ is bounded by
$O(2nt_{u}+t_{t})=O(n^{2-\varepsilon})$.
Thus, the
overall running time for solving the OuMv-problem sums up to $O(n^{3-\varepsilon})$, contradicting
the OuMv-conjecture and hence also the OMv-conjecture.
This completes the proof of Theorem 3.4 (b) for the case that
$q$ violates condition (i)
of Definition 3.3.
∎
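The round structure of this reduction can be sketched in Python. The dynamic testing algorithm is abstracted by a stub that merely counts its operations; the class `TestingStub`, its method names, and the `build_db` interface are hypothetical stand-ins, not part of the proof. The sketch only illustrates how the bound arises: $O(n^{2})$ initial insertions, then per round at most $2n$ diff-updates and a single test call.

```python
class TestingStub:
    """Stand-in for a hypothetical dynamic testing algorithm; it only
    counts operations and keeps the current database as a set of tuples."""
    def __init__(self):
        self.db = set()
        self.updates = 0
        self.tests = 0

    def insert(self, tup):
        self.db.add(tup)
        self.updates += 1

    def delete(self, tup):
        self.db.discard(tup)
        self.updates += 1

    def test(self, out_tuple):
        self.tests += 1
        return out_tuple in self.db   # placeholder semantics only


def run_rounds(build_db, M, rounds, c):
    """Simulate the OuMv rounds of case (i): start from the database for
    the all-zero vectors, then per round apply at most 2n diff-updates
    and issue one test call with the output tuple c = (c_1, ..., c_k)."""
    n = len(M)
    alg = TestingStub()
    current = build_db(M, [0] * n, [0] * n)
    for tup in current:               # O(n^2) initial insertions
        alg.insert(tup)
    answers = []
    for (u, v) in rounds:
        new = build_db(M, u, v)
        for tup in current - new:     # together at most 2n update steps
            alg.delete(tup)
        for tup in new - current:
            alg.insert(tup)
        current = new
        answers.append(alg.test(c))   # one test call per round
    return answers, alg
```

With update time $t_{u}=O(n^{1-\varepsilon})$ and testing time $t_{t}=O(n^{2-\varepsilon})$, each simulated round costs $O(2nt_{u}+t_{t})=O(n^{2-\varepsilon})$, as in the text.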
Proof of Theorem 3.4 (b) for the case that
$q$ violates condition (ii)
of Definition 3.3
Assume we are given a query $q:=q_{\varphi}(z_{1},\ldots,z_{k})$ that is a homomorphic
core and that is not t-hierarchical because it
violates condition (ii)
of Definition 3.3.
Thus, there
are two variables $x\in\textrm{free}(q)$ and $y\in\textrm{vars}(q)\setminus\textrm{free}(q)$ and two atoms
$\psi^{x,y}$ and $\psi^{y}$ of $q$
with
$\textrm{vars}(\psi^{x,y})\cap\{x,y\}=\{x,y\}$ and
$\textrm{vars}(\psi^{y})\cap\{x,y\}=\{y\}$.
We show how a dynamic algorithm that solves the testing problem for $q$ can be used to
solve the OuMv-problem.
Without loss of
generality we assume that
$\textrm{vars}(q)=\{z_{1},\ldots,z_{\ell}\}$ with
$\textrm{free}(q)=\{z_{1},\ldots,z_{k}\}$, $\ell>k$,
$x=z_{1}$, and $y=z_{\ell}$.
For a given $n\times n$ matrix $M$
we fix a domain $\textup{dom}_{n}$ that consists of $2n+\ell-2$ elements
$\{a_{i},b_{i}\ :\ i\in[n]\}\cup\{c_{s}\ :\ s\in\{2,\ldots,\ell{-}1\}\}$
from $\textbf{dom}\setminus\textrm{cons}(q)$.
For $i,j\in[n]$ we let
$\iota_{i,j}$
be the injective mapping
from $\textrm{vars}(q)\cup\textrm{cons}(q)$ to $\textup{dom}_{n}\cup\textrm{cons}(q)$
with
•
$\iota_{i,j}(x)=a_{i}$,
•
$\iota_{i,j}(y)=b_{j}$,
•
$\iota_{i,j}(z_{s})=c_{s}$ for all $s\in\{2,\ldots,\ell{-}1\}$, and
•
$\iota_{i,j}(d)=d$ for all $d\in\textrm{cons}(q)$.
We tacitly extend $\iota_{i,j}$ to a mapping from $\textrm{vars}(q)\cup\textbf{dom}$ to $\textbf{dom}$ by letting
$\iota_{i,j}(d)=d$ for every $d\in\textbf{dom}$.
For the matrix $M$ and for
an $n$-dimensional vector $\vec{v}$,
we define a $\sigma$-db
$D=D(q,M,\vec{v})$ with
$\textrm{adom}(D)\subseteq\textup{dom}_{n}\cup\textrm{cons}(q)$ as follows (recall our notational
convention that $\vec{v}_{j}$ denotes the $j$-th entry of a vector
$\vec{v}$).
For every atom
$\psi=Rw_{1}\cdots w_{r}$ in $q$ we include
in $R^{D}$ the tuple
$\big{(}\iota_{i,j}(w_{1}),\ldots,\iota_{i,j}(w_{r})\big{)}$
•
for all $i,j\in[n]$ such that
$\vec{v}_{j}=1$,
if $\psi=\psi^{y}$,
•
for all $i,j\in[n]$ such that $M_{i,j}=1$, if
$\psi=\psi^{x,y}$, and
•
for all $i,j\in[n]$, if $\psi\notin\{\,\psi^{x,y},\,\psi^{y}\,\}$.
Note that the relations in the atoms $\psi^{y}$ and
$\psi^{x,y}$ are used to encode $\vec{v}$ and $M$,
respectively.
Moreover,
since $\psi^{y}$ does not contain the variable $x$,
two databases $D=D(q,M,\vec{v})$ and
$D^{\prime}=D(q,M,\mbox{$\vec{v}\,{}^{\prime}$})$ differ only in at most
$n$ tuples.
Therefore, $D^{\prime}$ can be obtained from $D$ by $n$ update steps.
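The construction of $D(q,M,\vec{v})$ can be sketched as follows, assuming a toy encoding of queries as lists of atoms (pairs of a relation name and an argument tuple). The names `iota`, `psi_xy`, and `psi_y` mirror the text; the concrete encoding of the domain elements $a_i$, $b_j$, $c_s$ as tagged tuples is illustrative only.

```python
def iota(i, j, w, x, y, consts):
    """The mapping iota_{i,j}: x -> a_i, y -> b_j, z_s -> c_s, d -> d,
    with a_i, b_j, c_s encoded as tagged tuples (a hypothetical choice)."""
    if w == x:
        return ("a", i)
    if w == y:
        return ("b", j)
    if w in consts:
        return w
    return ("c", w)            # stands for the constant c_s assigned to z_s


def build_db(atoms, psi_xy, psi_y, M, v, x, y, consts=frozenset()):
    """Build the tuples of D(q, M, v) following the three bullet cases:
    psi^y encodes the vector v, psi^{x,y} encodes the matrix M, and all
    other atoms contribute tuples for every pair (i, j)."""
    n = len(M)
    db = set()
    for (R, args) in atoms:
        for i in range(n):
            for j in range(n):
                if (R, args) == psi_y and not v[j]:
                    continue          # psi^y: only for v_j = 1
                if (R, args) == psi_xy and not M[i][j]:
                    continue          # psi^{x,y}: only for M_{i,j} = 1
                db.add((R,) + tuple(iota(i, j, w, x, y, consts)
                                    for w in args))
    return db
```

On this toy encoding one can check directly that $\iota_{i,j}$ is a homomorphism from $q$ to $D$ exactly when $M_{i,j}=1$ and $\vec{v}_{j}=1$, mirroring the displayed equivalence below.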
It follows from the definitions that
$$\text{$\iota_{i,j}$ is a
homomorphism from $q$ to $D$}\ \ \iff\ \ \text{$M_{i,j}=1$ and $\vec{v}_{j}=1$.}$$
We let
$g$ be the (surjective) mapping from $\textup{dom}_{n}\cup\textrm{cons}(q)$ to
$\textrm{vars}(q)\cup\textrm{cons}(q)$
defined by
$g(d)=d$ for all $d\in\textrm{cons}(q)$ and
$g(c_{s}):=z_{s}$,
$g(a_{i}):=x$,
$g(b_{j}):=y$ for all $i,j\in[n]$
and $s\in\{2,\ldots,\ell{-}1\}$.
Clearly, $g$ is a
homomorphism from $D$ to $q$.
Obviously, the following is true for every mapping
$h$ from $\textrm{vars}(q)$ to $\textrm{adom}(D)$ and for all $w\in\textrm{vars}(q)$:
•
if $h(w)=c_{s}$ for some $s\in\{2,\ldots,\ell{-}1\}$,
then $(g\circ h)(w)=z_{s}$,
•
if $h(w)=a_{i}$ for some $i\in[n]$,
then $(g\circ h)(w)=x$,
•
if $h(w)=b_{j}$ for some $j\in[n]$,
then $(g\circ h)(w)=y$,
•
if $h(w)=d$ for some $d\in\textrm{cons}(q)$,
then $(g\circ h)(w)=d$.
We define the partition
$\mathcal{P}=\big{\{}\{c_{2}\},\ldots,\{c_{\ell-1}\},\{a_{i}\ :\ i\in[n]\},\{b_{j}\ :\ j\in[n]\}\big{\}}$
of $\textup{dom}_{n}$ and say that a mapping
$h$ from $\textrm{vars}(q)\cup\textbf{dom}$ to $\textbf{dom}$
respects $\mathcal{P}$
if each set of the partition contains exactly one element of the
set $h(\textrm{vars}(q)):=\{h(w)\ :\ w\in\textrm{vars}(q)\}$.
Claim A.4.
For every $i\in[n]$, the following are equivalent:
•
There is a $j\in[n]$ such that $M_{i,j}=1$ and $\vec{v}_{j}=1$.
•
There is a homomorphism $h\colon q\to D$ that
respects $\mathcal{P}$ such that $a_{i}\in h(\textrm{vars}(q))$.
Proof.
Consider a fixed $i\in[n]$.
For one direction assume that there is a $j\in[n]$ such that
$M_{i,j}=1$ and $\vec{v}_{j}=1$.
Then, $\iota_{i,j}$ is a homomorphism from $q$ to $D$. Obviously,
$\iota_{i,j}$ respects $\mathcal{P}$, and $a_{i}=\iota_{i,j}(x)\in\iota_{i,j}(\textrm{vars}(q))$.
For the other direction assume that
$h\colon q\to D$ is a homomorphism that respects
$\mathcal{P}$ and $a_{i}\in h(\textrm{vars}(q))$.
Thus, there are elements $w_{a_{i}},w_{b},w_{2},\ldots,w_{\ell-1}$ in $\textrm{vars}(q)$ such that
$h(w_{a_{i}})=a_{i}$, $h(w_{b})\in\{b_{j}\ :\ j\in[n]\}$, and $h(w_{s})=c_{s}$ for each $s\in\{2,\ldots,\ell{-}1\}$.
It follows that $(g\circ h)$ is a
bijective homomorphism from $q_{\varphi}(z_{1},\ldots,z_{k})$ to
$q_{\varphi}((g\circ h)(z_{1}),\ldots,(g\circ h)(z_{k}))$.
Therefore,
it can easily be verified that
$h\circ(g\circ h)^{-1}$ is a homomorphism from
$q$ to $D$ which equals $\iota_{i,j}$ for some
$j\in[n]$. Thus, for some $j\in[n]$ we have $M_{i,j}=1$ and $\vec{v}_{j}=1$.
∎
Claim A.5.
If $q$ is a homomorphic core, then every homomorphism $h\colon q\to D$ respects $\mathcal{P}$.
Proof.
Assume for contradiction that $h\colon q\to D$ is a
homomorphism that does not respect $\mathcal{P}$. Then
$(g\circ h)$ is a homomorphism from $q$ into a
proper subquery of $q$, contradicting that $q$
is a homomorphic core.
∎
Claim A.6.
If $q$ is a homomorphic core, then for every $i\in[n]$ the following are equivalent:
•
There is a $j\in[n]$ such that $M_{i,j}=1$ and $\vec{v}_{j}=1$.
•
$(a_{i},c_{2},\ldots,c_{k})\in q(D)$.
Proof.
Consider a fixed $i\in[n]$.
For one direction assume that there is a $j\in[n]$ such that $M_{i,j}=1$ and $\vec{v}_{j}=1$.
Then, $\iota_{i,j}$ is a homomorphism from $q$ to $D$, and thus
$\big{(}\iota_{i,j}(x),\iota_{i,j}(z_{2}),\ldots,\iota_{i,j}(z_{k})\big{)}\in q(D)$.
By definition of $\iota_{i,j}$ we have
$\big{(}\iota_{i,j}(x),\iota_{i,j}(z_{2}),\ldots,\iota_{i,j}(z_{k})\big{)}=(a_{i},c_{2},\ldots,c_{k})$, and hence we are done
(recall that $y=z_{\ell}$ and $\ell>k$).
For the other direction assume that $(a_{i},c_{2},\ldots,c_{k})\in q(D)$.
Thus, there exists a homomorphism $h:q\to D$ such that
$(a_{i},c_{2},\ldots,c_{k})=\big{(}h(x),h(z_{2}),\ldots,h(z_{k})\big{)}$.
According to Claim A.5, $h$ respects $\mathcal{P}$.
Furthermore, $a_{i}\in h(\textrm{vars}(q))$, since $h(x)=a_{i}$.
From Claim A.4,
we obtain that there is a $j\in[n]$ such that $M_{i,j}=1$ and $\vec{v}_{j}=1$.
∎
We are now ready to prove Theorem 3.4 (b) for the case that
$q$ violates condition (ii)
of Definition 3.3.
Assume for contradiction that the
testing problem
for $q$ can be solved with update time
$t_{u}=O(n^{1-\varepsilon})$
and testing time $t_{t}=O(n^{1-\varepsilon})$.
We can use this
algorithm to solve the OuMv-problem in time
$O(n^{3-\varepsilon})$ as follows.
In the
preprocessing phase, we are given the $n\times n$ matrix
$M$ and let $\vec{v}^{\,0}$ be the all-zero vector
of dimension $n$.
We start the preprocessing phase of our testing algorithm for
$q$ with the empty database. As this database has constant size,
the preprocessing phase finishes in constant time.
Afterwards, we use $O(n^{2})$ insert operations to build the
database $D(q,M,\vec{v}^{\,0})$.
All this is done within time $O(n^{2}t_{u})=O(n^{3-\varepsilon})$.
When a pair
of vectors $\vec{u}^{\,t}$, $\vec{v}^{\,t}$ (for $t\in[n]$) arrives, we change the
current database $D(q,M,\vec{v}^{\,t-1})$ into
$D(q,M,\vec{v}^{\,t})$ by using
at most $n$ update steps.
By assumption, $q$ is a homomorphic core.
Thus, Claim A.6 tells us that for
$D:=D(q,M,\vec{v}^{\,t})$
and for every $i\in[n]$ we have
$$(a_{i},c_{2},\ldots,c_{k})\in q(D)\quad\iff\quad\text{there is a $j\in[n]$ such that $M_{i,j}=1$ and $\vec{v}_{j}=1$}\,.$$
Hence, after running the test routine with input $(a_{i},c_{2},\ldots,c_{k})$ for
each $i\in[n]$ with $\vec{u}_{i}=1$,
we can output the value of $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$.
For this, we use at most $n$ calls of the test routine, and each such call is executed
within time
$t_{t}=O(|\textrm{adom}(D)|^{1-\varepsilon})\allowbreak=O(n^{1-\varepsilon})$.
The time we spend to compute $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$ for a fixed $t\in[n]$ is therefore bounded by
$O(nt_{u}+nt_{t})=O(n^{2-\varepsilon})$.
Thus, the
overall running time for solving the OuMv-problem sums up to $O(n^{3-\varepsilon})$, contradicting
the OuMv-conjecture and hence also the OMv-conjecture.
This completes the proof of Theorem 3.4 (b) for the case that
$q$ violates condition (ii)
of Definition 3.3.
∎ |
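The last step of each round in case (ii), turning at most $n$ calls of the test routine into the Boolean value of $(\vec{u}^{\,t})^{\,\mskip-1.5mu \mathsf{T}}M\vec{v}^{\,t}$, can be sketched as follows. The `test` callback is a hypothetical stand-in for the dynamic algorithm's test routine invoked with input $(a_{i},c_{2},\ldots,c_{k})$; by Claim A.6, `test(i)` is true exactly when some $j$ has $M_{i,j}=1$ and $\vec{v}_{j}=1$.

```python
def answer_round(test, u, n):
    """Return u^T M v over the Boolean semiring: call the (hypothetical)
    test routine once per index i with u_i = 1, stopping at the first
    positive answer. At most n test calls are issued."""
    for i in range(n):
        if u[i] and test(i):      # test with input (a_i, c_2, ..., c_k)
            return 1
    return 0
```

With $t_{u}=t_{t}=O(n^{1-\varepsilon})$ this gives the per-round cost $O(nt_{u}+nt_{t})=O(n^{2-\varepsilon})$ used in the proof.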
The faint outskirts of the blue compact galaxy Haro 11: is there a red excess?
Genoveva Micheva${}^{1}$, Erik Zackrisson${}^{2}$, Göran Östlin${}^{2}$, Nils Bergvall${}^{3}$ and Tapio Pursimo${}^{4}$
${}^{1}$Stockholm Observatory, Department of Astronomy, Stockholm University, 106 91 Stockholm, Sweden
${}^{2}$Oskar Klein Centre for Cosmoparticle Physics, Department of Astronomy, Stockholm University, 106 91 Stockholm, Sweden
${}^{3}$Division of Astronomy & Space Physics, Uppsala university, 751 20 Uppsala, Sweden
${}^{4}$Nordic Optical Telescope, Apdo 474, 38700 Santa Cruz de La Palma, Spain
E-mail: genoveva@astro.su.se (GM); ez@astro.su.se (EZ); ostlin@astro.su.se (GÖ); nisse@fysast.uu.se; tpursimo@not.iac.es
(Accepted …. Received …; in original form …)
Abstract
Previous studies of the low surface brightness host of the blue compact galaxy (BCG) Haro 11 have suggested an abnormally red color of $V-K=4.2\pm 0.8$ for the host galaxy. This color is inconsistent with any normal stellar population over a wide range of stellar metallicities ($Z=0.001$–$0.02$). Similar though less extreme host colors have been measured for other BCGs and may be reconciled with population synthesis models, provided that the stellar metallicity of the host is higher than that of the ionized gas in the central starburst. We present the deepest V and K band observations to date of Haro 11 and derive a new $V-K$ color for the host galaxy. Our new data suggest a far less extreme color of $V-K=2.3\pm 0.2$, which is perfectly consistent with the expectations for an old host galaxy with the same metallicity as that derived from nebular emission lines in the star-forming center.
keywords:
galaxies: dwarf - photometry - stellar content - halo, galaxies: individual: Haro 11
1 Introduction
BCGs are gas-rich low-luminosity galaxies of particular interest since many of them have very high star formation rates (SFR) and low chemical abundances (Kunth & Östlin, 2000). They are thus reminiscent of high-redshift starbursting young galaxies. This, together with their close proximity to us, makes BCGs useful test objects, suitable for gaining insight into galaxy formation and intense star formation (SF). Aside from the bright central starburst, deep photometric optical and near-infrared (NIR) studies (e.g. Loose & Thuan, 1986; Doublier et al., 1997; Papaderos et al., 1996; Cairós et al., 2001; Bergvall & Östlin, 2002; Amorín et al., 2007, 2009; Noeske et al., 2005) have revealed the presence of another component of low surface brightness (LSB). The LSB component hosts the central starburst and is therefore referred to as an LSB host galaxy. The host is only visible close to the outskirts of BCGs since at small radial distances the intensity from the central starburst outshines the contribution from the LSB component. Many photometric studies have concentrated on reaching faint levels in an attempt to isolate the LSB component and characterize it. Some have suggested that the LSB component has unusual properties such as an extreme red excess in the optical/NIR colors. Bergvall & Östlin (2002, hereafter BÖ02) and Bergvall et al. (2005) present a sample of 10 BCGs with progressively redder colors towards the outskirts of the host. BÖ02 and Zackrisson et al. (2006, hereafter Z06) argue that these colors are much too red to be reconciled with a normal stellar population.
Extended faint structures of very red colors have also been detected in the outskirts of disk galaxies and, similarly to BCGs, these structures display colors much too red to be due to stellar populations with reasonable metallicities and normal initial mass functions (IMFs). In 2004, Zibetti, White & Brinkmann reported a faint structure with a notable red excess around the stacked disk of 1047 edge-on spirals from the Sloan Digital Sky Survey (SDSS). In the same year, Zibetti & Ferguson reported a red structure around a disk galaxy in the Hubble Ultra Deep Field (HUDF). Similarly, Bergvall, Zackrisson & Caldwell (2009) find a red excess in the outskirts of disk galaxies by stacking $1510$ edge-on LSB galaxies from the SDSS.
These detections have been collectively dubbed “red halos” and a number of possible explanations have been proposed. Some detections have been falsified and ascribed to instrumental effects. For instance, de Jong (2008) demonstrated that the HUDF red halo is likely due to reddening by the far wings of the point spread function (PSF). Most detections, however, have persisted and even given rise to exotic scenarios. BÖ02 proposed that the extreme colors seen in their sample could be explained by a metal rich ($>Z_{\odot}$) halo stellar population, in contrast to the low nebular chemical abundances. Z06 argued that the only stellar population which explains the published colors of the halos of both disk galaxies and BCGs is one with an extremely bottom-heavy ($dN/dM\propto M^{-\alpha}$ with $\alpha=4.50$) IMF. If proven to be a common component of galaxies of various Hubble types, this would imply that faint low-mass stars are so numerous in these galaxies that they may significantly contribute to the baryons still missing from inventories in the low-redshift Universe. Another effect which may produce red halos around both disk galaxies and BCGs is that the surface brightness profiles used to derive the colors have been oversubtracted due to failure to account for extinction by the extragalactic background light (Zackrisson, Micheva & Östlin, 2009).
If one is willing to trade off a common explanation for all red halo detections with individual explanations applicable to certain galaxies and not to others, then most of the red halos of the BÖ02 and Bergvall et al. (2005) BCG sample can be modeled with a Salpeter IMF, with the caveat that the required stellar metallicity is in some cases much higher than what is observed from the gas in the central starburst (Zackrisson et al., 2009). Among the sample, one of the biggest, most massive and most luminous BCGs is Haro 11, with a very high SFR of 18-20 $M_{\odot}$yr${}^{-1}$ and the reddest color of $V-K=4.2\pm 0.8$, which cannot be reconciled with any normal stellar population.
Throughout this paper we assume a Hubble parameter of $H_{0}=75~{}km~{}s^{-1}~{}Mpc^{-1}$ and a distance to Haro 11 of $82~{}Mpc$, with a redshift of $z=0.020598$. In § 2 we present the deepest observations to date for Haro 11 in the V and K passbands and a new and more accurate measurement of the $V-K$ color. We are pushing the limits of surface photometry down to isophotes 10000 times fainter than the brightness of the night sky in K and we therefore examine the reliability of the obtained profiles in § 2.3 and § 2.4. Our results for the $V-K$ color of Haro 11 are presented in § 3. Discussion and summary can be found in § 4 and § 5, respectively.
2 Data
Table 1 summarizes the available data used in this paper and compares them with the data published by Bergvall & Östlin (2002). The V band data was obtained in 2008 at the Nordic Optical Telescope (NOT) with the MOSaic CAmera (MOSCA), which has a scale of 0.217 arcsec/pixel. The K band data was obtained in 2005 at the ESO NTT 3.58m telescope with Son OF Isaac (SOFI), which has a scale of 0.288 arcsec/pixel. The raw frame size in the optical (MOSCA) is 7.7 arcmin, in the NIR (SOFI) 4.9 arcmin. The NIR images are dithered with large offsets. Due to the large field of view of the instrument and the comparatively much smaller target size, this is the optimal observational strategy for this object.
2.1 Reductions
The data were reduced in the following way. The raw images in both bands were cleaned from bad pixels, bias subtracted in the V band, pair subtracted in the K band using the sky median as a scaling factor, trimmed, flatfielded with an illumination-corrected normalized masterflat, and sky subtracted with a flat surface interpolated with a first order polynomial. In the V band the final stacked image was calibrated with secondary standard stars against the published data by BÖ02. In the K band each sky-subtracted frame was calibrated against 2MASS stars located in the frame, whereby a minimum of five 2MASS stars per frame were used to obtain its zeropoint. The calibrated K band data was therefore independent of the photometric conditions, even though an examination of the temporal zeropoint variation throughout the observations revealed a scatter of only $\sim 0.049$ magnitudes. In both bands, images were registered using Pyraf GEOMAP/GEOTRAN. When shifting and rotating data we interpolated with a 5th order polynomial in order to avoid the Moiré effect caused by the default bilinear interpolation. The images were median and average combined without the use of a rejection algorithm. Comparison between the median and average stacked images revealed an insignificant difference in the aperture photometry of point sources on the order of $\sim 0.02$ magnitudes. Median stacking has the added advantage of filtering out any lingering cosmic rays and bad pixels, therefore the analysis in this paper was carried out on the median stacked images in both bands. Sky subtraction was performed on the final stacked image in each band in order to remove any residual sky. When performing sky subtraction we neglected the effects of dust extinction of the extragalactic background light in the outskirts of the host galaxy (Zackrisson, Micheva & Östlin, 2009) and assumed that the sky over the target can be interpolated from sky regions around the target. The final reduced images in V and K have an effective field of view smaller than the field of view of the detectors in both bands, which is a consequence of the median stacking of dithered images.
2.2 Sky estimation
The shape of a radial color profile is sensitive to the slope of the two surface brightness profiles from which it has been derived. In turn, the slope of the surface brightness profiles is sensitive to any systematic errors in the sky subtraction, especially at large radial distances and very faint isophotal levels. In particular, over- or undersubtracting the sky background would lead to spurious color gradients. It is therefore important to have control over all aspects of the sky background estimation. We briefly outline the major steps involved in this procedure.
For each frame the sky is approximated by fitting a flat surface to regions free from sources (“sky regions”) with the PyMidas (Hook et al., 2006) procedure FitFlat. The placement and size of these sky regions are very important - one aims to have suitably large sky regions uniformly distributed over the frame in order for the interpolated sky surface to be a well-sampled approximation of the sky. The size of the sky regions together with the order of the interpolating function determine whether the interpolated surface will mimic the large scale or the small scale structure of the sky. It is difficult to apply a reasonable model to small scale sky structures. We have chosen to remove only the large scale structures, but include the remaining small scale sky fluctuations in the error analysis ($\lx@sectionsign$ 2.3).
In order to model the large scale sky structure of a frame, the area of each sky region is not allowed to drop below $600$ pixels${}^{2}$ and the interpolating function is a polynomial of first order in both $x$ and $y$. The position and size of the sky regions are determined automatically by the following algorithm. All positive and negative sources are masked out, giving a binary mask for each image. Initially, a very coarse fixed grid is placed over the mask image. Defining sky regions over a frame thus translates to iterating over gridboxes and, at each iteration, determining the maximum size of the sky region that can be placed inside each gridbox. The size of the sky region depends on the presence of sources. If sources are absent, the sky region takes the shape and size of the entire gridbox. If sources are present, a box of smaller size is generated and tested for sources. Smaller boxes are first generated by splitting the gridbox at an iteratively moving point along one axis while keeping the size of the box along the other axis at maximum. These boxes are thus always anchored at two grid vertices along the same axis. If all of the resulting boxes test positive for sources new boxes are generated by moving a strip of fixed width along one axis and maximum height in the other axis. These boxes may or may not be anchored to any vertices but always touch two opposing gridbox axes. If this too fails, new boxes are generated by varying their size along both axes. These boxes are not adjacent to vertices or to gridbox axes and instead “float” in the gridbox interior. If still no boxes free from sources are found the gridbox is simply split in four quadrants along the middle of both its axes, resulting in four boxes of a quarter size. If these quarter boxes also test positive for sources, then no sky region can be defined over this gridbox and the algorithm continues to the next gridbox.
After going through the entire grid each quadrant of the frame is checked for the number of obtained boxes. If the number of sky boxes in any quadrant is less than 20, the grid is refined and iteration starts over. The algorithm is such that the exact position, shape and size of the present sources is irrelevant. The imposed limit on the minimum number of sky boxes in each frame quadrant ensures that the tiepoints for the interpolated surface are well distributed to cover the entire frame.
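A much simplified sketch of the automated sky-box placement is given below. Only the first stage of the cascade described above (testing the full gridbox) and the final quarter-box fallback are implemented; the intermediate split, strip, and “floating” box stages, as well as the grid-refinement loop, are omitted, and all parameter names are illustrative.

```python
def _has_source(mask, y, x, h, w):
    """True if the rectangle contains any masked (source) pixel."""
    return any(mask[yy][xx]
               for yy in range(y, y + h) for xx in range(x, x + w))


def sky_boxes(mask, grid=4, min_area=600):
    """Simplified sky-region placement on a binary source mask
    (mask[y][x] is True on sources): for each gridbox try the full box
    first, then fall back to its four quarter boxes, keeping at most
    one source-free box of area >= min_area per gridbox."""
    H, W = len(mask), len(mask[0])
    hs, ws = H // grid, W // grid
    boxes = []
    for gy in range(grid):
        for gx in range(grid):
            y0, x0 = gy * hs, gx * ws
            candidates = [(y0, x0, hs, ws)] + [
                (y0 + dy * hs // 2, x0 + dx * ws // 2, hs // 2, ws // 2)
                for dy in (0, 1) for dx in (0, 1)]
            for (y, x, h, w) in candidates:
                if h * w >= min_area and not _has_source(mask, y, x, h, w):
                    boxes.append((y, x, h, w))
                    break          # at most one sky box per gridbox here
    return boxes
```

In the full algorithm a gridbox yielding no box is simply skipped, and the grid is refined whenever any frame quadrant ends up with fewer than 20 sky boxes.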
We have also carried out tests to evaluate the performance of the automated sky box placement algorithm. These consisted of varying the box shape, box size, and the number of boxes per quadrant, and comparing the effect that would have on the PyMidas FitFlat sky estimation. We conclude that the algorithm is stable and reliable in all of our test cases with the caveat that spatially non-linear low frequency sky variations are intrinsically impossible to perfectly model with a first order polynomial regardless of the choice in placement and size of sky boxes. While we have used PyMidas FitFlat for the sky subtraction itself, a test with IRAF IMSURFIT gives nearly identical results, provided that the same sky box distribution is used in both cases.
2.3 Surface brightness profiles
The surface brightness profiles in the right panel of Fig. 5 are obtained by integrating in elliptical rings starting from the center of the galaxy. The faint outskirts of Haro 11 have an almost circular shape in V, however the central starburst is very bright and it is unclear at what radii its contribution to the overall intensity becomes negligible. The shape and center of the underlying host is therefore better estimated from the K band, where the contribution from the starburst quickly drops off with increasing radius. The coordinates of the center and the parameters of the ellipse are chosen by fitting isophotes to the K band image with IRAF ELLIPSE. For robustness, different centering positions, inclinations and position angles (P.A.) have been tested. There is no discernible difference in the radial color profiles obtained from integration with ellipticity and P.A. parameters in the range $e\in(0.20,0.34),~{}P.A.\in(103^{\circ},120^{\circ})$, the latter measured from North through East. The range of these values covers all elliptical isophotes returned by IRAF ELLIPSE at surface brightness levels of $\mu_{K}=21$–$23.5$ mag$~{}\textrm{arcsec}^{-2}$, where the signal from the host galaxy dominates the luminosity output. These values are similar to the ones used by BÖ02 ($e=0.20,~{}P.A.=120^{\circ}$). Once integration starts, the parameters of the elliptic rings and the step size along the major axis are kept constant. The profiles are sampled with a step size of 5 pixels.
At faint isophotal levels it is the uncertainty in the zero level of the sky background, $\sigma_{sky}$, that largely dominates the error (Fig. 3). This uncertainty is estimated by masking out all sources, measuring the mean intensity inside square apertures, and calculating the standard deviation of these means. The $\sigma_{sky}$ obtained in this fashion is by definition always numerically greater than the standard deviation of the mean sky level and thus represents a conservative estimate of the sky uncertainty. The apertures are of the same order in size as the area of the smallest elliptic ring inside of the LSB host, measured at $\mu_{K}\sim 21.5$ mag$~{}\textrm{arcsec}^{-2}$. Another source of uncertainty included in the error analysis is the uncertainty in the mean flux level of each ring, which is a composite error of the Poisson noise and the intrinsic flux scatter across each ring, given by the standard deviation of the mean (SDOM). Instead of isolating the Poisson noise in each ring, we use the SDOM in the rings as an upper limit for the Poisson noise, because the reduction process invariably destroys the initial Poisson distribution of the raw frames. To obtain a more accurate value, one would have to obtain the Poisson noise per ring on the raw frames and then propagate those values throughout the reduction process. For the purposes of our analysis, however, the SDOM of each ring on the final reduced frames is an acceptable upper limit. Thus, the combination of the sky uncertainty and the SDOM gives a conservative representation for the uncertainties in our results, summarized in Table 2.
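The elliptical-ring integration and the per-ring SDOM can be sketched in a minimal pure-Python form. The actual analysis used IRAF ELLIPSE and related tooling; here the ellipticity convention $e=1-b/a$ of IRAF ELLIPSE is assumed, and the image is a plain nested list of sky-subtracted intensities.

```python
import math
from statistics import mean, stdev

def ring_profile(img, xc, yc, e, pa_deg, step, nrings):
    """Mean intensity and standard deviation of the mean (SDOM) in
    elliptical rings of constant ellipticity e = 1 - b/a and position
    angle pa_deg, sampled every `step` pixels along the major axis.
    img[y][x] holds sky-subtracted intensities."""
    pa = math.radians(pa_deg)
    rings = [[] for _ in range(nrings)]
    for y in range(len(img)):
        for x in range(len(img[0])):
            dx, dy = x - xc, y - yc
            # rotate into the ellipse frame, then compute the
            # elliptical radius in units of the semi-major axis
            u = dx * math.cos(pa) + dy * math.sin(pa)
            v = -dx * math.sin(pa) + dy * math.cos(pa)
            r = math.hypot(u, v / (1.0 - e))
            k = int(r // step)
            if k < nrings:
                rings[k].append(img[y][x])
    out = []
    for vals in rings:
        m = mean(vals)
        sdom = stdev(vals) / math.sqrt(len(vals)) if len(vals) > 1 else 0.0
        out.append((m, sdom))
    return out
```

The sky-level term $\sigma_{sky}$ described above would be added in quadrature to each ring's SDOM to obtain the total per-ring uncertainty.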
2.4 Setting limits on systematic errors in surface brightness profiles
At low signal-to-noise levels the interpretation of a surface brightness profile is subject to some ambiguity. Observed profile trends could either be real or due to the propagation of systematic errors that have not been accounted for in the reduction process. It is also not always obvious at what surface brightness levels the sky noise starts dominating the behavior of the profile. When pushing surface photometry down to very faint isophotes it is necessary to identify the maximum radial range within which the data are not dominated by any systematic errors that could have been introduced by the reduction process or by the random fluctuations in the sky background. Attempting to ascertain the effects of reduction on the data by analyzing the surface brightness profile of a real physical target is not always practical since the true shape and slope of the profile is often not known in advance.
A better way to perform such a test is to insert a synthetic galaxy with an a priori known surface brightness profile in all raw frames, then reduce and calibrate with the same pipeline that was used on the real data. The easiest profile to identify is that of an exponential disk, since it appears as a straight line in a log-lin plot. Analyzing the surface brightness profile of a reduced exponential disk will reveal the presence of any systematic errors, as well as give an indication at what surface brightness levels the sky noise starts dominating the profile. The surface brightness level, at which the reduced synthetic disk profile starts deviating from the expected straight line in a log-lin plot marks the faintest isophote that can be reached with the quality of the data and the accuracy of the pipeline.
It is the K band, rather than the V band, that has the noisiest sky and the largest uncertainty in the sky level, so a $V-K$ profile will inevitably be subject to limits imposed solely by the depth and quality of the K band data. We therefore perform the exponential disk test only on the K-band data. For robustness the test is carried out three times, each with different positions and scale lengths of the disks. Fig. 4 shows a plot of the surface brightness profiles of the three reduced exponential disks. The scale lengths are chosen to be steeper, approximately equal to, and flatter than the scale length of the Haro 11 LSB host. The latter is obtained by a least-squares fit to the Haro 11 profile in a range of $\mu_{K}=21.5$-$23.0$ mag$~{}\textrm{arcsec}^{-2}$, which is well away from the central starburst. This gives a Haro 11 scale length of $\approx 2.6$ kpc and a y-intercept of $\mu_{0,K}=19.6$ mag$~{}\textrm{arcsec}^{-2}$. The central surface brightness is kept constant for all three synthetic disks at $\mu_{0,K}\approx 19.5$ mag$~{}\textrm{arcsec}^{-2}$. Scale lengths of $2.1$, $2.7$ and $4.0$ kpc are chosen for the steeper, the approximately equal, and the flatter synthetic disks respectively. The shortest and longest scale lengths are chosen to correspond to radial surface brightness profiles that are $\sim 1^{\textrm{m}}$ fainter, respectively $\sim 1^{\textrm{m}}$ brighter than a profile with a scale length of $2.7$ kpc at a distance of $\textrm{r}=25$ arcsec where the surface brightness is $\mu_{K}=23$ mag$~{}\textrm{arcsec}^{-2}$. This allows the results of the test to be applicable to a range of profile slopes, without the need for great accuracy in the determination of the true scale length of the host galaxy. 
Indeed, measurements of the Haro 11 scale length from various locations and lengths within $\mu_{K}=20.5$-$23.8$ mag$~{}\textrm{arcsec}^{-2}$ give a slightly different value; however, the difference is never found to be large enough to displace the profile from within the envelope created by the steep and flat exponential disks included in the test. The K band surface brightness profile for Haro 11 would have to deviate by more than $\pm 1^{\textrm{m}}$ from what we observe at the radial distance of $\textrm{r}=25$ arcsec in order for this test to significantly over- or underestimate the K band profile depth that can be reached.
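In magnitudes, an exponential disk $I(r)=I_{0}\,e^{-r/h}$ corresponds to the linear log-lin profile $\mu(r)=\mu_{0}+\frac{2.5}{\ln 10}\,\frac{r}{h}$, which is what makes deviations of the reduced synthetic disks easy to read off. A minimal sketch follows; the numerical check below only illustrates that scale lengths of 2.1 and 2.7 kpc differ by roughly 1 mag at $r\approx 10$ kpc (approximately 25 arcsec at the adopted distance of 82 Mpc), in line with the offsets quoted above.

```python
import math

def mu_exp_disk(r_kpc, mu0, h_kpc):
    """Surface brightness (mag/arcsec^2) of an exponential disk
    I(r) = I0 * exp(-r/h):  mu(r) = mu0 + (2.5 / ln 10) * r / h,
    where 2.5 / ln 10 ~ 1.086 mag per scale length."""
    return mu0 + 2.5 / math.log(10) * r_kpc / h_kpc
```

For example, with $\mu_{0,K}=19.5$ mag$~{}\textrm{arcsec}^{-2}$ the profile for $h=2.1$ kpc lies about one magnitude below (fainter than) the $h=2.7$ kpc profile near $r\approx 10$ kpc.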
Along with the surface brightness profiles of the three reduced exponential disks, Fig. 4 also shows the analytical profiles of each disk on an empty frame. In the absence of any systematic and random uncertainties, or in regions where these are insignificant, the reduced exponential disk profiles will exactly follow their analytical counterparts. The steeper and the Haro 11-like disk profiles are consistent with their respective analytical profiles out to $r\sim 20.7$ arcsec and $r\sim 27.6$ arcsec, respectively, both reaching a depth of $\mu_{\textrm{K}}\approx 23.5$ mag$~{}\textrm{arcsec}^{-2}$ with a maximum deviation from their analytical profiles of $\delta\lesssim 0.15$ magnitudes at these radii. The flat disk, however, deviates from its analytical counterpart by more than $\delta\approx 0.3$ magnitudes beyond $\mu_{\textrm{K}}\approx 23.0$ mag$~{}\textrm{arcsec}^{-2}$, $r\gtrsim 32.5$ arcsec. We note that accepting a deviation of $\delta\lesssim 0.3$ magnitudes would make this profile consistent with its analytical counterpart down to $\mu_{\textrm{K}}\approx 24.5$ mag$~{}\textrm{arcsec}^{-2}$, $r\sim 46.5$ arcsec; however, the corresponding maximum deviations of the steep and Haro 11-like profiles are then $\delta\lesssim 0.45$ and $\delta\lesssim 0.75$ magnitudes, respectively, which we deem unacceptably large.
Analyzing the exact behavior of different exponential disks at extremely low surface brightness levels is, however, beyond the scope of this paper. We are therefore content with using the results of this test to set a conservative limit on the depth of the K band data and we conclude that a surface brightness level of $\mu_{K}=23$ mag$~{}\textrm{arcsec}^{-2}$ can clearly be reached for all three profiles with negligible random scatter ($\delta\lesssim 0.15$) and no discernible systematic effects. We adopt the limit of $\mu_{K}=23$ mag$~{}\textrm{arcsec}^{-2}$ as the faintest isophote that can be reached in the K band with the quality of the Haro 11 data and with the current pipeline.
Using a synthetic disk with a Sersic index $n>1$ will not have an effect on the outcome or the interpretation of this test. At the relevant radial distances away from the center all profiles with reasonable values for n are essentially parallel to the exponential disk. Moreover, the exact shape of the analytical profile is irrelevant. It is only important at what radial distance from the center the profile shape can no longer be recovered with acceptable scatter and no systematic effects.
3 Results
The V and K surface brightness profiles are presented in Fig. 5 together with the profiles published by BÖ02. The V profile is well-behaved and has reasonably small errorbars down to $\mu_{V}\sim 28.5$ mag$~{}\textrm{arcsec}^{-2}$. For the K band, however, the synthetic disk tests in § 2.4 suggest that a K band profile obtained from the current data with the current pipeline is reliable down to $\mu_{K}\sim 23$ mag$~{}\textrm{arcsec}^{-2}$. At fainter isophotes the profile is likely to be dominated by systematic errors introduced by the reduction procedure. For the Haro 11 K band profile in Fig. 5, $\mu_{K}=23$ mag$~{}\textrm{arcsec}^{-2}$ is reached at a radial distance of $r\sim 10$ kpc, or $\sim 25$ arcsec, from the center of the galaxy. This radius constitutes the maximum radius that we consider in the subsequent calculation of the total color for the Haro 11 LSB host galaxy.
Fig. 6 shows the $V-K$ radial color profile together with the profile published by BÖ02. Both profiles are uncorrected for dust extinction and can be directly compared. The two profiles display different behavior. The BÖ02 profile shows no sign of breaking the trend of becoming progressively redder beyond $r\simeq 5$ arcsec, reaching $V-K\simeq 3$ at $r\simeq 15$ arcsec, while the profile of the new and deeper data reddens from $r\simeq 5$ arcsec to $r\simeq 15$ arcsec, only to flatten out and stabilize at $V-K\simeq 2.3$ beyond $r\simeq 15$ arcsec.
For the calculation of the total $V-K$ halo color we use the radial range $r\simeq 5$–$10$ kpc. The K band profile changes slope at $r\simeq 5$ kpc and seems consistent with a constant slope out to $r\simeq 13$ kpc. This suggests that the contribution from the central starburst in the K band is negligible already at $r\simeq 5$ kpc, and beyond this radius the profile is instead dominated by the LSB host. We therefore take $r_{0}\simeq 5$ kpc as the starting radius for the calculation of the total color. For the maximum radius we take the limit of $r_{\textrm{max}}\simeq 10$ kpc, as previously justified in § 2.4. Over this radial range, we obtain a total color of $V-K=2.3\pm 0.2$. This is $2.3\sigma$ away from the BÖ02 value of $V-K=4.2\pm 0.8$.
It should be noted that the total $V-K$ color reported in BÖ02, $V-K=4.2\pm 0.8$, is measured out to a distance of $r\simeq 12.3$ kpc. This is outside the maximum radius limit we impose on our data, $r_{\textrm{max}}\simeq 10$ kpc. For a direct comparison of the two total color measurements, we also measure the total color out to a radius of $r\simeq 12.3$ kpc as in BÖ02, and obtain a slightly redder total color of $V-K=2.4\pm 0.5$, which is $2\sigma$ away from the total color of BÖ02. We cannot justify the use of data points out to such large radii in the total $V-K$ color because our exponential disk test with a Haro 11-like disk shows a systematic brightening of the K band surface brightness profile beyond $r\gtrsim 11$ kpc (Fig. 4). In the absence of a similar systematic effect in the V band at the same radii, the brightening of the K band profile will result in a spurious reddening of the $V-K$ radial profile.
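The quoted discrepancies follow from combining the two measurement uncertainties in quadrature; a minimal sketch (not part of our pipeline) reproduces the numbers:

```python
import math

def discrepancy_sigma(v1, e1, v2, e2):
    """Difference between two measurements in units of the
    combined (quadrature) uncertainty."""
    return abs(v1 - v2) / math.hypot(e1, e2)

# New total color vs. the BO02 value, over r = 5-10 kpc
print(round(discrepancy_sigma(2.3, 0.2, 4.2, 0.8), 1))  # -> 2.3
# Extended out to r = 12.3 kpc for a direct comparison
print(round(discrepancy_sigma(2.4, 0.5, 4.2, 0.8), 1))  # -> 1.9
```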
We further note that a simple least-squares fit of the Haro 11 profile in the range of $\mu_{K}=21.5$-$23.0$ mag$~{}\textrm{arcsec}^{-2}$ gives a scale length of the host population of $\simeq 2.6$ kpc in the K band with an extrapolated central surface brightness of $\mu_{0,K}\simeq 19.6$ mag$~{}\textrm{arcsec}^{-2}$.
4 Discussion
In this section we examine how our results compare with previously published measurements and comment on the implications of the new Haro 11 host color for the underlying stellar population.
4.1 Optical/near-IR color interpretation
The previous color measurement of the LSB host of Haro 11, $V-K=4.2\pm 0.8$, was part of a larger deep optical/near-IR photometric study of four luminous metal-poor BCGs (Bergvall & Östlin, 2002) and an additional six metal-poor BCGs (Bergvall et al., 2005). The colors for all galaxies in the sample were found to be consistently too red to reconcile with a normal stellar population of low metallicity. This has led to somewhat controversial scenarios for the nature of the underlying stellar population (Bergvall & Östlin, 2002; Zackrisson et al., 2006, 2009). The measured colors could be modeled with a low-metallicity ($Z=0.001$) stellar population with a very bottom-heavy IMF ($dN/dM\propto M^{-\alpha}$ with $\alpha=4.50$) and some dust reddening. Zackrisson et al. (2006) found that, in addition to these BCGs, the red colors of the faint outskirts of the stacked edge-on disks of Zibetti et al. (2004) could also be explained by such an IMF. This solution also works for the Bergvall, Zackrisson & Caldwell (2009) detection, since the halo colors of their SDSS stack are similar to those of Zibetti et al. (2004). If such red halos are found to be a common feature of galaxies of very different morphological types, they could contribute substantially to the baryonic dark matter content of the Universe, supporting the idea that a non-negligible fraction of the missing baryons is locked up in faint low-mass stars. These low-mass stars cannot be smoothly distributed over the halo, but would instead need to be concentrated in star clusters, since a smooth halo component with a bottom-heavy IMF is ruled out by Milky Way star count data (Zackrisson & Flynn, 2008).
After the improved implementation of the thermally pulsing AGB phase in stellar evolutionary models (Marigo et al., 2008), all BCGs in the sample except Haro 11 could be reconciled with a stellar population with a Salpeter IMF of an intermediate metallicity of $Z=0.004$–$0.008$, or $20$–$40\%$ of $Z_{\odot}\sim 0.02$ (Zackrisson et al., 2009), with the caveat that these inferred stellar metallicities of the LSB hosts are higher than the observed gas metallicities of the central starburst regions (BÖ02, $Z=0.002$, $10\%$ of $Z_{\odot}$). Haro 11 had the reddest optical/near-IR color in the sample, $V-K=4.2\pm 0.8$, which could not be explained with a Salpeter IMF. Fig. 7 shows the Marigo et al. (2008) stellar evolutionary tracks for all galaxies in the sample for a Salpeter IMF. Haro 11 is clearly an outlier and cannot be reconciled with any reasonable metallicity. The only model that provides a reasonable fit for $V-K=4.2\pm 0.8$ is the bottom-heavy IMF model, albeit with somewhat high metallicity (Zackrisson et al., 2006).
Analysis of the deep new data presented in this paper suggests that the color of the host galaxy of Haro 11 is instead $V-K=2.3\pm 0.2$. This revised color is shown in Fig. 8 with the Marigo et al. (2008) stellar evolutionary tracks for a Salpeter IMF for low and intermediate metallicity. We do not have new B band data for Haro 11, which would allow us to reduce the uncertainty in the $B-V$ color; therefore we cannot give a better age estimate for the LSB host. However, we have reduced the uncertainty in the $V-K$ color by a factor of 4, and from Fig. 8 we can draw the conclusion that the host galaxy is consistent with a metal-poor, very old, but normal-IMF stellar population.
4.2 Comparison with previous observations
The discrepancy between our color measurement and that of BÖ02 is $\sim 2\sigma$. The source of this discrepancy is most likely the improved quality of CCDs and NIR arrays, as well as the larger detector sizes of the new observations. The most dramatic change in data quality is seen in the K band (Figs. 1, 2). Because of the higher sky brightness and the lower atmospheric transmission in the K band, ground-based V band observations are usually $\sim 5$ (Vega) magnitudes deeper than ground-based K band data. The behavior of a $V-K$ color profile at very faint isophotes is thus always subject to limitations imposed by the depth of the K band data, which in turn, given favorable observing conditions and a correctly carried out reduction, depends on the detector quality, the total exposure time, and the specific morphology of the target galaxy. Since the morphology of Haro 11 has not changed in the last decade, we examine the effects of the increase in exposure time. For the K band data the effective exposure time (taking telescope aperture into account) is now 2.4 times longer than for the previously published data. Comparison of the two K surface brightness profiles in Fig. 5 shows that the added integration time has increased the radial range within which the K profile is accurate to better than $\pm 0.5$ mag by $\approx 2$ kpc. To test whether the longer exposure time also accounts for the different slopes of the profiles at large radii, we compared a K band surface brightness profile obtained with $100\%$ of all new K band frames to a profile obtained with only $40\%$ of the frames, equivalent to an effective exposure time of $25$ minutes (25 minutes on the ESO NTT correspond to $66$ minutes on the ESO 2.2m). We conclude that there is no indication that a shallower profile would be systematically brighter at smaller radii than a deeper profile.
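The quoted equivalence of exposure times follows from scaling by the telescope collecting area. As a sketch, assuming nominal aperture diameters of 3.58 m for the NTT and 2.2 m for the ESO 2.2m (the diameters, not the exposure times, are the assumptions here):

```python
# Scale an exposure time between telescopes by collecting area,
# which goes as the square of the aperture diameter.
d_ntt, d_22 = 3.58, 2.2   # nominal aperture diameters in meters (assumed)
t_ntt = 25.0              # minutes of integration on the NTT

t_equiv = t_ntt * (d_ntt / d_22) ** 2  # equivalent minutes on the 2.2m
print(round(t_equiv))  # -> 66
```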
The effect is simply an increase in the random noise at smaller radii but the overall shape and slope of the profile at faint isophotes show no significant change.
On the other hand, insufficient sky subtraction would cause a brightening of the surface brightness profile at large radii from the center of the galaxy. To test this, we have simulated a sky subtraction problem by adding a constant sky residual to each data point of the ESO NTT SOFI profile seen in the right panel of Fig. 5. We find that, in the framework of the ESO 2.2m IRAC2 data, a sky residual on the order of $\sim 1.4\sigma$ is needed to account for the observed brightening of the ESO 2.2m IRAC2 surface brightness profile seen in the left panel of Fig. 5. We therefore conclude that the shape of the ESO 2.2m IRAC2 K band profile is most likely due to the lingering presence of a sky residual of $\sim 1.4\sigma$. Over the same radial range, this sky subtraction problem has been avoided in the ESO NTT SOFI profile mainly due to improved observing techniques, the better quality of the NIR array, improved sky-estimation techniques, and the use of a larger light-gathering area. Specifically, the effective field of view of the final stacked image has increased by a factor of 2.3, thus providing a comparatively larger area for sky estimation, well away from the faintest outskirts of Haro 11 visible in the K band.
The work in this paper indicates the need for a possible revision of the color measurements for the 10 BCGs in the BÖ02 and Bergvall et al. (2005) sample. Currently underway is the reduction of a sample of nearly 40 BCGs in the optical and NIR, which also includes new and deeper data for 5 of the 6 BCGs in the BÖ02 sample.
5 Summary
We have presented deep new optical/near-IR data of the BCG Haro 11 and performed surface photometry on the faint outskirts of the underlying LSB host galaxy. We have obtained a total color of $V-K=2.3\pm 0.2$ for the host over the range $r=5$–$10$ kpc. This is $2.3\sigma$ off from the previously published measurement of $V-K=4.2\pm 0.8$, which was measured out to $r=12.3$ kpc. Increasing the radial range of the total color measurement out to $r=12.3$ kpc gives $V-K=2.4\pm 0.5$ which has a $\sim 2\sigma$ discrepancy to previous results. The new color places Haro 11 on the blue end of the $V-K$ range occupied by the other BCGs in the sample and reconciles the LSB host of Haro 11 with a metal-poor ($Z=0.001$) stellar population with a standard Salpeter IMF. Hence, the metallicity indicated by the color is consistent with that measured from emission-line ratios, and an anomalous stellar population is no longer required to explain the properties of the Haro 11 host galaxy.
Acknowledgments
The near-IR observations in this work are made with the ESO NTT telescope at the La Silla Observatory under program ID 075.B-0220. The optical observations in this work are made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
The authors would like to thank the anonymous referee for thought-provoking comments, and the editor Anna Evripidou for her expeditious help. EZ acknowledges a research grant from the Swedish Royal Academy of Sciences. GÖ and NB acknowledge support from the Swedish Research Council. GÖ is a Royal Swedish Academy of Sciences Research Fellow supported by a grant from the Knut and Alice Wallenberg Foundation.
References
Amorín R., Aguerri J. A. L., Muñoz-Tuñón C., Cairós L. M., 2009, ArXiv e-prints, arXiv:0903.2861
Amorín R. O., Muñoz-Tuñón C., Aguerri J. A. L., Cairós L. M., Caon N., 2007, A&A, 467, 541
Bergvall N., Marquart T., Persson C., Zackrisson E., Östlin G., 2005, in Renzini A., Bender R., eds, Multiwavelength Mapping of Galaxy Formation and Evolution, Strange Hosts of Blue Compact Galaxies, p. 355
Bergvall N., Östlin G., 2002, A&A, 390, 891
Bergvall N., Zackrisson E., Caldwell B., 2009, ArXiv e-prints, arXiv:0909.4296
Cairós L. M., Vílchez J. M., González Pérez J. N., Iglesias-Páramo J., Caon N., 2001, ApJS, 133, 321
Cumming R. J., Östlin G., Marquart T., Fathi K., Bergvall N., Adamo A., 2009, ArXiv e-prints, arXiv:0901.2869
de Jong R. S., 2008, MNRAS, 388, 1521
Doublier V., Comte G., Petrosian A., Surace C., Turatto M., 1997, A&AS, 124, 405
Hook R. N., Maisala S., Oittinen T., Ullgren M., Vasko K., Savolainen V., Lindroos J., Anttila M., Solin O., Møller P. M., Banse K., Peron M., 2006, in Gabriel C., Arviset C., Ponz D., Enrique S., eds, Astronomical Data Analysis Software and Systems XV, Vol. 351 of Astronomical Society of the Pacific Conference Series, PyMidas - A Python Interface to ESO-MIDAS, p. 343
Kunth D., Östlin G., 2000, A&ARv, 10, 1
Loose H.-H., Thuan T. X., 1986, ApJ, 309, 59
Marigo P., Girardi L., Bressan A., Groenewegen M. A. T., Silva L., Granato G. L., 2008, A&A, 482, 883
Noeske K. G., Papaderos P., Cairós L. M., Fricke K. J., 2005, A&A, 429, 115
Papaderos P., Loose H.-H., Thuan T. X., Fricke K. J., 1996, A&AS, 120, 207
Zackrisson E., Bergvall N., Östlin G., Micheva G., Leksell M., 2006, ApJ, 650, 812
Zackrisson E., Flynn C., 2008, ApJ, 687, 242
Zackrisson E., Micheva G., Bergvall N., Östlin G., 2009, ArXiv e-prints, arXiv:0902.4695
Zackrisson E., Micheva G., Östlin G., 2009, MNRAS, 397, 2057
Zibetti S., Ferguson A. M. N., 2004, MNRAS, 352, L6
Zibetti S., White S. D. M., Brinkmann J., 2004, MNRAS, 347, 556
HIP-2005-11/TH
Supersymmetric Models and CP violation in B decays
E. Gabrielli${}^{a,c}$, K. Huitu${}^{a,b}$, and S. Khalil${}^{d,e}$
${}^{a}$Helsinki Institute of Physics, P.O.B. 64, 00014 University of Helsinki, Finland
${}^{b}$Div. of HEP, Dept. of Phys., P.O.B. 64, 00014 University of Helsinki, Finland
${}^{c}$CERN PH-TH, Geneva 23, Switzerland
${}^{d}$IPPP, University of Durham, South Rd., Durham DH1 3LE, U.K.
${}^{e}$Dept. of Math., German University in Cairo - GUC, New Cairo, El Tagamoa Al Khames, Egypt.
In this talk CP violation in supersymmetric models, especially in $B$ decays, is discussed. We review our analysis of the supersymmetric contributions to the mixing CP asymmetries of the $B\to\phi K_{S}$ and $B\to\eta^{\prime}K_{S}$ processes. Both gluino and chargino exchanges are considered in a model-independent way by using the mass insertion approximation method. The QCD factorization method is used, and a parametrization of this method in terms of Wilson coefficients is presented for both decay modes. Correlations between the CP asymmetries of these processes and the direct CP asymmetry in the $b\to s\gamma$ decay are shown.
1 Introduction
In the Standard Model (SM), CP violation is due to the misalignment of the mass and charged-current interaction eigenstates. This misalignment is encoded in the CKM matrix $V_{CKM}$ [1], which appears in the charged-current interaction Lagrangian,
$$L^{CC}_{int}=-\frac{g_{2}}{\sqrt{2}}\left(\begin{array}{ccc}\bar{u}_{L}&\bar{c}_{L}&\bar{t}_{L}\end{array}\right)\gamma_{\mu}V_{CKM}\left(\begin{array}{c}d_{L}\\ s_{L}\\ b_{L}\end{array}\right)W_{\mu}^{+}+h.c.$$
(1)
In the Wolfenstein parametrization $V_{CKM}$ is given by
$$V_{CKM}=\left(\begin{array}{ccc}V_{ud}&V_{us}&V_{ub}\\ V_{cd}&V_{cs}&V_{cb}\\ V_{td}&V_{ts}&V_{tb}\end{array}\right)=\left(\begin{array}{ccc}1-\frac{1}{2}\lambda^{2}&\lambda&A\lambda^{3}(\rho-i\eta)\\ -\lambda&1-\frac{1}{2}\lambda^{2}&A\lambda^{2}\\ A\lambda^{3}(1-\rho-i\eta)&-A\lambda^{2}&1\end{array}\right)+{\cal{O}}(\lambda^{4}),$$
(2)
where the Cabibbo mixing angle is $\lambda=0.22$.
The CKM matrix is unitary, $V_{CKM}^{\dagger}V_{CKM}=1=V_{CKM}V_{CKM}^{\dagger}$.
The unitarity conditions provide strong constraints for CP violation
in the Standard Model.
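As an illustration, the truncated Wolfenstein form of Eq.(2) can be checked numerically: unitarity then holds only up to the truncation order ${\cal O}(\lambda^{4})$. The values of $A$, $\rho$, $\eta$ below are illustrative choices, not fitted values:

```python
import numpy as np

lam, A, rho, eta = 0.22, 0.81, 0.14, 0.35  # A, rho, eta: illustrative only

# CKM matrix in the Wolfenstein parametrization, truncated at O(lambda^3)
V = np.array([
    [1 - lam**2 / 2, lam, A * lam**3 * (rho - 1j * eta)],
    [-lam, 1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2, 1.0],
])

# The truncation violates exact unitarity only at O(lambda^4)
dev = np.abs(V.conj().T @ V - np.eye(3)).max()
print(dev < 5 * lam**4)  # -> True
```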
It is, however, well known that the amount of CP violation in the Standard Model is not enough to account for the generation of the matter-antimatter asymmetry in the universe. New sources of CP violation are therefore expected in scenarios beyond the Standard Model. For example, in supersymmetric models a large number of new phases emerge; these phases are strongly constrained by electric dipole moments.
The unitarity constraints can be represented as unitarity triangles, the lengths of whose sides correspond to products of elements of the CKM matrix implied by the unitarity conditions. In the Standard Model, the angle $\beta$ of the unitarity triangle [2] can be measured from $B$ meson decays. The golden mode $B^{0}\to J/\psi K_{S}$ is dominated by the tree-level contribution, and a measurement of its CP asymmetries determines the angle $\beta$ very accurately.
The dominant part of the decay amplitudes for $B^{0}\to\phi K_{S},\;\eta^{\prime}K_{S}$ is assumed to come from the gluonic penguin, but some contribution from the tree-level $b\to u\bar{u}s$ decay is expected. The $|\phi\rangle$ state is almost pure $|s\bar{s}\rangle$, and consequently this decay mode also measures $\sin 2\beta$ accurately in the SM, up to terms of order ${\cal{O}}(\lambda^{2})$ [3]. The $b\to u\bar{u}s$ tree-level contribution to $B_{d}\to\eta^{\prime}K$ was estimated in [4], where the tree-level amplitude was found to be less than 2% of the gluonic penguin amplitude. Thus in this mode, too, one measures the angle $\beta$ with good precision in the SM.
Therefore, it is expected that NP contributions to the CP asymmetries in
$B^{0}\to\phi K_{S},\;\eta^{\prime}K_{S}$ decays are more significant than in
$B^{0}\to J/\psi K_{S}$ and can compete with the SM one.
$B$ physics is a natural framework in which to test CP violation effects beyond the Standard Model. It is clear that ultimately one needs to test the Standard Model with three generations, and that large statistics is needed to resolve the small effects of CP violation. The B-factories have provided a flow of interesting data in recent years.
In this talk we will not assume any particular model for CP violation, but consider a general SUSY model. The talk is based on papers [6, 5], where a comparative study of the SUSY contributions from chargino and gluino exchange to the $B\rightarrow\phi K$ and $B\rightarrow\eta^{\prime}K$ processes was carried out in the naive factorization (NF) and QCD factorization (QCDF) approaches. In [6] we also analyzed the branching ratios of these decays and investigated their correlations with the CP asymmetries. The correlations between the CP asymmetries of these processes and the direct CP asymmetry in the $b\to s\gamma$ decay [7] are also discussed in [6].
Denoting by $\Delta^{q}_{AB}$ the off–diagonal terms
in the sfermion $(\tilde{q}=\tilde{u},\tilde{d})$
mass matrices for the up and down, respectively,
where $A,B$ indicate chirality couplings to
fermions $A,B=(L,R)$, the A–B squark propagator can be expanded as
$$\langle\tilde{q}^{a}_{A}\tilde{q}^{b*}_{B}\rangle=i\left(k^{2}{\bf 1}-\tilde{m%
}^{2}{\bf 1}-\Delta_{AB}^{q}\right)^{-1}_{ab}\simeq\frac{i\delta_{ab}}{k^{2}-%
\tilde{m}^{2}}+~{}\frac{i(\Delta_{AB}^{q})_{ab}}{(k^{2}-\tilde{m}^{2})^{2}}+{%
\cal O}(\Delta^{2}),$$
(3)
where $q=u,d$ selects up or down sector, respectively,
$a,b=(1,2,3)$ are flavor indices, ${\bf 1}$
is the unit matrix, and $\tilde{m}$ is the average squark mass.
As we will see in the following, it is convenient to parametrize
this expansion in terms of the dimensionless quantity
$(\delta_{AB}^{q})_{ab}\equiv(\Delta^{q}_{AB})_{ab}/\tilde{m}^{2}$.
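The expansion in Eq.(3) is straightforward to verify numerically for a toy mass insertion. In this sketch the matrix dimension, mass scale, and size of $\Delta$ are arbitrary illustrative choices, and the overall factor of $i$ in the propagator is dropped:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3          # number of squark flavors
m2 = 1.0       # average squark mass squared (arbitrary units)
k2 = -0.5      # momentum squared, chosen so k^2 - m^2 != 0

# Small Hermitian off-diagonal mass insertion, |Delta| << m^2
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Delta = 1e-3 * (M + M.conj().T) / 2
np.fill_diagonal(Delta, 0.0)

exact = np.linalg.inv((k2 - m2) * np.eye(n) - Delta)
# First two terms of Eq.(3), without the overall factor of i
first_order = np.eye(n) / (k2 - m2) + Delta / (k2 - m2) ** 2

# The two agree up to corrections of O(Delta^2)
print(np.abs(exact - first_order).max() < 1e-4)  # -> True
```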
New physics (NP) could in principle affect the $B$ meson decay
by means of a new source of CP violating phase in the corresponding
amplitude. In general this phase is different from the corresponding SM
one.
If so, deviations of the CP asymmetries from SM expectations can be sizeable, depending on the relative magnitudes of the SM and NP amplitudes. For instance, in the SM the
$B\to\phi K_{S}$ decay amplitude
is generated at one loop and therefore it is
very sensitive to NP contributions.
In this respect, SUSY models with non minimal flavor structure and new
CP violating phases in the squark mass matrices,
can easily generate large deviations in the
$B\to\phi K_{S}$ asymmetry.
The time dependent CP asymmetry for $B\to\phi K_{S}$ can be described by
$$a_{\phi K_{S}}(t)=\frac{\Gamma(\overline{B}^{0}(t)\to\phi K_{S})-\Gamma(B^{0}(t)\to\phi K_{S})}{\Gamma(\overline{B}^{0}(t)\to\phi K_{S})+\Gamma(B^{0}(t)\to\phi K_{S})}=C_{\phi K_{S}}\cos\Delta M_{B_{d}}t+S_{\phi K_{S}}\sin\Delta M_{B_{d}}t,$$
(4)
where $C_{\phi K_{S}}$ and $S_{\phi K_{S}}$ represent the direct and the mixing CP asymmetry, respectively, and are given by
$$C_{\phi K_{S}}=\frac{|\overline{\rho}(\phi K_{S})|^{2}-1}{|\overline{\rho}(\phi K_{S})|^{2}+1},\ \ S_{\phi K_{S}}=\frac{2\,{\rm Im}\left[\frac{q}{p}\,\overline{\rho}(\phi K_{S})\right]}{|\overline{\rho}(\phi K_{S})|^{2}+1}.$$
(5)
The parameter $\overline{\rho}(\phi K_{S})$ is defined by
$$\overline{\rho}(\phi K_{S})=\frac{\overline{A}(\phi K_{S})}{A(\phi K_{S})},$$
(6)
where $\overline{A}(\phi K_{S})$ and $A(\phi K_{S})$ are the decay amplitudes of the $\overline{B}^{0}$ and $B^{0}$ mesons, respectively. Here, the mixing parameters $p$ and $q$ are defined by $|B_{1}\rangle=p|B^{0}\rangle+q|\overline{B}^{0}\rangle,\ \ |B_{2}\rangle=p|B^{0}\rangle-q|\overline{B}^{0}\rangle$, where $|B_{1(2)}\rangle$ are the mass eigenstates of the $B$ meson.
The ratio of the mixing parameters is given by
$$\frac{q}{p}=-e^{-2i\theta_{d}}\frac{V_{tb}^{*}V_{td}}{V_{tb}V_{td}^{*}},$$
(7)
where $\theta_{d}$ represents any SUSY contribution to the $B^{0}-\bar{B}^{0}$ mixing angle.
Finally, the above amplitudes can be written in terms of the matrix element of the $\Delta B=1$ transition as
$$\overline{A}(\phi K_{S})=\langle\phi K_{S}|H_{eff}^{\Delta B=1}|\overline{B}^{0}\rangle,\ \ \ A(\phi K_{S})=\langle\phi K_{S}|\left(H_{eff}^{\Delta B=1}\right)^{\dagger}|B^{0}\rangle.$$
(8)
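Given $\overline{\rho}(\phi K_{S})$ and $q/p$, Eq.(5) fixes both asymmetries. A minimal numerical sketch follows; the phase and magnitude choices below are purely illustrative, not fitted values:

```python
import cmath

def cp_asymmetries(rho_bar, q_over_p):
    """Direct (C) and mixing (S) CP asymmetries from Eq.(5)."""
    r2 = abs(rho_bar) ** 2
    C = (r2 - 1) / (r2 + 1)
    S = 2 * (q_over_p * rho_bar).imag / (r2 + 1)
    return C, S

# Illustrative inputs: a pure mixing phase and an amplitude ratio with a
# small NP-induced phase and magnitude shift (assumed numbers only)
q_over_p = cmath.exp(-2j * 0.4)      # 2*theta ~ 0.8 rad, assumed
rho_bar = 1.1 * cmath.exp(1j * 0.3)  # assumed NP contribution

C, S = cp_asymmetries(rho_bar, q_over_p)
# With |q/p| = 1 the asymmetries always satisfy C^2 + S^2 <= 1
print(C ** 2 + S ** 2 <= 1.0)  # -> True
```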
Results by the BaBar and Belle collaborations have been announced in [9, 10].
The experimental value of the indirect CP asymmetry parameter
for $B^{0}\to J/\psi K_{S}$ is given by [9, 10]
$$\displaystyle S_{J/\psi K_{S}}=0.726\pm 0.037,$$
(9)
which agrees quite well with the SM prediction $0.715_{-0.045}^{+0.055}$
[11].
The corresponding values of $\sin{2\beta}$ extracted from the $B^{0}\to\phi K_{S}$ process are as follows [9, 10]:
$$S_{\phi K_{S}}=0.50\pm 0.25^{+0.07}_{-0.04}\;\;({\rm BaBar})\,,\qquad S_{\phi K_{S}}=0.06\pm 0.33\pm 0.09\;\;({\rm Belle})\,,$$
(10)
where the first errors are statistical and the second systematic.
As we can see from Eq.(10), the central values differ. The BaBar result [9] is more compatible with the SM prediction, while the Belle measurement [10] shows a deviation from the $c\bar{c}$ measurements of about $2\sigma$. Moreover, the average $S_{\phi K_{S}}=0.34\pm 0.20$ displays a 1.7$\sigma$ deviation from Eq.(9).
Furthermore, the most recently measured CP asymmetry in the $B^{0}\to\eta^{\prime}K_{S}$ decay is found by the BaBar [9] and Belle [10] collaborations to be
$$S_{\eta^{\prime}K_{S}}=0.27\pm 0.14\pm 0.03\;\;({\rm BaBar})\,,\qquad S_{\eta^{\prime}K_{S}}=0.65\pm 0.18\pm 0.04\;\;({\rm Belle})\,,$$
(11)
with an average $S_{\eta^{\prime}K_{S}}=0.41\pm 0.11$,
which shows a 2.5$\sigma$ discrepancy from Eq. (9).
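The quoted averages follow from inverse-variance weighting, with statistical and systematic errors added in quadrature. A minimal sketch (assuming uncorrelated errors) reproduces the $S_{\eta^{\prime}K_{S}}$ average:

```python
import math

def weighted_average(measurements):
    """Inverse-variance weighted average. Each entry is
    (value, stat_err, syst_err); errors are combined in quadrature."""
    pairs = [(v, math.hypot(s1, s2)) for v, s1, s2 in measurements]
    w = [1.0 / e ** 2 for _, e in pairs]
    avg = sum(wi * v for wi, (v, _) in zip(w, pairs)) / sum(w)
    err = 1.0 / math.sqrt(sum(w))
    return avg, err

# S_{eta' K_S}: BaBar 0.27 +/- 0.14 +/- 0.03, Belle 0.65 +/- 0.18 +/- 0.04
avg, err = weighted_average([(0.27, 0.14, 0.03), (0.65, 0.18, 0.04)])
print(round(avg, 2), round(err, 2))  # -> 0.41 0.11
```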
It is interesting to note that the results on the s-penguin modes from both experiments differ from the value extracted from the $c\bar{c}$ mode ($J/\psi$): BaBar by 2.7$\sigma$ and Belle by 2.4$\sigma$ [9, 10]. At the same time the experiments agree with each other, and even the central values are quite close:
$$0.42\pm 0.10\;\;({\rm BaBar})\,,\qquad 0.43^{+0.12}_{-0.11}\;\;({\rm Belle})\,.$$
On the other hand, the experimental measurements of the branching
ratios of $B^{0}\to\phi K^{0}$ and $B^{0}\to\eta^{\prime}K^{0}$ at BaBar
[12], Belle [13],
and CLEO [14] lead to the following averaged results
[15]:
$$BR(B^{0}\to\phi K^{0})=(8.3^{+1.2}_{-1.0})\times 10^{-6},$$
(12)
$$BR(B^{0}\to\eta^{\prime}K^{0})=(65.2^{+6.0}_{-5.9})\times 10^{-6}.$$
(13)
(13)
On the theoretical side, the SM predictions for $BR(B\to\phi K)$ are in good agreement with Eq.(12), while for $BR(B\to\eta^{\prime}K)$ they show a large discrepancy, the experimental value in Eq.(13) being two to five times larger [16]. This discrepancy is not new, and it has created growing interest in the subject.
However, since it is observed only in the $B\to\eta^{\prime}K$ process, mechanisms based on the peculiar structure of the $\eta^{\prime}$ meson, such as an intrinsic charm or gluonium content, have been investigated to solve the puzzle.
Correlations with branching ratios have been discussed in [6].
2 SUSY contributions to $B\to\phi(\eta^{\prime})K$ decay
We first consider supersymmetric effects in the non-leptonic $\Delta B=1$ processes. Such effects could probe any testable SUSY implications in CP violation experiments.
The most general effective Hamiltonian $H^{\Delta B=1}_{\rm eff}$
for the non-leptonic $\Delta B=1$ processes can be expressed via
the Operator
Product Expansion (OPE) as [17]
$$H^{\Delta B=1}_{\rm eff}=\left\{\frac{G_{F}}{\sqrt{2}}\sum_{p=u,c}\lambda_{p}\left(C_{1}Q_{1}^{p}+C_{2}Q_{2}^{p}+\sum_{i=3}^{10}C_{i}Q_{i}+C_{7\gamma}Q_{7\gamma}+C_{8g}Q_{8g}\right)\right\}+\left\{Q_{i}\to\tilde{Q}_{i}\,,\,C_{i}\to\tilde{C}_{i}\right\}\;,$$
(14)
where $\lambda_{p}=V_{pb}V^{\star}_{ps}$,
with $V_{pb}$ the unitary CKM matrix elements satisfying the unitarity
triangle relation
$\lambda_{t}+\lambda_{u}+\lambda_{c}=0$, and $C_{i}\equiv C_{i}(\mu_{b})$ are
the Wilson coefficients at low energy scale
$\mu_{b}\simeq{\cal O}(m_{b})$.
The basis $Q_{i}\equiv Q_{i}(\mu_{b})$
is given by the relevant local operators
renormalized at the same scale $\mu_{b}$, namely
$$\begin{array}{ll}Q^{p}_{2}=(\bar{p}b)_{V-A}\,(\bar{s}p)_{V-A}\;, & Q^{p}_{1}=(\bar{p}_{\alpha}b_{\beta})_{V-A}\,(\bar{s}_{\beta}p_{\alpha})_{V-A}\;,\\ Q_{3}=(\bar{s}b)_{V-A}\sum_{q}(\bar{q}q)_{V-A}\;, & Q_{4}=(\bar{s}_{\alpha}b_{\beta})_{V-A}\sum_{q}(\bar{q}_{\beta}q_{\alpha})_{V-A}\;,\\ Q_{5}=(\bar{s}b)_{V-A}\sum_{q}(\bar{q}q)_{V+A}\;, & Q_{6}=(\bar{s}_{\alpha}b_{\beta})_{V-A}\sum_{q}(\bar{q}_{\beta}q_{\alpha})_{V+A}\;,\\ Q_{7}=(\bar{s}b)_{V-A}\sum_{q}\frac{3}{2}e_{q}(\bar{q}q)_{V+A}\;, & Q_{8}=(\bar{s}_{\alpha}b_{\beta})_{V-A}\sum_{q}\frac{3}{2}e_{q}(\bar{q}_{\beta}q_{\alpha})_{V+A}\;,\\ Q_{9}=(\bar{s}b)_{V-A}\sum_{q}\frac{3}{2}e_{q}(\bar{q}q)_{V-A}\;, & Q_{10}=(\bar{s}_{\alpha}b_{\beta})_{V-A}\sum_{q}\frac{3}{2}e_{q}(\bar{q}_{\beta}q_{\alpha})_{V-A}\;,\\ Q_{7\gamma}=\frac{e}{8\pi^{2}}m_{b}\bar{s}\sigma^{\mu\nu}(1+\gamma_{5})F_{\mu\nu}b\;, & Q_{8g}=\frac{g_{s}}{8\pi^{2}}m_{b}\bar{s}_{\alpha}\sigma^{\mu\nu}(1+\gamma_{5})G^{A}_{\mu\nu}t^{A}_{\alpha\beta}b_{\beta}\;.\end{array}$$
(15)
Here $\alpha$ and $\beta$ stand for color indices, $t^{A}_{\alpha\beta}$ for the $SU(3)_{c}$ color matrices, and $\sigma^{\mu\nu}=\frac{1}{2}i[\gamma^{\mu},\gamma^{\nu}]$. Moreover, $e_{q}$ are the quark electric charges in units of $e$, $(\bar{q}q)_{V\pm A}\equiv\bar{q}\gamma_{\mu}(1\pm\gamma_{5})q$, and $q$ runs over the $u$, $d$, $s$, and $b$ quark labels.
In the SM only the first part of the right-hand side of Eq. (14) (inside the first curly brackets), containing the operators $Q_{i}$, contributes: $Q^{p}_{1,2}$ are the current-current operators, $Q_{3-6}$ the QCD penguin operators, and $Q_{7-10}$ the electroweak penguin operators, while $Q_{7\gamma}$ and $Q_{8g}$ are the magnetic and chromo-magnetic dipole operators, respectively.
In addition, operators $\tilde{Q}_{i}\equiv\tilde{Q}_{i}(\mu_{b})$
are obtained from $Q_{i}$ by the chirality exchange
$(\bar{q}_{1}q_{2})_{V\pm A}\to(\bar{q}_{1}q_{2})_{V\mp A}$.
Notice that in the SM the coefficients $\tilde{C}_{i}$ vanish identically due to the $V-A$ structure of the charged weak currents, while in the MSSM they can receive contributions from both chargino and gluino exchanges [18, 19].
As mentioned, in [5] we calculated the chargino contribution to the Wilson coefficients in the MIA. In this framework one chooses a basis (the super-CKM basis) in which the couplings of fermions and sfermions to neutral gaugino fields are flavor diagonal. In this basis, the interaction Lagrangian involving charginos is given by
$$\begin{split}\mathcal{L}_{q\tilde{q}\tilde{\chi}^{+}}=-g\sum_{k}\sum_{a,b}\Big(&V_{k1}\,K_{ba}^{*}\,\bar{d}_{L}^{a}\,(\tilde{\chi}^{+}_{k})^{*}\,\tilde{u}^{b}_{L}-U_{k2}^{*}\,(Y_{d}^{\mathrm{diag}}K^{+})_{ab}\,\bar{d}_{R}^{a}\,(\tilde{\chi}^{+}_{k})^{*}\,\tilde{u}^{b}_{L}\\
&-V_{k2}\,(KY_{u}^{\mathrm{diag}})_{ab}\,\bar{d}_{L}^{a}\,(\tilde{\chi}^{+}_{k})^{*}\,\tilde{u}^{b}_{R}\Big),\end{split}$$
(16)
where $q_{R,L}=\frac{1}{2}(1\pm\gamma_{5})q$, and contraction of color and Dirac indices is understood. Here $Y_{u,d}^{\mathrm{diag}}$ are the diagonal Yukawa matrices, and $K$ is the CKM matrix. The indices $a,b$ and $k$ label flavor and chargino mass eigenstates, respectively, and $V$, $U$ are the chargino mixing matrices defined by
$$U^{*}M_{\tilde{\chi}^{+}}V^{-1}=\mathrm{diag}(m_{\tilde{\chi}_{1}^{+}},m_{\tilde{\chi}_{2}^{+}}),\qquad M_{\tilde{\chi}^{+}}=\left(\begin{array}{cc}M_{2}&\sqrt{2}m_{W}\sin\beta\\ \sqrt{2}m_{W}\cos\beta&\mu\end{array}\right)\;,$$
(17)
where $M_{2}$ is the weak gaugino mass, $\mu$ is the supersymmetric Higgs mixing term, and $\tan\beta$ is the ratio of the vacuum expectation value (VEV) of the up-type Higgs to the VEV of the down-type Higgs. (This $\tan\beta$ should not be confused with the angle $\beta$ of the unitarity triangle.)
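As an illustrative cross-check of Eq. (17), the biunitary diagonalization can be carried out numerically with a singular value decomposition; the input values for $M_{2}$, $\mu$, $m_{W}$, and $\tan\beta$ below are hypothetical, not fit results.

```python
import numpy as np

# Biunitary diagonalization of the chargino mass matrix, Eq. (17):
# U* M V^{-1} = diag(m_1, m_2).  Parameter values are illustrative only.
M2, mu, mW, tan_beta = 200.0, 300.0, 80.4, 40.0   # GeV (hypothetical point)
beta = np.arctan(tan_beta)

M = np.array([[M2, np.sqrt(2) * mW * np.sin(beta)],
              [np.sqrt(2) * mW * np.cos(beta), mu]])

# SVD: M = A @ diag(s) @ Bh, with the singular values s the chargino masses.
A, s, Bh = np.linalg.svd(M)
U = A.T    # then U* M V^{-1} is diagonal, since U* = A^T for real M
V = Bh     # V = B^dagger as returned by numpy

D = U.conj() @ M @ np.linalg.inv(V)
print(np.round(D, 6))   # diagonal matrix of chargino masses
```

Note that numpy returns the singular values in descending order, so the heavier chargino mass appears first for this input.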
As one can see from Eq. (16), the higgsino couplings are suppressed by the Yukawa couplings of the light quarks and are therefore negligible, except for the stop-bottom interaction, which is directly enhanced by the top Yukawa coupling ($Y_{t}$). In our analysis we neglect the higgsino contributions proportional to the Yukawa couplings of the light quarks, with the exception of the bottom Yukawa $Y_{b}$, since its effect can be enhanced at large $\tan{\beta}$.
However, it is easy to show that this vertex cannot affect the dimension-six operators of the effective Hamiltonian for $\Delta B=1$ transitions (the operators $Q_{i=1-10}$ in Eq. (14)); only interactions involving left-handed down quarks contribute there. On the contrary, contributions proportional to the bottom Yukawa $Y_{b}$ enter the Wilson coefficients of the dipole operators ($C_{7\gamma}$, $C_{8g}$), due to the chirality flip in the $b\to s\gamma$ and $b\to sg$ transitions.
The calculation of $B\to\phi(\eta^{\prime})K$ decays involves the evaluation of the hadronic matrix elements of the relevant operators in the effective Hamiltonian, which is the most uncertain part of the calculation. In the limit $m_{b}\gg\Lambda_{QCD}$, and neglecting QCD corrections in $\alpha_{s}$, the hadronic matrix elements of $B$-meson decays into two mesons can be factorized; for $B\to M_{1}M_{2}$, for example,
$$\langle M_{1}M_{2}|Q_{i}|\bar{B}^{0}\rangle=\langle M_{1}|j_{1}|\bar{B}^{0}\rangle\times\langle M_{2}|j_{2}|0\rangle,$$
(18)
where $M_{1,2}$ denote two generic mesons, $Q_{i}$ are the local four-fermion operators of the effective Hamiltonian in Eq. (14), and $j_{1,2}$ are bilinear quark currents. The final result can then be parametrized by products of decay constants and transition form factors. This approach is known as naive factorization (NF) [20, 21].
In QCDF the hadronic
matrix element for $B\to MK$ with $M=\phi,\eta^{\prime}$
in the heavy quark limit $m_{b}\gg\Lambda_{QCD}$ can be written as
[22]
$$\langle MK|Q_{i}|B\rangle_{QCDF}=\langle MK|Q_{i}|B\rangle_{NF}\left[1+\sum_{n}r_{n}\alpha_{S}^{n}+{\mathcal{O}}\left(\frac{\Lambda_{QCD}}{m_{b}}\right)\right],$$
(19)
where $\langle MK|Q_{i}|B\rangle_{NF}$ denotes the NF result. The second and third terms in the bracket represent the radiative corrections in the $\alpha_{S}$ and $\Lambda_{QCD}/m_{b}$ expansions, respectively. Notice that, even though simple factorization is broken at higher orders in $\alpha_{s}$, these corrections can be calculated systematically in terms of short-distance coefficients and meson light-cone distribution functions.
In the QCD factorization method [22, 23],
the decay amplitudes of $B\to\phi(\eta^{\prime})K$
can be expressed as
$$\mathcal{A}\left(B\to\phi(\eta^{\prime})K\right)=\mathcal{A}^{f}\left(B\to\phi%
(\eta^{\prime})K\right)+\mathcal{A}^{a}\left(B\to\phi(\eta^{\prime})K\right),$$
(20)
where
$$\mathcal{A}^{f}\left(B\to\phi(\eta^{\prime})K\right)=\frac{G_{F}}{\sqrt{2}}\sum_{p=u,c}\sum_{i=1}^{10}V_{pb}V_{ps}^{*}\,a_{i}^{\phi(\eta^{\prime})}\langle\phi(\eta^{\prime})K|Q_{i}|B\rangle_{NF},$$
(21)
and
$$\mathcal{A}^{a}\left(B\to\phi(\eta^{\prime})K\right)=\frac{G_{F}}{\sqrt{2}}f_{B}f_{K}f_{\phi}\sum_{p=u,c}\sum_{i=1}^{10}V_{pb}V_{ps}^{*}\,b_{i}^{\phi(\eta^{\prime})}.$$
(22)
The first term, $\mathcal{A}^{f}\left(B\to\phi(\eta^{\prime})K\right)$, includes the vertex corrections, penguin corrections, and hard spectator-scattering contributions, which are encoded in the parameters $a_{i}^{\phi(\eta^{\prime})}$. The second term, $\mathcal{A}^{a}\left(B\to\phi(\eta^{\prime})K\right)$, includes the weak annihilation contributions, which are absorbed into the parameters $b_{i}^{\phi(\eta^{\prime})}$. However, these contributions contain infrared divergences, whose subtractions are usually parametrized as [23]
$$\int_{0}^{1}\frac{dx}{x}\to X_{H,A}\equiv\left(1+\rho_{H,A}e^{i\phi_{H,A}}\right)\ln\left(\frac{m_{B}}{\Lambda_{QCD}}\right),$$
(23)
where $\rho_{H,A}$ are free parameters expected to be of order $\rho_{H,A}\simeq{\cal O}(1)$, and $\phi_{H,A}\in[0,2\pi]$. As already discussed in Ref. [23], if one does not require fine tuning of the annihilation phase $\phi_{A}$, the parameter $\rho_{A}$ acquires an upper bound from the measured branching ratios, of order $\rho_{A}\lesssim 2$. Larger values of $\rho_{A}$ are still possible, but they require strong fine tuning of the phase $\phi_{A}$. However, assuming very large values of $\rho_{H,A}$, which implicitly means large contributions from hard-scattering and weak annihilation diagrams, seems quite unrealistic. In [6] we assumed $\rho<2$.
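The parametrization of Eq. (23) is simple enough to evaluate directly; the sketch below uses indicative values of $m_{B}$ and $\Lambda_{QCD}$ (assumptions for illustration, not values fixed in the text).

```python
import numpy as np

# Model of the divergent endpoint integrals, Eq. (23):
# X = (1 + rho * exp(i*phi)) * ln(m_B / Lambda_QCD).
m_B, Lambda_QCD = 5.279, 0.5   # GeV, indicative values

def X(rho, phi):
    return (1.0 + rho * np.exp(1j * phi)) * np.log(m_B / Lambda_QCD)

# rho = 0 gives the default real value; scanning rho < 2 and
# phi in [0, 2*pi) maps out the range assumed in the text.
print(X(0.0, 0.0))
print(abs(X(2.0, np.pi)))   # maximally destructive phase choice
```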
3 CP asymmetry in $B\to{\phi K_{S}}$ and in
$B\to{\eta^{\prime}K_{S}}$
In order to simplify our analysis, it is useful to parametrize the SUSY effects by introducing the ratio of the SUSY and SM amplitudes,
$$\left(\frac{A^{\mbox{\tiny SUSY}}}{A^{\mbox{\tiny SM}}}\right)_{\phi K_{S}}\equiv R_{\phi}\,e^{i\theta_{\phi}}\,e^{i\delta_{\phi}},$$
(24)
and analogously for the $\eta^{\prime}K_{S}$ decay mode,
$$\left(\frac{A^{\mbox{\tiny SUSY}}}{A^{\mbox{\tiny SM}}}\right)_{\eta^{\prime}K_{S}}\equiv R_{\eta^{\prime}}\,e^{i\theta_{\eta^{\prime}}}\,e^{i\delta_{\eta^{\prime}}},$$
(26)
where $R_{\phi,\eta^{\prime}}$ are the corresponding absolute values of $|A^{\mbox{\tiny SUSY}}/A^{\mbox{\tiny SM}}|$, the angles $\theta_{\phi,\eta^{\prime}}$ are the corresponding SUSY CP-violating phases, and $\delta_{\phi,\eta^{\prime}}=\delta^{SM}_{\phi,\eta^{\prime}}-\delta^{SUSY}_{\phi,\eta^{\prime}}$ parametrize the strong (CP-conserving) phases.
In this case, the mixing-induced CP asymmetry $S_{\phi K_{S}}$ takes the following form
$$S_{\phi K_{S}}=\frac{\sin 2\beta+2R_{\phi}\cos\delta_{\phi}\sin(\theta_{\phi}+2\beta)+R_{\phi}^{2}\sin(2\theta_{\phi}+2\beta)}{1+2R_{\phi}\cos\delta_{\phi}\cos\theta_{\phi}+R_{\phi}^{2}},$$
(27)
and analogously for $B\to\eta^{\prime}K_{S}$
$$S_{\eta^{\prime}K_{S}}=\frac{\sin 2\beta+2R_{\eta^{\prime}}\cos\delta_{\eta^{\prime}}\sin(\theta_{\eta^{\prime}}+2\beta)+R_{\eta^{\prime}}^{2}\sin(2\theta_{\eta^{\prime}}+2\beta)}{1+2R_{\eta^{\prime}}\cos\delta_{\eta^{\prime}}\cos\theta_{\eta^{\prime}}+R_{\eta^{\prime}}^{2}}.$$
(28)
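Equations (27) and (28) have the same functional form, so a single routine covers both; the numerical inputs below (the angle $\beta$ and a sample SUSY point) are illustrative assumptions.

```python
import numpy as np

# Mixing-induced CP asymmetry of Eqs. (27)-(28): R is |A_SUSY/A_SM|,
# theta the SUSY CP-violating phase, delta the relative strong phase.
def S_mix(R, theta, delta, beta):
    num = (np.sin(2 * beta)
           + 2 * R * np.cos(delta) * np.sin(theta + 2 * beta)
           + R**2 * np.sin(2 * theta + 2 * beta))
    den = 1 + 2 * R * np.cos(delta) * np.cos(theta) + R**2
    return num / den

beta = 0.39   # rad, an illustrative value near the SM fit
print(S_mix(0.0, 0.0, 0.0, beta))    # R = 0: recovers sin(2*beta)
print(S_mix(0.8, -1.2, 0.0, beta))   # a SUSY point pulling S negative
```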
Our numerical results for the gluino contributions to the CP asymmetry $S_{\phi K_{S}}$ are presented in Fig. 1, and those for $S_{\eta^{\prime}K_{S}}$ in Fig. 2. In all the plots, the regions inside the horizontal lines indicate the allowed $2\sigma$ experimental range. Only one mass insertion at a time is taken to be active; in particular, we scanned over $|{\left(\delta^{d}_{LL}\right)_{23}}|<1$ and $|{\left(\delta^{d}_{LR}\right)_{23}}|<1$. Then $S_{\phi(\eta^{\prime})K_{S}}$ is plotted versus $\theta_{\phi}$, which in the case of one dominant mass insertion should be identified as $\theta_{\phi}={\rm arg}[(\delta_{AB}^{d})_{ij}]$.
We have scanned over the relevant SUSY parameter space, in this case the average squark mass $\tilde{m}$ and the gluino mass $m_{\tilde{g}}$, assuming SM central values [24]. Moreover, we require that the SUSY spectra satisfy the present experimental lower mass bounds [24], in particular $m_{\tilde{g}}>200$ GeV and $\tilde{m}>300$ GeV. In addition, we impose that the constraints from the branching ratio (BR) of $b\to s\gamma$ and from $B-\bar{B}$ mixing are satisfied at 95% C.L. [25], namely $2\times 10^{-4}\leq BR(b\to s\gamma)<4.5\times 10^{-4}$. The allowed ranges for $|{\left(\delta^{d}_{LL}\right)_{23}}|$ and $|{\left(\delta^{d}_{LR}\right)_{23}}|$ are then obtained by taking these constraints into account.
We have also scanned over the
full range of the parameters $\rho_{A,H}$ and $\phi_{A,H}$ in
$X_{A}$ and $X_{H}$, respectively, as defined in Eq.(23).
The chargino contributions to $S_{\phi K_{S}}$ and $S_{\eta^{\prime}K_{S}}$ [5, 6] are summarized in Fig. 3 and Fig. 4, where $S_{\phi(\eta^{\prime})K_{S}}$ is plotted versus the argument of the relevant chargino mass insertions, namely ${\left(\delta^{u}_{LL}\right)_{32}}$ and ${\left(\delta^{u}_{RL}\right)_{32}}$.
As in the gluino dominated scenario, we have scanned over the
relevant SUSY parameter space,
in particular, the average squark mass $\tilde{m}$,
weak gaugino mass $M_{2}$, the $\mu$ term,
and the light right stop mass $\tilde{m}_{\tilde{t}_{R}}$.
We also assume $\tan{\beta}=40$ and take into account the present experimental bounds on the SUSY spectra, in particular $\tilde{m}>300$ GeV, the lightest chargino mass $M_{\chi}>90$ GeV, and $\tilde{m}_{\tilde{t}_{R}}\geq 150$ GeV. As in the gluino case, we scan over the real and imaginary parts of the mass insertions ${\left(\delta^{u}_{LL}\right)_{32}}$ and ${\left(\delta^{u}_{RL}\right)_{32}}$, taking into account the constraints on BR($b\to s\gamma$) and $B-\bar{B}$ mixing at 95% C.L. The $b\to s\gamma$ constraints impose stringent bounds on ${\left(\delta^{u}_{LL}\right)_{32}}$, especially at large $\tan{\beta}$ [5].
Finally, as in the other plots, we scanned over the QCDF free parameters $\rho_{A,H}<2$ and $0<\phi_{A,H}<2\pi$. The reason why extensive regions of negative values of $S_{\phi K_{S}}$ are excluded here is solely the $b\to s\gamma$ constraint [5]. Indeed, the inclusion of the ${\left(\delta^{u}_{LL}\right)_{32}}$ mass insertion can generate large and negative values of $S_{\phi K_{S}}$ through chargino contributions to the chromo-magnetic operator $Q_{8g}$, which are enhanced by terms of order $m_{\chi^{\pm}}/m_{b}$. However, contrary to the gluino scenario, the ratio $|C_{8g}/C_{7\gamma}|$ is not enhanced by color factors, and large contributions to $C_{8g}$ unavoidably lead to a violation of the $b\to s\gamma$ constraints.
In Fig. 5 we plot the correlations between $S_{\phi K_{S}}$ and $S_{\eta^{\prime}K_{S}}$ for both chargino and gluino exchanges in QCDF. For illustrative purposes, in all figures analyzing correlations we color the area of the ellipse corresponding to the allowed experimental range at the $2\sigma$ level. (All ellipses have axes of length $4\sigma$; as a first approximation, no correlation between the expectation values of the two observables has been assumed.)
In Fig. 6 the impact of a light charged Higgs on chargino exchanges is presented, for a charged-Higgs mass $m_{H}=200$ GeV and $\tan{\beta}=40$. The effects of charged-Higgs exchange in the case of the ${\left(\delta^{u}_{RL}\right)_{32}}$ mass insertion are negligible, as expected from the fact that the terms proportional to ${\left(\delta^{u}_{RL}\right)_{32}}$ in the $b\to s\gamma$ and $b\to sg$ amplitudes are not enhanced by $\tan{\beta}$.
On the other hand, in gluino exchanges with ${\left(\delta^{d}_{LR}\right)_{23}}$ or ${\left(\delta^{d}_{LL}\right)_{23}}$, the most conspicuous effect of the charged-Higgs contribution is to populate the area outside the allowed experimental region. This is due to a destructive interference with the $b\to s\gamma$ amplitude, which relaxes the $b\to s\gamma$ constraints. The most relevant effect of charged-Higgs exchange occurs in the scenario of chargino exchanges with the ${\left(\delta^{u}_{LL}\right)_{32}}$ mass insertion. In this case, as can be seen from Fig. 6, a strong destructive interference with the $b\to s\gamma$ amplitude can relax the $b\to s\gamma$ constraints in the right direction, allowing the chargino predictions to fall inside the experimental region. Moreover, we have checked that, for $\tan{\beta}=40$, a charged Higgs heavier than approximately 600 GeV cannot affect the CP asymmetries significantly.
4 Direct CP asymmetry in $b\to s\gamma$ versus
$S_{\phi(\eta^{\prime})K_{S}}$
Next we present the correlations between the SUSY predictions for the direct CP asymmetry $A_{CP}(b\to s\gamma)$ in the $b\to s\gamma$ decay and those for the asymmetries in $B\to\phi(\eta^{\prime})K_{S}$. The CP asymmetry in $b\to s\gamma$ is measured in the inclusive radiative decay $B\to X_{s}\gamma$ through the quantity
$$A_{CP}(b\to s\gamma)=\frac{\Gamma(\bar{B}\to X_{s}\gamma)-\Gamma(B\to X_{\bar{s}}\gamma)}{\Gamma(\bar{B}\to X_{s}\gamma)+\Gamma(B\to X_{\bar{s}}\gamma)}.$$
(29)
The SM prediction for $A_{CP}(b\to s\gamma)$ is very small, less than
$1\%$ in magnitude, but known with high precision [7].
Indeed, inclusive decay rates of B mesons are free from large
theoretical uncertainties since they can be reliably calculated in QCD using
the OPE.
Thus, the observation of sizeable effects in
$A_{CP}(b\to s\gamma)$ would be a clean signal of new physics.
In particular, large asymmetries are expected in models with enhanced
chromo-magnetic dipole operators, like for instance supersymmetric models
[7].
In Fig. 7 we show our results for two
mass insertions ${\left(\delta^{d}_{LR}\right)_{23}}$ and ${\left(\delta^{u}_{LL}\right)_{32}}$
with both gluino and chargino exchanges.
In this case we see that the $S_{\phi K_{S}}$ constraints do not set any restriction on $A_{CP}(b\to s\gamma)$, and large and positive values of the asymmetry can also be achieved. However, imposing the constraints from $S_{\eta^{\prime}K_{S}}$ (right panel of Fig. 7), the region of negative $A_{CP}(b\to s\gamma)$ is disfavored in this scenario as well.
5 Conclusions
CP violation remains one of the most interesting research topics, both theoretically and experimentally. The $B$-meson decay modes, in particular, seem ideally suited for searches for new-physics effects, since new interesting results keep arriving from the $B$-factories. For the interpretation of those results, which become more accurate with increasing statistics, reducing the theoretical uncertainties of the calculations is also a major challenge.
The $B$-meson decays to $\phi K$, $\eta^{\prime}K$, and to
$X_{S}\gamma$ provide a clean window to the physics beyond the SM.
Here we have reviewed our model-independent analysis of the supersymmetric contributions to the CP asymmetries and the branching ratios of these processes. The relevant SUSY contributions to $b\to s$ transitions, namely chargino and gluino exchanges in box and penguin diagrams, have been considered using the mass insertion method.
Due to the stringent constraints from the experimental measurement of $BR(b\to s\gamma)$, the scenario with pure chargino exchanges cannot give large and negative values of the CP asymmetry $S_{\phi K_{S}}$. We have seen, however, that a charged Higgs may enhance the chargino contributions substantially.
On the other hand, it is quite possible for gluino exchanges to account for
$S_{\phi K_{S}}$ and $S_{\eta^{\prime}K_{S}}$ at the same time.
We also discussed the correlations between the CP asymmetries of these
processes and the direct CP asymmetry in $b\to s\gamma$ decay.
In this case, we found that the general trend of
SUSY models, satisfying all the experimental constraints,
favors large and positive contributions
to $b\to s\gamma$ asymmetry.
Acknowledgments
KH thanks the German University in Cairo for hospitality and the
pleasant atmosphere of the 1st GUC Workshop on High Energy Physics.
The authors appreciate the financial support from the Academy of Finland
(project numbers 104368 and 54023).
References
[1]
M. Kobayashi, T. Maskawa, Prog. Theor. Phys. 49 (1973) 652.
[2]
J. Charles, et al., The CKMfitter Group, hep-ph/0406184.
[3]
Y. Grossman, G. Isidori, M. Worah, Phys. Rev.
D 58 (1998) 057504.
[4]
D. London, A. Soni, Phys. Lett. B 407 (1997) 61.
[5]
D. Chakraverty, E. Gabrielli, K. Huitu and S. Khalil,
Phys. Rev. D 68 (2003) 095004.
[6]
E. Gabrielli, K. Huitu and S. Khalil,
Nucl. Phys. B 710 (2005) 139.
[7]
A. L. Kagan and M. Neubert, Phys. Rev. D 58 (1998) 094012.
[8]
L.J. Hall, V.A. Kostelecky, and S. Raby, Nucl. Phys. B 267 (1986) 415;
J.S. Hagelin, S. Kelley, and T. Tanaka, Nucl. Phys. B 415 (1994) 293;
E. Gabrielli, A. Masiero, and L. Silvestrini, Phys. Lett. B 374
(1996) 80; F. Gabbiani, E. Gabrielli, A. Masiero, L. Silvestrini,
Nucl. Phys. B 477 (1996) 321.
[9]
M. A.Giorgi (BaBar collaboration),
plenary talk at XXXII Int. Conference on High Energy
Physics, Beijing, China, August 16-22, 2004, http://ichep04.ihep.ac.cn/
[10]
Y. Sakai (Belle collaboration),
plenary talk at XXXII Int. Conference on High Energy
Physics, Beijing, China, August 16-22, 2004, http://ichep04.ihep.ac.cn/
[11]
A.J. Buras, hep-ph/0210291; A.J. Buras, F. Parodi, and A. Stocchi,
JHEP 0301 (2003) 029.
[12]
B. Aubert et al., BaBar Collaboration, Phys. Rev. D 69
(2004) 0111102;
B. Aubert et al., BaBar Collaboration, Phys. Rev. Lett. 91
(2003) 161801.
[13]
K. Abe et al., Belle Collaboration, Phys. Rev. Lett. 91
(2003) 201801;
H. Aihara (for the Belle Collaboration), talk at FPCP (2003).
[14]
R.A. Briere et al., CLEO Collaboration,
Phys. Rev. Lett. 86 (2001) 3718;
S.J. Richichi et al., CLEO Collaboration,
Phys. Rev. Lett. 85 (2000) 520.
[15]
Heavy Flavor Averaging Group,
http://www.slac.stanford.edu/xorg/hfag/.
[16]
E. Kou and A.I. Sanda, Phys. Lett. B 525 (2002) 240.
[17]
G. Buchalla, A.J. Buras, and M.E. Lautenbacher,
Rev. Mod. Phys. 68 (1996) 1125.
[18]
S. Bertolini, F. Borzumati, A. Masiero, and G. Ridolfi,
Nucl. Phys. B 353 (1991) 591.
[19]
F. Gabbiani, E. Gabrielli, A. Masiero, L. Silvestrini,
Nucl. Phys. B 477 (1996) 321.
[20]
H. Simma and D. Wyler, Phys. Lett. B 272 (1991) 395;
G. Kramer, W.F. Palmer, and H. Simma,
Nucl. Phys. B 428 (1994) 77; Z. Phys. C 66 (1995) 429;
G. Kramer and W.F. Palmer, Phys. Rev. D 52 (1995) 6411;
G. Kramer, W.F. Palmer, and Y.L. Wu, Commun. Theor. Phys. 27
(1997) 457; D. Du and L. Guo, Z. Phys. C 75 (1997) 9;
N.G. Deshpande, B. Dutta, and S. Oh , Phys. Rev. D 57 (1998)
5723.
[21]
A. Ali, G. Kramer, and C. D. Lü, Phys. Rev. D 58 (1998)
094009.
[22]
M. Beneke, G. Buchalla, M. Neubert, and C.T. Sachrajda,
Phys. Rev. Lett. 83 (1999) 1914;
Nucl. Phys. B 591 (2000) 313.
[23]
M. Beneke and M. Neubert, Nucl. Phys. B 675 (2003) 333.
[24]
S. Eidelman et al., Phys. Lett. B 592 (2004) 1.
[25]
S. Chen et al., CLEO Collaboration,
Phys. Rev. Lett. 87 (2001) 251807;
R. Barate et al., ALEPH Collaboration, Phys. Lett. B 429
(1998) 169;
K. Abe et al., Belle Collaboration, Phys. Lett. B 511 (2001)
151; B. Aubert et al., BaBar Collaboration, hep-ex/0207076;
M. Steinhauser, hep-ph/0406240.
Stabilizing confined quasiparticle dynamics in one-dimensional polar lattice gases
Guo-Qing Zhang
zhangptnoone@zjhu.edu.cn
Research Center for Quantum Physics, Huzhou University, Huzhou 313000, P. R. China
L. F. Quezada
lf_quezada@outlook.com
Research Center for Quantum Physics, Huzhou University, Huzhou 313000, P. R. China
Laboratorio de Ciencias de la Información Cuántica, Centro de Investigación en Computación, Instituto Politécnico Nacional, UPALM, 07700, Ciudad de México, México
Abstract
Disorder-free localization, which arises in studies of the relaxation dynamics of far-from-equilibrium quantum systems, has been widely explored. Here we investigate the interplay between the dipole-dipole interaction (DDI) and disorder for hard-core polar bosons in a one-dimensional lattice. We find that the localized dynamics eventually thermalizes in the clean gas, but can be stabilized by a small disorder proportional to the inverse of the DDI strength. From the effective dimer Hamiltonian, we show that the effective second-order hopping of quasiparticles between nearest-neighbor sites is suppressed by a disorder of strength comparable to the effective hopping amplitude. A significant gap between the two largest eigenvalues of the entanglement spectrum indicates the dynamical confinement. We also find that the disorder-related sample-to-sample fluctuations are suppressed by the DDI. Finally, we extend our study from uncorrelated random disorder to correlated quasiperiodic disorder, and from the two-dimer model to the half-filling system, obtaining similar results.
I Introduction
Recently, exotic phenomena have been unraveled in studies of the relaxation dynamics of far-from-equilibrium quantum systems. Dynamical confinement has been shown to exist in the quantum quench dynamics of lattice models with a $1/r^{3}$ dipolar tail interaction [1, 2, 3, 4, 5] and of short-range interacting spin chains with both transverse and longitudinal fields [6, 7, 8, 9]. The disorder-free localization (DFL) emerging from dynamical confinement extends the phenomenology of disorder-induced many-body localization (MBL) [10, 11, 12]. In DFL, a finite number of conservation laws lead to a fragmentation of the Hilbert space that severely constrains the dynamics and violates the eigenstate thermalization hypothesis (ETH) [13], while in disordered MBL the emergence of local integrals of motion is the ingredient preventing thermalization [14, 15]. MBL and DFL thus provide different mechanisms to violate the ETH and localize the quench dynamics of local observables.
Interest in quench dynamics has been spurred by breakthrough experimental developments in recent years. Isolated many-body quantum systems can be almost perfectly realized in cold gases and trapped ions [16, 17, 18, 19] with naturally occurring long-range interactions. These experiments open up the possibility of simulating a wide variety of quantum many-body quench dynamics. The extended Hubbard model with nearest-neighbor interactions has already been realized in polar-lattice-gas experiments [20, 21]. The paradigmatic interactions decaying as a power law $1/r^{3}$ with distance $r$ can be realized in polar gases with strong dipole-dipole interactions (DDIs) [22, 23, 24], or in Rydberg atoms with strong van der Waals interactions [25, 26, 27]. Such long-range interactions may host novel properties which are missing in their short-ranged counterparts [28].
For the quench dynamics of DFL systems, a small effective second-order hopping or a certain degree of gauge-breaking errors can create transitions between otherwise isolated sectors of the Hilbert space and finally undermine the quasiparticle confinement [29, 30]. In the long-time limit, the quasilocalized state will eventually thermalize regardless of the error strength. It is therefore interesting to investigate the stabilization of dynamical confinement in a polar-boson lattice model with DDI, and to reveal the interplay between long-range interactions and quenched disorder. For a one-dimensional (1D) system, low-density filling of the extended Hubbard model corresponds to strong correlations, which present rich physics already in the few-body problem [3], so we investigate the dynamics of two dimers (quasiparticles) in the extended boson-Hubbard model. In this paper, we adopt the half-chain entanglement, the out-of-time-ordered correlation (OTOC), the inhomogeneity parameter, and the return probability to characterize the dynamical confinement. All four quantities evidence the thermalization of DFL in the long-time limit for a clean polar gas. However, it is revealed that a very small disorder can stabilize DFL in the long-time dynamics, in a regime where disorder-induced localization is negligible. We use a dimer approximation to unveil this stabilization. The disorder strength needed to suppress the effective second-order hopping of dimers between nearest-neighbor sites is proportional to the inverse of the DDI strength. The entanglement spectra of the long-time dynamically confined states show a significant gap between the largest and the second-largest eigenvalues. We also find that the DDI can reduce the sample-to-sample fluctuations, which are usually logarithmically broad for disorder-induced localization.
Finally, we extend our results from uncorrelated random disorder to infinitely correlated quasiperiodic disorder, and to the half-filling case, where more quasiparticles are dynamically stabilized in the presence of a very weak disorder.
The rest of this paper is organized as follows. In Sec. II, we introduce the extended boson-Hubbard model with DDI and its effective approximation. Section III is devoted to revealing the dynamical confinement stabilized via the existence of random disorder by four different physical quantities. We analyze the interplay of DDI and random disorder in Sec. IV, and unveil the same result for the quasiperiodic disorder case in Sec. V. A brief discussion and conclusion are finally given in Sec. VI.
II Model
We consider a hard-core polar boson gas in a 1D optical lattice with long-range DDIs and disordered on-site potentials. The system is described by the extended boson-Hubbard Hamiltonian [4]:
$$H=-J\sum_{i}(b_{i}^{\dagger}b_{i+1}+\mathrm{H.c.})+\sum_{i}\epsilon_{i}n_{i}+V\sum_{i<j}\frac{1}{|i-j|^{3}}n_{i}n_{j},$$
(1)
where $b_{i}$ ($b_{i}^{\dagger}$) is the annihilation (creation) operator of a boson on site $i$ with the hard-core condition $(b_{i}^{\dagger})^{2}=0$, $n_{i}=b_{i}^{\dagger}b_{i}$ is the number operator, $J$ is the hopping amplitude, $\epsilon_{i}\in[-W,W]$ is the uniformly distributed random potential with disorder strength $W$, and $V$ is the DDI strength between nearest neighbors. We use exact diagonalization to investigate the long-time quench dynamics governed by Eq. (1) for a four-particle problem.
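As a minimal sketch of the exact-diagonalization approach, the dense Hamiltonian of Eq. (1) can be built in the fixed-particle-number Fock basis; the chain length, filling, and couplings below are illustrative, smaller than the $L=16$-$20$ systems studied in the paper.

```python
import itertools
import numpy as np

# Dense Hamiltonian of Eq. (1): hard-core bosons on a small open chain.
L, N = 8, 2                 # sites and particles (illustrative)
J, V, W = 1.0, 10.0, 0.5    # hopping, DDI strength, disorder strength
rng = np.random.default_rng(0)
eps = rng.uniform(-W, W, L)            # on-site disorder in [-W, W]

states = [s for s in itertools.product((0, 1), repeat=L) if sum(s) == N]
index = {s: k for k, s in enumerate(states)}
H = np.zeros((len(states), len(states)))

for k, s in enumerate(states):
    # diagonal part: disorder + dipolar tail V / |i - j|^3
    H[k, k] = sum(eps[i] * s[i] for i in range(L))
    H[k, k] += V * sum(s[i] * s[j] / (j - i) ** 3
                       for i in range(L) for j in range(i + 1, L))
    # off-diagonal part: nearest-neighbor hopping -J (b_i^dag b_{i+1} + H.c.)
    for i in range(L - 1):
        if s[i] != s[i + 1]:
            t = list(s)
            t[i], t[i + 1] = t[i + 1], t[i]
            H[k, index[tuple(t)]] = -J

E = np.linalg.eigvalsh(H)
print(len(states), E[0])   # Hilbert-space dimension, ground-state energy
```

The time evolution $|\psi(t)\rangle=e^{-iHt}|\psi_{0}\rangle$ then follows from the eigendecomposition of $H$.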
For moderate DDI strength $V$, two particles located at nearest-neighbor sites can form a dynamically bound nearest-neighbor dimer (NND), and the dynamics is dominated by the effective Hamiltonian defined in the dimer subspace [2]. For sufficiently large $V$, all particles are paired into NNDs, and we can rewrite Eq. (1) in the NND basis, which reads
$$H_{d}=-J_{d}\sum_{i}(D_{i}^{\dagger}D_{i+1}+\mathrm{H.c.})+\sum_{i}\epsilon^{\prime}_{i}N_{i}+V\sum_{i,l>0}f(l)N_{i}N_{i+l+2},$$
(2)
where $D_{i}^{\dagger}=b_{i}^{\dagger}b_{i+1}^{\dagger}$ is the creation operator of an NND at site $i$, $N_{i}=D^{\dagger}_{i}D_{i}$ is the NND number operator, $J_{d}=8J^{2}/7V$ is the effective second-order hopping amplitude, $\epsilon^{\prime}_{i}=\epsilon_{i}+\epsilon_{i+1}$, and $f(l)=2/(l+2)^{3}+1/(l+1)^{3}+1/(l+3)^{3}$ is obtained from the DDI between two dimers separated by distance $l$. A quasiparticle moves from site $i$ to $i+1$ in a second-order process. Consider the configuration $110$, where $1$ stands for an occupied site and $0$ for an empty one. The NND moves one site to the right through the sequence $110\rightarrow 101\rightarrow 011$, with amplitude $J_{d}=J^{2}/(V-V/2^{3})=8J^{2}/7V$ [1]. The initial four-boson evolution under Eq. (1) can thus be reduced to a two-dimer dynamics described by the effective Hamiltonian (2).
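The coefficients quoted above follow from elementary arithmetic, which can be verified directly (pure bookkeeping, no new physics input):

```python
# Second-order dimer hopping: the intermediate state 101 costs V - V/2^3
# relative to a bound dimer, so J_d = J^2 / (V - V/8) = 8 J^2 / (7 V).
J, V = 1.0, 40.0
J_d = J**2 / (V - V / 2**3)
assert abs(J_d - 8 * J**2 / (7 * V)) < 1e-15

# Dimer-dimer tail f(l): dimers on (i, i+1) and (i+l+2, i+l+3) have pair
# separations l+1, l+2 (twice), and l+3, reproducing f(l) in Eq. (2).
def f(l):
    return 2 / (l + 2) ** 3 + 1 / (l + 1) ** 3 + 1 / (l + 3) ** 3

print(J_d, f(1))
```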
III Dynamical confinements
We first consider the long-time dynamics of the Hamiltonian (1) with both DDI and disorder. In the clean limit ($W=0$) and for sufficiently large $V$, two dimers initially located within a critical distance stay fixed for a certain time and eventually thermalize due to the effective second-order hopping [2]. In the following, we consider the system initially prepared as two dimers separated by four sites and exactly calculate the time-dependent wave function using Eq. (1). The initial state reads $\ket{\psi_{0}}=\ket{\cdots 0110000110\cdots}$ in the Fock basis, and the time-dependent wave function is $\ket{\psi(t)}=\mathrm{exp}(-iHt)\ket{\psi_{0}}$. We adopt four different quantities, the half-chain entanglement $\mathcal{S}$, the OTOC $\mathcal{C}$, the inhomogeneity parameter $\eta$, and the return probability $\Lambda$, to consistently characterize the localization and dynamical confinement. Open boundary conditions are assumed in our calculations.
III.1 Half-chain entanglement
The half-chain entanglement is a commonly used entropy whose slowed growth and saturated value reveal the localization properties of the system [31, 32, 13, 33]. We can express quantum states in the bases of the two subsystems, $\ket{\psi(t)}=\sum_{ij}\delta_{ij}(t)\ket{\psi^{A}_{i}}\otimes\ket{\psi^{B}_{j}}$, where $\ket{\psi^{A}_{i}}$ ($\ket{\psi^{B}_{j}}$) is the subspace basis of the left (right) half chain. The reduced density matrix elements of the left subsystem are then $\rho^{A}_{ii^{\prime}}(t)=\sum_{j}\delta_{ij}(t)\delta^{*}_{i^{\prime}j}(t)$, and the half-chain entanglement per site is defined as the von Neumann entropy of the reduced density matrix
$$\mathcal{S}(t)=-\frac{1}{L_{A}}\mathrm{Tr}\rho^{A}(t)\ln(\rho^{A}(t)).$$
(3)
One can use the reduced density matrix of the right half chain to obtain the same entanglement because $\rho^{B}$ shares the same spectrum as $\rho^{A}$.
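Numerically, Eq. (3) is most easily evaluated through a Schmidt decomposition across the cut. The sketch below returns the total (not per-site) von Neumann entropy and is checked on two-qubit states; it is a generic routine, not tied to the specific model.

```python
import numpy as np

# Von Neumann entropy of the left block of a pure state |psi> on a bipartite
# system: reshape to a (dim_A x dim_B) matrix, SVD, and sum -p ln p, p = s^2.
def half_chain_entropy(psi, dim_A, dim_B):
    s = np.linalg.svd(np.reshape(psi, (dim_A, dim_B)), compute_uv=False)
    p = s**2
    p = p[p > 1e-14]          # drop numerical zeros before taking the log
    return -np.sum(p * np.log(p))

product = np.kron([1.0, 0.0], [1.0, 0.0])            # |00>
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(half_chain_entropy(product, 2, 2))   # vanishes for a product state
print(half_chain_entropy(bell, 2, 2))      # ln 2 for maximal entanglement
```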
We consider a chain of length $L=16$ with an equal bipartition $L_{A}=L_{B}=L/2$ to calculate $\mathcal{S}$. Hamiltonian (1) is exactly diagonalized to obtain the long-time many-body wave function $\ket{\psi(t)}$. In Fig. 1 (a), we present the saturated value of $\mathcal{S}$ after the quench dynamics, averaged over the time interval $t\in[10^{6},10^{8}]$, as a function of the DDI $V$ and the disorder $W$. For $W\neq 0$, 100 random disorder configurations are used. The initial state $\ket{\psi_{0}}$ is a product state with vanishing entanglement, so for well confined dynamics the entanglement should remain close to zero. We observe in Fig. 1 (a) that for $W=0$ the saturated value of $\mathcal{S}$ is significantly different from zero for any $V$. Only in the regions with moderate $V$ and nonvanishing $W$ does $\mathcal{S}$ remain close to zero. We further display the growth of the half-chain entanglement $\mathcal{S}$ for $L=20$ in Fig. 2 for several typical values of $V$ and $W$. The smallest value $V=0.01$ rather than $0$ is chosen because a noninteracting system cannot produce any entanglement from the product state $\ket{\psi_{0}}$. In the clean limit $W=0$ in Fig. 2 (a), $\mathcal{S}$ tends to a finite value for all values of the DDI $V$. For $V=40$, $\mathcal{S}$ remains close to zero for a certain time because the dimer-dimer cluster is dynamically confined. When the system begins to thermalize due to the center-of-mass motion of the dimer-dimer cluster at $t\approx 10^{2}$, the entanglement grows steeply. The dimers begin to move at $t\sim 1/J_{d}\sim V$, so the rapid growth of the entanglement starts earlier for smaller values of $V$. For a weak disorder $W=0.1$ in Fig. 2 (b), the saturated value of the entanglement for $V=40$ decreases significantly and the system shows well confined dynamics, where a fixed distance between the dimers is rigidly maintained under the interplay of the dipolar interaction and the weak disorder.
The saturated values for the other values of $V$ are similar to the clean-limit case, and the timescale for the rapid growth is unchanged. In these cases, the particles can move as freely as in the clean system. When the disorder is increased to $W=5$ in Fig. 2 (c), disorder-induced localization comes into play for all values of $V$. In particular, for $V=40$, $\mathcal{S}\approx 0$ over the entire time interval, which reveals perfect dynamical confinement.
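To make the entanglement diagnostic concrete, the following is a minimal NumPy sketch of computing $\mathcal{S}$ from a state vector via singular value decomposition. It assumes a plain tensor-product basis for the bipartition; for the number-conserving Fock basis of Hamiltonian (1) the bookkeeping of the cut is more involved, so this is an illustration of the principle, not the paper's production code.

```python
import numpy as np

def half_chain_entropy(psi, dim_a, dim_b):
    """Von Neumann entanglement entropy of a bipartition A|B.

    The singular values of the reshaped state vector are the Schmidt
    coefficients; rho^A and rho^B share this spectrum, so either half
    of the chain yields the same entropy.
    """
    m = psi.reshape(dim_a, dim_b)            # coefficient matrix
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients
    p = s**2                                 # eigenvalues of rho^A
    p = p[p > 1e-15]                         # drop numerical zeros
    return -np.sum(p * np.log(p))

# Maximally entangled two-qubit state: S = ln 2
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(half_chain_entropy(bell, 2, 2))   # → 0.6931...
```

A product state such as `[1, 0, 0, 0]` gives $\mathcal{S}=0$, matching the vanishing entanglement of $\ket{\psi_{0}}$ before the quench.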
III.2 Out-of-time-ordered correlation
OTOC characterizes the delocalization or scrambling of quantum information: it describes the process of an initially localized state spreading over all degrees of freedom in a quantum many-body system [34, 35, 36, 37]. The OTOC has been related to entanglement and can serve as an experimentally accessible entanglement witness [38, 39]. It can also be used as an order parameter to characterize the localization-delocalization transition in disordered many-body systems [35, 40, 41]. The OTOC arises from the squared commutator of two local operators $\hat{V}$ and $\hat{W}$ that commute at equal times:
$$\begin{split}\mathcal{C}(t)=&\frac{1}{2}\braket{[\hat{V}(t),\hat{W}]^{\dagger}[\hat{V}(t),\hat{W}]}\\
=&\frac{1}{2}[\braket{\hat{W}^{\dagger}\hat{V}^{\dagger}(t)\hat{V}(t)\hat{W}}+\braket{\hat{V}^{\dagger}(t)\hat{W}^{\dagger}\hat{W}\hat{V}(t)}\\
&-\braket{\hat{V}^{\dagger}(t)\hat{W}^{\dagger}\hat{V}(t)\hat{W}}-\braket{\hat{W}^{\dagger}\hat{V}^{\dagger}(t)\hat{W}\hat{V}(t)}].\end{split}$$
(4)
For Hermitian and unitary local operators, the OTOC can be simplified as [42]
$$\mathcal{C}(t)=1-\mathrm{Re}\braket{\hat{V}^{\dagger}(t)\hat{W}^{\dagger}\hat{V}(t)\hat{W}}.$$
(5)
For confined dynamics, information spreading is suppressed and the two local operators commute even at different times, so the OTOC vanishes.
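The simplified form of Eq. (5) is easy to evaluate numerically by exact diagonalization. The sketch below uses a hypothetical two-site spin toy model (the Hamiltonian and operators are illustrative choices, not the paper's model) with Hermitian, unitary Pauli operators, and verifies that $\mathcal{C}(0)=0$ because $\hat V$ and $\hat W$ commute at equal times.

```python
import numpy as np

# Toy two-site spin model (illustrative only)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(sx, sx) + 0.5 * (np.kron(sz, I2) + np.kron(I2, sz))
V = np.kron(sz, I2)                 # Hermitian and unitary local operator
W = np.kron(I2, sz)
psi = np.array([1, 0, 0, 0], dtype=complex)

evals, evecs = np.linalg.eigh(H)    # H is Hermitian

def otoc(t):
    """C(t) = 1 - Re<V(t) W V(t) W> for Hermitian, unitary V, W [Eq. (5)]."""
    U = (evecs * np.exp(-1j * evals * t)) @ evecs.conj().T   # e^{-iHt}
    Vt = U.conj().T @ V @ U                                  # Heisenberg V(t)
    return 1.0 - np.real(psi.conj() @ (Vt @ W @ Vt @ W @ psi))

print(otoc(0.0))   # → 0.0: the operators commute at equal times
```

At later times $\mathcal{C}(t)$ grows as information scrambles, and it stays bounded in $[0,2]$ since the expectation value of a product of unitaries has modulus at most one.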
We choose the two local operators as NND number operators of the initial state $\ket{\psi_{0}}$, $\hat{V}=n_{i}n_{i+1}$ and $\hat{W}=n_{i+l+1}n_{i+l+2}$, in the following study. In Fig. 1 (b), we plot the saturated value of the OTOC $\mathcal{C}$ as a function of DDI $V$ and disorder $W$. Similar to the half-chain entanglement, the OTOC remains close to zero in the regions where $V$ is moderate and $W$ is nonzero. We also show the long-time dynamics of $\mathcal{C}$ for $L=20$ in Fig. 3 with all parameters the same as in Fig. 2. The clean limit in Fig. 3 (a) indicates that the system finally thermalizes for long enough times $t$. When $V=40$ (green solid lines), the dynamics are well confined for a very small disorder $W=0.1$ [Fig. 3(b)] and perfectly confined for a large disorder $W=5$ [Fig. 3(c)]. The localization induced by disorder appears for small $V$ when $W=5$, but is not evident for the disorder strength $W=0.1$. The OTOC characterizes the propagation of localized information and can serve as a measurable entanglement witness [38, 39]; its behavior can thus be interpreted in the same way as the entanglement.
III.3 Inhomogeneity parameter
The inhomogeneity parameter, similar to the imbalance for half-filling systems, characterizes the localization of particles [2, 4]. The inhomogeneity parameter $\eta$ is defined such that $\eta=1$ for the initial state and $\eta=0$ when the particles are uniformly distributed over the whole system:
$$\eta(t)=\frac{N_{0}(t)L_{0}^{-1}-N_{b}L^{-1}}{1-N_{b}L^{-1}},$$
(6)
where $N_{b}=4$ is the total boson number, $L_{0}=4$ is the length of the occupied sites in the initial state, and $N_{0}(t)$ is the total particle number on the initially occupied sites after evolution time $t$.
We plot the saturated value of the inhomogeneity parameter $\eta$, averaged over the long-time interval $t\in[10^{6},10^{8}]$, in the $V$-$W$ plane in Fig. 1 (c). When both $V$ and $W$ are small, $\eta$ is close to zero, which means the system becomes homogeneous and thermalizes after long-time dynamics. For moderate $V$ but vanishing $W$, the system is delocalized due to the weak second-order coupling between Fock states. When $W>0$, this region shows localization with inhomogeneity parameter $\eta\approx 1$. In Fig. 4, the decrease of $\eta$ is presented for $L=20$ systems with several typical values of the interaction and disorder strength. For the clean gas in Fig. 4 (a), the system stays inhomogeneous for longer periods of time with larger DDI $V$ before delocalization. $\eta$ is a direct indicator of particle localization, and the localization time of the rigidly formed dimer clusters can be read off directly from the plateau of $\eta$ before its steep decrease. For a very small disorder $W=0.1$ in Fig. 4 (b), the green solid curve indicates inhomogeneity of the long-time dynamics and a long-lived memory of the initial condition for large $V=40$. The interplay of the dipolar interaction and the weak disorder prevents the center-of-mass motion of the dimer cluster, and no significant decrease occurs. The plateaus of the other three curves still dwindle quickly, similar to the clean case, where the disorder-induced localization is not evident and the disorder cannot prevent the particles from moving. For $W=5$ in Fig. 4 (c), disorder-induced localization comes into play and the saturated values of $\eta$ for $V=0.01,5,10$ increase significantly compared with those in Fig. 4 (b). The interplay of DDI $V$ and disorder $W$ leads to inhomogeneity and thus makes the dynamical confinement perfect.
III.4 Return probability
Return probability determines the global property of time evolution which is widely used in the investigation of quench dynamics [43, 44, 45, 46, 47]. Return probability is defined as the modulus squared of the overlap between the initial state $\ket{\psi_{0}}$ and the evolution wavefunction $\ket{\psi(t)}$ at time $t$:
$$\Lambda(t)=|\braket{\psi_{0}}{\psi(t)}|^{2}=|\bra{\psi_{0}}e^{-iHt}\ket{\psi_{0}}|^{2}.$$
(7)
In the quench dynamics of thermalized systems, when the initial state is not close to any eigenstate of the Hamiltonian, this quantity is expected to tend to zero quickly in an exponential form $e^{-Lf(t)}$ with $L$ the system’s size. $\Lambda$ can be considered as the probability to find the evolved system staying in the initial state after time $t$, and can be used to characterize localization and confined dynamics.
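Given a full spectrum, $\Lambda(t)$ of Eq. (7) can be evaluated for arbitrary times from the overlaps $\braket{\psi_{i}}{\psi_{0}}$. A minimal sketch, verified on a two-level toy Hamiltonian (an illustrative assumption, not the paper's model) where $\Lambda(t)=\cos^{2}t$:

```python
import numpy as np

def return_probability(H, psi0, times):
    """Lambda(t) = |<psi0| e^{-iHt} |psi0>|^2 via full diagonalization."""
    evals, evecs = np.linalg.eigh(H)
    c = evecs.conj().T @ psi0                  # overlaps <psi_i|psi0>
    amps = (np.abs(c)**2)[None, :] * np.exp(-1j * np.outer(times, evals))
    return np.abs(amps.sum(axis=1))**2

# Two-level check: Rabi oscillation gives Lambda(t) = cos^2(t)
H = np.array([[0.0, 1.0], [1.0, 0.0]])
psi0 = np.array([1.0, 0.0])
t = np.array([0.0, np.pi / 2])
print(return_probability(H, psi0, t))   # → [1.0, ~0.0]
```

Because only the moduli $|\braket{\psi_{i}}{\psi_{0}}|^{2}$ and phases $e^{-iE_{i}t}$ enter, arbitrarily late times cost no more than early ones, which is what makes the long-time averages in Fig. 1 (d) feasible.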
We show the saturated value of the return probability $\Lambda$, averaged over the long-time interval, as a function of $V$ and $W$ in Fig. 1 (d). Similar to the three quantities studied above, $\Lambda$ indicates that there is a small probability of finding the initial state after a long-time evolution when both $V$ and $W$ are small, while the dynamics are localized for moderate $V$ and nonzero $W$. The long-time dynamics of the return probability $\Lambda$ is plotted in Fig. 5 for $L=20$ sites with the other parameters the same as in Fig. 2. The green solid curves in Figs. 5 (a) and (b) both show confined dynamics for a certain time, with $\Lambda$ close to one when $t<10^{2}$, but for $W=0$ the return probability suffers a steep reduction when the center-of-mass motion comes into play at $t\sim 1/J_{D}$. The slow drop of $\Lambda$ before $t\sim 1/J_{D}$ stems from the fact that the dimers are not bound rigidly to their original places but have a small probability of spreading to their nearest neighbors. The localization induced by disorder is negligible for $W=0.1$ in Fig. 5 (b) but noticeable in Fig. 5 (c) for $V=0.01,5,10$. Meanwhile, we can also observe the disorder-stabilized high return probability for $V=40$ in the long-time limit.
IV Interplay between dipole-dipole interaction and disorder
In this section, we investigate in more detail the interplay between DDI and disorder in the extended boson-Hubbard model. In the previous section, we revealed the long-time stabilized dynamical confinement with a very small disorder. This phenomenon can be interpreted through the approximate dimer Hamiltonian in which all particles are paired into NNDs. We also discuss the entanglement spectra of confined and unconfined wave functions, whose properties are distinct. The sample-to-sample fluctuations for various disorders and interactions are revealed at the end of this section.
IV.1 Dynamically bound dimers under weak disorder
For moderate to strong DDI, the dynamics of the initial state $\ket{\psi_{0}}$ under the extended boson-Hubbard model (1) can be approximated by the dimer extended boson-Hubbard Hamiltonian (2), in which two neighboring particles pair into an NND and the quasiparticle NND can only move to a neighboring site via a second-order hopping process with amplitude $J_{d}=8J^{2}/(7V)$. In the clean limit, due to this small second-order hopping, the confined NNDs will eventually begin to spread over the whole system and thermalize after long-time dynamics. The initial state $\ket{\psi_{0}}$ is effectively two NNDs separated by five sites in the dimer basis. We exactly evolve this initial state under Hamiltonian (2) and investigate the saturated value of the inhomogeneity parameter $\eta$ as a function of the disorder $W$. For small disorder varying from $0$ to $0.1$, we display the saturated $\eta$ for $V$ varying from $30$ to $60$ in Fig. 6 (a). As we can see, $\eta$ increases rapidly and saturates for a very small disorder $W$, comparable to the second-order hopping amplitude $J_{d}\sim 1/V$. Due to disorder fluctuations, the $\eta$ curves are not smooth enough for numerical derivatives with respect to $W$. Thus, we define the turning disorder strength $W_{c}$ as the value where $\eta(W_{c})=0.9\eta(W\rightarrow\infty)$, and depict the relation between $W_{c}$ and $V$ in Fig. 6 (b). It is clear that $W_{c}$ scales linearly as a function of $1/V$, similarly to $J_{d}$. In this respect, the well confined dynamics induced by disorder are related to the small second-order hopping: a very small disorder is enough to localize quasiparticle NNDs with a small hopping amplitude $J_{d}$.
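The $0.9\,\eta(W\rightarrow\infty)$ criterion for $W_{c}$ is straightforward to implement. The sketch below applies it to a synthetic saturation curve $\eta(W)=W/(W+J_{d})$; this functional form is an illustrative assumption, not the paper's data, chosen only to show that a saturation scale set by $J_{d}=8J^{2}/(7V)$ yields $W_{c}\propto 1/V$.

```python
import numpy as np

def turning_disorder(W_grid, eta, eta_inf, frac=0.9):
    """W_c: smallest W with eta(W) >= frac * eta(W -> infinity)."""
    return W_grid[np.argmax(eta >= frac * eta_inf)]

J = 1.0
W = np.linspace(0.0, 1.0, 100001)
wc = {}
for V in (30.0, 60.0):
    J_d = 8 * J**2 / (7 * V)              # second-order dimer hopping
    eta = W / (W + J_d)                   # synthetic curve, eta_inf = 1
    wc[V] = turning_disorder(W, eta, 1.0)

print(wc[30.0] / wc[60.0])   # → 2.0: W_c doubles when V halves
```

For this curve $W_{c}=9J_{d}$ exactly, so doubling $V$ halves $W_{c}$, consistent with the linear-in-$1/V$ scaling reported in Fig. 6 (b).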
IV.2 Entanglement spectrum
The perfect dynamical confinement induced by the interplay between disorder and DDI can also be revealed from the viewpoint of the entanglement spectrum. In this section, we analyze the steady state $\ket{\psi(\infty)}$ of the system with $V=40$ after long-time evolution. The bipartite decomposition of the steady state reads $\ket{\psi(\infty)}=\sum_{ij}\delta_{ij}(\infty)\ket{\psi^{A}_{i}}\otimes\ket{\psi^{B}_{j}}$, from which we can define the reduced density matrix $\rho^{A}_{ii^{\prime}}(\infty)=\sum_{j}\delta_{ij}(\infty)\delta^{*}_{i^{\prime}j}(\infty)$. It can be diagonalized as $\rho^{A}=\sum_{k}\lambda_{k}\ket{\psi_{k}^{A}}\bra{\psi_{k}^{A}}$ to obtain the entanglement spectrum $\lambda_{k}$. In the numerical calculations, we use $t=10^{6}$ to approximate infinite time. For $W=0$, the steady state is not dynamically confined and the first few values in the entanglement spectrum are of about the same order of magnitude. We depict the ten largest $\lambda_{k}$ in Fig. 7 (e) with the line labeled by red circles, and the corresponding density distributions of the four states with the largest $\lambda_{k}$ are presented in Fig. 7 (a-d), respectively. For a large DDI, although not confined due to the effective second-order coupling and the center-of-mass motion, the state remains in the paired NND basis, where particles prefer to occupy neighboring sites. For a small disorder $W=0.1$, in contrast, the entanglement spectrum begins to separate [curve labeled by black squares in Fig. 7 (e)], and the density distribution of the largest eigenstate is the same as in Fig. 7 (a), which means the steady state is close to the initial state. For $W=1$ [line labeled by blue diamonds in Fig. 7 (e)], the gap between the first and second largest $\lambda_{k}$ is even larger than that for $W=0.1$ and the dynamics are perfectly confined. To generalize the spectrum gap, we show the entanglement spectra of $10^{3}$ different disorder realizations for $W=1$ in Fig. 7 (f).
There is a distinct gap between the largest $\lambda_{k}$ and the bulk spectrum, where only a few points can be observed.
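The entanglement spectrum itself is obtained from the same singular value decomposition as the entropy: the $\lambda_{k}$ are the squared Schmidt coefficients. A minimal sketch (the state below is a hypothetical weakly entangled example, not the paper's steady state) showing the large gap between $\lambda_{1}$ and $\lambda_{2}$ that signals confinement:

```python
import numpy as np

def entanglement_spectrum(psi, dim_a, dim_b, k=10):
    """Largest eigenvalues lambda_k of rho^A (squared Schmidt
    coefficients of the bipartition), sorted in descending order."""
    s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    return np.sort(s**2)[::-1][:k]

# Weakly entangled state: one dominant Schmidt weight, large gap
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(0.99), np.sqrt(0.01)
lam = entanglement_spectrum(psi, 2, 2)
print(lam)              # → [0.99, 0.01]
print(lam[0] / lam[1])  # gap ratio ≫ 1 signals confinement
```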
IV.3 Sample-to-sample fluctuations
In the study of disorder-induced localization, the average value of a physical quantity may remain constant while its distribution is largely dominated by sample-to-sample fluctuations [48, 49, 50, 51, 52]. In the interplay of disorder and DDI, the two types of localization mechanism may impact the fluctuations differently. We therefore investigate the sample-to-sample fluctuations of the inhomogeneity parameter $\eta$. The effective exponent of $\eta$ in the long-time limit is given by
$$\beta_{f}=\frac{\partial\log_{10}\eta(t)}{\partial\log_{10}t}|_{t\rightarrow\infty},$$
(8)
which characterizes the dynamics of $\eta$ for an individual disorder configuration. In the numerical calculations, $t=10^{6}$ is used to approach the long-time limit. We present the probability distribution $P(\beta_{f})$ in Fig. 8 (a) for $W=5$ and various values of $V$, and in Fig. 8 (b) for $V=40$ and several values of $W$. The line labeled by blue crosses in Fig. 8 (a) clearly illustrates that for strong disorder $W=5$ and small DDI $V=5$, the distribution is broad for the disorder-dominated localization. The distribution $P(\beta_{f})$ shrinks with increasing $V$, and the DDI-dominated confinement has much smaller sample-to-sample fluctuations. In Fig. 8 (b), we show the probability distribution $P(\beta_{f})$ for the $V=40$-dominated dynamical confinement with disorders from small ($W=0.1$) to large ($W=5$). The width of the distribution does not increase significantly, which reveals the stability of the dynamical confinement in the presence of disorder. The sample-to-sample fluctuations between different disorder configurations for small $V$ are several times larger than for $V=40$ when $W=5$, which explains the rather large standard errors of the mean values for the black, blue, and red curves shown in Figs. 2 (c), 3 (c), 4 (c), and 5 (c).
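On a log-spaced time grid, the effective exponent of Eq. (8) reduces to a finite-difference slope in log-log coordinates. A minimal sketch, checked on a power law ($\beta_{f}=-0.5$) and on a constant $\eta$ (perfect confinement, $\beta_{f}=0$):

```python
import numpy as np

def effective_exponent(t, eta):
    """beta_f = d log10(eta) / d log10(t) [Eq. (8)], evaluated at the
    last point of the time grid by a backward difference."""
    lt, le = np.log10(t), np.log10(eta)
    return (le[-1] - le[-2]) / (lt[-1] - lt[-2])

t = np.logspace(4, 6, 21)
print(effective_exponent(t, t**-0.5))               # → -0.5
print(effective_exponent(t, np.full_like(t, 0.8)))  # → 0.0
```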
V Quasiperiodic disorder and half-filling cases
This section is devoted to extrapolating our results to systems with correlated disorders and more particles. We first show that the dynamically stabilized dimers can still be held with the interplay between quasiperiodic disorder effect and dipolar interaction. Then, we extend our two-dimer systems to the half-filling ones and reveal the stabilized quasiparticles in the presence of a very weak disorder.
The quenched disorder uniformly distributed in $[-W,W]$ is an uncorrelated disorder; in this section we further consider correlated disorders. The quasiperiodic disorder is a long-range correlated disorder that can also induce localization of the dynamics. In Hamiltonian (1), the quasiperiodic on-site potential $\epsilon_{i}=W\cos(2\pi\alpha i+\varphi)$ is used instead of the random $\epsilon_{i}$. Here $W$ is the quasiperiodic disorder strength, $\alpha=(\sqrt{5}-1)/2$ is the inverse golden ratio, and $\varphi$ is an offset phase chosen randomly in the range $[0,2\pi)$ for sampling when the system size is finite. Using the same parameters as in Figs. 2-5, we plot the dynamics of the half-chain entanglement, the OTOC, the inhomogeneity parameter, and the return probability for the typical case $W=0.1$ in Fig. 9. The green solid lines indicate that the small quasiperiodic disorder can also stabilize the confined quasiparticle dynamics for large DDI $V=40$, while the other curves reveal that the disorder-induced localization is negligible if $V$ is not strong enough. This behavior is similar to that for the random disorder, which extends our result from the uncorrelated to the correlated disorder situation.
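Generating the quasiperiodic potential is a one-liner; the sketch below samples a random offset phase as described above and checks that the potential is bounded by $\pm W$.

```python
import numpy as np

def quasiperiodic_potential(L, W, phi=0.0):
    """On-site energies eps_i = W cos(2 pi alpha i + phi) with
    alpha = (sqrt(5) - 1) / 2, the inverse golden ratio."""
    alpha = (np.sqrt(5.0) - 1.0) / 2.0
    i = np.arange(L)
    return W * np.cos(2.0 * np.pi * alpha * i + phi)

rng = np.random.default_rng(0)
eps = quasiperiodic_potential(20, 0.1, phi=rng.uniform(0, 2 * np.pi))
print(eps.min() >= -0.1 and eps.max() <= 0.1)   # bounded by ±W
```

Because $\alpha$ is irrational, the potential never repeats on any finite lattice, which is what makes it long-range correlated rather than random.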
We then consider the half-filling case with system size $L=16$ and an initial state containing $N_{b}=8$ bosons. The initial state contains four equally separated dimers and reads $\ket{\psi^{h}_{0}}=\ket{1100110011001100}$ in the Fock basis. In this initial state, the distance between dimers is smaller than in the previous configuration, so we use a smaller dipolar interaction strength $V=20$. Despite being smaller, this interaction strength is sufficient to observe the confined dynamics. In Fig. 10, we plot the long-time evolution of the density distribution $\braket{n_{j}}$ and the half-chain entanglement $\mathcal{S}$. For the clean system in Fig. 10 (a), the density distribution reveals confined dimers for a long period, up to $t\approx 10^{7}$, before the quasiparticles begin to move. In Fig. 10 (b), we can see that a very small disorder $W=0.01$ can significantly stabilize the confined dimers, and the quasiparticles remain localized at their initial positions for the entire period studied. Such a small disorder is not sufficient to induce localization if no dimer is present. Figure 10 (c) shows the growth of the half-chain entanglement for $W=0$ and $W=0.01$. For the clean system, the entanglement undergoes a steep increase corresponding to the motion of the quasiparticles at $t\approx 10^{7}$. For the stabilized confinement case ($W=0.01$), the entanglement saturates to a value smaller than that of the system with $W=0$. Our results suggest that the stabilizing mechanism induced by the interplay between disorder and dipolar interaction holds for systems with more particles.
VI Discussion and conclusion
Before concluding, we emphasize the importance of investigating the interplay between dipolar interaction and disorder. Long-range interactions naturally occur in recent experimental quantum many-body systems such as cold gases, trapped ions, and engineered Rydberg atoms [16, 17, 18, 19, 53].
On-site potentials with random or quasiperiodic disorder can also be implemented thanks to recent developments in MBL experiments [54, 55]. Thus, the intriguing interplay between interaction- and disorder-induced quasiparticle localization is within reach of near-future quench-dynamics experiments. On the other hand, our findings provide a feasible technique for preserving initially prepared quantum states in such long-range interacting systems for as long as possible. By introducing disorder on purpose, these systems can achieve longer coherence times and realize more quantum tasks.
To summarize, we have explored the confined quasiparticle dynamics of a polar boson gas in a 1D lattice with both DDI and disorder. Several physical characteristics, namely the half-chain entanglement, the OTOC, the inhomogeneity parameter, and the return probability, have been numerically calculated. In the clean limit, all four quantities evinced the thermalization of the system in the long-time limit. It was found that a small disorder, proportional to the inverse of the DDI strength and comparable to the effective second-order hopping of dimers between nearest-neighbor sites, can stabilize the dynamical confinement. The entanglement spectra of perfectly confined states exhibit a significant gap between the first two largest eigenvalues. We have also unveiled that the DDI can reduce sample-to-sample fluctuations. Finally, correlated quasiperiodic disorder and half-filling systems have been considered, and similar findings have been established.
Acknowledgements.
This work was supported by the NSFC (Grant No. 12104166), and the Guangdong Basic and Applied Basic Research Foundation (Grants No. 2020A1515110290).
Appendix A Numerical method
There are many approximate methods to calculate the time-dependent wave function $\ket{\psi(t)}=\mathrm{exp}(-iHt)\ket{\psi_{0}}$, such as the Krylov-subspace approach, the Chebyshev polynomial approach, and tensor-network-based methods like time-evolving block decimation (TEBD) and the time-dependent variational principle (TDVP). Typically, these methods can evaluate time-dependent wave functions flexibly for moderately long times, from $t\approx 10^{3}$ to $t\approx 10^{4}$. However, we need to access physical properties of our systems at evolution times up to $t=10^{12}$, for which full exact diagonalization is the only appropriate method. The Hamiltonian matrix $H$ is represented in the Fock basis with U(1) symmetry, where the boson particle number is conserved. To calculate the time-dependent wave function, we first fully diagonalize $H=\sum_{i}E_{i}\ket{\psi_{i}}\bra{\psi_{i}}$, where $E_{i}$ and $\ket{\psi_{i}}$ are the $i$-th eigenenergy and eigenfunction, respectively. The time-dependent wave function then reads $\ket{\psi(t)}=\sum_{i}e^{-iE_{i}t}\braket{\psi_{i}}{\psi_{0}}\ket{\psi_{i}}$. For any time $t$ and disorder realization $d$, we calculate $\ket{\psi(t)}_{d}$ and its physical quantity $\braket{O(t)}_{d}$. All quantities are averaged over $100$ disorder realizations, with the mean values plotted as lines and the standard errors of the mean plotted as error bars.
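The spectral-decomposition evolution described above can be sketched in a few lines of NumPy (a toy three-site hopping matrix stands in for the full Fock-basis Hamiltonian). One diagonalization up front makes any later time, including $t=10^{12}$, cost only a matrix-vector product, and the unit moduli of the phase factors keep the norm exact regardless of $t$.

```python
import numpy as np

def evolve(H, psi0, t):
    """|psi(t)> = sum_i e^{-i E_i t} <psi_i|psi0> |psi_i>."""
    E, P = np.linalg.eigh(H)      # eigenenergies / eigenvectors of H
    c = P.conj().T @ psi0         # overlaps <psi_i|psi0>
    return P @ (np.exp(-1j * E * t) * c)

# Toy three-site hopping Hamiltonian (illustrative only)
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)

psi_t = evolve(H, psi0, 1e12)
print(np.abs(np.vdot(psi_t, psi_t)))   # → 1.0: norm conserved at t = 10^12
```

By contrast, a step-by-step propagator accumulates error over $\sim t/\Delta t$ steps, which is why the paper resorts to full diagonalization for such extreme evolution times.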
References
Barbiero et al. [2015]
L. Barbiero, C. Menotti,
A. Recati, and L. Santos, Out-of-equilibrium states and quasi-many-body localization
in polar lattice gases, Phys. Rev. B 92, 180406 (2015).
Li et al. [2020]
W. Li, A. Dhar, X. Deng, K. Kasamatsu, L. Barbiero, and L. Santos, Disorderless quasi-localization of polar gases in one-dimensional
lattices, Phys. Rev. Lett. 124, 010404 (2020).
Morera et al. [2021]
I. Morera, G. E. Astrakharchik, A. Polls, and B. Juliá-Díaz, Universal
dimerized quantum droplets in a one-dimensional lattice, Phys. Rev. Lett. 126, 023001 (2021).
Li et al. [2021]
W.-H. Li, X. Deng, and L. Santos, Hilbert space shattering and disorder-free
localization in polar lattice gases, Phys. Rev. Lett. 127, 260601 (2021).
Korbmacher et al. [2023]
H. Korbmacher, P. Sierant,
W. Li, X. Deng, J. Zakrzewski, and L. Santos, Lattice
control of nonergodicity in a polar lattice gas, Phys. Rev. A 107, 013301 (2023).
Kormos et al. [2017]
M. Kormos, M. Collura,
G. Takács, and P. Calabrese, Real-time confinement following a quantum quench
to a non-integrable model, Nat. Phys. 13, 246 (2017).
James et al. [2019]
A. J. A. James, R. M. Konik, and N. J. Robinson, Nonthermal states
arising from confinement in one and two dimensions, Phys. Rev. Lett. 122, 130603 (2019).
Liu et al. [2019]
F. Liu, R. Lundgren,
P. Titum, G. Pagano, J. Zhang, C. Monroe, and A. V. Gorshkov, Confined
quasiparticle dynamics in long-range interacting quantum spin chains, Phys. Rev. Lett. 122, 150601 (2019).
Mazza et al. [2019]
P. P. Mazza, G. Perfetto,
A. Lerose, M. Collura, and A. Gambassi, Suppression of transport in nondisordered quantum spin chains due to
confined excitations, Phys. Rev. B 99, 180302 (2019).
Altshuler et al. [1980]
B. L. Altshuler, A. G. Aronov, and P. A. Lee, Interaction effects in
disordered fermi systems in two dimensions, Phys. Rev. Lett. 44, 1288 (1980).
Shepelyansky [1994]
D. L. Shepelyansky, Coherent propagation
of two interacting particles in a random potential, Phys. Rev. Lett. 73, 2607 (1994).
Pal and Huse [2010]
A. Pal and D. A. Huse, Many-body localization phase
transition, Phys. Rev. B 82, 174411 (2010).
D’Alessio et al. [2016]
L. D’Alessio, Y. Kafri,
A. Polkovnikov, and M. Rigol, From quantum chaos and eigenstate thermalization
to statistical mechanics and thermodynamics, Adv. Phys. 65, 239 (2016).
Rigol et al. [2006]
M. Rigol, A. Muramatsu, and M. Olshanii, Hard-core bosons on optical
superlattices: Dynamics and relaxation in the superfluid and insulating
regimes, Phys. Rev. A 74, 053616 (2006).
Rigol et al. [2007]
M. Rigol, V. Dunjko,
V. Yurovsky, and M. Olshanii, Relaxation in a completely integrable many-body
quantum system: An ab initio study of the dynamics of the highly excited
states of 1d lattice hard-core bosons, Phys. Rev. Lett. 98, 050405 (2007).
Kinoshita et al. [2006]
T. Kinoshita, T. Wenger, and D. S. Weiss, A quantum Newton’s cradle, Nature 440, 900 (2006).
Gring et al. [2012]
M. Gring, M. Kuhnert,
T. Langen, T. Kitagawa, B. Rauer, M. Schreitl, I. Mazets, D. A. Smith, E. Demler, and J. Schmiedmayer, Relaxation and
prethermalization in an isolated quantum system, Science 337, 1318 (2012).
Richerme et al. [2014]
P. Richerme, Z.-X. Gong,
A. Lee, C. Senko, J. Smith, M. Foss-Feig, S. Michalakis, A. V. Gorshkov, and C. Monroe, Non-local propagation of correlations in quantum systems with long-range
interactions, Nature 511, 198 (2014).
Gärttner et al. [2017]
M. Gärttner, J. G. Bohnet, A. Safavi-Naini, M. L. Wall, J. J. Bollinger, and A. M. Rey, Measuring out-of-time-order
correlations and multiple quantum spectra in a trapped-ion quantum magnet, Nat. Phys. 13, 781 (2017).
Yan et al. [2013]
B. Yan, S. A. Moses,
B. Gadway, J. P. Covey, K. R. Hazzard, A. M. Rey, D. S. Jin, and J. Ye, Observation
of dipolar spin-exchange interactions with lattice-confined polar
molecules, Nature 501, 521 (2013).
de Paz et al. [2013]
A. de Paz, A. Sharma,
A. Chotia, E. Maréchal, J. H. Huckans, P. Pedri, L. Santos, O. Gorceix, L. Vernac, and B. Laburthe-Tolra, Nonequilibrium quantum magnetism in a dipolar lattice gas, Phys. Rev. Lett. 111, 185305 (2013).
Ni et al. [2008]
K.-K. Ni, S. Ospelkaus,
M. De Miranda, A. Pe’Er, B. Neyenhuis, J. Zirbel, S. Kotochigova, P. Julienne, D. Jin, and J. Ye, A high
phase-space-density gas of polar molecules, Science 322, 231 (2008).
Ni et al. [2010]
K.-K. Ni, S. Ospelkaus,
D. Wang, G. Quéméner, B. Neyenhuis, M. De Miranda, J. Bohn, J. Ye, and D. Jin, Dipolar
collisions of polar molecules in the quantum regime, Nature 464, 1324 (2010).
Chotia et al. [2012]
A. Chotia, B. Neyenhuis,
S. A. Moses, B. Yan, J. P. Covey, M. Foss-Feig, A. M. Rey, D. S. Jin, and J. Ye, Long-lived
dipolar molecules and feshbach molecules in a 3d optical lattice, Phys. Rev. Lett. 108, 080405 (2012).
Saffman et al. [2010]
M. Saffman, T. G. Walker, and K. Mølmer, Quantum information
with rydberg atoms, Rev. Mod. Phys. 82, 2313 (2010).
Schauß et al. [2012]
P. Schauß, M. Cheneau,
M. Endres, T. Fukuhara, S. Hild, A. Omran, T. Pohl, C. Gross, S. Kuhr, and I. Bloch, Observation of spatially ordered structures in a
two-dimensional rydberg gas, Nature 491, 87 (2012).
Béguin et al. [2013]
L. Béguin, A. Vernier,
R. Chicireanu, T. Lahaye, and A. Browaeys, Direct measurement of the van der waals interaction between two
rydberg atoms, Phys. Rev. Lett. 110, 263201 (2013).
Maghrebi et al. [2017]
M. F. Maghrebi, Z.-X. Gong, and A. V. Gorshkov, Continuous symmetry breaking in 1d
long-range interacting quantum systems, Phys. Rev. Lett. 119, 023001 (2017).
Halimeh et al. [2022]
J. C. Halimeh, L. Homeier,
H. Zhao, A. Bohrdt, F. Grusdt, P. Hauke, and J. Knolle, Enhancing disorder-free localization through dynamically emergent local
symmetries, PRX Quantum 3, 020345 (2022).
Halimeh et al. [2021]
J. C. Halimeh, H. Zhao,
P. Hauke, and J. Knolle, Stabilizing disorder-free localization, (2021), arXiv:2111.02427 .
Bardarson et al. [2012]
J. H. Bardarson, F. Pollmann, and J. E. Moore, Unbounded growth of
entanglement in models of many-body localization, Phys. Rev. Lett. 109, 017202 (2012).
Chanda et al. [2020]
T. Chanda, R. Yao, and J. Zakrzewski, Coexistence of localized and extended phases:
Many-body localization in a harmonic trap, Phys. Rev. Res. 2, 032039 (2020).
Zhang et al. [2020]
G.-Q. Zhang, D.-W. Zhang,
Z. Li, Z. D. Wang, and S.-L. Zhu, Statistically related many-body localization in the one-dimensional
anyon hubbard model, Phys. Rev. B 102, 054204 (2020).
Hashimoto et al. [2017]
K. Hashimoto, K. Murata, and R. Yoshii, Out-of-time-order correlators in
quantum mechanics, J. High Energy Phys. 2017 (10), 138.
He and Lu [2017]
R.-Q. He and Z.-Y. Lu, Characterizing
many-body localization by out-of-time-ordered correlation, Phys. Rev. B 95, 054201 (2017).
Gärttner et al. [2018]
M. Gärttner, P. Hauke, and A. M. Rey, Relating out-of-time-order
correlations to entanglement via multiple-quantum coherences, Phys. Rev. Lett. 120, 040402 (2018).
Xu et al. [2019]
S. Xu, X. Li, Y.-T. Hsu, B. Swingle, and S. Das Sarma, Butterfly effect in interacting aubry-andre model: Thermalization,
slow scrambling, and many-body localization, Phys. Rev. Res. 1, 032039 (2019).
Hosur et al. [2016]
P. Hosur, X.-L. Qi,
D. A. Roberts, and B. Yoshida, Chaos in quantum channels, J. High Energy Phys. 2016 (2), 004.
Fan et al. [2017]
R. Fan, P. Zhang, H. Shen, and H. Zhai, Out-of-time-order correlation for many-body
localization, Sci. Bull. 62, 707 (2017).
Lee et al. [2019]
J. Lee, D. Kim, and D.-H. Kim, Typical growth behavior of the out-of-time-ordered
commutator in many-body localized systems, Phys. Rev. B 99, 184202 (2019).
Nico-Katz et al. [2022]
A. Nico-Katz, A. Bayat, and S. Bose, Information-theoretic memory scaling in the
many-body localization transition, Phys. Rev. B 105, 205133 (2022).
Lin and Motrunich [2018]
C.-J. Lin and O. I. Motrunich, Out-of-time-ordered
correlators in a quantum ising chain, Phys. Rev. B 97, 144304 (2018).
Quan et al. [2006]
H. T. Quan, Z. Song, X. F. Liu, P. Zanardi, and C. P. Sun, Decay of loschmidt echo enhanced by quantum criticality, Phys. Rev. Lett. 96, 140604 (2006).
Izrailev and Castañeda-Mendoza [2006]
F. Izrailev and A. Castañeda-Mendoza, Return
probability: Exponential versus gaussian decay, Phys. Lett. A 350, 355 (2006).
Heyl et al. [2013]
M. Heyl, A. Polkovnikov, and S. Kehrein, Dynamical quantum phase transitions in
the transverse-field ising model, Phys. Rev. Lett. 110, 135704 (2013).
Stéphan [2017]
J.-M. Stéphan, Return probability
after a quench from a domain wall initial state in the spin-1/2 xxz chain, J. Stat. Mech. Theory Exp. 2017, 103108 (2017).
Bera et al. [2018]
S. Bera, G. De Tomasi,
I. M. Khaymovich, and A. Scardicchio, Return probability for the anderson
model on the random regular graph, Phys. Rev. B 98, 134205 (2018).
Serbyn et al. [2015]
M. Serbyn, Z. Papić, and D. A. Abanin, Criterion for many-body localization-delocalization phase
transition, Phys. Rev. X 5, 041047 (2015).
Serbyn et al. [2016]
M. Serbyn, A. A. Michailidis, D. A. Abanin, and Z. Papić, Power-law entanglement spectrum in many-body localized phases, Phys. Rev. Lett. 117, 160601 (2016).
Barišić et al. [2016]
O. S. Barišić,
J. Kokalj, I. Balog, and P. Prelovšek, Dynamical
conductivity and its fluctuations along the crossover to many-body
localization, Phys. Rev. B 94, 045126 (2016).
Geraedts et al. [2017]
S. D. Geraedts, N. Regnault, and R. M. Nandkishore, Characterizing the many-body
localization transition using the entanglement spectrum, New J. Phys. 19, 113021 (2017).
Krajewski et al. [2022]
B. Krajewski, M. Mierzejewski, and J. Bonča, Modeling sample-to-sample fluctuations of the gap ratio in finite
disordered spin chains, Phys. Rev. B 106, 014201 (2022).
Bernien et al. [2017]
H. Bernien, S. Schwartz,
A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner,
et al., Probing many-body
dynamics on a 51-atom quantum simulator, Nature 551, 579 (2017).
Roushan et al. [2017]
P. Roushan, C. Neill,
J. Tangpanitanon, V. M. Bastidas, A. Megrant, R. Barends, Y. Chen, Z. Chen, B. Chiaro,
A. Dunsworth, A. Fowler, B. Foxen, M. Giustina, E. Jeffrey, J. Kelly, E. Lucero, J. Mutus, M. Neeley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner,
T. White, H. Neven, D. G. Angelakis, and J. Martinis, Spectroscopic signatures of localization with interacting photons in
superconducting qubits, Science 358, 1175 (2017).
Kohlert et al. [2019]
T. Kohlert, S. Scherg,
X. Li, H. P. Lüschen, S. Das Sarma, I. Bloch, and M. Aidelsburger, Observation of many-body localization in a one-dimensional
system with a single-particle mobility edge, Phys. Rev. Lett. 122, 170403 (2019).
Magnon-Mediated Indirect Exciton Condensation through Antiferromagnetic Insulators
Øyvind Johansen
oyvinjoh@ntnu.no
Center for Quantum Spintronics, Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
Akashdeep Kamra
Center for Quantum Spintronics, Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
Camilo Ulloa
Institute for Theoretical Physics, Utrecht University, Princetonplein 5, 3584 CC Utrecht, the Netherlands
Arne Brataas
Center for Quantum Spintronics, Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
Rembert A. Duine
Center for Quantum Spintronics, Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
Institute for Theoretical Physics, Utrecht University, Princetonplein 5, 3584 CC Utrecht, the Netherlands
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
(January 15, 2021)
Abstract
Electrons and holes residing on the opposing sides of an insulating barrier and experiencing an attractive Coulomb interaction can spontaneously form a coherent state known as an indirect exciton condensate. We study a trilayer system where the barrier is an antiferromagnetic insulator. The electrons and holes here additionally interact via interfacial coupling to the antiferromagnetic magnons. We show that by employing magnetically uncompensated interfaces, we can design the magnon-mediated interaction to be attractive or repulsive by varying the thickness of the antiferromagnetic insulator by a single atomic layer. We derive an analytical expression for the critical temperature $T_c$ of the indirect exciton condensation. Within our model, anisotropy is found to be crucial for achieving a finite $T_c$, which increases with the strength of the exchange interaction in the antiferromagnetic bulk. For realistic material parameters, we estimate $T_c$ to be around 7 K, the same order of magnitude as the current experimentally achievable exciton condensation where the attraction is solely due to the Coulomb interaction. The magnon-mediated interaction is expected to cooperate with the Coulomb interaction for condensation of indirect excitons, thereby providing a means to significantly increase the exciton condensation temperature range.
Introduction.—Interactions between fermions result in exotic states of matter. Superconductivity is a prime example, where the negatively charged electrons can have an overall attractive coupling mediated by individual couplings to the vibrations, known as phonons, of the positively charged lattice. In addition to charge, the electron also has a spin degree of freedom. The electron spin can interact with localized magnetic moments through an exchange interaction, exciting the magnetic moment by transfer of angular momentum. These excitations are quasiparticles known as magnons. Theoretical predictions of electron-magnon interactions have shown that these can also induce effects such as superconductivity Suhl (2001); Karchev (2003); Funaki and Shimahara (2014); Kar et al. (2014); Kargarian et al. (2016); Gong et al. (2017); Rohling et al. (2018); Hugdal et al. (2018); Erlandsen et al. (2019); Fjærbu et al. (2019). Research interest in antiferromagnetic materials is surging Jungwirth et al. (2016); Baltz et al. (2018). This enthusiasm is due to the promising properties of antiferromagnets such as high resonance frequencies in the THz regime and a vanishing net magnetic moment. Much of this research focuses on interactions involving magnons or spin waves at magnetic interfaces in hybrid structures. Examples of this are spin pumping Tserkovnyak et al. (2002); Ross (2013); Cheng et al. (2014); Takei et al. (2014); Ross et al. (2015); Johansen and Brataas (2017); Kamra and Belzig (2017), spin transfer Cheng et al. (2014, 2016); Sluka (2017); Johansen et al. (2018), and spin Hall magnetoresistance Han et al. (2014); Hou et al. (2017); Hoogeboom et al. (2017); Manchon (2017); Fischer et al. (2018); Baldrati et al. (2018) at normal metal interfaces, and magnon-mediated superconductivity in metals Fjærbu et al. (2019) and topological insulators Erlandsen et al. (2019). Recently, an experiment has also demonstrated spin transport in an antiferromagnetic insulator over distances up to 80 µm Lebrun et al. (2018). Moreover, antiferromagnetic materials are also of interest since it is believed that high-temperature superconductivity in cuprates is intricately linked to magnetic fluctuations near an antiferromagnetic Mott insulating phase Bonn (2006); Hig (2006). Thus it is crucial to achieve a good understanding of antiferromagnetic magnon-electron interactions, as well as electron-electron interactions mediated by antiferromagnetic magnons.
In this Letter, we theoretically demonstrate the application of antiferromagnetic insulators to condensation of indirect excitons. An exciton is a bound state consisting of an electron and a hole. The excitons interact attractively through the Coulomb interaction due to their opposite charges Wannier (1937). Initially predicted many decades ago Blatt et al. (1962); Casella (1963), the exciton condensate has been surprisingly elusive. A challenge is that the exciton lifetime is too short to form a condensate due to exciton-exciton annihilation processes such as Auger recombination O’Hara et al. (1999); Klimov et al. (2000); Wang et al. (2004, 2006). The problem of short exciton lifetimes can be solved by having a spatial separation between the electrons and holes in a trilayer system, where the electrons and holes are separated by an insulating barrier Lozovik and Yudson (1975, 1976, 1977) to drastically lower the recombination rate. Excitons in such systems are often referred to as (spatially) indirect excitons, and these are ideal to observe the exciton condensate. Herein, we consider a system where the insulating barrier is an antiferromagnetic insulator, as shown in Fig. 1. The insulating barrier can then serve a dual purpose: in addition to increasing the exciton lifetime, the spin fluctuations in the antiferromagnet mediate an additional attractive interaction between the electrons and the holes. This magnon-mediated interaction cooperates with the Coulomb interaction, thereby enabling an increase of the temperature range for observing exciton condensation in experiments. The indirect exciton condensate has two main experimental signatures. The first is a dissipationless counterflow of electric currents in the two layers Tutuc et al. (2004); Kellogg et al. (2004); Nandi et al. (2012). When the exciton condensate moves in one direction, the resulting charge currents in the individual layers are antiparallel due to the oppositely charged carriers in the two layers. The second signature is a large enhancement of the zero-bias tunneling conductance between the layers Spielman et al. (2000, 2001), reminiscent of the Josephson effect in superconductors. The exciton condensate is expected to exist when the number of electrons in one layer equals the number of holes in the other. Thus far, to the best of our knowledge, experiments with an unequivocal detection of the exciton condensate have utilized quantum Hall systems with a half filling of the lowest Landau level to satisfy this criterion Eisenstein and MacDonald (2004); Wiersma et al. (2004); Eisenstein (2014); Li et al. (2017); Liu et al. (2017). Such systems rely on high external magnetic fields. A recent experiment studying double-bilayer graphene systems has, however, been able to detect the enhanced zero-bias tunneling conductance signature of indirect exciton condensation without any magnetic field, by controlling the electron and hole populations through gate voltages Burg et al. (2018). This is an indication of the possible existence of an exciton condensate, and shows promise for finding a magnetic-field free exciton condensate. In this Letter, we show that the magnon-mediated interaction between the electrons and holes can be attractive or repulsive depending on whether the two magnetic interfaces are with the same or opposite magnetic sublattices. In turn, this enables an unprecedented control over the interaction nature via the variation of the antiferromagnetic insulator thickness by a single atomic layer. Consequently, when the magnon-mediated interaction is paired with the Coulomb interaction, this can be used to control the favored spin structure of the excitons. In our model, we find that the critical temperature for condensation is enhanced by the exchange interaction in the antiferromagnet, and that a finite magnetic anisotropy is needed to have an attractive interaction around the Fermi level. Our results suggest that if one lets the insulating barrier in indirect exciton condensation experiments be an antiferromagnetic insulator, the magnon-mediated interactions can significantly strengthen the correlations between the electrons and holes.
Model.—We consider a trilayer system where an antiferromagnetic insulator is sandwiched between two fermion reservoirs, as illustrated in Fig. 1(a). We will then later consider the case where one of these reservoirs is populated by electrons, and the other by holes. This system can be described by the Hamiltonian $\mathcal{H}=\mathcal{H}_\text{el}+\mathcal{H}_\text{mag}+\mathcal{H}_\text{int}$, where $\mathcal{H}_\text{el}$ describes the electronic part of the system in the fermion reservoirs, $\mathcal{H}_\text{mag}$ describes the spins in the antiferromagnetic insulator, and $\mathcal{H}_\text{int}$ describes the interfacial interaction between the fermions and magnons. We assume all three layers to be atomically thin, and thus two-dimensional, for simplicity. We consider a uniaxial easy-axis antiferromagnetic insulator described by the Hamiltonian
$$\displaystyle\mathcal{H}_\text{mag}=J\sum_{\langle i,j\rangle}\bm{S}_i\cdot\bm{S}_j-\frac{K}{2}\sum_{i}S_{iz}^{2}\,.$$
(1)
Here $J>0$ is the strength of the nearest-neighbor exchange interaction between the spins, which have a magnitude ${\lvert\bm{S}_i\rvert=\hbar S}$ for all $i$, and $K>0$ is the easy-axis anisotropy constant. Next, we perform a Holstein–Primakoff transformation (HPT) Holstein and Primakoff (1940) of the spin operators on each sublattice, denoted by sublattices $A$ and $B$, as defined in Fig. 1. From the HPT, we have that the operator $a_i^{\left(\dagger\right)}$ annihilates (creates) a magnon at $\bm{r}_i$ when $\bm{r}_i\in A$, and equivalently $b_i^{\left(\dagger\right)}$ annihilates (creates) a magnon at $\bm{r}_i$ when $\bm{r}_i\in B$. The magnetic Hamiltonian can be diagonalized through Fourier and Bogoliubov transformations to the form $\mathcal{H}_\text{mag}=\sum_{\bm{k}}\varepsilon_{\bm{k}}\left(\mu_{\bm{k}}^{\dagger}\mu_{\bm{k}}+\nu_{\bm{k}}^{\dagger}\nu_{\bm{k}}\right)$. The magnon energy is given by $\varepsilon_{\bm{k}}=\hbar\sqrt{(1-\gamma_{\bm{k}}^{2})\omega_E^{2}+\omega_\parallel(2\omega_E+\omega_\parallel)}$, where $\bm{k}$ is the magnon momentum, $\gamma_{\pm\bm{k}}=z^{-1}\sum_{\bm{\delta}}\exp\left(i\bm{k}\cdot\bm{\delta}\right)$, $\bm{\delta}$ a set of vectors to each nearest neighbor, $z$ the number of nearest neighbors, $\omega_E=\hbar JSz$, and $\omega_\parallel=\hbar KS$. The eigenmagnon operators $\mu_{\bm{k}}^{\left(\dagger\right)}$ and $\nu_{\bm{k}}^{\left(\dagger\right)}$ are related to the HPT magnon operators through the Bogoliubov transformation ${\mu_{\bm{k}}=u_{\bm{k}}a_{\bm{k}}+v_{\bm{k}}b_{-\bm{k}}^{\dagger}}$, ${\nu_{\bm{k}}=u_{\bm{k}}b_{\bm{k}}+v_{\bm{k}}a_{-\bm{k}}^{\dagger}}$. The Bogoliubov coefficients $u_{\bm{k}}$ and $v_{\bm{k}}$ are given by $u_{\bm{k}}=\sqrt{(\Gamma_{\bm{k}}+1)/2}$ and $v_{\bm{k}}=\sqrt{(\Gamma_{\bm{k}}-1)/2}$, with ${\Gamma_{\bm{k}}=\{1-[\omega_E\gamma_{\bm{k}}/(\omega_E+\omega_\parallel)]^{2}\}^{-1/2}}$. The interfacial exchange interaction between the fermions and magnons at the two magnetic interfaces is modeled by the $s$-$d$ interaction Zener (1951); Kasuya (1956)
$$\displaystyle\mathcal{H}_\text{int}=-\sum_{j=L,R}\sum_{k=A,B}\sum_{i\in\mathcal{A}_k^{j}}J_k^{j}(\bm{r}_i)\hat{\bm{\rho}}_j(\bm{r}_i)\cdot\bm{S}(\bm{r}_i)\,,$$
(2)
where $\mathcal{A}_k^{L(R)}$ is the interface section between the left (right) fermion reservoir and the $k$-th ($k=A,B$) sublattice of the antiferromagnetic insulator. The interfacial exchange coupling constants $J_k^{j}(\bm{r}_i)$ are defined so that they take on the value $J_k^{j}(\bm{r}_i)=J_k^{j}$ if $\bm{r}_i\in\mathcal{A}_k^{j}$, and zero otherwise. We have also defined the electronic spin density
$$\displaystyle\hat{\bm{\rho}}_j(\bm{r}_i)=\frac{1}{2}\sum_{\sigma,\sigma^{\prime}}\psi_{\sigma,j}^{\dagger}(\bm{r}_i)\bm{\sigma}_{\sigma\sigma^{\prime}}\psi_{\sigma^{\prime},j}(\bm{r}_i)$$
(3)
with $\psi_{\sigma,j}^{\left(\dagger\right)}$ annihilating (creating) a fermion with spin $\sigma$ in the $j$-th ($j=L,R$) fermion reservoir, and $\bm{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ being a vector of Pauli matrices.
Effective magnon potential.—We will now use a path integral approach where we treat the magnon-fermion interaction as a perturbation, and integrate out the magnonic fields that give rise to processes as illustrated in Fig. 1(b) to express the interaction as an effective potential between the fermion reservoirs. We consider the coherent-state path integral $\mathcal{Z}=\int\mathcal{D}^{2}\psi_L\mathcal{D}^{2}\psi_R\mathcal{D}^{2}\mu\mathcal{D}^{2}\nu\exp\left(-\mathcal{S}/\hbar\right)$ in imaginary time, where $\mathcal{D}^{2}\mu\equiv\mathcal{D}\mu\mathcal{D}\mu^{*}$ etc. The action $\mathcal{S}$ is given by
$$\displaystyle\mathcal{S}=\int_0^{\hbar\beta}\mathrm{d}\tau\Bigg\{\hbar\sum_i\Bigg[\sum_{\sigma=\uparrow,\downarrow}\sum_{j=L,R}\psi_{\sigma,j}^{*}(\bm{r}_i,\tau)\partial_\tau\psi_{\sigma,j}(\bm{r}_i,\tau)+\sum_{\eta=\mu,\nu}\eta^{*}(\bm{r}_i,\tau)\partial_\tau\eta(\bm{r}_i,\tau)\Bigg]+\mathcal{H}(\tau)\Bigg\}\,,$$
(4)
where $\tau=it$ is imaginary time, and $\beta=1/(k_BT)$ with $k_B$ being the Boltzmann constant and $T$ the temperature. Note that in the coherent-state path integral we can replace fermion operators by Grassmann numbers (e.g. $\psi^{\dagger}\rightarrow\psi^{*}$) and boson operators by complex numbers (e.g. $\eta^{\dagger}\rightarrow\eta^{*}$). We now treat $\mathcal{H}_\text{int}$ as a perturbation, and keep terms up to second order. We discard any terms that only contribute to intralayer interactions, as we are interested in the interlayer potential between the fermion reservoirs. Next, we integrate out the magnon fields $\mu^{(*)}$ and $\nu^{(*)}$, and write the path integral over the fermion reservoirs as $\mathcal{Z}\approx\int\mathcal{D}^{2}\psi_L\mathcal{D}^{2}\psi_R\exp\left(-\mathcal{S}_\text{eff}/\hbar\right)$. In the momentum and Matsubara-frequency bases, the effective action $\mathcal{S}_\text{eff}$ is given by Supp
$$\displaystyle\mathcal{S}_\text{eff}=\mathcal{S}_\text{el}+\hbar\beta\sum_{\sigma=\uparrow,\downarrow}\sum_{lmn}\sum_{\bm{k}\bm{k^{\prime}}\bm{q}}U_\sigma(\bm{q},i\omega_n)\psi_{\sigma,L}^{*}(\bm{k^{\prime}}+\bm{q},i\nu_l+i\omega_n)\psi_{-\sigma,L}(\bm{k^{\prime}},i\nu_l)\psi_{-\sigma,R}^{*}(\bm{k}-\bm{q},i\nu_m-i\omega_n)\psi_{\sigma,R}(\bm{k},i\nu_m)\,,$$
(5)
where we have here introduced the fermionic and bosonic Matsubara frequencies, $\nu_n=(2n+1)\pi/(\hbar\beta)$ and $\omega_n=2\pi n/(\hbar\beta)$ respectively. The action $\mathcal{S}_\text{el}$ describes the contribution of the fermionic fields to the action in Eq. (4), except for the contributions from $\mathcal{H}_\text{int}$. The latter term, $\mathcal{H}_\text{int}$, is instead described by the contribution of the magnon-mediated interlayer-fermion potential
$$\displaystyle U_{\sigma}(\bm{q},i\omega_n)\equiv-\frac{\hbar^{2}S}{N}\left[\frac{J_\mu^{L}(\bm{q})J_\mu^{R}(\bm{q})}{-\sigma i\hbar\omega_n+\varepsilon_{\bm{q}}}+\frac{J_\nu^{L}(\bm{q})J_\nu^{R}(\bm{q})}{\sigma i\hbar\omega_n+\varepsilon_{\bm{q}}}\right]$$
(6)
to the effective action, where $N$ is the total number of spin sites in the antiferromagnet. We assume the two magnetic interfaces are uncompensated, i.e. each interface involves only one of the antiferromagnetic sublattices, as shown in Fig. 1. We compute that the coupling constants $J_{\mu,\nu}^{L,R}(\bm{q})$ describing the effective exchange coupling strength between the spin of the fermions in reservoirs $L$, $R$ and the spin of the eigenmagnons $\mu_{\bm{q}}$, $\nu_{\bm{q}}$ are $J_\mu^{L/R}(\bm{q})={v_{\bm{q}}J_B^{L/R}(\bm{r}_{L/R})-u_{\bm{q}}J_A^{L/R}(\bm{r}_{L/R})}$ and $J_\nu^{L/R}(\bm{q})={v_{\bm{q}}J_A^{L/R}(\bm{r}_{L/R})-u_{\bm{q}}J_B^{L/R}(\bm{r}_{L/R})}$. Since each interface is with only one sublattice, $J_{\mu}^{L}(\bm{q})=-u_{\bm{q}}J_A^{L}$ if the left interface is with sublattice $A$, and $J_{\mu}^{L}(\bm{q})=v_{\bm{q}}J_B^{L}$ if the left interface is with sublattice $B$. We get analogous results for the right interface. We see that the effective coupling constants $J_{\mu,\nu}^{L,R}(\bm{q})$ can have the same or opposite sign as the coupling constants $J_{A,B}^{L,R}$ depending on which sublattice is at the interface. This has to do with the spin projection of the eigenmagnon relative to the equilibrium spin direction of the sublattice at the interface. The effective coupling constants $J_{\mu,\nu}^{L,R}(\bm{q})$ are also enhanced by a Bogoliubov coefficient $u_{\bm{q}}$ or $v_{\bm{q}}$ with respect to the coupling constants $J_{A,B}^{L,R}$. These are typically large numbers. For $\bm{q}=\bm{0}$ we have $u_{\bm{0}}\approx v_{\bm{0}}\approx 2^{-3/4}\times(\omega_E/\omega_\parallel)^{1/4}$ to lowest order in the small ratio $\omega_\parallel/\omega_E$. The enhancement is due to large spin fluctuations at each sublattice of the antiferromagnet per eigenmagnon in the system, since the eigenmagnons are squeezed states Kamra et al. (2019); Erlandsen et al. (2019). By studying Eq. (6), we note that we have $\operatorname{Re}[U_{\sigma}(\bm{q},i\omega_n)]<0$ for identical uncompensated interfaces, whereas for a system where one of the interfaces is with sublattice $A$ and the other with sublattice $B$, we have $\operatorname{Re}[U_{\sigma}(\bm{q},i\omega_n)]>0$. Consequently, this allows us to control whether the magnon-mediated interlayer-fermion potential $U_{\sigma}(\bm{q},i\omega_n)$ is attractive or repulsive by designing the interfaces. Whether this potential is attractive or repulsive can depend on a single atomic layer. This allows for an unprecedentedly high degree of control and tunability of the interlayer-fermion interactions. The sign difference of the potential can be explained by how the two fermions coupled by the magnon interact with the eigenmagnon spin. For $\operatorname{Re}[U_{\sigma}(\bm{q},i\omega_n)]<0$ we have processes where the fermions couple symmetrically to the magnon spin, i.e. both fermions couple either ferromagnetically or antiferromagnetically to its spin. On the other hand, for $\operatorname{Re}[U_{\sigma}(\bm{q},i\omega_n)]>0$ we have an asymmetric coupling, where one fermion couples ferromagnetically to the eigenmagnon spin and the other fermion couples antiferromagnetically.
Indirect exciton condensation.—We will now study spontaneous Bose–Einstein condensation of spatially-indirect excitons where the attraction is mediated by the antiferromagnetic magnons. We consider the left (right) reservoir to be an n-doped (p-doped) semiconductor. We describe the semiconductors by the Hamiltonian
$$\displaystyle\mathcal{H}_\text{el}(\tau)=\sum_{j=L,R}\sum_{\bm{k}}\sum_{\sigma=\uparrow,\downarrow}\epsilon_j(\bm{k})\psi_{\sigma,j}^{\dagger}(\bm{k},\tau)\psi_{\sigma,j}(\bm{k},\tau)\,,$$
(7)
with $\epsilon_L(\bm{k})=-\epsilon_R(\bm{k})=\hbar^{2}k^{2}/(2m)-\epsilon_F\equiv\epsilon(\bm{k})$. Here $m$ is the effective electron and hole mass, which we assume to be equal, and $\epsilon_F$ is the Fermi level. While the operator $\psi_{\sigma,L/R}^{\dagger}$ creates an electron with spin $\sigma$ in the left/right layer, we note that due to the negative dispersion in the right layer the excitations in this layer are effectively described by electron holes. We also note that we have not included a Coulomb interaction between the electrons and the holes in our model. The effect of the Coulomb potential on indirect exciton condensation has been widely studied in previous literature Fil and Shevchenko (2018). We will later argue why the magnon-mediated potential is expected to cooperate with the Coulomb potential in the case of indirect exciton condensation. The interaction in Eq. (5) is too complicated for us to solve for the exciton condensation. We therefore make an approximation similar to the Bardeen–Cooper–Schrieffer (BCS) theory of superconductivity Bardeen et al. (1957); Kopnin (2001), and assume that the dominant contribution to the interaction arises when the excitons have zero net momentum ($\bm{k}+\bm{k^{\prime}}=\bm{q}$), and similarly for the Matsubara frequencies ($i\nu_l+i\nu_m=i\omega_n$). Next, we introduce the order parameter
$$\displaystyle\Delta_\sigma(\bm{k},i\nu_m)\equiv-\sum_n\sum_{\bm{k^{\prime}}}U_{\sigma}(\bm{k}-\bm{k^{\prime}},i\nu_m-i\nu_n)\psi^{*}_{\sigma,R}(\bm{k^{\prime}},i\nu_n)\psi_{\sigma,L}(\bm{k^{\prime}},i\nu_n)$$
(8)
and its Hermitian conjugate, and perform a Hubbard–Stratonovich transformation of the effective action. By doing a saddle-point approximation and integrating over the fermionic fields Supp , we then obtain the gap equation
$$\displaystyle\Delta_{-\sigma}(\bm{k^{\prime}},i\nu_n)=\sum_{m}\sum_{\bm{k}}\beta^{-1}U_{\sigma}(\bm{k}-\bm{k^{\prime}},i\nu_m-i\nu_n)\frac{\Delta_\sigma(\bm{k},i\nu_m)}{\lvert\Delta_\sigma(\bm{k},i\nu_m)\rvert^{2}+\epsilon(\bm{k})^{2}+(\hbar\nu_m)^{2}}\,.$$
(9)
We note that the magnon-mediated potential is attractive when $U_{\sigma}\left(\bm{q},i\omega_n\right)>0$ in the case of indirect exciton condensation, which can be seen from rearranging the fermionic fields in Eq. (5). We now use Eq. (9) to find an analytical expression for the critical temperature $T_c$ below which the excitons spontaneously form a Bose–Einstein condensate. To obtain an analytical solution, we focus on the case when the gap functions and the magnon-mediated potential are independent of momentum and frequency. This corresponds to an instantaneous contact interaction, and we therefore assume that the gap functions have an $s$-wave pairing. Moreover, we see that the gap equation in Eq. (9) only has a solution when $\Delta_\sigma$ and $\Delta_{-\sigma}$ have the same sign. In the case where spin degeneracy is unbroken, it is fair to assume that $\Delta_\sigma=\Delta_{-\sigma}$, indicating triplet-like pairing. In superconductivity, $s$-wave and triplet pairing are mutually exclusive for even-frequency order parameters, but in the case of indirect excitons the same symmetry restrictions on the order parameter do not apply, as the composite boson does not consist of identical particles. In other words, for indirect excitons the symmetries in momentum space and spin space are decoupled from one another. As both the magnon-mediated potential and the Coulomb potential are in the $s$-wave channel and the Coulomb potential is independent of spin, the magnon-mediated potential works together with the Coulomb potential, enhancing the attractive exciton pairing interaction. The fact that we can design whether the magnon-mediated potential is attractive or repulsive allows us to control which spin channel is the most favorable for the excitons to condense in.
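The sign structure discussed above can be verified numerically. Below is a minimal sketch (not from the paper; the parameter values and helper names are illustrative) that evaluates the Bogoliubov coefficients at $\bm{q}=\bm{0}$ and the sign of $\operatorname{Re}[U_\sigma]$ at $\omega_n=0$ for the two interface configurations, following the coupling-constant conventions given after Eq. (6):

```python
import math

# Illustrative bulk frequencies (rad/s), of the order used in the paper's
# estimate; these are assumptions, not fitted material values.
omega_E = 8.6e13    # exchange frequency
omega_par = 9.9e9   # easy-axis anisotropy frequency

# Bogoliubov coefficients at q = 0, where gamma_0 = 1.
Gamma0 = (1.0 - (omega_E / (omega_E + omega_par)) ** 2) ** -0.5
u0 = math.sqrt((Gamma0 + 1.0) / 2.0)
v0 = math.sqrt((Gamma0 - 1.0) / 2.0)

# Squeezing enhancement to lowest order in omega_par/omega_E:
# u0 ≈ v0 ≈ 2^(-3/4) (omega_E/omega_par)^(1/4).
u0_approx = 2 ** -0.75 * (omega_E / omega_par) ** 0.25

def re_U_sign(left_sublattice, right_sublattice, JA=1.0, JB=1.0):
    """Sign of Re[U_sigma(0, 0)] from Eq. (6): the overall prefactor
    -(hbar^2 S/N)/epsilon_q is negative, so the sign is set by
    -(J_mu^L J_mu^R + J_nu^L J_nu^R)."""
    def couplings(sublattice, J):
        # Interface with A: (J_mu, J_nu) = (-u0*J, v0*J); with B: (v0*J, -u0*J).
        return (-u0 * J, v0 * J) if sublattice == "A" else (v0 * J, -u0 * J)
    JmuL, JnuL = couplings(left_sublattice, JA)
    JmuR, JnuR = couplings(right_sublattice, JB)
    return -1.0 if (JmuL * JmuR + JnuL * JnuR) > 0 else 1.0
```

Identical interfaces (A,A or B,B) give a negative real part (symmetric coupling to the eigenmagnon spin), while opposite sublattices (A,B) give a positive one, reproducing the sign dichotomy stated in the text; the coefficients $u_0\approx v_0$ come out large, reflecting the squeezing enhancement.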
To determine $T_c$ we perform a BCS-like calculation Supp ; Kopnin (2001) and restrict the sum over Matsubara frequencies to a thin shell around the Fermi level ($\lvert\hbar\nu_m\rvert<\varepsilon_{\bm{0}}$), where the magnon-mediated potential is attractive. The analytical expression for $T_c$ is found to be
$$\displaystyle T_c=\frac{2e^{\gamma_\text{EM}}\varepsilon_{\bm{0}}}{\pi k_B}\exp\left(-\frac{2\pi\varepsilon_{\bm{0}}}{Su_{\bm{0}}v_{\bm{0}}ma^{2}J_A^{L}J_B^{R}}\right)\,,$$
(10)
where $\gamma_\text{EM}\approx 0.577$ is the Euler–Mascheroni constant and $a$ the lattice constant of the semiconductors. Here we have assumed that the left and right magnetic interfaces consist of opposite sublattices. This leads to an attractive exciton interaction. If we assume the exchange energy among the spins in the bulk is much larger than the interface coupling ($\hbar\omega_E\gg Sma^{2}J_A^{L}J_B^{R}$), the value of the anisotropy that maximizes $T_c$ is
$$\displaystyle\hbar\omega_\parallel^{\text{(opt)}}\equiv\frac{Sma^{2}J_A^{L}J_B^{R}}{16\pi}\,.$$
(11)
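For concreteness, Eq. (11) can be evaluated with the material parameters used in the numerical estimate later in the text ($S=1$, $m$ the electron mass, $a=5$ Å, $\hbar J_A^L=\hbar J_B^R=10$ meV); a short sketch:

```python
import math

hbar = 1.054571817e-34   # J s (CODATA)
m_e = 9.1093837015e-31   # kg; effective mass set to the electron mass
a = 5e-10                # m, semiconductor lattice constant
S = 1.0
# Interfacial coupling J_A^L = J_B^R in s^-1, from hbar*J = 10 meV.
J_int = 10e-3 * 1.602176634e-19 / hbar

# Eq. (11): hbar * omega_opt = S m a^2 J_A^L J_B^R / (16 pi)
omega_opt = S * m_e * a**2 * J_int**2 / (16 * math.pi * hbar)
```

This gives $\omega_\parallel^{\text{(opt)}}\approx 9.9\cdot 10^{9}\,\text{s}^{-1}$, consistent with the value quoted in the estimate below.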
The full dependence of $T_c$ on the magnetic anisotropy is shown in Fig. 2. The critical temperature for indirect exciton condensation is largest for a nonzero and finite magnetic anisotropy. This is because in the limit $\omega_\parallel\rightarrow 0$ the magnon gap in the antiferromagnetic insulator vanishes, and consequently so does the thin shell around the Fermi level where the magnon-mediated potential is attractive. In the case of a large anisotropy, $\omega_\parallel\gg\omega_\parallel^{\text{(opt)}}$, the enhancement of the magnon-mediated potential due to magnon squeezing is lost Kamra et al. (2019). When the anisotropy takes on its optimal value, the critical temperature becomes
$$\displaystyle T_c^{\text{(opt)}}\equiv\frac{\sqrt{\hbar\omega_E Sma^{2}J_A^{L}J_B^{R}}}{\sqrt{2}\pi^{3/2}k_B}e^{\gamma_\text{EM}-1/2}\,.$$
(12)
Notably, we see that the critical temperature increases monotonically with increasing strength of the exchange interaction $\hbar\omega_E$. The optimal choice of an antiferromagnetic insulator would then be a material with a magnetic anisotropy ($\hbar\omega_\parallel$) on an energy scale proportional to the exchange coupling at the interface ($\hbar J^{L,R}_{A,B}$), and a very strong exchange interaction in the bulk of the antiferromagnetic insulator ($\hbar\omega_E$). To show how high the $T_c$ of indirect exciton condensation in our model can be using only the magnon-mediated interaction, we give a numerical estimate for realistic material parameters. Using the parameters $S=1$, $m$ equal to the electron mass, $a=5\,\text{Å}$, $\hbar J_A^{L}=\hbar J_B^{R}=10\,\text{meV}$ Rohling et al. (2018), $\omega_E=8.6\cdot 10^{13}\,\text{s}^{-1}$ Satoh et al. (2010), and assuming the magnetic anisotropy takes on its optimal value $\omega_\parallel^{\text{(opt)}}=9.9\cdot 10^{9}\,\text{s}^{-1}$, we obtain a $T_c^{\text{(opt)}}$ of approximately 7 K. In comparison, a recent experiment studying double bilayer graphene in the quantum Hall regime found the Coulomb-mediated exciton condensation to have an activation energy of $\sim 8$ K Liu et al. (2017), which was ten times higher than what was found in an experiment using GaAs Kellogg et al. (2002). This demonstrates that the potential mediated by the antiferromagnetic magnons is capable of creating strong correlations between the electrons and holes that could significantly increase the critical temperature for condensation compared to when the excitons are just bound through the Coulomb interaction.
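As a cross-check of the quoted estimate, Eq. (12) can be evaluated directly. A minimal sketch using the parameter values listed in the text ($S=1$, electron mass for $m$, $a=5$ Å, $\hbar J=10$ meV, $\omega_E=8.6\cdot 10^{13}\,\text{s}^{-1}$):

```python
import math

hbar = 1.054571817e-34    # J s (CODATA)
k_B = 1.380649e-23        # J/K
m_e = 9.1093837015e-31    # kg
gamma_EM = 0.5772156649   # Euler–Mascheroni constant

S = 1.0
a = 5e-10                               # m
J_int = 10e-3 * 1.602176634e-19 / hbar  # s^-1, from hbar*J = 10 meV
omega_E = 8.6e13                        # s^-1

# Eq. (12): T_c^(opt) = sqrt(hbar*omega_E * S m a^2 J_A^L J_B^R)
#                       / (sqrt(2) pi^(3/2) k_B) * exp(gamma_EM - 1/2)
energy_sq = hbar * omega_E * S * m_e * a**2 * J_int**2   # units of J^2
T_c_opt = (math.sqrt(energy_sq)
           / (math.sqrt(2) * math.pi**1.5 * k_B)
           * math.exp(gamma_EM - 0.5))
```

Evaluating this expression yields $T_c^{\text{(opt)}}\approx 6.9$ K, i.e. approximately 7 K as stated above.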
Acknowledgements. This work was supported by the Research Council of Norway through its Centres of Excellence funding scheme, Project No. 262633 “QuSpin” and Grant No. 239926 “Super Insulator Spintronics,” the European Research Council via Advanced Grant No. 669442 “Insulatronics”, as well as the Stichting voor Fundamenteel Onderzoek der Materie (FOM).
References
Suhl (2001)
H. Suhl, “Simultaneous onset of ferromagnetism and superconductivity,” Phys. Rev. Lett. 87, 167007 (2001).
Karchev (2003)
N. Karchev, “Magnon exchange mechanism of ferromagnetic superconductivity,” Phys. Rev. B 67, 054416 (2003).
Funaki and Shimahara (2014)
H. Funaki and H. Shimahara, “Odd- and even-frequency superconductivities mediated by ferromagnetic magnons,” J. Phys. Soc. Jpn. 83, 123704 (2014).
Kar et al. (2014)
R. Kar, T. Goswami, B. C. Paul, and A. Misra, “On magnon mediated Cooper pair formation in ferromagnetic superconductors,” AIP Advances 4, 087126 (2014).
Kargarian et al. (2016)
M. Kargarian, D. K. Efimkin, and V. Galitski, “Amperean pairing at the surface of topological insulators,” Phys. Rev. Lett. 117, 076806 (2016).
Gong et al. (2017)
X. Gong, M. Kargarian, A. Stern, D. Yue, H. Zhou, X. Jin, V. M. Galitski, V. M. Yakovenko, and J. Xia, “Time-reversal symmetry-breaking superconductivity in epitaxial bismuth/nickel bilayers,” Science Advances 3 (2017), 10.1126/sciadv.1602579.
Rohling et al. (2018)
N. Rohling, E. L. Fjærbu, and A. Brataas, “Superconductivity induced by interfacial coupling to magnons,” Phys. Rev. B 97, 115401 (2018).
Hugdal et al. (2018)
H. G. Hugdal, S. Rex, F. S. Nogueira, and A. Sudbø, “Magnon-induced superconductivity in a topological insulator coupled to ferromagnetic and antiferromagnetic insulators,” Phys. Rev. B 97, 195438 (2018).
Erlandsen et al. (2019)
E. Erlandsen, A. Kamra, A. Brataas, and A. Sudbø, “Superconductivity enhancement on a topological insulator surface by antiferromagnetic squeezed magnons,” arXiv (2019), arXiv:1903.01470.
Fjærbu et al. (2019)
E. L. Fjærbu, N. Rohling, and A. Brataas, “Superconductivity at metal-antiferromagnetic insulator interfaces,” arXiv (2019), arXiv:1904.00233.
Jungwirth et al. (2016)
T. Jungwirth, X. Marti, P. Wadley, and J. Wunderlich, “Antiferromagnetic spintronics,” Nat. Nanotechnol. 11, 231 (2016).
Baltz et al. (2018)
V. Baltz, A. Manchon, M. Tsoi, T. Moriyama, T. Ono, and Y. Tserkovnyak, “Antiferromagnetic spintronics,” Rev. Mod. Phys. 90, 015005 (2018).
Tserkovnyak et al. (2002)
Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, “Enhanced Gilbert damping in thin ferromagnetic films,” Phys. Rev. Lett. 88, 117601 (2002).
Ross (2013)
M. P. Ross, Spin Dynamics in an Antiferromagnet, Ph.D. thesis, Technische Universität München (2013).
Cheng et al. (2014)
R. Cheng, J. Xiao, Q. Niu, and A. Brataas, “Spin pumping and spin-transfer torques in antiferromagnets,” Phys. Rev. Lett. 113, 057601 (2014).
Takei et al. (2014)
S. Takei, B. I. Halperin, A. Yacoby, and Y. Tserkovnyak, “Superfluid spin transport through antiferromagnetic insulators,” Phys. Rev. B 90, 094408 (2014).
Ross et al. (2015)
P. Ross, M. Schreier, J. Lotze, H. Huebl, R. Gross, and S. T. B. Goennenwein, “Antiferromagnetic resonance detected by direct current voltages in MnF$_2$/Pt bilayers,” J. Appl. Phys. 118, 233907 (2015).
Johansen and Brataas (2017)
Ø. Johansen and A. Brataas, “Spin pumping and inverse spin Hall voltages from dynamical antiferromagnets,” Phys. Rev. B 95, 220408 (2017).
Kamra and Belzig (2017)
A. Kamra and W. Belzig, “Spin pumping and shot noise in ferrimagnets: Bridging ferro- and antiferromagnets,” Phys. Rev. Lett. 119, 197201 (2017).
Cheng et al. (2016)
R. Cheng, D. Xiao, and A. Brataas, “Terahertz antiferromagnetic spin Hall nano-oscillator,” Phys. Rev. Lett. 116, 207603 (2016).
Sluka (2017)
V. Sluka, “Antiferromagnetic resonance excited by oscillating electric currents,” Phys. Rev. B 96, 214412 (2017).
Johansen et al. (2018)
Ø. Johansen, H. Skarsvåg, and A. Brataas, “Spin-transfer antiferromagnetic resonance,” Phys. Rev. B 97, 054423 (2018).
Han et al. (2014)
J. H. Han, C. Song, F. Li, Y. Y. Wang, G. Y. Wang, Q. H. Yang, and F. Pan, “Antiferromagnet-controlled spin current transport in SrMnO$_{3}$/Pt hybrids,” Phys. Rev. B 90, 144431 (2014).
Hou et al. (2017)
D. Hou, Z. Qiu, J. Barker, K. Sato, K. Yamamoto, S. Vélez, J. M. Gomez-Perez, L. E. Hueso, F. Casanova, and E. Saitoh, “Tunable sign change of spin Hall magnetoresistance in Pt/NiO/YIG structures,” Phys. Rev. Lett. 118, 147202 (2017).
Hoogeboom et al. (2017)
G. R. Hoogeboom, A. Aqeel, T. Kuschel, T. T. M. Palstra, and B. J. van Wees, “Negative spin Hall magnetoresistance of Pt on the bulk easy-plane antiferromagnet NiO,” Appl. Phys. Lett. 111, 052409 (2017).
Manchon (2017)
A. Manchon, “Spin Hall magnetoresistance in antiferromagnet/normal metal bilayers,” Phys. Status Solidi RRL 11, 1600409 (2017).
Fischer et al. (2018)
J. Fischer, O. Gomonay, R. Schlitz, K. Ganzhorn, N. Vlietstra, M. Althammer, H. Huebl, M. Opel, R. Gross, S. T. B. Goennenwein, and S. Geprägs, “Spin Hall magnetoresistance in antiferromagnet/heavy-metal heterostructures,” Phys. Rev. B 97, 014417 (2018).
Baldrati et al. (2018)
L. Baldrati, A. Ross, T. Niizeki, C. Schneider, R. Ramos, J. Cramer, O. Gomonay, M. Filianina, T. Savchenko, D. Heinze, A. Kleibert, E. Saitoh, J. Sinova, and M. Kläui, “Full angular dependence of the spin Hall and ordinary magnetoresistance in epitaxial antiferromagnetic NiO(001)/Pt thin films,” Phys. Rev. B 98, 024422 (2018).
Lebrun et al. (2018)
R. Lebrun, A. Ross, S. A. Bender, A. Qaiumzadeh, L. Baldrati, J. Cramer, A. Brataas, R. A. Duine, and M. Kläui, “Tunable long-distance spin transport in a crystalline antiferromagnetic iron oxide,” Nature 561, 222–225 (2018).
Bonn (2006)
D. A. Bonn, “Are high-temperature superconductors exotic?” Nat. Phys. 2, 159 (2006).
Hig (2006)
“Towards a complete theory of high ${T}_c$,” Nat. Phys. 2, 138 (2006).
Wannier (1937)
G. H. Wannier, “The structure of electronic excitation levels in insulating crystals,” Phys. Rev. 52, 191–197 (1937).
Blatt et al. (1962)
J. M. Blatt, K. W. Böer, and W. Brandt, “Bose-Einstein condensation of excitons,” Phys. Rev. 126, 1691–1692 (1962).
Casella (1963)
R. C. Casella, “A criterion for exciton binding in dense electron—hole systems—application to line narrowing observed in GaAs,” J. Appl. Phys. 34, 1703–1705 (1963).
O’Hara et al. (1999)
K. E. O’Hara, L. Ó Súilleabháin, and J. P. Wolfe, “Strong nonradiative recombination of excitons in $\mathrm{Cu}_{2}\mathrm{O}$ and its impact on Bose-Einstein statistics,” Phys. Rev. B 60, 10565–10568 (1999).
Klimov et al. (2000)
V. I. Klimov, A. A. Mikhailovsky, D. W. McBranch, C. A. Leatherdale, and M. G. Bawendi, “Quantization of multiparticle Auger rates in semiconductor quantum dots,” Science 287, 1011–1013 (2000).
Wang et al. (2004)
F. Wang, G. Dukovic, E. Knoesel, L. E. Brus, and T. F. Heinz, “Observation of rapid Auger recombination in optically excited semiconducting carbon nanotubes,” Phys. Rev. B 70, 241403 (2004).
Wang et al. (2006)
F. Wang, Y. Wu, M. S. Hybertsen, and T. F. Heinz, “Auger recombination of excitons in one-dimensional systems,” Phys. Rev. B 73, 245424 (2006).
Lozovik and Yudson (1975)
Y. E. Lozovik and V. I. Yudson, “Feasibility of superfluidity of paired spatially separated electrons and holes; a new superconductivity mechanism,” JETP Lett. 22 (1975).
Lozovik and Yudson (1976)
Y. E. Lozovik and V. I. Yudson, “Superconductivity at dielectric pairing of spatially separated quasiparticles,” Solid State Commun. 19, 391–393 (1976).
Lozovik and Yudson (1977)
Y. E. Lozovik and V. I. Yudson, “Electron—hole superconductivity. Influence of structure defects,” Solid State Commun. 21, 211–215 (1977).
Tutuc et al. (2004)
E. Tutuc, M. Shayegan, and D. A. Huse, “Counterflow measurements in strongly correlated GaAs hole bilayers: Evidence for electron-hole pairing,” Phys. Rev. Lett. 93, 036802 (2004).
Kellogg et al. (2004)
M. Kellogg, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Vanishing Hall resistance at high magnetic field in a double-layer two-dimensional electron system,” Phys. Rev. Lett. 93, 036801 (2004).
Nandi et al. (2012)
D. Nandi, A. D. K. Finck, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Exciton condensation and perfect Coulomb drag,” Nature 488, 481 (2012).
Spielman et al. (2000)
I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Resonantly enhanced tunneling in a double layer quantum Hall ferromagnet,” Phys. Rev. Lett. 84, 5808–5811 (2000).
Spielman et al. (2001)
I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Observation of a linearly dispersing collective mode in a quantum Hall ferromagnet,” Phys. Rev. Lett. 87, 036803 (2001).
Eisenstein and MacDonald (2004)
J. P. Eisenstein and A. H. MacDonald, “Bose-Einsteincondensationofexcitonsinbilayerelectronsystems,” Nature 432, 691(2004).
Wiersma et al. (2004)
R. D. Wiersma,J. G. S. Lok,S. Kraus,W. Dietsche,K. vonKlitzing,D. Schuh,M. Bichler,H.-P. Tranitz, and W. Wegscheider, “Activatedtransportintheseparatelayersthatformthe${\nu}_{T}=1$excitoncondensate,” Phys.Rev.Lett. 93, 266805(2004).
Eisenstein (2014)
J.P. Eisenstein, “ExcitoncondensationinbilayerquantumHallsystems,” Annu.Rev.Condens.MatterPhys. 5, 159–181(2014).
Li et al. (2017)
J. I. A. Li,T. Taniguchi,K. Watanabe,J. Hone, and C. R. Dean, “Excitonicsuperfluidphaseindoublebilayergraphene,” Nat.Phys. 13, 751(2017).
Liu et al. (2017)
X. Liu,K. Watanabe,T. Taniguchi,B. I. Halperin, and P. Kim, “QuantumHalldragofexcitoncondensateingraphene,” Nat.Phys. 13, 746(2017).
Burg et al. (2018)
G. W. Burg,N. Prasad,K. Kim,T. Taniguchi,K. Watanabe,A. H. MacDonald,L. F. Register, and E. Tutuc, “Stronglyenhancedtunnelingattotalchargeneutralityindouble-bilayergraphene-WSe$_{2}$heterostructures,” Phys.Rev.Lett. 120, 177702(2018).
Holstein and Primakoff (1940)
T. Holstein and H. Primakoff, “Fielddependenceoftheintrinsicdomainmagnetizationofaferromagnet,” Phys.Rev. 58, 1098–1113(1940).
Zener (1951)
C. Zener, “Interactionbetweenthe$d$shellsinthetransitionmetals,” Phys.Rev. 81, 440–444(1951).
Kasuya (1956)
T. Kasuya, “ATheoryofMetallicFerro-andAntiferromagnetismonZener’sModel,” Prog.Theor.Phys. 16, 45–57(1956).
(56)
SeetheSupplementalMaterial,whichincludesRef.Coleman (2015),forthefullderivationoftheeffectivepotential,gapequation,andcriticaltemperature.
Kamra et al. (2019)
A. Kamra,E. Thingstad,G. Rastelli,R. A. Duine,A. Brataas,W. Belzig, and A Sudbø, “AntiferromagneticMagnonsasHighlySqueezedFockStatesunderlyingQuantumCorrelations,” arXiv (2019), arXiv:1904.04553.
Fil and Shevchenko (2018)
D. V. Fil and S. I. Shevchenko, “Electron-holesuperconductivity(review),” LowTemp.Phys. 44, 867–909(2018).
Bardeen et al. (1957)
J. Bardeen,L. N. Cooper, and J. R. Schrieffer, “Microscopictheoryofsuperconductivity,” Phys.Rev. 106, 162–164(1957).
Kopnin (2001)
N. Kopnin, TheoryofNonequilibriumSuperconductivity, InternationalSeriesofMonographsonPhysics (ClarendonPress, 2001).
Satoh et al. (2010)
T. Satoh,S.-J. Cho,R. Iida,T. Shimura,K. Kuroda,H. Ueda,Y. Ueda,B. A. Ivanov,F. Nori, and M. Fiebig, “SpinoscillationsinantiferromagneticNiOtriggeredbycircularlypolarizedlight,” Phys.Rev.Lett. 105, 077402(2010).
Kellogg et al. (2002)
M. Kellogg,I. B. Spielman,J. P. Eisenstein,L. N. Pfeiffer, and K. W. West, “ObservationofquantizedHalldraginastronglycorrelatedbilayerelectronsystem,” Phys.Rev.Lett. 88, 126804(2002).
Coleman (2015)
P. Coleman, IntroductiontoMany-BodyPhysics (CambridgeUniversityPress, 2015). |
TUW - 93 - 01
Comment on Gravity and the Poincaré Group
THOMAS STROBL***e-mail:
tstrobl@email.tuwien.ac.at
Institut für Theoretische Physik
Technische Universität Wien
Wiedner Hauptstr. 8-10, A-1040 Vienna
Austria
submitted to: Phys. Rev. D
Following the approach of Grignani and Nardelli [1], we show how to
cast the two–dimensional model ${\cal L}\sim curv^{2}+torsion^{2}+cosm.const$
— and in fact any theory of gravity —
into the form of a Poincaré gauge theory.
By means of the above example we then clarify the limitations of this
approach:
The diffeomorphism invariance of the action still leads
to a nasty constraint algebra.
Moreover, by simple changes
of variables (e.g. in a path integral) one can reabsorb all the
modifications of the original theory.
Vienna, January 1993
The similarities of Cartan’s formulation of gravity to the
gauge theories responsible for the remaining interactions have again and again
led to
attempts to reformulate gravity as a gauge theory (cf. e.g. [2]).
The reformulation of
pure 2+1 dim gravity as a Poincaré gauge theory with
Chern–Simons action [3] and of 1+1 dim Liouville
gravity as a SO(2,1) ($\Lambda\neq 0$) resp. ISO(1,1) ($\Lambda=0$)
gauge theory of the BF–type
[4] certainly
spurred such endeavors, all the more since in these cases it was
crucial for the successful quantization.
By introducing the so–called Poincaré coordinates $q^{a}(x^{\mu})$
as auxiliary fields, Grignani and Nardelli formulated several
gravitational theories as Poincaré gauge theories [1].
Although their gauge theoretical formulation is
equivalent to the original theories, it,
to our mind, misses the decisive advantage for quantization
present in the above mentioned works, i.e. the ability to ’eat
up’ the diffeomorphism invariance of the respective
gravitational theory by gauge transformations and, correlated to
that, to be left with the quite well–known space of flat
connections (cf. also
[5]). Moreover, as we shall illustrate at the 2 dim model of
NonEinsteinian Gravity given by [6]
$${\cal L}\,=e\,(-\frac{\gamma}{4}\,R^{2}+\frac{\beta}{2}\,T^{2}-\lambda),$$
(1)
it is not only possible to formulate any gravitational theory as a
Poincaré
gauge theory along the lines of [1], but all of these
formulations are trivially equivalent to the original ones after an
appropriate shift of variables so that the Poincaré coordinates drop
out completely. (The reader who is further interested in the
classical and quantum mechanical aspects of the integrable model
(1) is referred to the literature [7], [8],
[9], [10], as well as references
therein. Here (1) serves only as a nontrivial two–dimensional
example for the present considerations.)
The basic quantities in (1) are
the orthonormal one–forms $e^{a}$, $e\equiv\det e_{\mu}{}^{a}$, the SO(1,1) connection $\omega$, the Ricci
scalar $R=2\ast{\rm d}\omega$, and
$T^{a}=\ast\Theta^{a}$ with the torsion two–form $\Theta^{a}\equiv{\rm D}e^{a}$.
The first order (or Hamiltonian) form of (1)
is
$${\cal L}_{H}=-e({\pi_{2}\over 2}R+\pi_{a}T^{a}-E),$$
(2)
with
$$E\;\equiv\;\frac{1}{4\gamma}\,(\pi_{2})^{2}-\frac{1}{2\beta}\,\pi^{2}-\lambda,$$
(3)
as is most easily seen [9] by plugging the field equations for the
momenta $\pi_{A}\equiv(\pi_{a},\pi_{2})$ back into (2).
The first two terms of ${\cal L}_{H}{\rm d}^{2}x$ can be
rewritten in a standard manner as $\pi_{A}F^{A}$ in which $F^{A}$
is the curvature two–form of the appropriate Poincaré group ISO(1,1):
$$F\equiv{\rm d}A+A\wedge A=\Theta^{a}P_{a}+{\rm d}\omega J.$$
This follows by making use of the iso(1,1) Lie algebra
$[P_{a},P_{b}]=0$, $[P_{a},J]=\varepsilon_{a}{}^{b}P_{b}$ and setting
$$A=e^{a}P_{a}+\omega J.$$
(4)
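These commutation relations can be checked in the defining $3\times 3$ representation acting on column vectors $(x^{0},x^{1},1)$. The following sketch is our own illustration (the metric and sign conventions are chosen here for the check, not taken from the text) and verifies $[P_{a},P_{b}]=0$ and $[P_{a},J]=\varepsilon_{a}{}^{b}P_{b}$:

```python
# Sketch (our own conventions): iso(1,1) relations in the defining
# 3x3 representation acting on (x^0, x^1, 1).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def comm(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(3)] for i in range(3)]

def lincomb(c, Ms):
    return [[sum(c[b] * Ms[b][i][j] for b in range(2)) for j in range(3)]
            for i in range(3)]

# Translations P_0, P_1 and the boost J (sign fixed to match eps below).
P = [[[0, 0, 1], [0, 0, 0], [0, 0, 0]],
     [[0, 0, 0], [0, 0, 1], [0, 0, 0]]]
J = [[0, -1, 0], [-1, 0, 0], [0, 0, 0]]

# eps_a^b = eps_{ac} eta^{cb} with eta = diag(-1, 1) and eps_{01} = 1.
eps = [[0, 1], [1, 0]]

assert comm(P[0], P[1]) == [[0] * 3 for _ in range(3)]  # [P_a, P_b] = 0
for a in range(2):
    assert comm(P[a], J) == lincomb(eps[a], P)          # [P_a, J] = eps_a^b P_b
```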
The above identifications determine the behavior of the last term in
(2) under Poincaré transformations, which then is obviously not
ISO(1,1) invariant. In the spirit of [1] we therefore replace it
by
$${1\over 2}\varepsilon_{ab}{\cal D}q^{a}\wedge{\cal D}q^{b}\,\tilde{E}$$
(5)
with $\tilde{E}$
evolving from (3) by the substitution $\pi_{2}\to\tilde{\pi}_{2}$,
$$\tilde{\pi}_{2}\equiv\pi_{2}-\varepsilon^{a}{}_{b}\pi_{a}q^{b}.$$
(6)
In (5) $q^{a}$ are auxiliary fields transforming under the defining
representation of the Poincaré group and ${\cal D}$ is a covariant derivative
ensuring that ${\cal D}q^{a}$ transforms homogeneously, i.e. as a Lorentz vector:
$${\cal D}q^{a}\equiv dq^{a}+\varepsilon^{a}{}_{b}\omega q^{b}+e^{a}=:V^{a}.$$
(7)
The complete ISO(1,1) invariant action density $\tilde{\cal L}$ is then
given by ($V\equiv\det V_{\mu}{}^{a}$):
$$\tilde{\cal L}=\pi_{A}F^{A}_{01}+V\tilde{E}.$$
(8)
That (8) is (classically) equivalent to (2) is already
intuitively clear from the observation that the two Lagrangians
coincide for $q^{a}=0$, which is an always attainable gauge choice (the
so–called ’physical gauge’)
due to $q^{a}\sim q^{a}+\rho^{a}$. Formally it can be verified by
means of the second Noether theorem (cf. e.g. [11])
corresponding to the above symmetry; one
obtains (for any $S$ with the same symmetry and field content as
$\int d^{2}x\tilde{\cal L}$)
$${\delta S\over\delta q^{a}}=\varepsilon^{b}{}_{a}\omega_{\mu}{\delta S\over\delta e_{\mu}{}^{b}}+\partial_{\mu}{\delta S\over\delta e_{\mu}{}^{a}}+\varepsilon^{b}{}_{a}\pi_{b}{\delta S\over\delta\pi_{2}}$$
so that the variation with respect to $q^{a}$ never yields new field equations.
Due to (7), varying $\tilde{\cal L}$ with respect $e^{a}$, $\omega$,
as well as $\pi_{A}$, and then choosing the physical gauge,
one obviously regains the corresponding variations of ${\cal L}_{H}$.
Thus (8) is a gauge theoretic formulation of 2d NonEinsteinian
Gravity. But does this — and the other Poincaré formulations for
gravitational theories [1] except the specific ones mentioned in the
first paragraph — provide a promising approach for quantization?
One aspect of the answer to such a question is provided by a
Dirac–Hamiltonian analysis. (8) is already in a first
order form. Instead of applying the procedure suggested in
[12], however,
it is in this case more useful to rewrite the term $\varepsilon_{ac}V_{1}{}^{c}\tilde{E}\dot{q}^{a}$ in $\tilde{\cal L}$ as $\,p_{a}\dot{q}^{a}+\lambda^{a}(p_{a}-\varepsilon_{ac}V_{1}{}^{c}\tilde{E})$; denoting the ’rewritten’ Lagrangian by $\tilde{\cal L}_{H}$,
which equals $\tilde{\cal L}$ when integrating out $\lambda^{a}$ and $p_{a}$, we obtain
$$\tilde{\cal L}_{H}=\pi_{A}\dot{A}_{1}^{A}+p_{a}\dot{q}^{a}-\tilde{\cal H}$$
(9)
with
$$\tilde{\cal H}=-A_{0}^{A}G_{A}+\lambda^{a}J_{a}+\partial_{1}(A_{0}^{A}\pi_{A})$$
(10)
$$G_{a}=\partial_{1}\pi_{a}-\varepsilon_{a}{}^{b}\,A_{1}^{2}\,\pi_{b}+p_{a}$$
$$G_{2}=\partial_{1}\pi_{2}+\varepsilon^{a}{}_{b}\,A_{1}^{b}\,\pi_{a}+\varepsilon^{a}{}_{b}\,q^{b}p_{a}$$
$$J_{a}=\varepsilon_{ab}V_{1}{}^{b}\tilde{E}-p_{a}.$$
(11)
From $\tilde{\cal L}_{H}$
the Hamiltonian structure is obvious: The phase space is spanned by the
canonical coordinates $(A_{1}^{A},q^{a};\pi_{A},p_{a})$ and the (arbitrary) Lagrange
multipliers $A_{0}^{A}$, $\lambda^{a}$ enforce the vanishing of the constraints
(11) [at least when disregarding the surface term in (10),
e.g. due to periodic boundary conditions in $x^{1}$].
All of the constraints are first class:
The $G_{A}$ are exactly the generators of
ISO(1,1) gauge transformations in the phase space, therefore they satisfy
the corresponding Lie algebra
$$\{G_{a},G_{2}\}=\varepsilon_{a}{}^{b}G_{b}\,\delta\qquad\{G_{a},G_{b}\}=0,$$
$J_{a}$ behaves as a Lorentz vector under ISO(1,1) gauge transformations
[cf. (7)], which leads to
$$\{J_{a},G_{2}\}=\varepsilon_{a}{}^{b}J_{b}\,\delta\qquad\{J_{a},G_{b}\}=0,$$
and a straightforward, somewhat lengthy computation yields
$$\{J_{a},J_{b}\}=\varepsilon_{ab}[{1\over 2\gamma}\tilde{\pi}_{2}G_{2}-({1\over\beta}\pi^{c}+{1\over 2\gamma}\pi_{2}\varepsilon^{c}{}_{d}q^{d})G_{c}-{1\over\beta}\pi^{c}J_{c}]\,\delta.$$
So in contrast to the gauge theoretical formulations quoted in the first
paragraph of this note here the constraint algebra is not just a
representation of the Lie algebra of the gauge group. There
appear additional
constraints responsible for diffeomorphisms, and as usual this
leads to structure functions of the constraint algebra, one of the
characteristic difficulties of gravity. The reformulation of (1)
as (8) has by no means simplified the Hamiltonian structure of the
former (cf. [8]),
which is reobtained from the above in the gauge choice $q^{a}=0$
[this gauge allows one to eliminate the $q^{a}$ and $G_{a}$ (or $J_{a}$)
via Dirac brackets,
leaving a phase space spanned by the still conjugate variables $A_{1}^{A}$ and $\pi_{A}$].
The reason for the appearance of the diffeomorphism constraints
$J_{a}$ is already obvious from (8): On forms a diffeomorphism $x\to x-f(x)$ acts as the Lie derivative $L_{f}$, which, acting on a connection
one–form $A$, is given by the well–known formula
$$L_{f}A=i_{f}F+D(i_{f}A).$$
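For completeness (this derivation is not spelled out in the text), the formula follows from Cartan’s identity $L_{f}=i_{f}{\rm d}+{\rm d}\,i_{f}$:
$$L_{f}A=i_{f}{\rm d}A+{\rm d}(i_{f}A)=i_{f}(F-A\wedge A)+{\rm d}(i_{f}A)=i_{f}F+[A,i_{f}A]+{\rm d}(i_{f}A)=i_{f}F+D(i_{f}A),$$
where $i_{f}(A\wedge A)=(i_{f}A)A-A(i_{f}A)=-[A,i_{f}A]$ because $i_{f}A$ is a zero–form, and $D$ acts on the zero–form $i_{f}A$ as ${\rm d}+[A,\,\cdot\,]$.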
Thus an infinitesimal diffeomorphism can be generated on–shell by a gauge
transformation iff the field equations enforce
$F=0$.
This is the
case for $\tilde{\cal L}$ only in the limit $\beta\to\infty,\gamma\to\infty$,
which for $\lambda=0$
reproduces just the Liouville gravity with vanishing cosmological
constant, mentioned in the first paragraph
(for $\lambda\neq 0$ cf. below).
(Let us note on this occasion
that after the identification given in (4)
the most complicated 1+1 dim theory of gravity which can be formulated
as a gauge theory with (part of the) field equations being $F=0$
is given by $R=2c_{2}=const,\,T^{a}=c^{a}=const$.
The corresponding gauge group is defined through
$[P_{a},P_{b}]=\varepsilon_{ab}(c_{2}J+c^{d}P_{d})$, $[P_{a},J]=\varepsilon_{a}{}^{b}P_{b}$, reproducing
[4] for the case $c^{a}=0$. Also when allowing for more than three
generators (cf. [13]), it will never be possible to formulate
(1) as a BF–theory.)
In our context even more striking, however,
is the following observation.
Using the definition for $\tilde{\pi}_{2}$ and $V^{a}$ one easily
verifies
$$\pi_{A}F^{A}\equiv\pi_{a}(de^{a}+\varepsilon^{a}{}_{b}\omega\wedge e^{b})+\pi_{2}d\omega=\pi_{a}(dV^{a}+\varepsilon^{a}{}_{b}\omega\wedge V^{b})+\tilde{\pi}_{2}d\omega$$
(12)
so that we find
$$\tilde{\cal L}(e^{a},\omega,\pi_{a},\pi_{2},q^{a})={\cal L}_{H}(V^{a},\omega,\pi_{a},\tilde{\pi}_{2})\sim{\cal L}(V^{a},\omega).$$
(13)
That is to say
the appearance of $e^{a}$ and $q^{a}$ within $\tilde{\cal L}$ can be reabsorbed
into the ’true vielbein’ $V^{a}$ and a redefinition of the field $\pi_{2}$;
integrating out $\pi_{A}$, moreover,
one ends up with (1) in
which $e^{a}$ has been replaced by $V^{a}$. At the classical
level this redefinition (6), (7)
of coordinates can be performed either before or
after an extremization of the action. But also
within a path integral the
corresponding functional determinant is just
unity. Therefore the Poincaré gauge theoretic
formulation of (1), given in (8),
reduces to a mere renaming of $e^{a}$ by $V^{a}$.
That this is not a special feature of the present model (1)
shall be illustrated by means of the action $S_{S}$ of a scalar field
$\varphi$ coupled to 4 dim Poincaré gravity which was given in the first
ref. [1]:
$$S_{S}=-{1\over 3}\int_{M_{4}}\varepsilon_{abcd}{\cal D}q^{a}\wedge{\cal D}q^{b}\wedge{\cal D}q^{c}\wedge(\varphi{\cal D}\varphi^{d}-\varphi^{d}d\varphi+\tilde{\varphi}^{e}\tilde{\varphi}_{e}{\cal D}q^{d})$$
with (the Lorentz vectors)
$${\cal D}\varphi^{a}=d\varphi^{a}+\omega^{a}{}_{b}\,\varphi^{b}+m^{2}e^{a}\varphi\,,\qquad\tilde{\varphi}^{a}=\varphi^{a}-m^{2}q^{a}\varphi.$$
(14)
Although not obvious at first sight,
after shifting the auxiliary field $\varphi^{a}$ according
to (14),
also here $e^{a}$ and $q^{a}$ can be recombined into the combination
$V^{a}$ given in (7). One obtains
$$S_{S}=-{1\over 3}\int_{M_{4}}\varepsilon_{abcd}V^{a}\wedge V^{b}\wedge V^{c}\wedge[\varphi D\tilde{\varphi}^{d}-\tilde{\varphi}^{d}d\varphi+(\tilde{\varphi}^{e}\tilde{\varphi}_{e}+m^{2}\varphi^{2})V^{d}]$$
with the Lorentz covariant derivative
$$D\tilde{\varphi}^{a}=d\tilde{\varphi}^{a}+\omega^{a}{}_{b}\tilde{\varphi}^{b}.$$
As a byproduct of these considerations
let us note the incorrectness of the statement in the
appendix of the second ref. [1],
namely that one always has $V\neq 0$ in the case of
2d black hole gravity in its Poincaré formulation. The latter is given
by (8) in the limit $\beta\to\infty,\gamma\to\infty$. According to
(13) this Lagrangian is equal to (2)
in the same limit [which
does not affect (6), (7)] when exchanging
$e^{a}$ by $V^{a}$. Since this Lagrangian in turn
can be understood as a Poincaré gauge theory [14] with connection
(4), $V^{a}\leftrightarrow e^{a}$, — the cosmological constant term yields only a
surface term under ISO(1,1) transformations —
any axial gauge, obtainable at least locally, leads to $V=0$.
Clearly one can vice versa obtain a Poincaré gauge theory from any
theory of gravity by the replacement $e^{a}:={\cal D}q^{a}$, or, even simpler,
one can regard any theory of gravity as being already a Lorentz gauge theory
when allowing for one–forms $e^{a}$ in the fundamental
representation. However, this does not seem to provide any
advantage in
the quest for quantization of gravity.
The author wants to thank W. Kummer and D. J. Schwarz for reading the
manuscript.
References
[1]
G. Grignani and G. Nardelli,
Phys. Rev. D45, 2719 (1992); Poincaré Gauge
Theories for Lineal Gravity, preprint DFUPG–57–1992, UTF–266–1992.
[2]
R. U. Sexl and H. K. Urbantke, Gravitation und
Kosmologie, 3. korr. Aufl., Mannheim: BI-Wiss.-erl., 1987, and references
therein, especially: F. W. Hehl, P. von der Heyde and G. D. Kerlick,
Rev. Mod. Phys. 48, 393 (1976).
[3]
A. Achucarro and P. Townsend, Phys. Lett. B180,
85 (1986); E. Witten, Nucl. Phys. B311, 46 (1988).
[4]
T. Fukuyama and K. Kamimura, Phys. Lett. 160B,
259 (1985); K. Isler and C. Trugenberger, Phys. Rev. Lett. 63,
834 (1989); A. Chamsedine and D. Wyler, Phys. Lett. B228,
75 (1989); Nucl. Phys. B340, 595 (1990).
[5]
D. Birmingham, M. Blau, M. Rakowski and G. Thompson,
Phys. Rep.
209, 129 (1991).
[6]
M. O. Katanaev and I. V. Volovich,
Phys. Lett. B175, 413 (1986).
[7]
M. O. Katanaev and I. V. Volovich, Ann. Phys.
(N.Y.) 197, 1 (1990); M. O. Katanaev, J. Math. Phys.
31, 882 (1990); J. Math. Phys. 32, 2483 (1991); All
universal coverings of two–dimensional gravity with torsion,
preprint; W. Kummer and D. J. Schwarz,
Phys. Rev. D45, 3628 (1992).
[8]
H. Grosse, W. Kummer, P. Pre$\check{\rm s}$najder and D. J.
Schwarz, J. Math. Phys. 33, 3892 (1992);
T. Strobl, Int.
J. Mod. Phys. A 8, 1383 (1993).
[9]
N. Ikeda and K.I. Izawa, Quantum Gravity with Dynamical Torsion in Two Dimensions,
Kyoto preprint RIMS–888 (1992).
[10]
P. Schaller and T. Strobl, Canonical Quantization of NonEinsteinian
Gravity and the Problem of Time, preprint TUW–92–13, submitted to
Classical and Quantum Gravity ; F. Haider and W. Kummer,
Quantum Functional Integrability of NonEinsteinian Gravity in
$d=2$, preprint TUW–92–21.
[11]
K. Sundermeyer, Constrained Dynamics,
Lecture Notes in Physics 169,
Springer Berlin Heidelberg New York 1982.
[12]
L. Faddeev and R. Jackiw, Phys. Rev. Lett.
60, 1692 (1988).
[13]
D. Cangemi and R. Jackiw, Phys. Rev. Lett.
69/2, 233 (1992).
[14]
H. Verlinde, in The Sixth Marcel Grossmann
Meeting on General Relativity, edited by M. Sato (World Scientific,
Singapore, 1992); for a short review cf. also the previous citation. |
Lie higher derivations on generalized matrix algebras
F. Moafian
Department of Pure Mathematics, Ferdowsi University
of Mashhad, P.O. Box 1159, Mashhad 91775, Iran.
fahimeh.moafian@yahoo.com
Abstract.
In this paper, we first characterize the structure of Lie higher derivations and higher derivations on a generalized matrix algebra; we then provide conditions under which a Lie higher derivation on a generalized matrix algebra is proper.
Finally, applications of the findings are discussed.
Key words and phrases:Lie higher derivation, higher derivation, generalized matrix algebra, triangular algebra.
2010 Mathematics Subject Classification: Primary 16W25; Secondary 47B47; 15A78
1. introduction
Let us recall some basic facts related to (Lie) higher derivations on a general algebra. Let $A$ be a unital algebra over a unital commutative ring $R$, let $\mathbb{N}$ be the set of all natural numbers and $\mathbb{N}_{0}=\mathbb{N}\cup\{0\}$.
(a)
A sequence $\mathcal{D}=\{\mathcal{D}_{k}\}_{k\in\mathbb{N}_{0}}$ (with $\mathcal{D}_{0}=id_{A}$) of linear maps on $A$ is called a higher derivation if
$$\mathcal{D}_{k}(xy)=\sum_{i+j=k}\mathcal{D}_{i}(x)\mathcal{D}_{j}(y),$$
for all $x,y\in A$ and $k\in\mathbb{N}_{0}$.
(b)
A sequence $\mathcal{L}=\{\mathcal{L}_{k}\}_{k\in\mathbb{N}_{0}}$ (with $\mathcal{L}_{0}=id_{A}$) of linear maps on $A$ is called a Lie higher derivation if
$$\mathcal{L}_{k}([x,y])=\sum_{i+j=k}[\mathcal{L}_{i}(x),\mathcal{L}_{j}(y)],$$
for all $x,y\in A$ and $k\in\mathbb{N}_{0}$, where $[\cdot,\cdot]$ stands for a commutator defined by $[x,y]=xy-yx$.
Note that $\mathcal{D}_{1}$ (resp. $\mathcal{L}_{1}$) is a derivation (resp. Lie derivation) when $\{\mathcal{D}_{k}\}_{k\in\mathbb{N}_{0}}$ (resp. $\{\mathcal{L}_{k}\}_{k\in\mathbb{N}_{0}}$) is a higher derivation (resp. Lie higher derivation).
Let $D$ (resp. $L$) be a derivation (resp. Lie derivation) on $A$; then $\mathcal{D}=\{\frac{D^{k}}{k!}\}_{k\in\mathbb{N}_{0}}$ (resp. $\mathcal{L}=\{\frac{L^{k}}{k!}\}_{k\in\mathbb{N}_{0}}$) is a higher derivation (resp. Lie higher derivation) on $A$, where $D^{0}=id_{A}$ (resp. $L^{0}=id_{A}$), the identity mapping of $A$. Higher derivations (resp. Lie higher derivations) of this kind are called ordinary higher derivations (resp. ordinary Lie higher derivations).
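As a quick numerical sanity check (our own illustration, not part of the paper), one may verify the higher Leibniz rule for the ordinary higher derivation $\{D^{k}/k!\}$ built from an inner derivation $D=[d,\cdot]$ on $M_{2}(\mathbb{Q})$, using exact rational arithmetic:

```python
# Sketch (our own illustration): D_k := D^k / k! for the inner
# derivation D = [d, .] satisfies D_k(xy) = sum_{i+j=k} D_i(x) D_j(y).
from fractions import Fraction as F
from math import factorial

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scal(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

d = [[F(1), F(2)], [F(3), F(5)]]        # any fixed element defines D = [d, .]
def Dk(x, k):                            # D_k := D^k / k!
    for _ in range(k):
        x = add(mul(d, x), scal(F(-1), mul(x, d)))
    return scal(F(1, factorial(k)), x)

x = [[F(2), F(-1)], [F(0), F(4)]]
y = [[F(1), F(3)], [F(7), F(-2)]]
for k in range(5):
    rhs = [[F(0)] * 2 for _ in range(2)]
    for i in range(k + 1):
        rhs = add(rhs, mul(Dk(x, i), Dk(y, k - i)))
    assert Dk(mul(x, y), k) == rhs       # higher Leibniz rule at order k
```

The check works for any fixed $d$, since the binomial Leibniz rule $D^{k}(xy)=\sum_{i}\binom{k}{i}D^{i}(x)D^{k-i}(y)$ holds for every derivation.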
Trivially, every higher derivation is a Lie higher derivation, but the converse is not true, in general. If $\mathcal{D}=\{\mathcal{D}_{k}\}_{k\in\mathbb{N}_{0}}$ is a higher derivation on $A$ and
$\tau=\{\tau_{k}\}_{k\in\mathbb{N}}$ is a sequence of linear maps on $A$ which is center valued (i.e. $\tau_{k}(A)\subseteq Z(A)$= the center of $A$), then $\mathcal{D}+\tau$ is a Lie higher derivation if and only if $\tau$ vanishes at commutators, i.e. $\tau_{k}([x,y])=0,$ for all $x,y\in A$ and $k\in\mathbb{N}$. Lie higher derivations of this form are called proper Lie higher derivations.
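A concrete instance (our own illustration, not from the paper): on the upper triangular matrices in $M_{2}(\mathbb{Q})$, whose center is $\mathbb{Q}\cdot I$, take $\tau_{k}(x)$ to be the $(1,1)$ entry of $x$ times $I$; this is center-valued and vanishes on commutators, whose diagonal is zero. The resulting sequence is a proper Lie higher derivation but not a higher derivation:

```python
# Sketch (our own illustration): L_k = D^k/k! + tau_k on upper
# triangular 2x2 rational matrices is a Lie higher derivation,
# yet fails the multiplicative higher Leibniz rule at k = 1.
from fractions import Fraction as F
from math import factorial

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scal(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def comm_m(X, Y):
    return add(mul(X, Y), scal(F(-1), mul(Y, X)))

d = [[F(1), F(2)], [F(0), F(5)]]        # upper triangular, so D preserves Tri
def Dk(x, k):                            # D_k := D^k / k! with D = [d, .]
    for _ in range(k):
        x = comm_m(d, x)
    return scal(F(1, factorial(k)), x)

I2 = [[F(1), F(0)], [F(0), F(1)]]
def tau(x, k):
    # center-valued; kills commutators (their diagonal vanishes in Tri)
    return scal(x[0][0], I2)

def L(x, k):                             # L_0 = id, L_k = D_k + tau_k
    return x if k == 0 else add(Dk(x, k), tau(x, k))

x = [[F(2), F(-1)], [F(0), F(4)]]
y = [[F(1), F(3)], [F(0), F(-2)]]
for k in range(1, 5):                    # Lie higher derivation rule holds
    rhs = [[F(0)] * 2 for _ in range(2)]
    for i in range(k + 1):
        rhs = add(rhs, comm_m(L(x, i), L(y, k - i)))
    assert L(comm_m(x, y), k) == rhs
# ...but L is not a higher derivation: the k = 1 Leibniz rule fails.
assert L(mul(x, y), 1) != add(mul(L(x, 1), y), mul(x, L(y, 1)))
```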
We say that an algebra $A$ has Lie higher derivation (LHD for short) property if every Lie higher derivation on it is proper. A main problem in the realm of Lie higher derivations is that, under what conditions a Lie higher derivation on an algebra is proper. Many authors have studied the problem for various algebras; see [5, 6, 7, 8, 15, 16, 17, 21, 22, 23] and references therein.
Han [7] studied Lie-type higher derivations on operator algebras. He showed that every Lie (triple) higher derivation on some
classical operator algebras is proper.
Wei and Xiao [21] have examined innerness of higher derivations on triangular algebras. They also discussed Jordan higher derivations and nonlinear Lie higher derivations on a triangular algebra in [22] and [23], respectively. Qi and Hou [17] showed that every Lie higher derivation on a nest algebra is proper. Li and Shen [8] and also Qi [16] have extended the main result of [17] for a triangular algebra by providing some sufficient conditions under which a Lie higher derivation on a triangular algebra is proper.
In this paper we investigate the LHD property for a generalized matrix algebra. Generalized matrix algebras were first introduced by Sands [18]. Here, we offer definition of a generalized matrix algebra. A Morita context $(A,B,M,N,\Phi_{MN},\Psi_{NM})$ consists of two unital algebras $A$, $B$, an $(A,B)-$module $M$, a $(B,A)-$module $N$, and two module homomorphisms $\Phi_{MN}:M\otimes_{B}N\longrightarrow A$ and $\Psi_{NM}:N\otimes_{A}M\longrightarrow B$ satisfying the following commutative diagrams:
$$\begin{CD}M\otimes_{B}N\otimes_{A}M@ >\Phi_{MN}\otimes I_{M}>>A\otimes_{A}M\\
@ VVI_{M}\otimes\Psi_{NM}V@ VV\cong V\\
M\otimes_{B}B@ >\cong>>M\end{CD}$$
and
$$\begin{CD}N\otimes_{A}M\otimes_{B}N@ >\Psi_{NM}\otimes I_{N}>>B\otimes_{B}N\\
@ VVI_{N}\otimes\Phi_{MN}V@ VV\cong V\\
N\otimes_{A}A@ >\cong>>N.\end{CD}$$
For a Morita context $(A,B,M,N,\Phi_{MN},\Psi_{NM})$, the set
$$\mathcal{G}=\left(\begin{array}[]{cc}A&M\\
N&B\\
\end{array}\right)=\Bigg{\{}\left(\begin{array}[]{cc}a&m\\
n&b\\
\end{array}\right)\Big{|}\ a\in A,\ m\in M,\ n\in N,\ b\in B\Bigg{\}}$$
forms an algebra under the usual matrix operations, where at least one of the two modules $M$ and $N$ is nonzero. The algebra $\mathcal{G}$ is called a generalized matrix algebra. In the above definition, if $N=0$, then $\mathcal{G}$ becomes the triangular algebra ${\rm Tri}(A,M,B)$, whose (Lie) derivations and their properties are extensively examined by Cheung [3].
Let $\mathcal{G}=\left(\begin{array}[]{cc}A&M\\
N&B\\
\end{array}\right)$ be a generalized matrix algebra. We are dealing with various types of faithfullness.
(1)
The $(A,B)-$module $M$ is called left (resp. right) faithful if $aM=\{0\}$ (resp. $Mb=\{0\}$) implies $a=0$ (resp. $b=0$), for all $a\in A$ (resp. $b\in B$). If $M$ is both left and right faithful, it is called faithful. The left and right faithfulness of $N$ can be defined in a similar way.
(2)
The $(A,B)-$module $M$ is called strongly faithful if
either $M$ is faithful as a right $B-$module and $am=0$ implies $a=0$ or $m=0$ for all $a\in A,m\in M$; or
$M$ is faithful as a left $A-$module and $mb=0$ implies $m=0$ or $b=0$ for all $m\in M,b\in B.$
The strong faithfulness for $N$ can be defined similarly.
(3)
The generalized matrix algebra $\mathcal{G}$ is called weakly faithful if
$$\displaystyle aM=\{0\}=Na\quad{\rm implies}\quad a=0,$$
$$\displaystyle Mb=\{0\}=bN\quad{\rm implies}\quad b=0.$$
(1.1)
It is evident that if $M$ is strongly faithful then $M$ is faithful and either $A$ or $B$ has no zero divisors. It is also trivial that if either $M$ or $N$ is faithful, then $\mathcal{G}$ is weakly faithful.
It is worth mentioning that in the case $\mathcal{G}$ is a triangular algebra the weak faithfulness of $\mathcal{G}$ is nothing more than faithfulness of $M$.
By a standard argument one can check that the center $Z(\mathcal{G})$ of $\mathcal{G}$ is
$$Z(\mathcal{G})=\{a\oplus b|\ a\in Z(A),b\in Z(B),\ am=mb,\ na=bn\quad{\rm for\ all}\quad m\in M,n\in N\},$$
where $a\oplus b=\left(\begin{array}[]{cc}a&0\\
0&b\\
\end{array}\right)\in\mathcal{G}.$
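For the simplest nontrivial instance, take the Morita context with $A=B=M=N=\mathbb{Q}$, so that $\mathcal{G}=M_{2}(\mathbb{Q})$; the condition $am=mb$ for all $m$ then forces $a=b$, and the formula predicts that the elements $a\oplus b$ of $Z(\mathcal{G})$ are exactly the scalar matrices. A brute-force sketch (our own illustration):

```python
# Sketch (our own illustration): for G = M_2(Q), a (+) b = diag(a, b)
# is central iff a == b, as the center formula predicts.
from fractions import Fraction as F
from itertools import product

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# the four matrix units span G, so commuting with them means central
basis = [[[F(1), F(0)], [F(0), F(0)]], [[F(0), F(1)], [F(0), F(0)]],
         [[F(0), F(0)], [F(1), F(0)]], [[F(0), F(0)], [F(0), F(1)]]]

def commutes_with_all(Z, gens):
    return all(mul(Z, E) == mul(E, Z) for E in gens)

for a, b in product(range(-2, 3), repeat=2):
    Zc = [[F(a), F(0)], [F(0), F(b)]]   # candidate element a (+) b
    # "am = mb for all m" reduces to a == b when M = Q
    assert commutes_with_all(Zc, basis) == (a == b)
```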
Consider two natural projections $\pi_{A}:\mathcal{G}\longrightarrow A$ and $\pi_{B}:\mathcal{G}\longrightarrow B$ by
$$\pi_{A}:\left(\begin{array}[]{cc}a&m\\
n&b\\
\end{array}\right)\mapsto a\quad{\rm and}\quad\pi_{B}:\left(\begin{array}[]{cc}a&m\\
n&b\\
\end{array}\right)\mapsto b.$$
Clearly $\pi_{A}(Z(\mathcal{G}))\subseteq Z(A)$ and $\pi_{B}(Z(\mathcal{G}))\subseteq Z(B)$. Moreover, if $\mathcal{G}$ is weakly faithful then $\pi_{A}(Z(\mathcal{G}))$ is isomorphic to $\pi_{B}(Z(\mathcal{G}))$. More precisely, there exists a unique algebra isomorphism
$$\varphi:\pi_{A}(Z(\mathcal{G}))\longrightarrow\pi_{B}(Z(\mathcal{G}))$$
such that $am=m\varphi(a)$ and $\varphi(a)n=na$ for all $m\in M$, $n\in N$; or equivalently, $a\oplus\varphi(a)\in Z(\mathcal{G})$ for all $a\in A,$ (see [1, Proposition 2.1] and
[3, Proposition 3]).
This paper is organized as follows. In Section 2, we characterize the structure of Lie higher derivations and higher derivations on the generalized matrix algebra $\mathcal{G}$. The LHD property for the generalized matrix algebra $\mathcal{G}$ is investigated in Section 3. In Section 4, we offer some alternative sufficient conditions ensuring the LHD property for $\mathcal{G}$ (Theorems 4.1, 4.2, 4.3). We then come to our main result, Theorem 4.4, collecting some sufficient conditions ensuring the LHD property for a generalized matrix algebra.
Section 5 includes some applications of our conclusions to some main examples of a generalized matrix algebra, such as: trivial generalized matrix algebras, triangular algebras, unital algebras with a nontrivial idempotent, the algebra $B(X)$ of operators on a Banach space $X$ and the full matrix algebra $M_{n}(A)$ over a unital algebra $A$. Since the proofs of Theorems 2.2 and 2.3 are long, we devote Section 6 to them.
2. The structure of (Lie) higher derivations on $\mathcal{G}$
We start this section with the following result of [11] which describes the structure of derivations and Lie derivations on a generalized matrix algebra.
Proposition 2.1 ([11, Propositions 4.1, 4.2]).
Let $\mathcal{G}$ be a generalized matrix algebra.
$\bullet$ If $A$ and $B$ are $2-$torsion free then a linear map $\mathcal{L}_{1}:\mathcal{G}\to\mathcal{G}$ is a Lie derivation if and only if it has the presentation
$$\mathcal{L}_{1}\left(\begin{array}[]{cc}a&m\\
n&b\\
\end{array}\right)=\left(\begin{array}[]{cc}\mathfrak{p}_{11}(a)+\mathfrak{p}_{12}(b)-mn_{1}-m_{1}n&am_{1}-m_{1}b+\mathfrak{f}_{13}(m)\\
n_{1}a-bn_{1}+\mathfrak{g}_{14}(n)&\mathfrak{q}_{11}(a)+\mathfrak{q}_{12}(b)+n_{1}m+nm_{1}\\
\end{array}\right),$$
where $m_{1}\in M,n_{1}\in N$ and
$\mathfrak{p}_{11}:A\longrightarrow A,\ \mathfrak{p}_{12}:B\longrightarrow A,\ \mathfrak{q}_{11}:A\longrightarrow B,\ \mathfrak{q}_{12}:B\longrightarrow B,\ \mathfrak{f}_{13}:M\longrightarrow M,\ \mathfrak{g}_{14}:N\longrightarrow N$ are linear maps satisfying the following properties:
(1)
$\mathfrak{p}_{11}$ and $\mathfrak{q}_{12}$ are Lie derivations.
(2)
$\mathfrak{p}_{12}([b,b^{\prime}])=0,\mathfrak{q}_{11}([a,a^{\prime}])=0.$
(3)
$\mathfrak{p}_{12}(B)\subseteq Z(A),\mathfrak{q}_{11}(A)\subseteq Z(B).$
(4)
$\mathfrak{f}_{13}(am)=\mathfrak{p}_{11}(a)m-m\mathfrak{q}_{11}(a)+a\mathfrak{f}_{13}(m)$, $\mathfrak{f}_{13}(mb)=m\mathfrak{q}_{12}(b)-\mathfrak{p}_{12}(b)m+\mathfrak{f}_{13}(m)b$.
(5)
$\mathfrak{g}_{14}(na)=n\mathfrak{p}_{11}(a)-\mathfrak{q}_{11}(a)n+\mathfrak{g}_{14}(n)a$, $\mathfrak{g}_{14}(bn)=\mathfrak{q}_{12}(b)n-n\mathfrak{p}_{12}(b)+b\mathfrak{g}_{14}(n)$.
(6)
$\mathfrak{p}_{11}(mn)-\mathfrak{p}_{12}(nm)=m\mathfrak{g}_{14}(n)+\mathfrak{f}_{13}(m)n$, $\mathfrak{q}_{12}(nm)-\mathfrak{q}_{11}(mn)=\mathfrak{g}_{14}(n)m+n\mathfrak{f}_{13}(m)$.
$\bullet$ A linear map $\mathcal{D}_{1}:\mathcal{G}\to\mathcal{G}$ is a derivation if and only if it has the presentation
$$\mathcal{D}_{1}\left(\begin{array}[]{cc}a&m\\
n&b\\
\end{array}\right)=\left(\begin{array}[]{cc}\mathtt{p}_{11}(a)-mn_{1}-m_{1}n&am_{1}-m_{1}b+\mathtt{f}_{13}(m)\\
n_{1}a-bn_{1}+\mathtt{g}_{14}(n)&\mathtt{q}_{12}(b)+n_{1}m+nm_{1}\\
\end{array}\right),$$
where $m_{1}\in M,n_{1}\in N$ and
$\mathtt{p}_{11}:A\longrightarrow A,\ \mathtt{q}_{12}:B\longrightarrow B,\ \mathtt{f}_{13}:M\longrightarrow M,\ \mathtt{g}_{14}:N\longrightarrow N$ are linear maps satisfying the following properties:
(a)
$\mathtt{p}_{11}$ and $\mathtt{q}_{12}$ are derivations.
(b)
$\mathtt{f}_{13}(am)=\mathtt{p}_{11}(a)m+a\mathtt{f}_{13}(m)$, $\mathtt{f}_{13}(mb)=m\mathtt{q}_{12}(b)+\mathtt{f}_{13}(m)b$.
(c)
$\mathtt{g}_{14}(na)=n\mathtt{p}_{11}(a)+\mathtt{g}_{14}(n)a$, $\mathtt{g}_{14}(bn)=\mathtt{q}_{12}(b)n+b\mathtt{g}_{14}(n)$.
(d)
$\mathtt{p}_{11}(mn)=m\mathtt{g}_{14}(n)+\mathtt{f}_{13}(m)n$, $\mathtt{q}_{12}(nm)=\mathtt{g}_{14}(n)m+n\mathtt{f}_{13}(m).$
We now prepare to describe the structure of Lie higher derivations and higher derivations on a generalized matrix algebra. We start by fixing some notation needed in the sequel.
2.1. Some notations
Throughout, $\mathbb{N}$ stands for the natural numbers and $\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}$. For each $k\in\mathbb{N}$, we define $\eta_{k}$ and $\nu_{k}$ by
$$\eta_{k}:=\left\{\begin{array}[]{ll}(k-1)/2&k\mbox{ is odd}\\
k/2&k\mbox{ is even}\end{array}\right.\quad{\rm and}\quad\nu_{k}:=\left\{\begin{array}[]{ll}(k-1)/2&k\mbox{ is odd}\\
k/2-1&k\mbox{ is even}\end{array}\right..$$
Let $k\in\mathbb{N}$, $\alpha_{i},\beta_{i}\in\{1,2,\cdots,k\}$ $(1\leq i\leq r)$, $n_{\alpha_{i}}\in N$ and $m_{\beta_{i}}\in M$. We define
$(\alpha+\beta)_{r}:=\sum_{i=1}^{r}(\alpha_{i}+\beta_{i})$,
$(n_{\alpha}m_{\beta})^{r}:=n_{\alpha_{2}}m_{\beta_{2}}\ldots n_{\alpha_{r}}m_{\beta_{r}}$ and $(n_{\alpha}m_{\beta})_{r}:=n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{2}}m_{\beta_{2}}$,
$(m_{\beta}n_{\alpha})^{r}:=m_{\beta_{2}}n_{\alpha_{2}}\ldots m_{\beta_{r}}n_{\alpha_{r}}$ and $(m_{\beta}n_{\alpha})_{r}:=m_{\beta_{r}}n_{\alpha_{r}}\ldots m_{\beta_{2}}n_{\alpha_{2}}$.
We also define $\mathcal{N}_{k}$ and $\mathcal{M}_{k}$ by $\mathcal{N}_{1}:=n_{1},\ \mathcal{M}_{1}:=m_{1},\ \mathcal{N}_{2}:=n_{2},\ \mathcal{M}_{2}:=m_{2}$ and, for each $k\geq 3$,
$$\mathcal{N}_{k}=\sum_{r=1}^{\nu_{k}}\sum_{(\alpha+\beta)_{r}+\gamma=k}\Big(\prod_{\rho=1}^{r}n_{\alpha_{\rho}}m_{\beta_{\rho}}\Big)n_{\gamma}+n_{k},$$
$$\mathcal{M}_{k}=\sum_{r=1}^{\nu_{k}}\sum_{(\alpha+\beta)_{r}+\gamma=k}\Big(\prod_{\rho=1}^{r}m_{\beta_{\rho}}n_{\alpha_{\rho}}\Big)m_{\gamma}+m_{k}.$$
For example, for $k=3$ and $k=4$ we have:
$\mathcal{N}_{3}=n_{1}m_{1}n_{1}+n_{3},\,\mathcal{M}_{3}=m_{1}n_{1}m_{1}+m_{3}.$
$\mathcal{N}_{4}=n_{1}m_{1}n_{2}+n_{1}m_{2}n_{1}+n_{2}m_{1}n_{1}+n_{4},\ \mathcal{M}_{4}=m_{1}n_{1}m_{2}+m_{1}n_{2}m_{1}+m_{2}n_{1}m_{1}+m_{4}.$
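The sums defining $\mathcal{N}_{k}$ can be enumerated mechanically. The following Python sketch (an illustration with ad hoc names) lists the summands of $\mathcal{N}_{k}$ as formal words in the letters $n_{i},m_{j}$ and reproduces the $k=3$ and $k=4$ examples above.

```python
from itertools import product

def nu(k):
    return (k - 1) // 2 if k % 2 else k // 2 - 1

def script_N(k):
    """All summands of mathcal{N}_k as formal words (k >= 3).

    A summand for given r is n_{a1} m_{b1} ... n_{ar} m_{br} n_{g} with
    a1+b1+...+ar+br+g = k and all indices >= 1, plus the lone word n_k.
    """
    words = [("n%d" % k,)]
    for r in range(1, nu(k) + 1):
        # choose 2r+1 positive indices summing to k
        for idx in product(range(1, k + 1), repeat=2 * r + 1):
            if sum(idx) != k:
                continue
            *pairs, g = idx
            word = []
            for i in range(r):
                word += ["n%d" % pairs[2 * i], "m%d" % pairs[2 * i + 1]]
            word.append("n%d" % g)
            words.append(tuple(word))
    return sorted(words)

assert script_N(3) == sorted([("n1", "m1", "n1"), ("n3",)])
assert script_N(4) == sorted([("n1", "m1", "n2"), ("n1", "m2", "n1"),
                              ("n2", "m1", "n1"), ("n4",)])
```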
2.2. The structure of Lie higher derivations on $\mathcal{G}$
It is easy to check that every sequence $\{\mathcal{L}_{k}\}_{k\in\mathbb{N}_{0}}$ of linear mappings on a generalized matrix algebra $\mathcal{G}=\left(\begin{array}{cc}A&M\\ N&B\end{array}\right)$ enjoys the presentation
$$\mathcal{L}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}\mathfrak{p}_{1k}(a)+\mathfrak{p}_{2k}(b)+\mathfrak{p}_{3k}(m)+\mathfrak{p}_{4k}(n)&\mathfrak{f}_{1k}(a)+\mathfrak{f}_{2k}(b)+\mathfrak{f}_{3k}(m)+\mathfrak{f}_{4k}(n)\\ \mathfrak{g}_{1k}(a)+\mathfrak{g}_{2k}(b)+\mathfrak{g}_{3k}(m)+\mathfrak{g}_{4k}(n)&\mathfrak{q}_{1k}(a)+\mathfrak{q}_{2k}(b)+\mathfrak{q}_{3k}(m)+\mathfrak{q}_{4k}(n)\end{array}\right)\qquad(\bigstar)$$
for each $k\in\mathbb{N}_{0}$, where the entry mappings $\mathfrak{p}_{1k}:A\longrightarrow A$, $\mathfrak{p}_{2k}:B\longrightarrow A$, $\mathfrak{p}_{3k}:M\longrightarrow A$, $\mathfrak{p}_{4k}:N\longrightarrow A$, $\mathfrak{q}_{1k}:A\longrightarrow B$, $\mathfrak{q}_{2k}:B\longrightarrow B$, $\mathfrak{q}_{3k}:M\longrightarrow B$, $\mathfrak{q}_{4k}:N\longrightarrow B$, $\mathfrak{f}_{1k}:A\longrightarrow M$, $\mathfrak{f}_{2k}:B\longrightarrow M$, $\mathfrak{f}_{3k}:M\longrightarrow M$, $\mathfrak{f}_{4k}:N\longrightarrow M$, $\mathfrak{g}_{1k}:A\longrightarrow N$, $\mathfrak{g}_{2k}:B\longrightarrow N$, $\mathfrak{g}_{3k}:M\longrightarrow N$ and $\mathfrak{g}_{4k}:N\longrightarrow N$ are linear. For each $k\in\mathbb{N}$ we set $m_{k}:=\mathfrak{f}_{1k}(1)$ and $n_{k}:=\mathfrak{g}_{1k}(1).$
Before describing the structure of Lie higher derivations on $\mathcal{G}$ in Theorem 2.2, we fix some more notation and define some auxiliary mappings by setting $P_{0}=P^{\prime}_{0}:=id_{A}$, $P^{\prime\prime}_{0}:=0$, $Q_{0}=Q^{\prime}_{0}:=id_{B}$, $Q^{\prime\prime}_{0}:=0$ and $p_{1}=p^{\prime}_{1}=q^{\prime\prime}_{1}:=0$, $q_{1}=q^{\prime}_{1}=p^{\prime\prime}_{1}:=0$. Further, for each $k\in\mathbb{N}$ we set $P_{k}:=\mathfrak{p}_{1k}+p_{k}$, $P^{\prime}_{k}:=\mathfrak{p}_{1k}+p^{\prime}_{k}$, $P^{\prime\prime}_{k}:=\mathfrak{p}_{2k}-p^{\prime\prime}_{k}$, $Q_{k}:=\mathfrak{q}_{2k}+q_{k}$, $Q^{\prime}_{k}:=\mathfrak{q}_{2k}+q^{\prime}_{k}$ and $Q^{\prime\prime}_{k}:=\mathfrak{q}_{1k}-q^{\prime\prime}_{k}$, where for each $k\geq 2$
$$p_{k}(a):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(P_{i}(a)m_{\beta_{1}}-m_{\beta_{1}}Q^{\prime\prime}_{i}(a))n_{\alpha_{1}}(m_{\beta}n_{\alpha})^{r},$$
$$p^{\prime}_{k}(a):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(m_{\beta}n_{\alpha})_{r}m_{\beta_{1}}(n_{\alpha_{1}}P^{\prime}_{i}(a)-Q^{\prime\prime}_{i}(a)n_{\alpha_{1}}),$$
$$p^{\prime\prime}_{k}(b):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(m_{\beta_{1}}Q_{i}(b)-P^{\prime\prime}_{i}(b)m_{\beta_{1}})n_{\alpha_{1}}(m_{\beta}n_{\alpha})^{r}=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(m_{\beta}n_{\alpha})_{r}m_{\beta_{1}}(Q^{\prime}_{i}(b)n_{\alpha_{1}}-n_{\alpha_{1}}P^{\prime\prime}_{i}(b)),$$
$$q_{k}(b):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(n_{\alpha}m_{\beta})_{r}n_{\alpha_{1}}(m_{\beta_{1}}Q_{i}(b)-P^{\prime\prime}_{i}(b)m_{\beta_{1}}),$$
$$q^{\prime}_{k}(b):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(Q^{\prime}_{i}(b)n_{\alpha_{1}}-n_{\alpha_{1}}P^{\prime\prime}_{i}(b))m_{\beta_{1}}(n_{\alpha}m_{\beta})^{r},$$
$$q^{\prime\prime}_{k}(a):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(n_{\alpha}m_{\beta})_{r}n_{\alpha_{1}}(P_{i}(a)m_{\beta_{1}}-m_{\beta_{1}}Q^{\prime\prime}_{i}(a))=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(n_{\alpha_{1}}P^{\prime}_{i}(a)-Q^{\prime\prime}_{i}(a)n_{\alpha_{1}})m_{\beta_{1}}(n_{\alpha}m_{\beta})^{r}.$$
For example, for $k=2$ we get
$$p_{2}(a)=am_{1}n_{1},\ p^{\prime}_{2}(a)=m_{1}n_{1}a,\ p^{\prime\prime}_{2}(b)=m_{1}bn_{1},\ q_{2}(b)=n_{1}m_{1}b,\ q^{\prime}_{2}(b)=bn_{1}m_{1}\ \text{and}\ q^{\prime\prime}_{2}(a)=n_{1}am_{1}.$$
Similarly, for $k=3$ one can check that
$$p_{3}(a)=am_{1}n_{2}+am_{2}n_{1}+P_{1}(a)m_{1}n_{1}-m_{1}Q^{\prime\prime}_{1}(a)n_{1},$$
$$p^{\prime}_{3}(a)=m_{1}n_{2}a+m_{2}n_{1}a+m_{1}n_{1}P_{1}(a)-m_{1}Q^{\prime\prime}_{1}(a)n_{1},$$
$$p^{\prime\prime}_{3}(b)=m_{1}bn_{2}+m_{2}bn_{1}+m_{1}Q_{1}(b)n_{1}-P^{\prime\prime}_{1}(b)m_{1}n_{1}=m_{1}bn_{2}+m_{2}bn_{1}+m_{1}Q^{\prime}_{1}(b)n_{1}-m_{1}n_{1}P^{\prime\prime}_{1}(b),$$
$$q_{3}(b)=n_{1}m_{2}b+n_{2}m_{1}b+n_{1}m_{1}Q_{1}(b)-n_{1}P^{\prime\prime}_{1}(b)m_{1},$$
$$q^{\prime}_{3}(b)=bn_{1}m_{2}+bn_{2}m_{1}+Q_{1}(b)n_{1}m_{1}-n_{1}P^{\prime\prime}_{1}(b)m_{1},$$
$$q^{\prime\prime}_{3}(a)=n_{1}am_{2}+n_{2}am_{1}+n_{1}P_{1}(a)m_{1}-n_{1}m_{1}Q^{\prime\prime}_{1}(a)=n_{1}am_{2}+n_{2}am_{1}+n_{1}P^{\prime}_{1}(a)m_{1}-Q^{\prime\prime}_{1}(a)n_{1}m_{1}.$$
Now, we are ready to present the following result describing the structure of Lie higher derivations on the generalized matrix algebra $\mathcal{G}$.
Theorem 2.2.
Let $\mathcal{G}=\left(\begin{array}{cc}A&M\\ N&B\end{array}\right)$ be a generalized matrix algebra such that $A$ and $B$ are $2$-torsion free. Then a sequence $\mathcal{L}=\{\mathcal{L}_{k}\}_{k\in\mathbb{N}_{0}}:\mathcal{G}\longrightarrow\mathcal{G}$ of linear mappings (as presented in ($\bigstar$)) is a Lie higher derivation if and only if
(1)
$\{P_{k}\}_{k\in\mathbb{N}_{0}},\{P^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ are Lie higher derivations on $A$, $\{Q_{k}\}_{k\in\mathbb{N}_{0}},\{Q^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ are Lie higher derivations on $B,$ $Q^{\prime\prime}_{k}(A)\subseteq Z(B),P^{\prime\prime}_{k}(B)\subseteq Z(A),$ and $Q^{\prime\prime}_{k}([a,a^{\prime}])=0,P^{\prime\prime}_{k}([b,b^{\prime}])=0$, for all $k\in\mathbb{N}.$
(2)
$\mathfrak{g}_{1k}(a)=\sum_{i+j=k,\,j\neq k}(n_{i}P^{\prime}_{j}(a)-Q^{\prime\prime}_{j}(a)n_{i})$ and $\mathfrak{f}_{1k}(a)=\sum_{i+j=k,\,j\neq k}(P_{j}(a)m_{i}-m_{i}Q^{\prime\prime}_{j}(a))$,
(3)
$\mathfrak{g}_{2k}(b)=-\sum_{i+j=k,\,j\neq k}(Q^{\prime}_{j}(b)n_{i}-n_{i}P^{\prime\prime}_{j}(b))$ and $\mathfrak{f}_{2k}(b)=-\sum_{i+j=k,\,j\neq k}(m_{i}Q_{j}(b)-P^{\prime\prime}_{j}(b)m_{i})$,
(4)
$\mathfrak{f}_{3k}(am)=\sum_{i+j=k}\big(P_{i}(a)\mathfrak{f}_{3j}(m)-\mathfrak{f}_{3j}(m)Q^{\prime\prime}_{i}(a)\big)$,
(5)
$\mathfrak{f}_{3k}(mb)=\sum_{i+j=k}\big(\mathfrak{f}_{3j}(m)Q_{i}(b)-P^{\prime\prime}_{i}(b)\mathfrak{f}_{3j}(m)\big)$,
(6)
$\mathfrak{g}_{4k}(na)=\sum_{i+j=k}\big(\mathfrak{g}_{4j}(n)P^{\prime}_{i}(a)-Q^{\prime\prime}_{i}(a)\mathfrak{g}_{4j}(n)\big)$,
(7)
$\mathfrak{g}_{4k}(bn)=\sum_{i+j=k}\big(Q^{\prime}_{i}(b)\mathfrak{g}_{4j}(n)-\mathfrak{g}_{4j}(n)P^{\prime\prime}_{i}(b)\big)$,
(8)
$\mathfrak{g}_{3k}(m)=-\sum_{i+j+r=k}\mathcal{N}_{i}\mathfrak{f}_{3r}(m)\mathcal{N}_{j}$ and $\mathfrak{f}_{4k}(n)=-\sum_{i+j+r=k}\mathcal{M}_{i}\mathfrak{g}_{4r}(n)\mathcal{M}_{j}$,
(9)
$\mathfrak{p}_{3k}(m)=-\sum_{i+j=k}\mathfrak{f}_{3i}(m)\mathcal{N}_{j}$ and $\mathfrak{q}_{3k}(m)=\sum_{i+j=k}\mathcal{N}_{j}\mathfrak{f}_{3i}(m)$,
(10)
$\mathfrak{p}_{4k}(n)=-\sum_{i+j=k}\mathcal{M}_{j}\mathfrak{g}_{4i}(n)$ and $\mathfrak{q}_{4k}(n)=\sum_{i+j=k}\mathfrak{g}_{4i}(n)\mathcal{M}_{j}$,
(11)
$\mathfrak{p}_{1k}(mn)-\mathfrak{p}_{2k}(nm)=\sum_{i+j=k}\big(\mathfrak{p}_{3i}(m)\mathfrak{p}_{4j}(n)+\mathfrak{f}_{3i}(m)\mathfrak{g}_{4j}(n)-\mathfrak{p}_{4j}(n)\mathfrak{p}_{3i}(m)-\mathfrak{f}_{4j}(n)\mathfrak{g}_{3i}(m)\big)$,
(12)
$\mathfrak{q}_{1k}(mn)-\mathfrak{q}_{2k}(nm)=\sum_{i+j=k}\big(\mathfrak{g}_{3i}(m)\mathfrak{f}_{4j}(n)+\mathfrak{q}_{3i}(m)\mathfrak{q}_{4j}(n)-\mathfrak{g}_{4j}(n)\mathfrak{f}_{3i}(m)-\mathfrak{q}_{4j}(n)\mathfrak{q}_{3i}(m)\big)$.
2.3. The structure of higher derivations on $\mathcal{G}$
Similar to $(\bigstar)$, let $\mathcal{D}=\{\mathcal{D}_{k}\}_{k\in\mathbb{N}_{0}}:\mathcal{G}\longrightarrow\mathcal{G}$ be a sequence of linear mappings with the presentation
$$\mathcal{D}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}\mathtt{p}_{1k}(a)+\mathtt{p}_{2k}(b)+\mathtt{p}_{3k}(m)+\mathtt{p}_{4k}(n)&\mathtt{f}_{1k}(a)+\mathtt{f}_{2k}(b)+\mathtt{f}_{3k}(m)+\mathtt{f}_{4k}(n)\\ \mathtt{g}_{1k}(a)+\mathtt{g}_{2k}(b)+\mathtt{g}_{3k}(m)+\mathtt{g}_{4k}(n)&\mathtt{q}_{1k}(a)+\mathtt{q}_{2k}(b)+\mathtt{q}_{3k}(m)+\mathtt{q}_{4k}(n)\end{array}\right),\qquad(\clubsuit)$$
whose entry maps are linear. As before, for each $k\in\mathbb{N}_{0}$ we set $m_{k}:=\mathtt{f}_{1k}(1)$ and $n_{k}:=\mathtt{g}_{1k}(1).$
Before describing the structure of higher derivations on $\mathcal{G}$ in Theorem 2.3, we fix some more notation and define auxiliary mappings by setting ${\mathsf{P}}_{0}={\mathsf{P}}^{\prime}_{0}:=id_{A}$, ${\mathsf{Q}}_{0}={\mathsf{Q}}^{\prime}_{0}:=id_{B}$, ${\mathsf{p}}_{1}={\mathsf{p}}^{\prime}_{1}:=0$ and ${\mathsf{q}}_{1}={\mathsf{q}}^{\prime}_{1}:=0.$ Further, for each $k\in\mathbb{N}$ we set ${\mathsf{P}}_{k}:=\mathtt{p}_{1k}+{\mathsf{p}}_{k}$, ${\mathsf{P}}^{\prime}_{k}:=\mathtt{p}_{1k}+{\mathsf{p}}^{\prime}_{k}$, ${\mathsf{Q}}_{k}:=\mathtt{q}_{2k}+{\mathsf{q}}_{k}$ and ${\mathsf{Q}}^{\prime}_{k}:=\mathtt{q}_{2k}+{\mathsf{q}}^{\prime}_{k}$, where for each $k\geq 2$
$${\mathsf{p}}_{k}(a):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}{\mathsf{P}}_{i}(a)m_{\beta_{1}}n_{\alpha_{1}}(m_{\beta}n_{\alpha})^{r},$$
$${\mathsf{p}}^{\prime}_{k}(a):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}m_{\beta_{1}}n_{\alpha_{1}}(m_{\beta}n_{\alpha})^{r}{\mathsf{P}}^{\prime}_{i}(a),$$
$${\mathsf{q}}_{k}(b):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(n_{\alpha}m_{\beta})_{r}n_{\alpha_{1}}m_{\beta_{1}}{\mathsf{Q}}_{i}(b),$$
$${\mathsf{q}}^{\prime}_{k}(b):=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}{\mathsf{Q}}^{\prime}_{i}(b)(n_{\alpha}m_{\beta})_{r}n_{\alpha_{1}}m_{\beta_{1}}.$$
In particular, for $k=2$ we get
$${\mathsf{p}}_{2}(a)=am_{1}n_{1},\ {\mathsf{p}}^{\prime}_{2}(a)=m_{1}n_{1}a,\ {\mathsf{q}}_{2}(b)=n_{1}m_{1}b\ \text{and}\ {\mathsf{q}}^{\prime}_{2}(b)=bn_{1}m_{1}.$$
Similarly for the case $k=3$ it is easy to check that
$${\mathsf{p}}_{3}(a)=am_{1}n_{2}+am_{2}n_{1}+{\mathsf{P}}_{1}(a)m_{1}n_{1},$$
$${\mathsf{p}}^{\prime}_{3}(a)=m_{1}n_{2}a+m_{2}n_{1}a+m_{1}n_{1}{\mathsf{P}}^{\prime}_{1}(a),$$
$${\mathsf{q}}_{3}(b)=n_{1}m_{2}b+n_{2}m_{1}b+n_{1}m_{1}{\mathsf{Q}}_{1}(b),$$
$${\mathsf{q}}^{\prime}_{3}(b)=bn_{1}m_{2}+bn_{2}m_{1}+{\mathsf{Q}}^{\prime}_{1}(b)n_{1}m_{1}.$$
Parallel to Theorem 2.2, in the following result we characterize the structure of higher derivations on the generalized matrix algebra $\mathcal{G}$.
Theorem 2.3.
Let $\mathcal{G}=\left(\begin{array}{cc}A&M\\ N&B\end{array}\right)$ be a generalized matrix algebra. Then a sequence $\mathcal{D}=\{\mathcal{D}_{k}\}_{k\in\mathbb{N}_{0}}:\mathcal{G}\longrightarrow\mathcal{G}$ of linear mappings (as presented in ($\clubsuit$)) is a higher derivation if and only if
(a)
$\{{\mathsf{P}}_{k}\}_{k\in\mathbb{N}_{0}},\{{\mathsf{P}}^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ are higher derivations on $A$ and $\{{\mathsf{Q}}_{k}\}_{k\in\mathbb{N}_{0}},\{{\mathsf{Q}}^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ are higher derivations on $B.$
(b)
$\mathtt{g}_{1k}(a)=\sum_{i+j=k,j\neq k}n_{i}{\mathsf{P}}^{\prime}_{j}(a)$ and $\mathtt{f}_{1k}(a)=\sum_{i+j=k,j\neq k}{\mathsf{P}}_{j}(a)m_{i}$,
(c)
$\mathtt{g}_{2k}(b)=-\sum_{i+j=k,j\neq k}{\mathsf{Q}}^{\prime}_{j}(b)n_{i}$ and $\mathtt{f}_{2k}(b)=-\sum_{i+j=k,j\neq k}m_{i}{\mathsf{Q}}_{j}(b)$,
(d)
$\mathtt{f}_{3k}(am)=\sum_{i+j=k}{\mathsf{P}}_{i}(a)\mathtt{f}_{3j}(m)$ and $\mathtt{f}_{3k}(mb)=\sum_{i+j=k}\mathtt{f}_{3j}(m){\mathsf{Q}}_{i}(b)$,
(e)
$\mathtt{g}_{4k}(na)=\sum_{i+j=k}\mathtt{g}_{4j}(n){\mathsf{P}}^{\prime}_{i}(a)$ and $\mathtt{g}_{4k}(bn)=\sum_{i+j=k}{\mathsf{Q}}^{\prime}_{i}(b)\mathtt{g}_{4j}(n)$,
(f)
$\mathtt{g}_{3k}(m)=-\sum_{i+j+r=k}\mathcal{N}_{i}\mathtt{f}_{3r}(m)\mathcal{N}_{j}$ and $\mathtt{f}_{4k}(n)=-\sum_{i+j+r=k}\mathcal{M}_{i}\mathtt{g}_{4r}(n)\mathcal{M}_{j}$,
(g)
$\mathtt{p}_{3k}(m)=-\sum_{i+j=k}\mathtt{f}_{3i}(m)\mathcal{N}_{j}$ and $\mathtt{q}_{3k}(m)=\sum_{i+j=k}\mathcal{N}_{j}\mathtt{f}_{3i}(m)$,
(h)
$\mathtt{p}_{4k}(n)=-\sum_{i+j=k}\mathcal{M}_{j}\mathtt{g}_{4i}(n)$ and $\mathtt{q}_{4k}(n)=\sum_{i+j=k}\mathtt{g}_{4i}(n)\mathcal{M}_{j}$,
(i)
$\mathtt{q}_{1k}(a)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(n_{\alpha}m_{\beta})_{r}n_{\alpha_{1}}{\mathsf{P}}_{i}(a)m_{\beta_{1}}=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}n_{\alpha_{1}}{\mathsf{P}}^{\prime}_{i}(a)m_{\beta_{1}}(n_{\alpha}m_{\beta})^{r}$,
(j)
$\mathtt{p}_{2k}(b)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}m_{\beta_{1}}{\mathsf{Q}}_{i}(b)n_{\alpha_{1}}(m_{\beta}n_{\alpha})^{r}=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(m_{\beta}n_{\alpha})_{r}m_{\beta_{1}}{\mathsf{Q}}^{\prime}_{i}(b)n_{\alpha_{1}}$,
(k)
$\mathtt{p}_{1k}(mn)=\sum_{i+j=k}\big(\mathtt{p}_{3i}(m)\mathtt{p}_{4j}(n)+\mathtt{f}_{3i}(m)\mathtt{g}_{4j}(n)\big)$,
(l)
$\mathtt{q}_{1k}(mn)=\sum_{i+j=k}\big(\mathtt{g}_{3i}(m)\mathtt{f}_{4j}(n)+\mathtt{q}_{3i}(m)\mathtt{q}_{4j}(n)\big)$,
(m)
$\mathtt{q}_{2k}(nm)=\sum_{i+j=k}\big(\mathtt{g}_{4i}(n)\mathtt{f}_{3j}(m)+\mathtt{q}_{4i}(n)\mathtt{q}_{3j}(m)\big)$,
(n)
$\mathtt{p}_{2k}(nm)=\sum_{i+j=k}\big(\mathtt{p}_{4i}(n)\mathtt{p}_{3j}(m)+\mathtt{f}_{4i}(n)\mathtt{g}_{3j}(m)\big)$.
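A standard source of higher derivations, useful for testing the conditions above, is the sequence $\mathcal{D}_{k}=D^{k}/k!$ built from an ordinary derivation $D$ (over $\mathbb{Q}$, so the divisions make sense). The following Python sketch, an illustration with ad hoc helper names, verifies the defining identity $\mathcal{D}_{k}(xy)=\sum_{i+j=k}\mathcal{D}_{i}(x)\mathcal{D}_{j}(y)$ for $D=\mathrm{ad}(T)$ on $M_{2}(\mathbb{Q})$.

```python
from fractions import Fraction as F
from math import factorial
import random

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def smul(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

T = [[F(1), F(2)], [F(3), F(5)]]

def D(X):                      # an ordinary derivation: D = ad(T)
    return sub(mul(T, X), mul(X, T))

def Dk(k, X):                  # the canonical higher derivation D_k = D^k / k!
    for _ in range(k):
        X = D(X)
    return smul(F(1, factorial(k)), X)

random.seed(1)
X = [[F(random.randint(-5, 5)) for _ in range(2)] for _ in range(2)]
Y = [[F(random.randint(-5, 5)) for _ in range(2)] for _ in range(2)]
for k in range(5):
    lhs = Dk(k, mul(X, Y))
    rhs = [[F(0)] * 2 for _ in range(2)]
    for i in range(k + 1):
        rhs = add(rhs, mul(Dk(i, X), Dk(k - i, Y)))
    assert lhs == rhs          # D_k(xy) = sum_{i+j=k} D_i(x) D_j(y)
```

The identity follows from the Leibniz rule $D^{k}(xy)=\sum_{i}\binom{k}{i}D^{i}(x)D^{k-i}(y)$ after dividing by $k!$.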
2.4. The center valued mappings on $\mathcal{G}$
In the next result we characterize the structure of center-valued maps on $\mathcal{G}$ vanishing on commutators of $\mathcal{G}.$
Proposition 2.4.
A sequence $\tau=\{\tau_{k}\}_{k\in\mathbb{N}}$ of linear maps on $\mathcal{G}$ is center-valued and vanishes on commutators if and only if for each $k\in\mathbb{N}$ the map $\tau_{k}$ has the presentation
$$\tau_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}\ell_{k}(a)+P^{\prime\prime}_{k}(b)&\\ &Q^{\prime\prime}_{k}(a)+\ell^{\prime}_{k}(b)\end{array}\right),$$
where $\ell_{k}:A\longrightarrow Z(A)$, $P^{\prime\prime}_{k}:B\longrightarrow Z(A)$, $Q^{\prime\prime}_{k}:A\longrightarrow Z(B)$ and $\ell^{\prime}_{k}:B\longrightarrow Z(B)$ are linear maps vanishing on commutators and satisfying the following properties:
(i)
$\ell_{k}(a)\oplus Q^{\prime\prime}_{k}(a)\in Z(\mathcal{G})$ and $P^{\prime\prime}_{k}(b)\oplus\ell^{\prime}_{k}(b)\in Z(\mathcal{G}),$ for all $a\in A,b\in B$ and $k\in\mathbb{N}$.
(ii)
$\ell_{k}(mn)=P^{\prime\prime}_{k}(nm)$ and $\ell^{\prime}_{k}(nm)=Q^{\prime\prime}_{k}(mn),$ for all $m\in M,n\in N$ and $k\in\mathbb{N}$.
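For a concrete instance of such a map, take $\mathcal{G}=M_{2}(\mathbb{Q})$ (so $Z(\mathcal{G})=\mathbb{Q}I$) and the normalized trace. The Python sketch below (illustrative only, with ad hoc names) checks that it is center-valued and vanishes on commutators.

```python
from fractions import Fraction as F
import random

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def tau(X):
    """Normalized-trace map: center-valued since Z(M_2(Q)) = Q * I."""
    c = (X[0][0] + X[1][1]) / 2
    return [[c, F(0)], [F(0), c]]

random.seed(2)
for _ in range(20):
    X = [[F(random.randint(-5, 5)) for _ in range(2)] for _ in range(2)]
    Y = [[F(random.randint(-5, 5)) for _ in range(2)] for _ in range(2)]
    comm = sub(mul(X, Y), mul(Y, X))
    assert tau(comm) == [[F(0), F(0)], [F(0), F(0)]]   # tr[X,Y] = 0
    # tau(X) commutes with everything, i.e. it lands in the center
    assert mul(tau(X), Y) == mul(Y, tau(X))
```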
3. Proper Lie higher derivations
Hereinafter, suppose that the modules $M$ and $N$ appearing in the definition of the generalized matrix algebra are $2$-torsion free ($M$ is said to be $2$-torsion free if $2m=0$ implies $m=0$ for all $m\in M$).
Following Cheung’s method [3, Theorem 6], in the following theorem we give a necessary and sufficient condition under which Lie higher derivations on the generalized matrix algebra $\mathcal{G}$ are proper.
Theorem 3.1.
Let $\mathcal{G}$ be a generalized matrix algebra. A Lie higher derivation $\mathcal{L}$ on $\mathcal{G}$ of the form ($\bigstar$) is proper if and only if there exist two sequences of linear mappings $\{\ell_{k}\}_{k\in\mathbb{N}}:A\longrightarrow Z(A)$ and $\{\ell^{\prime}_{k}\}_{k\in\mathbb{N}}:B\longrightarrow Z(B)$ satisfying the following three properties:
($A$)
$\{P_{k}-\ell_{k}\}_{k\in\mathbb{N}}$ and $\{Q_{k}-\ell^{\prime}_{k}\}_{k\in\mathbb{N}}$ are higher derivations on $A$ and $B$, respectively.
($B$)
$\ell_{k}(a)\oplus Q^{\prime\prime}_{k}(a)\in Z(\mathcal{G})$ and $P^{\prime\prime}_{k}(b)\oplus\ell^{\prime}_{k}(b)\in Z(\mathcal{G}),$ for all $a\in A,b\in B$ and $k\in\mathbb{N}$.
($C$)
$\ell_{k}(mn)=P^{\prime\prime}_{k}(nm)$ and $\ell^{\prime}_{k}(nm)=Q^{\prime\prime}_{k}(mn),$ for all $m\in M,n\in N$ and $k\in\mathbb{N}$.
Proof.
For sufficiency we proceed by induction on $k$; for $k=1$ the result is true by [3]. Now assume the result holds for every integer less than $k$; we prove it for $k$. Using the induction hypothesis and Theorems 2.2 and 2.3, as in Step 1 of [4], we may without loss of generality take $\mathcal{L}_{k}$ of the form
$$\mathcal{L}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}\mathfrak{p}_{1k}(a)+\mathfrak{p}_{2k}(b)&\mathfrak{f}_{1k}(a)+\mathfrak{f}_{2k}(b)+\mathfrak{f}_{3k}(m)\\ \mathfrak{g}_{1k}(a)+\mathfrak{g}_{2k}(b)+\mathfrak{g}_{4k}(n)&\mathfrak{q}_{1k}(a)+\mathfrak{q}_{2k}(b)\end{array}\right)$$
where $\mathfrak{p}_{1k},\mathfrak{p}_{2k},\mathfrak{q}_{1k},\mathfrak{q}_{2k},\mathfrak{f}_{3k}$ and $\mathfrak{g}_{4k}$ have properties $(1),(2),(6),(7),(8)$ and $(9)$ of Theorem 2.2. Replacing $\mathfrak{p}_{1k},\mathfrak{p}_{2k},\mathfrak{q}_{1k},\mathfrak{q}_{2k}$ by $P_{k}-p_{k}$, $P^{\prime\prime}_{k}+p^{\prime\prime}_{k}$, $Q^{\prime\prime}_{k}+q^{\prime\prime}_{k}$, $Q_{k}-q_{k}$, respectively, we obtain
$$\mathcal{L}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}P_{k}(a)-p_{k}(a)+P^{\prime\prime}_{k}(b)+p^{\prime\prime}_{k}(b)&\mathfrak{f}_{1k}(a)+\mathfrak{f}_{2k}(b)+\mathfrak{f}_{3k}(m)\\ \mathfrak{g}_{1k}(a)+\mathfrak{g}_{2k}(b)+\mathfrak{g}_{4k}(n)&Q^{\prime\prime}_{k}(a)+q^{\prime\prime}_{k}(a)+Q_{k}(b)-q_{k}(b)\end{array}\right).\qquad(3.1)$$
By the induction hypothesis, $\mathcal{L}_{i}$ is proper for all $i<k$, so we may write $p_{k},p^{\prime}_{k},p^{\prime\prime}_{k},q_{k},q^{\prime}_{k},q^{\prime\prime}_{k},\mathfrak{f}_{1k},\mathfrak{f}_{2k},\mathfrak{g}_{1k}$ and $\mathfrak{g}_{2k}$ as follows:
$$p_{k}(a)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}\big(P_{i}(a)-\ell_{i}(a)\big)m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}},$$
$$p^{\prime}_{k}(a)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}m_{\beta_{r}}n_{\alpha_{r}}\ldots m_{\beta_{1}}n_{\alpha_{1}}\big(P^{\prime}_{i}(a)-\ell_{i}(a)\big),$$
$$p^{\prime\prime}_{k}(b)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}m_{\beta_{1}}\big(Q_{i}(b)-\ell^{\prime}_{i}(b)\big)n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}m_{\beta_{r}}n_{\alpha_{r}}\ldots m_{\beta_{1}}\big(Q^{\prime}_{i}(b)-\ell^{\prime}_{i}(b)\big)n_{\alpha_{1}},$$
$$q_{k}(b)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\big(Q_{i}(b)-\ell^{\prime}_{i}(b)\big),$$
$$q^{\prime}_{k}(b)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}\big(Q^{\prime}_{i}(b)-\ell^{\prime}_{i}(b)\big)n_{\alpha_{1}}m_{\beta_{1}}\ldots n_{\alpha_{r}}m_{\beta_{r}},$$
$$q^{\prime\prime}_{k}(a)=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}\big(P_{i}(a)-\ell_{i}(a)\big)m_{\beta_{1}}=\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}n_{\alpha_{1}}\big(P^{\prime}_{i}(a)-\ell_{i}(a)\big)m_{\beta_{1}}\ldots n_{\alpha_{r}}m_{\beta_{r}},$$
$$\mathfrak{f}_{1k}(a)=\sum_{i+j=k,\,i\neq k}(P_{i}(a)m_{j}-m_{j}Q^{\prime\prime}_{i}(a))=\sum_{i+j=k,\,i\neq k}\big(P_{i}(a)-\ell_{i}(a)\big)m_{j},$$
$$\mathfrak{f}_{2k}(b)=-\sum_{i+j=k,\,i\neq k}(m_{j}Q_{i}(b)-P^{\prime\prime}_{i}(b)m_{j})=-\sum_{i+j=k,\,i\neq k}m_{j}\big(Q_{i}(b)-\ell^{\prime}_{i}(b)\big),$$
$$\mathfrak{g}_{1k}(a)=\sum_{i+j=k,\,i\neq k}(n_{j}P^{\prime}_{i}(a)-Q^{\prime\prime}_{i}(a)n_{j})=\sum_{i+j=k,\,i\neq k}n_{j}\big(P^{\prime}_{i}(a)-\ell_{i}(a)\big),$$
$$\mathfrak{g}_{2k}(b)=-\sum_{i+j=k,\,i\neq k}(Q^{\prime}_{i}(b)n_{j}-n_{j}P^{\prime\prime}_{i}(b))=-\sum_{i+j=k,\,i\neq k}\big(Q^{\prime}_{i}(b)-\ell^{\prime}_{i}(b)\big)n_{j},$$
i.e. $p_{k}(a),p^{\prime}_{k}(a),p^{\prime\prime}_{k}(b),q_{k}(b),q^{\prime}_{k}(b),q^{\prime\prime}_{k}(a),\mathfrak{f}_{1k}(a),\mathfrak{f}_{2k}(b),\mathfrak{g}_{1k}(a)$ and $\mathfrak{g}_{2k}(b)$ are the same as those appearing in the structure of higher derivations in Theorem 2.3. So we can present (3.1) in the simpler form
$$\mathcal{L}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}P_{k}(a)+P^{\prime\prime}_{k}(b)&\mathfrak{f}_{3k}(m)\\ \mathfrak{g}_{4k}(n)&Q^{\prime\prime}_{k}(a)+Q_{k}(b)\end{array}\right).$$
Set
$$\mathcal{D}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}P_{k}(a)-\ell_{k}(a)&\mathfrak{f}_{3k}(m)\\ \mathfrak{g}_{4k}(n)&Q_{k}(b)-\ell^{\prime}_{k}(b)\end{array}\right)$$
and
$$\tau_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}\ell_{k}(a)+P^{\prime\prime}_{k}(b)&\\ &Q^{\prime\prime}_{k}(a)+\ell^{\prime}_{k}(b)\end{array}\right).$$
For convenience, set
(i)
$\mathsf{P}_{k}(a)=P_{k}(a)-\ell_{k}(a)$ and $\mathsf{Q}_{k}(b)=Q_{k}(b)-\ell^{\prime}_{k}(b)$,
(ii)
$\gamma_{k}(a,b)=\ell_{k}(a)+P^{\prime\prime}_{k}(b)$ and
(iii)
$\gamma^{\prime}_{k}(a,b)=Q^{\prime\prime}_{k}(a)+\ell^{\prime}_{k}(b)$.
Considering the above relations we have
$$\mathcal{L}_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}\mathsf{P}_{k}(a)+\gamma_{k}(a,b)&\mathfrak{f}_{3k}(m)\\ \mathfrak{g}_{4k}(n)&\mathsf{Q}_{k}(b)+\gamma^{\prime}_{k}(a,b)\end{array}\right).$$
Applying $\mathcal{L}_{k}$ to the commutator $\left[\left(\begin{array}{cc}0&0\\ n&1\end{array}\right),\left(\begin{array}{cc}0&m\\ 0&-nm\end{array}\right)\right]$, we get
$$\mathsf{P}_{k}(mn)=\sum_{i+j=k}\mathfrak{f}_{3i}(m)\mathfrak{g}_{4j}(n)-\gamma_{k}(mn,-nm),\qquad(3.2)$$
$$\mathsf{Q}_{k}(nm)=\sum_{i+j=k}\mathfrak{g}_{4j}(n)\mathfrak{f}_{3i}(m)-\gamma^{\prime}_{k}(mn,-nm).\qquad(3.3)$$
By assumption $(C)$, $\ell_{k}(mn)=P^{\prime\prime}_{k}(nm)$ and $\ell^{\prime}_{k}(nm)=Q^{\prime\prime}_{k}(mn)$, so $\gamma_{k}(mn,-nm)=0$ and $\gamma^{\prime}_{k}(mn,-nm)=0$; it follows that
$$\mathsf{P}_{k}(mn)=\sum_{i+j=k}\mathfrak{f}_{3i}(m)\mathfrak{g}_{4j}(n)\quad\text{and}\quad\mathsf{Q}_{k}(nm)=\sum_{i+j=k}\mathfrak{g}_{4j}(n)\mathfrak{f}_{3i}(m)$$
for all $m\in M,n\in N$ and $k\in\mathbb{N}$. A direct verification reveals that $\mathcal{D}=\{\mathcal{D}_{k}\}_{k\in\mathbb{N}}$ is a higher derivation and $\tau=\{\tau_{k}\}_{k\in\mathbb{N}}$ is a sequence of center valued maps.
For necessity, let $\mathcal{L}$ be proper, i.e. $\mathcal{L}=\mathcal{D}+\tau$ for some higher derivation $\mathcal{D}$ and some sequence of center-valued maps $\tau$. Applying the presentations ($\bigstar$) and ($\clubsuit$) for $\mathcal{L}$ and $\mathcal{D}$, respectively, we obtain $\tau=\mathcal{L}-\mathcal{D}$ as
$$\tau_{k}\left(\begin{array}{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}{cc}(P_{k}-\mathsf{P}_{k})(a)+P^{\prime\prime}_{k}(b)&\\ &Q^{\prime\prime}_{k}(a)+(Q_{k}-\mathsf{Q}_{k})(b)\end{array}\right).$$
Setting $\ell_{k}=P_{k}-\mathsf{P}_{k}$ and $\ell^{\prime}_{k}=Q_{k}-\mathsf{Q}_{k}$, one can directly check that $\{\ell_{k}\}_{k\in\mathbb{N}},\{\ell^{\prime}_{k}\}_{k\in\mathbb{N}}$ are two sequences of maps satisfying the required properties.
∎
Remark 3.2.
It is worth mentioning that in the case where $M$ is a faithful $(A,B)$-module:
(i)
In Theorem 2.2, the conditions $Q^{\prime\prime}_{k}([a,a^{\prime}])=0$ and $P^{\prime\prime}_{k}([b,b^{\prime}])=0$ for all $k\in\mathbb{N}$ are superfluous, as they can be derived from (1), (4) and (5). Indeed, we proceed by induction on $k$; for $k=1$ this is true by [2]. Suppose the result holds for every integer less than $k$. For $a,a^{\prime}\in A$ and $m\in M$, from (4) we get
$$\mathfrak{f}_{3k}([a,a^{\prime}]m)=\sum_{i+j=k,\,j\neq 0}\big(P_{i}([a,a^{\prime}])\mathfrak{f}_{3j}(m)-\mathfrak{f}_{3j}(m)Q^{\prime\prime}_{i}([a,a^{\prime}])\big)\qquad(3.4)$$
On the other hand, employing (4) and then (1), we have
$$\begin{aligned}\mathfrak{f}_{3k}([a,a^{\prime}]m)&=\mathfrak{f}_{3k}(aa^{\prime}m-a^{\prime}am)\\ &=\sum_{i+j=k,\,j\neq 0}\big(P_{i}(a)\mathfrak{f}_{3j}(a^{\prime}m)-\mathfrak{f}_{3j}(a^{\prime}m)Q^{\prime\prime}_{i}(a)\big)-\sum_{i+j=k,\,j\neq 0}\big(P_{i}(a^{\prime})\mathfrak{f}_{3j}(am)-\mathfrak{f}_{3j}(am)Q^{\prime\prime}_{i}(a^{\prime})\big)\\ &=\sum_{r+s+t=k}\big(P_{r}(a)P_{s}(a^{\prime})\mathfrak{f}_{3t}(m)-P_{r}(a)\mathfrak{f}_{3t}(m)Q^{\prime\prime}_{s}(a^{\prime})\big)-\sum_{r+s+t=k}\big(P_{s}(a^{\prime})\mathfrak{f}_{3t}(m)Q^{\prime\prime}_{r}(a)-\mathfrak{f}_{3t}(m)Q^{\prime\prime}_{s}(a^{\prime})Q^{\prime\prime}_{r}(a)\big)\\ &\quad-\sum_{r+s+t=k}\big(P_{r}(a^{\prime})P_{s}(a)\mathfrak{f}_{3t}(m)-P_{r}(a^{\prime})\mathfrak{f}_{3t}(m)Q^{\prime\prime}_{s}(a)\big)+\sum_{r+s+t=k}\big(P_{s}(a)\mathfrak{f}_{3t}(m)Q^{\prime\prime}_{r}(a^{\prime})-\mathfrak{f}_{3t}(m)Q^{\prime\prime}_{s}(a)Q^{\prime\prime}_{r}(a^{\prime})\big)\\ &=\sum_{r+s+t=k}[P_{r}(a),P_{s}(a^{\prime})]\mathfrak{f}_{3t}(m).\end{aligned}\qquad(3.5)$$
Comparing equations (3.4) and (3.5), together with the induction hypothesis, indicates that $mQ^{\prime\prime}_{k}([a,a^{\prime}])=0$ for every $m\in M$; thus the equality $Q^{\prime\prime}_{k}([a,a^{\prime}])=0$ follows from the faithfulness of $M$ (as a right $B$-module). Similarly one can check that $P^{\prime\prime}_{k}([b,b^{\prime}])=0$ for all $k\in\mathbb{N}$.
(ii)
In Theorem 2.3, the assertion $\rm(a)$ can also be removed, as it can be derived from $\rm(d)$ and $\rm(e)$ by reasoning similar to (i) (see [3, Page 303]).
(iii)
In Theorem 3.1, the same reasoning as in (ii) shows that assertion $\rm(A)$, stating that $\{P_{k}-\ell_{k}\}$ and $\{Q_{k}-\ell^{\prime}_{k}\}$ are higher derivations, is redundant.
In the next corollary we offer a criterion characterizing the LHD property for the generalized matrix algebra $\mathcal{G}$ as a consequence of Theorem 3.1.
Corollary 3.3.
Let $\mathcal{G}$ be a generalized matrix algebra and $\mathcal{L}$ be a Lie higher derivation on $\mathcal{G}$ of the form stated in Theorem 2.2. If $\mathcal{L}$ is proper then
($A^{\prime}$)
$Q^{\prime\prime}_{k}(A)\subseteq\pi_{B}(Z(\mathcal{G}))$, $P^{\prime\prime}_{k}(B)\subseteq\pi_{A}(Z(\mathcal{G}))$ and,
($B^{\prime}$)
$P^{\prime\prime}_{k}(nm)\oplus Q^{\prime\prime}_{k}(mn)\in Z(\mathcal{G}),$
for all $m\in M,n\in N$ and $k\in\mathbb{N}$. The converse is valid when $\mathcal{G}$ is weakly faithful.
Proof.
If $\mathcal{L}$ is proper, then Theorem 3.1 ensures that $\rm(A^{\prime})$ and $\rm(B^{\prime})$ hold.
Conversely, suppose that $\mathcal{G}$ is weakly faithful. Let $\varphi:\pi_{A}(Z(\mathcal{G}))\longrightarrow\pi_{B}(Z(\mathcal{G}))$ be the isomorphism satisfying $a\oplus\varphi(a)\in Z(\mathcal{G})$ for all $a\in A$, whose existence is guaranteed by the weak faithfulness of $\mathcal{G}$ and [1]. Using assumption $\rm(A^{\prime})$, we define $\ell_{k}:A\longrightarrow Z(A)$ and $\ell^{\prime}_{k}:B\longrightarrow Z(B)$ by $\ell_{k}=\varphi^{-1}\circ Q^{\prime\prime}_{k}$ and $\ell^{\prime}_{k}=\varphi\circ P^{\prime\prime}_{k}$. Obviously $\ell_{k}(a)\oplus Q^{\prime\prime}_{k}(a)\in Z(\mathcal{G})$ and $P^{\prime\prime}_{k}(b)\oplus\ell^{\prime}_{k}(b)\in Z(\mathcal{G})$ for all $a\in A,b\in B.$
Further, $\rm(B^{\prime})$ yields
$$\ell_{k}(mn)=\varphi^{-1}(Q^{\prime\prime}_{k}(mn))=P^{\prime\prime}_{k}(nm)\quad\text{and}\quad\ell^{\prime}_{k}(nm)=\varphi(P^{\prime\prime}_{k}(nm))=Q^{\prime\prime}_{k}(mn).$$
Now properness of $\mathcal{L}$ follows from Theorem 3.1 and part (iii) of Remark 3.2.
∎
4. Some sufficient conditions and the main result
Using Corollary 3.3, in the next theorem we give the “higher” version of a modification of Du and Wang’s result [4, Theorem 1] (see also [20, Corollary 1] and [19, Theorem 2.1] for the case $n=2$).
Theorem 4.1.
Let $\mathcal{G}$ be a weakly faithful generalized matrix algebra. If
(i)
$\pi_{A}(Z(\mathcal{G}))=Z(A),\pi_{B}(Z(\mathcal{G}))=Z(B)$, and
(ii)
either $A$ or $B$ does not contain nonzero central ideals,
then $\mathcal{G}$ has LHD property.
Proof.
By Corollary 3.3, it is enough to show that $P^{\prime\prime}_{k}(nm)\oplus Q^{\prime\prime}_{k}(mn)\in Z(\mathcal{G})$ for all $m\in M,n\in N$. Without loss of generality, suppose that $A$ has no nonzero central ideal. Put
$$\gamma_{k}(a,b)=\ell_{k}(a)+P^{\prime\prime}_{k}(b)\quad(a\in A,\ b\in B,\ k\in\mathbb{N}),$$
where, as in the proof of the above corollary, $\ell_{k}=\varphi^{-1}\circ Q^{\prime\prime}_{k}$, so that $\mathsf{P}_{k}=P_{k}-\ell_{k}$ is a higher derivation. Now equation (3.2) implies that
$$\mathsf{P}_{k}(amn)=\sum_{i+j=k}\mathfrak{f}_{3i}(am)\mathfrak{g}_{4j}(n)-\gamma_{k}(amn,-nam).$$
The latter equation, together with the fact that $\mathsf{P}_{k}$ is a higher derivation, gives
$$\sum_{i+j=k}\mathsf{P}_{i}(a)\mathsf{P}_{j}(mn)=\sum_{l+r+j=k}\mathsf{P}_{l}(a)\mathfrak{f}_{3r}(m)\mathfrak{g}_{4j}(n)-\gamma_{k}(amn,-nam).$$
By the induction hypothesis we get
$$a\mathsf{P}_{k}(mn)=\sum_{r+j=k}a\mathfrak{f}_{3r}(m)\mathfrak{g}_{4j}(n)-\gamma_{k}(amn,-nam).$$
Multiplying equation (3.2) on the left by $a$, we have
$$a\mathsf{P}_{k}(mn)=a\sum_{i+j=k}\mathfrak{f}_{3i}(m)\mathfrak{g}_{4j}(n)-a\gamma_{k}(mn,-nm),$$
for all $a\in A,m\in M$ and $n\in N$. The last two equations imply that the set $A\gamma_{k}(mn,-nm)$ is a central ideal of $A$ for each pair of elements $m\in M,n\in N$. Hence $\ell_{k}(mn)-P^{\prime\prime}_{k}(nm)=\gamma_{k}(mn,-nm)=0$, and so $P^{\prime\prime}_{k}(nm)\oplus Q^{\prime\prime}_{k}(mn)=\ell_{k}(mn)\oplus Q^{\prime\prime}_{k}(mn)\in Z(\mathcal{G})$.
∎
As examples of algebras that have no nonzero central ideals, we can mention noncommutative unital prime algebras with a nontrivial idempotent, in particular $B(X)$, the algebra of operators on a Banach space $X$ with $\dim(X)>1$, and the full matrix algebra $M_{n}(A)$ with $n\geq 2$ (see [4, Lemma 1]). Also, in [4, Theorem 2] it is shown that in a generalized matrix algebra $\mathcal{G}$ with loyal $M$, the algebra $A$ contains no nonzero central ideal if $A$ is noncommutative.
Parallel to the results of [14] we have the following three theorems, all of which can be proved by induction, using the techniques of [14] for the base step $k=1$.
Recall that an algebra $A$ is called a domain if it has no zero divisors, or equivalently, if $aa^{\prime}=0$ implies $a=0$ or $a^{\prime}=0$ for every two elements $a,a^{\prime}\in A$.
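As an informal illustration of the domain condition (not part of the original argument), the following minimal Python sketch checks it for the rings $\mathbb{Z}/n\mathbb{Z}$: the condition holds for $n=5$ but fails for $n=6$, where $2\cdot 3=0$.

```python
# Minimal illustration of the "domain" condition: a*a' = 0 forces a = 0 or a' = 0.
def is_domain(n):
    """Check whether Z/nZ has no nonzero zero divisors."""
    return all(
        a == 0 or b == 0
        for a in range(n) for b in range(n)
        if (a * b) % n == 0
    )

print(is_domain(5))  # True: Z/5Z is a field, hence a domain
print(is_domain(6))  # False: 2 * 3 = 0 (mod 6)
```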
Theorem 4.2.
Let $\mathcal{G}$ be a weakly faithful generalized matrix algebra.
Then $\mathcal{G}$ has LHD property if
(i)
$\pi_{A}(Z(\mathcal{G}))=Z(A),\pi_{B}(Z(\mathcal{G}))=Z(B)$ and
(ii)
$A$ and $B$ are domains.
Theorem 4.3.
The generalized matrix algebra $\mathcal{G}$ has LHD property if
(i)
$\pi_{A}(Z(\mathcal{G}))=Z(A),\pi_{B}(Z(\mathcal{G}))=Z(B)$ and
(ii)
either $M$ or $N$ is strongly faithful.
We remark that we do not know whether the strong faithfulness assumption in Theorem 4.3 can be dropped.
Now, by gathering the above observations and combining the assertions of Theorems 4.1, 4.2 and 4.3, we are able to give the main result of this paper, which provides several sufficient conditions guaranteeing the LHD property for a generalized matrix algebra; part of it generalizes [13, Theorem 3.3].
Theorem 4.4.
Let $\mathcal{G}$ be a weakly faithful generalized matrix algebra. If the following two conditions hold:
(I)
$\pi_{A}(Z(\mathcal{G}))=Z(A)$, $\pi_{B}(Z(\mathcal{G}))=Z(B)$
(II)
one of the following conditions holds:
(i)
either $A$ or $B$ does not contain nonzero central ideals
(ii)
$A$ and $B$ are domains
(iii)
either $M$ or $N$ is strongly faithful,
then $\mathcal{G}$ has LHD property.
5. Applications
In this section we investigate LHD property for some main examples of a generalized matrix algebra which includes: trivial generalized matrix algebras, triangular algebras, unital algebras with a nontrivial idempotent, the algebra $B(X)$ of operators on a Banach space $X$ and the full matrix algebra $M_{n}(A)$ on a unital algebra $A$.
LHD property of trivial generalized matrix algebras and ${\rm Tri}(A,M,B)$
The generalized matrix algebra $\mathcal{G}$ is called trivial when $MN=0$ and $NM=0$ in its definition.
The triangular algebra ${\rm Tri}(A,M,B)$ is a main example of a trivial generalized matrix algebra, and its LHD property has been studied in [12, 13, 16, 21, 23]. As an immediate consequence of Corollary 3.3 and Theorem 4.4 we obtain the next result, which characterizes the LHD property for trivial generalized matrix algebras.
Corollary 5.1.
Let $\mathcal{G}$ be a trivial generalized matrix algebra and $\mathcal{L}$ be a Lie higher derivation on $\mathcal{G}$ of the form stated in ($\bigstar$). If $\mathcal{L}$ is proper then $Q^{\prime\prime}_{k}(A)\subseteq\pi_{B}(Z(\mathcal{G}))$, $P^{\prime\prime}_{k}(B)\subseteq\pi_{A}(Z(\mathcal{G}))$.
The converse is valid when $\mathcal{G}$ is weakly faithful.
Specifically, a trivial generalized matrix algebra $\mathcal{G}$ has LHD property if the following two conditions hold:
(I)
$\mathcal{G}$ is weakly faithful,
(II)
$\pi_{A}(Z(\mathcal{G}))=Z(A)$ and $\pi_{B}(Z(\mathcal{G}))=Z(B)$.
In the next example, which was raised by Benkovič [1, Example 3.8] and modified in [14], we give a trivial generalized matrix algebra, which is not triangular, without the LHD property.
Example 5.2.
Let $M$ be a commutative unital algebra of dimension $3$ over the commutative unital ring $R$, with basis $\{1,m,m^{\prime}\}$ such that ${m}^{2}={m^{\prime}}^{2}=mm^{\prime}=m^{\prime}m=0$. Put $N=M$ and let $A=\{r+r^{\prime}m|\ r,r^{\prime}\in R\}$ and $B=\{u+u^{\prime}m^{\prime}|\ u,u^{\prime}\in R\}$ be subalgebras of $M$. Consider the generalized matrix algebra $\mathcal{G}=\left(\begin{array}[]{cc}A&M\\
N&B\\
\end{array}\right)$ under the usual addition, the usual scalar multiplication and the multiplication defined by
$$\left(\begin{array}[]{cc}a&m\\
n&b\\
\end{array}\right)\left(\begin{array}[]{cc}a^{\prime}&m^{\prime}\\
n^{\prime}&b^{\prime}\\
\end{array}\right)=\left(\begin{array}[]{cc}aa^{\prime}&am^{\prime}+mb^{\prime%
}\\
na^{\prime}+bn^{\prime}&bb^{\prime}\\
\end{array}\right).$$
The generalized matrix algebra $\mathcal{G}$ is trivial since $MN=0=NM$. The linear map $\mathcal{L}:\mathcal{G}\longrightarrow\mathcal{G}$ defined by
$$\displaystyle\mathcal{L}\left(\begin{array}[]{cc}r+r^{\prime}m&s+s^{\prime}m+s%
^{\prime\prime}m^{\prime}\\
t+t^{\prime}m+t^{\prime\prime}m^{\prime}&u+u^{\prime}m^{\prime}\\
\end{array}\right)=\left(\begin{array}[]{cc}u^{\prime}m&-s^{\prime\prime}m-s^{%
\prime}m^{\prime}\\
-t^{\prime\prime}m-t^{\prime}m^{\prime}&r^{\prime}m^{\prime}\\
\end{array}\right),$$
where all coefficients are in the ring $R$, is an improper Lie derivation. Consequently, the ordinary Lie higher derivation induced by $\mathcal{L}$ is improper.
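The Lie derivation identity for the map above can be verified numerically. The following Python sketch (our own coordinate encoding, not from the paper) represents $A$- and $B$-elements by pairs and $M$-, $N$-elements by triples of coefficients in the bases $\{1,m\}$, $\{1,m^{\prime}\}$ and $\{1,m,m^{\prime}\}$, implements the trivial block multiplication with $MN=NM=0$, and checks $\mathcal{L}([x,y])=[\mathcal{L}(x),y]+[x,\mathcal{L}(y)]$ on random integer samples.

```python
import random

# Encoding: x = (a, s, t, b) with a = (r, r') in A, s = (s, s', s'') in M,
# t = (t, t', t'') in N, b = (u, u') in B; m^2 = m'^2 = mm' = m'm = 0.

def aa(x, y):  # (r + r'm)(p + p'm) in A
    return (x[0]*y[0], x[0]*y[1] + x[1]*y[0])

def bb(x, y):  # (u + u'm')(v + v'm') in B
    return (x[0]*y[0], x[0]*y[1] + x[1]*y[0])

def am(a, s):  # left A-action on M
    return (a[0]*s[0], a[0]*s[1] + a[1]*s[0], a[0]*s[2])

def mb(s, b):  # right B-action on M
    return (s[0]*b[0], s[1]*b[0], s[0]*b[1] + s[2]*b[0])

def na(t, a):  # right A-action on N
    return (t[0]*a[0], t[0]*a[1] + t[1]*a[0], t[2]*a[0])

def bn(b, t):  # left B-action on N
    return (b[0]*t[0], b[0]*t[1], b[0]*t[2] + b[1]*t[0])

def add(x, y):
    return tuple(tuple(p + q for p, q in zip(u, v)) for u, v in zip(x, y))

def neg(x):
    return tuple(tuple(-p for p in u) for u in x)

def mul(x, y):  # block multiplication; MN and NM terms are zero (trivial G)
    a1, s1, t1, b1 = x
    a2, s2, t2, b2 = y
    return (aa(a1, a2),
            tuple(p + q for p, q in zip(am(a1, s2), mb(s1, b2))),
            tuple(p + q for p, q in zip(na(t1, a2), bn(b1, t2))),
            bb(b1, b2))

def bracket(x, y):
    return add(mul(x, y), neg(mul(y, x)))

def L(x):  # the map of Example 5.2 in coordinates
    (r, r1), (s, s1, s2), (t, t1, t2), (u, u1) = x
    return ((0, u1), (0, -s2, -s1), (0, -t2, -t1), (0, r1))

def rand_elt():
    return ((random.randint(-5, 5), random.randint(-5, 5)),
            tuple(random.randint(-5, 5) for _ in range(3)),
            tuple(random.randint(-5, 5) for _ in range(3)),
            (random.randint(-5, 5), random.randint(-5, 5)))

for _ in range(100):
    x, y = rand_elt(), rand_elt()
    assert L(bracket(x, y)) == add(bracket(L(x), y), bracket(x, L(y)))
print("Lie derivation identity verified on random samples")
```

This only confirms that $\mathcal{L}$ is a Lie derivation; its improperness is the algebraic content of the example.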
Applying Theorems 2.2 and 2.3 to the special case $N=0$, we arrive at the following characterizations of (Lie) higher derivations for the triangular algebra ${\rm Tri}(A,M,B)$, which were already presented in [13].
Corollary 5.3.
$\bullet$ Let $\mathcal{L}=\{\mathcal{L}_{k}\}_{k\in\mathbb{N}}$ be a sequence of linear maps on ${\rm Tri}(A,M,B)$. Then
$\mathcal{L}$ is a Lie higher derivation if and only if $\mathcal{L}_{k}$ can be presented in the form
$$\mathcal{L}_{k}\left(\begin{array}[]{cc}a&m\\
&b\\
\end{array}\right)=\left(\begin{array}[]{cc}\mathfrak{p}_{1k}(a)+\mathfrak{p}_%
{2k}(b)&\sum_{i+j=k,i\neq k}\big{(}(\mathfrak{p}_{1i}(a)+\mathfrak{p}_{2i}(b))%
m_{j}-m_{j}(\mathfrak{q}_{2i}(b)+\mathfrak{q}_{1i}(a))\big{)}+\mathfrak{f}_{3k%
}(m)\\
&\mathfrak{q}_{1k}(a)+\mathfrak{q}_{2k}(b)\\
\end{array}\right)$$
where $\{m_{j}\}_{j\in\mathbb{N}}\subseteq M$, and for each $k\in\mathbb{N}$, $\mathfrak{q}_{1k}:A\longrightarrow Z(B)$, $\mathfrak{p}_{2k}:B\longrightarrow Z(A),$ $\mathfrak{f}_{3k}:M\longrightarrow M$ are linear maps satisfying:
(1)
$\{\mathfrak{p}_{1k}\}_{k\in\mathbb{N}}$, $\{\mathfrak{q}_{2k}\}_{k\in\mathbb{N}}$ are Lie higher derivations on $A,B$, respectively,
(2)
$\mathfrak{q}_{1k}[a,a^{\prime}]=0$ and $\mathfrak{p}_{2k}[b,b^{\prime}]=0$ for all $a,a^{\prime}\in A,b,b^{\prime}\in B,$ and
(3)
$\mathfrak{f}_{3k}(am)=\sum_{i+j=k}\big{(}\mathfrak{p}_{1i}(a)\mathfrak{f}_{3j}(m)-\mathfrak{f}_{3j}(m)\mathfrak{q}_{1i}(a)\big{)},$
$\mathfrak{f}_{3k}(mb)=\sum_{i+j=k}\big{(}\mathfrak{f}_{3j}(m)\mathfrak{q}_{2i}(b)-\mathfrak{p}_{2i}(b)\mathfrak{f}_{3j}(m)\big{)}$ for all $a\in A,b\in B,m\in M.$
$\bullet$ Let $\mathcal{D}=\{\mathcal{D}_{k}\}_{k\in\mathbb{N}}$ be a sequence of linear maps on ${\rm Tri}(A,M,B)$. Then
$\mathcal{D}$ is a higher derivation if and only if $\mathcal{D}_{k}$ can be presented in the form
$$\small\mathcal{D}_{k}\left(\begin{array}[]{cc}a&m\\
&b\\
\end{array}\right)=\left(\begin{array}[]{cc}\mathtt{p}_{1k}(a)+\mathtt{p}_{2k}%
(b)&\sum_{i+j=k,i\neq k}\big{(}(\mathtt{p}_{1i}(a)+\mathtt{p}_{2i}(b))m_{j}-m_%
{j}(\mathtt{q}_{2i}(b)+\mathtt{q}_{1i}(a))\big{)}+\mathtt{f}_{3k}(m)\\
&\mathtt{q}_{1k}(a)+\mathtt{q}_{2k}(b)\\
\end{array}\right)$$
where $\{m_{j}\}_{j\in\mathbb{N}}\subseteq M$, and for each $k\in\mathbb{N}$, $\mathtt{q}_{1k}:A\longrightarrow Z(B)$, $\mathtt{p}_{2k}:B\longrightarrow Z(A),$ $\mathtt{f}_{3k}:M\longrightarrow M$ are linear maps satisfying:
(1)
$\{\mathtt{p}_{1k}\}_{k\in\mathbb{N}}$, $\{\mathtt{q}_{2k}\}_{k\in\mathbb{N}}$ are higher derivations on $A,B$, respectively,
(2)
$\mathtt{q}_{1k}[a,a^{\prime}]=0$ and $\mathtt{p}_{2k}[b,b^{\prime}]=0$ for all $a,a^{\prime}\in A,b,b^{\prime}\in B,$ and
(3)
$\mathtt{f}_{3k}(am)=\sum_{i+j=k}\big{(}\mathtt{p}_{1i}(a)\mathtt{f}_{3j}(m)-\mathtt{f}_{3j}(m)\mathtt{q}_{1i}(a)\big{)},$
$\mathtt{f}_{3k}(mb)=\sum_{i+j=k}\big{(}\mathtt{f}_{3j}(m)\mathtt{q}_{2i}(b)-\mathtt{p}_{2i}(b)\mathtt{f}_{3j}(m)\big{)}$ for all $a\in A,b\in B,m\in M.$
LHD property of unital algebras with a nontrivial idempotent
Let $A$ be a unital algebra with a nontrivial idempotent $e$ and $f=1-e$. From the Peirce decomposition we can present $A$ as $A=\left(\begin{array}[]{cc}eAe&eAf\\
fAe&fAf\\
\end{array}\right)$. By applying Theorem 4.4 to the generalized matrix algebra $A$ we get the next result, which is the “higher” version of [14, Corollary 4.3].
Corollary 5.4.
Let $A$ be a $2-$torsion free unital algebra with a nontrivial idempotent $e$ satisfying
$$eae\cdot eAf=0\ \text{implies}\ eae=0,\quad\text{and}\quad eAf\cdot faf=0\ \text{implies}\ faf=0,$$
(5.1)
for any $a\in A,$ where $f=1-e$. If the following conditions hold:
(I)
$Z(fAf)=Z(A)f$, $Z(eAe)=Z(A)e$
(II)
one of the following three conditions holds:
(i)
either $eAe$ or $fAf$ does not contain nonzero central ideals
(ii)
$eAe$ and $fAf$ are domains
(iii)
either $eAf$ or $fAe$ is strongly faithful,
then $A$ has LHD property.
As immediate consequences of Corollary 5.4, in the next results we obtain the LHD property of the full matrix algebra $M_{n}(A)$ and of $B(X)$, the algebra of all bounded operators on a Banach space $X$ with $\dim(X)\geq 2$. The LHD property of $B(X)$ with $\dim(X)>1$ was proved by Han [7, Corollary 3.3] by a completely different method. Also, the Lie derivation property of $B(X)$ was proved by Lu and Jing [9] for Lie derivable maps at zero and at idempotents. In addition, for the properness of nonlinear Lie derivations on $B(X)$ see [10].
Corollary 5.5.
The algebra $B(X)$ of bounded operators on a Banach space $X$ with $\dim(X)\geq 2$ has LHD property.
Proof.
It follows from Corollary 5.4 and the proof appearing in [14, Corollary 4.4].
∎
Corollary 5.6.
Let $A$ be a $2-$torsion free unital algebra. The full matrix algebra $\mathfrak{A}=M_{n}(A)$ with $n\geq 3$ enjoys the LHD property.
Proof.
Consider nontrivial idempotents $e=e_{11}$ and $f=e_{22}+\cdots+e_{nn}$. It is obvious that $e\mathfrak{A}e=A,f\mathfrak{A}f=M_{n-1}(A)$. From $Z(\mathfrak{A})=Z(A)1_{\mathfrak{A}}$ we conclude that $Z(e\mathfrak{A}e)=Z(\mathfrak{A})e$ and $Z(f\mathfrak{A}f)=Z(\mathfrak{A})f,$ so assumption (I) of Corollary 5.4 holds. Moreover, [4, Lemma 1] guarantees that the algebra $f\mathfrak{A}f=M_{n-1}(A)$ does not contain nonzero central ideals, so part (i) of condition (II) in Corollary 5.4 is fulfilled. Hence by the mentioned corollary $M_{n}(A)$ has the LHD property.
∎
We remark that Corollary 5.6 is the “higher” version of [4, Corollary 1].
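The Peirce decomposition underlying Corollary 5.6 is easy to check numerically. The following Python sketch (our illustration, for $n=3$ over $\mathbb{R}$; the size and base ring are arbitrary) verifies that $e=e_{11}$ and $f=1-e$ are orthogonal idempotents and that any $X$ splits into the four Peirce corners, with $e\mathfrak{A}e$ the $1\times 1$ block and $f\mathfrak{A}f$ a copy of $M_{n-1}(A)$.

```python
import numpy as np

# Peirce decomposition of M_3(R) with e = e_11 and f = 1 - e.
n = 3
e = np.zeros((n, n))
e[0, 0] = 1.0
f = np.eye(n) - e

assert np.allclose(e @ e, e) and np.allclose(f @ f, f)  # idempotents
assert np.allclose(e @ f, np.zeros((n, n)))             # orthogonal

rng = np.random.default_rng(0)
X = rng.integers(-5, 6, size=(n, n)).astype(float)

# X = eXe + eXf + fXe + fXf, since e + f = 1.
assert np.allclose(e @ X @ e + e @ X @ f + f @ X @ e + f @ X @ f, X)
print("Peirce decomposition verified for a random X in M_3(R)")
```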
6. Proofs of Theorems 2.2 and 2.3
Proof of Theorem 2.2
Proof.
We proceed by induction on $k$. The case $k=1$ follows from Proposition 2.1. Suppose that the conclusion holds for every integer less than $k$. By ($\bigstar$), $\mathcal{L}_{k}$ has the presentation
$$\mathcal{L}_{k}\left(\begin{array}[]{cc}a&m\\
n&b\end{array}\right)=\left(\begin{array}[]{cc}\mathfrak{p}_{1k}(a)+\mathfrak{%
p}_{2k}(b)+\mathfrak{p}_{3k}(m)+\mathfrak{p}_{4k}(n)&\mathfrak{f}_{1k}(a)+%
\mathfrak{f}_{2k}(b)+\mathfrak{f}_{3k}(m)+\mathfrak{f}_{4k}(n)\\
\mathfrak{g}_{1k}(a)+\mathfrak{g}_{2k}(b)+\mathfrak{g}_{3k}(m)+\mathfrak{g}_{4%
k}(n)&\mathfrak{q}_{1k}(a)+\mathfrak{q}_{2k}(b)+\mathfrak{q}_{3k}(m)+\mathfrak%
{q}_{4k}(n)\\
\end{array}\right),$$
for each $\left(\begin{array}[]{cc}a&m\\
n&b\end{array}\right)\in\mathcal{G}.$ Applying $\mathcal{L}_{k}$ to the commutator $\left[\left(\begin{array}[]{cc}a&m\\
0&0\end{array}\right),\left(\begin{array}[]{cc}0&0\\
0&b\end{array}\right)\right]$, we have
$$\displaystyle\begin{pmatrix}\mathfrak{p}_{3k}(mb)&\mathfrak{f}_{3k}(mb)\\
\mathfrak{g}_{3k}(mb)&\mathfrak{q}_{3k}(mb)\end{pmatrix}$$
(6.1)
$$\displaystyle\quad=\sum_{i+j=k,i,j\neq 0}\left[\begin{pmatrix}\mathfrak{p}_{1i%
}(a)+\mathfrak{p}_{3i}(m)&\mathfrak{f}_{1i}(a)+\mathfrak{f}_{3i}(m)\\
\mathfrak{g}_{1i}(a)+\mathfrak{g}_{3i}(m)&\mathfrak{q}_{1i}(a)+\mathfrak{q}_{3%
i}(m)\end{pmatrix},\begin{pmatrix}\mathfrak{p}_{2j}(b)&\mathfrak{f}_{2j}(b)\\
\mathfrak{g}_{2j}(b)&\mathfrak{q}_{2j}(b)\end{pmatrix}\right]$$
(6.2)
$$\displaystyle\qquad+\left[\begin{pmatrix}\mathfrak{p}_{1k}(a)+\mathfrak{p}_{3k%
}(m)&\mathfrak{f}_{1k}(a)+\mathfrak{f}_{3k}(m)\\
\mathfrak{g}_{1k}(a)+\mathfrak{g}_{3k}(m)&\mathfrak{q}_{1k}(a)+\mathfrak{q}_{3%
k}(m)\end{pmatrix},\begin{pmatrix}0&0\\
0&b\end{pmatrix}\right]$$
(6.3)
$$\displaystyle\qquad+\left[\begin{pmatrix}a&m\\
0&0\end{pmatrix},\begin{pmatrix}\mathfrak{p}_{2k}(b)&\mathfrak{f}_{2k}(b)\\
\mathfrak{g}_{2k}(b)&\mathfrak{q}_{2k}(b)\end{pmatrix}\right].$$
(6.4)
Using the equalities $\mathfrak{p}_{1i}=P_{i}-p_{i},\ \mathfrak{q}_{2j}=Q_{j}-q_{j},\ \mathfrak{q}_{1i}=Q^{\prime\prime}_{i}+q^{\prime\prime}_{i},\ \mathfrak{p}_{2j}=P^{\prime\prime}_{j}+p^{\prime\prime}_{j}$, it follows that
$$\displaystyle\mathfrak{f}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{1k}(a)b+\mathfrak{f}_{3k}(m)b+a\mathfrak{f}_{2k}(b)%
+m\mathfrak{q}_{2k}(b)-\mathfrak{p}_{2k}(b)m$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}\big{(}\mathfrak{p}_{1i}(a)+%
\mathfrak{p}_{3i}(m)\big{)}\mathfrak{f}_{2j}(b)+\big{(}\mathfrak{f}_{1i}(a)+%
\mathfrak{f}_{3i}(m)\big{)}\mathfrak{q}_{2j}(b)\big{)}$$
$$\displaystyle-\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{p}_{2j}(b)\big{(}%
\mathfrak{f}_{1i}(a)+\mathfrak{f}_{3i}(m)\big{)}+\mathfrak{f}_{2j}(b)\big{(}%
\mathfrak{q}_{1i}(a)+\mathfrak{q}_{3i}(m)\big{)}\big{)}$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{1k}(a)b+\mathfrak{f}_{3k}(m)b+a\mathfrak{f}_{2k}(b)%
+m\mathfrak{q}_{2k}(b)-\mathfrak{p}_{2k}(b)m$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}P_{i}(a)\mathfrak{f}_{2j}(b)-%
\mathfrak{f}_{2j}(b)Q^{\prime\prime}_{i}(a)\big{)}-\sum_{i+j=k,i,j\neq 0}p_{i}%
(a)\mathfrak{f}_{2j}(b)$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{f}_{1i}(a)Q_{j}(b)-P^{%
\prime\prime}_{j}(b)\mathfrak{f}_{1i}(a)\big{)}+\sum_{i+j=k,i,j\neq 0}%
\mathfrak{p}_{3i}(m)\mathfrak{f}_{2j}(b)$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{f}_{3i}(m)Q_{j}(b)-P^{%
\prime\prime}_{j}(b)\mathfrak{f}_{3i}(m)\big{)}-\sum_{i+j=k,i,j\neq 0}%
\mathfrak{f}_{1i}(a)q_{j}(b)$$
$$\displaystyle-\sum_{i+j=k,i,j\neq 0}\mathfrak{f}_{3i}(m)q_{j}(b)-\sum_{i+j=k,i%
,j\neq 0}p^{\prime\prime}_{j}(b)\mathfrak{f}_{3i}(m)-\sum_{i+j=k,i,j\neq 0}p^{%
\prime\prime}_{j}(b)\mathfrak{f}_{1i}(a)$$
$$\displaystyle-\sum_{i+j=k,i,j\neq 0}\mathfrak{f}_{2j}(b)q^{\prime\prime}_{i}(a%
)-\sum_{i+j=k,i,j\neq 0}\mathfrak{f}_{2j}(b)\mathfrak{q}_{3i}(m)$$
If we put $m=0$, $a=1$, $b=1$ in (6) and use the definitions of $p_{i},p^{\prime\prime}_{i},q_{i}$ and $q^{\prime\prime}_{i}$ given on page $5$ for $a=1$, $b=1$, then we arrive at
$$\displaystyle 0$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{1k}(1)+\mathfrak{f}_{2k}(1)-\sum_{i+j=k,i,j\neq 0}p%
_{i}(1)\mathfrak{f}_{2j}(1)-\sum_{i+j=k,i,j\neq 0}\mathfrak{f}_{2j}(1)q^{%
\prime\prime}_{i}(1)$$
$$\displaystyle-\sum_{i+j=k,i,j\neq 0}\mathfrak{f}_{1i}(1)q_{j}(1)-\sum_{i+j=k,i%
,j\neq 0}p^{\prime\prime}_{j}(1)\mathfrak{f}_{1i}(1)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{1k}(1)+\mathfrak{f}_{2k}(1)+\sum_{i+j=k,i,j\neq 0}%
\sum_{r=1}^{\eta_{i}}\sum_{(\alpha+\beta)_{r}=i}m_{\beta_{1}}n_{\alpha_{1}}%
\ldots m_{\beta_{r}}n_{\alpha_{r}}m_{j}$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\sum_{r=1}^{\eta_{i}}\sum_{(\alpha+\beta)_%
{r}=i}m_{j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_%
{r}=j}m_{i}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_%
{r}=j}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}m_{i};$$
from which we get $\mathfrak{f}_{1k}(1)=-\mathfrak{f}_{2k}(1)$. Note that in the above calculations, by the induction hypothesis we have $\mathfrak{f}_{1j}(1)=-\mathfrak{f}_{2j}(1)=m_{j}$ for all $j<k$ and
$$\sum_{i+j=k,i,j\neq 0}\big{(}P_{i}(1)m-mQ^{\prime\prime}_{i}(1)\big{)}=0,\qquad\sum_{i+j=k,i,j\neq 0}\big{(}mQ_{j}(1)-P^{\prime\prime}_{j}(1)m\big{)}=0$$
for all $m\in M$.
Next, if we apply (6) with $m=0,\,b=1$ and use the equations $\mathfrak{q}_{2j}=Q_{j}-q_{j}$, $\mathfrak{p}_{2j}=P^{\prime\prime}_{j}+p^{\prime\prime}_{j}$, then we have
$$\displaystyle\mathfrak{f}_{1k}(a)$$
$$\displaystyle=$$
$$\displaystyle a\mathfrak{f}_{1k}(1)+\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{p}_%
{1i}(a)\mathfrak{f}_{1j}(1)-\mathfrak{f}_{1i}(a)\mathfrak{q}_{2j}(1)+\mathfrak%
{p}_{2j}(1)\mathfrak{f}_{1i}(a)-\mathfrak{f}_{1j}(1)\mathfrak{q}_{1i}(a)\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\big{(}\mathfrak{p}_{1i}(a)m_{j}-m_{j}%
\mathfrak{q}_{1i}(a)\big{)}+\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{p}^{\prime%
\prime}_{j}(1)\mathfrak{f}_{1i}(a)+\mathfrak{f}_{1i}(a)\mathfrak{q}_{j}(1)\big%
{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\big{(}\mathfrak{p}_{1i}(a)m_{j}-m_{j}%
\mathfrak{q}_{1i}(a)\big{)}$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+%
\beta)_{r}=j}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}\sum%
_{s+t=i}(P_{t}(a)m_{s}-m_{s}Q^{\prime\prime}_{t}(a))\big{)}$$
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}\sum_{s+t=i}\big{(}P_{t}(a)m_{s}-m_%
{s}Q^{\prime\prime}_{t}(a)\big{)}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}%
=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\big{(}P_{i}(a)m_{j}-m_{j}Q^{\prime\prime}_{i%
}(a)\big{)}.$$
Note that $\sum_{i+j=k,i,j\neq 0}\big{(}P^{\prime\prime}_{j}(1)\mathfrak{f}_{1i}(a)-\mathfrak{f}_{1i}(a)Q_{j}(1)\big{)}=0$ by the induction hypothesis.
Similarly one can show that $\mathfrak{g}_{1k}(1)=-\mathfrak{g}_{2k}(1)$ and that analogous equations hold for
$\mathfrak{g}_{1k}(a),\mathfrak{g}_{2k}(b),\mathfrak{f}_{2k}(b)$.
On the other hand when we set $a=0$ in (6) we get
$$\displaystyle\mathfrak{f}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{3k}(m)b+m\mathfrak{q}_{2k}(b)-\mathfrak{p}_{2k}(b)m%
+\hskip-11.381102pt\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{p}_{3i}(m)\mathfrak{%
f}_{2j}(b)+\mathfrak{f}_{3i}(m)\mathfrak{q}_{2j}(b)\big{)}$$
(6.7)
$$\displaystyle-\hskip-11.381102pt\sum_{i+j=k,i,j\neq 0}\big{(}\mathfrak{p}_{2j}%
(b)\mathfrak{f}_{3i}(m)+\mathfrak{f}_{2j}(b)\mathfrak{q}_{3i}(m)\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\big{(}\mathfrak{f}_{3i}(m)\mathfrak{q}_{2j}(b)-%
\mathfrak{p}_{2j}(b)\mathfrak{f}_{3i}(m)\big{)}+\hskip-11.381102pt\sum_{i+j=k,%
i,j\neq 0}\big{(}\mathfrak{p}_{3i}(m)\mathfrak{f}_{2j}(b)-\mathfrak{f}_{2j}(b)%
\mathfrak{q}_{3i}(m)\big{)}.$$
We now compute the expression appearing in the last sum of (6.7).
$$\displaystyle\mathfrak{p}_{3i}(m)\mathfrak{f}_{2j}(b)\hskip-2.845276pt-\hskip-%
2.845276pt\mathfrak{f}_{2j}(b)\mathfrak{q}_{3i}(m)$$
$$\displaystyle=$$
$$\displaystyle\quad\hskip-11.381102pt\sum_{s+t=i}\sum_{\lambda+\mu=j}\mathfrak{%
f}_{3t}(m)\mathcal{N}_{s}(m_{\lambda}Q_{\mu}(b)-P^{\prime\prime}_{\mu}(b)m_{%
\lambda})+\sum_{s+t=i}\sum_{\lambda+\mu=j}(m_{\lambda}Q_{\mu}(b)-P^{\prime%
\prime}_{\mu}(b)m_{\lambda})\mathcal{N}_{s}\mathfrak{f}_{3t}(m)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{3i}(m)\sum_{s+\lambda+\mu=j}\mathcal{N}_{s}(m_{%
\lambda}Q_{\mu}(b)-P^{\prime\prime}_{\mu}(b)m_{\lambda})+\sum_{s+\lambda+\mu=j%
}(m_{\lambda}Q_{\mu}(b)-P^{\prime\prime}_{\mu}(b)m_{\lambda})\mathcal{N}_{s}%
\mathfrak{f}_{3i}(m)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{3i}(m)\sum_{s+\lambda+\mu=j}\sum_{r=1}^{\nu_{s}}%
\hskip-2.845276pt\sum_{(\alpha+\beta)_{r}+\gamma=s}\hskip-14.226378pt(n_{%
\alpha_{1}}m_{\beta_{1}}\ldots n_{\alpha_{r}}m_{\beta_{r}}n_{\gamma}+n_{s})(m_%
{\lambda}Q_{\mu}(b)-P^{\prime\prime}_{\mu}(b)m_{\lambda})$$
$$\displaystyle+\sum_{s+\lambda+\mu=j}\sum_{r=1}^{\nu_{s}}\hskip-2.845276pt\sum_%
{(\alpha+\beta)_{r}+\gamma=s}\hskip-14.226378pt(m_{\lambda}Q_{\mu}(b)-P^{%
\prime\prime}_{\mu}(b)m_{\lambda})(n_{\alpha_{1}}m_{\beta_{1}}\ldots n_{\alpha%
_{r}}m_{\beta_{r}}n_{\gamma}+n_{s})\mathfrak{f}_{3i}(m)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{f}_{3i}(m)\sum_{r^{\prime}=1}^{\eta_{j}}\sum_{(\alpha+%
\beta)_{r^{\prime}}+\mu=j,\,\mu\leq j-2}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{%
\prime}}}\ldots n_{\alpha_{1}}(m_{\beta_{1}}Q_{\mu}(b)-P^{\prime\prime}_{\mu}(%
b)m_{\beta_{1}})$$
$$\displaystyle+\sum_{r^{\prime}=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r^{\prime}}+%
\mu=j,\,\mu\leq j-2}(m_{\beta_{1}}Q_{\mu}(b)-P^{\prime\prime}_{\mu}(b)m_{\beta%
_{1}})n_{\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}%
\mathfrak{f}_{3i}(m).$$
The indices $s,t$ appearing in the first equation above take all values between $0$ and $i$, and $i$ takes all values between $1$ and $k-1$; since this range of summation is symmetric, the second equation above holds.
Note that one maximal length for $(n_{\alpha_{1}}m_{\beta_{1}})\ldots(n_{\alpha_{r}}m_{\beta_{r}})$ is attained at
$\mu=0,\lambda=1$; in this case $j=s+1$ and the length of
$(n_{\alpha_{1}}m_{\beta_{1}})\ldots(n_{\alpha_{\nu_{s}}}m_{\beta_{\nu_{s}}})(n_{\gamma}m_{1})$ is
$$\nu_{s}+1=\nu_{j-1}+1=\left\{\begin{array}[]{ccc}j/2&;&j-1\in\mathbb{O}\\
(j-1)/2&;&j-1\in\mathbb{E}\end{array}\right.=\left\{\begin{array}[]{ccc}j/2&;&%
j\in\mathbb{E}\\
(j-1)/2&;&j\in\mathbb{O}\end{array}\right.=\eta_{j}.$$
Note that the length of
$$(n_{\alpha_{1}}m_{\beta_{1}})\ldots(n_{\alpha_{\nu_{s}}}m_{\beta_{\nu_{s}}})(n_{\gamma}m_{\lambda}),\qquad\sum_{i=1}^{\nu_{s}}(\alpha_{i}+\beta_{i})+\gamma+\lambda=j,$$
in the case where $\lambda\neq 1$ is the same as the length of
$(n_{\alpha_{1}}m_{\beta_{1}})\ldots(n_{\alpha_{\nu_{s}}}m_{\beta_{\nu_{s}}})(n_{\gamma}m_{1})$, namely $\eta_{j}$.
Now, substituting this relation into (6.7), we get
$$\mathfrak{f}_{3k}(mb)=\sum_{i+j=k}\big{(}\mathfrak{f}_{3i}(m)Q_{j}(b)-P^{\prime\prime}_{j}(b)\mathfrak{f}_{3i}(m)\big{)}.$$
Similarly one can check the analogous equations for $\mathfrak{f}_{3k}(am),\mathfrak{g}_{4k}(na),\mathfrak{g}_{4k}(bn)$. Again from (6.1) we get
$$\displaystyle\mathfrak{p}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle a\mathfrak{p}_{2k}(b)+m\mathfrak{g}_{2k}(b)-\mathfrak{p}_{2k}(b)%
a+\hskip-11.381102pt\sum_{i+j=k,\,i,j\neq 0}\hskip-11.381102pt\big{(}\big{(}%
\mathfrak{p}_{1i}(a)+\mathfrak{p}_{3i}(m)\big{)}\mathfrak{p}_{2j}(b)+\big{(}%
\mathfrak{f}_{1i}(a)+\mathfrak{f}_{3i}(m)\big{)}\mathfrak{g}_{2j}(b)\big{)}$$
(6.8)
$$\displaystyle-\hskip-11.381102pt\sum_{i+j=k,\,i,j\neq 0}\hskip-11.381102pt\big%
{(}\mathfrak{p}_{2j}(b)\big{(}\mathfrak{p}_{1i}(a)+\mathfrak{p}_{3i}(m)\big{)}%
+\mathfrak{f}_{2j}(b)\big{(}\mathfrak{g}_{1i}(a)+\mathfrak{g}_{3i}(m)\big{)}%
\big{)}.$$
If we put $a=0,\,b=1$ in (6.8), then we have
$$\displaystyle\mathfrak{p}_{3k}(m)$$
$$\displaystyle=$$
$$\displaystyle-mn_{k}-\sum_{i+j=k,\,i,j\neq 0}\bigg{(}\mathfrak{f}_{3i}(m)n_{j}%
+m_{j}\sum_{s+l+t=i}\mathcal{N}_{l}\mathfrak{f}_{3t}(m)\mathcal{N}_{s}\bigg{)}$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\mathfrak{p}_{3i}(m)\sum_{r=1}^{\eta_{j}%
}\sum_{(\alpha+\beta)_{r}=j}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{%
\alpha_{r}}$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta%
)_{r}=j}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}\mathfrak%
{p}_{3i}(m)$$
$$\displaystyle=$$
$$\displaystyle-\sum_{i+j=k}\mathfrak{f}_{3i}(m)n_{j}-\sum_{i+j=k,\,i,j\neq 0}%
\sum_{s+l+t=i}m_{j}\mathcal{N}_{l}\mathfrak{f}_{3t}(m)\mathcal{N}_{s}$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta%
)_{r}=j}\sum_{s+t=i}\mathfrak{f}_{3t}(m)\mathcal{N}_{s}m_{\beta_{1}}n_{\alpha_%
{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta%
)_{r}=j}\sum_{s+t=i}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{%
r}}\mathfrak{f}_{3t}(m)\mathcal{N}_{s}$$
$$\displaystyle=$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\nu_{j}}\sum_{(\alpha+\beta)%
_{r}+\gamma=j}\mathfrak{f}_{3i}(m)\bigg{(}n_{j}+n_{\gamma}m_{\beta_{1}}n_{%
\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}\bigg{)}$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta%
)_{r}=j}\sum_{s+t=i}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{%
r}}\mathfrak{f}_{3t}(m)\mathcal{N}_{s}$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta%
)_{r}=j}\sum_{s+t=i}m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{%
r}}\mathfrak{f}_{3t}(m)\mathcal{N}_{s}$$
$$\displaystyle=$$
$$\displaystyle-\sum_{i+j=k}\mathfrak{f}_{3i}(m)\mathcal{N}_{j}.$$
Also it is not difficult to check similar relations for $\mathfrak{q}_{3k}(m),\mathfrak{p}_{4k}(n),\mathfrak{q}_{4k}(n)$. From the $(2,1)$-entry of equation (6.1) we have
$$\displaystyle\mathfrak{g}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle-b(\mathfrak{g}_{1k}(a)+\mathfrak{g}_{3k}(m))-\mathfrak{g}_{2k}(b)a$$
$$\displaystyle+\sum_{i+j=k}\big{(}(\mathfrak{g}_{1i}(a)+\mathfrak{g}_{3i}(m))%
\mathfrak{p}_{2j}(b)+(\mathfrak{q}_{1i}(a)+\mathfrak{q}_{3i}(m))\mathfrak{g}_{%
2j}(b)\big{)}$$
$$\displaystyle-\sum_{i+j=k}\big{(}\mathfrak{g}_{2j}(b)(\mathfrak{p}_{1i}(a)+%
\mathfrak{p}_{3i}(m))+\mathfrak{q}_{2j}(b)(\mathfrak{g}_{1i}(a)+\mathfrak{g}_{%
3i}(m))\big{)}.$$
Setting $a=0,\,b=1$, we have
$$\displaystyle 2\mathfrak{g}_{3k}(m)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,\,i,j\neq 0}\big{(}\mathfrak{g}_{3i}(m)\mathfrak{p}_{%
2j}(1)-\mathfrak{q}_{2j}(1)\mathfrak{g}_{3i}(m)-\mathfrak{q}_{3i}(m)n_{j}+n_{j%
}\mathfrak{p}_{3i}(m)\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,\,i,j\neq 0}(\mathfrak{g}_{3i}(m)p^{\prime\prime}_{j}%
(1)+q_{j}(1)\mathfrak{g}_{3i}(m))$$
$$\displaystyle-\sum_{s+t+j=k}(\mathcal{N}_{s}\mathfrak{f}_{3t}(m)n_{j}+n_{j}%
\mathfrak{f}_{3t}(m)\mathcal{N}_{s})$$
$$\displaystyle=$$
$$\displaystyle-\sum_{s+t+l+j=k}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}=j}%
\mathcal{N}_{s}\mathfrak{f}_{3t}(m)\mathcal{N}_{l}m_{\beta_{1}}n_{\alpha_{1}}%
\ldots m_{\beta_{r}}n_{\alpha_{r}}$$
$$\displaystyle-\sum_{s+t+l+j=k}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}=j}%
n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathcal{N}_{s}%
\mathfrak{f}_{3t}(m)\mathcal{N}_{l}$$
$$\displaystyle-\sum_{s+t+j=k}(\mathcal{N}_{s}\mathfrak{f}_{3t}(m)n_{j}+n_{j}%
\mathfrak{f}_{3t}(m)\mathcal{N}_{s})$$
$$\displaystyle=$$
$$\displaystyle-\sum_{s+t+j=k}\big{(}\mathcal{N}_{s}\mathfrak{f}_{3t}(m)(n_{j}+%
\sum_{r=1}^{\nu_{j}}\sum_{(\alpha+\beta)_{r}+l=j}n_{\alpha_{r}}m_{\beta_{r}}%
\ldots n_{\alpha_{1}}m_{\beta_{1}}n_{l})\big{)}$$
$$\displaystyle-\sum_{s+t+j=k}\big{(}(\sum_{r=1}^{\nu_{j}}\sum_{(\alpha+\beta)_{%
r}+l=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}n_{l}+n_{j%
})\mathfrak{f}_{3t}(m)\mathcal{N}_{s}\big{)}$$
$$\displaystyle=$$
$$\displaystyle-2\sum_{s+t+j=k}\mathcal{N}_{s}\mathfrak{f}_{3t}(m)\mathcal{N}_{j}.$$
As $A$ is $2$-torsion free we get $\mathfrak{g}_{3k}(m)=-\sum_{s+t+j=k}\mathcal{N}_{s}\mathfrak{f}_{3t}(m)\mathcal{N}_{j}$. By a similar argument and the $2$-torsion freeness of $B$, it follows that
$\mathfrak{f}_{4k}(n)=-\sum_{s+t+j=k}\mathcal{M}_{s}\mathfrak{g}_{4t}(n)\mathcal{M}_{j}$.
From the $(2,2)$-entry of equation (6.1) we have
$$\displaystyle\mathfrak{q}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle(\mathfrak{q}_{1k}(a)+\mathfrak{q}_{3k}(m))b-b(\mathfrak{q}_{1k}(a)+\mathfrak{q}_{3k}(m))-\mathfrak{g}_{2k}(b)m$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\big{(}(\mathfrak{g}_{1i}(a)+\mathfrak{g%
}_{3i}(m))\mathfrak{f}_{2j}(b)+(\mathfrak{q}_{1i}(a)+\mathfrak{q}_{3i}(m))%
\mathfrak{q}_{2j}(b)\big{)}$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}\big{(}\mathfrak{g}_{2j}(b)(\mathfrak{f}%
_{1i}(a)+\mathfrak{f}_{3i}(m))+\mathfrak{q}_{2j}(b)(\mathfrak{q}_{1i}(a)+%
\mathfrak{q}_{3i}(m))\big{)},$$
Setting $m=0$ in the last equation, we get
$$0=\mathfrak{q}_{1k}(a)b-b\mathfrak{q}_{1k}(a)+\sum_{i+j=k,\,i,j\neq 0}\big{(}\mathfrak{g}_{1i}(a)\mathfrak{f}_{2j}(b)+\mathfrak{q}_{1i}(a)\mathfrak{q}_{2j}(b)-\mathfrak{g}_{2j}(b)\mathfrak{f}_{1i}(a)-\mathfrak{q}_{2j}(b)\mathfrak{q}_{1i}(a)\big{)}.$$
Replacing $\mathfrak{q}_{1i}$ and $\mathfrak{q}_{2j}$ in the last equation by $Q^{\prime\prime}_{i}+q^{\prime\prime}_{i}$ and $Q_{j}-q_{j}$, respectively, and using the induction hypothesis, we have
$$\displaystyle 0$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{q}_{1k}(a)b+\sum_{s+t+\mu+\lambda=k}(n_{s}P^{\prime}_{t%
}(a)-Q^{\prime\prime}_{t}(a)n_{s})(P^{\prime\prime}_{\mu}(b)m_{\lambda}-m_{%
\lambda}Q_{\mu}(b))$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}(Q^{\prime\prime}_{i}(a)+q^{\prime\prime%
}_{i}(a))(Q_{j}(b)-q_{j}(b))$$
$$\displaystyle-b\mathfrak{q}_{1k}(a)-\sum_{s+t+\mu+\lambda=k}(n_{s}P^{\prime%
\prime}_{\mu}(b)-Q^{\prime}_{\mu}(b)n_{s})(P_{t}(a)m_{\lambda}-m_{\lambda}Q^{%
\prime\prime}_{t}(a))$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}(Q_{j}(b)-q_{j}(b))(Q^{\prime\prime}_{i}%
(a)+q^{\prime\prime}_{i}(a))$$
hence
$$\displaystyle 0$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{q}_{1k}(a)b-\sum_{s+t+\lambda=k}(n_{s}P^{\prime}_{t}(a)%
-Q^{\prime\prime}_{t}(a)n_{s})m_{\lambda}b$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}q^{\prime\prime}_{i}(a)\sum_{r=1}^{\eta_%
{j}}\sum_{(\alpha+\beta)_{r}=j,}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1%
}}m_{\beta_{1}}b$$
$$\displaystyle-b\mathfrak{q}_{1k}(a)+\sum_{s+t+\lambda=k}bn_{s}(P_{t}(a)m_{%
\lambda}-m_{\lambda}Q^{\prime\prime}_{t}(a))$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta%
)_{r}=j,}bn_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}q^{%
\prime\prime}_{i}(a)$$
$$\displaystyle=$$
$$\displaystyle\mathfrak{q}_{1k}(a)b-\sum_{s+t+\lambda=k}(n_{s}P^{\prime}_{t}(a)%
-Q^{\prime\prime}_{t}(a)n_{s})m_{\lambda}b$$
$$\displaystyle-\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{s+(\alpha+\beta)%
_{r^{\prime}}=i}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}=j,}(n_{\alpha_{1%
}}P^{\prime}_{s}(a)-Q^{\prime\prime}_{s}(a)n_{\alpha_{1}})m_{\beta_{1}}\ldots n%
_{\alpha_{r^{\prime}}}m_{\beta_{r^{\prime}}}n_{\alpha_{r}}m_{\beta_{r}}\ldots n%
_{\alpha_{1}}m_{\beta_{1}}b$$
$$\displaystyle-b\mathfrak{q}_{1k}(a)+\sum_{s+t+\lambda=k}bn_{s}(P_{t}(a)m_{%
\lambda}-m_{\lambda}Q^{\prime\prime}_{t}(a))$$
$$\displaystyle+b\sum_{i+j=k}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}=j}%
\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{s+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_%
{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}n_{\alpha_{r^{\prime}}}m_{%
\beta_{r^{\prime}}}\ldots n_{\alpha_{1}}(P_{s}(a)m_{\beta_{1}}-m_{\beta_{1}}Q^%
{\prime\prime}_{s}(a))$$
$$\displaystyle=$$
$$\displaystyle\big{(}\mathfrak{q}_{1k}(a)-\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+%
\beta)_{r}=k,\,i\leq k-2}(n_{\alpha_{1}}P^{\prime}_{i}(a)-Q^{\prime\prime}_{i}%
(a)n_{\alpha_{1}})m_{\beta_{1}}\ldots n_{\alpha_{r}}m_{\beta_{r}}\big{)}b$$
$$\displaystyle-b\big{(}\mathfrak{q}_{1k}(a)-\sum_{r=1}^{\eta_{k}}\sum_{i+(%
\alpha+\beta)_{r}=k,\,i\leq k-2}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1%
}}(P_{i}(a)m_{\beta_{1}}-m_{\beta_{1}}Q^{\prime\prime}_{i}(a))\big{)}$$
$$\displaystyle=$$
$$\displaystyle[Q^{\prime\prime}_{k}(a),b]$$
i.e. $Q^{\prime\prime}_{k}(a):=\mathfrak{q}_{1k}(a)-q^{\prime\prime}_{k}(a)\in Z(B)$ for all $a\in A$. By a similar argument one can check that
$P^{\prime\prime}_{k}(b):=\mathfrak{p}_{2k}(b)-p^{\prime\prime}_{k}(b)\in Z(A)$ for all $b\in B$.
Now, applying $\mathcal{L}_{k}$ to the commutator
$\left[\left(\begin{array}[]{cc}0&0\\
0&b\end{array}\right),\left(\begin{array}[]{cc}0&0\\
0&b^{\prime}\end{array}\right)\right]$, we have
$$\displaystyle\left(\begin{array}[]{cc}\mathfrak{p}_{2k}[b,b^{\prime}]&*\\
*&\mathfrak{q}_{2k}[b,b^{\prime}]\end{array}\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\left[\left(\begin{array}[]{cc}\mathfrak{p}_{2i}(b)&%
\mathfrak{f}_{2i}(b)\\
\mathfrak{g}_{2i}(b)&\mathfrak{q}_{2i}(b)\end{array}\right),\left(\begin{array%
}[]{cc}\mathfrak{p}_{2j}(b^{\prime})&\mathfrak{f}_{2j}(b^{\prime})\\
\mathfrak{g}_{2j}(b^{\prime})&\mathfrak{q}_{2j}(b^{\prime})\end{array}\right)%
\right].$$
(6.9)
From the $(1,1)$-entry of the above equation and the induction hypothesis we have
$$\displaystyle\mathfrak{p}_{2k}[b,b^{\prime}]$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}[\mathfrak{p}_{2i}(b),\mathfrak{p}_{2j}(b^{\prime})]+%
\sum_{i+j=k}\big{(}\mathfrak{f}_{2i}(b)\mathfrak{g}_{2j}(b^{\prime})-\mathfrak%
{f}_{2j}(b^{\prime})\mathfrak{g}_{2i}(b)\big{)}$$
(6.10)
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}[p^{\prime\prime}_{i}(b),p^{\prime\prime}_{j}(b^{%
\prime})]+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}\big{(}P^{\prime\prime}_{%
\zeta}(b)m_{\beta_{1}}-m_{\beta_{1}}Q_{\zeta}(b)\big{)}\big{(}n_{\alpha_{1}}P^%
{\prime\prime}_{\xi}(b^{\prime})-Q^{\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}%
\big{)}$$
$$\displaystyle-\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}\big{(}P^{\prime\prime}_{%
\xi}(b^{\prime})m_{\beta_{1}}-m_{\beta_{1}}Q_{\xi}(b^{\prime})\big{)}\big{(}n_%
{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)-Q^{\prime}_{\zeta}(b)n_{\alpha_{1}}%
\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\zeta+(\alpha+%
\beta)_{r^{\prime}}=i}\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta)_{r}=j}$$
$$\displaystyle\big{(}m_{\beta_{1}}Q_{\zeta}(b)n_{\alpha_{1}}\ldots m_{\beta_{r^%
{\prime}}}n_{\alpha_{r^{\prime}}}(m_{\beta_{1}}Q_{\xi}(b^{\prime})-P^{\prime%
\prime}_{\xi}(b^{\prime})m_{\beta_{1}})n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{%
\alpha_{r}}$$
$$\displaystyle-P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{%
\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}(m_{\beta_{1}}Q_{\xi}(b^{\prime})-P^%
{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}})n_{\alpha_{1}}\ldots m_{\beta_{r%
}}n_{\alpha_{r}}$$
$$\displaystyle-m_{\beta_{1}}Q_{\xi}(b^{\prime})n_{\alpha_{1}}\ldots m_{\beta_{r%
}}n_{\alpha_{r}}(m_{\beta_{1}}Q_{\zeta}(b)-P^{\prime\prime}_{\zeta}(b)m_{\beta%
_{1}})n_{\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
$$\displaystyle+P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}n_{\alpha_{1}}%
\ldots m_{\beta_{r}}n_{\alpha_{r}}(m_{\beta_{1}}Q_{\zeta}(b)-P^{\prime\prime}_%
{\zeta}(b)m_{\beta_{1}})n_{\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{%
r^{\prime}}}\big{)}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}m_{\beta_{1}}Q_{\zeta}(b)%
Q^{\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}-\sum_{\alpha_{1}+\beta_{1}+\xi+%
\zeta=k}m_{\beta_{1}}Q_{\xi}(b^{\prime})Q^{\prime}_{\zeta}(b)n_{\alpha_{1}}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}m_{\beta_{1}}Q_{\xi}(b^{%
\prime})n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)-\sum_{\alpha_{1}+\beta_{1}+%
\xi+\zeta=k}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q^{\prime}_{\xi}(b^{\prime%
})n_{\alpha_{1}}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}P^{\prime\prime}_{\xi}(b^%
{\prime})m_{\beta_{1}}Q^{\prime}_{\zeta}(b)n_{\alpha_{1}}-\sum_{\alpha_{1}+%
\beta_{1}+\xi+\zeta=k}m_{\beta_{1}}Q_{\zeta}(b)n_{\alpha_{1}}P^{\prime\prime}_%
{\xi}(b^{\prime})$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j%
}\big{(}m_{\beta_{1}}Q_{\zeta}(b)q_{i}(b^{\prime})n_{\alpha_{1}}\ldots m_{%
\beta_{r}}n_{\alpha_{r}}-P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}q_{i}(b^{%
\prime})n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}\big{)}$$
$$\displaystyle-\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\xi+(\alpha+%
\beta)_{r^{\prime}}=i}m_{\beta_{1}}Q_{\xi}(b^{\prime})q_{j}(b)n_{\alpha_{1}}%
\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\xi+(\alpha+%
\beta)_{r^{\prime}}=i}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}q_{j}(b)n%
_{\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}m_{\beta_{1}}Q_{%
\zeta}(b)q_{2i}(b^{\prime})n_{\alpha_{1}}-\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{%
1}+\xi=i}m_{\beta_{1}}Q_{\xi}(b^{\prime})q_{2j}(b)n_{\alpha_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}m_{\beta_{1}}Q_{%
\zeta}(b)q^{\prime}_{i}(b^{\prime})n_{\alpha_{1}}-\sum_{i+j=k}\sum_{\alpha_{1}%
+\beta_{1}+\xi=i}m_{\beta_{1}}Q_{\xi}(b^{\prime})q^{\prime\prime}_{2j}(b)n_{%
\alpha_{1}}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}m_{\beta_{1}}Q_{\xi}(b^{%
\prime})n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)-\sum_{\alpha_{1}+\beta_{1}+%
\xi+\zeta=k}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q^{\prime}_{\xi}(b^{\prime%
})n_{\alpha_{1}}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}P^{\prime\prime}_{\xi}(b^%
{\prime})m_{\beta_{1}}Q^{\prime}_{\zeta}(b)n_{\alpha_{1}}-\sum_{\alpha_{1}+%
\beta_{1}+\xi+\zeta=k}m_{\beta_{1}}Q_{\zeta}(b)n_{\alpha_{1}}P^{\prime\prime}_%
{\xi}(b^{\prime}).$$
By replacing $Q^{\prime}_{*}$ with $\mathfrak{q}_{2*}+q^{\prime}_{*}$ in the following terms of relation (6.10) we obtain
$$\displaystyle\sum_{i+j=k}\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j%
}m_{\beta_{1}}Q_{\zeta}(b)q_{i}(b^{\prime})n_{\alpha_{1}}(m_{\beta}n_{\alpha})%
^{r}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}m_{\beta_{1}}Q_{%
\zeta}(b)\mathfrak{q}_{2i}(b^{\prime})n_{\alpha_{1}}+\sum_{i+j=k}\sum_{\alpha_%
{1}+\beta_{1}+\zeta=j}m_{\beta_{1}}Q_{\zeta}(b)q^{\prime}_{i}(b^{\prime})n_{%
\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=2}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j%
}m_{\beta_{1}}Q_{\zeta}(b)q_{i}(b^{\prime})n_{\alpha_{1}}(m_{\beta}n_{\alpha})%
^{r}+\sum_{\alpha_{1}+\beta_{1}+\zeta+\xi=k}m_{\beta_{1}}Q_{\zeta}(b)Q_{\xi}(b%
^{\prime})n_{\alpha_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{(\alpha+\beta)_{r^{\prime}}+i_{1}=i}m_{\beta_{1}}Q_{\zeta}(b)(Q^{\prime}_{i_{1}}(b^{\prime})n_{\alpha_{1}}-n_{\alpha_{1}}P^{\prime\prime}_{i_{1}}(b^{\prime}))m_{\beta_{1}}\ldots n_{\alpha_{r^{\prime}}}m_{\beta_{r^{\prime}}}n_{\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=2}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j%
}m_{\beta_{1}}Q_{\zeta}(b)q_{i}(b^{\prime})n_{\alpha_{1}}(m_{\beta}n_{\alpha})%
^{r}+\sum_{\alpha_{1}+\beta_{1}+\zeta+\xi=k}m_{\beta_{1}}Q_{\zeta}(b)Q_{\xi}(b%
^{\prime})n_{\alpha_{1}}$$
$$\displaystyle+\sum_{i_{1}+j_{1}=k}\sum_{r^{\prime}=1}^{\eta_{j_{1}}}\sum_{%
\zeta+(\alpha+\beta)_{r^{\prime}}=j_{1}}m_{\beta_{1}}Q_{\zeta}(b)(\mathfrak{q}%
_{2i_{1}}(b^{\prime})+q^{\prime}_{i_{1}}(b^{\prime}))n_{\alpha_{1}}(m_{\beta}n%
_{\alpha})^{r^{\prime}}$$
$$\displaystyle-\sum_{i_{1}+j_{1}=k}\sum_{r^{\prime}=1}^{\eta_{j_{1}}}\sum_{%
\zeta+(\alpha+\beta)_{r^{\prime}}=j_{1}}m_{\beta_{1}}Q_{\zeta}(b)n_{\alpha_{1}%
}P^{\prime\prime}_{i_{1}}(b^{\prime})(m_{\beta}n_{\alpha})^{r^{\prime}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=4}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j%
}m_{\beta_{1}}Q_{\zeta}(b)q_{i}(b^{\prime})n_{\alpha_{1}}(m_{\beta}n_{\alpha})%
^{r}+\sum_{(\alpha+\beta)_{2}+\zeta+\xi=k}m_{\beta_{1}}Q_{\zeta}(b)Q_{\xi}(b^{%
\prime})n_{\alpha_{1}}m_{\beta_{2}}n_{\alpha_{2}}$$
$$\displaystyle+\sum_{i_{1}+j_{1}=k}\sum_{r^{\prime}=1}^{\eta_{j_{1}}}\sum_{%
\zeta+(\alpha+\beta)_{r^{\prime}}=j_{1}}m_{\beta_{1}}Q_{\zeta}(b)q^{\prime}_{i%
_{1}}(b^{\prime})n_{\alpha_{1}}(m_{\beta}n_{\alpha})^{r^{\prime}}$$
$$\displaystyle-\sum_{i_{1}+j_{1}=k}\sum_{r^{\prime}=1}^{\eta_{j_{1}}}\sum_{\zeta+(\alpha+\beta)_{r^{\prime}}=j_{1}}m_{\beta_{1}}Q_{\zeta}(b)n_{\alpha_{1}}P^{\prime\prime}_{i_{1}}(b^{\prime})(m_{\beta}n_{\alpha})^{r^{\prime}}$$
$$\displaystyle=$$
$$\displaystyle\vdots$$
$$\displaystyle=$$
$$\displaystyle\sum_{r^{\prime}=1}^{\eta_{k}}\sum_{\xi+\zeta+(\alpha+\beta)_{r^{%
\prime}}=k}m_{\beta_{1}}Q_{\zeta}(b)Q_{\xi}(b^{\prime})n_{\alpha_{1}}(m_{\beta%
}n_{\alpha})^{r^{\prime}}-\sum_{r^{\prime}=2}^{\eta_{k}}\sum_{\xi+\zeta+(%
\alpha+\beta)_{r^{\prime}}=k}m_{\beta_{1}}Q_{\zeta}(b)n_{\alpha_{1}}P^{\prime%
\prime}_{\xi}(b^{\prime})(m_{\beta}n_{\alpha})^{r^{\prime}}.$$
Proceeding in the same way with the following terms of relation (6.10), we obtain
$$\displaystyle-\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\xi=i}m_{\beta_{1}}Q_{\xi%
}(b^{\prime})\mathfrak{q}_{2j}(b)n_{\alpha_{1}}$$
$$\displaystyle-\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\xi+(\alpha+%
\beta)_{r^{\prime}}=i}m_{\beta_{1}}Q_{\xi}(b^{\prime})q_{j}(b)n_{\alpha_{1}}%
\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}-\sum_{i+j=k}\sum_{\alpha_%
{1}+\beta_{1}+\xi=i}m_{\beta_{1}}Q_{\xi}(b^{\prime})q^{\prime}_{j}(b)n_{\alpha%
_{1}}$$
$$\displaystyle=$$
$$\displaystyle-\sum_{r^{\prime}=1}^{\eta_{k}}\sum_{\xi+\zeta+(\alpha+\beta)_{r^%
{\prime}}=k}m_{\beta_{1}}Q_{\xi}(b^{\prime})Q_{\zeta}(b)n_{\alpha_{1}}\ldots m%
_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
$$\displaystyle+\sum_{r^{\prime}=2}^{\eta_{k}}\sum_{\xi+\zeta+(\alpha+\beta)_{r^%
{\prime}}=k}m_{\beta_{1}}Q_{\xi}(b^{\prime})n_{\alpha_{1}}P^{\prime\prime}_{%
\zeta}(b)\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}.$$
Now consider the following terms of relation (6.10):
$$\displaystyle\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}m_{\beta_{1}}Q_{i}(b^{\prime})n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)$$
$$\displaystyle-\sum_{i+j=k}\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=%
j}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}q_{i}(b^{\prime})n_{\alpha_{1}}%
\ldots m_{\beta_{r}}n_{\alpha_{r}}$$
$$\displaystyle-\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}P^{\prime\prime}_%
{\zeta}(b)m_{\beta_{1}}Q^{\prime}_{i}(b^{\prime})n_{\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle-\sum_{i+j=k}\sum_{r=2}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=%
j}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}q_{i}(b^{\prime})n_{\alpha_{1}}%
\ldots m_{\beta_{r}}n_{\alpha_{r}}$$
$$\displaystyle-\sum_{i+j=k}\sum_{\alpha_{1}+\beta_{1}+\zeta=j}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}q^{\prime}_{i}(b^{\prime})n_{\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle\vdots$$
$$\displaystyle=$$
$$\displaystyle-\sum_{r=2}^{\eta_{k}}\sum_{\xi+\zeta+(\alpha+\beta)_{r}=k}P^{%
\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q_{\xi}(b^{\prime})n_{\alpha_{1}}\ldots m%
_{\beta_{r}}n_{\alpha_{r}}$$
Similarly, considering the following terms of relation (6.10), we have
$$\displaystyle-\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}m_{\beta_{1}}Q_{\zeta}(b)%
n_{\alpha_{1}}P^{\prime\prime}_{\xi}(b^{\prime})$$
$$\displaystyle+\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\xi+(\alpha+%
\beta)_{r^{\prime}}=i}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}q_{j}(b)n%
_{\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}P^{\prime\prime}_{\xi}(b^%
{\prime})m_{\beta_{1}}Q^{\prime}_{\zeta}(b)n_{\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{r^{\prime}=2}^{\eta_{k}}\sum_{\xi+\zeta+(\alpha+\beta)_{r^{%
\prime}}=k}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}Q_{\zeta}(b)n_{%
\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
Gathering the above relations and substituting them into relation (6.10), together with the induction hypothesis, we obtain
$$\displaystyle\mathfrak{p}_{2k}[b,b^{\prime}]$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{\zeta+\xi+(\alpha+\beta)_{r}=k,\,\zeta%
+\xi\leq k-2}m_{\beta_{1}}[Q_{\zeta}(b),Q_{\xi}(b^{\prime})]n_{\alpha_{1}}%
\ldots m_{\beta_{r}}n_{\alpha_{r}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}m_{%
\beta_{1}}Q_{i}[b,b^{\prime}]n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}(m_%
{\beta_{1}}Q_{i}[b,b^{\prime}]-P^{\prime\prime}_{i}[b,b^{\prime}]m_{\beta_{1}}%
)n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}.$$
That is, $P^{\prime\prime}_{k}[b,b^{\prime}]:=\mathfrak{p}_{2k}[b,b^{\prime}]-p^{\prime\prime}_{k}[b,b^{\prime}]=0$ for all $b,b^{\prime}\in B$. Similarly, $Q^{\prime\prime}_{k}[a,a^{\prime}]:=\mathfrak{q}_{1k}[a,a^{\prime}]-q^{\prime\prime}_{k}[a,a^{\prime}]=0$ for all $a,a^{\prime}\in A$.
From the $(2,2)$-entry of equation (6.9) we have
$$\mathfrak{q}_{2k}[b,b^{\prime}]=\sum_{i+j=k}[\mathfrak{q}_{2i}(b),\mathfrak{q}%
_{2j}(b^{\prime})]+\sum_{i+j=k}\big{(}\mathfrak{g}_{2i}(b)\mathfrak{f}_{2j}(b^%
{\prime})-\mathfrak{g}_{2j}(b^{\prime})\mathfrak{f}_{2i}(b)\big{)}.$$
Replacing $\mathfrak{q}_{2k},\mathfrak{q}_{2i},\mathfrak{q}_{2j}$ with $Q_{k}-q_{k},Q_{i}-q_{i},Q_{j}-q_{j}$, respectively, we have
$$\displaystyle Q_{k}[b,b^{\prime}]-q_{k}[b,b^{\prime}]$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}[Q_{i}(b)-q_{i}(b),Q_{j}(b^{\prime})-q_{j}(b^{\prime})]$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}(n_{\alpha_{1}}P^{\prime%
\prime}_{\zeta}(b)-Q^{\prime}_{\zeta}(b)n_{\alpha_{1}})(P^{\prime\prime}_{\xi}%
(b^{\prime})m_{\beta_{1}}-m_{\beta_{1}}Q_{\xi}(b^{\prime}))$$
$$\displaystyle-\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}(n_{\alpha_{1}}P^{\prime%
\prime}_{\xi}(b^{\prime})-Q^{\prime}_{\xi}(b^{\prime})n_{\alpha_{1}})(P^{%
\prime\prime}_{\zeta}(b)m_{\beta_{1}}-m_{\beta_{1}}Q_{\zeta}(b)).$$
To show, using the induction hypothesis, that $Q_{k}$ is a Lie higher derivation, it is enough to verify the following equation:
$$\displaystyle q_{k}[b,b^{\prime}]$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\big{(}[Q_{i}(b),q_{j}(b^{\prime})]+[q_{i}(b),Q_{j}(b%
^{\prime})]-[q_{i}(b),q_{j}(b^{\prime})]\big{)}$$
$$\displaystyle+\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}\big{(}n_{\alpha_{1}}P^{%
\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q_{\xi}(b^{\prime})+Q^{\prime}_{\zeta}(b)%
n_{\alpha_{1}}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}-Q^{\prime}_{%
\zeta}(b)n_{\alpha_{1}}m_{\beta_{1}}Q_{\xi}(b^{\prime})\big{)}$$
$$\displaystyle-\sum_{\alpha_{1}+\beta_{1}+\xi+\zeta=k}\big{(}n_{\alpha_{1}}P^{%
\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}Q_{\zeta}(b)+Q^{\prime}_{\xi}(b^{%
\prime})n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}-Q^{\prime}_{\xi%
}(b^{\prime})n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)\big{)}.$$
From the definition of $q_{k}$ and the equations $Q_{k}=\mathfrak{q}_{2k}+q_{k}$, $Q^{\prime}_{k}=\mathfrak{q}_{2k}+q^{\prime}_{k}$ we have
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k}n_{\alpha_{r}}m%
_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{i}[b,b^{\prime}]$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}Q_{i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta)%
_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\xi}(b^%
{\prime})$$
$$\displaystyle-\sum_{i+j=k}Q_{i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta%
)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\xi}%
(b^{\prime})m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k}\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta)_{r}=j}%
n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\xi}(b^{\prime%
})Q_{i}(b)$$
$$\displaystyle+\sum_{i+j=k}\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta)_{r}=j}%
n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\xi}(b^{%
\prime})m_{\beta_{1}}Q_{i}(b)$$
$$\displaystyle+\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\zeta+(\alpha+%
\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{\prime}}}\ldots n_{%
\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)Q_{j}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\zeta+(\alpha+%
\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{\prime}}}\ldots n_{%
\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q_{j}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}Q_{j}(b^{\prime})\sum_{r^{\prime}=1}^{\eta_{i}}\sum_%
{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{%
\prime}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)$$
$$\displaystyle+\sum_{i+j=k}Q_{j}(b^{\prime})\sum_{r^{\prime}=1}^{\eta_{i}}\sum_%
{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{%
\prime}}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k}Q_{i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta%
)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\xi}(b%
^{\prime})$$
$$\displaystyle+\sum_{i+j=k}Q_{i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta%
)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\xi}%
(b^{\prime})m_{\beta_{1}}$$
$$\displaystyle+\sum_{i+j=k}\mathfrak{q}_{2i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\xi+(%
\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{%
1}}Q_{\xi}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\mathfrak{q}_{2i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\xi+(%
\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}P^{\prime%
\prime}_{\xi}(b^{\prime})m_{\beta_{1}}$$
$$\displaystyle+\sum_{i+j=k}Q_{j}(b^{\prime})\sum_{r^{\prime}=1}^{\eta_{i}}\sum_%
{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{%
\prime}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)$$
$$\displaystyle-\sum_{i+j=k}Q_{j}(b^{\prime})\sum_{r^{\prime}=1}^{\eta_{i}}\sum_%
{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{%
\prime}}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k}\mathfrak{q}_{2j}(b^{\prime})\sum_{r^{\prime}=1}^{%
\eta_{i}}\sum_{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{%
\beta_{r^{\prime}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)$$
$$\displaystyle+\sum_{i+j=k}\mathfrak{q}_{2j}(b^{\prime})\sum_{r^{\prime}=1}^{%
\eta_{i}}\sum_{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{%
\beta_{r^{\prime}}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\alpha_{1}=i}\sum_{\xi+\beta_{1}=j}n_{%
\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q_{\xi}(b^{\prime})$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\alpha_{1}=i}\sum_{\xi+\beta_{1}=j}Q^{%
\prime}_{\zeta}(b)n_{\alpha_{1}}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k}\sum_{\zeta+\alpha_{1}=i}\sum_{\xi+\beta_{1}=j}Q^{%
\prime}_{\zeta}(b)n_{\alpha_{1}}m_{\beta_{1}}Q_{\xi}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\sum_{\zeta+\alpha_{1}=i}\sum_{\xi+\beta_{1}=j}n_{%
\alpha_{1}}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}Q_{\zeta}(b)$$
$$\displaystyle-\sum_{i+j=k}\sum_{\zeta+\beta_{1}=i}\sum_{\xi+\alpha_{1}=j}Q^{%
\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\beta_{1}=i}\sum_{\xi+\alpha_{1}=j}Q^{%
\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b).$$
By cancelling like terms we have
$$\displaystyle 0$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=2}^{\eta_{j}}\sum_{\xi+(\alpha+\beta)_{r}=j}n%
_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\xi}(b^{%
\prime})m_{\beta_{1}}Q_{i}(b)$$
$$\displaystyle-\sum_{i+j=k}\sum_{r^{\prime}=2}^{\eta_{i}}\sum_{\zeta+(\alpha+%
\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{\prime}}}\ldots n_{%
\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}Q_{j}(b^{\prime})$$
$$\displaystyle+\sum_{i+j=k}\mathfrak{q}_{2i}(b)\sum_{r=2}^{\eta_{j}}\sum_{\xi+(%
\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{%
1}}Q_{\xi}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\mathfrak{q}_{2i}(b)\sum_{r=2}^{\eta_{j}}\sum_{\xi+(%
\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}P^{\prime%
\prime}_{\xi}(b^{\prime})m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k}\mathfrak{q}_{2j}(b^{\prime})\sum_{r^{\prime}=2}^{%
\eta_{i}}\sum_{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{%
\beta_{r^{\prime}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)$$
$$\displaystyle+\sum_{i+j=k}\mathfrak{q}_{2j}(b^{\prime})\sum_{r^{\prime}=2}^{%
\eta_{i}}\sum_{\zeta+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{%
\beta_{r^{\prime}}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\alpha_{1}=i}\sum_{\xi+\beta_{1}=j}q^{%
\prime}_{\zeta}(b)n_{\alpha_{1}}P^{\prime\prime}_{\xi}(b^{\prime})m_{\beta_{1}}$$
$$\displaystyle-\sum_{i+j=k}\sum_{\zeta+\alpha_{1}=i}\sum_{\xi+\beta_{1}=j}q^{%
\prime}_{\zeta}(b)n_{\alpha_{1}}m_{\beta_{1}}Q_{\xi}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\sum_{\zeta+\beta_{1}=i}\sum_{\xi+\alpha_{1}=j}q^{%
\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}P^{\prime\prime}_{\zeta}(b)m_{\beta_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\beta_{1}=i}\sum_{\xi+\alpha_{1}=j}q^{%
\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}m_{\beta_{1}}Q_{\zeta}(b)$$
$$\displaystyle=$$
$$\displaystyle\vdots$$
$$\displaystyle=$$
$$\displaystyle\sum_{(\alpha+\beta)_{\eta_{k}-1}=k-1}n_{\alpha_{\eta_{k}-1}}m_{%
\beta_{\eta_{k}-1}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{1}(b^{\prime})m_{%
\beta_{1}}b$$
$$\displaystyle-\sum_{(\alpha+\beta)_{\eta_{k}-1}=k-1}n_{\alpha_{\eta_{k}-1}}m_{%
\beta_{\eta_{k}-1}}\ldots n_{\alpha_{1}}P^{\prime\prime}_{1}(b)m_{\beta_{1}}b^%
{\prime}$$
$$\displaystyle+\sum_{(\alpha+\beta)_{\eta_{k}}=k}bn_{\alpha_{\eta_{k}}}m_{\beta%
_{\eta_{k}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}b^{\prime}-\sum_{(\alpha+\beta)_%
{\eta_{k}}=k}b^{\prime}n_{\alpha_{\eta_{k}}}m_{\beta_{\eta_{k}}}\ldots n_{%
\alpha_{1}}m_{\beta_{1}}b$$
$$\displaystyle-\sum_{(\alpha+\beta)_{\eta_{k}-1}=k-1}n_{\alpha_{1}}P^{\prime%
\prime}_{1}(b^{\prime})m_{\beta_{1}}\ldots n_{\alpha_{\eta_{k}-1}}m_{\beta_{%
\eta_{k}-1}}b$$
$$\displaystyle+\sum_{(\alpha+\beta)_{\eta_{k}-1}=k-1}n_{\alpha_{1}}P^{\prime%
\prime}_{1}(b)m_{\beta_{1}}\ldots n_{\alpha_{\eta_{k}-1}}m_{\beta_{\eta_{k}-1}%
}b^{\prime}$$
$$\displaystyle-\sum_{(\alpha+\beta)_{\eta_{k}}=k}bn_{\alpha_{1}}m_{\beta_{1}}%
\ldots n_{\alpha_{\eta_{k}}}m_{\beta_{\eta_{k}}}b^{\prime}+\sum_{(\alpha+\beta%
)_{\eta_{k}}=k}b^{\prime}n_{\alpha_{1}}m_{\beta_{1}}\ldots n_{\alpha_{\eta_{k}%
}}m_{\beta_{\eta_{k}}}b.$$
Hence $\{Q_{k}\}_{k\in\mathbb{N}_{0}}$ is a Lie higher derivation on $B$. By similar techniques one can show that $\{P_{k}\}_{k\in\mathbb{N}_{0}}$ and $\{P^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ are Lie higher derivations on $A$, and that $\{Q^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ is a Lie higher derivation on $B$.
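For orientation, the identity just verified is the defining condition of a Lie higher derivation. As a sanity check (a sketch, assuming the standard normalization $Q_{0}=\mathrm{id}_{B}$), its lowest nontrivial order $k=1$ reduces to the familiar Lie derivation law:

```latex
% Defining condition of a Lie higher derivation \{Q_k\}_{k\in\mathbb{N}_0} on B:
%   Q_k[b,b'] = \sum_{i+j=k} [Q_i(b), Q_j(b')].
% For k = 1, under the normalization Q_0 = id_B, this reads
\[
  Q_{1}[b,b'] \;=\; [Q_{1}(b),\,b'] \;+\; [b,\,Q_{1}(b')],
\]
% i.e. Q_1 is an ordinary Lie derivation, matching the base case of the induction.
```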
Now consider the commutator $\left[\left(\begin{array}[]{cc}0&m\\
0&0\end{array}\right),\left(\begin{array}[]{cc}0&0\\
n&0\end{array}\right)\right]$; applying $\mathcal{L}_{k}$ to it, we have
$$\displaystyle\left(\begin{array}[]{cc}\mathfrak{p}_{1k}(mn)-\mathfrak{p}_{2k}(%
nm)&*\\
*&\mathfrak{q}_{1k}(mn)-\mathfrak{q}_{2k}(nm)\end{array}\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\left[\mathcal{L}_{i}\left(\begin{array}[]{cc}0&m\\
0&0\end{array}\right),\mathcal{L}_{j}\left(\begin{array}[]{cc}0&0\\
n&0\end{array}\right)\right]$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\left[\left(\begin{array}[]{cc}\mathfrak{p}_{3i}(m)&%
\mathfrak{f}_{3i}(m)\\
\mathfrak{g}_{3i}(m)&\mathfrak{q}_{3i}(m)\end{array}\right),\left(\begin{array%
}[]{cc}\mathfrak{p}_{4j}(n)&\mathfrak{f}_{4j}(n)\\
\mathfrak{g}_{4j}(n)&\mathfrak{q}_{4j}(n)\end{array}\right)\right].$$
Hence
$$\mathfrak{p}_{1k}(mn)-\mathfrak{p}_{2k}(nm)=\sum_{i+j=k}\big{(}\mathfrak{p}_{3%
i}(m)\mathfrak{p}_{4j}(n)+\mathfrak{f}_{3i}(m)\mathfrak{g}_{4j}(n)-\mathfrak{p%
}_{4j}(n)\mathfrak{p}_{3i}(m)-\mathfrak{f}_{4j}(n)\mathfrak{g}_{3i}(m)\big{)},$$
and
$$\mathfrak{q}_{1k}(mn)-\mathfrak{q}_{2k}(nm)=\sum_{i+j=k}\big{(}\mathfrak{g}_{3%
i}(m)\mathfrak{f}_{4j}(n)+\mathfrak{q}_{3i}(m)\mathfrak{q}_{4j}(n)-\mathfrak{g%
}_{4j}(n)\mathfrak{f}_{3i}(m)-\mathfrak{q}_{4j}(n)\mathfrak{q}_{3i}(m)\big{)}.$$
∎
Proof of Theorem 2.3
Proof.
From the fact that every higher derivation is a Lie higher derivation, together with Proposition 2.2, it follows that items $(3),(8),(9)$ and $(10)$ hold automatically.
For the other items, we proceed by induction on $k$. The case $k=1$ follows from [4]. Suppose that the conclusion is true for every integer less than $k$, and that $\mathcal{D}_{k}$ has the presentation
$$\mathcal{D}_{k}\left(\begin{array}[]{cc}a&m\\
n&b\end{array}\right)=\left(\begin{array}[]{cc}\mathtt{p}_{1k}(a)+\mathtt{p}_{%
2k}(b)+\mathtt{p}_{3k}(m)+\mathtt{p}_{4k}(n)&\mathtt{f}_{1k}(a)+\mathtt{f}_{2k%
}(b)+\mathtt{f}_{3k}(m)+\mathtt{f}_{4k}(n)\\
\mathtt{g}_{1k}(a)+\mathtt{g}_{2k}(b)+\mathtt{g}_{3k}(m)+\mathtt{g}_{4k}(n)&%
\mathtt{q}_{1k}(a)+\mathtt{q}_{2k}(b)+\mathtt{q}_{3k}(m)+\mathtt{q}_{4k}(n)%
\end{array}\right),$$
in which the maps appearing in the entries are linear.
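The computations below repeatedly use the product rule satisfied by a higher derivation. A minimal sketch (with $d$ a hypothetical ordinary derivation used only for illustration) is the classical family $\mathcal{D}_{k}=d^{k}/k!$:

```latex
% Product rule for a higher derivation \{\mathcal{D}_k\}_{k\in\mathbb{N}_0}:
%   \mathcal{D}_k(xy) = \sum_{i+j=k} \mathcal{D}_i(x)\,\mathcal{D}_j(y).
% Classical model: for an ordinary derivation d, set \mathcal{D}_k := d^k/k!.
% The Leibniz rule d^k(xy) = \sum_{i+j=k} \binom{k}{i} d^i(x) d^j(y) then gives
\[
  \frac{d^{k}(xy)}{k!}
  \;=\; \frac{1}{k!}\sum_{i+j=k}\binom{k}{i}\, d^{i}(x)\, d^{j}(y)
  \;=\; \sum_{i+j=k}\frac{d^{i}(x)}{i!}\cdot\frac{d^{j}(y)}{j!},
\]
% so \{d^k/k!\} is indeed a higher derivation.
```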
Applying $\mathcal{D}_{k}$ to the equation
$\left(\begin{array}[]{cc}a&m\\
0&0\end{array}\right)\left(\begin{array}[]{cc}0&0\\
0&b\end{array}\right)=\left(\begin{array}[]{cc}0&mb\\
0&0\end{array}\right)$, we have
$$\displaystyle\left(\begin{array}[]{cc}\mathtt{p}_{3k}(mb)&\mathtt{f}_{3k}(mb)%
\\
\mathtt{g}_{3k}(mb)&\mathtt{q}_{3k}(mb)\end{array}\right)$$
$$\displaystyle=$$
$$\displaystyle\left(\begin{array}[]{cc}\mathtt{p}_{1k}(a)+\mathtt{p}_{3k}(m)&%
\mathtt{f}_{1k}(a)+\mathtt{f}_{3k}(m)\\
\mathtt{g}_{1k}(a)+\mathtt{g}_{3k}(m)&\mathtt{q}_{1k}(a)+\mathtt{q}_{3k}(m)%
\end{array}\right)\left(\begin{array}[]{cc}0&0\\
0&b\end{array}\right)$$
$$\displaystyle+\left(\begin{array}[]{cc}a&m\\
0&0\end{array}\right)\left(\begin{array}[]{cc}\mathtt{p}_{2k}(b)&\mathtt{f}_{2%
k}(b)\\
\mathtt{g}_{2k}(b)&\mathtt{q}_{2k}(b)\end{array}\right)$$
$$\displaystyle+\hskip-14.226378pt\sum_{i+j=k,i,j\neq 0}\left(\begin{array}[]{cc%
}\mathtt{p}_{1i}(a)+\mathtt{p}_{3i}(m)&\mathtt{f}_{1i}(a)+\mathtt{f}_{3i}(m)\\
\mathtt{g}_{1i}(a)+\mathtt{g}_{3i}(m)&\mathtt{q}_{1i}(a)+\mathtt{q}_{3i}(m)%
\end{array}\right)\left(\begin{array}[]{cc}\mathtt{p}_{2j}(b)&\mathtt{f}_{2j}(%
b)\\
\mathtt{g}_{2j}(b)&\mathtt{q}_{2j}(b)\end{array}\right).$$
From the $(1,2)$-entry of the above equation we have
$$\displaystyle\mathtt{f}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle\mathtt{f}_{1k}(a)b+\mathtt{f}_{3k}(m)b+a\mathtt{f}_{2k}(b)+m%
\mathtt{q}_{2k}(b)$$
(6.11)
$$\displaystyle+\sum_{i+j=k,i,j\neq 0}\big{(}(\mathtt{p}_{1i}(a)+\mathtt{p}_{3i}%
(m))\mathtt{f}_{2j}(b)+(\mathtt{f}_{1i}(a)+\mathtt{f}_{3i}(m))\mathtt{q}_{2j}(%
b)\big{)}.$$
Setting $m=0,\,b=1$ in equation (6.11), we have
$$\displaystyle\mathtt{f}_{1k}(a)$$
$$\displaystyle=$$
$$\displaystyle am_{k}+\sum_{i+j=k,i,j\neq 0}\big{(}\mathtt{p}_{1i}(a)m_{j}-%
\mathtt{f}_{1i}(a)\mathtt{q}_{2j}(1)\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\mathtt{p}_{1i}(a)m_{j}+\sum_{i+j=k,i,j\neq 0%
}\mathtt{f}_{1i}(a)q_{j}(1)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\mathtt{p}_{1i}(a)m_{j}+\sum_{i+j=k,i,j\neq 0%
}\mathtt{f}_{1i}(a)\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}=j}n_{\alpha_{%
r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\mathtt{p}_{1i}(a)m_{j}+\sum_{i+j=k,i,j\neq 0}\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}=j}\sum_{t+s=i}\mathsf{P}_{t}(a)m_{s}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k,j\neq 0}\mathtt{p}_{1i}(a)m_{j}+\sum_{i+j=k,i,j\neq 0}\sum_{r=1}^{\eta_{i}}\sum_{t+(\alpha+\beta)_{r}=i}\mathsf{P}_{t}(a)m_{\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}m_{j}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\mathsf{P}_{i}(a)m_{j}.$$
One can check similar equations for $\mathtt{g}_{1k}(a),\mathtt{g}_{2k}(b),\mathtt{f}_{2k}(b)$.
Now, setting $m=0$ in (6.11), we get
$$\displaystyle\mathtt{f}_{3k}(mb)$$
$$\displaystyle=$$
$$\displaystyle\mathtt{f}_{3k}(m)b+m\mathtt{q}_{2k}(b)$$
(6.12)
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\big{(}\mathtt{p}_{3i}(m)\mathtt{f}_{2j}%
(b)+\mathtt{f}_{3i}(m)\mathtt{q}_{2j}(b)\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\mathtt{f}_{3i}(m)\mathtt{q}_{2j}(b)+\sum_{i+j=k,\,i,%
j\neq 0}\mathtt{p}_{3i}(m)\mathtt{f}_{2j}(b).$$
As
$$\displaystyle\mathtt{p}_{3i}(m)\mathtt{f}_{2j}(b)$$
$$\displaystyle=$$
$$\displaystyle\sum_{s+t=i}\sum_{\lambda+\mu=j}\mathtt{f}_{3t}(m)\mathcal{N}_{s}%
m_{\lambda}\mathsf{Q}_{\mu}(b)$$
$$\displaystyle=$$
$$\displaystyle\sum_{s+t=i}\sum_{\lambda+\mu=j}\sum_{r=1}^{\nu_{s}}\sum_{(\alpha%
+\beta)_{r}+\gamma=s}\mathtt{f}_{3t}(m)(n_{\alpha_{1}}m_{\beta_{1}}\ldots n_{%
\alpha_{r}}m_{\beta_{r}}n_{\gamma}+n_{s})m_{\lambda}\mathsf{Q}_{\mu}(b)$$
$$\displaystyle=$$
$$\displaystyle\mathtt{f}_{3i}(m)\sum_{r=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r}+%
\mu=j,\mu\leq k-2}n_{\alpha_{1}}m_{\beta_{1}}\ldots n_{\alpha_{r}}m_{\beta_{r}%
}\mathsf{Q}_{\mu}(b)$$
$$\displaystyle=$$
$$\displaystyle\mathtt{f}_{3i}(m)\mathsf{q}_{j}(b),$$
we can substitute this relation into (6.12) and obtain
$$\mathtt{f}_{3k}(mb)=\sum_{i+j=k}\mathtt{f}_{3i}(m)\mathsf{Q}_{j}(b).$$
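At lowest order this relation takes a familiar shape. As an illustration (a sketch, assuming the usual normalizations $\mathtt{f}_{30}(m)=m$ and $\mathsf{Q}_{0}=\mathrm{id}_{B}$), the case $k=1$ reads:

```latex
% The relation f_{3k}(mb) = \sum_{i+j=k} f_{3i}(m) Q_j(b) at order k = 1,
% under the normalizations f_{30}(m) = m and Q_0 = id_B, becomes
\[
  \mathtt{f}_{31}(mb) \;=\; \mathtt{f}_{31}(m)\,b \;+\; m\,\mathsf{Q}_{1}(b),
\]
% i.e. f_{31} acts as a generalized derivation of the bimodule relative to Q_1.
```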
Similarly one can check the analogous equations for $\mathtt{f}_{3k}(am),\mathtt{g}_{4k}(na),\mathtt{g}_{4k}(bn)$.
Now applying $\mathcal{D}_{k}$ to the equation
$\left(\begin{array}[]{cc}0&0\\
0&b\end{array}\right)\left(\begin{array}[]{cc}0&0\\
0&b^{\prime}\end{array}\right)=\left(\begin{array}[]{cc}0&0\\
0&bb^{\prime}\end{array}\right)$, we get
$$\displaystyle\left(\begin{array}[]{cc}\mathtt{p}_{2k}(bb^{\prime})&*\\
*&\mathtt{q}_{2k}(bb^{\prime})\end{array}\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\left(\begin{array}[]{cc}\mathtt{p}_{2i}(b)&\mathtt{f%
}_{2i}(b)\\
\mathtt{g}_{2i}(b)&\mathtt{q}_{2i}(b)\end{array}\right)\left(\begin{array}[]{%
cc}\mathtt{p}_{2j}(b^{\prime})&\mathtt{f}_{2j}(b^{\prime})\\
\mathtt{g}_{2j}(b^{\prime})&\mathtt{q}_{2j}(b^{\prime})\end{array}\right).$$
(6.13)
From the $(1,1)$-entry of the above equation and the induction hypothesis we have
$$\displaystyle\mathtt{p}_{2k}(bb^{\prime})$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\big{(}\mathtt{p}_{2i}(b)\mathtt{p}_{2j}(b^{\prime})+%
\mathtt{f}_{2i}(b)\mathtt{g}_{2j}(b^{\prime})\big{)}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=1}^{\eta_{i}}\sum_{\zeta+(\alpha+\beta)_{r}=i%
}\sum_{r^{\prime}=1}^{\eta_{j}}\sum_{\xi+(\alpha+\beta)_{r^{\prime}}=j}m_{%
\beta_{1}}\mathsf{Q}_{\zeta}(b)n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}%
}m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}\ldots m_{\beta_{1}}\mathsf{Q}^{%
\prime}_{\xi}(b^{\prime})n_{\alpha_{1}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\beta_{1}=i}\sum_{\xi+\alpha_{1}=j}m_{%
\beta_{1}}\mathsf{Q}_{\zeta}(b)\mathsf{Q}^{\prime}_{\xi}(b^{\prime})n_{\alpha_%
{1}}.$$
Setting $b^{\prime}=1$, it follows that
$$\displaystyle\mathtt{p}_{2k}(b)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\sum_{r=1}^{\eta_{i}}\sum_{\zeta+(\alpha+\beta)_{r}=i%
}\sum_{r^{\prime}=1}^{\eta_{j}}\sum_{(\alpha+\beta)_{r^{\prime}}=j}m_{\beta_{1%
}}\mathsf{Q}_{\zeta}(b)n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}m_{%
\beta_{1}}n_{\alpha_{1}}\ldots m_{\beta_{r^{\prime}}}n_{\alpha_{r^{\prime}}}$$
$$\displaystyle+\sum_{i+j=k}\sum_{\zeta+\beta_{1}=i}\sum_{\alpha_{1}=j}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b)n_{\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=2}^{\eta_{k}}\sum_{\zeta+(\alpha+\beta)_{r}=k}m_{\beta_{1%
}}\mathsf{Q}_{\zeta}(b)n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}}+\sum_{%
\zeta+\alpha_{1}+\beta_{1}=k}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b)n_{\alpha_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{\zeta+(\alpha+\beta)_{r}=k}m_{\beta_{1%
}}\mathsf{Q}_{\zeta}(b)n_{\alpha_{1}}\ldots m_{\beta_{r}}n_{\alpha_{r}},\qquad%
(b\in B).$$
Note that if we set $b=1$ then
$$\mathtt{p}_{2k}(b^{\prime})=\sum_{r=1}^{\eta_{k}}\sum_{\zeta+(\alpha+\beta)_{r%
}=k}m_{\beta_{r}}n_{\alpha_{r}}\ldots m_{\beta_{1}}\mathsf{Q}^{\prime}_{\zeta}%
(b^{\prime})n_{\alpha_{1}},\qquad(b^{\prime}\in B).$$
A similar argument reveals that
$$\displaystyle\mathtt{q}_{1k}(a)$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}n_{%
\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}\mathsf{P}_{i}(a)m_{\beta_{1}}$$
$$\displaystyle=$$
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k,\,i\leq k-2}n_{%
\alpha_{1}}\mathsf{P}^{\prime}_{i}(a)m_{\beta_{1}}\ldots n_{\alpha_{r}}m_{%
\beta_{r}},\qquad(a\in A)$$
From the $(2,2)$-entry of equation (6.13) we have
$$\mathtt{q}_{2k}(bb^{\prime})=\sum_{i+j=k}\big{(}\mathtt{g}_{2i}(b)\mathtt{f}_{%
2j}(b^{\prime})+\mathtt{q}_{2i}(b)\mathtt{q}_{2j}(b^{\prime})\big{)}.$$
Using the induction hypothesis and replacing $\mathtt{q}_{2k},\mathtt{q}_{2i},\mathtt{q}_{2j}$ with $\mathsf{Q}_{k}-\mathsf{q}_{k},\mathsf{Q}_{i}-\mathsf{q}_{i},\mathsf{Q}_{j}-\mathsf{q}_{j}$, respectively, we have
$$\displaystyle\mathsf{Q}_{k}(bb^{\prime})-\mathsf{q}_{k}(bb^{\prime})$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}(\mathsf{Q}_{i}(b)-\mathsf{q}_{i}(b))(\mathsf{Q}_{j}(%
b^{\prime})-\mathsf{q}_{j}(b^{\prime}))$$
$$\displaystyle+\sum_{i+j=k,\,i,j\neq 0}\sum_{\xi+\alpha_{1}=i}\sum_{\zeta+\beta%
_{1}=j}\mathsf{Q}^{\prime}_{\xi}(b)n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{%
\zeta}(b^{\prime}).$$
In the sequel we show that
$$\displaystyle\mathsf{q}_{k}(bb^{\prime})$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\big{(}\mathsf{Q}_{i}(b)\mathsf{q}_{j}(b^{\prime})+\mathsf{q}_{i}(b)\mathsf{Q}_{j}(b^{\prime})-\mathsf{q}_{i}(b)\mathsf{q}_{j}(b^{\prime})\big{)}$$
$$\displaystyle-\sum_{i+j=k,\,i,j\neq 0}\sum_{\xi+\alpha_{1}=i}\sum_{\zeta+\beta_{1}=j}\mathsf{Q}^{\prime}_{\xi}(b)n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime}).$$
As
$$\displaystyle\sum_{r=1}^{\eta_{k}}\sum_{i+(\alpha+\beta)_{r}=k}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{i}(bb^{\prime})$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\mathsf{Q}_{i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle+\sum_{i+j=k}\sum_{r^{\prime}=1}^{\eta_{i}}\sum_{\xi+(\alpha+\beta)_{r^{\prime}}=i}n_{\alpha_{r^{\prime}}}m_{\beta_{r^{\prime}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\xi}(b)\mathsf{Q}_{j}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\mathsf{Q}_{i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle+\sum_{i+j=k}\mathtt{q}_{2i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\sum_{\xi+\alpha_{1}=i}\sum_{\zeta+\beta_{1}=j}\mathsf{Q}^{\prime}_{\xi}(b)n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime}).$$
By the induction hypothesis, cancelling like terms and replacing $\mathsf{Q}^{\prime}_{i}$ with $\mathtt{q}_{2i}+\mathsf{q}^{\prime}_{i}$, we obtain
$$\displaystyle 0$$
$$\displaystyle=$$
$$\displaystyle+\sum_{i+j=k}\mathtt{q}_{2i}(b)\sum_{r=1}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\mathsf{Q}^{\prime}_{i}(b)\sum_{\zeta+\alpha_{1}+\beta_{1}=j}n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle=$$
$$\displaystyle+\sum_{i+j=k}\mathtt{q}_{2i}(b)\sum_{r=2}^{\eta_{j}}\sum_{\zeta+(\alpha+\beta)_{r}=j}n_{\alpha_{r}}m_{\beta_{r}}\ldots n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle-\sum_{i+j=k}\mathsf{q}^{\prime}_{i}(b)\sum_{\zeta+\alpha_{1}+\beta_{1}=j}n_{\alpha_{1}}m_{\beta_{1}}\mathsf{Q}_{\zeta}(b^{\prime})$$
$$\displaystyle=$$
$$\displaystyle\vdots$$
$$\displaystyle=$$
$$\displaystyle\sum_{(\alpha+\beta)_{\eta_{k}}=k}bn_{\alpha_{\eta_{k}}}m_{\beta_{\eta_{k}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}b^{\prime}-\sum_{(\alpha+\beta)_{\eta_{k}}=k}bn_{\alpha_{\eta_{k}}}m_{\beta_{\eta_{k}}}\ldots n_{\alpha_{1}}m_{\beta_{1}}b^{\prime}.$$
Hence $\{\mathsf{Q}_{k}\}_{k\in\mathbb{N}_{0}}$ is a higher derivation on $B$. By similar techniques one can show that $\{\mathsf{P}_{k}\}_{k\in\mathbb{N}_{0}}$ and $\{\mathsf{P}^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ are higher derivations on $A$ and that $\{\mathsf{Q}^{\prime}_{k}\}_{k\in\mathbb{N}_{0}}$ is a higher derivation on $B$.
Now consider the equation $\left(\begin{array}[]{cc}0&m\\ 0&0\end{array}\right)\left(\begin{array}[]{cc}0&0\\ n&0\end{array}\right)=\left(\begin{array}[]{cc}mn&0\\ 0&0\end{array}\right)$; applying $\mathcal{D}_{k}$ to it, we obtain
$$\displaystyle\left(\begin{array}[]{cc}\mathtt{p}_{1k}(mn)&*\\ *&\mathtt{q}_{1k}(mn)\end{array}\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{i+j=k}\left(\begin{array}[]{cc}\mathtt{p}_{3i}(m)&\mathtt{f}_{3i}(m)\\ \mathtt{g}_{3i}(m)&\mathtt{q}_{3i}(m)\end{array}\right)\left(\begin{array}[]{cc}\mathtt{p}_{4j}(n)&\mathtt{f}_{4j}(n)\\ \mathtt{g}_{4j}(n)&\mathtt{q}_{4j}(n)\end{array}\right).$$
Hence
$$\mathtt{p}_{1k}(mn)=\sum_{i+j=k}\big{(}\mathtt{p}_{3i}(m)\mathtt{p}_{4j}(n)+\mathtt{f}_{3i}(m)\mathtt{g}_{4j}(n)\big{)},$$
and
$$\mathtt{q}_{1k}(mn)=\sum_{i+j=k}\big{(}\mathtt{g}_{3i}(m)\mathtt{f}_{4j}(n)+\mathtt{q}_{3i}(m)\mathtt{q}_{4j}(n)\big{)}.$$
Analogous identities for $\mathtt{p}_{2k}(nm)$ and $\mathtt{q}_{2k}(nm)$ can be checked in the same way.
∎
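To make the defining identity of a higher derivation concrete, here is a small numerical sanity check (ours, not part of the paper): the maps $D_{k}=\delta^{k}/k!$ built from an inner derivation $\delta(x)=[d,x]$ on a matrix algebra satisfy $D_{k}(xy)=\sum_{i+j=k}D_{i}(x)D_{j}(y)$, since they are the Taylor coefficients of the one-parameter automorphism group $\exp(t\delta)$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal((3, 3))

def delta(x):
    """Inner derivation delta(x) = [d, x] = d@x - x@d."""
    return d @ x - x @ d

def D(k, x):
    """D_k = delta^k / k!, the Taylor coefficients of the automorphism exp(t*delta)."""
    for _ in range(k):
        x = delta(x)
    return x / math.factorial(k)

x, y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
for k in range(6):
    lhs = D(k, x @ y)
    rhs = sum(D(i, x) @ D(k - i, y) for i in range(k + 1))
    assert np.allclose(lhs, rhs)
print("higher-derivation identity verified up to k = 5")
```

The check is exactly the Leibniz rule $\delta^{k}(xy)=\sum_{i}\binom{k}{i}\delta^{i}(x)\delta^{k-i}(y)$ divided by $k!$.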
Proof of Proposition 2.4
Proof.
Let $\tau$ map into the center of $\mathcal{G}$ and vanish on commutators. Suppose that the linear map $\tau_{k}:\mathcal{G}\longrightarrow Z(\mathcal{G})$ has the general form
$$\displaystyle\tau_{k}\left(\begin{array}[]{cc}a&m\\ n&b\end{array}\right)=\left(\begin{array}[]{cc}p_{1k}(a)+p_{2k}(b)+p_{3k}(m)+p_{4k}(n)&\\ &q_{1k}(a)+q_{2k}(b)+q_{3k}(m)+q_{4k}(n)\end{array}\right),$$
(6.14)
for all $k\in\mathbb{N}$.
Since $\tau$ vanishes on commutators, we have
$$\displaystyle 0$$
$$\displaystyle=$$
$$\displaystyle\tau_{k}\left[\left(\begin{array}[]{cc}a&m\\ n&b\end{array}\right),\left(\begin{array}[]{cc}a^{\prime}&m^{\prime}\\ n^{\prime}&b^{\prime}\end{array}\right)\right]$$
(6.15)
$$\displaystyle=$$
$$\displaystyle\tau_{k}\left(\begin{array}[]{cc}[a,a^{\prime}]+mn^{\prime}-m^{\prime}n&am^{\prime}+mb^{\prime}-a^{\prime}m-m^{\prime}b\\ na^{\prime}+bn^{\prime}-n^{\prime}a-b^{\prime}n&[b,b^{\prime}]+nm^{\prime}-n^{\prime}m\end{array}\right),$$
for all $k\in\mathbb{N}$. Now by relations (6.14) and (6.15) we have
$$p_{3k}(am^{\prime}+mb^{\prime}-a^{\prime}m-m^{\prime}b)=0,\quad q_{3k}(am^{\prime}+mb^{\prime}-a^{\prime}m-m^{\prime}b)=0,$$
Setting $a=a^{\prime}=0$, $b=0$, $b^{\prime}=1$ in the above commutator yields $p_{3k}(m)=0$ and $q_{3k}(m)=0$ for all $m\in M$ and $k\in\mathbb{N}$. Similarly, $p_{4k}(n)=0$ and $q_{4k}(n)=0$ for all $n\in N$, $k\in\mathbb{N}$. Moreover, if $m^{\prime}=0$, $n=0$, we have $p_{1k}(mn^{\prime})=p_{2k}(n^{\prime}m)$ and $q_{1k}(mn^{\prime})=q_{2k}(n^{\prime}m)$ for all $m\in M$, $n^{\prime}\in N$, $k\in\mathbb{N}$. Setting $b=0$, $m=0$, $n=0$ in relation (6.15) gives $p_{1k}[a,a^{\prime}]=0$ and $q_{1k}[a,a^{\prime}]=0$ for all $a,a^{\prime}\in A$, $k\in\mathbb{N}$. In the same way one checks that $p_{2k}[b,b^{\prime}]=0$ and $q_{2k}[b,b^{\prime}]=0$ for all $b,b^{\prime}\in B$, $k\in\mathbb{N}$. Since $\tau$ maps into $Z(\mathcal{G})$, we have
$$\left[\left(\begin{array}[]{cc}p_{1k}(a)+p_{2k}(b)&0\\ 0&q_{1k}(a)+q_{2k}(b)\end{array}\right),\left(\begin{array}[]{cc}a^{\prime}&m^{\prime}\\ n^{\prime}&b^{\prime}\end{array}\right)\right]=0,$$
for all $a,a^{\prime}\in A$, $b,b^{\prime}\in B$, $m^{\prime}\in M$, $n^{\prime}\in N$ and $k\in\mathbb{N}$. From this, a direct verification shows that the remaining identities hold. The converse is trivial.
∎
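As a toy illustration of the hypotheses in Proposition 2.4 (our example, not from the paper), the normalized trace $\tau(X)=\frac{1}{n}\operatorname{tr}(X)\,I$ on the full matrix algebra is a linear map into the center that vanishes on commutators:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2

def tau(x):
    """tau(X) = (tr X / n) I : linear, maps into the center, vanishes on commutators."""
    return (np.trace(x) / n) * np.eye(n)

for _ in range(100):
    x, y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    assert np.allclose(tau(x @ y - y @ x), 0.0)  # tr[x, y] = 0
    assert np.allclose(tau(x) @ y, y @ tau(x))   # tau(x) is a central (scalar) matrix
print("tau maps into the center and vanishes on commutators")
```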
References
[1]
D. Benkovič, Lie triple derivations of unital algebras with idempotents, Linear Multilinear Algebra 63 (2015), 141-165.
[2]
W.-S. Cheung, Mappings on triangular algebras, Ph.D. Thesis, University of Victoria (2000).
[3]
W.-S. Cheung, Lie derivations of triangular algebras, Linear Multilinear Algebra,
51 (2003), 299-310.
[4]
Y. Du and Y. Wang, Lie derivations of generalized matrix algebras, Linear Algebra Appl. 437 (2012), 2719-2726.
[5]
H.R. Ebrahimi Vishki, M. Mirzavaziri and F. Moafian, Characterizations of Jordan higher derivations on trivial extension algebras, Preprint.
[6]
M. Ferrero and C. Haetinger, Higher derivations of semiprime rings, Comm. Algebra, 30 (2002), 2321-2333.
[7]
D. Han, Higher derivations on operator algebras, Bull. Iranian Math. Soc., 40 (2014), 1735-8515.
[8]
J. Li and Q. Shen, Characterizations of Lie higher and Lie triple derivations on triangular algebras, J. Korean Math. Soc. 49 (2012), 419-433.
[9]
F. Lu and W. Jing, Characterizations of Lie derivations of $B(X)$, Linear Algebra Appl. 432 (2010), 89-99.
[10]
F. Lu and B. Liu, Lie derivable maps on $B(X)$, J. Math. Anal. Appl. 372 (2010), 369-376.
[11]
Y. Li and F. Wei, Semi-centralizing maps of generalized matrix algebras, Linear Algebra Appl. 436 (2012), 1122-1153.
[12]
F. Moafian, Jordan higher and Lie higher derivations on triangular algebras and module extension algebras, Ph.D. Thesis, Ferdowsi University of Mashhad (2015).
[13]
F. Moafian and H.R. Ebrahimi Vishki, Lie higher derivations on triangular algebras revisited, to appear in Filomat.
[14]
A.H. Mokhtari and H.R. Ebrahimi Vishki, More on Lie derivations of generalized matrix algebras, arXiv:1505.02344v1 [math.RA] 10 May 2015.
[15]
A. Nakajima, On generalized higher derivations, Turkish J. Math. 24 (2000), 295-311.
[16]
X.F. Qi, Characterization of Lie higher derivations on triangular algebras, Acta Math. Sinica, English Series, 29 (2013), 1007-1018.
[17]
X.F. Qi and J.C. Hou, Lie higher derivations on nest algebras, Comm. Math. Res. 26 (2010) 131-143.
[18]
A.D. Sands, Radicals and Morita contexts, J. Algebra 24 (1973), 335-345.
[19]
Y. Wang, Lie $n-$derivations of unital algebras with idempotents, Linear Algebra Appl. 458 (2014), 512-525.
[20]
Y. Wang and Y. Wang, Multiplicative Lie n-derivations of generalized matrix algebras, Linear Algebra Appl. 438 (2013) 2599-2616.
[21]
F. Wei and Z. Xiao, Higher derivations of triangular algebras and its generalizations, Linear Algebra Appl. 435 (2011), 1034-1054.
[22]
Z. Xiao and F. Wei, Jordan higher derivations on triangular algebras, Linear Algebra Appl. 432 (2010), 2615-2622.
[23]
Z. Xiao and F. Wei, Nonlinear Lie higher derivations on triangular algebras, Linear Multilinear Algebra, 60 (2012), 979-994. |
Reflectance of graphene-coated dielectric
plates in the framework of Dirac model:
Joint action of energy gap and chemical potential
G. L. Klimchitskaya${}^{1,2}$, V. S. Malyi${}^{2}$,
V. M. Mostepanenko${}^{1,2,3}$ and V. M. Petrov${}^{2}$
${}^{1}$Central Astronomical Observatory at Pulkovo
of the Russian Academy of Sciences, St.Petersburg, 196140, Russia
${}^{2}$Institute of Physics, Nanotechnology and
Telecommunications, Peter the Great Saint Petersburg
Polytechnic University, St.Petersburg, 195251, Russia
${}^{3}$Kazan Federal University, Kazan, 420008, Russia
vmostepa@gmail.com
Abstract
We investigate the reflectance of a dielectric plate coated with
a graphene sheet possessing a nonzero energy gap and chemical
potential at any temperature. The general formalism for the
reflectance using the polarization tensor is presented in the framework
of Dirac model. It allows calculation of the reflectivity properties
for any material plate coated with a real graphene sheet on the basis of
first principles of quantum electrodynamics. Numerical computations of
the reflectance are performed for the graphene-coated SiO${}_{2}$ plate at
room, liquid-nitrogen, and liquid-helium temperatures. We demonstrate
that there is a nontrivial interplay between the chemical potential,
energy gap, frequency, and temperature in their joint action on the
reflectance of a graphene-coated plate. Specifically, it is shown that
at the fixed frequency of an incident light an increase of the chemical
potential and the energy gap affect the reflectance in opposite
directions by increasing and decreasing it, respectively. According to
our results, the reflectance approaches unity at sufficiently low
frequencies and drops to that of an uncoated plate at high frequencies
for any values of the chemical potential and energy gap. The impact of
temperature on the reflectance is found to be more pronounced for graphene
coatings with smaller chemical potentials. The obtained results could
be applied for optimization of optical detectors and other devices exploiting
graphene coatings.
Keywords: graphene-coated plate, energy gap, chemical potential, reflectance
1 Introduction
Investigation of graphene, a two-dimensional sheet of carbon atoms packed in a hexagonal lattice,
has clearly demonstrated unique opportunities of this material for both fundamental and applied physics.
The Dirac model is of primary importance for understanding the physical properties of graphene.
According to this model, at energies below one or two eV the quasiparticles in graphene are either
massless or very light and obey a linear dispersion relation, although they move with the
Fermi velocity $\upsilon_{F}$, which is smaller than the speed of light $c$ by approximately a factor
of 300 (see [1, 2, 3]).
Theoretical description of the electrical conductivity and reflectances of graphene has been
developed in the framework of the Kubo response theory, the Boltzmann transport equation, the
formalism of the correlation functions in the random phase approximation and some other,
more phenomenological approaches, such as the two-dimensional Drude model. This provided insight
into the nature of optical properties of graphene and offered a clearer view on its universal
conductivity expressed via the electric charge $e$ and the Planck constant $\hbar$
[4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Some of the theoretical approaches mentioned above have been
applied also to calculate the Casimir (Casimir-Polder) force between two graphene sheets and
between an atom and graphene which is caused by the zero-point and thermal fluctuations of the electromagnetic field [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. It should be mentioned,
however, that the cited theoretical approaches are not based on the first principles of quantum electrodynamics at nonzero temperature which should be directly applicable to a simple system
described by the Dirac model. In this connection, due to some ambiguities in the statistical
description and limiting transitions, several conflicting results related to the conductivity,
optical properties and Casimir forces in graphene systems have been obtained
(see the discussion in [29, 30, 31]).
The most fundamental description of the optical properties, Casimir effect and electrical
conductivity of graphene is given by the polarization tensor in (2+1)-dimensional space-time.
Although in some specific cases this quantity was considered by many authors (see, e.g., [32, 33, 34]), the exact expressions valid at any temperature for graphene
with nonzero energy gap have been derived only recently [35, 36].
The application region of the obtained polarization tensor was, however, restricted to pure
imaginary Matsubara frequencies. Because of this, it was possible to apply it for detailed
investigation of the Casimir effect in graphene systems [31, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46],
but not for calculations of the reflectances and conductivities of graphene which are defined
at real frequencies.
The polarization tensor valid for a gapped graphene over the entire plane of complex
frequencies, including the real frequency axis, was derived in [47, 48].
In [49, 50] the results of [47, 48] were generalized for the case of graphene
with nonzero chemical potential. The novel form of the polarization tensor
[47, 48, 49, 50] has been used for a more detailed study of the Casimir effect in
graphene systems [51, 52, 53, 54, 55, 56, 57] and for investigation of the electrical
conductivity of graphene on the basis of first principles of quantum electrodynamics
at nonzero temperature [58, 59, 60, 61]. Special attention was devoted to the
field-theoretical description of the reflectivity properties of graphene.
In [47, 48] and [62] they were investigated in detail for both pure (gapless)
and gapped graphene, respectively, and in [63] the joint action of nonzero
energy gap and chemical potential was analyzed.
Of particular interest is the configuration of a dielectric plate coated with a
graphene sheet. In addition to the fundamental interest in this subject, graphene coatings
find prospective technological applications to optical detectors, various optoelectronic
devices, solar cells, biosensors etc. [64, 65, 66, 67]. Using the polarization tensor
of [47, 48], the reflectivity properties of material plates coated with pure [68, 69]
and gapped [70] graphene have been investigated. The case of thin films coated with
gapped graphene has been considered as well [71]. However, the joint action of nonzero
energy gap and chemical potential on the reflectivity properties of graphene-coated plates,
which is of major interest for condensed matter physics and its applications,
remains to date unexplored.
In this paper, we present the formalism using the polarization tensor of [47, 48, 49, 50] and
calculate the reflectance of a dielectric plate coated with a real graphene sheet which
possesses a nonzero energy gap in the spectrum of quasiparticles and a chemical potential.
Thus, here we exploit the same formalism, as was used in Refs. [51, 52, 53, 54, 55, 56, 57]
in the Casimir configurations of two parallel graphene sheets (graphene-coated plates),
but in a quite different physical situation.
All computations are performed at room temperature $T=300~{}$K, at the liquid-nitrogen
temperature $T=77~{}$K, and at the
liquid-helium temperature $T=4.4~{}$K for the plate made of silica glass SiO${}_{2}$. It is shown
that there is an interesting interplay between the chemical potential, energy gap, frequency
and temperature in their impact on the
reflectance of a graphene-coated plate.
Another result obtained is that at the fixed frequency $\omega$ the increasing chemical
potential and energy
gap affect the reflectance in opposite directions. Specifically, an increase of the chemical potential
either increases the reflectance or leaves it unchanged. On the other hand, an increase of
the energy gap either decreases the reflectance or leaves it constant. It is shown that for
any values of the chemical potential and energy gap the reflectance of a graphene-coated plate
approaches unity at sufficiently
low frequencies. With increasing frequency, the reflectance drops to that of an uncoated plate.
According to our results, for larger chemical potential the frequency region, where the
reflectance of
a graphene-coated plate depends on the presence of coating, becomes wider. However,
at frequencies $\hbar\omega>20~{}$meV the graphene coating makes no impact on the reflectance
of a SiO${}_{2}$ plate any
more. The impact of temperature on the reflectance is more pronounced for graphene
coating with
smaller chemical potentials. For sufficiently high chemical potential, temperature
no longer affects
the reflectance. By and large, the presented formalism allows computation of the
impact of graphene
coating on the reflectance of a plate made of any material.
The paper is organized as follows. In Section 2 we present the formalism of the
Dirac model expressing
the reflectance of a graphene-coated plate via the polarization tensor.
Section 3 contains the computational results for the reflectance as a function of
frequency and chemical potential under the
varying energy gap and temperature. In Section 4 the reflectance is considered
as a function of energy
gap under different values of the chemical potential and temperature.
In Section 5 the reader will find our conclusions and a discussion.
2 General formalism for the reflectance
We consider a sufficiently thick dielectric plate, which can be treated as a semispace,
coated with gapped graphene possessing a chemical potential $\mu$. It is well known
that for a pure freestanding graphene the energy gap in the spectrum of quasiparticles
is $\Delta=0$, but under the influence of structural defects, electron-electron
interactions and the presence of a material substrate a nonzero gap
$\Delta\lesssim 0.1$ or $0.2~{}$eV arises [1, 3, 34, 72, 73].
As to the chemical potential, it describes the doping
concentration, which is nonzero for any real graphene sample and takes typical values
$\mu\sim 0.1~{}$eV [3]. Below we consider only the case of normal incidence
so that the projection of the wave vector on the graphene plane vanishes, $k=0$
(the dependence on the incidence angle is discussed in [63, 68, 70]).
Because of this, it is sufficient to consider only the transverse magnetic (TM), i.e.,
$p$-polarized, electromagnetic waves of frequency $\omega$ incident on a graphene-coated plate.
The TM reflection coefficient on the dielectric plate coated with a graphene sheet is given by [46]
$$r_{\rm TM}^{(g,p)}(\omega,0)=\frac{r_{\rm TM}^{(g)}(\omega,0)+r_{\rm TM}^{(p)}(\omega,0)[1-2r_{\rm TM}^{(g)}(\omega,0)]}{1-r_{\rm TM}^{(g)}(\omega,0)r_{\rm TM}^{(p)}(\omega,0)},$$
(1)
where $r_{\rm TM}^{(g)}$ and $r_{\rm TM}^{(p)}$ are the TM reflection coefficients on graphene and on a plate taken separately, respectively (see Appendix for more details).
The magnitude of the wave vector projection on the plane of graphene is put equal to zero because we consider the normal incidence.
The TM reflection coefficient of the electromagnetic waves on a freestanding graphene sheet is expressed as [47, 71]
$$r_{\rm TM}^{(g)}(\omega,0)=\frac{\tilde{\Pi}_{00}(\omega,0)}{2+\tilde{\Pi}_{00}(\omega,0)},$$
(2)
where the 00 component of the normalized polarization tensor
of graphene $\tilde{\Pi}_{00}(\omega,0)$ is connected with the conventional definition
of the 00 component of this tensor by [71]
$$\tilde{\Pi}_{00}(\omega,0)=-\frac{i\omega}{\hbar c}\lim_{k\to 0}\frac{\Pi_{00}(\omega,k)}{k^{2}}.$$
(3)
The explicit expression for the quantity $\Pi_{00}$ for graphene with nonzero $\Delta$ and $\mu$ at any temperature $T$ is presented in [60] (see Appendix concerning different but equivalent expressions for the reflection coefficients on a freestanding graphene sheet in terms of the polarization tensor, electric susceptibility and density-density correlation functions).
For us it is now important that the 00 component of the polarization tensor is directly connected with the in-plane conductivity of graphene [45, 58, 59, 60]
$$\Pi_{00}(\omega,k)=\frac{4\pi i\hbar k^{2}}{\omega}\sigma(\omega,k).$$
(4)
Combining this equation with (3), one obtains
$$\tilde{\Pi}_{00}(\omega,0)=\frac{4\pi}{c}\sigma(\omega,0).$$
(5)
As a result, the TM reflection coefficient on graphene (2) is expressed in terms
of the in-plane conductivity
$$r_{\rm TM}^{(g)}(\omega,0)=\frac{2\pi\sigma(\omega,0)}{c+2\pi\sigma(\omega,0)}.$$
(6)
Now we return to the reflection coefficient $r_{\rm TM}^{(p)}$ on an uncoated dielectric plate. At the normal incidence it is given by the commonly known expression
$$r_{\rm TM}^{(p)}(\omega,0)=\frac{\sqrt{\varepsilon(\omega)}-1}{\sqrt{\varepsilon(\omega)}+1}=\frac{n(\omega)-1+ik(\omega)}{n(\omega)+1+ik(\omega)},$$
(7)
where $\varepsilon(\omega)$ is the frequency-dependent dielectric permittivity of the plate material
and $n(\omega)$ and $k(\omega)$ are the real and imaginary parts of its complex index of
refraction, respectively. Representing the complex conductivity of graphene as
$$\sigma(\omega,0)={\rm Re}\,\sigma(\omega,0)+i\,{\rm Im}\,\sigma(\omega,0),$$
(8)
and substituting (6)–(8) in (1),
one finds the reflection coefficient on a graphene-coated plate
$$r_{\rm TM}^{(g,p)}(\omega,0)=\frac{n(\omega)-1+\frac{4\pi}{c}{\rm Re}\,\sigma(\omega,0)+i[k(\omega)+\frac{4\pi}{c}{\rm Im}\,\sigma(\omega,0)]}{n(\omega)+1+\frac{4\pi}{c}{\rm Re}\,\sigma(\omega,0)+i[k(\omega)+\frac{4\pi}{c}{\rm Im}\,\sigma(\omega,0)]}.$$
(9)
From (9) one easily obtains the reflectance of the plate coated with a graphene sheet
$$\displaystyle{\cal R}(\omega)=|r_{\rm TM}^{(g,p)}(\omega,0)|^{2}$$
(10)
$$\displaystyle=\frac{[n(\omega)-1+\frac{4\pi}{c}{\rm Re}\,\sigma(\omega,0)]^{2}+[k(\omega)+\frac{4\pi}{c}{\rm Im}\,\sigma(\omega,0)]^{2}}{[n(\omega)+1+\frac{4\pi}{c}{\rm Re}\,\sigma(\omega,0)]^{2}+[k(\omega)+\frac{4\pi}{c}{\rm Im}\,\sigma(\omega,0)]^{2}}.$$
The exact expressions for real and imaginary parts of the in-plane conductivity of
graphene $\sigma(\omega,0)$ have been found in [60] by using the polarization tensor.
For the real part it was obtained
$${\rm Re}\,\sigma(\omega,0)=\sigma_{0}\,\theta(\hbar\omega-\Delta)\frac{(\hbar\omega)^{2}+\Delta^{2}}{2(\hbar\omega)^{2}}\left(\tanh\frac{\hbar\omega+2\mu}{4k_{B}T}+\tanh\frac{\hbar\omega-2\mu}{4k_{B}T}\right),$$
(11)
where the universal conductivity of graphene is $\sigma_{0}={e^{2}}/(4\hbar)$, the step function takes the values $\theta(x)=1$, $x\geq 0$ and $\theta(x)=0$, $x<0$, and $k_{B}$ is the Boltzmann constant.
The imaginary part for the in-plane conductivity of graphene is somewhat more complicated [60]
$${\rm Im}\,\sigma(\omega,0)=\frac{\sigma_{0}}{\pi}\left[\frac{2\Delta}{\hbar\omega}-\frac{(\hbar\omega)^{2}+\Delta^{2}}{(\hbar\omega)^{2}}\ln\left|\frac{\hbar\omega+\Delta}{\hbar\omega-\Delta}\right|+Y(\omega,\Delta,\mu)\right],$$
(12)
where the quantity $Y$ is defined as
$$\displaystyle Y(\omega,\Delta,\mu)=2\int_{\Delta/(\hbar\omega)}^{\infty}\!dt\sum_{\kappa=\pm 1}\left(\exp\frac{\hbar\omega t+2\kappa\mu}{2k_{B}T}+1\right)^{-1}\left[1+\frac{(\hbar\omega)^{2}+\Delta^{2}}{(\hbar\omega)^{2}}\frac{1}{t^{2}-1}\right].$$
(13)
Note that equations (11)–(13) are exact in the framework of the Dirac model.
They are derived on the basis of first principles of quantum electrodynamics at nonzero
temperature. Taking into account that the Dirac model is applicable at sufficiently low
frequencies ($\hbar\omega<1-2~{}$eV), one arrives at the conclusion that these equations
give the proper description of all physical properties dealing with electrical conductivity
and reflection of light from graphene in this frequency region.
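For reference, the real part (11) of the conductivity is simple enough to transcribe directly. The following sketch (ours) measures all energies, including $k_{B}T$, in eV and returns ${\rm Re}\,\sigma/\sigma_{0}$; at $T=300~{}$K one would pass $k_{B}T\approx 0.0259~{}$eV.

```python
import math

def re_sigma(hw, gap, mu, kt):
    """Re sigma(omega, 0) / sigma_0 from equation (11); all arguments in eV.

    hw  -- photon energy hbar*omega
    gap -- energy gap Delta
    mu  -- chemical potential
    kt  -- temperature expressed as k_B*T
    """
    if hw < gap:  # step function theta(hbar*omega - Delta)
        return 0.0
    pref = (hw ** 2 + gap ** 2) / (2.0 * hw ** 2)
    return pref * (math.tanh((hw + 2.0 * mu) / (4.0 * kt))
                   + math.tanh((hw - 2.0 * mu) / (4.0 * kt)))

# Gapless undoped graphene at low temperature: Re sigma -> sigma_0 (universal conductivity)
print(round(re_sigma(0.1, 0.0, 0.0, 1e-4), 3))  # 1.0
```

The imaginary part requires the integral (13), whose principal-value structure near $t=1$ makes a naive quadrature unreliable, so it is not sketched here.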
Recall that the imaginary part of the dielectric permittivity of graphene, which is
proportional to the real part (11) of its conductivity, is nonzero
over a wide frequency region.
This means that the concept of free Dirac quasiparticles properly accounts for the
energy losses of conduction electrons in graphene due to scattering processes
without resorting to any additional parameters, as in the two-dimensional Drude model [14]
or the Kubo approach [74].
Although the physics behind
(11)–(13) has already been discussed in the literature [60],
we briefly recall the meaning of the different contributions to the conductivity.
Thus, the real part of conductivity (11) is interpreted as originating from the
interband transitions [60, 75]. The imaginary part of conductivity (12)
contains contributions from both the interband and intraband transitions.
To separate them analytically, however, one needs to consider several distinct
asymptotic regions of the parameters in the quantity $Y$ defined in (13),
as is done in detail in [60]. For example, under the conditions
$k_{B}T\ll\mu$, $\hbar\omega\ll 2\mu$, and $\Delta<2\mu$ one arrives at [60]
$${\rm Im}\,\sigma(\omega,0)=\frac{\sigma_{0}}{\pi}\left[\frac{4\mu}{\hbar\omega}-4\ln 2\,\frac{k_{B}T}{\hbar\omega}-\frac{(\hbar\omega)^{2}+\Delta^{2}}{(\hbar\omega)^{2}}\ln\frac{2\mu+\hbar\omega}{2\mu-\hbar\omega}\right].$$
(14)
Here, the first two terms on the right-hand side correspond to the intraband transitions,
whereas the third term originates from the interband transitions [7, 60]
(see Ref. [60] for other relationships between $\hbar\omega$, $T$, $\mu$, and
$\Delta$ in (12) and (13)).
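The asymptotic expression (14) is likewise straightforward to evaluate numerically. The sketch below (ours) assumes its validity conditions $k_{B}T\ll\mu$, $\hbar\omega\ll 2\mu$, $\Delta<2\mu$, with all energies in eV; it illustrates that the intraband term $4\mu/(\hbar\omega)$ dominates at low frequency and that a larger gap lowers ${\rm Im}\,\sigma$.

```python
import math

def im_sigma_asym(hw, gap, mu, kt):
    """Im sigma(omega, 0) / sigma_0 from the asymptotic formula (14); all arguments in eV.

    Valid only for k_B*T << mu, hbar*omega << 2*mu and Delta < 2*mu.
    """
    intra = 4.0 * mu / hw - 4.0 * math.log(2.0) * kt / hw  # intraband contributions
    inter = -((hw ** 2 + gap ** 2) / hw ** 2) * math.log((2.0 * mu + hw) / (2.0 * mu - hw))
    return (intra + inter) / math.pi

# A larger gap lowers Im sigma at fixed frequency, chemical potential, and temperature
low_gap = im_sigma_asym(0.001, 0.0, 0.1, 1e-4)
print(low_gap > im_sigma_asym(0.001, 0.05, 0.1, 1e-4))  # True
```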
Using (10)–(13) and the optical data for the complex index of
refraction of the plate material (see, for instance, [76]), one can calculate
the reflectance of any material plate coated with real graphene sheet possessing nonzero
energy gap and chemical potential. Such computations are presented in the next sections.
3 Reflectance as the function of frequency and chemical
potential at a varying energy gap
We apply the formalism of Section 2 to describe the reflectance of a fused
silica (SiO${}_{2}$) plate coated with a graphene sheet. It is well known that
fused silica plates are frequently used as graphene
substrates [77, 78, 79, 80, 81].
This material is characterized by the static dielectric permittivity
$\varepsilon(0)\approx 3.82$ which remains unchanged up to rather high frequency
$\hbar\omega=10~{}$meV [76]. At higher frequencies the detailed optical
data for $n(\omega)$ and $k(\omega)$ for SiO${}_{2}$ are contained in [76].
Numerical computations of the reflectance of graphene-coated SiO${}_{2}$
plate are performed by using (10)–(13). We start with the
dependence of reflectance on frequency for three different values of the chemical
potential and varying energy gap. The computational results are shown
in figures 1(a), 1(b) and 1(c) at three temperatures $T=300~{}$K, $77$ K, $4.4$ K,
respectively.
In each panel the gray regions from left
to right are plotted for the values of the chemical potential
$\mu=0$, $0.1$, and $0.8~{}$eV, respectively. The width of the gray regions
is determined by the value of the energy gap, which varies from
$\Delta=0.1~{}$eV (the left boundary line of each region)
to $\Delta=0$ (the right boundary line of each region).
The bottom
lines in figures 1(a), 1(b) and 1(c), which are horizontal at
$\hbar\omega\leq 10~{}$meV, show the
reflectance of an uncoated SiO${}_{2}$ plate. It is obtained from
(10) by putting $\sigma(\omega,0)=0$:
$${\cal R}_{0}(\omega)=\frac{[n(\omega)-1]^{2}+k^{2}(\omega)}{[n(\omega)+1]^{2}+k^{2}(\omega)}.$$
(15)
In the frequency region $\hbar\omega\leq 10~{}$meV
we have $k(\omega)=0$ [76] and (15) reduces to
$${\cal R}_{0}(\omega)=\left[\frac{\sqrt{\varepsilon(0)}-1}{\sqrt{\varepsilon(0)}+1}\right]^{2}\approx 0.1044.$$
(16)
As is seen in figure 1, at the very low frequencies the reflectance of a
graphene-coated SiO${}_{2}$ plate is close to unity and drops to the reflectance of an
uncoated plate with increasing frequency.
Physically this means that at low frequencies the graphene coating behaves like a metallic one,
whereas at high frequencies it becomes transparent.
The frequency region, where this drop takes place,
shifts to higher frequencies with increasing chemical potential of graphene.
The width of the gray regions (i.e., the range of frequencies where the energy gap of
graphene sheet varies between zero and $0.1~{}$eV) quickly decreases with increasing $\mu$.
This means that for larger $\mu$ an impact of the energy gap on the reflectance quickly
decreases and for $\mu=0.8~{}$eV there is no impact of $\Delta$ on the reflectance at
any temperature. With decreasing temperature, some of the gray regions become wider.
This feature is especially pronounced for the undoped graphene ($\mu=0$).
Thus, at $T=77~{}$K and $4.4~{}$K the left boundaries of the left gray regions are missing
because they appear at much lower frequencies than those shown in figures 1(b) and 1(c).
Note that for any reflectance smaller than unity the widths of all gray regions
increase with decreasing frequency, but decrease when the reflectance approaches
unity.
The right boundary lines of the left gray regions in figures 1(a)
and 1(b), plotted for
the case $\mu=\Delta=0$ at $T=300~{}$K and $77~{}$K, coincide with the lines 3 and 2,
respectively, obtained earlier for this particular case in [68].
It is interesting that at a fixed frequency the increase of $\mu$ typically leads
to a larger (or the same) reflectance, whereas the increase of $\Delta$ results in a
smaller (or the same) reflectance, i.e., both quantities act in opposite directions.
This effect has a simple physical explanation: increasing $\mu$ increases
the density of charge carriers and hence the conductivity, resulting in a higher reflectance.
Conversely, an increased energy gap decreases the mobility of charge
carriers and the conductivity and, finally, lowers the reflectance.
At $\hbar\omega\geq 20~{}$meV the graphene coating does not make any impact on the
reflectance. Note that here and below we do not show the unobservable
infinitesimally narrow peaks ${\cal R}=1$ which appear under the condition
$\hbar\omega=\Delta$ (see [70] for a discussion of this artefact).
Now we consider the dependence of the reflectance on the chemical potential.
This is done separately at three different temperatures. In figure 2, the
reflectance of a graphene-coated plate is shown as a function of $\mu$ at $T=300~{}$K.
The four gray regions from top to bottom are plotted for the frequency
$\hbar\omega=0.1$, $1$, $5$, and $10~{}$meV, respectively. In each of these regions,
the upper boundary line is plotted for the energy gap $\Delta=0$ and the lower
boundary line for $\Delta=0.1~{}$eV. The dashed lines are computed for $\Delta=0.05~{}$eV.
They are depicted only for the sufficiently wide gray regions related to
$\hbar\omega=0.1$ and 1 meV. As is seen in figure 2, with increasing
$\mu$ an impact of the energy gap on the reflectance quickly decreases in line with
figure 1. The reflectance is a monotonically increasing function of $\mu$.
The maximum values of the reflectance are reached faster for light of lower frequency.
Next, we present the computational results for the reflectance as a function of
$\mu$ at $T=77~{}$K. In figure 3 it is plotted (a) at $\hbar\omega=0.1$ meV and
(b) at $\hbar\omega=1.0$ meV. The upper and lower boundary lines of the gray regions
are plotted for the energy gap $\Delta=0$ and $0.1$ eV, respectively, whereas the
dashed middle lines for $\Delta=0.05~{}$eV. In an inset to figure 3(b) the
region of small values of $\mu\leq 0.12~{}$eV is shown on an enlarged scale.
As can be seen in figure 3, the reflectance is again a monotonically increasing
function of $\mu$. At $\hbar\omega=1.0~{}$meV [see figure 3(b)] the lower boundary line
of the gray region ($\Delta=0.1~{}$eV) is very similar to the respective line in
figure 2 plotted at $T=300~{}$K and differs from it only by a sharper angle.
From figure 3(a) it is seen that for $\mu\geq 0.12~{}$eV the reflectance
${\cal R}=1$ and does not depend on $\mu$. At $\hbar\omega=1.0~{}$meV the reflectance
approaches unity at a much larger $\mu\approx 0.8~{}$eV [see figure 3(b)].
We do not present
the computational results for a reflectance at $\hbar\omega=5~{}$and $10~{}$meV because
at $T=77~{}$K they are nearly the same as at $T=300~{}$K (see figure 2).
Finally, we repeat computations of the reflectance of a graphene-coated SiO${}_{2}$
plate as a function of $\mu$ at $T=4.4~{}$K. The computational results are shown in
figures 4(a) and 4(b) for the values of frequency $\hbar\omega=0.1~{}$ and $1.0$ meV,
respectively. All notations are the same as already explained in figures 3(a) and 3(b).
It is seen that figure 4(a) differs from figure 3(a), plotted at $\hbar\omega=0.1~{}$meV,
$T=77~{}$K, by a sharper angle of the lower line restricting the gray region.
It is seen also that at $\hbar\omega=0.1~{}$meV, $T=4.4~{}$K the graphene coating with
$\mu=0$ does not influence the reflectance at any value of the energy gap from 0
to 0.1 eV (this is not the case at $\hbar\omega=0.1~{}$meV, $T=77~{}$K).
Figure 4(b) plotted at $\hbar\omega=1.0~{}$meV, $T=4.4~{}$K differs from
figure 3(b) plotted at the same frequency but $T=77~{}$K by only a sharper
angle of the lower line restricting the gray region (see the inset).
In this case at $\mu=0$ the graphene coating makes no impact on the reflectance
at both $77~{}$K and $4.4~{}$K. At $T=4.4~{}$K the computational results for the reflectance
at the frequencies $\hbar\omega=5$ and $10~{}$meV are again nearly the same as
at $T=300~{}$K. They are shown by the bottom line and by the next to it gray region
in figure 2.
4 Reflectance as a function of the energy gap at
different chemical potentials and frequencies
In this section we present the computational results for the reflectance of
a graphene-coated SiO${}_{2}$
plate as a function of the energy gap. In so doing, the chemical potential
$\mu$ may at times be fixed,
whereas the frequency takes several values, and at other times the fixed quantity
is the frequency and
the chemical potential takes different values. All computations are
again performed by using (10)–(13).
In figure 5 the reflectance is shown as a function of
$\Delta$ for graphene coating with $\mu=0.01~{}$eV at (a) $T=300~{}$K and
(b) $T=77~{}$K.
In figure 5(a) the seven lines from top to bottom are plotted
for the values of
$\hbar\omega=~{}0.01$, $0.05$, $0.1$, $0.25$, $0.5$, $1$, and $\geq 5$ meV,
respectively.
In figure 5(b) there are five lines for the values of
$\hbar\omega=0.01$, $0.05$, $0.1$, $0.5$
and $\geq 1$ meV. In both figures 5(a) and 5(b) the computational
results do not change when
$\hbar\omega$ becomes larger than $5$ and $1$ meV, respectively.
When the temperature decreases below
$77$ K, the computational results do not change, so that figure 5(b)
remains applicable.
As is seen in figure 5, the reflectance decreases monotonically
with increasing $\Delta$.
At lower temperature this decrease is faster.
In figure 6, similar computational results are shown for the reflectance as
a function of $\Delta$ at $T=300~{}$K for graphene coating with $\mu=0.1~{}$eV.
It is interesting that for this relatively large $\mu$ the reflectance is
already temperature-independent when $T$ decreases below $300$ K.
Because of this, the computational results at $T=77~{}$K and $4.4$ K are not shown.
The seven lines from top to bottom are plotted for $\hbar\omega=0.01$,
$0.1$, $0.5$, $1$, $2.5$, $5$, and $10$ meV, respectively.
As is seen in figure 6, in this case the reflectance varies with $\Delta$ very slowly.
Now we consider the dependence of the reflectance on energy gap at the fixed
frequency and different values of $\mu$. In figure 7 the reflectance of a
graphene-coated plate is shown as a function of the energy gap at
$\hbar\omega=0.1~{}$meV and (a) $T=300~{}$K, (b) $T=77~{}$K, and (c) $T=4.4~{}$K.
The seven lines from bottom to top are plotted for the values of chemical
potential $\mu=0$, $0.01$, $0.03$, $0.05$, $0.07$, $0.1$, and $\geq 0.5~{}$eV,
respectively. As is seen in figure 7, the reflectance is again a decreasing
function of $\Delta$. At fixed $\Delta$ it takes either larger or the same values
for larger chemical potential. With decreasing temperature, a decrease of the
reflectance with increasing $\Delta$ becomes faster. At $T=4.4~{}$K [figure 7(c)]
the graphene coating with $\mu=0$ does not influence the reflectance shown by the bottom line.
At fixed $\Delta$ the impact of temperature on the reflectance is more pronounced for graphene
coating with smaller chemical potential. This is because at zero temperature the properties of
graphene with small $\mu$ are almost the same as those of undoped graphene.
Finally, in figure 8 the computational results for the reflectance are shown
as the function of $\Delta$ at $\hbar\omega=1~{}$meV and (a) $T=300~{}$K, (b) $T=77~{}$K
(the values of the reflectance computed at $T=77~{}$K do not change with further decreasing
temperature). The seven lines from bottom to top are plotted for $\mu\leq 0.01~{}$eV,
$\mu=0.03$, $0.07$, $0.1$, $0.2$, $0.5$, and $0.8~{}$eV, respectively. As can be seen in
figure 8, the top two lines corresponding to the largest values of the chemical
potential are nearly flat, i.e., the reflectance does not depend on $\Delta$ at any
temperature $T\leq 300$ K. It is seen also that the respective lines in figures 8(a)
and 8(b) are very similar, i.e., the impact of temperature is very minor for
all considered values of the chemical potential.
5 Conclusions and discussion
In this paper, we have presented a formalism, based on the first principles of
quantum electrodynamics at nonzero temperature, which allows one to compute the reflectance of
a dielectric plate coated with a real graphene sheet possessing a nonzero energy gap and chemical
potential. In doing so, the graphene sheet is described by the polarization tensor in
(2+1)-dimensional space-time, and the dielectric plate by the measured optical data for
the complex index of refraction of the plate material. This formalism was applied to the
case of a graphene-coated plate made of silica glass at room, liquid-nitrogen, and
liquid-helium temperatures.
The results of numerical computations show that there is a nontrivial interplay in the
joint action of the energy gap, chemical potential, frequency of the incident light,
and temperature on the reflectance. According to our results, an increase of the chemical
potential and the energy gap of graphene coating influences the reflectance in opposite
directions by making it larger and smaller, respectively.
The physical explanation for this result is provided.
It is shown that at sufficiently
low frequencies the reflectance of a graphene-coated plate approaches unity, but goes
down to the reflectance of an uncoated plate at sufficiently
high frequencies.
In this respect graphene coating is qualitatively similar to a metallic film.
At $\hbar\omega\geq 20~{}$meV the graphene coating does not affect
the reflectance of a plate made of silica glass independently of the values of the energy
gap and chemical potential. Temperature makes a major impact on the reflectance for graphene
coatings with relatively small chemical potential. For sufficiently high chemical potential,
a decrease from room temperature down to the liquid-helium value does not influence
the reflectance.
We underline that the graphene coating significantly
affects the reflectance only at frequencies
below approximately $\hbar\omega=20~{}$meV, i.e., fully within the application region of the
Dirac model, which extends up to 1–2 eV. This means that the developed formalism can be
reliably used for controlling the reflectance of graphene-coated plates, e.g.,
for making it larger or smaller, by modifying the frequency, temperature, energy gap or
the doping concentration.
It would be of much interest to compare the predicted effects with experimental data.
Currently, however, available experiments are performed at higher frequencies.
Thus, the reflectance of graphene-coated SiO${}_{2}$ substrates was measured at
$\hbar\omega$ above 60 meV [84], 130 meV [85], and 500 meV [75].
In these frequency regions, an impact of graphene on the reflectance of a substrate
is rather small and
was found in agreement with calculations using the polarization tensor with
$\Delta=\mu=0$ [68, 69] (note that with increasing frequency the influence of
nonzero $\Delta$ and $\mu$ on the graphene reflectance decreases).
The relative infrared reflectance of graphene deposited
on a SiC substrate, investigated for different values of $\mu$ at
$\hbar\omega\geq 12~{}$meV [86], is in qualitative agreement with the computational
results presented in figure 1.
We conclude that the large impact of graphene coating on the substrate reflectance in
the far-infrared region, and the possibilities predicted above to control it,
might be beneficial
for numerous applications of graphene-coated plates mentioned in section 1.
V. M. M. was partially funded by the Russian Foundation for Basic
Research Grant No. 19-02-00453 A.
The work of V. M. M. was also partially supported by the Russian Government Program of
Competitive Growth of Kazan Federal University.
Appendix
In this Appendix, we derive the basic expression (1) for the reflection coefficient of
a graphene-coated plate [46]. We also consider how the reflection coefficients of a
graphene sheet are expressed via the polarization tensor of graphene and other related
quantities such as the electric susceptibility and the density-density correlation function.
This helps to understand equations (2)–(4).
Let us consider the system consisting of a graphene sheet, characterized by the amplitude
reflection coefficient $r^{(g)}(\omega,k)$ and the transmission coefficient $t^{(g)}(\omega,k)$
spaced in vacuum at a height $a$ above a thick dielectric plate characterized by the amplitude
reflection coefficient $r^{(p)}(\omega,k)$. These coefficients can be either transverse
magnetic (TM) or transverse electric (TE).
With account of multiple reflections on the graphene sheet and on the top boundary plane
of the dielectric plate, the reflection coefficient of our system takes the form [82]
$$\displaystyle r^{(g,p)}(\omega,k)=r^{(g)}(\omega,k)+t^{(g)}(\omega,k)r^{(p)}(%
\omega,k)t^{(g)}(\omega,k)e^{2iap(\omega,k)}$$
$$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}\times\sum_{n=0}^{\infty}\left[r^{(g)}(%
\omega,k)r^{(p)}(\omega,k)e^{2iap(\omega,k)}\right]^{n}$$
$$\displaystyle~{}~{}~{}=r^{(g)}(\omega,k)+\frac{t^{(g)}(\omega,k)r^{(p)}(\omega%
,k)t^{(g)}(\omega,k)e^{2iap(\omega,k)}}{1-r^{(g)}(\omega,k)r^{(p)}(\omega,k)e^%
{2iap(\omega,k)}},$$
(17)
where $p(\omega,k)=(\omega^{2}/c^{2}-k^{2})^{1/2}$.
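The infinite sum in (17) is a geometric series; as a quick numerical sketch (with arbitrary illustrative amplitudes, not values taken from the paper), one can check that the partial sums reproduce the closed form on the last line:

```python
import numpy as np

# Illustrative sample values for the reflection/transmission amplitudes and
# the phase factor e^{2iap}, chosen only so that |r_g * r_p * phase| < 1.
r_g = 0.3 + 0.1j          # graphene reflection amplitude (made up)
r_p = 0.5 - 0.2j          # plate reflection amplitude (made up)
t_g = 0.8 + 0.05j         # graphene transmission amplitude (made up)
phase = np.exp(2j * 0.7)  # stands for e^{2 i a p(omega, k)}

# Partial sum of the multiple-reflection series in (17)
series = r_g + t_g * r_p * t_g * phase * sum(
    (r_g * r_p * phase) ** n for n in range(200)
)

# Closed form obtained by summing the geometric series
closed = r_g + t_g * r_p * t_g * phase / (1 - r_g * r_p * phase)

assert abs(series - closed) < 1e-12
```

Any amplitudes with $|r^{(g)}r^{(p)}e^{2iap}|<1$ would do; the series converges to the resummed expression.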
Taking into account that for a graphene sheet in vacuum in the case of TM polarization
of the electromagnetic field one has $t^{(g)}(\omega,k)=1-r^{(g)}(\omega,k)$ [83],
(17) can be rewritten in the form
$$r_{a,\rm TM}^{(g,p)}(\omega,k)=\frac{r_{\rm TM}^{(g)}(\omega,k)+r_{\rm TM}^{(p)}(\omega,k)[1-2r_{\rm TM}^{(g)}(\omega,k)]e^{2iap(\omega,k)}}{1-r_{\rm TM}^{(g)}(\omega,k)r_{\rm TM}^{(p)}(\omega,k)e^{2iap(\omega,k)}}.$$
(18)
By considering the limiting case $a\to 0$ in (18), one obtains the reflection
coefficient [46]
$$r_{0,\rm TM}^{(g,p)}(\omega,k)\equiv r_{\rm TM}^{(g,p)}(\omega,k)=\frac{r_{\rm
TM%
}^{(g)}(\omega,k)+r_{\rm TM}^{(p)}(\omega,k)[1-2r_{\rm TM}^{(g)}(\omega,k)]}{1%
-r_{\rm TM}^{(g)}(\omega,k)r_{\rm TM}^{(p)}(\omega,k)},$$
(19)
which coincides with (1) at normal incidence, $k=0$.
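The step from (17) to (19) can also be sketched numerically: substituting $t^{(g)}=1-r^{(g)}$ and letting the phase factor go to unity ($a\to 0$) in (17) reproduces the quoted closed form. The amplitudes below are illustrative placeholders, not physical values.

```python
# Check (not from the paper's code) that (17) with t^(g) = 1 - r^(g)
# and a -> 0 reduces to (19).
r_g = 0.25 + 0.15j   # illustrative TM amplitude for the graphene sheet
r_p = 0.60 - 0.10j   # illustrative TM amplitude for the plate
t_g = 1 - r_g        # TM relation for a free-standing graphene sheet [83]

# (17) with the phase factor e^{2iap} set to 1
r_from_17 = r_g + t_g * r_p * t_g / (1 - r_g * r_p)

# (19) as quoted in the text
r_from_19 = (r_g + r_p * (1 - 2 * r_g)) / (1 - r_g * r_p)

assert abs(r_from_17 - r_from_19) < 1e-12
```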
The TM reflection coefficient of the electromagnetic waves on a graphene sheet
can be written in the form [4, 6, 21]
$$r_{\rm TM}^{(g)}(\omega,k)=\frac{p(\omega,k)\alpha(\omega,k)}{ik+p(\omega,k)%
\alpha(\omega,k)},$$
(20)
where the longitudinal (in-plane) electric susceptibility (polarizability) of graphene
$\alpha(\omega,k)$ is expressed via the respective dielectric permittivity and
density-density correlation function as
$$\alpha(\omega,k)=\varepsilon(\omega,k)-1=-\frac{2\pi e^{2}}{k}\chi(\omega,k).$$
(21)
The latter is directly connected with the in-plane conductivity of graphene [21]
$$\sigma(\omega,k)=\frac{ie^{2}\omega}{k^{2}}\chi(\omega,k).$$
(22)
Note that all the quantities $\alpha$, $\varepsilon$, $\chi$, and $\sigma$ are also
temperature-dependent.
The longitudinal (in-plane) density-density correlation function of graphene is expressed
via the component $\Pi_{00}$ of the polarization tensor [45]
$$\chi(\omega,k)=-\frac{1}{4\pi e^{2}\hbar}\Pi_{00}(\omega,k).$$
(23)
In the framework of the Dirac model, the polarization tensor represents the effective
action for massless (or very light) fermionic quasiparticles to quadratic order in the
external electromagnetic field.
Substituting (21) and (23) in (20), one obtains the TM
reflection coefficient on a graphene sheet in terms of the polarization tensor
$$r_{0,\rm TM}^{(g)}(\omega,k)=\frac{p(\omega,k)\Pi_{00}(\omega,k)}{2i\hbar k^{2}+p(\omega,k)\Pi_{00}(\omega,k)}.$$
(24)
Introducing here the notation (3), one arrives at (2).
Then, by combining (22) with (23), we obtain equation (4).
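Eliminating $\chi$ between (22) and (23) can be done symbolically; the result, $\sigma=-i\omega\Pi_{00}/(4\pi\hbar k^{2})$, is what we understand to be the content of equation (4) (which is not reproduced in this excerpt, so that identification is an assumption):

```python
import sympy as sp

omega, k, e, hbar = sp.symbols('omega k e hbar', positive=True)
Pi00 = sp.Function('Pi00')(omega, k)

# (23): density-density correlation function via the polarization tensor
chi = -Pi00 / (4 * sp.pi * e**2 * hbar)

# (22): in-plane conductivity of graphene in terms of chi
sigma = sp.I * e**2 * omega / k**2 * chi

# Eliminating chi gives sigma directly in terms of Pi00
target = -sp.I * omega * Pi00 / (4 * sp.pi * hbar * k**2)
assert sp.simplify(sigma - target) == 0
```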
References
[1]
Castro Neto A H, Guinea F, Peres N M R, Novoselov K S
and Geim A K 2009 Rev. Mod. Phys. 81 109
[2]
Katsnelson M I 2012
Graphene: Carbon in Two Dimensions
(Cambridge: Cambridge University Press)
[3]
Aoki H and Dresselhaus M S (eds) 2014 Physics of Graphene
(Cham: Springer)
[4]
Stauber T, Peres N M R and Geim A K 2008
Phys. Rev. B 78 085432
[5]
Gusynin V P, Sharapov S G and Carbotte J P 2007
Phys. Rev. Lett. 98 157402
[6]
Falkovsky L A and Pershoguba S S 2007
Phys. Rev. B 76 153410
[7]
Falkovsky L A 2008
J. Phys.: Conf. Ser. 129
012004
[8]
Pedersen T G, Jauho A-P and Pedersen K 2009
Phys. Rev. B 79
113406
[9]
Merano M 2016
Phys. Rev. A 93
013832
[10]
Auslender M and Katsnelson M I 2007
Phys. Rev. B 76
235425
[11]
Scholz A and Schliemann J 2011
Phys. Rev. B 83 235409
[12]
Dartora C A and Cabrera G G 2013
Phys. Rev. B 87
165416
[13]
Liu D and Zhang S 2008
J. Phys.: Condens. Matter 20
175222
[14]
Horng J et al 2011
Phys. Rev. B 83
165113
[15]
Dobson J F, White A and Rubio A 2006
Phys. Rev. Lett. 96
073201
[16]
Gómez-Santos G 2009
Phys. Rev. B 80
245424
[17]
Drosdoff D and Woods L M 2010
Phys. Rev. B 82
155459
[18]
Drosdoff D and Woods L M 2011
Phys. Rev. A 84
062501
[19]
Sernelius Bo E 2011
Europhys. Lett. 95
57003
[20]
Judd T E, Scott R G, Martin A M, Kaczmazek B and Fromhold T M 2011
New J. Phys. 13
083020
[21]
Sernelius Bo E 2012
Phys. Rev. B 85
195427
[22]
Sarabadani J, Naji A, Asgari R and Rodgornik R 2011
Phys. Rev. B 84
155407
[23]
Sarabadani J, Naji A, Asgari R and Rodgornik R 2013
Phys. Rev. B 87
239905
[24]
Drosdoff D, Phan A D, Woods L M, Bondarev I V and Dobson J F 2012
Eur. Phys. J. B 85
365
[25]
Tse W-K and MacDonald A H 2012
Phys. Rev. Lett. 109
236806
[26]
Cysne T, Kort-Kamp W J M, Oliver D, Pinheiro F A, Rosa F S S and Farina C 2014
Phys. Rev. A 90
052511
[27]
Svetovoy V B and Palasantzas G 2014
Phys. Rev. Appl. 2
034006
[28]
Khusnutdinov N, Kashapov R and Woods L M 2016
Phys. Rev. A 94
012513
[29]
Ziegler K 2006
Phys. Rev. Lett. 97
266802
[30]
Lewkowicz M and Rosenstein B 2009
Phys. Rev. Lett. 102
106802
[31]
Klimchitskaya G L and Mostepanenko V M 2013
Phys. Rev. B 87
075439
[32]
Gorbar E V, Gusynin V P, Miransky V A and Shovkovy I A 2002
Phys. Rev. B 66
045108
[33]
Gusynin V P and Sharapov S G 2006
Phys. Rev. B 73
245411
[34]
Pyatkovsky P K 2009
J. Phys.: Condens. Matter 21
025506
[35]
Bordag M, Fialkovsky I V, Gitman D M and
Vassilevich D V 2009
Phys. Rev. B 80
245406
[36]
Fialkovsky I V, Marachevsky V N and
Vassilevich D V 2011
Phys. Rev. B 84
035446
[37]
Bordag M, Klimchitskaya G L and
Mostepanenko V M 2012
Phys. Rev. B 86 165429
[38]
Chaichian M, Klimchitskaya G L, Mostepanenko V M
and Tureanu A 2012
Phys. Rev. A 86 012515
[39]
Arora B, Kaur H and Sahoo B K 2014
J. Phys. B 47 155002
[40]
Kaur K, Kaur J, Arora B and Sahoo B K 2014
Phys. Rev. B 90 245405
[41]
Klimchitskaya G L and Mostepanenko V M 2014
Phys. Rev. A 89 012516
[42]
Klimchitskaya G L and Mostepanenko V M 2014
Phys. Rev. B 89 035407
[43]
Klimchitskaya G L and Mostepanenko V M 2014
Phys. Rev. A 89 052512
[44]
Klimchitskaya G L and Mostepanenko V M 2014
Phys. Rev. A 89 062508
[45]
Klimchitskaya G L, Mostepanenko V M and
Sernelius Bo E 2014
Phys. Rev. B 89 125407
[46]
Klimchitskaya G L, Mohideen U and Mostepanenko V M 2014
Phys. Rev. B 89 115419
[47]
Bordag M, Klimchitskaya G L, Mostepanenko V M and Petrov V M 2015
Phys. Rev. D 91 045037
[48]
Bordag M, Klimchitskaya G L, Mostepanenko V M and Petrov V M 2016
Phys. Rev. D 93 089907
[49]
Bordag M, Fialkovskiy I and Vassilevich D 2016
Phys. Rev. B 93 075414
[50]
Bordag M, Fialkovskiy I and Vassilevich D 2017
Phys. Rev. B 95 119905
[51]
Klimchitskaya G L and Mostepanenko V M 2015
Phys. Rev. B 91 174501
[52]
Klimchitskaya G L 2016
Int. J. Mod. Phys. A 31 1641026
[53]
Bezerra V B, Klimchitskaya G L,
Mostepanenko V M and Romero C 2016
Phys. Rev. A 94 042501
[54]
Bimonte G, Klimchitskaya G L and Mostepanenko V M 2017
Phys. Rev. A 96 012517
[55]
Bimonte G, Klimchitskaya G L and Mostepanenko V M 2017
Phys. Rev. B 96 115430
[56]
Henkel C, Klimchitskaya G L and Mostepanenko V M 2018
Phys. Rev. A 97 032504
[57]
Klimchitskaya G L and Mostepanenko V M 2018
Phys. Rev. A 98 032506
[58]
Klimchitskaya G L and Mostepanenko V M 2016
Phys. Rev. B 93 245419
[59]
Klimchitskaya G L and Mostepanenko V M 2016
Phys. Rev. B 94 195405
[60]
Klimchitskaya G L, Mostepanenko V M and Petrov V M 2017
Phys. Rev. B 96 235432
[61]
Klimchitskaya G L and Mostepanenko V M 2018
Phys. Rev. D 97 085001
[62]
Klimchitskaya G L and Mostepanenko V M 2016
Phys. Rev. A 93 052106
[63]
Klimchitskaya G L, Mostepanenko V M and Petrov V M 2018
Phys. Rev. A 98 023809
[64]
Bludov Yu V, Vasilevskiy M I and Peres N M R 2010
Europhys. Lett. 92
68001
[65]
Chung K, Lee C-H and Yi G-C 2010
Science 330
655
[66]
Hong W, Xu Y, Lu G, Li C and Shi G 2008
Electrochem. Commun. 10
1555
[67]
Kuila T, Bose S, Khanra P, Mishra A K, Kim N H and Lee J H 2011
Biosens. Bioelectron. 29
4637
[68]
Klimchitskaya G L, Korikov C C and Petrov V M 2015
Phys. Rev. B 92 125419
[69]
Klimchitskaya G L, Korikov C C and Petrov V M 2016
Phys. Rev. B 93
159906
[70]
Klimchitskaya G L and Mostepanenko V M 2017
Phys. Rev. B 95 035425
[71]
Klimchitskaya G L and Mostepanenko V M 2018
Phys. Rev. A 97 063817
[72]
Gusynin V P, Sharapov S G and Carbotte J P 2009
New J. Phys. 11 095013
[73]
Jafari S A 2012
J. Phys.: Condens. Matter 24 205802
[74]
Falkowsky L A and Varlamov A A 2007
Eur. Phys. J. B 56 281
[75]
Mak K F, Sfeir M Y, Wu Y, Lui C H, Misewich J A
and Heinz T F 2008
Phys. Rev. Lett. 101 196405
[76]
Palik E D (ed) 1985
Handbook of Optical Constants of Solids
(New York: Academic Press)
[77]
Banishev A A, Wen H, Xu J, Kawakami R K,
Klimchitskaya G L, Mostepanenko V M and Mohideen U 2013
Phys. Rev. B 87 205433
[78]
da Gunha Rodrigues G, Zelenovskiy P, Romanyuk K, Luchkin S,
Kopelevich Ya and Kholkin A 2015
Nat. Commun. 6 7572
[79]
Kang Y-J, Kang J and Chang K J 2008
Phys. Rev. B 78 115404
[80]
Ishigami M, Chen J H, Cullen W G, Fuhrer M S and Williams E D 2007
Nano. Lett. 7 1643
[81]
Nguyen T C, Otani M and Okada S 2011
Phys. Rev. Lett. 106 106801
[82]
Chew W C 1999 Waves and Fields in Inhomogeneous Media
(New York: Wiley-IEEE Press)
[83]
Stauber T, Peres N M R and Geim A K 2008
Phys. Rev. B 78 085432
[84]
Goldflam M D, Ruiz I, Howell S W, Wendt J R, Sinclair M B, Peters D W and
Beechem T E 2018 Opt. Express 26 8532
[85]
Saigal N, Mukherjee A, Sugunakar V and Ghosh S 2014
Rev. Sci. Instr. 85 073105
[86]
Santos C N, Joucken F, De Sousa Meneses D, Echegut P, Campos-Delgado J,
Louette P, Raskin J-P and Haskens B 2016
Sci. Reports 6 2430
Crystalline Geometries from Fermionic Vortex Lattice
M. Reza Mohammadi Mozaffar,
Ali Mollabashi
m_mohammadi@ipm.ir
mollabashi@ipm.ir
Abstract
We study charged Dirac fermions on an AdS${}_{2}\times\mathbb{R}^{2}$ background with a non-zero magnetic field. Under certain boundary conditions, we show that the charged fermion can make the background unstable, resulting in the spontaneous formation of a vortex lattice.
We observe that an electric field emerges in the back-reacted solution due to the vortex lattice constructed from spin polarized fermions. This electric field may be extended to the UV boundary which leads to a finite charge density. We also discuss corrections to the thermodynamic functions due to the lattice formation.
††institutetext: School of physics, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
IPM/P-2013/020
1 Introduction
The AdS/CFT correspondence has provided a powerful tool for studying strongly coupled field theories. This correspondence has been applied to certain systems in condensed matter physics (for excellent reviews see Iqbal:2011ae ; Hartnoll:2009sz ; Hartnoll:2011fn ). Specifically, in the last few years much effort has been devoted to applying this approach to various phenomena, including superconductivity and Fermi and non-Fermi liquids.
From the gravity point of view, this can be done by considering different matter fields on an AdS-RN geometry, which corresponds to physical systems at finite temperature and finite density.
A natural question one may pose is how to take into account the effects of the underlying lattice that strongly coupled systems live on. This key ingredient of condensed matter systems was neglected in most of the earlier studies (see however Kachru:2009xf ; Kachru:2010dk ; Hellerman:2002qa ).
To consider a lattice, one should break the translational invariance of the dual field theory. This was first motivated by the significant effect of momentum dissipation of charge carriers on the optical conductivity Horowitz:2012ky ; Horowitz:2012gs . In these papers the authors explicitly broke the translational symmetry in two different ways: either by imposing a spatially inhomogeneous periodic source for a neutral scalar field coupled to an Einstein-Maxwell theory Horowitz:2012ky , or by considering the back-reaction of a periodic chemical potential on the metric in an Einstein-Maxwell theory Horowitz:2012gs . In both cases, lattice effects were handled by solving the coupled PDEs numerically. The most important achievement of these models was the holographic reconstruction of the Drude peak in the optical conductivity and of the power-law behavior in an intermediate frequency range111Fermions on these backgrounds have been studied in Ling:2013aya ..
Other approaches have also been used to study lattice effects in the dual field theory. An explicit example
is the five-dimensional models deformed by a uniform chemical potential. Although translational symmetry is broken in these models, a Bianchi VII${}_{0}$ subgroup is still preserved, giving the dual field theory a helical structure Nakamura:2009tf ; Donos:2011ff . These models are technically much easier to deal with, since the homogeneity of the system at constant $r$ slices leads to ODEs rather than PDEs Donos:2011ff ; Donos:2012wi 222The story is different in four dimensions, see for example Donos:2013wia .. Ground states of these models, in which the spatial modulation persists deep in the IR, are constructed numerically in Donos:2011ff ; Donos:2012js .
Besides the above approaches, an analytical back-reacted crystalline geometry was recently constructed in Bao:2013fda . The gravity dual of this model is an AdS${}_{2}\times\mathbb{R}^{2}$ supported by a magnetic field, which breaks translational symmetry. The vortex lattice is constructed via the instability of a probe charged scalar field coupled to the magnetic field. The important distinction between incorporating the lattice by means of a periodic chemical potential Horowitz:2012ky ; Horowitz:2012gs and this solution is their behavior in the IR. While in the former case the background charge carriers screen the spatially modulated chemical potential in the IR, in the latter case, since a magnetic field cannot be screened, the effect of the lattice can persist deep in the IR. This solution has also been generalized to gravity duals with Lifshitz and/or hyperscaling violation exponents Bao:2013ixa . It is worth noting that this class of solutions is based on an elegant vortex lattice solution constructed in Maeda:2009vf ; Albash:2008eh 333See also Bu:2012mq where the back-reaction of such a lattice on the gauge field is studied..
In this paper we consider the same background as that in Bao:2013fda , though in the present case the background is probed by a charged Dirac fermion. We construct a fermionic vortex lattice by means of the lowest Landau level (LLL) solutions. Fermionic LLL states are spin polarized, and their holographic aspects have been previously discussed in Blake:2012tp ; Bolognesi:2012pi . In the present work we are interested in the spontaneous formation of a crystalline geometry sourced by these LLL states in the gauge sector. Thus we will have to analytically solve the corresponding coupled PDEs for the metric and the gauge field to the lowest non-trivial order. We show that the back-reaction of the fermionic lattice leads to an emergent electric field and thus an effective charge density. In a specific range of parameters we show that the electric field can reach the UV boundary. The situation is different from Bao:2013fda , where a lattice structure due to a charged scalar condensate only corrects the background magnetic field.
The rest of this paper is organized in the following way. Section 2 is devoted to the basic setup, including the gravitational background, the Dirac action and its possible boundary terms. In Section 3 we consider a charged fermion on an AdS${}_{2}\times\mathbb{R}^{2}$ background, where we construct the fermionic lattice from the LLL solution. In the rest of that section we study the back-reaction on the metric and gauge field sourced by these fermions. In the last section we discuss our results and directions for further investigation of the model.
2 Basic set-up
In this section we describe the basic ingredients for constructing a fermionic lattice. In Section (2.1) we review the gravitational background. Since dealing with fermions in curved space-time requires special care, we review certain basic properties of fermions in curved space-times in subsection (2.2). In particular, in order to provide a full description of the dynamics, one must add suitable boundary terms to the Dirac action Henneaux:1998ch ; Iqbal:2009fd . We will discuss different boundary conditions which lead to a well-defined variational principle and also to lattice formation.
2.1 Gravitational background
Consider the following action which may support a magnetic AdS${}_{2}\times\mathbb{R}^{2}$ solution
$$\displaystyle S_{1}=\int{d^{4}x\sqrt{-g}\left(R-\frac{1}{4}F_{\mu\nu}F^{\mu\nu%
}-2\Lambda\right)}.$$
(1)
The corresponding metric and gauge field are
$$\displaystyle ds^{2}=L^{2}\left(-\frac{dt^{2}}{r^{2}}+\frac{dr^{2}}{r^{2}}+dx^%
{2}+dy^{2}\right),\hskip 28.452756ptF=Bdx\wedge dy,$$
(2)
where $L^{-2}=-2\Lambda>0$ and $r\to 0$ is the UV boundary. The equations of motion fix $B$ in terms of the AdS radius as
$$\displaystyle B=\sqrt{2}L.$$
So this solution has one free parameter, which can be considered as the magnetic field.
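As a brief consistency sketch (not spelled out in the text): with $\Lambda=-1/(2L^{2})$, $R_{xx}=0$ on the flat $\mathbb{R}^{2}$ factor, scalar curvature $R=-2/L^{2}$ from the AdS${}_{2}$ factor, and $F_{xy}=B$ so that $F_{x\alpha}F_{x}{}^{\alpha}=B^{2}/L^{2}$ and $F^{2}=2B^{2}/L^{4}$, the $xx$ component of the Einstein equations following from (1) reads

```latex
% xx Einstein equation:
%   R_xx - (1/2) g_xx R + Lambda g_xx
%     = (1/2) ( F_{x alpha} F_x^alpha - (1/4) g_xx F^2 )
\begin{align*}
0-\frac{1}{2}\,L^{2}\Bigl(-\frac{2}{L^{2}}\Bigr)-\frac{L^{2}}{2L^{2}}
  &=\frac{1}{2}
  \qquad\text{(geometric side)},\\
\frac{1}{2}\Bigl(\frac{B^{2}}{L^{2}}
  -\frac{L^{2}}{4}\cdot\frac{2B^{2}}{L^{4}}\Bigr)
  &=\frac{B^{2}}{4L^{2}}
  \qquad\text{(Maxwell side)},
\end{align*}
```

and equating the two sides yields $B^{2}=2L^{2}$, i.e. $B=\sqrt{2}L$.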
It is worth noting that this solution emerges as the near-horizon limit of magnetically charged extremal AdS-RN black-branes. The holographic dual is an emergent IR CFT which describes the semi-local quantum liquid phase that plays a key role in explaining non-Fermi liquid behaviours and quantum phase transitions (see Iqbal:2011ae for details).
2.2 Dirac action and boundary terms
In order to study the vortex lattice solution with a charged fermionic probe, we add the following Dirac action to the back-ground (1)
$$\displaystyle S_{2}=\int{d^{4}x\sqrt{-g}\,i\,\overline{\Psi}\left(\frac{1}{2}(%
\Gamma^{\mu}\overrightarrow{D}_{\mu}-\overleftarrow{D}_{\mu}\Gamma^{\mu})-m%
\right)\Psi}.$$
(3)
Here $\Gamma^{\mu}D_{\mu}\equiv e_{a}^{\mu}\Gamma^{a}\left(\partial_{\mu}+\frac{1}{8}\omega_{ab,\mu}[\Gamma^{a},\Gamma^{b}]+iqA_{\mu}\right)$ is the covariant derivative, where $\omega_{ab,\mu}$ is the spin connection and the vierbein $e_{a}^{\mu}$ translates between space-time indices $\mu,\nu$ and tangent-space indices $a,b$. The gamma matrices carry tangent-space indices and obey the Clifford algebra $\{\Gamma^{a},\Gamma^{b}\}=2\eta^{ab}$. We define the chiral gamma matrix as $\Gamma^{5}=i\Gamma^{r}\Gamma^{t}\Gamma^{x}\Gamma^{y}$, and conjugate spinors are defined by $\overline{\Psi}=\Psi^{\dagger}\Gamma^{t}$. Here $q$ denotes the electric charge of the fermion, which we will set to unity without loss of generality.
The magnetic field forces us to decompose the Dirac spinor into eigenvectors of the $i\Gamma^{x}\Gamma^{y}$ operator. These eigenvectors also correspond to spin-up and spin-down states. Thus we use the projection operators
$$\displaystyle\Psi_{\pm}=P_{\pm}\Psi,\hskip 28.452756ptP_{\pm}=\frac{1}{2}\left%
(1\pm i\Gamma^{x}\Gamma^{y}\right).$$
(4)
A suitable basis which we choose for the Dirac matrices is
$$\displaystyle\Gamma^{x}=\left(\begin{array}[]{cc}\sigma^{2}&0\\
0&-\sigma^{2}\end{array}\right),\Gamma^{y}=\left(\begin{array}[]{cc}\sigma^{1}%
&0\\
0&\sigma^{1}\end{array}\right),\Gamma^{t}=\left(\begin{array}[]{cc}-i\sigma^{3%
}&0\\
0&-i\sigma^{3}\end{array}\right),\Gamma^{r}=\left(\begin{array}[]{cc}0&-i%
\sigma^{2}\\
i\sigma^{2}&0\end{array}\right),$$
(5)
which implies the form of the spin-down and spin-up spinors as
$$\displaystyle\Psi^{-}=\left(\begin{array}[]{c}0\\
{\Psi_{1}^{-}}\\
{\Psi_{2}^{-}}\\
0\end{array}\right)\;\;\;,\;\;\;\Psi^{+}=\left(\begin{array}[]{c}{\Psi_{1}^{+}%
}\\
0\\
0\\
{\Psi_{2}^{+}}\end{array}\right).$$
(6)
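As a side check not performed in the paper, the basis (5) and the projectors (4) can be verified numerically: the matrices satisfy the Clifford algebra with $\eta=\mathrm{diag}(-1,+1,+1,+1)$, and $P_{\pm}$ are complementary orthogonal projectors whose images have exactly the component structure displayed in (6).

```python
import numpy as np

# Pauli matrices and the Dirac basis of (5); tangent-space labels t, r, x, y
# with mostly-plus signature eta = diag(-1, +1, +1, +1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

Gx = np.block([[s2, Z], [Z, -s2]])
Gy = np.block([[s1, Z], [Z, s1]])
Gt = np.block([[-1j * s3, Z], [Z, -1j * s3]])
Gr = np.block([[Z, -1j * s2], [1j * s2, Z]])

gammas = {'t': Gt, 'r': Gr, 'x': Gx, 'y': Gy}
eta = {'t': -1, 'r': 1, 'x': 1, 'y': 1}
I4 = np.eye(4)

# Clifford algebra {Gamma^a, Gamma^b} = 2 eta^{ab}
for a, Ga in gammas.items():
    for b, Gb in gammas.items():
        anti = Ga @ Gb + Gb @ Ga
        target = 2 * eta[a] * I4 if a == b else np.zeros((4, 4))
        assert np.allclose(anti, target)

# P_pm of (4) are complementary orthogonal projectors picking out the
# spin-up/spin-down components displayed in (6)
Pp = 0.5 * (I4 + 1j * Gx @ Gy)
Pm = 0.5 * (I4 - 1j * Gx @ Gy)
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, np.zeros((4, 4)))
assert np.allclose(np.diag(Pp), [1, 0, 0, 1])  # selects Psi_1^+, Psi_2^+
```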
Since we are interested in finding a normalizable solution for the fermion in the entire bulk geometry, we have to terminate the geometry in the IR in order to obtain a non-trivial solution. This can be done by considering a black-brane horizon or a hard wall. In what follows, we impose a hard wall: a wall that abruptly cuts off the geometry at some finite $r=r_{0}$. As mentioned before, imposing boundary conditions for fermions is not a trivial task. The rest of this section is devoted to these subtleties.
Standard and alternative boundary conditions for fermions can be imposed by
$$\displaystyle(1\mp\Gamma^{r})\Psi(r\to 0)=0.$$
(7)
In order to have a well-defined variational principle with this sort of boundary conditions, we must add a boundary term to action (3) as follows
$$\displaystyle S_{\mathrm{bdy}}^{\mathrm{UV}}=\pm\frac{i}{2}\int_{r=0}{d^{3}x%
\sqrt{-h}\,\overline{\Psi}\Psi},$$
(8)
where the upper and lower signs refer to standard and alternative quantizations and $h$ is the determinant of the induced boundary metric, $h=gg^{rr}$. It is important to notice that the alternative quantization is allowed in a specific range of the spinor mass. For the LLL solutions which will be discussed in Sec. (3.2) this range is $0<mL<1/2$.
Since we are interested in spontaneous formation of the lattice, we will turn off the source of the fermionic field. In order to read the source in our basis, we have to consider the variation of the full Dirac action, which leads to
$$\displaystyle\delta S_{2}=\text{bulk term}$$
$$\displaystyle+$$
$$\displaystyle\frac{i}{2}\int_{r=0}{d^{3}x\sqrt{-h}\left(\delta\overline{\Psi}%
\Gamma^{r}\Psi-\overline{\Psi}\Gamma^{r}\delta\Psi\right)}$$
(9)
$$\displaystyle-$$
$$\displaystyle\frac{i}{2}\int_{r=r_{0}}{d^{3}x\sqrt{-h}\left(\delta\overline{%
\Psi}\Gamma^{r}\Psi-\overline{\Psi}\Gamma^{r}\delta\Psi\right)}.$$
First we consider the UV boundary term for the standard quantization, which after variation becomes
$$\displaystyle\delta S_{\mathrm{bdy}}^{\mathrm{UV}}=\frac{i}{2}\int_{r=0}{d^{3}%
x\sqrt{-h}\left(\delta\overline{\Psi}\Psi+\overline{\Psi}\delta\Psi\right)}.$$
(10)
Thus the variation of the full action at the UV boundary reads
$$\displaystyle{\delta S_{2}}\Big{|}_{r=0}+\delta S_{\mathrm{bdy}}^{\mathrm{UV}}%
=\frac{1}{2}\int_{r=0}{d^{3}x\sqrt{-h}\left(\delta\xi_{+}^{\dagger}\xi_{-}+\xi%
_{-}^{\dagger}\delta\xi_{+}-\delta\chi_{-}^{\dagger}\chi_{+}-\chi_{+}^{\dagger%
}\delta\chi_{-}\right)},$$
(11)
where $\xi_{\pm}=\Psi_{1}^{+}\pm\Psi_{2}^{+}$ and $\chi_{\pm}=\Psi_{1}^{-}\pm\Psi_{2}^{-}$. Thus the UV boundary condition for our choice becomes
$$\displaystyle\xi_{+}=0,\hskip 28.452756pt\chi_{-}=0.$$
(12)
For the alternative quantization these combinations become $\xi_{-}=0$ and $\chi_{+}=0$.
For the boundary condition at the IR boundary, we follow a procedure similar to Bolognesi:2012pi and add the following boundary term to the action (3)
$$\displaystyle S_{\mathrm{bdy}}^{\mathrm{IR}}=-\frac{i}{2}\int_{r=r_{0}}{d^{3}x%
\sqrt{-h}\,\overline{\Psi}e^{i\left(\theta-\frac{\pi}{2}\right)\Gamma^{5}}\Psi}.$$
(13)
So that
$$\displaystyle{\delta S_{2}}\Big{|}_{r=r_{0}}+\delta S_{\mathrm{bdy}}^{\mathrm{%
IR}}=-\int_{r=r_{0}}{d^{3}x\sqrt{-h}\left(\delta\tilde{\xi}_{-}^{\dagger}%
\tilde{\xi}_{+}+\tilde{\xi}_{+}^{\dagger}\delta\tilde{\xi}_{-}+\delta\tilde{%
\chi}_{+}^{\dagger}\tilde{\chi}_{-}+\tilde{\chi}_{-}^{\dagger}\delta\tilde{%
\chi}_{+}\right)},$$
(14)
where
$$\displaystyle\left(\begin{array}[]{c}\tilde{\xi}_{+}\\
\tilde{\xi}_{-}\end{array}\right)=\left(\begin{array}[]{cc}\cos\frac{\theta}{2%
}&\sin\frac{\theta}{2}\\
\sin\frac{\theta}{2}&-\cos\frac{\theta}{2}\end{array}\right)\left(\begin{array%
}[]{c}\Psi^{+}_{1}\\
\Psi^{+}_{2}\end{array}\right),\hskip 14.226378pt\left(\begin{array}[]{c}%
\tilde{\chi}_{+}\\
\tilde{\chi}_{-}\end{array}\right)=\left(\begin{array}[]{cc}-\cos\frac{\theta}%
{2}&\sin\frac{\theta}{2}\\
\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{array}\right)\left(\begin{array}%
[]{c}\Psi^{-}_{1}\\
\Psi^{-}_{2}\end{array}\right).$$
(15)
Therefore a well-defined variational principle is obtained on the hard wall by choosing
$$\displaystyle\tilde{\xi}_{-}=0,\hskip 28.452756pt\tilde{\chi}_{+}=0.$$
(16)
We will show that these two boundary conditions at the UV and IR, (12) and (16), lead to a unique normalizable solution for the Dirac hair. For the LLL solution discussed in subsection (3.2), the condition (16) is satisfied by choosing $\theta=\pi/2$ for the standard and $\theta=-\pi/2$ for the alternative quantization.
Note that the IR boundary condition we have used on the hard wall is completely different from the one used in Bao:2013fda . This is because imposing Dirichlet or Neumann boundary conditions on the normalizable mode of a scalar field at the IR boundary just yields the trivial AdS${}_{2}\times\mathbb{R}^{2}$ solution. Thus the authors of Bao:2013fda imposed a Randall-Sundrum-like (see Randall:1999ee ) boundary condition on a hard wall, which supports a non-trivial profile for the scalar field (we thank Ning Bao for bringing our attention to this point). The prescription is to add a mirror image of the space-time on the other side of the wall and glue the two together at the IR boundary. The set-up thus has two UV boundaries, and each field also has a mirror image; it reduces to the desired space-time after imposing a Z${}_{2}$ symmetry.
This implies a discontinuity in the first derivative of the fields at the wall, although the fields themselves are continuous there.
In our set-up, as we have seen, it is not necessary to impose a mirror boundary condition to obtain a non-trivial fermionic profile, though it is still possible to do so. Imposing such a boundary condition for fermions is again accompanied by some complications (for some early ideas see Flachi:2001bj ): one must consider spinors that are representations of the Z${}_{2}$ group. Imposing Z${}_{2}$ invariance on the Dirac action (3) leads to
$$\displaystyle\Psi(-r,x_{a})=\pm\Gamma^{r}\Psi(r,x_{a}),$$
(17)
which is the suitable boundary condition in a mirror geometry for a fermionic field.
3 The crystalline geometry
In this section we find the IR instability due to a fermionic probe, which leads to a crystalline ground state. We will see that by changing the parameters, the $\Psi=0$ solution can become degenerate with a vortex lattice solution. The onset of the instability, referred to as the critical point, is identified by the existence of a normalizable solution for $\Psi$ that satisfies both the UV and IR boundary conditions. After finding the desired solution for a Dirac hair Cubrovic:2010bf , we will solve the full set of equations of motion, including the back-reaction of the lattice on the gauge sector.
The equations of motion are
$$\displaystyle(\Gamma^{\mu}D_{\mu}-m)\Psi$$
$$\displaystyle=$$
$$\displaystyle 0$$
(18)
$$\displaystyle\frac{1}{\sqrt{-g}}\partial_{\mu}\left({\sqrt{-g}F^{\mu\nu}}\right)$$
$$\displaystyle=$$
$$\displaystyle J^{\nu}_{\Psi}$$
$$\displaystyle G_{\mu\nu}+\Lambda g_{\mu\nu}$$
$$\displaystyle=$$
$$\displaystyle T^{A}_{\mu\nu}+T^{\Psi}_{\mu\nu},$$
where
$$\displaystyle J^{\nu}_{\Psi}$$
$$\displaystyle=$$
$$\displaystyle\overline{\Psi}\Gamma^{\nu}\Psi$$
(19)
$$\displaystyle T^{A}_{\mu\nu}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}F_{\mu\lambda}F_{\nu}^{\lambda}-\frac{1}{8}g_{\mu\nu}F%
^{2}$$
$$\displaystyle T^{\Psi}_{\mu\nu}$$
$$\displaystyle=$$
$$\displaystyle-\frac{i}{8}\left\{\overline{\Psi}e_{a\mu}\left(\partial_{\nu}+%
\frac{1}{4}\omega_{bc,\nu}\Gamma^{bc}+iA_{\nu}\right)\Gamma^{a}\Psi+\text{h.c.%
}\right\}+(\mu\leftrightarrow\nu).$$
In order to consider the back-reaction of the fermionic lattice on the background (2), we consider a perturbative expansion around the critical point. At such a point the value of the fermionic field is zero, so one can expand
$$\displaystyle\Psi(r,x,y)=\epsilon\Psi_{(1)}(r,x,y)+\epsilon^{3}\Psi_{(3)}(r,x,%
y)+\cdots.$$
(20)
The expansion parameter $\epsilon$ is the distance away from the critical point in parameter space. We are interested in the back-reaction of the LLL solutions (Sec. 3.2) on the gauge sector. A simple analysis shows that the only non-trivial sources at order $\mathcal{O}(\epsilon^{2})$ are $T^{\Psi}_{tx}$, $T^{\Psi}_{ty}$, and $J_{\Psi}^{t}$. Thus we will use the following ansatz for the back-reacted metric and gauge field
$$\displaystyle ds^{2}$$
$$\displaystyle=$$
$$\displaystyle L^{2}\left[-\frac{dt^{2}}{r^{2}}+\frac{dr^{2}}{r^{2}}+\epsilon^{%
2}\Big{(}a(r,x,y)dtdx+b(r,x,y)dtdy\Big{)}+dx^{2}+dy^{2}\right],$$
$$\displaystyle A$$
$$\displaystyle=$$
$$\displaystyle Bydx+\epsilon^{2}a_{2}^{t}(r,x,y)dt.$$
(21)
This shows that the back-reaction of the fermionic lattice on the gauge sector at leading order leads to an effective charge density. The situation is different from Bao:2013fda , where a lattice structure due to a scalar condensate merely corrects the background magnetic field.
3.1 Droplet solution
At order $\epsilon$, we can neglect the back-reaction of $\Psi_{(1)}$ on the gauge sector (from here on we omit the subscript $(1)$ in $\Psi_{(1)}$). In this limit, of the equations in (18) only the Dirac equation is relevant. However, when dealing with fermions in a curved space-time, it is often simpler to remove the spin-connection terms by introducing a rescaled fermionic field $\Psi(r,x,y)=(-h)^{-1/4}\,\psi(r,x,y)$. With this rescaling, the Dirac equation on the background (2) takes the following form
$$\displaystyle\left(\Gamma^{r}r\partial_{r}+\Gamma^{x}\left(\partial_{x}+iBy%
\right)+\Gamma^{y}\partial_{y}-mL\right)\psi=0.$$
(22)
By acting with the operator $(\Gamma^{\mu}D_{\mu}+mL)$ on the above equation and performing some gymnastics with the gamma matrices, one finds the second order equation
$$\displaystyle\left[r^{2}\partial_{r}^{2}+\partial_{x}^{2}+\partial_{y}^{2}+r%
\partial_{r}+2iBy\partial_{x}-B^{2}y^{2}-iB\Gamma^{x}\Gamma^{y}-m^{2}L^{2}%
\right]\psi(r,x,y)=0.$$
(23)
We will solve the above equation by separation of variables as $\psi_{\pm}(r,x,y)=\rho(r)g(y)e^{ikx}C_{\pm}$ where $C_{\pm}$ are constant spinors such that $i\Gamma^{x}\Gamma^{y}C_{\pm}=\pm C_{\pm}$. The separated equations become
$$\displaystyle r^{2}\frac{\rho_{n_{\pm}}^{\prime\prime}}{\rho_{n_{\pm}}}+r\frac%
{\rho_{n_{\pm}}^{\prime}}{\rho_{n_{\pm}}}-m^{2}L^{2}=(k+By)^{2}-\frac{g_{n_{%
\pm}}^{\prime\prime}}{g_{n_{\pm}}}\pm B=-\lambda_{n_{\pm}},$$
(24)
where $\lambda_{n_{\pm}}$ are the eigenvalues from the separation of variables. To find the “basic droplet” solution, it is enough to consider the equation for $g(y)$; setting $Y=\sqrt{B}(y+\frac{k}{B})$, one finds
$$\displaystyle g^{\prime\prime}_{n_{\pm}}(Y)-g_{n_{\pm}}(Y)\left(Y^{2}+\frac{%
\lambda_{n_{\pm}}}{B}\pm 1\right)=0.$$
A general solution of the above equation is the parabolic cylinder function, though if one demands a normalizable solution as $y\rightarrow\infty$, it becomes the familiar Hermite function (assuming $B>0$)
$$\displaystyle g_{n_{\pm}}(Y)\sim e^{\frac{-Y^{2}}{2}}H_{n_{\pm}}(Y)$$
where $\lambda_{n_{\pm}}=-2B(n_{\pm}+\frac{1}{2}\pm\frac{1}{2})$. The situation is the same as the eigenvalue problem of a quantum harmonic oscillator and the corresponding Landau levels, but here for a fermionic field. Various aspects of these solutions have been discussed previously, in a related but distinct context, in the series of papers Albash:2009wz ; Albash:2010yr ; Gubankova:2010rc ; Bolognesi:2011un ; Bolognesi:2012pi ; Blake:2012tp .
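This harmonic-oscillator structure is easy to verify symbolically. The sketch below (sympy; $B$ kept as a positive symbol) checks that $g_{n_{\pm}}(Y)=e^{-Y^{2}/2}H_{n_{\pm}}(Y)$ solves the separated equation with $\lambda_{n_{\pm}}=-2B(n_{\pm}+\frac{1}{2}\pm\frac{1}{2})$:

```python
import sympy as sp

Y, B = sp.symbols('Y B', positive=True)

def ode_residual(n, s):
    """Residual of g'' - g*(Y^2 + lam/B + s) for g = exp(-Y^2/2) H_n(Y),
    with lam = -2B(n + 1/2 + s/2) and s = ±1 the i Gamma^x Gamma^y eigenvalue."""
    lam = -2*B*(n + sp.Rational(1, 2) + sp.Rational(s, 2))
    g = sp.exp(-Y**2/2) * sp.hermite(n, Y)
    return sp.simplify(sp.diff(g, Y, 2) - g*(Y**2 + lam/B + s))

# the residual vanishes identically for the first few Landau levels and both spins
assert all(ode_residual(n, s) == 0 for n in range(4) for s in (1, -1))
```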
Now we solve the radial part of equation (24) for $\rho(r)$. In AdS${}_{2}\times\mathbb{R}^{2}$ it has the power-law solution
$$\displaystyle\rho_{n_{\pm}}(r)=c^{\pm}r^{\alpha_{\pm}}+d^{\pm}r^{-\alpha_{\pm}%
},\hskip 28.452756pt\alpha_{\pm}=\sqrt{m^{2}L^{2}-\lambda_{n_{\pm}}}.$$
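A quick symbolic check of this radial solution (sympy sketch; $\alpha^{2}=m^{2}L^{2}-\lambda_{n_{\pm}}$ as defined above):

```python
import sympy as sp

r, alpha, c, d = sp.symbols('r alpha c d', positive=True)

rho = c*r**alpha + d*r**(-alpha)
# radial part of eq. (24): r^2 rho'' + r rho' - alpha^2 rho = 0,
# with alpha^2 = m^2 L^2 - lambda
residual = sp.simplify(r**2*sp.diff(rho, r, 2) + r*sp.diff(rho, r) - alpha**2*rho)
assert residual == 0
```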
So the full solution of the equation (23) becomes
$$\displaystyle\psi(r,x,Y)=e^{ikx}e^{\frac{-Y^{2}}{2}}\left[\left(c^{+}r^{\alpha%
_{+}}+d^{+}r^{-\alpha_{+}}\right)H_{n_{+}}(Y)+\left(c^{-}r^{\alpha_{-}}+d^{-}r%
^{-\alpha_{-}}\right)H_{n_{-}}(Y)\right]$$
where $c^{\pm}$ and $d^{\pm}$ are constant spinors. Since the Dirac equation (22) is first order, $c^{\pm}$ and $d^{\pm}$ are not independent: the desired solutions are those among the above that also satisfy the original first-order equation (22). Using the recursion relations between Hermite functions, one finds that the existence of a non-trivial solution requires $\alpha_{+}=\alpha_{-}\equiv\alpha$ and thus $\lambda_{n_{+}}=\lambda_{n_{-}}\equiv\lambda$. This leads to
$$\displaystyle n_{-}=n_{+}+1\equiv n.$$
Using these constraints, the relations between the constant spinors (for $n\neq 0$) become
$$\displaystyle{c_{1}^{+}}=\frac{1}{2n}\left(\nu c_{1}^{-}+\sqrt{\nu^{2}+2n}c_{2%
}^{-}\right),\hskip 14.226378pt{c_{2}^{+}}=\frac{1}{2n}\left(\nu c_{2}^{-}+%
\sqrt{\nu^{2}+2n}c_{1}^{-}\right)$$
$$\displaystyle{d_{1}^{+}}=\frac{1}{2n}\left(\nu d_{1}^{-}-\sqrt{\nu^{2}+2n}d_{2%
}^{-}\right),\hskip 14.226378pt{d_{2}^{+}}=\frac{1}{2n}\left(\nu d_{2}^{-}-%
\sqrt{\nu^{2}+2n}d_{1}^{-}\right)$$
(25)
where $\nu^{2}=\frac{m^{2}L^{2}}{B}$ and $\alpha=\sqrt{m^{2}L^{2}+2nB}$. Thus the physical solution can be written in terms of Hermite polynomials as follows
$$\displaystyle\Psi(r,x,Y)=r^{\alpha+\frac{1}{2}}e^{ikx}e^{\frac{-Y^{2}}{2}}%
\left[\left(c^{-}+d^{-}r^{-2\alpha}\right)H_{n}(Y)+\left(c^{+}+d^{+}r^{-2%
\alpha}\right)H_{n-1}(Y)\right].$$
(26)
As we have mentioned earlier, in order to have a crystalline ground state, we must have a normalizable solution for $\Psi$ that satisfies the IR boundary conditions. The above solution has two different radial modes, one of which can diverge near the boundary at $r=0$, depending on the parameters. In the case $\alpha>\frac{1}{2}$ only the standard quantization is available, while for $\alpha<\frac{1}{2}$ the alternative quantization is also possible.
3.2 Fermionic vortex lattice
As mentioned in Maeda:2009vf , to obtain the vortex lattice structure from the single droplet solution, it is enough to consider the $n=0$ level. It is evident from equation (3.1) that in this case the general solution (26) cannot be used directly: $H_{-1}$, which would be the eigenfunction of the spin-up fermion, is not well-defined, and one must set $c^{+}=d^{+}\equiv 0$. After some simple algebra one finds
$$\displaystyle\Psi_{0}(r,x,y)=r^{\frac{1}{2}}e^{ikx}\psi_{0}(y;k)\left(\begin{%
array}[]{c}0\\
c_{0}r^{mL}+d_{0}r^{-mL}\\
c_{0}r^{mL}-d_{0}r^{-mL}\\
0\end{array}\right),\hskip 28.452756pt\psi_{0}(y;k)=e^{-\frac{B}{2}(y+\frac{k}%
{B})^{2}},$$
(27)
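As a cross-check, the sketch below (sympy, with the Dirac matrices of eq. (5); the overall $r^{1/2}$ of (27) is stripped, since (22) governs the rescaled spinor $\psi$) verifies that this spin-down lowest-Landau-level profile solves the Dirac equation (22) exactly:

```python
import sympy as sp

r, x, y, k, B, m, L, c0, d0 = sp.symbols('r x y k B m L c0 d0', positive=True)
I = sp.I
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -I], [I, 0]])
Z = sp.zeros(2, 2)

Gx = sp.diag(s2, -s2)                                   # eq. (5)
Gy = sp.diag(s1, s1)
Gr = sp.Matrix(sp.BlockMatrix([[Z, -I*s2], [I*s2, Z]]))

# spin-down LLL profile of eq. (27), without the (-h)^{-1/4} ~ r^{1/2} rescaling
psi0 = sp.exp(-B/2*(y + k/B)**2)
u, w = c0*r**(m*L), d0*r**(-m*L)
psi = sp.exp(I*k*x)*psi0*sp.Matrix([0, u + w, u - w, 0])

# Dirac equation (22): (Gr r d_r + Gx (d_x + iBy) + Gy d_y - mL) psi = 0
dirac = (Gr*r*sp.diff(psi, r) + Gx*(sp.diff(psi, x) + I*B*y*psi)
         + Gy*sp.diff(psi, y) - m*L*psi)
assert dirac.applyfunc(sp.simplify) == sp.zeros(4, 1)
```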
where the Hermite function is normalized such that $H_{0}=1$. Thus the lowest Landau level is spin polarized, and its fermionic degeneracy is half that of the higher levels. The vortex lattice solution can be obtained by an appropriate superposition of the droplet solutions
$$\displaystyle\Psi_{0}^{\mathrm{lat}}(x,y)=\sum\limits_{l=-\infty}^{\infty}c_{l%
}e^{ik_{l}x}\psi_{0}(y;k_{l})$$
(28)
where
$$\displaystyle c_{l}\equiv e^{-i\pi\frac{v_{2}}{v_{1}^{2}}l^{2}},\hskip 28.4527%
56ptk_{l}=\frac{2\pi l}{v_{1}}\sqrt{B}$$
(29)
for arbitrary $v_{1}$ and $v_{2}$. In terms of the elliptic theta function $\vartheta_{3}$ defined by
$$\displaystyle\vartheta_{3}(v,\tau)\equiv\sum\limits_{l=-\infty}^{\infty}q^{l^{%
2}}z^{2l},\hskip 28.452756ptq\equiv e^{i\pi\tau},\hskip 28.452756ptz\equiv e^{%
i\pi v}$$
(30)
the equation (28) becomes
$$\displaystyle\Psi_{0}^{\mathrm{lat}}(x,y)=e^{-\frac{By^{2}}{2}}\vartheta_{3}(v%
,\tau)$$
(31)
where
$$\displaystyle v=\frac{\sqrt{B}(x+iy)}{v_{1}},\hskip 28.452756pt\tau=\frac{2\pi
i%
-v_{2}}{v_{1}^{2}}.$$
(32)
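The equivalence of (28) and (31) can be checked numerically. The sketch below (plain Python, assuming the square-lattice values of eq. (36); the theta function is evaluated by truncating the series (30)) compares the droplet sum with the theta-function form:

```python
import cmath
import math

B, v1, v2 = 1.0, math.sqrt(2*math.pi), 0.0   # square-lattice values, eq. (36)

def theta3(v, tau, N=30):
    """Elliptic theta function of eq. (30), truncated at |l| <= N."""
    return sum(cmath.exp(1j*math.pi*tau*l**2 + 2j*math.pi*v*l)
               for l in range(-N, N + 1))

def psi_sum(x, y, N=30):
    """Droplet superposition, eqs. (28)-(29)."""
    total = 0
    for l in range(-N, N + 1):
        kl = 2*math.pi*l/v1*math.sqrt(B)
        cl = cmath.exp(-1j*math.pi*v2/v1**2*l**2)
        total += cl*cmath.exp(1j*kl*x)*math.exp(-B/2*(y + kl/B)**2)
    return total

def psi_theta(x, y):
    """Closed form, eqs. (31)-(32)."""
    v = math.sqrt(B)*(x + 1j*y)/v1
    tau = (2j*math.pi - v2)/v1**2
    return math.exp(-B*y**2/2)*theta3(v, tau)

for (x, y) in [(0.0, 0.0), (0.3, -0.7), (1.1, 0.4)]:
    assert abs(psi_sum(x, y) - psi_theta(x, y)) < 1e-10
```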
The elliptic theta function $\vartheta_{3}$ has two properties which imply the vortex lattice structure.
The first is its pseudo-periodicity
$$\displaystyle\vartheta_{3}(v+1,\tau)=\vartheta_{3}(v,\tau),\hskip 28.452756pt%
\vartheta_{3}(v+\tau,\tau)=e^{-2\pi i(v+\tau/2)}\vartheta_{3}(v,\tau),$$
(33)
thus every function that depends on the modulus of $\vartheta_{3}$ is invariant under translations by the lattice generators
$$\displaystyle\mathbf{b}_{1}=\frac{1}{\sqrt{B}}v_{1}\partial_{x},\hskip 28.4527%
56pt\mathbf{b}_{2}=\frac{1}{\sqrt{B}}\left(\frac{2\pi}{v_{1}}\partial_{y}+%
\frac{v_{2}}{v_{1}}\partial_{x}\right).$$
(34)
With this choice every unit cell contains exactly one quantum of flux, the unit-cell area being $2\pi/B$. Second, $\vartheta_{3}$ vanishes at
$$\displaystyle\mathbf{x}_{m,n}=\left(m+\frac{1}{2}\right)\mathbf{b}_{1}+\left(n%
+\frac{1}{2}\right)\mathbf{b}_{2},\hskip 28.452756ptm,n\in\mathbb{N},$$
(35)
and has a phase winding of $2\pi$ around each such zero; thus one can regard the $\mathbf{x}_{m,n}$ as the vortex cores. By changing the parameters $v_{1}$ and $v_{2}$, one can construct various lattice shapes, such as rectangular, square, and rhombic. In this paper we will only consider the square lattice, which is obtained by setting
$$\displaystyle v_{2}=0\rightarrow c_{l}=1\hskip 28.452756pt\text{and}\hskip 28.%
452756ptv_{1}=\sqrt{2\pi}.$$
(36)
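Both theta-function properties are easy to confirm numerically from the truncated series (30); a small Python sketch ($\tau=i$ corresponds to the square-lattice choice above):

```python
import cmath
import math

def theta3(v, tau, N=30):
    """Truncated series of eq. (30)."""
    return sum(cmath.exp(1j*math.pi*tau*l**2 + 2j*math.pi*v*l)
               for l in range(-N, N + 1))

tau = 1j                 # square lattice: v2 = 0, v1^2 = 2*pi gives tau = i
v = 0.27 - 0.13j         # arbitrary test point

# pseudo-periodicity, eq. (33)
assert abs(theta3(v + 1, tau) - theta3(v, tau)) < 1e-10
lhs = theta3(v + tau, tau)
rhs = cmath.exp(-2j*math.pi*(v + tau/2))*theta3(v, tau)
assert abs(lhs - rhs) < 1e-10

# zero at the center of the unit cell, eq. (35): v = 1/2 + tau/2
assert abs(theta3(0.5 + tau/2, tau)) < 1e-12
```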
Now that we are equipped with the fermionic vortex lattice, constructed from lowest Landau level of a Dirac fermion, we can consider the back-reaction of this lattice structure on the gauge sector.
3.3 Back-reaction on the gauge sector
The back-reaction of the crystalline structure on the metric and the gauge field at order $\mathcal{O}(\epsilon^{2})$ is sourced by the fermionic matter current and energy-momentum tensor. As we mentioned earlier, in order to obtain spontaneous lattice formation, the source must be turned off. This means that for the standard quantization one must set $d_{0}=0$ in the solution (27), where we used equation (12) to identify the source term (the calculation proceeds similarly for the alternative quantization, with the replacements $c_{0}\rightarrow d_{0}$, $m\rightarrow-m$). Dealing with the equations is much simpler if we extract the $r$ scaling of the $\mathcal{O}(\epsilon^{2})$ corrections and solve the equations for the spatial dependence. We assume
$$\displaystyle f_{i}(r,x,y)=r^{2mL}f_{i}(x,y)$$
(37)
where $f_{i}=a,b,a_{2}^{t}$.
At this order, the only non-trivial Maxwell equation is
$$\displaystyle(\partial_{x}^{2}+\partial_{y}^{2})a_{2}^{t}+2mL(1+2mL)a_{2}^{t}+%
B\left(\partial_{x}b-\partial_{y}a\right)=2L^{3}\left|c_{0}\right|^{2}\left|%
\Psi_{0}^{\mathrm{lat}}\right|^{2},$$
(38)
and the non-trivial Einstein equations, coming from $G_{tr}$, $G_{tx}$, and $G_{ty}$, are
$$\displaystyle\partial_{x}a+\partial_{y}b$$
$$\displaystyle=$$
$$\displaystyle 0,$$
$$\displaystyle\partial_{x}^{2}b-\partial_{x}\partial_{y}a+2(2m^{2}L^{2}+mL-1)b-%
\frac{\sqrt{2}}{L}\partial_{x}a_{2}^{t}$$
$$\displaystyle=$$
$$\displaystyle-\frac{iL}{2}|c_{0}|^{2}\left(\Psi_{0}^{\mathrm{lat}}\partial_{y}%
{\Psi_{0}^{\mathrm{lat}}}^{*}-{\Psi_{0}^{\mathrm{lat}}}^{*}\partial_{y}{\Psi_{%
0}^{\mathrm{lat}}}\right),$$
(39)
$$\displaystyle\partial_{y}^{2}a-\partial_{x}\partial_{y}b+2(2m^{2}L^{2}+mL-1)a+%
\frac{\sqrt{2}}{L}\partial_{y}a_{2}^{t}$$
$$\displaystyle=$$
$$\displaystyle-\frac{iL}{2}|c_{0}|^{2}\left(\Psi_{0}^{\mathrm{lat}}\partial_{x}%
{\Psi^{\mathrm{lat}}_{0}}^{*}-{\Psi^{\mathrm{lat}}_{0}}^{*}\partial_{x}{\Psi^{%
\mathrm{lat}}_{0}}-2iBy\left|\Psi_{0}^{\mathrm{lat}}\right|^{2}\right).$$
For the rectangular lattice the solution is periodic in $x,y$ with periodicity $\frac{v_{1}}{\sqrt{B}}$ in the $x$ direction and $\frac{2\pi}{v_{1}\sqrt{B}}$ in the $y$ direction; therefore each of the functions can be expanded as a double Fourier series in $x,y$ (here we set $B=1$, which implies that each unit cell carries a net flux of $2\pi$)
$$\displaystyle f_{i}(x,y)=\sum\limits_{k,l}v_{1}e^{2\pi ik\frac{x}{v_{1}}}e^{-%
ilv_{1}y}e^{-ikl\pi-\frac{k^{2}\pi^{2}}{v_{1}^{2}}-\frac{1}{4}l^{2}v_{1}^{2}}%
\tilde{f_{i}}(k,l).$$
(40)
Using the Poisson summation formula, one can bring the Fourier transform of the source term into the form of the above expansion. This trick reduces the coupled partial differential equations (38) and (3.3) to simple algebraic equations for the coefficients $\tilde{f_{i}}(k,l)$.
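To illustrate the resummation step, the following sketch (plain Python; a generic width $a$ and period $T$, not tied to the lattice values) checks the Poisson summation formula on a periodic array of Gaussians of the type appearing in $\Psi_{0}^{\mathrm{lat}}$:

```python
import cmath
import math

def gaussian_comb(y, a, T, N=40):
    """Direct sum over translated Gaussians: sum_l exp(-a (y + l T)^2)."""
    return sum(math.exp(-a*(y + l*T)**2) for l in range(-N, N + 1))

def gaussian_comb_poisson(y, a, T, N=40):
    """Same sum after Poisson resummation:
    (1/T) sqrt(pi/a) sum_k exp(-pi^2 k^2/(a T^2)) exp(2 pi i k y/T)."""
    s = sum(cmath.exp(-math.pi**2*k**2/(a*T**2) + 2j*math.pi*k*y/T)
            for k in range(-N, N + 1))
    return math.sqrt(math.pi/a)/T*s.real

y, a, T = 0.3, 0.5, 1.7
assert abs(gaussian_comb(y, a, T) - gaussian_comb_poisson(y, a, T)) < 1e-12
```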
Plugging (40) into equations (38) and (3.3), the algebraic equations for the $\tilde{f_{i}}(k,l)$ become
$$\displaystyle\left(\frac{4k^{2}\pi^{2}}{v_{1}^{2}}+l^{2}v_{1}^{2}-\sqrt{2}m-2m%
^{2}\right)\tilde{a}_{2}^{t}-i\left(\frac{2k\pi}{v_{1}}\tilde{b}+lv_{1}\tilde{%
a}\right)$$
$$\displaystyle=$$
$$\displaystyle-\frac{|c_{0}|^{2}}{4}\sqrt{\frac{2}{\pi}}$$
$$\displaystyle 2k\pi\tilde{a}-lv_{1}^{2}\tilde{b}$$
$$\displaystyle=$$
$$\displaystyle 0$$
$$\displaystyle 2k\pi l\tilde{b}+\left(l^{2}v_{1}^{2}+2-\sqrt{2}m-2m^{2}\right)%
\tilde{a}+2ilv_{1}\tilde{a}_{2}^{t}$$
$$\displaystyle=$$
$$\displaystyle\frac{ilv_{1}|c_{0}|^{2}}{4\sqrt{2\pi}}$$
$$\displaystyle 2k\pi l\tilde{a}+\left(\frac{4k^{2}\pi^{2}}{v_{1}^{2}}+2-\sqrt{2%
}m-2m^{2}\right)\tilde{b}+\frac{4ik\pi}{v_{1}}\tilde{a}_{2}^{t}$$
$$\displaystyle=$$
$$\displaystyle\frac{ik|c_{0}|^{2}}{2v_{1}}\sqrt{\frac{\pi}{2}}.$$
(41)
The above equations show that $\tilde{a}$ and $\tilde{b}$ are purely imaginary while $\tilde{a}_{2}^{t}$ is real.
The solutions to these equations for $k=l=0$ are
$$\displaystyle\tilde{a}=0,\hskip 28.452756pt\tilde{b}=0,\hskip 28.452756pt%
\tilde{a}_{2}^{t}=\frac{|c_{0}|^{2}}{2m\left(\sqrt{2}+2m\right)\sqrt{2\pi}}$$
(42)
and in all other cases one finds
$$\displaystyle\tilde{a}$$
$$\displaystyle=$$
$$\displaystyle\frac{ilv_{1}^{3}\left[4k^{2}\pi^{2}+v_{1}^{2}\left(4-\sqrt{2}m-2%
m^{2}+l^{2}v_{1}^{2}\right)\right]}{D}|c_{0}|^{2}$$
(43)
$$\displaystyle\tilde{b}$$
$$\displaystyle=$$
$$\displaystyle\frac{2\pi ikv_{1}\left[4k^{2}\pi^{2}+v_{1}^{2}\left(4-\sqrt{2}m-%
2m^{2}+l^{2}v_{1}^{2}\right)\right]}{D}|c_{0}|^{2},$$
$$\displaystyle\tilde{a}_{2}^{t}$$
$$\displaystyle=$$
$$\displaystyle\frac{-12k^{2}\pi^{2}v_{1}^{2}+v_{1}^{4}\left(-4+2\sqrt{2}m+4m^{2%
}-3l^{2}v_{1}^{2}\right)}{D}|c_{0}|^{2}$$
(44)
where
$$\displaystyle D$$
$$\displaystyle=$$
$$\displaystyle 4\sqrt{2\pi}\Big{[}16k^{4}\pi^{4}+8k^{2}\pi^{2}v_{1}^{2}\left(l^%
{2}v_{1}^{2}-\sqrt{2}m-2m^{2}\right)$$
(45)
$$\displaystyle+$$
$$\displaystyle v_{1}^{4}\left(4\sqrt{2}m^{3}+4m^{4}+l^{4}v_{1}^{4}-2\sqrt{2}m%
\left(1+l^{2}v_{1}^{2}\right)-2m^{2}\left(1+2l^{2}v_{1}^{2}\right)\right)\Big{%
]}.$$
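As a consistency check on the zero mode, one can verify symbolically that (42) solves the first equation of (41) at $k=l=0$, with $\tilde{a}=\tilde{b}=0$; a sympy sketch:

```python
import sympy as sp

m, c0 = sp.symbols('m c0', positive=True)

a2t = c0**2/(2*m*(sp.sqrt(2) + 2*m)*sp.sqrt(2*sp.pi))    # eq. (42)

# first equation of (41) with k = l = 0 and a~ = b~ = 0
lhs = (-sp.sqrt(2)*m - 2*m**2)*a2t
rhs = -c0**2/4*sp.sqrt(2/sp.pi)
assert sp.simplify(lhs - rhs) == 0
```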
3.4 Visualization of the modulated phase
In this subsection we present various plots of the vortex lattice solution. In Figure (1) the fermionic lattice is plotted at order $\mathcal{O}(\epsilon)$ as a function of $(x,y)$. Figure (2) shows the spatial modulation of the temporal component of the gauge field $a_{2}^{t}(x,y)$ and of the metric function $a(x,y)$.
In Figure (3) one can compare the profile of the electric field in the bulk on a constant-$x$ slice as a function of $(r,y)$ for different mass parameters. The physical significance of these two plots lies in the different behavior of the electric field near the UV boundary. It is worth noting that in the case of alternative quantization, the electric field always reaches the UV boundary.
In all plots we take $B=1$, $v_{1}=\sqrt{2\pi}$, and $c_{0}=4$. Since the coefficients in the Fourier decomposition are exponentially suppressed as functions of $k^{2}$ and $l^{2}$, we obtain a good approximation by running $k,l$ from $-5$ to $5$ (i.e. we approximate the series by their first 121 terms).
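A one-line check of the Fourier weights in (40) confirms this suppression (Python; $B=1$, $v_{1}=\sqrt{2\pi}$):

```python
import math

v1 = math.sqrt(2*math.pi)

def weight(k, l):
    # exponential factor multiplying f~_i(k, l) in eq. (40)
    return math.exp(-k**2*math.pi**2/v1**2 - l**2*v1**2/4)

# modes just outside the |k|, |l| <= 5 window are already negligible
assert weight(6, 0) < 1e-20 and weight(0, 6) < 1e-20
```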
4 Discussion
In this paper we have considered a magnetic AdS${}_{2}\times\mathbb{R}^{2}$ background which is abruptly terminated in the IR. The background magnetic field breaks the translation symmetry along the $y$-direction. We considered Dirac fermions at order $\mathcal{O}(\epsilon)$ on this background. Due to the magnetic field, the fermions lie in Landau levels. Considering a specific superposition of the lowest Landau level solutions, which are spin polarized, we have constructed a fermionic vortex lattice. Turning off the source term, we have solved the coupled PDEs for the metric and the gauge field sourced by the fermionic lattice. The back-reacted geometry acquires a crystalline structure. The spontaneously formed crystalline geometry supports an electric field, and thus a finite charge density, at $\mathcal{O}(\epsilon^{2})$. The electric field can reach the boundary for a specific range of parameters.
The lattice formation has several effects on the physics, including the thermodynamic functions. In order to compute corrections to the free energy and other thermodynamic quantities due to the lattice, the role of $r_{0}$ must be explained. In the IR wall geometry, one can think of $r_{0}$ as a proxy for a confinement scale $\Lambda^{-1}$ in the confined phase, or for an inverse temperature $T^{-1}$ in the deconfined phase, which is represented by a horizon at some $r<r_{0}$ Bao:2013fda . The leading corrections to the thermodynamic functions can be deduced from this correspondence.
The free energy in the field theory can be computed from the on-shell value of the bulk action and other observables can be derived from it. A simple dimensional analysis shows that in the standard quantization the free energy, entropy and specific heat densities take corrections as
$$\displaystyle\mathcal{F}$$
$$\displaystyle\sim$$
$$\displaystyle T+\epsilon^{2}T^{-2mL}+\cdots$$
$$\displaystyle\mathcal{S}$$
$$\displaystyle\sim$$
$$\displaystyle 1+\epsilon^{2}T^{-2mL-1}+\cdots$$
$$\displaystyle\mathcal{C}$$
$$\displaystyle\sim$$
$$\displaystyle\epsilon^{2}T^{-2mL-1}+\cdots.$$
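These scalings follow from $\mathcal{S}=-\partial\mathcal{F}/\partial T$ and $\mathcal{C}=T\,\partial\mathcal{S}/\partial T$; a sympy sketch (schematic free energy, overall coefficients ignored):

```python
import sympy as sp

T, eps, m, L = sp.symbols('T epsilon m L', positive=True)

F = T + eps**2*T**(-2*m*L)     # schematic free-energy density
S = -sp.diff(F, T)             # entropy density
C = T*sp.diff(S, T)            # specific heat

dS = sp.simplify(S - S.subs(eps, 0))   # O(eps^2) correction to S
dC = sp.simplify(C - C.subs(eps, 0))   # O(eps^2) correction to C

# both corrections scale as eps^2 * T^(-2mL-1), as quoted
assert not sp.simplify(dS/(eps**2*T**(-2*m*L - 1))).has(T)
assert not sp.simplify(dC/(eps**2*T**(-2*m*L - 1))).has(T)
```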
We must note that the perturbative expansion is valid as long as $\epsilon\ll T^{mL+1/2}$. In terms of the IR cut-off $r_{0}$, this is equivalent to $\epsilon\ll r_{0}^{-mL-1/2}$. Considering $\epsilon$ as the distance from the critical point, we see that decreasing $m$ extends the validity region of the linearised expansion. At low temperatures the above corrections become more important in the case of alternative quantization, where the sign of $m$ is changed, while at high temperatures the converse is true.
It would be interesting to further explore this analytical fermionic vortex lattice in the following directions:
•
It is worthwhile to generalize the fermionic vortex lattice to geometries with Lifshitz and/or hyperscaling-violating exponents, specifically the case of $\eta$-geometries (where $z\to\infty$ and $-\theta/z=\eta$ is held constant).
•
The most important achievement of including lattice effects in the dual field theory was the reconstruction of the Drude peak, and also the reading off of the exponent of the power-law behavior in an intermediate frequency range of the optical conductivity. It would be interesting to compute the current-current correlators to study these features in this model.
•
A more natural setup for constructing a vortex lattice is to consider a black-brane horizon in the IR instead of a hard wall.
•
We have discussed the lattice formation in this paper for standard and alternative quantizations of Dirac fermions. These are not the only possible quantizations. It would be interesting to investigate the effect of other possible quantizations, such as mixed quantization Laia:2011zn in the lattice formation.
•
We have only considered lattice formation due to the LLL solutions, which are spin polarized. Since the excited Landau levels ($n>0$) contain both spin-up and spin-down components, it would be interesting to construct lattice solutions from the excited states; this would probe the role of spin polarization in the lattice solution.
Acknowledgments
We would like to thank D. Tong, M.M. Sheikh-Jabbari, A. Vaezi, N. Bao, D. Allahbakhshi, and A. Naseh for useful discussions and comments. We would also like to thank M. Alishahiha for fruitful discussions, comments, and all of his support during this work. We also acknowledge the use of M. Headrick’s excellent Mathematica package “diffgeo”.
References
(1)
N. Iqbal, H. Liu and M. Mezei,
“Lectures on holographic non-Fermi liquids and quantum phase transitions,”
arXiv:1110.3814 [hep-th].
(2)
S. A. Hartnoll,
“Lectures on holographic methods for condensed matter physics,”
Class. Quant. Grav. 26, 224002 (2009)
[arXiv:0903.3246 [hep-th]].
(3)
S. A. Hartnoll,
“Horizons, holography and condensed matter,”
arXiv:1106.4324 [hep-th].
(4)
S. Kachru, A. Karch and S. Yaida,
“Holographic Lattices, Dimers, and Glasses,”
Phys. Rev. D 81, 026007 (2010)
[arXiv:0909.2639 [hep-th]].
(5)
S. Kachru, A. Karch and S. Yaida,
“Adventures in Holographic Dimer Models,”
New J. Phys. 13, 035004 (2011)
[arXiv:1009.3268 [hep-th]].
(6)
S. Hellerman,
“Lattice gauge theories have gravitational duals,”
hep-th/0207226.
(7)
G. T. Horowitz, J. E. Santos and D. Tong,
“Optical Conductivity with Holographic Lattices,”
JHEP 1207, 168 (2012)
[arXiv:1204.0519 [hep-th]].
(8)
G. T. Horowitz, J. E. Santos and D. Tong,
“Further Evidence for Lattice-Induced Scaling,”
JHEP 1211, 102 (2012)
[arXiv:1209.1098 [hep-th]].
(9)
Y. Ling, C. Niu, J. Wu, Z. Xian and H. Zhang,
JHEP 1307, 045 (2013)
[arXiv:1304.2128 [hep-th]].
(10)
S. Nakamura, H. Ooguri and C. -S. Park,
“Gravity Dual of Spatially Modulated Phase,”
Phys. Rev. D 81, 044018 (2010)
[arXiv:0911.0679 [hep-th]].
(11)
A. Donos and J. P. Gauntlett,
“Holographic helical superconductors,”
JHEP 1112, 091 (2011)
[arXiv:1109.3866 [hep-th]].
(12)
A. Donos and J. P. Gauntlett,
“Black holes dual to helical current phases,”
Phys. Rev. D 86, 064010 (2012)
[arXiv:1204.1734 [hep-th]].
(13)
A. Donos and S. A. Hartnoll,
“Metal-insulator transition in holography,”
arXiv:1212.2998 [hep-th].
(14)
A. Donos,
“Striped phases from holography,”
JHEP 1305, 059 (2013)
[arXiv:1303.7211 [hep-th]].
(15)
N. Bao, S. Harrison, S. Kachru and S. Sachdev,
“Vortex Lattices and Crystalline Geometries,”
arXiv:1303.4390 [hep-th].
(16)
N. Bao and S. Harrison,
“Crystalline Scaling Geometries from Vortex Lattices,”
arXiv:1306.1532 [hep-th].
(17)
M. Henneaux,
“Boundary terms in the AdS / CFT correspondence for spinor fields,”
In *Tbilisi 1998, Mathematical methods in modern theoretical physics* 161-170
[hep-th/9902137].
(18)
N. Iqbal and H. Liu,
“Real-time response in AdS/CFT with application to spinors,”
Fortsch. Phys. 57, 367 (2009)
[arXiv:0903.2596 [hep-th]].
(19)
K. Maeda, M. Natsuume and T. Okamura,
“Vortex lattice for a holographic superconductor,”
Phys. Rev. D 81, 026002 (2010)
[arXiv:0910.4475 [hep-th]].
(20)
T. Albash and C. V. Johnson,
“A Holographic Superconductor in an External Magnetic Field,”
JHEP 0809, 121 (2008)
[arXiv:0804.3466 [hep-th]].
(21)
Y. -Y. Bu, J. Erdmenger, J. P. Shock and M. Strydom,
“Magnetic field induced lattice ground states from holography,”
JHEP 1303, 165 (2013)
[arXiv:1210.6669 [hep-th]].
(22)
M. Blake, S. Bolognesi, D. Tong and K. Wong,
“Holographic Dual of the Lowest Landau Level,”
JHEP 1212, 039 (2012)
[arXiv:1208.5771 [hep-th]].
(23)
S. Bolognesi, J. N. Laia, D. Tong and K. Wong,
“A Gapless Hard Wall: Magnetic Catalysis in Bulk and Boundary,”
JHEP 1207, 162 (2012)
[arXiv:1204.6029 [hep-th]].
(24)
M. Cubrovic, J. Zaanen and K. Schalm,
“Constructing the AdS Dual of a Fermi Liquid: AdS Black Holes with Dirac Hair,”
JHEP 1110, 017 (2011)
[arXiv:1012.5681 [hep-th]].
(25)
L. Randall and R. Sundrum,
“A Large mass hierarchy from a small extra dimension,”
Phys. Rev. Lett. 83, 3370 (1999)
[hep-ph/9905221].
(26)
A. Flachi, I. G. Moss and D. J. Toms,
“Quantized bulk fermions in the Randall-Sundrum brane model,”
Phys. Rev. D 64, 105029 (2001)
[hep-th/0106076].
(27)
T. Albash and C. V. Johnson,
“Holographic Aspects of Fermi Liquids in a Background Magnetic Field,”
J. Phys. A 43, 345405 (2010)
[arXiv:0907.5406 [hep-th]].
(28)
T. Albash and C. V. Johnson,
“Landau Levels, Magnetic Fields and Holographic Fermi Liquids,”
J. Phys. A 43, 345404 (2010)
[arXiv:1001.3700 [hep-th]].
(29)
E. Gubankova, J. Brill, M. Cubrovic, K. Schalm, P. Schijven and J. Zaanen,
“Holographic fermions in external magnetic fields,”
Phys. Rev. D 84, 106003 (2011)
[arXiv:1011.4051 [hep-th]].
(30)
S. Bolognesi and D. Tong,
“Magnetic Catalysis in AdS4,”
Class. Quant. Grav. 29, 194003 (2012)
[arXiv:1110.5902 [hep-th]].
(31)
M. Blake, S. Bolognesi, D. Tong and K. Wong,
“Holographic Dual of the Lowest Landau Level,”
JHEP 1212, 039 (2012)
[arXiv:1208.5771 [hep-th]].
(32)
J. N. Laia and D. Tong,
“A Holographic Flat Band,”
JHEP 1111, 125 (2011)
[arXiv:1108.1381 [hep-th]]. |
Top-quark flavor-changing $tqZ$ couplings and rare $\Delta F=1$ processes
Chuan-Hung Chen
physchen@mail.ncku.edu.tw
Department of Physics, National Cheng-Kung University, Tainan 70101, Taiwan
Takaaki Nomura
nomura@kias.re.kr
School of Physics, KIAS, Seoul 02455, Korea
(December 2, 2020)
Abstract
We model-independently study the impacts of anomalous $tqZ$ couplings ($q=u,c$), which lead to the $t\to qZ$ decays, on low energy flavor physics. It is found that the $tuZ$-coupling effect can significantly affect the rare $K$ and $B$ decays, whereas the $tcZ$-coupling effect is small. Using the ATLAS’s branching ratio (BR) upper bound of $BR(t\to uZ)<1.7\times 10^{-4}$, the influence of the anomalous $tuZ$-coupling on the rare decays can be found as follows: (a) The contribution to the Kaon direct CP violation can be up to $Re(\epsilon^{\prime}/\epsilon)\lesssim 0.8\times 10^{-3}$; (b) $BR(K^{+}\to\pi^{+}\nu\bar{\nu})\lesssim 12\times 10^{-11}$ and $BR(K_{L}\to\pi^{0}\nu\bar{\nu})\lesssim 7.9\times 10^{-11}$; (c) the BR for $K_{S}\to\mu^{+}\mu^{-}$ including the long-distance effect can be enhanced by $11\%$ with respect to the standard model result, and (d) $BR(B_{d}\to\mu^{+}\mu^{-})\lesssim 1.97\times 10^{-10}$. In addition, although $Re(\epsilon^{\prime}/\epsilon)$ cannot be synchronously enhanced with $BR(K_{L}\to\pi^{0}\nu\bar{\nu})$ and $BR(K_{S}\to\mu^{+}\mu^{-})$ in the same region of the CP-violating phase, the values of $Re(\epsilon^{\prime}/\epsilon)$, $BR(K^{+}\to\pi^{+}\nu\bar{\nu})$, and $BR(B_{d}\to\mu^{+}\mu^{-})$ can be simultaneously increased.
KIAS-P18113
I Introduction
Top-quark flavor-changing neutral currents (FCNCs) are extremely suppressed in the standard model (SM) due to the Glashow-Iliopoulos-Maiani (GIM) mechanism Glashow:1970gm . The branching ratios (BRs) for the $t\to q(g,\gamma,Z,h)$ decays with $q=u,c$ in the SM are of order $10^{-12}-10^{-17}$ AguilarSaavedra:2004wm ; Abbas:2015cua , far below the detection limits of the LHC, where the expected sensitivity at the high-luminosity (HL) LHC for an integrated luminosity of 3000 fb${}^{-1}$ at $\sqrt{s}=14$ TeV is in the range $10^{-5}-10^{-4}$ ATLAS:2013hta ; ATLAS:2016 . Thus, top-quark flavor-changing processes serve as good candidates for probing new physics effects. Extensions of the SM that can reach the HL-LHC sensitivity can be found in Abraham:2000kx ; Eilam:2001dh ; AguilarSaavedra:2002kr ; Gaitan:2017tka ; Mandrik:2018gud .
Using the data collected with an integrated luminosity of 36.1 fb${}^{-1}$ at $\sqrt{s}=13$ TeV, ATLAS reported the current strictest upper limits on the BRs for $t\to qZ$ as Aaboud:2018nyl :
$$\displaystyle BR(t\to uZ)$$
$$\displaystyle<1.7\times 10^{-4}\,,$$
$$\displaystyle BR(t\to cZ)$$
$$\displaystyle<2.4\times 10^{-4}\,.$$
(1)
Based on the current upper bounds, we model-independently study the implications of anomalous $tqZ$ couplings in the low energy flavor physics. It is found that the $tqZ$ couplings through the $Z$-penguin diagram can significantly affect the rare decays in $K$ and $B$ systems, such as $\epsilon^{\prime}/\epsilon$, $K\to\pi\nu\bar{\nu}$, $K_{S}\to\mu^{+}\mu^{-}$, and $B_{d}\to\mu^{+}\mu^{-}$. Since the gluon and photon in the top-FCNC decays are on-shell, the contributions from the dipole-operator transition currents are small. In this study we thus focus on the $t\to qZ$ decays, especially the $t\to uZ$ decay.
From a phenomenological perspective, the importance of these rare decays is summarized as follows:
An inconsistency in $\epsilon^{\prime}/\epsilon$ between theoretical calculations and experimental data was recently found in two analyses: (i) The RBC-UKQCD collaboration obtained the lattice-QCD result Blum:2015ywa ; Bai:2015nea :
$$Re(\epsilon^{\prime}/\epsilon)=1.38(5.15)(4.59)\times 10^{-4}\,,$$
(2)
where the numbers in parentheses denote the errors. (ii) Using the large-$N_{c}$ dual QCD approach Buras:1985yx ; Bardeen:1986vp ; Bardeen:1986uz ; Bardeen:1986vz ; Bardeen:1987vg , the authors of Buras:2015xba ; Buras:2015yba obtained:
$$Re(\epsilon^{\prime}/\epsilon)_{\rm SM}=(1.9\pm 4.5)\times 10^{-4}\,.$$
(3)
Both results show that the theoretical calculations exhibit an over $2\sigma$ deviation from the experimental data of $Re(\epsilon^{\prime}/\epsilon)_{\rm exp}=(16.6\pm 2.3)\times 10^{-4}$, measured by NA48 Batley:2002gn and KTeV AlaviHarati:2002ye ; Abouzaid:2010ny . Various extensions of the SM proposed to resolve the anomaly can be found in Buras:2015qea ; Buras:2015yca ; Buras:2015kwd ; Buras:2015jaq ; Tanimoto:2016yfy ; Buras:2016dxz ; Kitahara:2016otd ; Endo:2016aws ; Bobeth:2016llm ; Cirigliano:2016yhc ; Endo:2016tnu ; Bobeth:2017xry ; Crivellin:2017gks ; Bobeth:2017ecx ; Haba:2018byj ; Buras:2018lgu ; Chen:2018ytc ; Chen:2018vog ; Matsuzaki:2018jui ; Haba:2018rzf ; Aebischer:2018rrz ; Aebischer:2018quc ; Aebischer:2018csl ; Chen:2018dfc ; Chen:2018stt . We find that the Kaon direct CP violation arising from the $tuZ$-coupling can be $Re(\epsilon^{\prime}/\epsilon)\lesssim 0.8\times 10^{-3}$ when the bound of $BR(t\to uZ)<1.7\times 10^{-4}$ is satisfied.
Unlike $\epsilon^{\prime}/\epsilon$, which strongly depends on the hadronic matrix elements, the calculations of $K^{+}\to\pi^{+}\nu\bar{\nu}$ and $K_{L}\to\pi^{0}\nu\bar{\nu}$ are theoretically clean, and the SM predictions are Bobeth:2016llm :
$$\displaystyle BR(K^{+}\to\pi^{+}\nu\bar{\nu})$$
$$\displaystyle=(8.5^{+1.0}_{-1.2})\times 10^{-11}\,,$$
(4)
$$\displaystyle BR(K_{L}\to\pi^{0}\nu\bar{\nu})$$
$$\displaystyle=(3.2^{+1.1}_{-0.7})\times 10^{-11}\,,$$
(5)
where the QCD corrections at the next-to-leading order (NLO) Buchalla:1993bv ; Misiak:1999yg ; Buchalla:1998ba and NNLO Gorbahn:2004my ; Buras:2005gr ; Buras:2006gb and the electroweak corrections at the NLO Buchalla:1997kz ; Brod:2008ss ; Brod:2010hi have been calculated. In addition to being sensitive to new physics, $K_{L}\to\pi^{0}\nu\bar{\nu}$ is a CP-violating process, and its BR is a measure of the CP-violation effect. The current experimental situations are $BR(K^{+}\to\pi^{+}\nu\bar{\nu})^{\rm exp}=(17.3^{+11.5}_{-10.5})\times 10^{-11}$ Artamonov:2008qb and $BR(K_{L}\to\pi^{0}\nu\bar{\nu})^{\rm exp}<2.6\times 10^{-8}$ Ahn:2009gb .
The NA62 experiment at CERN is intended to measure the BR for $K^{+}\to\pi^{+}\nu\bar{\nu}$ with a $10\%$ precision Rinella:2014wfa ; Moulson:2016dul , and the KOTO experiment at J-PARC will observe the $K_{L}\to\pi^{0}\nu\bar{\nu}$ decay Komatsubara:2012pn ; Beckford:2017gsf . In addition, the KLEVER experiment at CERN starting in Run-4 could observe the BR of $K_{L}\to\pi^{0}\nu\bar{\nu}$ to $20\%$ precision Moulson:2018mlx .
Recently, NA62 reported its first result using the data taken in 2016 and found one candidate event of $K^{+}\to\pi^{+}\nu\bar{\nu}$, corresponding to an upper bound of $BR(K^{+}\to\pi^{+}\nu\bar{\nu})<14\times 10^{-10}$ at a $95\%$ confidence level (CL) Velghe:2018erh . We will show that the anomalous $tuZ$-coupling can lead to $BR(K^{+}\to\pi^{+}\nu\bar{\nu})\lesssim 12\times 10^{-11}$ and $BR(K_{L}\to\pi^{0}\nu\bar{\nu})\lesssim 7.9\times 10^{-11}$. It can be seen that the NA62, KOTO, and KLEVER experiments can further constrain the $tuZ$-coupling with their designed sensitivities.
Another important CP violating process is $K_{S}\to\mu^{+}\mu^{-}$, for which the SM prediction including the long-distance (LD) and short-distance (SD) effects is given as $BR(K_{S}\to\mu^{+}\mu^{-})=(5.2\pm 1.5)\times 10^{-12}$ Ecker:1991ru ; Isidori:2003ts ; DAmbrosio:2017klp . The current upper limit from LHCb is $BR(K_{S}\to\mu^{+}\mu^{-})<0.8(1.0)\times 10^{-9}$ at a $90\%$ $(95\%)$ CL. It is expected that using the LHC Run-2 data, the LHCb sensitivity can be improved to $[4,\,200]\times 10^{-12}$ with 23 fb${}^{-1}$ and to $[1,\,100]\times 10^{-12}$ with 100 fb${}^{-1}$ Dettori:UKF2017 . Although the $tuZ$-coupling can significantly enhance the SD contribution of $K_{S}\to\mu^{+}\mu^{-}$, due to the LD dominance, the increase of $BR(K_{S}\to\mu^{+}\mu^{-})_{\rm LD+SD}$ can be at most $11\%$.
It has been found that the $tuZ$-coupling-induced $Z$-penguin can significantly enhance the $B_{d}\to\mu^{+}\mu^{-}$ decay, for which the SM prediction is $BR(B_{d}\to\mu^{+}\mu^{-})=(1.06\pm 0.09)\times 10^{-10}$ Bobeth:2013uxa . Combining the full Run I data with 26.3 fb${}^{-1}$ of data at $\sqrt{s}=13$ TeV, ATLAS reported the upper limit $BR(B_{d}\to\mu^{+}\mu^{-})<2.1\times 10^{-10}$ Aaboud:2018mst . In addition, the combined CMS and LHCb result was reported as $BR(B_{d}\to\mu^{+}\mu^{-})=(3.9^{+1.6}_{-1.4})\times 10^{-10}$ CMS:2014xfa . The measured sensitivity is thus close to the SM result. We find that using the current upper limit of $BR(t\to uZ)$, $BR(B_{d}\to\mu^{+}\mu^{-})$ can be enhanced up to $1.97\times 10^{-10}$, which is close to the ATLAS upper bound.
The paper is organized as follows: In Sec. II, we introduce the effective interactions for $t\to qZ$ and derive the relationship between the $tqZ$-coupling and $BR(t\to qZ)$. The $Z$-penguin FCNC processes induced via the anomalous $tqZ$ couplings are given in Sec. III. The influence on $\epsilon^{\prime}/\epsilon$ is shown in the same section. The $tqZ$-coupling contribution to the other rare $K$ and $B$ decays is shown in Sec. IV. A summary is given in Sec. V.
II Anomalous $tqZ$ couplings and their constraints
We write the anomalous $tqZ$ interactions as AguilarSaavedra:2004wm :
$$\displaystyle-{\cal L}_{tqZ}$$
$$\displaystyle=\frac{g}{2c_{W}}\bar{q}\gamma_{\mu}\left(\zeta^{L}_{q}P_{L}+\zeta^{R}_{q}P_{R}\right)tZ^{\mu}+\frac{g}{2c_{W}}\bar{q}\left(\xi^{v}_{q}+\xi^{a}_{q}\gamma_{5}\right)\frac{i\sigma_{\mu\nu}k^{\nu}}{m_{t}}tZ^{\mu}+H.c.\,,$$
(6)
where $g$ is the $SU(2)_{L}$ gauge coupling; $c_{W}=\cos\theta_{W}$, with $\theta_{W}$ the Weinberg angle; $P_{L(R)}=(1\mp\gamma_{5})/2$, and $\zeta^{L(R)}_{q}$ and $\xi^{v(a)}_{q}$ denote the dimensionless effective couplings that represent the new physics effects. In this study, we mainly concentrate on the impacts of the $tqZ$ couplings on low energy flavor physics, in which the rare $K$ and $B$ decays are induced through the penguin diagram. Thus, because of the $m_{K(B)}/m_{t}$ suppression factor, which arises from $k^{\nu}\sim O(m_{K(B)})$, the contributions of the dipole operators in Eq. (6) are negligibly small. Hence, in the following analysis, we ignore the $\xi^{v(a)}_{q}$ effects and only investigate the $\zeta^{L,R}_{q}$ effects. In order to study the influence on the Kaon CP violation, we take $\zeta^{L,R}_{q}$ to be complex parameters, and the new CP violating phases are defined via $\zeta^{\chi}_{q}=|\zeta^{\chi}_{q}|e^{-i\theta^{\chi}_{q}}$ with $\chi=L,R$.
Using the interactions in Eq. (6), we can calculate the BR for the $t\to qZ$ decay. Since our purpose is to examine whether the anomalous $tqZ$-coupling can give sizable contributions to the rare $K$ and $B$ decays when the current upper bound of $BR(t\to qZ)$ is satisfied, we express the parameters $\zeta^{L,R}_{q}$ as a function of $BR(t\to qZ)$:
$$\displaystyle\sqrt{\left|\zeta^{L}_{q}\right|^{2}+\left|\zeta^{R}_{q}\right|^{2}}$$
$$\displaystyle=\left(\frac{BR(t\to qZ)}{C_{tqZ}}\right)^{1/2}\,,$$
$$\displaystyle C_{tqZ}$$
$$\displaystyle=\frac{G_{F}m^{3}_{t}}{16\sqrt{2}\pi\Gamma_{t}}\left(1-\frac{m^{2}_{Z}}{m^{2}_{t}}\right)^{2}\left(1+2\frac{m^{2}_{Z}}{m^{2}_{t}}\right)\,.$$
(7)
For the numerical analysis, the relevant input values are shown in Table 1.
Using the numerical inputs, we obtain $C_{tqZ}\approx 0.40$. When the ATLAS bounds $BR(t\to u(c)Z)<1.7(2.4)\times 10^{-4}$ are applied, the upper limits on $\sqrt{|\zeta^{L}_{u(c)}|^{2}+|\zeta^{R}_{u(c)}|^{2}}$ can be respectively obtained as:
$$\displaystyle\sqrt{|\zeta^{L}_{u}|^{2}+|\zeta^{R}_{u}|^{2}}<0.019\,,$$
$$\displaystyle\sqrt{|\zeta^{L}_{c}|^{2}+|\zeta^{R}_{c}|^{2}}<0.022\,.$$
(8)
Since the measured bounds on the $t\to(u,c)Z$ decays are close to each other, the bounds on $\zeta^{\chi}_{u}$ and $\zeta^{\chi}_{c}$ are very similar. We note that the BR cannot determine the CP phases; therefore, $\theta^{\chi}_{u}$ and $\theta^{\chi}_{c}$ remain free parameters.
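As a quick numerical cross-check, the bounds of Eq. (8) follow from Eq. (7) and the ATLAS limits. The sketch below uses assumed inputs ($G_F$, $m_t$, $m_Z$, and a top width $\Gamma_t\approx 1.41$ GeV) in place of the paper's Table 1, so the value of $C_{tqZ}$ differs somewhat from the quoted $\approx 0.40$; the resulting coupling bounds nevertheless reproduce Eq. (8):

```python
import math

# Assumed inputs (the paper's Table 1 is not reproduced here):
G_F     = 1.1664e-5   # Fermi constant [GeV^-2]
m_t     = 173.1       # top-quark mass [GeV]
m_Z     = 91.19       # Z-boson mass [GeV]
Gamma_t = 1.41        # total top-quark width [GeV] (assumed)

def C_tqZ():
    """Normalization factor of Eq. (7)."""
    r = m_Z**2 / m_t**2
    return (G_F * m_t**3) / (16 * math.sqrt(2) * math.pi * Gamma_t) \
        * (1 - r)**2 * (1 + 2 * r)

def coupling_bound(br_limit):
    """Upper bound on sqrt(|zeta_L|^2 + |zeta_R|^2) for a given BR(t -> qZ)."""
    return math.sqrt(br_limit / C_tqZ())

print(f"C_tqZ ~ {C_tqZ():.2f}")
print(f"tuZ coupling bound: {coupling_bound(1.7e-4):.3f}")  # ~0.019, cf. Eq. (8)
print(f"tcZ coupling bound: {coupling_bound(2.4e-4):.3f}")  # ~0.022, cf. Eq. (8)
```

The bounds depend only weakly on the exact width assumed, since they scale as $\Gamma_t^{1/2}$ through $C_{tqZ}^{-1/2}$.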
III Anomalous $tqZ$ effects on $\epsilon^{\prime}/\epsilon$
In this section, we discuss the $tqZ$-coupling contribution to the Kaon direct CP violation. The associated Feynman diagram is shown in Fig. 1, where $q=u,c$; $q^{\prime}$ and $q^{\prime\prime}$ are down-type quarks, and $f$ denotes any possible fermion. The rare $K$ and $B$ processes involved in this study are thus decays such as $K\to\pi\pi$, $K\to\pi\nu\bar{\nu}$, and $K_{S}(B_{d})\to\ell^{+}\ell^{-}$. Since the contributions to $K_{L}\to\pi\ell^{+}\ell^{-}$ and $B\to\pi\ell^{+}\ell^{-}$ are found to be insignificant, we do not discuss these decays in this work.
Based on the $tqZ$ couplings shown in Eq. (6), the effective Hamiltonian induced by the $Z$-penguin diagram for the $K\to\pi\pi$ decays at $\mu=m_{W}$ can be derived as:
$$\displaystyle{\cal H}_{tqZ}$$
$$\displaystyle=-\frac{G_{F}\lambda_{t}}{\sqrt{2}}\left(y^{Z}_{3}Q_{3}+y^{Z}_{7}Q_{7}+y^{Z}_{9}Q_{9}\right)\,,$$
(9)
where $\lambda_{t}=V^{*}_{ts}V_{td}$; the operators $Q_{3,7,9}$ are the same as the SM operators and are defined as:
$$\displaystyle Q_{3}$$
$$\displaystyle=(\bar{s}d)_{V-A}\sum_{q^{\prime}}(\bar{q}^{\prime}q^{\prime})_{V-A}\,,$$
$$\displaystyle Q_{7}$$
$$\displaystyle=\frac{3}{2}(\bar{s}d)_{V-A}\sum_{q^{\prime}}e_{q^{\prime}}(\bar{q}^{\prime}q^{\prime})_{V+A}\,,$$
$$\displaystyle Q_{9}$$
$$\displaystyle=\frac{3}{2}(\bar{s}d)_{V-A}\sum_{q^{\prime}}e_{q^{\prime}}(\bar{q}^{\prime}q^{\prime})_{V-A}\,,$$
(10)
with $e_{q^{\prime}}$ being the electric charge of $q^{\prime}$-quark, and the effective Wilson coefficients are expressed as:
$$\displaystyle y^{Z}_{3}$$
$$\displaystyle=-\frac{\alpha}{24\pi s^{2}_{W}}I_{Z}(x_{t})\eta_{Z}\,,~{}y^{Z}_{7}=-\frac{\alpha}{6\pi}\eta_{Z}\,,$$
$$\displaystyle y^{Z}_{9}$$
$$\displaystyle=\left(1-\frac{1}{s^{2}_{W}}\right)y^{Z}_{7}\,,~{}\eta_{Z}=\sum_{q=u,c}\left(\frac{V_{qd}\zeta^{L*}_{q}}{V_{td}}+\frac{V^{*}_{qs}\zeta^{L}_{q}}{V^{*}_{ts}}\right)\,,$$
(11)
with $\alpha=e^{2}/4\pi$, $x_{t}=m^{2}_{t}/m^{2}_{W}$, and $s_{W}=\sin\theta_{W}$. The penguin-loop integral function is given as:
$$I_{Z}(x_{t})=-\frac{1}{4}+\frac{x_{t}\,\ln x_{t}}{2(x_{t}-1)}\approx 0.693\,.$$
(12)
Since the $W$-boson couples only to left-handed quarks, the right-handed couplings $\zeta^{R}_{u,c}$ in the diagram must appear together with the factors $m_{u(c)}$ and $m_{t}$, which arise from mass insertions in the quark propagators inside the loop. When we drop the small factors $m_{c,u}/m_{W}$, the effective Hamiltonian for $K\to\pi\pi$ depends only on $\zeta^{L}_{u,c}$. Since $|V_{ud}/V_{td}|$ is larger than $|V_{cs}/V_{ts}|$ by a factor of $4.67$, the dominant contribution to the $\Delta S=1$ processes comes from the first term of $\eta_{Z}$ defined in Eq. (11). In addition, $V_{ud}$ is larger than $|V_{cd}|$ by a factor of $1/\lambda\sim 4.44$; therefore, the main contribution to the first term of $\eta_{Z}$ comes from $V_{ud}\zeta^{L*}_{u}/V_{td}$. That is, the anomalous $tuZ$-coupling is the dominant effect in our study.
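The loop function and the CKM hierarchy quoted above can be checked numerically. The sketch below assumes a running top mass $\overline{m}_t\approx 165$ GeV (which reproduces $I_Z(x_t)\approx 0.693$) and PDG-like CKM magnitudes, so the ratios come out near, though not exactly at, the quoted $4.67$ and $4.44$:

```python
import math

# Assumed inputs: a running top mass (reproduces I_Z ~ 0.693) and
# PDG-like CKM magnitudes; the paper's exact inputs may differ slightly.
m_t, m_W = 165.0, 80.38
V_ud, V_cd, V_cs = 0.9743, 0.225, 0.973
V_td, V_ts = 0.0086, 0.0404

def I_Z(x):
    """Z-penguin loop integral of Eq. (12)."""
    return -0.25 + x * math.log(x) / (2.0 * (x - 1.0))

x_t = (m_t / m_W) ** 2
print(f"I_Z(x_t) ~ {I_Z(x_t):.3f}")  # ~0.693

# CKM hierarchy that makes the tuZ term dominate eta_Z:
print(f"|V_ud/V_td| / |V_cs/V_ts| ~ {(V_ud / V_td) / (V_cs / V_ts):.1f}")
print(f"|V_ud/V_cd| ~ {V_ud / V_cd:.1f}")  # ~ 1/lambda
```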
Using the isospin amplitudes, the Kaon direct CP violating parameter from new physics can be estimated using Buras:2015yba :
$$Re\left(\frac{\epsilon^{\prime}}{\epsilon}\right)\approx-\frac{\omega}{\sqrt{2}|\epsilon_{K}|}\left[\frac{ImA_{0}}{ReA_{0}}-\frac{ImA_{2}}{ReA_{2}}\right]\,,$$
(13)
where $\omega=ReA_{2}/ReA_{0}\approx 1/22.35$ reflects the $\Delta I=1/2$ rule, and $|\epsilon_{K}|\approx 2.228\times 10^{-3}$ is the Kaon indirect CP violating parameter. It can be seen that in addition to the hadronic matrix element ratios, $\epsilon^{\prime}/\epsilon$ also strongly depends on the Wilson coefficients at the $\mu=m_{c}$ scale. It is known that the main new physics contributions to $\epsilon^{\prime}/\epsilon$ come from the $Q^{(\prime)}_{6}$ and $Q^{(\prime)}_{8}$ operators Buras:2014sba ; Buras:2015yca . Although these operators are not generated through the $tqZ$ couplings at $\mu=m_{W}$ in our case, they can be induced via QCD radiative corrections, and the Wilson coefficients at the $\mu=m_{c}$ scale can be obtained using the renormalization group (RG) evolution Buchalla:1995vs . Thus, the induced effective Wilson coefficients for the $Q_{6,8}$ operators at $\mu=m_{c}$ are:
$$\displaystyle y^{Z}_{6}(m_{c})$$
$$\displaystyle\approx-0.08y^{Z}_{3}-0.01y^{Z}_{7}+0.07y^{Z}_{9}\,,$$
$$\displaystyle y^{Z}_{8}(m_{c})$$
$$\displaystyle\approx 0.63y^{Z}_{7}\,.$$
(14)
It can be seen that $y^{Z}_{6}(m_{c})$ is much smaller than $y^{Z}_{8}(m_{c})$; thus, we need only consider the $Q_{8}$ operator contribution.
According to the $K\to\pi\pi$ matrix elements and the formulation of $Re(\epsilon^{\prime}/\epsilon)$ provided in Buras:2015yba , the $Q_{8}$ contribution can be written as:
$$\displaystyle Re\left(\frac{\epsilon^{\prime}}{\epsilon}\right)^{Z}_{P}$$
$$\displaystyle\approx-a^{(3/2)}_{8}B^{(3/2)}_{8}\,,$$
$$\displaystyle a^{(3/2)}_{8}$$
$$\displaystyle=Im\left(\lambda_{t}y^{Z}_{8}(m_{c})\right)\frac{r_{2}\langle Q_{8}\rangle_{2}}{B^{(3/2)}_{8}ReA_{2}}\,,$$
(15)
where $r_{2}=\omega G_{F}/(2|\epsilon_{K}|)\approx 1.17\times 10^{-4}$ GeV${}^{-2}$, $B^{(3/2)}_{8}\approx 0.76$; $ReA^{\rm exp}_{2(0)}\approx 1.21(27.04)\times 10^{-8}$ GeV PDG , and the matrix element of $\langle Q_{8}\rangle_{2}$ is defined as:
$$\langle Q_{8}\rangle_{2}=\sqrt{2}\left(\frac{m^{2}_{K}}{m_{s}(\mu)+m_{d}(\mu)}\right)^{2}f_{\pi}B^{(3/2)}_{8}\,.$$
(16)
Although the $Q_{8}$ operator can also contribute to the isospin $I=0$ state of $\pi\pi$, its effect there is a factor of $15$ smaller than for the isospin $I=2$ state, so we neglect this contribution.
Since the $t\to(u,c)Z$ decays have not yet been observed, in order to simplify their correlation to $\epsilon^{\prime}/\epsilon$, we use $BR(t\to qZ)\equiv{\rm Min}(BR(t\to cZ),\,BR(t\to uZ))$ instead of $BR(t\to u(c)Z)$ as the upper limit. The contours for $Re(\epsilon^{\prime}/\epsilon)^{Z}_{P}$ (in units of $10^{-3}$) as a function of $BR(t\to qZ)$ and $\theta^{L}_{u}$ are shown in Fig. 2, where the solid and dashed lines denote the results with $\theta^{L}_{c}=-\theta^{L}_{u}$ and $\zeta^{L}_{c}=0$, respectively, and the horizontal dashed line is the current upper limit of $BR(t\to qZ)$. It can be seen that the Kaon direct CP violation arising from the anomalous $tuZ$-coupling can reach $0.8\times 10^{-3}$, whereas the contribution from the $tcZ$-coupling is only a minor effect. When the limit of $t\to qZ$ approaches $BR(t\to qZ)\sim 0.5\times 10^{-4}$, the induced $\epsilon^{\prime}/\epsilon$ can still be as large as $Re(\epsilon^{\prime}/\epsilon)^{Z}_{P}\sim 0.4\times 10^{-3}$.
IV Z-penguin induced (semi)-leptonic $K$ and $B$ decays and numerical analysis
The same Feynman diagram as that in Fig. 1 can also be applied to the rare leptonic and semi-leptonic $K(B)$ decays when $f$ is a neutrino or a charged lepton. Because $|V_{us}/V_{ts}|\ll|V_{cs}/V_{ts}|\sim|V_{us}/V_{td}|\ll|V_{ud}/V_{td}|$, the anomalous $tu(c)Z$-coupling contributions to the $b\to s\ell\bar{\ell}$ ($\ell=\nu,\ell^{-}$) processes deviate from the SM amplitude by less than $7\%$, whereas the influence of the $tuZ$ coupling on $d\to s\ell\bar{\ell}$ and $b\to d\ell\bar{\ell}$ can exceed $20\%$ at the amplitude level. Accordingly, in the following analysis, we concentrate on the rare decays $K\to\pi\nu\bar{\nu}$, $K_{S}\to\mu^{+}\mu^{-}$, and $B_{d}\to\mu^{+}\mu^{-}$, which are sensitive to new physics effects and are theoretically clean.
According to the formulations in Bobeth:2017ecx , we write the effective Hamiltonian for $d_{i}\to d_{j}\ell\bar{\ell}$ induced by the $tuZ$ coupling as:
$$\displaystyle{\cal H}_{d_{i}\to d_{j}\ell\bar{\ell}}$$
$$\displaystyle=-\frac{G_{F}V^{*}_{td_{j}}V_{td_{i}}}{\sqrt{2}}\frac{\alpha}{\pi}C^{Z}_{L}[\bar{d}_{j}\gamma_{\mu}P_{L}d_{i}][\bar{\nu}\gamma^{\mu}(1-\gamma_{5})\nu]$$
$$\displaystyle-\frac{G_{F}V^{*}_{td_{j}}V_{td_{i}}}{\sqrt{2}}\frac{\alpha}{\pi}\bar{d}_{j}\gamma_{\mu}P_{L}d_{i}\left[C^{Z}_{9}\bar{\ell}\gamma^{\mu}\ell+C^{Z}_{10}\bar{\ell}\gamma^{\mu}\gamma_{5}\ell\right]\,,$$
(17)
where we have ignored the small contributions from the $tcZ$-coupling; $d_{i}\to d_{j}$ could be the $s\to d$ or $b\to d$ transition, and the effective Wilson coefficients are given as:
$$\displaystyle C^{Z}_{L}=C^{Z}_{10}$$
$$\displaystyle\approx\frac{I_{Z}(x_{t})\zeta^{L}_{u}}{4s^{2}_{W}}\frac{V^{*}_{ud}}{V^{*}_{td}}\,,~{}C^{Z}_{9}\approx C^{Z}_{L}\left(-1+4s^{2}_{W}\right)\,.$$
(18)
Because $-1+4s^{2}_{W}\approx-0.08$, the $C^{Z}_{9}$ effect can indeed be neglected.
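To gauge the size of the effect, a rough estimate with assumed inputs ($s^{2}_{W}\approx 0.231$, PDG-like $|V_{ud}/V_{td}|$, and the value $C^{\rm SM}_{10}\approx-4.21$ quoted later in the text) shows that a $tuZ$-coupling saturating Eq. (8) yields a $|C^{Z}_{L}|$ of order $40\%$ of $|C^{\rm SM}_{10}|$:

```python
# Rough size of the Z-penguin Wilson coefficient from Eq. (18).
# Assumed inputs; not the paper's exact Table 1 values:
s_W2 = 0.231          # sin^2(theta_W)
I_Z_xt = 0.693        # loop function value, Eq. (12)
V_ud, V_td = 0.9743, 0.0086
C10_SM = -4.21        # SM coefficient quoted in the text

def C_L_Z(zeta_L_u):
    """|C_L^Z| of Eq. (18) for a real coupling zeta_L_u."""
    return I_Z_xt * zeta_L_u / (4.0 * s_W2) * (V_ud / V_td)

c_np = C_L_Z(0.019)   # coupling saturating Eq. (8)
print(f"|C_L^Z| ~ {c_np:.2f} vs |C10_SM| = {abs(C10_SM)}")
```

The large CKM enhancement $|V_{ud}/V_{td}|\sim 10^{2}$ is what compensates the smallness of $\zeta^{L}_{u}$ here.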
Based on the interactions in Eq. (17), the BRs for the $K_{L}\to\pi^{0}\nu\bar{\nu}$ and $K^{+}\to\pi^{+}\nu\bar{\nu}$ decays can be formulated as Buras:2015yca :
$$\displaystyle BR(K_{L}\to\pi^{0}\nu\bar{\nu})$$
$$\displaystyle=\kappa_{L}\left|\frac{Im\,X_{\rm eff}}{\lambda^{5}}\right|^{2}\,,$$
$$\displaystyle BR(K^{+}\to\pi^{+}\nu\bar{\nu})$$
$$\displaystyle=\kappa_{+}(1+\Delta_{\rm EM})\left[\left|\frac{Im\,X_{\rm eff}}{\lambda^{5}}\right|^{2}+\left|\frac{Re\,\lambda_{c}}{\lambda}P_{c}(X)+\frac{Re\,X_{\rm eff}}{\lambda^{5}}\right|^{2}\right]\,,$$
(19)
where $\lambda_{c}=V^{*}_{cs}V_{cd}$ and $\Delta_{\rm EM}=-0.003$; $P_{c}(X)=0.404\pm 0.024$ denotes the charm-quark contribution Isidori:2005xm ; Mescia:2007kn ; the values of $\kappa_{L,+}$ are respectively given as $\kappa_{L}=(2.231\pm 0.013)\times 10^{-10}$ and $\kappa_{+}=(5.173\pm 0.025)\times 10^{-11}$, and $X_{\rm eff}$ is defined as:
$$X_{\rm eff}=\lambda_{t}\left(X^{\rm SM}_{L}-s^{2}_{W}C^{Z*}_{L}\right)\,,$$
(20)
with $X^{\rm SM}_{L}=1.481\pm 0.009$ Buras:2015yca . Since $K_{L}\to\pi^{0}\nu\bar{\nu}$ is a CP violating process, its BR only depends on the imaginary part of $X_{\rm eff}$. Another important CP violating process in $K$ decay is $K_{S}\to\mu^{+}\mu^{-}$, where its BR from the SD contribution can be expressed as Bobeth:2017ecx :
$$\displaystyle BR(K_{S}\to\mu^{+}\mu^{-})_{\rm SD}$$
$$\displaystyle=\tau_{K_{S}}\frac{G^{2}_{F}\alpha^{2}}{8\pi^{3}}m_{K}f^{2}_{K}m^{2}_{\mu}\sqrt{1-\frac{4m^{2}_{\mu}}{m^{2}_{K}}}\left|Im[\lambda_{t}\left(C^{\rm SM}_{10}+C^{Z*}_{10}\right)]\right|^{2}\,,$$
(21)
with $C^{\rm SM}_{10}\approx-4.21$. Including the LD effect Ecker:1991ru ; Isidori:2003ts , the BR for $K_{S}\to\mu^{+}\mu^{-}$ can be estimated using $BR(K_{S}\to\mu^{+}\mu^{-})_{\rm LD+SD}\approx 4.99_{\rm LD}\times 10^{-12}+BR(K_{S}\to\mu^{+}\mu^{-})_{\rm SD}$ DAmbrosio:2017klp . Moreover, it is found that the effective interactions in Eq. (17) can significantly affect the $B_{d}\to\mu^{+}\mu^{-}$ decay, whose BR can be derived as:
$$\displaystyle BR(B_{d}\to\mu^{+}\mu^{-})$$
$$\displaystyle=\tau_{B}\frac{G^{2}_{F}\alpha^{2}}{16\pi^{3}}m_{B}f^{2}_{B}m^{2}_{\mu}\left(1-\frac{2m^{2}_{\mu}}{m^{2}_{B}}\right)\sqrt{1-\frac{4m^{2}_{\mu}}{m^{2}_{B}}}$$
$$\displaystyle\times\left|V^{*}_{td}V_{tb}\left(C^{\rm SM}_{10}+C^{Z}_{10}\right)\right|^{2}\,.$$
(22)
Because $B_{d}\to\mu^{+}\mu^{-}$ is not a pure CP violating process, its BR involves both the real and imaginary parts of $V^{*}_{td}V_{tb}\left(C^{\rm SM}_{10}+C^{Z}_{10}\right)$. Note that the associated Wilson coefficient in $B_{d}\to\mu^{+}\mu^{-}$ is $C^{Z}_{10}$, whereas it is $C^{Z*}_{10}$ in the $K$ decays.
After formulating the BRs for the investigated processes, we now numerically analyze the $tuZ$-coupling effect on these decays. Since the involved parameter is the complex coupling $\zeta^{L}_{u}=|\zeta^{L}_{u}|e^{-i\theta^{L}_{u}}$, we use $BR(t\to uZ)$ instead of $|\zeta^{L}_{u}|$ as the varied parameter. Thus, we show $BR(K_{L}\to\pi^{0}\nu\bar{\nu})$ (in units of $10^{-11}$) as a function of $BR(t\to uZ)$ and $\theta^{L}_{u}$ in Fig. 3(a), where the CP phase is taken in the range $\theta^{L}_{u}=[-\pi,\pi]$; the SM result is shown in the plot, and the horizontal line denotes the current upper limit of $BR(t\to uZ)$. It can be clearly seen that $BR(K_{L}\to\pi^{0}\nu\bar{\nu})$ can be enhanced to $7\times 10^{-11}$ for $\theta^{L}_{u}>0$ when $BR(t\to uZ)<1.7\times 10^{-4}$ is satisfied. Moreover, $BR(K_{L}\to\pi^{0}\nu\bar{\nu})\approx 5.3\times 10^{-11}$ can be achieved when $BR(t\to uZ)=0.5\times 10^{-4}$ and $\theta^{L}_{u}=2.1$ are used. Similarly, the influence of $\zeta^{L}_{u}$ on $BR(K^{+}\to\pi^{+}\nu\bar{\nu})$ is shown in Fig. 3(b).
Since $BR(K^{+}\to\pi^{+}\nu\bar{\nu})$ involves the real and imaginary parts of $X_{\rm eff}$, unlike the $K_{L}\to\pi^{0}\nu\bar{\nu}$ decay, its BR cannot be enhanced manyfold due to the dominance of the real part. Nevertheless, the BR of $K^{+}\to\pi^{+}\nu\bar{\nu}$ can be maximally enhanced by $38\%$; even with $BR(t\to uZ)=0.5\times 10^{-4}$ and $\theta^{L}_{u}=2.1$, $BR(K^{+}\to\pi^{+}\nu\bar{\nu})$ still exhibits an increase of $15\%$. It can also be found that in addition to $|\zeta^{L}_{u}|$, the BRs of $K\to\pi\nu\bar{\nu}$ are sensitive to the CP phase $\theta^{L}_{u}$. Although the observed $BR(K\to\pi\nu\bar{\nu})$ cannot constrain $BR(t\to uZ)$, the allowed range of $\theta^{L}_{u}$ can be further limited.
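The enhancement pattern described above can be reproduced with a short numerical sketch of Eqs. (19) and (20). All inputs below (in particular $\lambda_t$, $Re\,\lambda_c$, and the illustrative NP coefficient) are assumed, PDG-like values, not the paper's exact Table 1 entries:

```python
# Numerical sketch of Eqs. (19)-(20). All inputs are assumed,
# PDG-like illustrative values:
lam = 0.2252                   # Cabibbo lambda
lam_t = -3.2e-4 + 1.48e-4j     # lambda_t = V_ts^* V_td (illustrative)
Re_lam_c = -0.22               # Re(V_cs^* V_cd)
X_L_SM, s_W2 = 1.481, 0.231
kappa_L, kappa_p = 2.231e-10, 5.173e-11
Delta_EM, Pc = -0.003, 0.404

def K_to_pinunu(C_L_Z=0j):
    """Return (BR(K_L -> pi0 nu nu), BR(K+ -> pi+ nu nu))."""
    X_eff = lam_t * (X_L_SM - s_W2 * C_L_Z.conjugate())
    imx = X_eff.imag / lam**5
    rex = X_eff.real / lam**5
    br_L = kappa_L * imx**2
    br_p = kappa_p * (1 + Delta_EM) * (imx**2 + (Re_lam_c / lam * Pc + rex)**2)
    return br_L, br_p

br_L_SM, br_p_SM = K_to_pinunu()
print(f"SM: BR(K_L) ~ {br_L_SM:.1e}, BR(K+) ~ {br_p_SM:.1e}")
# An illustrative NP coefficient with a favorable phase enhances K_L strongly:
br_L_NP, br_p_NP = K_to_pinunu(C_L_Z=-1.6j)
print(f"NP: BR(K_L) ~ {br_L_NP:.1e}, BR(K+) ~ {br_p_NP:.1e}")
```

With these inputs the SM values land near Eqs. (4) and (5), and the purely imaginary NP coefficient more than doubles $BR(K_L\to\pi^0\nu\bar\nu)$, consistent with the trend in Fig. 3(a).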
For the $K_{S}\to\mu^{+}\mu^{-}$ decay, in addition to the SD effect, the LD effect, which arises from the absorptive part of $K_{S}\to\gamma\gamma\to\mu^{+}\mu^{-}$, predominantly contributes to $BR(K_{S}\to\mu^{+}\mu^{-})$. Thus, if the new physics contribution is much smaller than the LD effect, its influence on $BR(K_{S}\to\mu^{+}\mu^{-})_{\rm LD+SD}=BR(K_{S}\to\mu^{+}\mu^{-})_{\rm LD}+BR(K_{S}\to\mu^{+}\mu^{-})_{\rm SD}$ may not be significant. In order to show the $tuZ$-coupling effect, we plot the contours for $BR(K_{S}\to\mu^{+}\mu^{-})_{\rm LD+SD}$ (in units of $10^{-12}$) in Fig. 3(c). From the result, it can be clearly seen that $BR(K_{S}\to\mu^{+}\mu^{-})_{\rm LD+SD}$ can be enhanced by at most $11\%$ with respect to the SM result, whereas the enhancement is only $\sim 4.3\%$ when $BR(t\to uZ)=0.5\times 10^{-4}$ and $\theta^{L}_{u}=2.1$.
As discussed earlier, the $tcZ$-coupling contribution to the $B_{s}\to\mu^{+}\mu^{-}$ process is small; however, similar to the case of the $K^{+}\to\pi^{+}\nu\bar{\nu}$ decay, the BR of $B_{d}\to\mu^{+}\mu^{-}$ can be significantly enhanced through the anomalous $tuZ$-coupling. We show the contours of $BR(B_{d}\to\mu^{+}\mu^{-})$ (in units of $10^{-10}$) as a function of $BR(t\to uZ)$ and $\theta^{L}_{u}$ in Fig. 3(d). It can be seen that the maximum of the allowed $BR(B_{d}\to\mu^{+}\mu^{-})$ can reach $1.97\times 10^{-10}$, which is a factor of $1.8$ larger than the SM result of $BR(B_{d}\to\mu^{+}\mu^{-})^{\rm SM}\approx 1.06\times 10^{-10}$. Using $BR(t\to uZ)=0.5\times 10^{-4}$ and $\theta^{L}_{u}=2.1$, the enhancement factor relative to $BR(B_{d}\to\mu^{+}\mu^{-})^{\rm SM}$ becomes $1.38$. Since the maximum of $BR(B_{d}\to\mu^{+}\mu^{-})$ is close to the ATLAS upper bound of $2.1\times 10^{-10}$, the measurement of this rare $B$ decay at the LHC could further constrain the allowed range of $\theta^{L}_{u}$.
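Eq. (22) can likewise be evaluated numerically. The sketch below uses assumed PDG-like inputs ($\tau_B$, $f_B$, $|V^{*}_{td}V_{tb}|$, etc., not the paper's exact Table 1 values), which reproduce the SM value $BR(B_d\to\mu^+\mu^-)\sim 1.1\times 10^{-10}$ within input uncertainties; an illustrative NP coefficient $C^{Z}_{10}=-1.6$ then shows the possible enhancement:

```python
import math

# Numerical sketch of Eq. (22); PDG-like assumed inputs:
G_F = 1.1664e-5                   # [GeV^-2]
alpha = 1 / 127.9                 # EM coupling near the EW scale
tau_B = 1.52e-12 / 6.582e-25      # B_d lifetime [s] -> [GeV^-1]
m_B, f_B, m_mu = 5.28, 0.19, 0.1057   # [GeV]
Vtd_Vtb = 0.0087                  # |V_td^* V_tb|
C10_SM = -4.21

def BR_Bd_mumu(C10_NP=0j):
    """BR(B_d -> mu+ mu-) from Eq. (22), with an optional NP shift."""
    beta = math.sqrt(1 - 4 * m_mu**2 / m_B**2)
    pref = tau_B * G_F**2 * alpha**2 / (16 * math.pi**3) * m_B * f_B**2 \
        * m_mu**2 * (1 - 2 * m_mu**2 / m_B**2) * beta
    return pref * abs(Vtd_Vtb * (C10_SM + C10_NP))**2

print(f"SM: BR ~ {BR_Bd_mumu():.2e}")      # ~1.1e-10
print(f"NP: BR ~ {BR_Bd_mumu(-1.6):.2e}")  # enhanced, cf. Fig. 3(d)
```

Because the BR scales with $|C^{\rm SM}_{10}+C^{Z}_{10}|^{2}$, a coefficient that interferes constructively with the SM gives the largest enhancement, while the CP phase $\theta^{L}_{u}$ controls how much of $C^{Z}_{10}$ actually adds to the SM piece.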
V Summary
We studied the impacts of the anomalous $tqZ$ couplings, especially the $tuZ$ coupling, on low energy flavor physics. It was found that the anomalous coupling can contribute significantly to $\epsilon^{\prime}/\epsilon$, $BR(K\to\pi\nu\bar{\nu})$, $K_{S}\to\mu^{+}\mu^{-}$, and $B_{d}\to\mu^{+}\mu^{-}$. Although these decays, with the exception of $\epsilon^{\prime}/\epsilon$, have not yet been observed in experiments, their designed experimental sensitivities are good enough to test the SM. Using the sensitivity of $BR(t\to uZ)\sim 5\times 10^{-5}$ expected at the HL-LHC, the resulting $BR(K\to\pi\nu\bar{\nu})$ and $BR(B_{d}\to\mu^{+}\mu^{-})$ can be examined by the NA62, KOTO, KLEVER, and LHC experiments.
According to our study, it was found that we cannot simultaneously enhance $Re(\epsilon^{\prime}/\epsilon)$, $BR(K_{L}\to\pi^{0}\nu\bar{\nu})$, and $BR(K_{S}\to\mu^{+}\mu^{-})$ in the same region of the CP violating phase, where the positive $Re(\epsilon^{\prime}/\epsilon)$ requires $\theta^{L}_{u}<0$, but the large $BR(K_{L}\to\pi^{0}\nu\bar{\nu})$ and $BR(K_{S}\to\mu^{+}\mu^{-})$ have to rely on $\theta^{L}_{u}>0$. Since $BR(K^{+}\to\pi^{+}\nu\bar{\nu})$ and $BR(B_{d}\to\mu^{+}\mu^{-})$ involve both real and imaginary parts of Wilson coefficients, their BRs are not sensitive to the sign of $\theta^{L}_{u}$. Hence, $Re(\epsilon^{\prime}/\epsilon)$, $BR(K^{+}\to\pi^{+}\nu\bar{\nu})$ and $BR(B_{d}\to\mu^{+}\mu^{-})$ can be enhanced at the same time.
Acknowledgments
This work was partially supported by the Ministry of Science and Technology of Taiwan,
under grants MOST-106-2112-M-006-010-MY2 (CHC).
References
(1)
S. L. Glashow, J. Iliopoulos and L. Maiani,
Phys. Rev. D 2, 1285 (1970).
(2)
J. A. Aguilar-Saavedra,
Acta Phys. Polon. B 35, 2695 (2004)
[hep-ph/0409342].
(3)
G. Abbas, A. Celis, X. Q. Li, J. Lu and A. Pich,
JHEP 1506, 005 (2015)
[arXiv:1503.06423 [hep-ph]].
(4)
[ATLAS Collaboration],
arXiv:1307.7292 [hep-ex].
(5)
ATLAS Collaboration, ATL-PHYS-PUB-2016-019.
(6)
K. J. Abraham, K. Whisnant, J. M. Yang and B. L. Young,
Phys. Rev. D 63, 034011 (2001)
[hep-ph/0007280].
(7)
G. Eilam, A. Gemintern, T. Han, J. M. Yang and X. Zhang,
Phys. Lett. B 510, 227 (2001)
[hep-ph/0102037].
(8)
J. A. Aguilar-Saavedra,
Phys. Rev. D 67, 035003 (2003)
Erratum: [Phys. Rev. D 69, 099901 (2004)]
[hep-ph/0210112].
(9)
R. Gaitan, R. Martinez, J. H. M. de Oca and E. A. Garces,
Phys. Rev. D 98, no. 3, 035031 (2018)
[arXiv:1710.04262 [hep-ph]].
(10)
P. Mandrik [CMS Collaboration],
EPJ Web Conf. 191, 02009 (2018)
[arXiv:1808.09915 [hep-ex]].
(11)
M. Aaboud et al. [ATLAS Collaboration],
JHEP 1807, 176 (2018)
[arXiv:1803.09923 [hep-ex]].
(12)
T. Blum et al.,
Phys. Rev. D 91, no. 7, 074502 (2015)
[arXiv:1502.00263 [hep-lat]].
(13)
Z. Bai et al. [RBC and UKQCD Collaborations],
Phys. Rev. Lett. 115, no. 21, 212001 (2015)
[arXiv:1505.07863 [hep-lat]].
(14)
A. J. Buras and J. M. Gerard,
Nucl. Phys. B 264, 371 (1986).
(15)
W. A. Bardeen, A. J. Buras and J. M. Gerard,
Phys. Lett. B 180, 133 (1986).
(16)
W. A. Bardeen, A. J. Buras and J. M. Gerard,
Nucl. Phys. B 293, 787 (1987).
(17)
W. A. Bardeen, A. J. Buras and J. M. Gerard,
Phys. Lett. B 192, 138 (1987).
(18)
W. A. Bardeen, A. J. Buras and J. M. Gerard,
Phys. Lett. B 211, 343 (1988).
(19)
A. J. Buras and J. M. Gerard,
JHEP 1512, 008 (2015)
[arXiv:1507.06326 [hep-ph]].
(20)
A. J. Buras, M. Gorbahn, S. Jäger and M. Jamin,
JHEP 1511, 202 (2015)
[arXiv:1507.06345 [hep-ph]].
(21)
J. R. Batley et al. [NA48 Collaboration],
Phys. Lett. B 544, 97 (2002)
[hep-ex/0208009].
(22)
A. Alavi-Harati et al. [KTeV Collaboration],
Phys. Rev. D 67, 012005 (2003)
Erratum: [Phys. Rev. D 70, 079904 (2004)]
[hep-ex/0208007].
(23)
E. Abouzaid et al. [KTeV Collaboration],
Phys. Rev. D 83, 092001 (2011)
[arXiv:1011.0127 [hep-ex]].
(24)
A. J. Buras, D. Buttazzo, J. Girrbach-Noe and R. Knegjens,
JHEP 1511, 033 (2015)
[arXiv:1503.02693 [hep-ph]].
(25)
A. J. Buras, D. Buttazzo and R. Knegjens,
JHEP 1511, 166 (2015)
[arXiv:1507.08672 [hep-ph]].
(26)
A. J. Buras and F. De Fazio,
JHEP 1603, 010 (2016)
[arXiv:1512.02869 [hep-ph]].
(27)
A. J. Buras,
JHEP 1604, 071 (2016)
[arXiv:1601.00005 [hep-ph]].
(28)
M. Tanimoto and K. Yamamoto,
PTEP 2016, no. 12, 123B02 (2016)
[arXiv:1603.07960 [hep-ph]].
(29)
A. J. Buras and F. De Fazio,
JHEP 1608, 115 (2016)
[arXiv:1604.02344 [hep-ph]].
(30)
T. Kitahara, U. Nierste and P. Tremper,
Phys. Rev. Lett. 117, no. 9, 091802 (2016)
[arXiv:1604.07400 [hep-ph]].
(31)
M. Endo, S. Mishima, D. Ueda and K. Yamamoto,
Phys. Lett. B 762, 493 (2016)
[arXiv:1608.01444 [hep-ph]].
(32)
C. Bobeth, A. J. Buras, A. Celis and M. Jung,
JHEP 1704, 079 (2017)
[arXiv:1609.04783 [hep-ph]].
(33)
V. Cirigliano, W. Dekens, J. de Vries and E. Mereghetti,
Phys. Lett. B 767, 1 (2017)
[arXiv:1612.03914 [hep-ph]].
(34)
M. Endo, T. Kitahara, S. Mishima and K. Yamamoto,
Phys. Lett. B 771, 37 (2017)
[arXiv:1612.08839 [hep-ph]].
(35)
C. Bobeth, A. J. Buras, A. Celis and M. Jung,
JHEP 1707, 124 (2017)
[arXiv:1703.04753 [hep-ph]].
(36)
A. Crivellin, G. D’Ambrosio, T. Kitahara and U. Nierste,
Phys. Rev. D 96, no. 1, 015023 (2017)
[arXiv:1703.05786 [hep-ph]].
(37)
C. Bobeth and A. J. Buras,
JHEP 1802, 101 (2018)
[arXiv:1712.01295 [hep-ph]].
(38)
N. Haba, H. Umeeda and T. Yamada,
arXiv:1802.09903 [hep-ph].
(39)
A. J. Buras and J. M. Gérard,
arXiv:1804.02401 [hep-ph].
(40)
C. H. Chen and T. Nomura,
arXiv:1804.06017 [hep-ph].
(41)
C. H. Chen and T. Nomura,
arXiv:1805.07522 [hep-ph].
(42)
S. Matsuzaki, K. Nishiwaki and K. Yamamoto,
arXiv:1806.02312 [hep-ph].
(43)
N. Haba, H. Umeeda and T. Yamada,
arXiv:1806.03424 [hep-ph].
(44)
J. Aebischer, A. J. Buras and J. M. Gérard,
arXiv:1807.01709 [hep-ph].
(45)
J. Aebischer, C. Bobeth, A. J. Buras, J. M. Gérard and D. M. Straub,
arXiv:1807.02520 [hep-ph].
(46)
J. Aebischer, C. Bobeth, A. J. Buras and D. M. Straub,
arXiv:1808.00466 [hep-ph].
(47)
C. H. Chen and T. Nomura,
arXiv:1808.04097 [hep-ph].
(48)
C. H. Chen and T. Nomura,
arXiv:1811.02315 [hep-ph].
(49)
F. Newson et al.,
arXiv:1411.0109 [hep-ex].
(50)
M. Moulson [NA62 Collaboration],
PoS ICHEP 2016, 581 (2016)
[arXiv:1611.04979 [hep-ex]].
(51)
T. K. Komatsubara,
Prog. Part. Nucl. Phys. 67, 995 (2012)
[arXiv:1203.6437 [hep-ex]].
(52)
B. Beckford [KOTO Collaboration],
arXiv:1710.01412 [hep-ex].
(53)
M. Moulson,
arXiv:1812.01896 [physics.ins-det].
(54)
G. Buchalla and A. J. Buras,
Nucl. Phys. B 400, 225 (1993).
(55)
M. Misiak and J. Urban,
Phys. Lett. B 451, 161 (1999)
[hep-ph/9901278].
(56)
G. Buchalla and A. J. Buras,
Nucl. Phys. B 548, 309 (1999)
[hep-ph/9901288].
(57)
M. Gorbahn and U. Haisch,
Nucl. Phys. B 713, 291 (2005)
[hep-ph/0411071].
(58)
A. J. Buras, M. Gorbahn, U. Haisch and U. Nierste,
Phys. Rev. Lett. 95, 261805 (2005)
[hep-ph/0508165].
(59)
A. J. Buras, M. Gorbahn, U. Haisch and U. Nierste,
JHEP 0611, 002 (2006)
Erratum: [JHEP 1211, 167 (2012)]
[hep-ph/0603079].
(60)
G. Buchalla and A. J. Buras,
Phys. Rev. D 57, 216 (1998)
[hep-ph/9707243].
(61)
J. Brod and M. Gorbahn,
Phys. Rev. D 78, 034006 (2008)
[arXiv:0805.4119 [hep-ph]].
(62)
J. Brod, M. Gorbahn and E. Stamou,
Phys. Rev. D 83, 034030 (2011)
[arXiv:1009.0947 [hep-ph]].
(63)
A. V. Artamonov et al. [E949 Collaboration],
Phys. Rev. Lett. 101, 191802 (2008)
[arXiv:0808.2459 [hep-ex]].
(64)
J. K. Ahn et al. [E391a Collaboration],
Phys. Rev. D 81, 072004 (2010)
[arXiv:0911.4789 [hep-ex]].
(65)
B. Velghe [NA62 Collaboration],
arXiv:1810.06424 [hep-ex].
(66)
G. Ecker and A. Pich,
Nucl. Phys. B 366, 189 (1991).
(67)
G. Isidori and R. Unterdorfer,
JHEP 0401, 009 (2004)
[hep-ph/0311084].
(68)
G. D’Ambrosio and T. Kitahara,
Phys. Rev. Lett. 119, no. 20, 201802 (2017)
[arXiv:1707.06999 [hep-ph]].
(69)
F. Dettori, on behalf of the LHCb collaboration, a talk given in UK Flavour 2017: https://conference.ippp.dur.ac.uk/event/573/contributions/3286.
(70)
C. Bobeth, M. Gorbahn, T. Hermann, M. Misiak, E. Stamou and M. Steinhauser,
Phys. Rev. Lett. 112, 101801 (2014)
[arXiv:1311.0903 [hep-ph]].
(71)
M. Aaboud et al. [ATLAS Collaboration],
arXiv:1812.03017 [hep-ex].
(72)
V. Khachatryan et al. [CMS and LHCb Collaborations],
Nature 522, 68 (2015)
[arXiv:1411.4413 [hep-ex]].
(73)
A. J. Buras, F. De Fazio and J. Girrbach,
Eur. Phys. J. C 74, no. 7, 2950 (2014)
[arXiv:1404.3824 [hep-ph]].
(74)
G. Buchalla, A. J. Buras and M. E. Lautenbacher,
Rev. Mod. Phys. 68, 1125 (1996)
[hep-ph/9512380].
(75)
C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016).
(76)
G. Isidori, F. Mescia and C. Smith,
Nucl. Phys. B 718, 319 (2005)
[hep-ph/0503107].
(77)
F. Mescia and C. Smith,
Phys. Rev. D 76, 034017 (2007)
[arXiv:0705.2025 [hep-ph]].
(78)
F. Mescia, C. Smith and S. Trine,
JHEP 0608, 088 (2006)
[hep-ph/0606081]. |
Modeling the Performance of the LSST in Surveying the Near-Earth Object Population
T. Grav
Planetary Science Institute, Tucson, AZ 85719, USA; tgrav@psi.edu
A. K. Mainzer
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
T. Spahr
NEO Sciences, LLC
Abstract
We have performed a detailed survey simulation of the LSST performance with regard to near-Earth objects (NEOs) using the project’s current baseline cadence. The survey shows that if the project is able to reliably generate linked sets of positions and times (a so-called “tracklet”) using two detections of a given object per night and can link these tracklets into a track with a minimum of 3 tracklets covering more than a $\sim 12$ day length-of-arc, it would be able to discover $62\%$ of the potentially hazardous asteroids (PHAs) larger than $140$ m in its projected 10 year survey lifetime. This completeness would be reduced to $58\%$ if the project is unable to implement a pipeline using the two detection cadence and has to adopt the four detection cadence more commonly used by existing NEO surveys. When including the estimated performance from the current operating surveys, assuming these would continue running until the start of LSST and perhaps beyond, the completeness fraction for PHAs larger than $140$ m would be $73\%$ for the baseline cadence and $71\%$ for the four detection cadence. This result is a lower completeness than the estimates of Ivezić et al. (2007) and Ivezić et al. (2014); however, the result is quite close to that of Jones et al. (2016), who show completeness of $\sim 70\%$ using the identical survey cadence used here. We show that the traditional method of using absolute magnitude $H<22$ mag as a proxy for the population with diameters larger than $140$m results in completeness values that are too high by $\sim 5\%$. Our simulation makes use of the most recent models of the physical and orbital properties of the NEO and PHA populations, as well as simulated cadences and telescope performance estimates provided by the LSST project. The consistency of the results presented here when compared to those of Jones et al. (2016) demonstrates the robustness of these survey modeling approaches.
We also show that while neither LSST nor a space-based IR platform like NEOCam individually can complete the survey for $140m$ diameter NEOs, the combination of these systems can achieve that goal after a decade of observation.
asteroids, surveys
1 Introduction
As Earth travels through space along its orbit around the Sun it traverses a population of asteroids and comets, termed the near-Earth objects (NEOs; asteroids and comets that approach within 1.3 au of the Sun). With regularity, Earth finds itself on a collision course with one of these NEOs. Most of the time the objects are small and burn up in the atmosphere, but sometimes the NEO is large enough that it can cause damage. The effects can range from the local damage caused by the Tunguska airburst in 1908 (Chyba et al., 1993) or the Chelyabinsk airburst in 2013 (Brown et al., 2013; Kohout et al., 2014), to the regional destruction caused by the impact that created the kilometer-sized Meteor Crater in Arizona (Grieve, 1987). There is even evidence that the effects can be global, as seen in the theory that an impactor at the Cretaceous-Paleogene boundary created the enormous ring-shaped feature beneath the Gulf of Mexico and significantly contributed to the extinction of the dinosaurs (Alvarez et al., 1980).
Over the last few decades, Congress has tasked NASA with two goals to address the NEO detection problem, in part spurred by the impact of Comet Shoemaker-Levy 9 into Jupiter (Hammel et al., 1995; Zahnle & Mac Low, 1994). The first goal, also called the Spaceguard survey, concerned the population of objects larger than 1km in diameter and directed the agency to detect $90\%$ of this population by 2008. Mainzer et al. (2011a) showed that this goal was reached sometime in 2010. The second goal was given in the George E. Brown, Jr. near-Earth object section of the NASA Authorization Act of 2005 (Public Law 109-155), which charged the agency to discover and track $90\%$ of the NEO population with diameters larger than $140$m by 2020.
The current NEO survey capability is dominated by the Catalina Sky Survey (CSS; Larson, 2007) and the Pan-STARRS project (Denneau et al., 2013). Both surveys operate in similar fashion by observing each pointing on the sky 4-5 times per night, with each return to the same pointing being separated by minutes to tens of minutes. Detections in the fields are connected into tracklets, and when their motions are consistent with that of the NEO population they are posted as candidate NEOs on the Minor Planet Center (MPC) NEO confirmation page. In many cases other observers, dubbed the follow-up community, provide further observations over the next few nights that refine the orbit and establish whether or not the candidate was indeed an NEO. Originally the Pan-STARRS project planned to adopt a two detections per night (hereafter 2-detection) survey cadence for its wide-field multi-science survey. Due to a variety of problems, the Pan-STARRS project was forced to change its survey cadence to the more traditional four detections per night (hereafter 4-detection) cadence that has proven to be successful in CSS, NEOWISE (Mainzer et al., 2011b), and now Pan-STARRS. See Denneau et al. (2013) for a description of the Pan-STARRS Moving Object Processing System (MOPS), including the details of its performance and its problems. New cameras, such as the Dark Energy Camera (DECam; Diehl & Dark Energy Survey Collaboration, 2012), together with refined false-detection handling through, for example, machine learning algorithms (see Goldstein et al., 2015), may yet provide the advances to solve this problem in the future (although at present these techniques are not yet widely implemented for NEO surveys).
The known NEO population currently consists of more than 13,800 objects, $\sim 1,500$ of which were discovered in 2015 (http://neo.jpl.nasa.gov/stats/). The Catalina Sky Survey has been leading the NEO survey effort during most of the last decade, but when Pan-STARRS went from a multi-science survey telescope to a dedicated NEO survey telescope in early 2014, its discovery rate increased by $60\%$. This elevated Pan-STARRS to the premier discovery telescope in 2015, responsible for $48.6\%$ of the discoveries for that year. CSS was the second largest discoverer in 2015, responsible for $36.2\%$ of the discoveries. However, some caution must be exercised when considering where the increase in discoveries took place. The increase in the yearly discovery rate from 2011 to 2015 has been $74.2\%$, with 1,563 discoveries in 2015 compared to 897 in 2011. However, the increase in the yearly discovery rate of the larger objects (absolute magnitude $H<22$ mag, traditionally considered to be objects larger than $140$m in diameter) is only $34.5\%$ over the same time period, with 526 discovered in 2015 compared to 391 in 2011. The fraction of objects discovered per year that have $H>22$ mag has steadily increased from $56.4\%$ in 2011 to $67.6\%$ in 2015. This trend continues into the first two months of 2016.
In 2003 NASA commissioned a report that concluded that a number of ground-based, space-based, and networked systems are capable of meeting the George E. Brown, Jr. goal (Stokes et al., 2003). The National Research Council released a report in 2010 that came to a similar conclusion (Shapiro et al., 2010). Several projects have been proposed to address this problem, but detailed simulations of most of these proposals have not been published in the refereed literature. We performed a detailed study of an infrared space-based option in Mainzer et al. (2015), where we compared the completeness for the NEO population with diameters larger than $140$m for a Venus-trailing orbit and an L1 halo orbit. Sentinel, a privately funded infrared space-based survey proposed by the B612 Foundation (Lu et al., 2013), is similar to the Venus-trailing orbit studied in Mainzer et al. (2015). The L1 halo orbit option, named the Near-Earth Object Camera (NEOCam), has been submitted to the 2005, 2010, and 2014 NASA Discovery Announcements. NEOCam received funding in 2010 to continue the development of its infrared detector technology (McMurtry et al., 2013) and was selected for Phase 2 development in the 2014 Discovery Announcement Opportunity. Mainzer et al. (2015) concluded that the L1 halo option offered superior performance to the Venus-trailing option when considering integral survey completeness for NEOs larger than $140$m in diameter, even when the effects of the loss of data rate in a Venus-trailing orbit are neglected.
In this paper we examine a complementary ground-based option, the Large Synoptic Survey Telescope (LSST; Ivezić et al., 2007; Ivezić et al., 2014), an 8.4m telescope (with the approximate effective collecting area of a 6.67m-diameter mirror) with a 9.6 square degree imager that is currently being built in Chile with funding from the National Science Foundation and the Department of Energy. We perform detailed survey simulations using the latest population models based on NEOWISE results (Mainzer et al., 2011a, 2012a) and the most current model of the survey plan provided by the LSST project, and compare our results with others published by the LSST project (Ivezić et al., 2007; Ivezić et al., 2014). These results are compared to the best available results from the space-based L1 halo orbit survey option. We also show that using absolute magnitude $H<22$ mag as a proxy for the population with diameters larger than $140$m is not a valid approximation.
1.1 The LSST Project
The effectiveness of all surveys for NEO discovery depends on the details of their performance and observational cadence. As the baseline LSST survey cadence is currently envisioned, each pointing will be visited twice per night. This will produce a 6 band ($u$, $g$, $r$, $i$, $z$, and $y$ filters) wide-field deep astronomical survey covering more than 20,000 square degrees of southern sky, visiting each pointing on the sky over 1000 times in a 10 year survey.
In Ivezić et al. (2007), the LSST team presented a simulation that used a set of 1000 synthetic orbital elements that match the distribution of discovered objects in the large size range where present surveys were essentially complete. The simulation computed the positions of the synthetic orbits every 5 days and used a filtering method based on the assumed sky coverage and cadence pattern, limiting magnitude of the survey, visibility constraints, and observing conditions. They estimated that the LSST baseline cadence can achieve, over 10 years, a completeness of $90\%$ for PHAs larger than $250$m in diameter, and $75\%$ completeness for those with diameters larger than $140$m. They further suggested that ongoing simulations indicated that, with improvements in filter choices and operations, LSST would be capable of reaching a completeness of $90\%$ for PHAs larger than $140$m in ten years, but details of these optimizations were not discussed. In Ivezić et al. (2014) a size-limited complete sample of 800 known PHAs was used as the trial population. This simulation improved upon the previous simulation by determining which PHAs are present in each exposure and whether they were bright enough to be detected in individual exposures. They found that the baseline cadence provides orbits for $82\%$ of the PHAs larger than $140$m in diameter after 10 years of operations. It was suggested that optimization of certain aspects of the survey operations would improve the completeness to $90\%$ in 12 years without significantly affecting the survey’s other science goals. It is important here to note that the sizes quoted in these papers are derived by converting absolute magnitudes using an average albedo. We will show in our results why this is not an optimal approach, resulting in completeness values that are overly optimistic.
2 The Simulations
In the last few years our knowledge of the NEO population has increased significantly. Mainzer et al. (2011a, 2012a) used the asteroid-detection portion of the Wide-field Infrared Survey Explorer’s (WISE; Wright et al., 2010) data processing pipeline (NEOWISE; Mainzer et al., 2011b) to update the estimate of the number of objects larger than $140$m in diameter and to derive new size and albedo distributions for the NEO population as a whole, as well as the subpopulations. This has allowed us to create a new synthetic NEO population that is based on both the known NEO orbital population (Grav et al., 2011) and these new distributions of sizes and albedos (see Section 2.3 for the details). Significant work has also been done by the LSST project, resulting in the production of an improved baseline cadence that features the latest knowledge of the telescope and instrument performance, weather and seeing conditions at the Cerro Pachón site, and improvements based on earlier cadence simulations. With these recent developments, we present a set of survey simulations that combine our updated NEO population model with the most recent cadence and performance simulations provided by the LSST project to investigate the performance of the LSST project in surveying the NEO population.
2.1 2 versus 4 Detections per Night
One of the major features of the LSST baseline cadence is the use of the 2-detection approach, which deviates from the way the current NEO surveys operate. CSS, Pan-STARRS and NEOWISE all require 4-5 detections per night to reliably link detections of individual objects due to the large range of rates and directions of most NEOs. This number of repeated observations per night has been shown to be relatively robust against noise points, image artifacts, cosmic rays, and other transient sources that degrade the reliability of position-time pairs (so-called “tracklets”) constructed from fewer detections per night (Denneau et al., 2013; Mainzer et al., 2011b; Cutri et al., 2012).
Since the baseline assumes a 2-detection cadence that has not yet been tested and validated for NEO discovery, the LSST project has also produced a 4-detection cadence, which is more similar to the cadences used by the current NEO surveys. These two cadences are described in more detail in Section 2.4.
In Mainzer et al. (2015), we assumed that 4 detections spaced over 8-9 hours were required to reliably link detections of an individual object to form a tracklet. Requiring additional detections dramatically increases tracklet reliability but decreases the rate at which fresh sky can be covered, so the performance of 2-detection surveys cannot be directly compared with that of 4-detection surveys. Should a 2-detection cadence be proven a viable means of linking tracklets through future testing and validation, space-based surveys could also adopt this approach. Therefore, to compare the relative performance of the LSST survey and the L1 halo-orbit infrared space-based survey in discovering and tracking NEOs, we have elected to simulate both the 2- and 4-detection cadences. This work provides the bounding cases for the expected performance of these two options and allows accurate comparisons to be made.
2.2 The Solar System Survey Simulator
The solar system survey simulator we have created to analyze the performance of LSST is essentially the same as that used to model the performance of a 0.5m thermal infrared space telescope in Mainzer et al. (2015). The technique begins by combining a frame-by-frame pointing list for the simulated survey with a population of synthetic moving objects whose positions and velocities are computed at the epoch of each frame. The brightness and on-sky velocity of each object as it appears in each frame are evaluated to determine whether or not it would have been detected in that exposure, depending on that frame’s estimated sensitivity. If sets of detections are found in a cadence that allows them to be uniquely linked to one another over a sufficiently long timespan, the object can be declared “discovered”. The survey cadence is critically important in determining which objects merely pass through the field of regard, and which objects are actually detected, discovered, and tracked. This simulation does not yet include models of background sources or other image artifacts and transient sources (such as noise, cosmic rays, defective pixels, scattered light, etc.) that can confuse or break linkages; therefore, these simulations should be regarded as a best case.
If a set of two 4-detection tracklets covering an average of 10-20 nights or more can be linked to one another, an asteroid’s orbit can be determined with sufficient accuracy to allow it to not only be declared discovered, but to allow it to be recovered at its next apparition. A set of at least three 2-detection tracklets is needed to achieve the same for the 2-detection cadence. The survey simulation in Mainzer et al. (2015) describes a method for tallying detections over a survey cadence that collects tracklets spanning $\sim$10-22 days. In this paper, we apply a similar technique to simulate the performance of the LSST project, adopting the baseline survey cadence given on the LSST Operations Simulation (OpSim) website (https://confluence.lsstcorp.org/display/SIM/Operations+Simulator+Benchmark+Surveys; Connolly et al., 2014; Jones et al., 2014). This frame-by-frame simulation is combined with our synthetic population model to predict the numbers of objects that would be detected in each frame.
To give a robust estimate of the variation in orbital elements and physical properties, our simulations include 25 synthetic populations generated randomly according to the size distribution, albedo distribution, and numbers specified in Mainzer et al. (2012a). By running many populations through the survey simulator, we can evaluate systematic uncertainties introduced by the limits of our knowledge of the NEO population. The ephemeris for each synthetic object was computed at each frame time using the SWIFT numerical integrator (Levison & Duncan, 1994), which implements the Bulirsch-Stoer integration method, on the high performance computing facilities at the Jet Propulsion Laboratory. Objects were assumed to be successfully assembled into tracklets if they were detected two or more times per night, with an on-sky velocity between 0.011 and 3.2${}^{\circ}$/day. The slow speed limit of $\sim$0.011${}^{\circ}$/day is set by the LSST average seeing of $\sim$0.5 - 1 arcsec and the minimum estimated separation between two consecutive exposures in a night of $\sim$30 minutes. The upper speed limit is determined by the need to avoid significant trailing losses in a 15 sec exposure.
We note here that these velocity limits have virtually no effect on the completeness fraction of these survey simulations as we focus on the population larger than $140$m. For these larger objects the distances at which they are observed are such that their velocities naturally fall within these limits. However, for objects smaller than $140$m, the effects of trailing can have a significant impact on the completeness fraction as these objects have to be much closer to the observatory to be detectable and thus generally have much higher velocities with respect to the observer. For the baseline cadence, tracklets were considered successfully linked if three tracklets, each containing at least two detections, were detected over the course of $\sim$12 days. This requires that a single 2-detection tracklet can be successfully linked to another 2-detection tracklet no more than 6 days later, which, as noted above, has not yet been demonstrated to be workable; we do not address this issue at present in our simulation. For the 4-detection cadence, an object was considered a discovery if two tracklets, each with at least four detections, are found within a 12 day timespan. This timespan was selected from our experience with the current surveys, where follow-up and linking have proven to be extremely difficult after 10-12 days, based on a 4-5 detection discovery tracklet (Cutri et al., 2012; Mainzer et al., 2011b; Denneau et al., 2013).
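The discovery criteria above can be sketched as a simple check on per-night detection counts. The snippet below is an illustrative simplification (the function name and data layout are ours, not the actual survey pipeline): a night with enough detections is treated as yielding one tracklet, and discovery requires enough tracklets within the stated length-of-arc.

```python
from collections import Counter

def is_discovered(detection_nights, min_dets_per_night, min_tracklets, window_days):
    """Simplified discovery check: at least `min_tracklets` nights with
    `min_dets_per_night` or more detections, all within `window_days`.

    detection_nights: iterable of integer night numbers, one per detection.
    """
    # A night with enough detections yields one tracklet.
    counts = Counter(detection_nights)
    tracklet_nights = sorted(n for n, c in counts.items() if c >= min_dets_per_night)

    # Slide a window over the tracklet nights and look for enough
    # tracklets within the allowed length-of-arc.
    for start in tracklet_nights:
        in_window = [n for n in tracklet_nights if 0 <= n - start <= window_days]
        if len(in_window) >= min_tracklets:
            return True
    return False

# Baseline (2-detection) cadence: 3 tracklets of >=2 detections within ~12 days.
nights = [1, 1, 4, 4, 9, 9, 30]
print(is_discovered(nights, min_dets_per_night=2, min_tracklets=3, window_days=12))  # True

# 4-detection cadence: 2 tracklets of >=4 detections within 12 days.
print(is_discovered(nights, min_dets_per_night=4, min_tracklets=2, window_days=12))  # False
```

A real pipeline must additionally verify that the tracklets are kinematically consistent with a single orbit; the sketch only captures the counting criteria quoted in the text.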
As noted in Mainzer et al. (2015), any survey simulation must account for the performance of the existing surveys such as CSS and Pan-STARRS, both in terms of the number of objects they have already discovered to date, and in terms of the number of objects they would be expected to discover over the course of the future survey. It should be noted that the LSST survey patterns available from the OpSim start in 1994 and run 10 years from this date. To estimate the overlap in discoveries between LSST and the historical and current surveys, it was therefore necessary to shift our historical and current survey simulations 28 years back in time or shift the LSST cadence 28 years into the future (assuming that the LSST survey will start in 2022). This allowed us to derive a known population of NEOs that is similar to the one LSST will have when it starts its survey late this decade. Both methods yielded similar completeness fractions and similar overlap fractions with the existing surveys.
2.3 The Synthetic Near-Earth Object Population
The NEO population has historically been divided into several subpopulations. Starting from the outside and going inward, we consider four separate sub-populations in this paper: 1) the Amors, objects with perihelion distance in the range $1.017<q\leq 1.3$ au; 2) the Apollos, which have orbits with semi-major axis $a\geq 1$ au and perihelion distance $q\leq 1.017$ au; 3) the Atens, which have orbits with semi-major axis $a<1$ au and aphelion distance $Q\geq 0.983$ au; and 4) the interior Earth objects (IEOs), which have aphelion distances $Q<0.983$ au. In addition, we use the term potentially hazardous asteroid (PHA) to indicate objects that are larger than $140$m (the definition is usually given as absolute magnitude $H<22$ mag rather than size) and have minimum-orbital-intersection-distance (MOID) less than $0.05$ au (Ostro & Giorgini, 2004).
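The boundaries above translate directly into a classification routine. The following sketch (the function name is ours, for illustration) derives perihelion and aphelion distances from the semi-major axis $a$ and eccentricity $e$ and applies the stated cuts:

```python
def neo_class(a, e):
    """Classify an object into the four NEO subpopulations used in the
    text, from semi-major axis a (au) and eccentricity e."""
    q = a * (1.0 - e)  # perihelion distance (au)
    Q = a * (1.0 + e)  # aphelion distance (au)
    if Q < 0.983:
        return "IEO"        # interior-Earth object: Q < 0.983 au
    if a < 1.0:
        return "Aten"       # a < 1 au, Q >= 0.983 au
    if q <= 1.017:
        return "Apollo"     # a >= 1 au, q <= 1.017 au
    if q <= 1.3:
        return "Amor"       # 1.017 < q <= 1.3 au
    return "not an NEO"     # q > 1.3 au

print(neo_class(1.5, 0.2))   # Amor   (q = 1.2 au)
print(neo_class(1.2, 0.3))   # Apollo (q = 0.84 au)
print(neo_class(0.9, 0.2))   # Aten   (Q = 1.08 au)
print(neo_class(0.6, 0.3))   # IEO    (Q = 0.78 au)
```

Note that the Amor branch is only reachable for $a\geq 1$ au, which is implied by $q>1.017$ au, so the ordering of the checks covers the four definitions without overlap.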
Mainzer et al. (2011a) determined, based on the results of the NEOWISE survey, that there are $20,500\pm 3,000$ near-Earth asteroids (NEAs; near-Earth comets were not included in the study) larger than 100 m in diameter. They looked at the subpopulations of the NEA population in Mainzer et al. (2012a) using the same data and determined that there are $1,600\pm 760$ Atens larger than 100 m in diameter. The corresponding numbers for the Apollo and Amor subpopulations are $11,200\pm 2,900$ and $7,700\pm 3,200$, respectively. This is similar to the results of Greenstreet et al. (2012), which are based on numerical simulations of the dynamical evolution of objects from the main asteroid belt source regions into NEO space. Since NEOWISE does not provide an estimate of the IEOs, we use the result of Greenstreet et al. (2012) as the basis to generate a population of interior objects that consists of $350\pm 100$ objects (which is close to the $1.6\%$ fraction given in their paper). The orbital elements for the Atens, Apollos, and Amors were generated based on the Grav et al. (2011) synthetic solar system model, which is in turn based on the Bottke et al. (2002) model.
In Mainzer et al. (2012a) each of the subpopulations was found to have a slightly different size and albedo distribution, and our synthetic populations reflect these differences. Since no measure of the size or albedo distribution of the IEOs exists, we use the distributions for the NEOWISE Aten population to generate sizes and albedos for the IEO population.
Since the LSST survey collects exposures in each of six bands, we need to model the magnitude of the synthetic NEOs at each wavelength. The absolute magnitude $H$ was found for each object using the relation
$$H=-5\log_{10}\left(\frac{D\sqrt{p_{V}}}{1329}\right),$$
(1)
where $D$ is the assumed diameter in km and $p_{V}$ is the assumed geometric visible albedo (Bowell et al., 1989). Each object’s $V$ band magnitude was computed using the IAU phase curve correction:
$$V(\alpha)=H+5\log_{10}(R\Delta)-2.5\log_{10}\left[(1-G)\Phi_{1}(\alpha)+G\Phi_{2}(\alpha)\right],$$
(2)
where $R$ is the heliocentric distance in AU, $\Delta$ is the geocentric distance in AU, $\alpha$ is the Sun-object-observer (phase) angle, $G$ is the magnitude-phase relationship slope parameter, and the $\Phi_{i}$ are given in Bowell et al. (1989). The magnitude in the $g$ band was computed from the $V$ band by $g=V+0.56(B-V)-0.12$, where $B-V=(g-r+0.23)/1.05$ (Fukugita et al., 1996). Colors were generated in the $ugriz$ system using the color distributions found in Ivezić et al. (2001), assigning either a C-type or an S-type taxonomy to each synthetic NEO. While there are of course several other taxonomic classes found in the NEO population, the vast majority of objects are consistent with the two major classes (Binzel et al., 2002; Mainzer et al., 2011a).
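Equations (1) and (2) can be implemented compactly. The sketch below uses the common simplified two-exponential form of the Bowell et al. (1989) phase functions, $\Phi_{i}(\alpha)=\exp(-A_{i}\tan^{B_{i}}(\alpha/2))$, with the standard constants; the full forms contain additional correction terms, so this is an approximation.

```python
import math

# Constants of the simplified (H, G) phase functions of Bowell et al. (1989).
A1, B1 = 3.33, 0.63
A2, B2 = 1.87, 1.22

def absolute_magnitude(diameter_km, p_v):
    """Eq. (1): absolute magnitude H from diameter (km) and visible albedo."""
    return -5.0 * math.log10(diameter_km * math.sqrt(p_v) / 1329.0)

def apparent_v(H, R, Delta, alpha_deg, G=0.15):
    """Eq. (2): apparent V magnitude at heliocentric distance R (au),
    observer distance Delta (au), and phase angle alpha (degrees)."""
    t = math.tan(math.radians(alpha_deg) / 2.0)
    phi1 = math.exp(-A1 * t**B1)
    phi2 = math.exp(-A2 * t**B2)
    return (H + 5.0 * math.log10(R * Delta)
            - 2.5 * math.log10((1.0 - G) * phi1 + G * phi2))

H = absolute_magnitude(0.14, 0.14)   # a 140 m object with 14% albedo
print(round(H, 2))                    # 22.02
print(round(apparent_v(H, 1.5, 0.52, 10.0), 2))
```

At zero phase angle the correction term vanishes and $V$ reduces to $H+5\log_{10}(R\Delta)$, which provides a quick sanity check of the implementation.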
2.4 LSST Survey Patterns
The two survey patterns used in this paper were generated by the LSST team and are available at their website. The most recent version of the baseline cadence is labeled enigma_1189. The baseline cadence executes 5 science proposals: 1) the Wide-Fast-Deep survey, which is the universal cadence that covers large fractions (about 75%) of the sky in pairs of exposures taken 15 to 60 minutes apart; 2) the Galactic Plane survey, which collects 30 visits in each of the 6 bandpasses; 3) the North Ecliptic survey, which covers the ecliptic in the universal cadence beyond the +20${}^{\circ}$ declination limit set in the Wide-Fast-Deep survey; 4) the South Pole survey, which, like the Galactic Plane survey, collects 30 visits in each bandpass; and 5) six deep drilling fields for supernova surveying. The baseline cadence starts and stops observing at 12${}^{\circ}$ twilight and uses the CTIO 4m weather log as its weather model. It uses the latest telescope model and includes scheduled downtime for maintenance; slews and filter changes together take 6.4 sec on average. The baseline cadence only uses the $u$ filter approximately 6 days per lunation. The 4-detection cadence is labeled enigma_1266. This survey cadence follows the same design and criteria as the baseline, with the only significant difference being that each pointing is observed four times per night if possible.
The two LSST cadences are represented in Figures 1 to 4. The surveys cover solar elongations from opposition (at $180^{\circ}$) to about $60^{\circ}$, but only about 11% of the observations are at solar elongations lower than $90^{\circ}$ for the baseline cadence (see Figure 1). This number drops to $\sim 7\%$ for the 4-detection cadence. Such limited coverage of these low-elongation regions will severely limit the usefulness of LSST in detecting objects in the IEO population, as these objects never rise above $90^{\circ}$ solar elongation. The $5\sigma$ limiting magnitudes given by the published simulated survey cadences in each filter are very similar for the two cadences, with mean limiting magnitudes in each filter being $23.7$, $24.7$, $24.4$, $23.7$, $22.5$ and $21.5$ (for u, g, r, i, z and y, respectively). It should be noted that though the seeing distributions are similar for the two cadences, the airmass distributions are not. The 2-detection baseline cadence spends significantly more time at higher airmasses, which is most likely due to increased time spent at lower solar elongations and the fact that more sky is covered each night. Therefore, more of the observations are made at less favorable airmasses. We note here that these $5\sigma$ limiting magnitudes are similar to those given in Ivezic et al. (2014), but with lower values in the $z$ and $y$ filters, which we assume are due to the project having more recent values for the performance and observing conditions when these filters are used. The limiting magnitudes and solar elongation coverage presented here are at odds with Myhrvold (2016); his Figure 7 shows much fainter limiting magnitudes than presented here. While Myhrvold (2016) cites the possibility and capability of LSST observing at much smaller solar elongations, this is inconsistent with the published cadences provided by the LSST project and would potentially interfere with its other science goals. 
Additionally, Myhrvold (2016) incorrectly assumes that an object need only be detected once for it to be counted as discovered, cataloged, and tracked.
2.5 Detector Gaps
LSST aims to have less than $5\%$ of the focal plane lost due to gaps, bad pixels, or dead detectors. With a pixel size of 10 microns, which equals 0.2 arc seconds of sky for the LSST telescope, we assumed gaps that were 1 mm wide, or equivalent to 100 pixels. There are 21 “rafts” in the LSST focal plane design, with each raft containing a 3x3 array of 4k by 4k pixel CCDs. This yields 14 gaps in both the horizontal and vertical direction, for a total loss due to gaps that is close to $5\%$ of the focal plane. Increasing the gaps between chips increases the probability of a detection being lost due to falling in a gap, particularly in the case of the 2-detection cadence, where loss of a single detection results in loss of the entire tracklet.
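The $\sim 5\%$ figure can be checked with the gap geometry stated above, assuming a square focal plane with 15 CCD columns per axis (5 rafts of 3 CCDs) separated by 14 gaps:

```python
# Back-of-the-envelope check of the ~5% focal-plane loss quoted above:
# 15 CCDs of 4096 pixels across each axis, separated by 14 gaps of
# 100 pixels (1 mm at 10 micron pixel pitch).
ccds_per_axis = 15
ccd_pixels = 4096
gaps_per_axis = 14
gap_pixels = 100

active = ccds_per_axis * ccd_pixels
total = active + gaps_per_axis * gap_pixels

# Fraction of the (square) focal plane falling in a gap in either axis.
loss = 1.0 - (active / total) ** 2
print(f"{loss:.1%}")  # 4.4%, consistent with the <5% requirement
```

This neglects bad pixels and dead detectors, which is why the requirement is stated as a total loss of less than $5\%$ rather than the geometric gap fraction alone.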
3 Results
We first examine the population of NEOs with $H<22$ mag, which has been the traditional way of reporting completeness in pursuit of the George E. Brown, Jr. goal. To do this, it was necessary to simulate a population of objects with diameters as small as $70$m, since high-albedo objects of that size would have $H\sim 22$ mag.
The results given in Ivezić et al. (2007) and Ivezic et al. (2014) are stated in terms of the completeness for the fraction of the population with absolute magnitudes brighter than $H=22$ mag. If we consider the population with $H<22$ mag, our simulations achieve comparable numbers. In the 4-detection cadence, our simulations yield a survey completeness of $63\%$ for the PHA population with $H<22$ mag, increasing to $67\%$ completeness for the same population in the 2-detection case. This is $8\%$ lower than the $75\%$ completeness reported in Ivezić et al. (2007) and $15\%$ less than the $82\%$ reported in Ivezic et al. (2014). The lower completeness in our simulations can be attributed to the higher fidelity of the simulations (field-by-field simulation, inclusion of gaps in the focal plane, actual accounting of the individual detections needed to form tracklets and tracks), improved knowledge of the population model over the last decade, and better understanding of the performance of the telescope itself. An example of this is the visible magnitude distribution of the detections for the discovered objects in our simulations, as seen in Figure 5. The distribution of $V$ magnitudes turns over sharply at $V\sim 23.5-23.75$. This is inconsistent with Figure 7 in Myhrvold (2016), who assumes $50\%$ limiting magnitudes $\sim 0.8$ mag fainter than our computations indicate.
However, examination of our improved population models, which now use size and albedo distributions based directly on the results from the NEOWISE survey (Mainzer et al., 2011a, 2012a), shows that $H<22$ mag is a poor proxy for the NEO population with diameters larger than $140$m. Approximately one quarter (23%) of the NEOs with diameters larger than $140$m have absolute magnitudes fainter than this, due to the observed spread in NEO albedos (see Figure 6). Integrating over the albedo distribution of the NEO population, $90\%$ integral completeness for the $140$m population is reached at $H\sim 22.8$ (E. Wright, personal communication). Setting the target as the population with $H<22$ mag allows the surveys to discover a large number of high albedo objects with sizes as small as $70$m, while a large, dark object with a diameter of $370$m and $2\%$ albedo would fall outside the population, with $H=22.02$ mag. The effect is to increase the apparent effectiveness of visible-band surveys, as many objects with diameters less than $140$m would erroneously be counted in the $H<22$ mag tally. Thus, to correctly predict survey completeness, it is necessary to use synthetic populations that properly account for the actual size and albedo distributions of the NEOs, rather than simply assuming $H=22$ mag corresponds to $D=140$m.
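The albedo dependence in these numbers follows from the standard conversion between absolute magnitude $H$, geometric albedo $p_V$ and diameter, $D(\mathrm{km}) = 1329\,p_V^{-1/2}\,10^{-H/5}$ (e.g. Bowell et al., 1989). A short sketch; the function name is ours, and $p_V=0.14$ is used only as a representative "typical" NEO albedo:

```python
import math

# Standard asteroid size-brightness relation (e.g. Bowell et al., 1989):
# D(km) = 1329 / sqrt(p_V) * 10**(-H/5), with geometric albedo p_V.
def diameter_km(H, p_V):
    return 1329.0 / math.sqrt(p_V) * 10 ** (-H / 5.0)

# At H = 22, a typical-albedo NEO (p_V ~ 0.14) is ~140 m across ...
print(round(1000 * diameter_km(22.0, 0.14)))   # → 141 (metres)
# ... while a dark object (p_V = 0.02) with H = 22.02 is ~370 m:
print(round(1000 * diameter_km(22.02, 0.02)))  # → 371 (metres)
```

The same relation reproduces the text's dark $370$m object at $H=22.02$, making concrete why a fixed $H=22$ cut is a poor proxy for a $140$m diameter cut.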
3.1 Result for the Population of Objects Larger than 140m
The simulations show that with the 2-detection cadence, LSST reaches a completeness of $63\%$ for the total NEO population larger than $140$m after 10 years, and a $62\%$ completeness for the PHA population larger than $140$m (see Table 1). As expected, LSST is most efficient in finding objects among the Amor subpopulation, due to the fact that much of its time is spent observing the opposition regions of the sky where Amors are preferentially located. Conversely, LSST is less efficient at finding objects in the Aten and IEO subpopulations, since these tend to be distributed at lower solar elongations, and the two LSST cadences studied here spend relatively little time at solar elongations less than 90${}^{\circ}$ (Fig. 1). High airmass image degradation and phase effects also play a significant role in the reduced detection of IEOs and Atens from ground based observatories. The completeness falls to $59\%$ for the NEO population with diameters larger than $140$m for the 4-detection cadence tested (see Table 2). The reduction in completeness is due to the fact that spending more exposures on each patch of sky necessitates a reduction in the amount of fresh sky covered each night. The effect of the 4-detection cadence on the PHA completeness fraction is similar, dropping from $62\%$ to $58\%$.
We have thus computed a more realistic value for the expected completeness of the LSST project in its attempt to satisfy the $140$m goal. We have used the project’s most recent simulated survey cadences, which contain the expected limiting magnitude for each field as affected by airmass, sky brightness, and weather. We point to several factors that resulted in a different estimate of the performance of LSST with regards to the NEO and PHA populations. First, we use an improved synthetic model for the NEO and PHA populations, where each object has size, albedo, and orbital elements based on the measured properties of the population from the NEOWISE survey (Mainzer et al., 2011a). This is a significant improvement over the synthetic population of 800 known PHAs used by Ivezic et al. (2014). Further, our analysis uses diameter to determine the completeness fraction, rather than the absolute magnitudes used in previous work. As pointed out above, a significant fraction of the PHAs with diameters larger than $140$m have absolute magnitudes fainter than the $22$ magnitude limit used in Ivezic et al. (2014). This is due to the fact that as determined by Mainzer et al. (2011c), $\sim 35\%$ of NEOs have low albedos. Also, Figure 5 shows the V magnitude of the detections of the objects found in the survey simulation for one of the populations used in combination with the baseline enigma_1189 cadence. It shows that the peak of the distribution is in the $V\sim 23-23.75$ mag range, with the distribution turning sharply downwards at $V\sim 23.75$ mag. In both the baseline and the 4-detection cadence, only $18\%$ of the observations of the detected synthetic objects have V magnitudes fainter than $V=23.75$ mag. This limiting
magnitude is shallower than the $V\sim 24-25$ assumed in Ivezić et al. (2007) and Ivezic et al. (2014).
These estimates of completeness assume that LSST would operate without any other surveys, past or future. However, the current NEO surveys (CSS, Pan-STARRS, NEOWISE, etc.) have already discovered more than 13,000 NEOs, with an estimated current completeness for NEOs larger than $140$m of $\sim 25\%$ (Mainzer et al., 2011a). Simulations of the current surveys estimate that this completeness will rise to $\sim 43\%$ by the start of the LSST survey, if the current surveys continue to operate unchanged. Our simulations show that the overlap between objects seen by the current surveys and LSST is significant, with the combination of LSST and the current surveys at the end of LSST operations reaching a completeness of $67\%$ for NEOs larger than $140$m (up from the $63\%$ for LSST alone) in the 4-detection cadence case. For the PHAs, the completeness of LSST and the current surveys reaches $71\%$ after the 10 year LSST survey is completed (up from the $58\%$ for LSST alone). This significant overlap is due to the fact that both the current surveys and LSST operate by observing mainly at opposition. For the baseline 2-detection cadence, the combination of LSST and the current surveys reaches $67\%$ and $73\%$ for the NEOs and PHAs larger than $140$m, respectively.
Figure 7 shows the completeness of the PHA population larger than $140$m for the 4-detection cadence. The 25 populations were run through all survey simulations and combined completeness values were derived by counting discovered objects as a function of time as the different surveys become available. Note that we choose to use the 4-detection cadence here, as this is what is used by current surveys and what is most often presented by other studies of survey performance, but using the LSST baseline 2-detection cadence only improves the completeness by a few percent. The performance of the current surveys is shown, along with the combination of the current surveys with the expected LSST. These are compared to the combined completeness of NEOCam and the current surveys, which reach $78\%$ PHA completeness after five years (the NEOCam baseline mission) and $88\%$ by 2031 (Mainzer et al., 2015). Operating both LSST and NEOCam offers the fastest means of reaching the $90\%$ goal set by the George E. Brown, Jr. Near-Earth Object section of the NASA Authorization Act of 2005 (Public Law 109-155).
4 Discussion and Conclusion
We have performed a detailed survey simulation of the LSST performance using the current LSST baseline cadence. The simulation shows that if the project is able to reliably generate tracklets using two detections per night and can link these tracklets into a track with a minimum of 3 tracklets covering more than a $\sim 6$ day length-of-arc, the survey would discover $62\%$ of the PHAs larger than $140$m in its projected 10 year survey lifetime. This completeness would be reduced to $58\%$ if the 2-detection cadence cannot be implemented, and the more traditional 4-detection cadence is instead adopted. When including the estimated performance from the current operating surveys, assuming these would continue running until the start of LSST and perhaps beyond, the completeness fraction for PHAs larger than $140$m would be $73\%$ for the baseline cadence and $71\%$ for the 4-detection cadence.
Our results differ from the estimates of Ivezić et al. (2007) and Ivezic et al. (2014), but are quite comparable to Jones et al. (2016) and Chesley & Vereš (2015, personal communication). Some reasons for the discrepancy include our choice of modeling the survey based on diameter, rather than the proxy population of objects with absolute magnitude $H<22$ mag; choice of model input populations; and the difference among cadence choices. Our simulation accounts for the fact that a sizable fraction of NEOs larger than $140$m are dark, with $H>22$ mag (Mainzer et al., 2011c, 2012b). We have shown that using this proxy population is a less than optimal approach for estimating the ability of a survey to make progress towards the George E. Brown, Jr. goal of detecting and tracking $90\%$ of the NEOs larger than $140$m in diameter.
The advantages of operating both NEOCam and LSST are many. The combination of LSST and NEOCam creates observational redundancy (which improves reliability of individual tracklets) and the ability to extend orbital arcs, allowing potential impacts to be reliably predicted much farther into the future. The surveys observe complementary regions of orbital element phase space, with NEOCam observing more interior NEOs and Atens, and LSST preferentially detecting Amors near opposition. The combination of visible and IR fluxes will produce sizes, albedos, and color information, which gives insight into objects’ probable composition (e.g. Mainzer et al., 2012b). Composition is a key uncertainty in understanding the potential damage that a given impactor could produce.
Even if LSST chooses to operate in the 2-detection cadence, which may lower its ability to link tracklets, linking its observations to objects found by NEOCam or other sources will provide an immensely powerful capability. Combining LSST observations with others will help to recover objects, secure orbits, and extend observational arcs.
5 Acknowledgements
This publication makes use of data products from NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. We gratefully acknowledge the support of the JPL High-Performance Computing Facility, which is supported by the JPL Office of the Chief Information Officer.
References
Alvarez, L. W., Alvarez, W., Asaro, F., & Michel, H. V. 1980, Science, 208, 1095
Binzel, R. P., Lupishko, D., di Martino, M., Whiteley, R. J., & Hahn, G. J. 2002, Asteroids III, 255
Bottke, W. F., Morbidelli, A., Jedicke, R., et al. 2002, Icarus, 156, 399
Bowell, E., Hapke, B., Domingue, D., et al. 1989, in Asteroids II, ed. R. P. Binzel, T. Gehrels, & M. S. Matthews, 524–556
Brown, P. G., Assink, J. D., Astiz, L., et al. 2013, Nature, 503, 238
Chesley, S. R., & Vereš, P. 2015, in AAS/Division for Planetary Sciences Meeting Abstracts, Vol. 47, 308.09
Chyba, C. F., Thomas, P. J., & Zahnle, K. J. 1993, Nature, 361, 40
Connolly, A. J., Angeli, G. Z., Chandrasekharan, S., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9150, 14
Cutri, R. M., Wright, E. L., Conrow, T., et al. 2012, Explanatory Supplement to the WISE All-Sky Data Release Products, Tech. rep.
Denneau, L., Jedicke, R., Grav, T., et al. 2013, PASP, 125, 357
Diehl, T., & Dark Energy Survey Collaboration. 2012, Physics Procedia, 37, 1332
Fukugita, M., Ichikawa, T., Gunn, J. E., et al. 1996, AJ, 111, 1748
Goldstein, D. A., D’Andrea, C. B., Fischer, J. A., et al. 2015, AJ, 150, 82
Grav, T., Jedicke, R., Denneau, L., et al. 2011, PASP, 123, 423
Greenstreet, S., Ngo, H., & Gladman, B. 2012, Icarus, 217, 355
Grieve, R. A. F. 1987, Annual Review of Earth and Planetary Sciences, 15, 245
Hammel, H. B., Beebe, R. F., Ingersoll, A. P., et al. 1995, Science, 267, 1288
Ivezić, Ž., Tabachnik, S., Rafikov, R., et al. 2001, AJ, 122, 2749
Ivezić, Ž., Tyson, J. A., Jurić, M., et al. 2007, in IAU Symposium, Vol. 236, ed. G. B. Valsecchi, D. Vokrouhlický, & A. Milani, 353–362
Ivezic, Z., Tyson, J. A., Abel, B., et al. 2014, ArXiv e-prints, arXiv:0805.2366
Jones, R. L., Jurić, M., & Ivezić, Ž. 2016, in IAU Symposium, Vol. 318, ed. S. R. Chesley, A. Morbidelli, R. Jedicke, & D. Farnocchia, 282–292
Jones, R. L., Yoachim, P., Chandrasekharan, S., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9149, 0
Kohout, T., Gritsevich, M., Grokhovsky, V. I., et al. 2014, Icarus, 228, 78
Larson, S. 2007, in IAU Symposium, Vol. 236, ed. G. B. Valsecchi, D. Vokrouhlický, & A. Milani, 323–328
Levison, H. F., & Duncan, M. J. 1994, Icarus, 108, 18
Lu, E. T., Reitsema, H., Troeltzsch, J., & Hubbard, S. 2013, New Space, 1, 42
Mainzer, A., Grav, T., Bauer, J., et al. 2011a, ApJ, 743, 156
Mainzer, A., Bauer, J., Grav, T., et al. 2011b, ApJ, 731, 53
Mainzer, A., Grav, T., Masiero, J., et al. 2011c, ApJ, 736, 100
Mainzer, A., Grav, T., Masiero, J., et al. 2012a, ApJ, 752, 110
Mainzer, A., Masiero, J., Grav, T., et al. 2012b, ApJ, 745, 7
Mainzer, A., Grav, T., Bauer, J., et al. 2015, AJ, 149, 172
McMurtry, C., Lee, D., Beletic, J., et al. 2013, Optical Engineering, 52, 091804
Myhrvold, N. 2016, PASP, 128, 045004
Ostro, S. J., & Giorgini, J. D. 2004, in Mitigation of Hazardous Comets and Asteroids, ed. M. J. S. Belton, T. H. Morgan, N. H. Samarasinha, & D. K. Yeomans, 38
Shapiro, I. I., A’Hearn, M., Vilas, F., et al. 2010, Defending Planet Earth: Near-Earth-Object Surveys and Hazard Mitigation Strategies (National Research Council of The National Academies, Division of Engineering and Physical Sciences)
Stokes, G. H., Yeomans, D. K., Bottke, W. F., et al. 2003, Study to Determine the Feasibility of Extending the Search for Near-Earth Objects to Smaller Limiting Diameters, Report of the Near-Earth Object Science Definition Team, Tech. rep.
Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868
Zahnle, K., & Mac Low, M.-M. 1994, Icarus, 108, 1
Pedestrian Trajectory Prediction using Context-Augmented Transformer Networks
Khaled Saleh
Faculty of Engineering and IT
University of Technology Sydney
Sydney, Australia
Email: khaled.aboufarw@uts.edu.au
Abstract
Forecasting the trajectories of pedestrians in shared urban traffic environments is still considered one of the challenging problems facing the development of autonomous vehicles (AVs). In the literature, this problem is often tackled using recurrent neural networks (RNNs). Despite the powerful capabilities of RNNs in capturing the temporal dependency in pedestrians’ motion trajectories, they have been argued to struggle with longer sequential data. Thus, in this work we introduce a framework based on transformer networks, which were recently shown to be more efficient than RNNs and to outperform them in many sequence-based tasks. We rely on a fusion of past positional information, agent-interaction information and scene physical-semantics information as input to our framework in order to provide robust trajectory predictions for pedestrians. We evaluate our framework on two real-life datasets of pedestrians in shared urban traffic environments, and it outperforms the compared baseline approaches over both short-term and long-term prediction horizons.
I Introduction
The interactions between pedestrians and highly/fully automated vehicles (AVs) require a mutually shared understanding between the two parties [1]. On one hand, the vehicles need a deep understanding and anticipation of the pedestrians’ actions, especially in shared urban traffic environments. On the other hand, a suitable channel is also required so that the vehicles can convey their decisions to the pedestrians, similar to what typically happens between human-driven vehicles and pedestrians today. The problem of trajectory prediction for pedestrians has therefore attracted the attention of the research community over the past few years. In the literature, the formulation of the pedestrian trajectory prediction problem is generally unified (i.e., given a short-time observation of the pedestrians’ motion trajectories, the task is to forecast the rest of their future trajectory a few seconds ahead). The setup and context of the problem, however, vary with the target domain application. Target domain applications can be categorised into two main categories based on the nature of the space that the pedestrians interact with. The first is crowded spaces, where a large number of pedestrians are in close proximity to each other and the interactions happen only among themselves (typically on side-walks) [2, 3, 4, 5]. The second is the urban traffic shared space, where the density of pedestrians in the scene varies from medium to low and the interactions are more diversified (i.e., among each other, with cyclists and with vehicles) [6, 7, 8, 9]. The focus of this work is on the second domain application, but we also discuss some approaches from the first category which overlap with our proposed methodology. In the literature, one of the most successful families of approaches for tackling the pedestrian trajectory prediction problem (across the two aforementioned domain applications) is the data-driven approaches.
The reason is the capability of such approaches to directly learn complex behaviours of pedestrians (especially in crowded spaces) in different scenarios from large datasets. One key component that exists in almost all data-driven approaches is the recurrent neural network (specifically the Long Short-Term Memory (LSTM) architecture). LSTMs can implicitly model the inherent dependency between consecutive observations of pedestrians’ trajectories in an end-to-end fashion. That being said, LSTMs have recently been argued to be inefficient when it comes to modelling longer sequential data [10]. Additionally, LSTMs were also shown to be more sensitive to missing observations, which is typically the case with data coming from real physical sensors [11].
Thus, transformer networks [12] were recently introduced and quickly became the preferred model for sequential modelling tasks such as natural language translation and summarisation [13]. The reason is the multi-head attention mechanism of these networks, which allows them to process the sequential input data in parallel without any constraints on its order, as is the case with LSTMs. Additionally, the attention mechanism allows the model to attend to different parts of the input sequence at each time-step, which in return gives more in-depth reasoning about the relevancy and context of the input sequential data. Transformer networks were recently also explored for the pedestrian trajectory prediction problem in crowded spaces [11]. The model achieved state-of-the-art (SOTA) results on the TrajNet benchmark [14]. It was able to achieve such results by capitalising only on positional information about individual pedestrians in the scene. While positional information may be enough for the transformer model in crowded spaces, we argue that in urban traffic shared spaces this might not be the case. The rationale is that in crowded spaces there is only one type of interaction (between pedestrians and each other). In urban traffic shared spaces, on the other hand, there are two levels of interaction: the first is the interaction between pedestrians and the other agents sharing the space with them, such as vehicles and cyclists. The second level of interaction happens between the pedestrians and the physical semantics of the scene, such as trees, buildings, road, side-walk, vegetation, etc. While such semantic information might not be very useful in crowded spaces (since interactions take place only on side-walks), in urban traffic shared spaces it can influence the decisions and choices of pedestrians while navigating through them.
For example, pedestrians in urban traffic environments, might cross the road or walk on grass instead of side-walks.
Thus, in this work we propose an approach based on transformer networks, but rather than relying only on positional information as an input, we augment our proposed transformer network model with additional information about the context of the scene, namely interactions with other agents and the semantics of the scene. The rest of the paper is organised as follows. In Section II, a brief summary of the related work from the literature is given. In Section III, the proposed methodology is presented and discussed. Section IV includes the experimental results of our proposed approach on two real datasets. Finally, Section V concludes the paper.
II Related Work
The problem of pedestrian trajectory prediction is often tackled using two broad classes of approaches, namely model-based approaches and data-driven approaches. In model-based approaches, as the name implies, an explicit model of the pedestrians’ actions (in terms of movements over time) is assumed to exist in the first place. Commonly, model-based approaches are also assumed to hold the Markovian assumption (where the future state/position of pedestrians is conditioned only on their current state/position). In [15], a number of techniques based on linear dynamical motion models, such as the Kalman filter and Gaussian processes, were introduced to predict future trajectories of pedestrians over short-term horizons (i.e., whether pedestrians will stop to cross the road or continue walking). In order to overcome the limitations of linear dynamical models, Karasev et al. [16] proposed another model-based approach, modelling pedestrians’ actions as a jump-Markov process for long-term prediction of their trajectories. They utilised a Rao-Blackwellized filter to estimate the posterior over a switching goal state that pedestrians are heading towards, which could change over time. They then cast the problem as a Markov decision process (MDP), which they solve using an estimated reward function based on a semantic map of the scene. More recently, Anderson et al. [17] presented a model-based approach that models the interaction between pedestrians and oncoming vehicles they might encounter in urban traffic shared spaces. Given $n$ vehicles in the scene, they model the influence of each vehicle on each pedestrian’s decision whether he/she will yield to it using an attention mechanism. Their attention mechanism is hand-crafted using an assumed known position and velocity for each vehicle and pedestrian, as well as a pre-defined threshold distance to discard any vehicle beyond this distance.
Given this interaction modelling, they estimate future trajectories of pedestrians using a constant velocity Kalman filter.
On the other hand, in data-driven approaches there is no explicit modelling of the pedestrians’ behaviours. They instead rely on datasets of pedestrian trajectories in a number of scenarios and try to learn the pedestrians’ behaviours directly from the data. One of the relatively recent yet successful data-driven approaches is the Social-LSTM model introduced in [3]. The Social-LSTM model targeted the prediction of pedestrian trajectories in crowded spaces by exploiting the interactions between pedestrians on side-walks. It utilised LSTM networks to automatically model the sequential positional information of pedestrians. Moreover, the model included a social pooling layer to model potential social interactions among pedestrians within a pre-defined grid around each pedestrian. The Social-GAN model [4] was introduced to overcome the computational complexity of the Social-LSTM’s social pooling layer, which has to take into account the influence of the interactions among all pedestrians in the scene. The Social-GAN model addressed this limitation by relying on two main components: first, Generative Adversarial Networks (GANs), which are used to capture and generate more socially-acceptable trajectories; second, a global pooling mechanism rather than the average layer used in the Social-LSTM model. In [11], another data-driven approach was proposed, but instead of utilising the popular LSTM network it relied on the transformer networks that have been achieving SOTA results in a number of natural language processing tasks. The proposed vanilla transformer network model takes as input only positional information about individual pedestrians, without taking into account any social/contextual interactions. The model achieved SOTA results on the TrajNet benchmark [14], a large collection of pedestrian trajectory datasets in versatile crowded spaces.
III Proposed Method
In this section, we first present our formulation of the trajectory prediction problem. We then describe the different pieces of contextual information we take into account as input to our proposed framework (shown in Figure 1). Finally, we describe the architecture of our proposed context-augmented transformer model and its implementation details.
III-A Problem Formulation
We cast the pedestrians’ trajectory prediction problem as a sequence prediction problem: given past short-term observations of the pedestrians’ state $X_{t-\delta:t}$ up to the current time-step $t$, the task is to predict the true positions of their future trajectory $\kappa$ steps ahead. The state of the pedestrians at each time-step is approximated using multimodal contextual information, which in our case is as follows:
III-A1 Positional Information
Given the bird’s eye view of the scene, we obtain the sequence of 2D positions $(x,y)$ in the ground plane of each pedestrian in the scene. We then calculate the offsets between every two consecutive positions and use them as the first dimension of the state representation input to our context-augmented transformer model. The reason for using offset positions instead of absolute positions is to make the trained model agnostic to the dimensions of different scene views. Additionally, the position offsets can be viewed as an approximation of the pedestrians’ velocities over time, which could expose patterns in the pedestrians’ movements that can be captured by our transformer model.
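The offset computation above amounts to a first difference along the track. A minimal illustration on a hypothetical four-frame track (the coordinates are made-up example data):

```python
import numpy as np

# Hypothetical four-frame ground-plane track of one pedestrian (metres);
# the coordinates are made-up illustration data.
track = np.array([[1.0, 2.0], [1.2, 2.1], [1.5, 2.1], [1.9, 2.3]])

# Offsets between consecutive positions (a per-step velocity approximation);
# these, rather than the absolute coordinates, feed the positional channel.
offsets = np.diff(track, axis=0)
```

A track of $T$ frames yields $T-1$ offset vectors, so the offset sequence is one step shorter than the raw position sequence.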
III-A2 Agent Interactions Information
Similar to social interaction models [3, 4], we also include interaction information as part of our pedestrians’ state representation. However, rather than having only interactions between pedestrians and each other, we also accommodate interactions with other agents, such as moving vehicles and cyclists, which are commonly found in urban traffic shared spaces. The interaction information is encoded using a polar occupancy grid map around each pedestrian in the scene, whose centroid is the pedestrian’s 2D position. Encoding agent interactions as a polar occupancy grid map has been commonly applied in modelling pedestrian interactions in crowded spaces [7, 18]. Each cell within the polar grid is parametrised by two parameters, namely the orientation angle $\theta$ and the radius $r$. Here, $r$ represents the distance between an agent in the scene and the ego-pedestrian position (i.e., the centroid of the polar grid), while $\theta$ describes the orientation angle of that agent relative to the ego-pedestrian position. At each time-step $t$ of the ego-pedestrian trajectory, every agent whose Euclidean distance to the ego-pedestrian is within a certain threshold $th$ is added to the corresponding cell $(r,\theta)$ of the polar grid interaction map of the ego-pedestrian.
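The binning step just described can be sketched as follows. The grid resolution (4 radial by 8 angular cells), the threshold value and the function name are our own illustrative choices, not values from the paper:

```python
import numpy as np

def polar_occupancy(ego, agents, n_r=4, n_theta=8, th=4.0):
    """Sketch of the polar interaction grid: agents within Euclidean
    distance `th` of the ego pedestrian are binned by (radius, angle)
    relative to the ego position. Grid sizes and `th` are illustrative."""
    grid = np.zeros((n_r, n_theta))
    for a in agents:
        dx, dy = a[0] - ego[0], a[1] - ego[1]
        r = np.hypot(dx, dy)
        if r >= th:              # discard agents beyond the threshold
            continue
        theta = np.arctan2(dy, dx) % (2 * np.pi)
        ri = min(int(r / th * n_r), n_r - 1)
        ti = min(int(theta / (2 * np.pi) * n_theta), n_theta - 1)
        grid[ri, ti] += 1.0      # occupancy count in cell (r, theta)
    return grid

# One nearby agent ahead, one to the side, one far away (ignored).
grid = polar_occupancy(ego=(0.0, 0.0), agents=[(1.0, 0.0), (0.0, 3.0), (10.0, 0.0)])
```

One such grid is built per ego-pedestrian per time-step, and its flattened cells form the interaction channel of the state representation.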
III-A3 Scene Semantics Information
The last information we take into account as part of the pedestrians’ state representation is the scene semantic information. Given the bird’s eye view RGB image of the scene (as shown in Figure 2), we obtain the label map of each pixel that belongs to any of the following five classes: 1) road, 2) side-walk, 3) zebra-crossing, 4) vegetation/grass, 5) parked vehicle. We focus on these five classes since they are considered the most important ones that could influence the decisions/actions of pedestrians in the scene. For example, if a pedestrian moving on the road encounters a parked car, s/he will follow a trajectory that avoids colliding with it. The same happens if there is a zebra-crossing in the road: pedestrians will most likely cross through it if they want to reach the other side of the road. We accommodate the scene semantic information by aggregating the $k$-nearest-neighbour pixel-wise semantics within a certain threshold distance (in pixels) at each 2D position of the ego-pedestrian’s trajectory.
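This aggregation step can be sketched as below. For simplicity the sketch histograms labels in a square pixel neighbourhood rather than performing an exact $k$-nearest-neighbour query, and the class ids, map contents and radius are illustrative assumptions rather than the paper's values:

```python
import numpy as np

# Assumed class ids for the five semantic classes named in the text.
CLASSES = ["road", "side-walk", "zebra-crossing", "vegetation", "parked-vehicle"]

def local_semantics(label_map, pos, radius=2):
    """Normalised histogram of semantic labels around one trajectory point
    in the bird's-eye-view label map (square-neighbourhood approximation)."""
    y, x = pos
    h, w = label_map.shape
    patch = label_map[max(0, y - radius):min(h, y + radius + 1),
                      max(0, x - radius):min(w, x + radius + 1)]
    counts = np.bincount(patch.ravel(), minlength=len(CLASSES)).astype(float)
    return counts / counts.sum()

# Toy label map: left half "road" (0), right half "side-walk" (1).
label_map = np.zeros((10, 10), dtype=int)
label_map[:, 5:] = 1
feat = local_semantics(label_map, pos=(5, 5))  # pedestrian at the boundary
```

The resulting per-point class histogram is what gets appended to the pedestrian's state vector alongside the positional and interaction channels.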
III-B Context-Augmented Transformer Model
Given the advantages of transformer networks over RNN/LSTM networks mentioned previously, we propose a transformer model similar to the one proposed by Giuliari et al. in [11], which achieved SOTA results on the leading benchmark for pedestrian trajectory prediction in crowded spaces, TrajNet [14]. Unlike their model, however, instead of using only positional information we utilise the full contextual information described in the previous section; hence we refer to our model as the context-augmented transformer model. We argue that including such contextual information besides the positional information helps boost the performance of our proposed transformer model, especially in urban traffic shared spaces. The overall architecture of the transformer model follows the same encoder/decoder paradigm that commonly exists in LSTM-based approaches. However, it is more efficient, as it does not contain the recurrence loops that exist in LSTM networks. The main building blocks of our context-augmented transformer model are the same as those of the original transformer model first introduced in [12], namely embedding layers, positional encoding layers, the self-attention mechanism and fully-connected feed-forward layers. The embedding layers sit at the start of both the encoder and the decoder stages, as shown in Figure 3. They are responsible for embedding both our observed (source) contextual sequence information and the to-be-predicted (target) pedestrian’s trajectory into a higher-dimensional space $d_{model}$. The embedding layers are essentially learnable linear transformations with weight matrices. Like the embedding layers, the positional encoding layers are also found at the start of the encoder/decoder stages. Positional encoding layers play a crucial role given that the transformer network model does not contain any recurrences like LSTMs.
Thus, positional encoding layers give the transformer model a notion of order in the input/output sequential data (i.e. they time-stamp it). There are a number of ways to define positional encoding; in our model we follow the formulation introduced in [12], where the position encoding vector $PE$ is defined using a wide spectrum of sine/cosine frequencies as follows:
$$\displaystyle PE_{(p,2k)}=\sin(p/10000^{2k/d_{model}})$$
$$\displaystyle PE_{(p,2k+1)}=\cos(p/10000^{2k/d_{model}})$$
where $p$ represents the position and $k$ the dimension. With this formulation, each dimension $k$ of the $PE$ vector corresponds to a sinusoid whose wavelength ranges from $2\pi$ to $10000\cdot 2\pi$; this allows the model to be mindful of the order of the sequential data through unique relative positions. The dimension of the $PE$ vector equals the embedding dimension $d_{model}$, since the two are summed together as shown in Figure 3.
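The formulation above can be written compactly in code. This is a direct transcription of the standard sinusoidal encoding of [12], with the sequence length and $d_{model}$ left as parameters:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding PE[p, 2k] = sin, PE[p, 2k+1] = cos."""
    p = np.arange(seq_len)[:, None]              # positions 0..seq_len-1
    k = np.arange(d_model // 2)[None, :]         # dimension index
    angle = p / np.power(10000.0, 2 * k / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)                  # even dimensions
    pe[:, 1::2] = np.cos(angle)                  # odd dimensions
    return pe
```

The resulting matrix is added element-wise to the embedded input sequence before it enters the encoder (and likewise on the decoder side).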
Overall, the encoder and the decoder of our context-augmented transformer model each contain 6 layers. Internally, each layer is composed of a multi-head self-attention sub-layer and a feed-forward fully-connected sub-layer, and each sub-layer is wrapped in a residual connection followed by a normalisation operation. Multi-head self-attention is built from scaled dot-product attention, which maps so-called ’query’ vectors to (key, value) vector pairs. The query and key vectors have dimension $d_{k}$, while the value vectors have dimension $d_{v}$. The attention operation takes the dot-product between the query and key vectors, divides it by the square root of $d_{k}$, and passes the result through a softmax function to obtain the weights applied to the values. Since the scaled dot-product attention operation is performed many times, the queries, keys and values are stacked into matrices $Q$, $K$ and $V$ respectively. The scaled dot-product attention is then computed as:
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V$$
(1)
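Equation 1 translates into a few lines of code. This sketch implements single-head scaled dot-product attention exactly as written; a numerically stable softmax is used in place of a naive one:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Equation (1): softmax(Q K^T / sqrt(d_k)) V, for 2-D Q, K, V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V
```

Multi-head attention runs this operation in parallel on learned linear projections of $Q$, $K$ and $V$ and concatenates the outputs.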
At the encoder stage, the embedded input is encoded into the keys and values matrices of Equation 1. These are passed to the decoder stage, which at each prediction step compares its own new queries matrix against the encoder’s keys and values matrices as well as its preceding decoded predictions. The decoder auto-regressively repeats this procedure until all positions of the predicted trajectory have been produced.
IV Experiments and Results
In this section, we will first present the datasets we utilised for training and evaluating the performance of our proposed approach. Then, the details of the setup of our experiment will be outlined. Finally, the quantitative and qualitative results of our proposed approach on real-life datasets will be evaluated and discussed.
IV-A Datasets
The number of publicly available datasets of pedestrian trajectories in urban traffic shared spaces (with interacting agents such as vehicles, cyclists, etc.) is rather limited. Only recently, in 2019, did two promising datasets become available, namely the DUT [19] and inD [20] datasets. The DUT dataset contains two unique scenes of urban traffic shared spaces in which pedestrians and moving vehicles interact. It provides more than 1700 pedestrian trajectories, captured by a drone with a down-facing camera hovering over a university campus. The videos were captured at 23.98 frames per second (FPS) with a spatial resolution of 1920$\times$1080. The dataset was annotated semi-automatically using a visual object tracker to extract pedestrian and vehicle trajectories, with manual refinement where needed. The annotation contains the 2D position and velocity of pedestrians and vehicles, both in pixels and in meters. The inD dataset, on the other hand, contains many more trajectories, roughly 11,500, of different road users such as pedestrians, vehicles and bicyclists, and was likewise captured by a hovering drone with a down-facing camera. It comprises 10 hours of recordings covering four intersections. The recorded videos have a resolution of 4096$\times$2160 and were captured at 25 FPS. Like DUT, the inD dataset is annotated with 2D positions and velocities for pedestrians and vehicles, and unlike DUT it also provides annotations for the bicyclists in the scene.
IV-B Implementation Details
The first step in our experiments is to pre-process the datasets to obtain the required multimodal input information discussed in Section III-A. Since the two datasets have different frame rates, we downsampled both to 10 FPS. Following the formulation in Section III-A, we slide a window of length $\delta+\kappa$ over each pedestrian trajectory from the two datasets. To conform with other approaches in the literature, we chose the observed trajectory length $\delta$ to be 30 (i.e. 3 secs) and the prediction horizon $\kappa$ to be 50 (i.e. 5 secs ahead). For the positional information, we directly utilised the annotated 2D positions provided with each dataset. Since we feed our proposed transformer model offset positions rather than absolute ones, after the first observed 2D position we subtracted each position from its predecessor for every pedestrian. For the agent interaction information, we set the threshold $th$ on the Euclidean distance between the ego-pedestrian and all other agents in the scene to 64 pixels for the polar occupancy grid. For the scene semantic information, we manually annotated all 60 BEV images of the scenes from the two datasets (28 for DUT and 32 for inD); the five labelled classes are shown in Figure 2.
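The sliding-window and offset-conversion steps described above can be sketched directly. The window sizes match the paper’s choices ($\delta=30$, $\kappa=50$ at 10 FPS); the function itself is our minimal reconstruction:

```python
import numpy as np

def make_samples(track, delta=30, kappa=50):
    """Slide a window of length delta+kappa over one pedestrian's
    trajectory (an (N, 2) array of absolute 2D positions) and convert
    the observed part to offset positions, as described in the text."""
    samples = []
    for s in range(len(track) - (delta + kappa) + 1):
        window = track[s:s + delta + kappa]
        obs, fut = window[:delta], window[delta:]
        offsets = np.diff(obs, axis=0)   # each position minus its predecessor
        samples.append((offsets, fut))
    return samples
```

Each sample pairs $\delta-1$ observed offsets with the $\kappa$ future ground-truth positions used as the training target.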
Given this pre-processed multimodal contextual input, we trained our context-augmented transformer model. Since we formulate pedestrian trajectory prediction as a regression problem, we chose the $L2$ loss as the training objective. We trained our model for 250 epochs using the Adam optimiser. For the model hyper-parameters, we chose $d_{model}=512$ and used 8 self-attention heads in both the encoder and decoder stages.
IV-C Performance Evaluation
We relied on the DUT and inD datasets described above to evaluate the performance of our proposed framework, and compared it against a number of baseline approaches from the literature. The training/testing strategy for our model and all baselines follows [17]: each model is trained on one dataset and tested on the other, unseen dataset. The results of our proposed model against the baselines are shown in Table I. We compare against three baselines (two data-driven approaches and one model-based approach). The first data-driven baseline is the Social-GAN model [4], one of the SOTA techniques for pedestrian trajectory prediction in crowded spaces. The second data-driven baseline is the transformer model introduced in [11], similar to the one we propose but taking only positional information into account; at the time of writing it was ranked first on the TrajNet challenge benchmark, and we refer to it as the ‘Vanilla-TF’. The final baseline is the model-based approach introduced in [17], called Off the Sidewalk Predictions (OSP) by its designers. OSP relies on a Kalman filter to predict future pedestrian trajectories and accounts for interactions between pedestrians and vehicles using a hard-attention mechanism.
We use two evaluation metrics to quantitatively assess the performance of our proposed model and all baselines: Average Displacement Error (ADE) and Root Mean Squared Error (RMSE) [21]. We chose these two because they are the most commonly used metrics for evaluating pedestrian trajectories, especially in urban traffic shared spaces [3, 4, 22, 23]. ADE measures the expected Euclidean distance between the predicted and ground-truth positions at each time step of the trajectory, while RMSE computes the square root of the mean squared error between the predicted and ground-truth positions over the trajectory.
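Both metrics are simple to state in code. The definitions below follow the descriptions in the text, operating on $(T, 2)$ arrays of predicted and ground-truth positions:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean per-step Euclidean distance."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def rmse(pred, gt):
    """Root Mean Squared Error: sqrt of the mean squared per-step
    Euclidean error."""
    return np.sqrt((np.linalg.norm(pred - gt, axis=-1) ** 2).mean())
```

For a constant per-step error, the two metrics coincide; RMSE penalises occasional large deviations more heavily than ADE.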
As Table I shows, the transformer-based models achieve results on our urban traffic shared-space datasets as robust as those reported on crowded-space datasets. More importantly, our proposed context-augmented transformer model ‘Context-TF’ outperforms all the baseline models in terms of ADE/RMSE scores on both the DUT and inD datasets. These scores support our claim that providing the transformer model with additional contextual information boosts its overall performance. Furthermore, Table I also reports the performance of our model and all baselines over short-term prediction horizons of 1, 2, 3 and 4 seconds. Our context-augmented transformer model continues to provide robust results over these horizons; the only exception is on the DUT dataset, where the Vanilla-TF achieves lower ADE/RMSE scores over the shortest horizons (1 and 2 seconds). We believe that at such short prediction horizons there is not enough discriminative contextual information to yield more accurate predictions. Moreover, the trajectories in the DUT dataset were recorded on a university campus, where pedestrians may not follow the common-sense rules of urban traffic environments (such as the general preference for walking on the side-walk). As can also be noticed, the model-based approach OSP, since it relies on a Kalman filter for its predictions, provides competitive results over the shorter prediction horizons. The Social-GAN model, on the other hand, fails to achieve competitive scores against our model and the other baselines. We attribute this to the fact that the quantity and density of pedestrian-pedestrian interactions in our urban shared traffic spaces do not match those found in crowded spaces.
To further assess the performance of our context-augmented transformer model, we qualitatively visualise some of its predictions on two scenes from the DUT and inD datasets in Figure 4. As Figure 4 shows, our model gives quite accurate predictions (in blue) that mimic the overall trend of the ground-truth trajectory (in green). Additionally, in the inD predictions (second row), it can be noticed that our model captures the non-linearity of the pedestrian trajectories and respects the contextual information of the scene. For instance, in the second-row middle figure, the predicted trajectory follows the ground-truth trajectory and avoids the static obstacle located on the side-walk.
V Conclusion
In this work, we have introduced a novel context-augmented transformer network model for pedestrian trajectory prediction in urban shared traffic environments. Besides pedestrians’ past trajectories, the proposed model takes into account the contextual information around them in the scene (i.e. interactions with other agents and the scene’s semantic information). Including this information allowed the model to achieve robust results on real-life datasets of pedestrian trajectories in urban shared traffic environments. Moreover, the proposed model outperformed the compared baseline models from the literature by a large margin over both short- and long-term prediction horizons.
References
[1]
K. Saleh, M. Hossny, and S. Nahavandi, “Towards trusted autonomous vehicles
from vulnerable road users perspective,” in 2017 Annual IEEE
International Systems Conference (SysCon). IEEE, 2017, pp. 1–7.
[2]
D. Helbing and P. Molnar, “Social force model for pedestrian dynamics,”
Physical review E, vol. 51, no. 5, p. 4282, 1995.
[3]
A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese,
“Social lstm: Human trajectory prediction in crowded spaces,” in
Proceedings of the IEEE conference on computer vision and pattern
recognition, 2016, pp. 961–971.
[4]
A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi, “Social gan:
Socially acceptable trajectories with generative adversarial networks,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2018, pp. 2255–2264.
[5]
J. Amirian, J.-B. Hayet, and J. Pettré, “Social ways: Learning multi-modal
distributions of pedestrian trajectories with gans,” in Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition Workshops,
2019, pp. 0–0.
[6]
K. Saleh, M. Hossny, and S. Nahavandi, “Contextual recurrent predictive
model for long-term intent prediction of vulnerable road users,” IEEE
Transactions on Intelligent Transportation Systems, vol. 21, no. 8, pp.
3398–3408, 2020.
[7]
N. Lee, W. Choi, P. Vernaza, C. B. Choy, P. H. Torr, and M. Chandraker,
“Desire: Distant future prediction in dynamic scenes with interacting
agents,” in Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2017, pp. 336–345.
[8]
J. F. P. Kooij, N. Schneider, F. Flohr, and D. M. Gavrila, “Context-based
pedestrian path prediction,” in European Conference on Computer
Vision. Springer, 2014, pp. 618–633.
[9]
K. Saleh, M. Hossny, and S. Nahavandi, “Cyclist trajectory prediction using
bidirectional recurrent neural networks,” in Australasian Joint
Conference on Artificial Intelligence. Springer, 2018, pp. 284–295.
[10]
S. Bai, J. Z. Kolter, and V. Koltun, “An empirical evaluation of generic
convolutional and recurrent networks for sequence modeling,” arXiv
preprint arXiv:1803.01271, 2018.
[11]
F. Giuliari, I. Hasan, M. Cristani, and F. Galasso, “Transformer networks for
trajectory forecasting,” arXiv preprint arXiv:2003.08111, 2020.
[12]
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in
Advances in neural information processing systems, 2017, pp.
5998–6008.
[13]
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal,
A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language
models are few-shot learners,” arXiv preprint arXiv:2005.14165, 2020.
[14]
S. Becker, R. Hug, W. Hübner, and M. Arens, “An evaluation of trajectory
prediction approaches and notes on the trajnet benchmark,” arXiv
preprint arXiv:1805.07663, 2018.
[15]
C. G. Keller and D. M. Gavrila, “Will the pedestrian cross? a study on
pedestrian path prediction,” IEEE Transactions on Intelligent
Transportation Systems, vol. 15, no. 2, pp. 494–506, 2013.
[16]
V. Karasev, A. Ayvaci, B. Heisele, and S. Soatto, “Intent-aware long-term
prediction of pedestrian motion,” in 2016 IEEE International
Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 2543–2549.
[17]
C. Anderson, R. Vasudevan, and M. Johnson-Roberson, “Off the beaten sidewalk:
Pedestrian prediction in shared spaces for autonomous vehicles,” arXiv
preprint arXiv:2006.00962, 2020.
[18]
M. Pfeiffer, G. Paolo, H. Sommer, J. Nieto, R. Siegwart, and C. Cadena, “A
data-driven model for interaction-aware pedestrian motion prediction in
object cluttered environments,” in 2018 IEEE International Conference
on Robotics and Automation (ICRA). IEEE, 2018, pp. 1–8.
[19]
D. Yang, L. Li, K. Redmill, and Ü. Özgüner, “Top-view
trajectories: A pedestrian dataset of vehicle-crowd interaction from
controlled experiments and crowded campus,” in 2019 IEEE Intelligent
Vehicles Symposium (IV). IEEE, 2019,
pp. 899–904.
[20]
J. Bock, R. Krajewski, T. Moers, S. Runde, L. Vater, and L. Eckstein, “The ind
dataset: A drone dataset of naturalistic road user trajectories at german
intersections,” 2019.
[21]
B. Ivanovic and M. Pavone, “The trajectron: Probabilistic multi-agent
trajectory modeling with dynamic spatiotemporal graphs,” in
Proceedings of the IEEE International Conference on Computer Vision,
2019, pp. 2375–2384.
[22]
R. Chandra, U. Bhattacharya, A. Bera, and D. Manocha, “Traphic: Trajectory
prediction in dense and heterogeneous traffic using weighted interactions,”
in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2019, pp. 8483–8492.
[23]
K. Saleh, M. Hossny, and S. Nahavandi, “Contextual recurrent predictive model
for long-term intent prediction of vulnerable road users,” IEEE
Transactions on Intelligent Transportation Systems, 2019. |
A multi-node room-temperature quantum network
Mehdi Namazi
Stony Brook, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
MN, MF, and AS contributed equally to this work.
Mael Flament
Stony Brook, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Alessia Scriminich
Department of Information Engineering, University of Padova, Via Gradenigo 6b, 35131 Padova, Italy
Sonali Gera
Stony Brook, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Steven Sagona-Stophel
Stony Brook, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Giuseppe Vallone
Department of Information Engineering, University of Padova, Via Gradenigo 6b, 35131 Padova, Italy
Paolo Villoresi
Department of Information Engineering, University of Padova, Via Gradenigo 6b, 35131 Padova, Italy
Eden Figueroa
Stony Brook, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Abstract
We present the first multi-node room-temperature memory-assisted quantum communication network using polarization qubits. Our quantum network combines several independent quantum nodes in an elementary configuration: (i) two independent polarization qubit generators working at rubidium transitions, (ii) two ultra-low noise room-temperature quantum memories, and (iii) a Bell-state qubit decoder and reading station. After storage and retrieval in the two table-top quantum memories, we measure a high-visibility Hong-Ou-Mandel interference of $V=46.8\%\pm 3.4\%$ between the outputs. We envision this realization to become the backbone of future memory-assisted cryptographic and entanglement distribution networks.
I Introduction: Room-Temperature Quantum Networks
A quantum network, through the implementation of protocols such as Measurement-Device-Independent Quantum Key Distribution (MDI-QKD) CurtisLo2002 and entanglement distribution using quantum repeaters Lloyd2001 , promises significant technological and societal impact Simon2017 . Over the last few decades, the experimental development of such networks has been held back by the limited capabilities of the individual components composing them. In particular, the requirement of high-fidelity quantum storage is a stringent condition for scalability Kimble2008 ; Northup2014 . Indeed, each individual node must operate with high efficiency, high fidelity, and minimal losses Razavi2009 , and the inter-connectivity between these nodes must be engineered as efficiently as possible to scale up the network Muralidharan2016 . Given recent advances, we have arrived at a critical point where several quantum devices can be interconnected to bring about the first generation of quantum networks. These first prototypes have demonstrated elementary quantum functionality, such as entanglement distribution over several kilometers Hensen2016 ; Valivarthi2016 ; Sun2016 and quantum-state transfer and entanglement generation between light-matter quantum nodes using cold atoms, single atoms trapped in cavities Ritter2012 , NV centers in diamond Kalb2017 ; Humphreys2018 and superconducting Josephson junctions Kurpiers2018 . Despite these remarkable successes, the technological overhead of these realizations, in terms of resources, cryogenic cooling, vacuum equipment, and laser cooling systems, prevents the realization of larger quantum networks.
Room-temperature quantum technology offers a solution to this scaling problem. Further progress in the design and characterization of these devices to the level of practical implementations would have a tremendous impact on the field Novikova2012 . Owing to innovative techniques, recent studies have demonstrated the potential of room-temperature quantum operation. Noiseless room-temperature quantum memories for single-photons and polarization qubits have been demonstrated Namazi2017_1 ; Finkelstein2018 ; Kaczmarek2018 , with coherence times of several seconds in ambient conditionsKatz2018 . Furthermore, preliminary quantum cryptography networks using room-temperature memories are already available Namazi2017_2 . The next step would be to design a quantum network that leverages room-temperature atomic systems to establish secure communication over extended distances. Such a network must be compatible with QKD protocols and quantum repeater architecture using entanglement.
Here we present the implementation of such a room-temperature quantum network of multiple light-matter interfaces and conduct preliminary studies of the interconnectivity between its components. The paper is structured as follows: in Section II we describe the overall structure of the room-temperature quantum network, including two polarization qubit sources (referred to as Alice and Bob), two room-temperature quantum memories, and a Bell measurement station (Charlie). In Section III, we show Hong-Ou-Mandel (HOM) interference experiments using the polarization qubit sources. In Section IV we demonstrate the degree of indistinguishability of the polarization qubits extracted from the two independent quantum memories. We conclude with an outlook and discussion in Section V.
II Experimental Setup
The core of our implementation consists of interconnecting several quantum devices in a configuration akin to the one needed for memory-assisted MDI-QKD Panayi2014 ; Abruzzo2014 or polarization-entanglement-based quantum repeater nodes Lloyd2001 . This configuration requires two sources of polarization qubits, two quantum memories, and one Bell-state four-detector-measurement station.
II.1 Polarization qubit sources.
Two independent acousto-optic modulator units (AOM) temporally shape the probe fields. The AOMs are driven by two phase-locked signal generators operating at $80\,$MHz, and two arbitrary waveform generators modulate their amplitude. These waveform generators, triggered by the master FPGA, generate the $400\,$ns FWHM Gaussian envelope of the probe pulses. Independent electro-optic modulation units (EOM) encode the desired polarization states on the probe pulses: the output polarization is set by the voltage applied to the EOMs (usually in the range $\approx 0$-$500\,$V). After calibration, we can generate the $\ket{H}$, $\ket{V}$, $\ket{D}$ and $\ket{A}$ states. An FPGA-based circuit controls the high-voltage amplifiers for fast operation and trigger-synchronized control. The FPGA can be programmed to generate any sequence of polarizations, including a fully random one; in these experiments, we employ it to generate a pre-assigned sequence of polarizations. The qubits are delivered to another location via $30\,$m long single-mode optical fibers.
II.2 Quantum memories
The two identical room-temperature quantum memories are based on a lambda-type Electromagnetically-Induced Transparency (EIT) configuration, with the probe field at the $5S_{1/2}F=1$ $\rightarrow$ $5P_{1/2}F^{\prime}=1$ rubidium transition at a wavelength of 795 nm and the control field coupling the $5S_{1/2}F=2$ $\rightarrow$ $5P_{1/2}F^{\prime}=1$ transition. We first lock the probe field to the $F=1$ $\rightarrow$ $F^{\prime}=1$ transition using saturation spectroscopy; we then phase-lock the control field exactly $6.8348\,$GHz away from the probe laser. The experiments described here were performed with a one-photon detuning of $250\,$MHz.
Upon receiving the qubits at the probe input, a series of wave-plates compensates for the unitary polarization rotation of the optical fibers. The probes then pass through a beam displacer (BD) element to allow for storage of the polarization state encoded on each pulse. Any polarization state is a superposition of $\ket{H}$ and $\ket{V}$; the BD maps these polarization superpositions onto spatial-mode superpositions of the left and right rails, creating a total of four spatial modes (rails). Half-wave plates (HWP) rotate the polarization of the $\ket{V}$ rails to $\ket{H}$. Both rails are sent through a Glan-laser polarizer (GL), where they are combined with the control fields, before entering the Rb cells.
Four independent control beams coherently prepare two volumes in each of the two ${}^{87}$Rb vapor cells (containing Kr-Ne buffer gas, at $\approx 60^{\circ}$C) in which each mode of the polarization qubits is stored. Each vapor cell is placed at the center of three concentric mu-metal cylindrical shields, creating a magnetic-field-free environment. Inside the vapor, under EIT conditions, we create four single-photon-level dark-state polaritons. Switching the control fields off stores the pair of qubits; switching them back on, $1\,\mu$s later, retrieves the photon pair with the encoded qubit information intact.
After the stored qubits are successfully retrieved, polarizing beam splitters (PBS) separate out the vertically polarized control fields with an extinction ratio of $42\,$dB. Then, after recombining the rails, we use frequency-filtering setups consisting of two consecutive etalon Fabry-Perot cavities separated by a polarization-insensitive Faraday isolator (FI). The etalons are $7.5\,$mm and $4.0\,$mm thick (in each setup), corresponding to free spectral ranges (FSR) of $13\,$GHz and $21\,$GHz respectively. Finally, the outputs of these filtering systems are fiber-coupled to the measuring station, Charlie.
II.3 Measurement station.
A four-SPCM-based multi-purpose measurement station, referred to as Charlie, performs the measurements necessary for HOM interference, Bell-state detection, and MDI-QKD protocols. The Alice and Bob input pulses are compensated for any polarization rotation at the input of Charlie before entering a $50:50$ non-polarizing beamsplitter (NPBS). After the NPBS, a PBS at each output port further splits the photon pulses into $H$ and $V$ components and sends them to four separate SPCMs.
III HOM interference between polarization qubits
One of the significant challenges in creating MDI-QKD networks and quantum repeaters based on entanglement swapping is demonstrating successful HOM interference between qubits. For two indistinguishable single photons interfering at a non-polarizing beamsplitter, the probability that they are detected in opposite arms is zero HOM . In the case of weak coherent pulses containing on average less than one photon, the coincidence rate drops by at most $50\%$ because of the multi-photon components of the coherent state HOM_WCP_chinese .
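The $50\%$ bound can be illustrated with a toy model. This is our own sketch, not the paper’s analysis: for two weak coherent pulses whose polarizations differ by an angle $\theta$, the mode-overlap amplitude is $\cos\theta$ and the relative coincidence rate at the beamsplitter is $1-\tfrac{1}{2}\cos^{2}\theta$, which reproduces the polarization-scan behaviour described later in the paper.

```python
import numpy as np

# Toy model (illustration only): relative coincidence rate at the NPBS
# for two weak coherent pulses with relative polarization angle theta.
# The multi-photon components of the coherent states cap the dip at 50%.
def wcp_coincidence(theta):
    return 1.0 - 0.5 * np.cos(theta) ** 2
```

At $\theta=0$ (indistinguishable pulses) the rate drops to half its distinguishable value; at $\theta=\pi/2$ the photons are fully distinguishable and the dip vanishes.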
In our first set of experiments we probe the indistinguishability of the independent qubit sources. To do so, we bypass the quantum memories and remove the PBSs in the four-detector measurement setup. We then measure the rate of simultaneous photon detections at two of the detectors (see Figure 1).
There are two conventional approaches to measuring HOM interference: the photons may be made distinguishable by scanning either the input polarization or the temporal overlap. We scan the delay parameter of the waveform generator that creates the temporal envelope of Alice’s (or Bob’s) AOM. We observe a $48\%$ decrease in the coincidence rate (dots in Fig. 2a) for horizontally-polarized inputs with an average of $\langle 0.4\rangle$ photons per pulse. As expected, the width of the HOM dip precisely matches the temporal width of the input pulses ($200\,$ns for this measurement; solid line in Fig. 2a).
We also observe HOM interference in polarization by varying the polarization of one of the qubit inputs, rotating it with automated wave-plates for enhanced precision. Perpendicular qubit polarizations yield two fully distinguishable photons and hence a maximum coincidence rate at the outputs of the NPBS. To ensure that the setup is balanced for all input states, we measure the HOM oscillation versus the relative polarization angle for all four qubit states (see Fig. 2b and c), achieving a $48\%\pm 0.2\%$ decrease in the coincidence rate.
IV Second-Order Interference of stored and retrieved photons
A practical high-repetition-rate quantum repeater node requires four quantum memories to store two pairs of entangled photons. A major requirement for the entanglement swapping process to be performed successfully is to interfere the outputs of two quantum memories Chaneliere2007 ; Noelleke2013 . One of the significant challenges in building a workable quantum repeater is therefore to demonstrate successful Hong-Ou-Mandel (HOM) interference between two stored photons Jin2013 ; Gao2018 . This fundamental step is described in this section.
To show that our memories preserve the qubit information encoded in the initial states upon retrieval, we must guarantee that the temporal envelope, frequency and polarization of the retrieved photons remain unchanged after storage. To preserve the indistinguishability of the input qubits, all four memory rails must have identical EIT bandwidths (to preserve the frequency) and storage efficiencies (to maintain the polarization). We achieved this by carefully adjusting the laser detunings (via the AOM frequencies) and control-field powers (via the AOM amplitude modulation) of each individual rail in the quantum memories. In the classical calibration data of the four EIT ensembles, we measured equal storage efficiencies of $\approx$ 20% and EIT bandwidths of $\approx$ 1 MHz for each of the four rails (see Fig. 3), guaranteeing matched delays and temporal bandwidths of the four retrieved wave functions.
Furthermore, we must ensure that the two independent filtering systems have similar bandwidths and identical transmissions for the $\ket{H}$ and $\ket{V}$ polarizations. Fig. 3 shows the four etalons, without birefringence, temperature-tuned for maximum simultaneous transmission at the qubit frequency. The Lorentzian bandwidths of the filtering etalons are $(22.100\pm 0.007)$MHz and $(17.490\pm 0.007)$MHz for the memory 1 system and $(15.26\pm 0.01)$MHz and $(13.99\pm 0.01)$MHz for the memory 2 system. The relative difference between the etalon bandwidths does not degrade the indistinguishability of the photons, as these lines are more than an order of magnitude wider than the EIT linewidth. After matching the parameters of all the rails, we proceed to store polarization qubits at the few-photon level in the two memories.
Figure 4 shows the 1 $\mu$s storage of two $\ket{D}$-polarized pulses from Alice and Bob in the two dual-rail memories at the quantum level. We use input mean photon numbers of $14$ and $10$ per pulse for Alice and Bob, respectively, in order to study solely the timing and fidelity characteristics of the retrieved photons.
We stored the qubits with an average efficiency of 8% in both memories. We chose to lower the efficiency by decreasing the control fields’ strength in order to avoid significant background effects on the fidelity after storage. The transmission of both filtering systems is $\approx$ 3%. This results in output mean photon numbers of $\approx$ 0.02 and $\approx$ 0.014 for Alice and Bob, respectively.
We generate temporally identical outputs by carefully matching all the relevant parameters of the quantum memories, such as the two-photon detunings, storage time, filtering transmission, and EIT linewidths. The coincidence rate is measured versus the relative polarization of the output pulses (see Figure 4). First, both memories retrieve identical photons with the same temporal shape, frequency, and polarization, resulting in a minimum coincidence rate. Then, we rotate the polarization of one of the output pulses before it reaches the beamsplitter to make the photons distinguishable. Doing so produces a peak in the coincidence rate at a $45^{\circ}$ rotation, yielding an interference visibility of $V=46.8\%\pm 3.4\%$ (red dots and line in Figure 4).
The importance of carefully matching the memories’ storage properties for the HOM visibility can be stressed further by measuring the visibility for unmatched pulses interacting with the memory. We repeat our analysis using the two leftover “leakage” peaks (parts of the original inputs that the memories do not store; see Figure 4), which have different temporal bandwidths. We measured a HOM visibility of $V=19.4\%\pm 1.5\%$ (blue dots and line in Figure 4), testifying to the mismatch of their temporal shapes.
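For intuition about why a bandwidth mismatch lowers the visibility: assuming Gaussian temporal modes, the HOM visibility of two weak coherent pulses is bounded by half the squared temporal-mode overlap. The following sketch is our illustrative model, not the authors' analysis.

```python
# Illustrative model: for two weak coherent pulses with Gaussian temporal
# amplitudes of widths s1 and s2, the squared mode overlap is
# 2*s1*s2/(s1^2 + s2^2), and the HOM visibility is bounded by half of it.

def hom_visibility_gaussian(s1, s2):
    overlap_sq = 2.0 * s1 * s2 / (s1**2 + s2**2)
    return 0.5 * overlap_sq

print(hom_visibility_gaussian(1.0, 1.0))  # identical pulses -> 0.5
print(hom_visibility_gaussian(1.0, 3.0))  # strongly mismatched widths -> 0.3
```

A width ratio of a few is already enough to pull the visibility well below the 50% coherent-state bound, qualitatively matching the reduced visibility measured for the unmatched leakage pulses.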
V Discussion and Outlook
We have presented individual experiments addressing various challenges to create the quantum connectivity needed to perform long-distance quantum communication with polarization qubits in a memory-assisted quantum network. While other groups have shown retrieval from four-memory systems using the Duan-Lukin-Cirac-Zoller (DLCZ) protocol Choi2010 , our system incorporates a polarization qubit architecture that has been proposed to achieve a higher entanglement distribution rate Lloyd2001 . We have reached a number of different benchmarks within this new architecture: input preparation of polarization qubits at rubidium wavelengths, simultaneous storage using four light-matter interfaces, scalability through room-temperature operation, and verification of identical storage and retrieval from the memories through Hong-Ou-Mandel interference.
We verified the purity of the two independent qubit input sources through HOM interference. It has been shown that the theoretical visibility for coherent states at the single-photon level should be $\frac{1}{2}$; our measurement falls slightly below this value. The difference can be explained by the theoretical expression for the visibility of attenuated coherent states Moschandreou2018 :
$$V_{HOM}=\frac{2\mu_{1}\mu_{2}\cos^{2}\phi}{(\mu_{1}+\mu_{2})^{2}}$$
(1)
where $\mu_{1}$ and $\mu_{2}$ are the average photon numbers of the two coherent states and $\phi$ is a measure of the polarization mismatch between the two states. When $\mu_{1}=\mu_{2}$ and $\cos\phi=1$ (describing parallel, i.e. indistinguishable, states), this yields a visibility of $\frac{1}{2}$. By varying the input polarization with the EOM, we observe a visibility of $48\%$ for an average of $0.4$ photons per pulse. The largest source of error is polarization drift. The polarization’s ellipticity can be corrected with a quarter-wave plate but must be continuously adjusted due to the long fibers. As Equation (1) shows, a $10^{\circ}$ difference between the two polarizations due to this elliptical drift is enough to create sufficient distinguishability between the photons to produce this $48\%$ visibility.
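A quick numerical check of Equation (1) (our own sketch) confirms that an equal-intensity pair gives $V=1/2$ and that a $10^{\circ}$ polarization drift alone already brings the visibility down to roughly the observed 48%:

```python
import math

# Evaluate Eq. (1) for the HOM visibility of two attenuated coherent states:
# V = 2*mu1*mu2*cos(phi)^2 / (mu1 + mu2)^2.

def v_hom(mu1, mu2, phi_deg=0.0):
    c = math.cos(math.radians(phi_deg))
    return 2.0 * mu1 * mu2 * c**2 / (mu1 + mu2) ** 2

print(v_hom(0.4, 0.4, 0.0))   # equal states, no mismatch -> 0.5
print(v_hom(0.4, 0.4, 10.0))  # 10 degree polarization drift -> ~0.485
```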
We have created a prototype network of four quantum light-matter interfaces. Not only can this network implement a dual-rail protocol to store polarization qubits, but it can also operate at the single-photon level at room temperature with low noise. This implementation is well suited for use in memory-assisted cryptographic networks, provided sufficiently high interference visibilities are maintained.
The central figure of merit for this system is the single-photon-level HOM interference between the outputs of the quantum memories, which showed a visibility of $V=46.8\%\pm 3.4\%$. The decrease in visibility from 50% is explained by Equation (1) when incorporating the slight mismatch in pulse strength before the two memories, combined with the estimated elliptical drift in polarization. This visibility demonstrates the ability of this network to operate at the single-photon level, a sufficient criterion for implementing MDI-QKD protocols.
Each of the individual photonic memories is sufficiently optimized that the output degrees of freedom, including temporal envelope, frequency, and polarization, are matched well enough to obtain high HOM visibility. We observe a decrease in visibility with unmatched pulses, which shows how mismatched states degrade the HOM interference.
Variable-delay MDI-QKD protocols using time-bin qubits have already been demonstrated Kaneda2017 . Our network can implement similar memory-assisted protocols using polarization qubits. In separate experiments, we have shown that our memories are capable of operating on a shot-by-shot basis Namazi2017_2 , which makes them an ideal test bed for the storage of polarization entanglement. Additionally, this gives our realization important perspectives to operate as a network for MDI-QKD protocols that are assisted by entanglement Xu2013 . Together with the development of heralding mechanisms, we envision our quantum network configuration becoming the backbone of future quantum repeater applications.
VI Acknowledgments
We thank Bertus Jordaan and Reihaneh Shahrokhshahi for technical assistance. This work was supported by the US-Navy Office of Naval Research (grant N00141410801); the National Science Foundation (grants PHY-1404398 and PHY-1707919); and the Simons Foundation (grant SBF241180).
References
(2)
H.-K. Lo, M. Curty, and B. Qi, Measurement-Device-Independent Quantum Key Distribution, Phys. Rev. Lett. 108, 130503 (2012).
(3)
Lloyd, S., Shahriar, M. S., Shapiro, J. H., and Hemmer, P. R., Long Distance, Unconditional Teleportation of Atomic States via Complete Bell-state Measurements, Physical Review Letters 87, 167903 (2001).
(4)
Christoph Simon, Towards a global quantum network, Nature Photonics 11, 678–680 (2017).
(5)
Northup, T. E., and Blatt, R., Quantum information transfer using photons, Nat Photon 8, 356 (2014).
(6)
Kimble, H. J., The quantum internet, Nature 453, 1023 (2008).
(7)
Razavi, M., Piani, M., and Lütkenhaus, N., Quantum repeaters with imperfect memories: Cost and scalability, Physical Review A 80, 032301 (2009).
(8)
Muralidharan, S., Li, L., Kim, J., Lütkenhaus, N., Lukin, M. D., and Jiang, L., Optimal architectures for long-distance quantum communication, Scientific Reports 6, 20463 (2016).
(9)
B. Hensen, N. Kalb, M.S. Blok, A. E. Dréau, A. Reiserer, R. F. L. Vermeulen, R. N. Schouten, M. Markham, D. J. Twitchen, K. Goodenough, D. Elkouss, S. Wehner, T. H. Taminiau and R. Hanson, Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis, Scientific Reports 6, 30289 (2016).
(10)
Raju Valivarthi, Marcel Grimau Puigibert, Qiang Zhou, Gabriel H. Aguilar, Varun B. Verma, Francesco Marsili, Matthew D. Shaw, Sae Woo Nam, Daniel Oblak and Wolfgang Tittel, Quantum teleportation across a metropolitan fiber network, Nature Photonics 10, 676–680 (2016).
(11)
Qi-Chao Sun, Ya-Li Mao, Si-Jing Chen, Wei Zhang, Yang-Fan Jiang, Yan-Bao Zhang, Wei-Jun Zhang, Shigehito Miki, Taro Yamashita, Hirotaka Terai, Xiao Jiang, Teng-Yun Chen, Li-Xing You, Xian-Feng Chen, Zhen Wang, Jing-Yun Fan, Qiang Zhang and Jian-Wei Pan, Quantum teleportation with independent sources and prior entanglement distribution over a network, Nature Photonics 10, 671–675 (2016).
(12)
Stephan Ritter, Christian Nölleke, Carolin Hahn, Andreas Reiserer, Andreas Neuzner, Manuel Uphoff, Martin Mücke, Eden Figueroa, Joerg Bochmann and Gerhard Rempe, An elementary quantum network of single atoms in optical cavities, Nature 484, 195–200 (2012).
(13)
N. Kalb, A. Reiserer, P. C. Humphreys, J. J. W. Bakermans, S. J. Kamerling, N. H. Nickerson, S. C. Benjamin, D. J. Twitchen, M. Markham, R. Hanson, Entanglement distillation between solid-state quantum network nodes, Science 356, 928-932 (2017).
(14)
Peter C. Humphreys, Norbert Kalb, Jaco P. J. Morits, Raymond N. Schouten, Raymond F. L. Vermeulen, Daniel J. Twitchen, Matthew Markham and Ronald Hanson, Deterministic delivery of remote entanglement on a quantum network, Nature 558, 268–273 (2018).
(15)
P. Kurpiers, P. Magnard, T. Walter, B. Royer, M. Pechal, J. Heinsoo, Y. Salathé, A. Akin, S. Storz, J.-C. Besse, S. Gasparinetti, A. Blais and A. Wallraff, Deterministic quantum state transfer and remote entanglement using microwave photons, Nature 558, 264–268 (2018).
(16)
Irina Novikova, Ronald L. Walsworth, and Yanhong Xiao. Electromagnetically induced transparency-based slow and stored light in warm atoms. Laser Photonics Rev. 6, 333–353 (2012).
(17)
Mehdi Namazi, Connor Kupchak, Bertus Jordaan, Reihaneh Shahrokhshahi, and Eden Figueroa, Ultralow-Noise Room-Temperature Quantum Memory for Polarization Qubits, Phys. Rev. Applied 8, 034023 (2017).
(18)
Ran Finkelstein, Eilon Poem, Ohad Michel, Ohr Lahad and Ofer Firstenberg. Fast, noise-free memory for photon synchronization at room-temperature. Science Advances 4, eaap8598 (2018).
(19)
K. T. Kaczmarek, P. M. Ledingham, B. Brecht, S. E. Thomas, G. S. Thekkadath, O. Lazo-Arjona, J. H. D. Munns, E. Poem, A. Feizpour, D. J. Saunders, J. Nunn, and I. A. Walmsley. High-speed noise-free optical quantum memory. Phys. Rev. A 97, 042316 (2018).
(20)
Or Katz and Ofer Firstenberg. Light storage for one second in room-temperature alkali vapor. Nature Communications 9, 2074 (2018).
(21)
Mehdi Namazi, Giuseppe Vallone, Bertus Jordaan, Connor Goham, Reihaneh Shahrokhshahi, Paolo Villoresi, and Eden Figueroa. Free-Space Quantum Communication with a Portable Quantum Memory. Phys. Rev. Applied 8, 064013 (2017).
(22)
Panayi, C., Razavi, M., Ma, X., and Lütkenhaus, N., Memory-assisted measurement-device-independent quantum key distribution, New Journal of Physics 16, 043005 (2014).
(23)
Abruzzo, S., Kampermann, H., and Bruss, D., Measurement-device-independent quantum key distribution with quantum memories, Phys. Rev. A 89, 012301 (2014).
(24)
C. K. Hong, Z. Y. Ou, and L. Mandel, Measurement of subpicosecond time intervals between two photons by interference, Phys. Rev. Lett. 59, 2044–2046 (1987).
(25)
H. Chen et al., Hong–Ou–Mandel interference with two independent weak coherent states, Chinese Phys. B 25, 020305 (2016).
(26)
T. Chanelière, D. N. Matsukevich, S. D. Jenkins, S.-Y. Lan, R. Zhao, T. A. B. Kennedy, and A. Kuzmich, Quantum Interference of Electromagnetic Fields from Remote Quantum Memories, Phys. Rev. Lett. 98, 113602 (2007).
(27)
Christian Nölleke, Andreas Neuzner, Andreas Reiserer, Carolin Hahn, Gerhard Rempe, and Stephan Ritter, Efficient Teleportation Between Remote Single-Atom Quantum Memories, Phys. Rev. Lett. 110, 140403 (2013).
(28)
Jeongwan Jin, Joshua A. Slater, Erhan Saglamyurek, Neil Sinclair, Mathew George, Raimund Ricken, Daniel Oblak, Wolfgang Sohler and Wolfgang Tittel, Two-photon interference of weak coherent laser pulses recalled from separate solid-state quantum memories, Nature Communications 4, 2386 (2013).
(29)
Yvonne Y. Gao, Brian J. Lester, Yaxing Zhang, Chen Wang, Serge Rosenblum, Luigi Frunzio, Liang Jiang, S. M. Girvin, and Robert J. Schoelkopf, Programmable Interference between Two Microwave Quantum Memories, Phys. Rev. X 8, 021073 (2018).
(30)
K. S. Choi, A. Goban, S. B. Papp, S. J. van Enk and H. J. Kimble. Entanglement of spin waves among four quantum memories. Nature 468, 412–416 (2010).
(31)
Eleftherios Moschandreou, Jeffrey I. Garcia, Brian J. Rollick, Bing Qi, Raphael Pooser, and George Siopsis. Experimental study of Hong-Ou-Mandel interference using independent phase randomized weak coherent states, arXiv:1804.02291 (2018).
(32)
Fumihiro Kaneda, Feihu Xu, Joseph Chapman, and Paul G. Kwiat. Quantum-memory-assisted multi-photon generation for efficient quantum information processing. Optica 4, 1034-1037 (2017).
(33)
Feihu Xu, Bing Qi, Zhongfa Liao, and Hoi-Kwong Lo. Long distance measurement-device-independent quantum key distribution with entangled photon sources. App. Phys. Lett. 103, 061101 (2013). |
Hybrid moderation in the newsroom: Recommending featured posts to content moderators
Cedric Waterschoot
cedric.waterschoot@meertens.knaw.nl
KNAW Meertens InstituutOudezijds Achterburgwal 185AmsterdamThe Netherlands1012 DK
and
Antal van den Bosch
Institute for Language Sciences, Utrecht UniversityUtrechtThe Netherlands
a.p.j.vandenbosch@uu.nl
Abstract.
Online news outlets are grappling with the moderation of user-generated content in their comment sections. We present a recommender system based on ranking class probabilities to support and empower moderators in choosing featured posts, a time-consuming task. By combining user and textual content features we obtain an optimal classification F1-score of $0.44$ on the test set. Furthermore, we observe an optimal mean NDCG@5 of $0.87$ on a large set of validation articles. As an expert evaluation, content moderators assessed the output for a random selection of articles by choosing comments to feature based on the recommendations, which resulted in an NDCG score of $0.83$. We conclude, first, that adding text features yields the best score and, second, that while choosing featured content remains somewhat subjective, content moderators found suitable comments in all but one of the evaluated recommendation sets. We end the paper by analyzing our best-performing model, a step towards transparency and explainability in hybrid content moderation.
content moderation, online discussions, news recommendation, natural language processing
CCS Concepts: Information systems → Recommender systems
1. Introduction
Online newspapers allowing user comments have been facing moderation challenges with large and increasing content streams (Meier et al., 2018; Wintterlin et al., 2020). Whether to filter out toxicity, counter misinformation, or promote constructive posts, platforms are looking towards computational solutions to support moderator decisions (Gollatz et al., 2018). Overall, moderation strategies are focused on two polar opposites. On the one hand, the moderator is required to safeguard the comment space from toxic and negative content (Wintterlin et al., 2020). On the other hand, platforms aim to promote what they deem good contributions, for example by pinning certain content to the top of the page (Diakopoulos, 2015b).
In this paper we present a recommender based on ranking class probabilities to support the content moderator in picking such featured posts. Using Dutch comment data with human labeling of featured posts, we train a set of models which present the human moderator with a set of posts that might qualify for being featured. We hypothesize that the optimal post representation for ranking includes both user features and textual content features, information used by content moderators as well. Furthermore, we validate our models separately on a collection of articles. Validation on unseen articles reflects the real-life setting of moderating and choosing only a few comments to be featured, as opposed to artificially split and balanced test sets. The output of the best-performing model is assessed in an expert evaluation by content moderators currently employed at the platform in question, who evaluated a random selection of articles by deciding whether the recommended comments are worthy of getting featured.
2. Background
2.1. Online Content Moderation
As online comment platforms grow, content moderators have had to adapt moderation strategies to the changing online environments. Dealing with negative content has been a particular focus, e.g. detecting trolling and online harassment (Quandt et al., 2022) or even organized misinformation campaigns (Meier et al., 2018). Quandt (2018) describes these forms of negative content under the umbrella of ’dark participation’.
Recently, however, moderators are seeking to promote good comments as well. On the opposite side of the comment spectrum from dark participation, platforms and moderators are selecting what they deem as good, high-quality comments and manually pinning them to the top of the comment space.
Promoting what news outlets see as high-quality contributions has for example taken the form of New York Times (NYT) picks (Diakopoulos, 2015b), Guardian Picks at The Guardian, or featured posts at the Dutch news outlet NU.nl. On their FAQ pages, these outlets describe such promotion-worthy comments as ”substantiated”, ”respectful” and representing ”a range of viewpoints”111https://www.nu.nl/nujij/5215910/nujij-veelgestelde-vragen.html. Diakopoulos (2015b) assigns a set of twelve editorial criteria to such featured posts, ranging from argumentative quality to entertainment value and relevance. Overall this procedure may be seen as ”a norm-setting strategy” (Wang and Diakopoulos, 2022, p.4). The authors argue that exposure to these promoted posts may also improve the quality of succeeding comments (Wang and Diakopoulos, 2022).
11footnotetext: https://help.nytimes.com/hc/en-us/articles/115014792387-The-Comments-Section
Supplementary to the aforementioned goal of promoting high quality content and the positive normative effects these posts may have on other commenters, user engagement may increase as well. Wang and Diakopoulos (2022) find that after a user received their first featured comment, their commenting frequency increased.
2.2. Hybrid Moderation
While featured content ranking for moderators is a novel task, recommender systems have been used in the context of news platforms. Plenty of research and workshops (e.g. INRA workshops) focus on news recommendation and personalization aimed at readers on these platforms (Raza and Ding, 2022). While this application is adjacent to content moderation, it differs from our application in that it is mostly aimed at users of a platform (as opposed to moderators) to optimize news consumption (instead of improving moderation tasks).
Moderators of online news outlets have been increasingly working with computational systems to perform their tasks to ward off toxic and unwanted content (Delort et al., 2011; Gorwa et al., 2020). The result is a hybrid setting in which the role of human moderator on the one hand, and the computational system on the other hand have been intertwined. Ruckenstein and Turunen (2020) argue that ideally, AI should offer decision support to the human moderator. Taking the final decision on publishing content is a task exclusively for the human moderator, and tools should be focused on assisting this function (Ruckenstein and Turunen, 2020). Park et al. (2016) emphasize this hybrid relation, stipulating that journalists do not want automatic editorial decision-making. When computational moderation tools support the human in carrying out their tasks, the moderator themselves can adapt to the nuances of changing online contexts and apply human interpretation and judgement (Park et al., 2016).
Classifying toxic comments in online comment spaces has received substantial attention (Gorwa et al., 2020; Wang, 2021). The classification of featured comments or editor picks, however, has not been explored quite that often. Diakopoulos (2015a) uses cosine similarity to calculate relevance scores relative to the conversation and the article using New York Times editor picks. The author discovers an association between these picks and relevance and concludes that such computational assistance may speed up comment curation (Diakopoulos, 2015a).
As part of their CommentIQ interface, Park et al. (2016) train an SVM classifier on unbalanced but limited data ($94$ NYT picks, $1,574$ non-picks) and achieve a precision of $0.13$ and recall of $0.6$. Their data includes user history criteria as well as comment variables (Park et al., 2016).
Napoles et al. (2017) annotated comments from Yahoo News in terms of what they present as ”ERICs: Engaging, Respectful, and/or Informative Conversations”. The authors look at the constructiveness of a thread rather than a single comment, and do not use editorial choices as their labelling (Napoles et al., 2017).
Kolhatkar and Taboada (2017) combined the Yahoo comments with NYT picks. The authors achieve an F1-score of $0.81$ by training a BiLSTM on GloVe embeddings, using the NYT picks as benchmark and a balanced test set (Kolhatkar and Taboada, 2017). Furthermore, they combine a set of variables, including comment length features and named entities, and achieve a best F1-score of $0.84$ using SVMs (Kolhatkar and Taboada, 2017). In a follow-up study, the authors achieved an F1-score of $0.87$ on a similar task using crowdsourced annotations and logistic regression (Kolhatkar et al., 2020).
To sum up, these classifiers for the most part lacked input aside from comment information, whether text representation or otherwise. Additionally, the validation of these models was performed on large, balanced test sets, which does not resemble the real-life practice of picking featured posts. The moderator chooses editor picks on the article level and any model should therefore be evaluated on such tasks. In this paper, we combine user information with comment data and text representation, all information used by the moderators themselves.
2.3. Platform Specifics
The comment platform discussed in this paper is called NUjij, part of the Dutch online newspaper NU.nl222https://nu.nl/. NUjij allows commenting on a wide range of pre-selected news articles. Pre-moderation safeguards the comment space, in the form of automatic toxicity filtering and human moderators checking the uncertain comments (Van Hoek, 2020). NUjij employs a selection of moderation strategies, including awarding expert labels to verified users and presenting featured comments above the comment section, similar to the New York Times or Guardian picks (NU.nl, 2020). Featured comments are picked by human moderators and are described as ”substantiated and respectful reactions that contribute to constructive discussion”443https://www.nu.nl/nujij/5215910/nujij-veelgestelde-vragen.html. NUjij states in their FAQ that moderators aim to present balanced selections and not to pick based on political affiliation. This paper addresses this specific task making use of the information available to moderators, which includes user information and history. Other platforms might have different editorial guidelines for choosing featured comments. To best support the moderator, it is important that the approach fully suits their context, which may include the (intended) human bias in picking featured content.
3. Methodology
3.1. Data
We obtained a Dutch language dataset containing a total of $821,408$ pseudonymized posts from the year 2020, spanning $2,952$ articles from NU.nl444https://anonymous.4open.science/r/HybridModeration_RecSys2023/README.md. Major topics within this dataset are climate change, the 2020 US election and the COVID-19 pandemic. A binary variable indicates whether each post was featured by a moderator during the time interval that commenting on the page was allowed. User variables were obtained by grouping and aggregating the information across the pseudonymization user keys. In total we have $8,262$ featured posts. An article has on average $2.8$ featured posts (sd=$3$), with a median of $2$. The average article has $278$ comments (sd=$358$, median=$173$). This shows that, while large variation exists in the number of comments per article, the number of featured posts per article remains low and relatively stable. The number of featured posts does not grow along with the number of comments posted per article and, therefore, articles with many comments cause great difficulty in finding the featured posts.
We group the comment data by article_id and sort these chronologically. We split this data $50\%/50\%$, resulting in two sets of $1,476$ articles, with the split date at June 16th 2020. The first set of articles is used for training and testing classifiers. We further split this set into $80\%/10\%/10\%$ generating a training, validation and test set, respectively. Table 1 shows the distribution of posts in each set. Using the validation data, we tested the downsampling of the non-featured posts in the training set using all the featured posts in the training data (n=$3,047$). Using the features listed in Table 2, we trained a random forest to predict if a post was featured on six different downsampled training sets (Figure 1). The $95/5$ ratio, i.e. 95% non-featured posts and 5% featured posts, yielded the best result and will be used as the training data henceforth. While the $95/5$ ratio still remains unbalanced, it is important to note that the unsampled actual ratio approximates $99/1$. Thus, the training data represents a marked downsampling of non-featured posts.
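The 95/5 downsampling step can be sketched as follows. This is a stdlib illustration with toy data; the dictionary field names are hypothetical and this is not the authors' pipeline.

```python
import random

# Toy sketch of the 95/5 downsampling: keep every featured post and sample
# 19 non-featured posts for each featured one, so featured posts make up 5%
# of the training data (versus the ~1% in the raw data).

def downsample(posts, nonfeatured_per_featured=19, seed=0):
    featured = [p for p in posts if p["featured"]]
    nonfeatured = [p for p in posts if not p["featured"]]
    rng = random.Random(seed)
    n_keep = min(len(nonfeatured), nonfeatured_per_featured * len(featured))
    return featured + rng.sample(nonfeatured, n_keep)

# Toy data: 10 featured posts among 1000, roughly the real 99/1 imbalance.
posts = [{"id": i, "featured": i < 10} for i in range(1000)]
train = downsample(posts)
print(len(train), sum(p["featured"] for p in train))  # 200 posts, 10 featured
```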
The second dataset contains $1,476$ articles, published after June 16th 2020, with a total of $500,191$ posts, of which $4,484$ are featured. This second article set is used for evaluating the ranking of unseen discussions.
3.2. Models
The first model is trained exclusively on the non-textual features listed in Table 2. These features are available to the moderator and can be taken into account when deciding to feature a comment. The other two random forest classifiers, detailed later, either include textual Bag-of-Words or word embedding features, while we have also finetuned a transformer-based model on textual input only. Other models were trained (including SVM and logistic regression) but did not perform as well as the random forest implementations.
3.2.1. Baseline
We defined a simple threshold-based model to determine whether a post is classified as featured. More specifically, comments posted by users whose historical ratio of featured posts exceeds 3% are labelled as featured; this threshold corresponds to the 95th percentile. The intuition is that users with a history of writing featured comments are likely to do so again in a new discussion. To make recommendations, comments are ranked by this featured ratio in descending order.
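The baseline can be sketched in a few lines. The data structures below are toy placeholders, not the authors' implementation.

```python
# Sketch of the informed baseline: score each comment by its author's
# historical ratio of featured posts, flag comments above the 3% threshold
# (the 95th percentile), and rank by that score.

def featured_ratio(history):
    posted, featured = history
    return featured / posted if posted else 0.0

user_history = {"a": (200, 12), "b": (50, 1), "c": (400, 2)}  # (posted, featured)
comments = [{"id": 1, "user": "a"}, {"id": 2, "user": "b"}, {"id": 3, "user": "c"}]

for c in comments:
    c["score"] = featured_ratio(user_history[c["user"]])
    c["predicted_featured"] = c["score"] > 0.03  # 3% threshold

recommendation = sorted(comments, key=lambda c: c["score"], reverse=True)
print([c["id"] for c in recommendation])  # -> [1, 2, 3]
```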
3.2.2. Random Forest (RF)
We trained a random forest on the non-textual variables presented in Table 2. We used the standard scikit-learn implementation of random forest and performed a hyperparameter grid search665https://scikit-learn.org/stable/index.html, v1.2.0. The final model has a max depth of $50$, $200$ estimators and a minimum sample split of $10$.
3.2.3. RobBERT
Previous models were entirely trained on non-textual data. To obtain a model that uses pure text as input, we employed the pre-trained Dutch transformer-based language model, RobBERT, and finetuned it on our training data (Delobelle et al., 2020). This training data consists of the text belonging to the exact comments the non-textual variables represent. The sequence classification employs a linear classification head on top of the pooled output (Wolf et al., 2020; Delobelle et al., 2020). We trained for $10$ epochs with a batch size of $64$, AdamW optimizer and a learning rate of $5e^{-5}$ (Loshchilov and Hutter, 2019).
3.2.4. RF_Emb & RF_BoW
By extracting the embeddings from the previously discussed RobBERT model, we are able to combine the textual input with the set of non-textual variables. We extracted the CLS-tokens from the input and added those to the training data as features, adding up to a total of $797$ features. We trained a sci-kit random forest on this combined data. A hyperparameter grid search was performed leading to a final random forest model with a max depth of $64$, $1,200$ estimators and min_sample_split of $2$.
Our final model adds another text representation to the mix. Instead of the CLS token used for the RobBERT embeddings, we represented the content by a standard Bag-Of-Words approach, counting the occurrences of the tokens in each comment. First, the text was lowercased and both punctuation and stopwords were removed. The words were added to the non-textual training data, resulting in a set of $426$ features. We once again performed a hyperparameter grid search that resulted in RF_BoW with $1,200$ estimators, min_sample_split of $10$ and a max depth of $110$.
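The Bag-of-Words preprocessing step described above (lowercasing, punctuation and stopword removal, token counting) can be sketched with the standard library. The stopword list here is a toy placeholder; the paper works with Dutch comments and a full stopword list.

```python
import string
from collections import Counter

# Sketch of the BoW feature step: lowercase, strip punctuation, drop
# stopwords, count token occurrences per comment.

STOPWORDS = {"de", "het", "een", "en", "is"}  # toy subset of Dutch stopwords

def bow_features(text):
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in cleaned.split() if t not in STOPWORDS]
    return Counter(tokens)

print(bow_features("Het is een goed punt, een heel goed punt!"))
```

In the actual model, these per-token counts are appended to the non-textual variables before training the random forest.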
4. Results
The initial evaluation is done on the test set that was obtained out of our original 80/10/10 split on the first set of articles. Evaluation on the test set follows the standard procedure of a classification problem, in which comments are not yet ranked by class probability. Next, we use the second set of articles to calculate recommendation scores. In order to recommend comments to the moderator, the model ranks all the posts by class probability. The top comments, i.e. those with the highest probability of belonging to the featured class, are recommended. We evaluate recommendations on the basis of Normalized Discounted Cumulative Gain (NDCG). We calculated NDCG at different recommendation set sizes ($k=3,5,10$) across unseen articles.
4.1. Classification results
The initial evaluation step concerned the classifiers’ generalization performance on the test set, containing a total of $379$ featured posts and $31,743$ non-featured posts. The ’Informed Baseline’ achieved an F1-score of $0.17$; this model performed well in terms of recall, but not in precision. The transformer-based RobBERT model, which lacks the non-textual information that the other models have, underperformed as well (Table 3). A likely cause is that identical comments are sometimes featured and other times not: due to the limit on featured comments per article, only a small set of well-written comments carry the featured label. The RF without textual representation achieved the best F1-score on the test set, while RF_BoW outperformed the other models in terms of precision (Table 3).
4.2. Validation on unseen articles
Besides the standard classification of rare featured posts, these positively classified comments ought to be part of the recommendation set as well. Ranking is done by sorting individual comments by class probability for the featured class in descending order. A recommendation consists of the top $k$ posts derived from this ranking.
To validate the models, we calculated NDCG@k, with $k$, the number of comments recommended, set to $3$, $5$ and $10$, across all $1,476$ articles (Jarvelin and Kekalainen, 2000). The value $k=3$ matches the average number of featured posts per article, while $k=5$ and $k=10$ leave the moderator more room to choose.
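The NDCG@k computation over an article's ranked comments can be sketched as follows (a minimal stdlib version for binary relevance; the toy probabilities and labels are illustrative).

```python
import math

# Minimal NDCG@k for binary relevance (featured = 1): rank comments by the
# model's featured-class probability and compare against the ideal ranking.

def dcg(relevances):
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(class_probs, labels, k):
    ranked = [lab for _, lab in sorted(zip(class_probs, labels), key=lambda p: -p[0])]
    ideal = sorted(labels, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked[:k]) / denom if denom > 0 else 0.0

probs = [0.9, 0.8, 0.1, 0.7, 0.05]   # featured-class probabilities (toy)
labels = [1, 0, 0, 1, 0]             # 1 = actually featured by a moderator
print(ndcg_at_k(probs, labels, 3))
```

Averaging this score over all validation articles gives the per-model NDCG@k reported in the results.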
The results are shown in Table 4. The models that performed best in the initial evaluation also yield the best rankings of unseen comments. RF and RF_BoW performed best at all recommendation sizes, with the latter yielding the highest score. This result indicates that adding text representation in the form of Bag-of-Words slightly improves the recommendations shown to the moderators (Table 4). Simply ranking comments based on users’ featured history scored better than ranking based on content alone, potentially because well-written comments are not featured consistently.
4.3. Expert evaluation by moderators
Using the best performing model, a random set of unseen articles was collected alongside the recommendations. We created a survey consisting of $30$ articles combined with a set of comments. This set consisted of the recommended comments (comments with class probability above $0.5$, at most $10$ per article) and an equal number of random non-recommended comments from the discussion. These were randomly shuffled so the moderators did not know which comments were recommended by our system. Along with the article and comments, the evaluation included features that moderators have access to in real-life practice: the number of previously posted and featured comments by the user, the rejection rate of the user and the respect points of the comment. The content moderators had to decide for each individual comment whether they thought it was a candidate to feature on NUjij. In total, four moderators took the survey and each of them labelled comments from $15$ articles. The first five articles were shown to all moderators in order to calculate inter-annotator agreement, while the other $10$ were randomly selected from the pool. We calculated a Krippendorff’s alpha inter-rater agreement of $0.62$. This result, combined with the fact that $42.3$% of comments featured in the original data were not chosen, indicates that picking featured content remains somewhat subjective.
However, in all but one article, moderators found comments to feature among the recommendations, resulting in a NDCG score of $0.83$. While there is subjectivity involved in picking featured comments, the moderators do find featured content within the recommendations made by the model. They might not all choose the exact same comments, but all find worthy content in the recommended set.
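The agreement figure above uses Krippendorff's alpha for nominal labels, which can be computed from a coincidence matrix of rater judgments. The sketch below (assuming the survey labels are reduced to per-comment lists of rater decisions) is illustrative, not the authors' code:

```python
from collections import Counter

def krippendorff_alpha_nominal(ratings_by_item):
    """Krippendorff's alpha for nominal data.
    ratings_by_item: list of lists, each inner list holding the labels
    assigned to one item by the raters who rated it."""
    coincidences = Counter()   # ordered label pairs within an item
    totals = Counter()         # marginal label counts
    for item in ratings_by_item:
        m = len(item)
        if m < 2:
            continue  # items rated once carry no (dis)agreement information
        for i, c in enumerate(item):
            for j, k in enumerate(item):
                if i != j:
                    coincidences[(c, k)] += 1.0 / (m - 1)
    for (c, k), w in coincidences.items():
        totals[c] += w
    n = sum(totals.values())
    # observed disagreement: off-diagonal coincidences (nominal metric)
    observed = sum(w for (c, k), w in coincidences.items() if c != k)
    # expected disagreement under chance, up to the common 1/n factor
    expected = sum(totals[c] * totals[k]
                   for c in totals for k in totals if c != k) / (n - 1)
    if expected == 0:
        return 1.0
    return 1.0 - observed / expected
```

Perfect agreement yields $1.0$, chance-level agreement $0.0$; the reported $0.62$ sits in the range usually read as moderate-to-substantial reliability.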
5. Discussion
The context of hybrid moderation asks for insight into computational models employed in the pipeline. Transparency being a key value in the field of journalism, moderators and users alike demand explanations as to how models come to a certain output (Ruckenstein and Turunen, 2020; Molina and Sundar, 2022). Transparency is a prerequisite for user trust in content moderation (Brunk et al., 2019). Here, we offer an error analysis of our best performing model. Moderators may use this information to counter potential bias towards certain comment characteristics. Furthermore, we discuss the limitations of our approach.
To explain our model’s behavior in general terms, we explore the erroneous recommendations the model has made, more specifically which features repeatedly contributed to false positives (FP) and false negatives (FN). For the error analysis, we processed all $1,476$ validation articles and collected the top false positives in each recommendation (at k=$5$) and all false negatives. The latter are gathered from the entire article dataset, since they were incorrectly omitted from the actual recommendation. We used the Python library ’treeinterpreter’ (https://github.com/andosa/treeinterpreter) to collect the feature contributions for each prediction. The contribution ($c$) equals the percentage points (as a decimal) the feature has contributed to the class probability of the prediction, calculated by following the decision paths in the trees.
Respect_count ($c=0.14$) and respect_uptime ($c=0.11$) contributed highly to incorrect recommendations, indicating that our model often incorrectly recommended posts with a high number of likes (Table 5). Additionally, the model is biased towards users who have often been featured before (c=$0.06$), and towards longer posts (c=$0.04$).
Next, we looked at the false negatives (FN). Similar to FPs, the history of being featured is a crucial factor in incorrectly omitting posts. Posts by users that have not been featured before or have an extremely low ratio of featured posts (c=$0.05$) were missed, as can be seen in Table 5. Furthermore, featured posts with a noticeably low respect_count (c=$0.09$) were missed as well. Another source of erroneous rankings was wordcount (c=$0.02$). Featured posts tend to be longer (mean featured = $100$, mean non-featured = $53$). Shorter comments may have been overlooked and omitted from the recommendation.
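The decision-path decomposition used above can be sketched on a toy tree. All node probabilities, feature names and thresholds below are invented for illustration (the real analysis applied the 'treeinterpreter' library to the fitted random forest): each split changes the current node's featured-class probability, and that change is credited to the feature split on, so bias plus contributions reconstructs the leaf probability.

```python
def feature_contributions(tree, x):
    """Decompose a decision tree's featured-class probability for sample x
    into per-feature contributions along its decision path."""
    node = tree
    bias = node["p"]              # probability at the root = bias term
    contrib = {}
    while "feature" in node:      # descend until a leaf is reached
        branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
        child = node[branch]
        contrib[node["feature"]] = (contrib.get(node["feature"], 0.0)
                                    + child["p"] - node["p"])
        node = child
    return bias, contrib

# Toy tree; every node stores its featured-class probability "p".
toy_tree = {
    "p": 0.10, "feature": "respect_count", "threshold": 20,
    "left": {"p": 0.05},
    "right": {
        "p": 0.40, "feature": "wordcount", "threshold": 60,
        "left": {"p": 0.20},
        "right": {"p": 0.70},
    },
}

bias, contrib = feature_contributions(
    toy_tree, {"respect_count": 50, "wordcount": 100})
# bias + sum of contributions telescopes to the leaf probability
```

Averaging these per-prediction contributions over all trees and over the FP/FN sets yields the $c$ values reported in Table 5.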
5.1. Limitations & future research
We see at least two limitations to our approach. The first is related to the platform. Our models make use of a wide range of variables, including aggregated user information, which may not be available for other platforms. Furthermore, our recommendations are based on historical moderation choices and may therefore be biased towards certain content. These choices reflect the platform’s editorial interpretation of a constructive comment. Future research could compare different criteria for featuring posts. Another platform-related limitation is the language: all text in this study was Dutch. Although we did not test the approach on data in another language, the approach itself, which assumes only pre-labeled featured-post data and a transformer language model for the target language, is language-independent.
Second, while we have validated our models on a large collection of articles which resemble the real-life application, we do not know the precise moment at which the moderator selected featured posts. Knowing which posts were available to the moderator at that point in time would allow us to replay the recommendation process in time-realistic detail. Future research will specifically address this issue, using time-stamped data that documents the precise moment moderators selected featured posts.
6. Conclusion
In this paper, we presented a classifier-based recommender system for featured posts to offer decision support to the online content moderator. Using comment and moderation data from a Dutch news platform, we showed that supplementing the non-textual data with text representation achieves the best ranking scores. More specifically, our random forest supplemented with Bag-Of-Words representations achieved the best ranking. While previous research on classifying constructive comments validated their models only on an artificially balanced test set, we validated our models on a large set of articles, replicating real-life practice. Furthermore, content moderators of the platform in question evaluated the output, yielding a NDCG of $0.83$. We unpacked our best performing model in terms of error analysis, showing that our model favoured posts from users with a history of being featured before and might omit comments with a lower respect count.
With our proposed and novel approach combined with transparency, we aim to support and empower the online content moderator in their tasks, while not obscuring the nuance and contextuality of picking featured posts.
Acknowledgements.
This study is part of the project Better-MODS with project number 410.19.006 of the research programme ’Digital Society - The Informed Citizen’ which is financed by the Dutch Research Council (NWO).
References
Brunk et al. (2019)
Jens Brunk, Jana Mattern, and Dennis M. Riehle. 2019. Effect of transparency and trust on acceptance of automatic online comment moderation systems. Proceedings of the 21st IEEE Conference on Business Informatics (CBI 2019) 1 (2019), 429–435. https://doi.org/10.1109/CBI.2019.00056
Delobelle et al. (2020)
Pieter Delobelle, Thomas Winters, and Bettina Berendt. 2020. RobBERT: A Dutch RoBERTa-based language model. Findings of the Association for Computational Linguistics: EMNLP 2020 (2020), 3255–3265. https://doi.org/10.18653/v1/2020.findings-emnlp.292 arXiv:2001.06286
Delort et al. (2011)
Jean Yves Delort, Bavani Arunasalam, and Cecile Paris. 2011. Automatic moderation of online discussion sites. International Journal of Electronic Commerce 15, 3 (2011), 9–30. https://doi.org/10.2753/JEC1086-4415150302
Diakopoulos (2015a)
Nicholas Diakopoulos. 2015a. The Editor’s Eye: Curation and Comment Relevance on the New York Times. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing (Vancouver, BC, Canada) (CSCW ’15). Association for Computing Machinery, New York, NY, USA, 1153–1157. https://doi.org/10.1145/2675133.2675160
Diakopoulos (2015b)
Nicholas Diakopoulos. 2015b. Picking the NYT Picks: Editorial Criteria and Automation in the Curation of Online News Comments. #ISOJ, the official research journal of ISOJ 5, 1 (2015), 147–166. http://www.nickdiakopoulos.com/wp-content/uploads/2011/07/ISOJ_Journal_V5_N1_2015_Spring_Diakopoulos_Picking-NYT-Picks.pdf
Gollatz et al. (2018)
Kirsten Gollatz, Martin Johannes Riedl, and Jens Pohlmann. 2018. Removals of online hate speech in numbers. HIIG Science Blog August 2018 (2018). https://doi.org/10.5281/zenodo.1342324
Gorwa et al. (2020)
Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data and Society 7, 1 (2020). https://doi.org/10.1177/2053951719897945
Jarvelin and Kekalainen (2000)
Kalervo Jarvelin and Jaana Kekalainen. 2000. IR evaluation methods for retrieving highly relevant documents. SIGIR Forum (ACM Special Interest Group on Information Retrieval) (2000), 41–48. https://doi.org/10.1145/3130348.3130374
Kolhatkar and Taboada (2017)
Varada Kolhatkar and Maite Taboada. 2017. Using New York Times Picks to Identify Constructive Comments. In Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism. Association for Computational Linguistics, Copenhagen, Denmark, 100–105. https://doi.org/10.18653/v1/W17-4218
Kolhatkar et al. (2020)
Varada Kolhatkar, Nithum Thain, Jeffrey Sorensen, Lucas Dixon, and Maite Taboada. 2020. Classifying Constructive Comments. (2020), 1–24. arXiv:2004.05476 http://arxiv.org/abs/2004.05476
Loshchilov and Hutter (2019)
Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. ICLR 2019 (2019). arXiv:1711.05101
Meier et al. (2018)
Klaus Meier, Daniela Kraus, and Edith Michaeler. 2018. Audience Engagement in a Post-Truth Age: What it means and how to learn the activities connected with it. Digital Journalism 6, 8 (2018), 1052–1063. https://doi.org/10.1080/21670811.2018.1498295
Molina and Sundar (2022)
Maria D. Molina and S. Shyam Sundar. 2022. When AI moderates online content: Effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication 27, 4 (2022). https://doi.org/10.1093/jcmc/zmac010
Napoles et al. (2017)
Courtney Napoles, Joel Tetreault, Aasish Pappu, Enrica Rosato, and Brian Provenzale. 2017. Finding Good Conversations Online: The Yahoo News Annotated Comments Corpus. In Proceedings of the 11th Linguistic Annotation Workshop. Association for Computational Linguistics, Valencia, Spain, 13–23. https://doi.org/10.18653/v1/W17-0802
NU.nl (2020)
NU.nl. 2020. NUjij laat met expertlabels de kennis en expertise van gebruikers zien. https://www.nu.nl/nulab/6093189/nujij-laat-met-expertlabels-de-kennis-en-expertise-van-gebruikers-zien.html
Park et al. (2016)
Deokgun Park, Simranjit Sachar, Nicholas Diakopoulos, and Niklas Elmqvist. 2016. Supporting comment moderators in identifying high quality online news comments. Conference on Human Factors in Computing Systems - Proceedings (2016), 1114–1125. https://doi.org/10.1145/2858036.2858389
Quandt (2018)
Thorsten Quandt. 2018. Dark participation. Media and Communication 6, 4 (2018), 36–48. https://doi.org/10.17645/mac.v6i4.1519
Quandt et al. (2022)
Thorsten Quandt, Johanna Klapproth, and Lena Frischlich. 2022. Dark social media participation and well-being. Current Opinion in Psychology 45 (2022), 101284. https://doi.org/10.1016/j.copsyc.2021.11.004
Raza and Ding (2022)
Shaina Raza and Chen Ding. 2022. News recommender system: a review of recent progress, challenges, and opportunities. Vol. 55. Springer Netherlands. 749–800 pages. https://doi.org/10.1007/s10462-021-10043-x
Ruckenstein and Turunen (2020)
Minna Ruckenstein and Linda Lisa Maria Turunen. 2020. Re-humanizing the platform: Content moderators and the logic of care. New Media and Society 22, 6 (2020), 1026–1042. https://doi.org/10.1177/1461444819875990
Van Hoek (2020)
Colin Van Hoek. 2020. Hoe NU.nl beter wordt van een robot. https://www.nu.nl/blog/6045082/hoe-nunl-beter-wordt-van-een-robot.html
Wang (2021)
Sai Wang. 2021. Moderating Uncivil User Comments by Humans or Machines? The Effects of Moderation Agent on Perceptions of Bias and Credibility in News Content. Digital Journalism 9, 1 (2021), 64–83. https://doi.org/10.1080/21670811.2020.1851279
Wang and Diakopoulos (2022)
Yixue Wang and Nicholas Diakopoulos. 2022. Highlighting High-quality Content as a Moderation Strategy: The Role of New York Times Picks in Comment Quality. 4, 4 (2022), 1–24.
Wintterlin et al. (2020)
Florian Wintterlin, Tim Schatto-Eckrodt, Lena Frischlich, Svenja Boberg, and Thorsten Quandt. 2020. How to Cope with Dark Participation: Moderation Practices in German Newsrooms. Digital Journalism 8, 7 (2020), 904–924. https://doi.org/10.1080/21670811.2020.1797519
Wolf et al. (2020)
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (2020), 38–45. https://doi.org/10.18653/v1/2020.emnlp-demos.6 arXiv:1910.03771
Random anisotropy
disorder in superfluid ${}^{3}$He-A in aerogel
G. E. Volovik
e-mail: volovik@boojum.hut.fi
Low Temperature Laboratory, Helsinki University of
Technology,
P.O.Box 2200, FIN-02015, HUT, Finland
Landau Institute for Theoretical Physics RAS, Kosygina 2,
119334 Moscow, Russia
Abstract
The anisotropic superfluid ${}^{3}$He-A in aerogel provides an
interesting example of a system with continuous symmetry in the presence
of random anisotropy disorder. Recent NMR experiments allow us to discuss
two regimes of the orientational disorder, which have different
NMR properties. One of them, the (s)-state,
is identified as the pure Larkin-Imry-Ma state. The structure of another state, the (f)-state, is not
very clear: probably it is the Larkin-Imry-Ma state contaminated by the
network of the topological defects pinned by aerogel.
18 September 2006
1 Introduction
Behavior of systems with continuous symmetry in the presence of random
anisotropy disorder is the subject of numerous theoretical and
experimental investigations.
This is because of the surprising observation
made by Larkin [1] and Imry and Ma
[2] that even a
weak disorder may destroy
the long-range translational or orientational order.
A recent example is provided by nematic liquid crystals in
random porous medium, in which the order parameter – the unit
vector $\hat{\bf n}$ – interacts with the quenched random
anisotropy disorder (see e.g. Ref.
[3] and references therein).
Though the problem of the violation of long-range order by quenched disorder is more than 30 years old, there is still no complete understanding (see e.g. Refs.
[3, 4, 5, 6]
and references therein),
especially concerning the role of topological defects.
In the anisotropic phase A of superfluid ${}^{3}$He, the Larkin-Imry-Ma effect is even more interesting. In this superfluid the order
parameter contains two Goldstone vector fields: (1) the unit vector
$\hat{\bf l}$ characterizes the spontaneous anisotropy of the orbital
and superfluid properties of the system; and (2) the unit vector $\hat{\bf d}$
characterizes the spontaneous anisotropy of the spin (magnetic) degrees
of freedom. In aerogel, the quenched random anisotropy disorder of the
silicon strands interacts with the orbital vector $\hat{\bf l}$, which
thus must experience the Larkin-Imry-Ma effect. As for the
vector $\hat{\bf d}$ of the spontaneous anisotropy of spins it is assumed
that $\hat{\bf d}$ does not interact directly with the quenched disorder, at least in the arrangement when the aerogel strands are
covered by layers of ${}^{4}$He atoms preventing the formation of solid
layers of ${}^{3}$He with large Curie-Weiss magnetization. There is
a tiny spin-orbit coupling between vectors $\hat{\bf d}$ and $\hat{\bf l}$ due
to which the $\hat{\bf l}$-vector may transfer
the disorder to the $\hat{\bf d}$-field. Superfluid ${}^{3}$He-A supports many different types of topological defects (see e.g. [7]), which may be pinned by the disorder.
For recent experiments on superfluid ${}^{3}$He-A in aerogel, see Refs.
[8, 9, 10, 11, 12, 13]
and references therein. In particular, Refs. [9, 13] describe the transverse
NMR experiments, in which the dependence of the frequency shift on the
tipping angle $\beta$ of the precessing magnetization has been measured;
and Ref. [13] also reports the observation of the longitudinal NMR.
Here we discuss these experiments in terms of the Larkin-Imry-Ma
disordered state [1, 2] extended for the description of
the superfluid ${}^{3}$He-A in aerogel [14].
In Sec. 2 the general equations for NMR in ${}^{3}$He-A are written. In Sec. 3 these equations are applied to the states with disordered $\hat{\bf d}$ and $\hat{\bf l}$ fields.
In Sec. 4 and Sec. 5 models for the two observed states are suggested in terms of the averaged distributions of $\hat{\bf d}$ and $\hat{\bf l}$ fields consistent with observations. Finally, in Sec. 6 these states are interpreted in terms of different types of disorder.
2 Larmor precession of ${}^{3}$He-A
In a typical experimental arrangement the spin-orbit (dipole-dipole)
energy is smaller than Zeeman energy and thus may be considered as a
perturbation. In zero-order approximation when the dipole energy and
dissipation are neglected, the spin freely precesses with the Larmor
frequency
$\omega_{L}=\gamma H$, where $\gamma$ is the gyromagnetic ratio of ${}^{3}$He
nuclei. In terms of the Euler angles the precession of magnetization is
given by
$${\bf S}(t)=S\,R_{z}(-\omega_{L}t+\alpha)R_{y}(\beta)\hat{\bf z}\,.$$
(1)
Here $S=\chi H$ is the amplitude of spin induced by magnetic field; the
axis
$\hat{\bf z}$ is along the magnetic field
${\bf H}$; matrix
$R_{y}(\beta)$ describes rotation about transverse axis $y$ by angle
$\beta$, which is the tipping angle of the precessing magnetization;
$R_{z}$ describes rotation about $z$; $\alpha$ is the phase of the
precessing magnetization. According to the Larmor theorem
[15], in the precessing frame the vector $\hat{\bf d}$
is in turn precessing about
${\bf S}$. Because of the interaction between the spin ${\bf S}$ and
the order parameter vector $\hat{\bf d}$, the precession of $\hat{\bf d}$
occurs in the plane perpendicular to ${\bf S}$, and it is characterized
by another phase
$\Phi_{d}$. In the laboratory frame the precession of $\hat{\bf d}$ is given by
$$\hat{\bf d}(t)=R_{z}(-\omega_{L}t+\alpha)R_{y}(\beta)R_{z}(\omega_{L}t-\alpha+\Phi_{d})\hat{\bf x}\,,$$
(2)
while the orbital vector $\hat{\bf l}$ is time-independent in this approximation:
$$\hat{\bf l}=\hat{\bf z}\cos\lambda+\sin\lambda(\hat{\bf x}\cos\Phi_{l}+\hat{\bf y}\sin\Phi_{l})\,.$$
(3)
This is the general state of the pure Larmor precession of ${}^{3}$He-A, and
it contains 5 Goldstone parameters: 2 angles $\alpha$ and $\beta$ of the
magnetization in the precessing frame; angle
$\Phi_{d}$ which characterizes the precession of vector $\hat{\bf d}$; and
two angles
$\lambda$ and $\Phi_{l}$ of the orbital vector $\hat{\bf l}$.
The degeneracy is lifted by spin-orbit (dipole) interaction
[16]
$$F_{D}=-\frac{\chi\Omega^{2}_{A}}{2\gamma^{2}}(\hat{\bf l}\cdot\hat{\bf d})^{2}\,,$$
(4)
where $\Omega_{A}$ is the
so-called Leggett frequency. In the bulk homogeneous ${}^{3}$He-A, the
Leggett frequency coincides with the frequency of the longitudinal NMR,
$\omega_{\parallel}=\Omega_{A}$. In typical experiments one has $\Omega_{A}\ll\omega_{L}$, which allows us to use the spin-orbit interaction averaged over the fast Larmor
precession:
[17, 15]
$$\bar{F}_{D}=\frac{\chi\Omega^{2}_{A}}{2\gamma^{2}}U\,,$$
(5)
$$U=-\frac{1}{2}\sin^{2}\beta+\frac{1}{4}(1+\cos\beta)^{2}\sin^{2}\Phi\sin^{2}\lambda-\left(\frac{7}{8}\cos^{2}\beta+\frac{1}{4}\cos\beta-\frac{1}{8}\right)\sin^{2}\lambda\,,$$
(6)
where $\Phi=\Phi_{d}-\Phi_{l}$.
The dipole interaction generates the frequency shift of the transverse
NMR from the Larmor frequency:
$$\omega_{\perp}-\omega_{L}=-\frac{\partial\bar{F}_{D}}{\partial(S\cos\beta)}=-\frac{\Omega^{2}_{A}}{2\omega_{L}}\frac{\partial U}{\partial\cos\beta}\,.$$
(7)
In the bulk ${}^{3}$He-A, the minimum of the dipole interaction requires that
$\Phi_{d}=\Phi_{l}$, and $\sin^{2}\lambda=1$, i.e. the equilibrium
position of $\hat{\bf l}$ is in the plane perpendicular to ${\bf H}$.
However, for the ${}^{3}$He-A confined in aerogel, the interaction with the quenched disorder may essentially modify this spatially homogeneous state
by destroying the long-range orientational order due to the Larkin-Imry-Ma
effect
[14].
3 Two states of ${}^{3}$He-A in aerogel
Experiments reported in Ref. [13] demonstrate two different types of
magnetic behavior of the A-like phase in aerogel, denoted as the (f+c)-state and the (c)-state, respectively. The (f+c)-state contains two overlapping lines, (f) and (c), in the transverse NMR spectrum (far from and close to the Larmor frequency, respectively). The frequency shift of the transverse NMR is about 4 times larger for the (f)-line than for the (c)-line.
The behavior under applied gradient of magnetic field suggests that the
(f+c)-state consists of two magnetic states concentrated in different parts of
the cell. The (c)-state contains only a single (c)-line in the spectrum, and it is
obtained after application of the 180 degree pulse while cooling through
$T_{c}$. The pure (f)-state, i.e.
the state with a single (f)-line, has not been observed.
The (c) and (f+c) states show different dependences of the frequency shift $\omega_{\perp}-\omega_{L}$ on the tipping angle $\beta$ in the pulsed NMR experiments: $\omega_{\perp}-\omega_{L}\propto\cos\beta$ in the pure (c)-state; and
$\omega_{\perp}-\omega_{L}\propto(1+\cos\beta)$ in the (f+c)-state. The
latter behavior probably characterizes the (f)-line, which has the larger shift and dominates the spectrum of the (f+c)-state.
The $(1+\cos\beta)$-law has also been observed in Ref.
[9].
The experiments with longitudinal NMR were also reported in Ref.
[13]. The longitudinal resonance in the (f+c)-state has been observed; however, no trace of the longitudinal resonance has been seen in the (c)-state.
Let us discuss this behavior in terms of the disordered states emerging in
${}^{3}$He-A due to the orientational disorder.
In the extreme case of weak disorder, the characteristic Imry-Ma length
$L$ of the disordered $\hat{\bf l}$-texture is much bigger than the
characteristic length scale $\xi_{D}$ of dipole interaction, $L\gg\xi_{D}$.
In this case the equilibrium values of
$\Phi$ and $\lambda$ are dictated by the spin-orbit interaction, $\Phi=\Phi_{d}-\Phi_{l}=0$ and $\sin^{2}\lambda=1$; and Eq.(7) gives
$$\omega_{\perp}-\omega_{L}=\frac{\Omega^{2}_{A}}{8\omega_{L}}(1+3\cos\beta)~{}.$$
(8)
This dependence fits neither the (f)-state $(1+\cos\beta)$ behavior nor the
$\cos\beta$ law in the (c)-state, which indicates that the disorder is not weak compared to the spin-orbit energy.
In the extreme case of strong disorder, when $L\ll\xi_{D}$, both $\Phi$
and $\hat{\bf l}$ are random:
$$\displaystyle\left<\sin^{2}\Phi\right>-\frac{1}{2}=0~{},$$
(9)
$$\displaystyle\left<\sin^{2}\lambda\right>-\frac{2}{3}=0~{}.$$
(10)
In this case
it follows from Eq.(7) that the frequency shift is absent:
$$\omega_{\perp}=\omega_{L}~{}.$$
(11)
Eq.(9) means that $\Phi_{l}$ and $\Phi_{d}$ are dipole unlocked, i.e. they are not locked by the spin-orbit dipole-dipole interaction, which is natural in the case of a small Imry-Ma
length, $L\ll\xi_{D}$. In principle, there can be three different dipole-unlocked cases: (i)
when both
$\Phi_{l}$ and
$\Phi_{d}$ are random and independent; (ii) when $\Phi_{l}$ is random while
$\Phi_{d}$ is fixed; (iii) when $\Phi_{d}$ is random while $\Phi_{l}$ is fixed.
The strong disorder limit is consistent with the observation that the frequency shift of the (c)-line is much smaller
than the frequency shift in ${}^{3}$He-B in aerogel. The observed non-zero value can be explained in terms of the small first order corrections to the strong disorder limit. Let us
introduce the parameters
$$a=\frac{1}{2}-\left<\sin^{2}\Phi\right>\,,\qquad b=\left<\sin^{2}\lambda\right>-\frac{2}{3}\,,$$
(12)
which describe the deviation from the strong disorder limit.
These parameters are zero in the limit of strong disorder $L^{2}/\xi_{D}^{2}\rightarrow 0$, and one may expect that in the pure Larkin-Imry-Ma state they are proportional to the small parameter $L^{2}/\xi_{D}^{2}\ll 1$.
The behavior of these two parameters can be essentially different in different
realizations of the disordered state, since the vector $\hat{\bf l}$ entering the parameter $a$ interacts
with the quenched orientational disorder directly, while $\Phi_{d}$ only interacts with
$\Phi_{l}$ via the spin-orbit coupling. That is why we shall try to interpret the two observed magnetic states,
the (c)-state and the (f)-state, in terms of different realizations of the textural disorder described by different phenomenological relations between
parameters $a$ and $b$ in these two states.
4 Interpretation of (c)-state
The observed $\cos\beta$-dependence of the transverse NMR frequency shift in the
(c)-state [13] can be reproduced if we assume that in the (c)-state
the parameters $a_{c}$ and $b_{c}$ satisfy the following relation: $a_{c}\ll b_{c}$. Then in the main approximation,
$$a_{c}=0~{}~{},~{}~{}b_{c}\neq 0~{},$$
(13)
the effective potential $U$ in Eq.(6) is
$$U_{c}=-\frac{3}{4}b_{c}\cos^{2}\beta+\frac{1}{4}\left(b_{c}+\frac{1}{3}\right)\,.$$
(14)
If the parameter $b_{c}$ does not depend on $\beta$, the variation of $U_{c}$
with respect to $\cos\beta$ gives the required $\cos\beta$-dependence of the
frequency shift of transverse NMR in the (c)-state:
$$\omega_{c\perp}-\omega_{L}=\frac{3\Omega^{2}_{A}}{4\omega_{L}}b_{c}\cos\beta~{}.$$
(15)
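Indeed, Eq.(15) follows from Eq.(14) via Eq.(7): with $b_{c}$ independent of $\beta$,

```latex
\frac{\partial U_{c}}{\partial\cos\beta} = -\frac{3}{2}\,b_{c}\cos\beta
\quad\Longrightarrow\quad
\omega_{c\perp}-\omega_{L} = -\frac{\Omega^{2}_{A}}{2\omega_{L}}\,\frac{\partial U_{c}}{\partial\cos\beta}
= \frac{3\Omega^{2}_{A}}{4\omega_{L}}\,b_{c}\cos\beta\,.
```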
Let us estimate the parameter $b_{c}$ using the following consideration.
The dipole energy which depends on $\lambda$ violates the complete
randomness of the Larkin-Imry-Ma state, and thus perturbs the average value of
$\sin^{2}\lambda$. The deviation of $b_{c}$ from zero is given by:
$$b_{c}\sim\frac{L^{2}}{\xi_{D}^{2}}\left(\cos^{2}\beta-\frac{1}{3}\right)~{}.$$
(16)
In this model the potential and frequency shift become
$$U_{c}\sim-\frac{L^{2}}{\xi_{D}^{2}}\left(\cos^{2}\beta-\frac{1}{3}\right)^{2}\,,$$
(17)
$$\omega_{c\perp}-\omega_{L}\sim\frac{\Omega^{2}_{A}}{\omega_{L}}\frac{L^{2}}{\xi_{D}^{2}}\cos\beta\left(\cos^{2}\beta-\frac{1}{3}\right)\,.$$
(18)
This $\beta$-dependence of the transverse NMR shift is also antisymmetric with respect to the transformation $\beta\rightarrow\pi-\beta$, as in the model with the $\beta$-independent parameter $b_{c}$ in Eq.(15); unlike that model, however, it is inconsistent with the experiment (see Fig. 6 in Ref.
[13]). Certainly the theory must be refined to estimate the first
order corrections to the zero values of the parameters
$a_{c}$ and $b_{c}$.
The frequency of the longitudinal NMR in the (c)-state is zero in the
local approximation. The correction due to the deviation of $\Phi$
from the random behavior, i.e. due to the non-zero value of the parameter
$a_{c}$ in the (c)-state, is:
$$\omega_{c\parallel}^{2}=\frac{2}{3}\left(1-2\left<\sin^{2}\Phi\right>\right)\Omega^{2}_{A}=\frac{4a_{c}}{3}\Omega^{2}_{A}\,.$$
(19)
In the simplest Imry-Ma model $a_{c}\sim L^{2}/\xi_{D}^{2}\ll 1$, and thus the frequency of the longitudinal NMR is small compared with the frequency of the longitudinal resonance in the (f+c)-state, discussed in the next section. This is consistent with the non-observation of the longitudinal NMR in the (c)-state: under
conditions of this experiment the longitudinal resonance cannot be seen if its
frequency is much smaller than the frequency of the longitudinal resonance
observed in the (f+c)-state [13].
5 Interpretation of (f)-state
The observed $(1+\cos\beta)$-dependence of the transverse NMR frequency shift of
the (f)-line dominating in the (f+c)-state [13, 9] is reproduced if we
assume that for the (f)-line one has
$a_{f}\gg b_{f}$. In this case, in the main approximation the (f)-state may be
characterized by
$$a_{f}\neq 0~{}~{},~{}~{}b_{f}=0~{},$$
(20)
and one obtains:
$$\omega_{f\perp}-\omega_{L}=\frac{a_{f}}{6}\frac{\Omega^{2}_{A}}{\omega_{L}}(1+\cos\beta)\,.$$
(21)
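This form can be checked against Eqs.(6) and (7) (a derivation sketch added here for completeness): with $\left<\sin^{2}\Phi\right>=\tfrac{1}{2}-a_{f}$ and $\left<\sin^{2}\lambda\right>=\tfrac{2}{3}$, the $\beta$-dependent part of the averaged potential is

```latex
U_{f} = \mathrm{const} - \frac{a_{f}}{6}\,(1+\cos\beta)^{2}\,,
\qquad
\omega_{f\perp}-\omega_{L} = -\frac{\Omega^{2}_{A}}{2\omega_{L}}\,\frac{\partial U_{f}}{\partial\cos\beta}
= \frac{a_{f}}{6}\,\frac{\Omega^{2}_{A}}{\omega_{L}}\,(1+\cos\beta)\,,
```

in agreement with Eq.(21).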
Let us compare the frequency shift of the (f)-line with that of the (c)-line in
Eq. (15) at $\beta=0$:
$$\frac{\omega_{f\perp}-\omega_{L}}{\omega_{c\perp}-\omega_{L}}=\frac{4a_{f}}{9b_{c}}\,.$$
(22)
According to experiments [13] this ratio is about 4, and thus one obtains the
estimate $b_{c}\sim 0.1a_{f}$. This supports the strong-disorder limit, $b_{c}\ll 1$, for the (c)-state. If the statistical properties of the $\hat{\bf l}$-texture in the (f)-state are similar to those in the (c)-state, then one has
$b_{f}\ll a_{f}$ as suggested in Eq.(20).
The frequency of the longitudinal NMR in such a state is
$$\omega_{f\parallel}^{2}=\frac{4a_{f}}{3}\Omega^{2}_{A}~{},$$
(23)
which gives the relation between the transverse and longitudinal NMR
frequencies
$$\omega_{f\perp}-\omega_{L}=\frac{1}{8}\frac{\omega_{f\parallel}^{2}}{\omega_{L}}(1+\cos\beta)\,.$$
(24)
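Eq.(24) is obtained by eliminating $a_{f}$ between Eqs.(21) and (23):

```latex
a_{f}\,\Omega^{2}_{A} = \frac{3}{4}\,\omega_{f\parallel}^{2}
\quad\Longrightarrow\quad
\omega_{f\perp}-\omega_{L} = \frac{a_{f}}{6}\,\frac{\Omega^{2}_{A}}{\omega_{L}}(1+\cos\beta)
= \frac{1}{8}\,\frac{\omega_{f\parallel}^{2}}{\omega_{L}}(1+\cos\beta)\,.
```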
This relation is also valid for Fomin’s robust phase [18] (see
[19]). However, the frequency of the longitudinal NMR measured
in the (f+c)-state [13] does not satisfy this relation: the
measured value of $\omega_{f\parallel}$ is about 0.65 of the value which follows
from Eq.(24) if one uses the measured $\omega_{f\perp}-\omega_{L}$. The situation can probably be improved if one considers the
interaction between the $f$ and $c$ lines in the (f+c)-state (see Ref.
[13]).
6 Discussion
6.1 Interpretation of A-phase states in aerogel.
The observed two magnetic states of ${}^{3}$He-A in aerogel [13] can be
interpreted in the following way. The pure (c)-state is the Larkin-Imry-Ma phase
with strong disorder, $L\ll\xi_{D}$. The (f+c)-state can be considered as a mixed state with the volume $V_{c}$
filled by the Larkin-Imry-Ma phase, while the rest of volume $V_{f}=V-V_{c}$ consists
of the (f)-state. The (f)-state is also random due to the Larkin-Imry-Ma effect,
but the spin variable $\Phi_{d}$ and the orbital variable $\Phi_{l}$ are not
completely independent in this state. If $\Phi_{d}$ partially follows
$\Phi_{l}$, the difference $\Phi=\Phi_{d}-\Phi_{l}$ is not random and the parameter
$a_{f}$ in the (f)-state is not very small, being equal to $1/2$ in the extreme
dipole-locked case. Thus, for the (f+c)-state one may assume that $a_{f}\gg b_{f},b_{c},a_{c}$. As a result, the (f)-line has an essentially larger transverse NMR frequency shift and an essentially larger longitudinal frequency compared to the
(c)-line.
Both results are consistent with the experiment: from the transverse NMR it
follows that $b_{c}\sim 0.1a_{f}$ (see Eq.(22)); and from the lack of the
observation of longitudinal NMR in the (c)-state it follows that $a_{c}\ll a_{f}$.
This confirms the assumption of the strong disorder in the (c)-state, in which the
smallness of the parameters
$b_{c}$ and $a_{c}$ is the result of the randomness of the $\hat{\bf l}$-texture on
the length scale $L\ll\xi_{D}$.
The $\cos\beta$-dependence of $\omega_{c\perp}-\omega_{L}$ in
Eq.(15) and
$(1+\cos\beta)$-dependence of $\omega_{f\perp}-\omega_{L}$ in
Eq.(21) are also consistent with the experiment. An open problem is how to estimate the phenomenological parameters
$a_{f},b_{f},b_{c},a_{c}$ theoretically and to find their possible dependence on $\beta$.
The ‘universal’ relation (24)
between the longitudinal and
transverse NMR frequencies is not satisfied in the experiment, but an exact
relation cannot be expected from such a crude model, which ignores the interaction between
the $f$ and $c$ lines in the (f+c)-state (see [13]).
Moreover, we use the local approximation, i.e. we do not take into account the
fine structure of the NMR line which may contain the satellite peaks due to the
bound states of spin waves in the texture of
$\hat{\bf l}$ and $\hat{\bf d}$ vectors. The tendency, however, is correct: the
smaller the transverse NMR frequency shift, the smaller the longitudinal NMR
frequency.
6.2 Global anisotropy and negative frequency shift
For further consideration one must take into account that in some aerogel
samples the large negative frequency shift has been observed for the
A-phase
[10, 11, 12, 20].
The reason for the negative shift is deformation
of the aerogel sample, which leads to a global
orientation of the orbital vector $\hat{\bf l}$ over a large region of
the aerogel [20]. The effect of regular uniaxial
anisotropy in aerogel has been considered in Refs. [21, 22].
It is important that even a rather small deformation of aerogel may
kill the subtle collective Larkin-Imry-Ma effect and lead to the uniform
orientation of the
$\hat{\bf l}$-vector. Using the estimation of the Imry-Ma length in Ref.
[14], one can find that the critical stretching of the aerogel
required to kill the Larkin-Imry-Ma effect is proportional to
$(R/\xi_{0})^{3}$. Here
$R$ is the radius of the silica strands and $\xi_{0}$ is the superfluid
coherence length.
From
Eqs.(6) and (7) it follows that the maximum possible
negative frequency shift could occur if in some region the global
orientation of
$\hat{\bf l}$ induced by deformation of the aerogel is along the magnetic
field (i.e. $\lambda=0$):
$$\omega_{\perp}-\omega_{L}=-\frac{\Omega^{2}_{A}}{2\omega_{L}}~{}.$$
(25)
Such longitudinal orientation of $\hat{\bf l}$ is possible because the
regular anisotropy caused by the deformation of aerogel is bigger than the
random anisotropy, which in turn in the strong disorder limit is bigger
than the dipole energy preferring the transverse orientation of
$\hat{\bf l}$.
Comparing
the measured magnitude of the negative shift (which cannot be bigger
than the maximum possible in Eq.(25)) with the measured
positive shift of the (f)-line in the (f+c)-state
[23, 13] one obtains that the parameter $a_{f}$ in
Eq.(21) must be smaller than $0.25$. This is also
confirmed by the results of the longitudinal NMR experiments
[13], which show that the frequency of the longitudinal
NMR in the (f+c)-state of ${}^{3}$He-A is much smaller than the frequency
of the longitudinal NMR in ${}^{3}$He-B. The latter is only possible if
$a_{f}\ll 1$ in Eq.(23), i.e. the (f)-state is also in the
regime of strong disorder. Thus there is only the partial dipole locking between
the spin variable $\Phi_{d}$ and the orbital variable $\Phi_{l}$ in the (f)-state.
6.3 Possible role of topological defects.
The origin of the (f)-state is not entirely clear. Partial
dipole locking is possible if the characteristic size of the $\hat{\bf l}$-texture in the (f)-state is on the order of, or somewhat smaller than,
$\xi_{D}$.
Alternatively, the line (f) could come from the topological defects of
the A-phase (vortices, solitons, vortex sheets, etc., see Ref.
[7]). The defects could appear during cooling down the sample
from the normal (non-superfluid) state and are annealed by application of
the 180 degree pulse during this process. Appearance of a large amount
of pinned topological defects in ${}^{3}$He-B in aerogel has been suggested
in Ref.
[24]. The reason why topological defects may affect the NMR
spectrum in ${}^{3}$He-A is the following. In the strong-disorder
limit the texture is random and, in the main approximation, the frequency shift is zero if
one neglects the $L/\xi_{D}$ corrections. The
topological defect introduces some kind of order: some correlations are
nonzero because of the conserved topological charge of the defect. That
is why the frequency shift will be nonzero. It will be small, but still
bigger than that due to corrections of order
$(L/\xi_{D})^{2}$.
If this interpretation is correct, there are two different realizations
of the disordered state in the system with quenched orientational
disorder: the network of the pinned topological defects and the pure
Larkin-Imry-Ma state. Typically one has the interplay between these two
realizations, but the defects can be erased by the proper annealing
leaving the pure Larkin-Imry-Ma state.
6.4 Superfluid properties of A-phase in aerogel
The interesting problem concerns the superfluid density $\rho_{s}$ in the
states with the orientational disorder in the vector $\hat{\bf l}$. In Ref.
[14] it was suggested that
$\rho_{s}=0$ in such a state. Whether the superfluid density is zero or not
depends on the rigidity of the $\hat{\bf l}$-vector. If the $\hat{\bf l}$-texture is flexible, then due to the Mermin-Ho relation between the
texture and the superfluid velocity, the texture is able to respond to
the applied superflow by screening the supercurrent. As a result
the superfluid density in the flexible texture could be zero. The
experiments on ${}^{3}$He-A in aerogel demonstrated that $\rho_{s}\neq 0$ (see
e.g.
[8] and references therein).
However, most probably these
experiments have been done in the (f+c)-state. If our interpretation of this
state in terms of the topological defects is correct, the non-zero value of
superfluid density could be explained in terms of the pinning of the defects
which leads to the effective rigidity of the $\hat{\bf l}$-texture in the
(f+c)-state.
Whether the superfluid density is finite in the pure Larkin-Imry-Ma
state, identified here as the (c)-state, remains an open experimental and theoretical
question. The theoretical discussion of the rigidity or quasi-rigidity in such a
state can be found in Refs.
[25, 6]. In any case, one may expect that the observed two
states of ${}^{3}$He-A in aerogel, (c) and (f+c), have different superfluid
properties.
Recent Lancaster experiments with a vibrating aerogel sample indicate
that a sufficiently large superflow produces a state with regular (non-random) orientation of $\hat{\bf l}$ in aerogel, and that in the
oriented state the superfluid density is larger [26].
This suggests that the orientational disorder does lead to at least a partial suppression of the superfluid density.
6.5 Conclusion.
In conclusion, the NMR experiments on the A-like superfluid state in
the aerogel indicate two types of behavior. Both of them can be
interpreted in terms of the random texture of the orbital vector
$\hat{\bf l}$ of the ${}^{3}$He-A order parameter. This supports the idea
that the superfluid ${}^{3}$He-A in aerogel exhibits the Larkin-Imry-Ma
effect: destruction of the long-range orientational order by random
anisotropy produced by the randomly oriented silica strands of the aerogel.
Extended numerical simulations are needed to clarify the role of the topological
defects in the Larkin-Imry-Ma state, and to calculate the dependence of the NMR
line-shape and superfluid density on concentration and pinning of the
topological defects.
I thank V.V. Dmitriev and D.E. Khmelnitskii for illuminating discussions,
V.V. Dmitriev, Yu.M. Bunkov and S.N. Fisher for presenting the experimental results before
publication, and I.A. Fomin who attracted my attention to the relation
between frequencies of the longitudinal and transverse NMR in some
states. This work was supported in part by the Russian Foundation for
Fundamental Research and the ESF Program COSLAB.
References
[1]
A.I. Larkin,
Effect of inhomogeneities on the structure of
the mixed state of superconductors,
JETP 31, 784 (1970).
[2]
Y. Imry and S.K. Ma, Random-field instability of the ordered state of continuous symmetry, Phys. Rev. Lett. 35, 1399
(1975).
[3]
D.E. Feldman and R.A. Pelcovits, Liquid crystals in
random porous media: Disorder is stronger in low-density aerosils,
Phys. Rev. E 70, 040702(R) (2004); L. Petridis and E.M. Terentjev,
Nematic-isotropic transition with quenched disorder, cond-mat/0610010.
[4]
T. Emig and T. Nattermann, Planar defects and the fate of
the Bragg glass phase of type-II superconductors, cond-mat/0604345.
[5]
J. Wehr, A.Niederberger,
L. Sanchez-Palencia and M. Lewenstein, Disorder contra Mermin-Wagner-Hohenberg
effect: From classical spin systems to ultracold atomic gases, cond-mat/0604063.
[6]
M. Itakura,
Frozen quasi long range order in random anisotropy Heisenberg magnet, Phys. Rev. B
68, 100405 (2003).
[7]
G.E. Volovik, The Universe in a Helium Droplet,
Clarendon Press, Oxford (2003).
[8]
E. Nazaretski, N. Mulders and J. M.
Parpia, Metastability and superfluid fraction of the A-like
and B phases of ${}^{3}$He in aerogel in zero magnetic field, JETP Lett.
79, 383–387 (2004).
[9]
O. Ishikawa, R. Kado, H. Nahagawa, K. Obara, H. Yano, and T. Hata, Pulsed NMR
Measurements in Superfluid ${}^{3}$He in aerogel of 97.5$\%$ porosity,
abstract of LT-24 Conference, Official Conference Book (2005), p.200.
[10]
B.I. Barker, Y. Lee, L. Polukhina, D.D. Osheroff,
L.V. Hrubesh and J.F. Poco, Observation of a superfluid He-3 A–B phase
transition in silica aerogel, Phys. Rev. Lett. 85, 2148–2151 (2000).
[11]
V.V. Dmitriev, I.V. Kosarev,
N. Mulders, V.V. Zavjalov, D.Ye. Zmeev, Experiments on A-like to B phase
transitions of ${}^{3}$He confined to aerogel, Physica B 329–333, 320–321 (2003).
[12]
V.V. Dmitriev, I.V. Kosarev, N. Mulders, V.V.
Zavjalov, D.Ye. Zmeev, Pulsed NMR experiments in superfluid ${}^{3}$He in
aerogel, Physica B 329–333, 296–298 (2003).
[13]
V.V. Dmitriev, L.V. Levitin,
N. Mulders, D.Ye. Zmeev, Longitudinal NMR and spin states in the A-like phase
of ${}^{3}$He in aerogel, Pis’ma ZhETF, this issue; cond-mat/0607789.
[14]
G.E. Volovik, Glass state of superfluid ${}^{3}$He-A in
aerogel, JETP Lett. 63, 301–304 (1996); No robust phases in
aerogel: ${}^{3}$He-A with orientational disorder in the Ginzburg-Landau
model, JETP Lett.
81, 647–649 (2005), cond-mat/0505281.
[15]
Yu. M. Bunkov and G.E. Volovik, On the
possibility of the Homogeneously Precessing Domain in bulk ${}^{3}$He-A,
Europhys. Lett. 21, 837–843 (1993).
[16]
D. Vollhardt and P. Wölfle, The Superfluid Phases of Helium 3, Taylor and Francis, London (1990).
[17]
A.D. Gongadze, G.E. Gurgenishvili and G.A.
Kharadze, Zh. Eksp. Teor. Fiz. 78, 615 (1980); [Sov. Phys. JETP,
51, 310 (1980)].
[18]
I.A. Fomin, Nutations of spin in the quasi-isotropic
superfluid A-like phase of ${}^{3}$He, cond-mat/0605126; Longitudinal
resonance and identification of the order parameter of the A-like phase
in superfluid ${}^{3}$He in aerogel, cond-mat/0607790.
[19]
M. Miura, S. Higashitani, M. Yamamoto, K.
Nagai, NMR properties of a possible non-unitary state of
superfluid ${}^{3}$He in aerogel, J. Low Temp. Phys. 138, 153–157
(2005).
[20]
Yu.M. Bunkov, private communications.
[21]
J. Pollanen, S. Blinstein, H. Choi, J.P. Davis,
T.M. Lippman, L.B. Lurio and W.P. Halperin, Anisotropic aerogels for
studying superfluid
${}^{3}$He, cond-mat/0606784.
[22]
K. Aoyama and R. Ikeda, Phys. Rev. B 73,
060504(R) (2006).
[23]
V.V. Dmitriev, private communications.
[24]
Yu. Bunkov, E. Collin, H. Godfrin and R. Harakaly,
Topological defects and coherent magnetization precession of ${}^{3}$He in
aerogel, Physica B 329–333, 305–306 (2003).
[25]
K. Efetov and A.I. Larkin,
Charge-density wave in a random potential,
JETP 45, 1236 (1977).
[26]
S.N. Fisher, private communications. |
A Standard Law for the Equatorward Drift of the Sunspot Zones
D. H. \surnameHathaway${}^{1}$
${}^{1}$ NASA Marshall Space Flight Center, Huntsville, AL 35812 USA
email: david.hathaway@nasa.gov
Abstract
The latitudinal location of the sunspot zones in each hemisphere is determined by calculating the centroid position of sunspot areas for each solar rotation from May 1874 to June 2011.
When these centroid positions are plotted and analyzed as functions of time from each sunspot cycle maximum, there appear to be systematic differences in the positions and equatorward drift rates as a function of sunspot cycle amplitude.
If, instead, these centroid positions are plotted and analyzed as functions of time from each sunspot cycle minimum then most of the differences in the positions and equatorward drift rates disappear.
The differences that remain disappear entirely if curve fitting is used to determine the starting times (which vary by as much as 8 months from the times of minima).
The sunspot zone latitudes and equatorward drift measured relative to this starting time follow a standard path for all cycles with no dependence upon cycle strength or hemispheric dominance.
Although Cycle 23 was peculiar in its length and the strength of the polar fields it produced, it too shows no significant variation from this standard.
This standard law, and the lack of variation with sunspot cycle characteristics, is consistent with Dynamo Wave mechanisms but not consistent with current Flux Transport Dynamo models for the equatorward drift of the sunspot zones.
keywords: Solar Cycle, Observations; Sunspots, Statistics; Sunspots, Velocity
1 Introduction
The equatorward drift of the sunspot zones is now a well known characteristic of the sunspot cycle.
While \inlineciteCarrington58 noted the disappearance of low latitude spots followed by the appearance of spots confined to mid-latitudes during the transition from Cycle 9 to Cycle 10, and \inlineciteSporer80 calculated and plotted the equatorward drift of sunspot zones over Cycles 10 and 11, the very existence of the sunspot zones was still in question decades later [Maunder (1903)].
The publication of the “Butterfly Diagram” by \inlineciteMaunder04 laid this controversy to rest and revealed a key aspect of the sunspot cycle – sunspots appear in zones on either side of the equator that drift toward the equator as each sunspot cycle progresses.
Cycle-to-cycle variations in the sunspot latitudes have been noted previously.
\inlineciteBecker54 and \inlineciteWaldmeier55 both noted that, at maximum, the sunspot zones are at higher latitudes in the larger sunspot cycles.
More recently, \inlineciteHathaway_etal03 found an anti-correlation between the equatorward drift rate and cycle period and suggested that this was evidence in support of flux transport dynamos [Nandy & Choudhuri (2002)].
However, \inlineciteHathaway10 noted that all these results are largely due to the fact that larger sunspot cycles reach maximum sooner than smaller sunspot cycles and that the drift rate is faster in the earlier part of both small and large cycles.
Nonetheless, \inlineciteHathaway10 did find that the sunspot zones appeared at slightly higher latitudes (with slightly higher drift rates) in the larger sunspot cycles when comparisons were made relative to the time of sunspot cycle minimum.
The equatorward drift of the sunspot zones is a key characteristic of the sunspot cycle.
It must be reproduced in viable models for the Sun’s magnetic dynamo and can be used to discriminate between the various models.
In the \inlineciteBabcock61 and \inlineciteLeighton69 dynamo models the latitudinal positions of the sunspot zones are determined by the latitudes where the differential rotation and initial magnetic fields produce magnetic fields strong enough to make sunspots.
This “critical” latitude moves equatorward from the position of strongest latitudinal shear as the cycle progresses.
The initial strength of the magnetic field in these models is determined by the polar field strength at cycle minimum so we might expect a delayed start for cycles starting with weak polar fields and the equatorward propagation might depend on both the differential rotation profile (which doesn’t vary substantially) and the initial polar fields (which do vary substantially).
In a number of dynamo models (both kinematic and magnetohydrodynamic) the equatorward drift of the sunspot zones is produced by a “Dynamo Wave” (cf. \openciteYoshimura75) which
propagates along iso-rotation surfaces at a rate and direction given by the product of the shear in the differential rotation and the kinetic helicity in the fluid motions.
In these models the equatorward propagation is a function of the differential rotation profile and the profile of kinetic helicity - both of which don’t vary substantially.
In flux transport dynamo models (cf. \openciteNandyChoudhuri02) the equatorward drift is produced primarily by the equatorward return flow of a proposed deep meridional circulation.
In these models, variations in the meridional flow speed (which does vary substantially with cycle amplitude and duration in these models) should be observed as variations in the equatorward drift rate of the sunspot zones.
Here we reexamine the latitudes of the sunspot zones and find that cycle-to-cycle and hemispheric variations vanish when time is measured relative to a cycle starting time derived from fitting the monthly sunspot numbers in each cycle to a functional form for the cycle shape.
2 The Sunspot Zones
Sunspot group positions and areas have been measured daily since May 1874.
The Royal Observatory Greenwich carried out the earlier observations using a small network of solar observatories from May 1874 to December 1976.
The United States Air Force and National Oceanic and Atmospheric Administration continued to acquire similar observations from a somewhat larger network starting in January 1977.
We calculate the average daily sunspot area over each Carrington rotation for 50 equal area latitude bins (equi-spaced in $\sin\lambda$ where $\lambda$ is the latitude).
The sunspot zones are clearly evident in the resulting Butterfly Diagram - Figure 1.
We divide the data into separate sunspot cycles by attributing low-latitude groups to the earlier cycle and high-latitude groups to the later cycle when the cycles overlap at minima.
We further divide the data by hemisphere and then calculate the latitude, $\bar{\lambda}$, of the centroid of sunspot area for each hemisphere for each rotation of each sunspot cycle using
$$\bar{\lambda}=\sum{A(\lambda_{i})\lambda_{i}}/\sum{A(\lambda_{i})}$$
(1)
where $A(\lambda_{i})$ is the average daily sunspot area in the latitude bin centered on latitude $\lambda_{i}$ and the sums are over the 25 latitude bins for a given hemisphere and Carrington rotation.
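A minimal sketch of this centroid computation (Eq. 1), with a hypothetical area profile standing in for the actual Greenwich/USAF-NOAA data, might look like:

```python
import numpy as np

def centroid_latitude(areas, latitudes):
    # Eq. (1): area-weighted centroid, sum(A_i * lambda_i) / sum(A_i)
    areas = np.asarray(areas, dtype=float)
    latitudes = np.asarray(latitudes, dtype=float)
    total = areas.sum()
    if total == 0.0:
        return np.nan  # no spots in this hemisphere for this rotation
    return float((areas * latitudes).sum() / total)

# 25 equal-area bins per hemisphere, equi-spaced in sin(latitude):
lats = np.degrees(np.arcsin((np.arange(25) + 0.5) / 25.0))

# Toy daily-average area profile peaked near 15 degrees (illustrative only):
areas = np.exp(-((lats - 15.0) ** 2) / 25.0)

lam_bar = centroid_latitude(areas, lats)  # close to 15 degrees
```

In practice this computation is repeated for each hemisphere and each Carrington rotation of each cycle, yielding the centroid time series analyzed below.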
These centroid positions then provide the sunspot zone latitudes and drift rates for each hemisphere as a function of time through each cycle.
3 The Sunspot Zone Equatorward Drift
Previous work on cycle-to-cycle variations in the positions and drift rates of the sunspot zones [Becker (1954), Waldmeier (1955), Hathaway et al. (2003)] made those measurements relative to the sunspot cycle maxima.
The centroid position data are plotted as functions of time from cycle maxima in Figure 2.
The data encompass 12 sunspot cycles which, fortuitously, include four cycles much smaller than average (Cycles 12, 13, 14, and 16 with smoothed sunspot cycle maxima below 90), four cycles much larger than average (Cycles 18, 19, 21, and 22 with smoothed sunspot cycle maxima above 150), and four cycles close to the average (Cycles 15, 17, 20, and 23).
Figure 2 illustrates why the earlier studies concluded that larger cycles tend to have sunspot zones at higher latitudes.
The centroid positions for the large cycles (in red) are clearly at higher latitudes than those for the medium cycles which, in turn, are at higher latitudes than those for the small cycles.
While this conclusion is technically correct, it is somewhat misleading since large cycles reach their maxima sooner than small cycles (the “Waldmeier Effect” \openciteWaldmeier35 and \openciteHathaway10) and the sunspot zones are always at higher latitude earlier in each cycle.
In Figure 3 the centroid positions are plotted as functions of time from sunspot cycle minima.
Since large cycles reach maximum earlier than small cycles, the data points for the large cycles are shifted to the left (closer to minimum) relative to the medium and small cycles.
The size of the shift is different for each cycle depending on the dates of minimum and maximum.
Comparing Figures 2 and 3 shows that: 1) the latitudinal scatter is smaller in Figure 3 than in Figure 2; and 2) the differences in the centroid positions for the different cycle amplitudes are diminished in Figure 3.
This suggests that there is a more general, cycle amplitude independent, law for the latitudes (and consequently latitudinal drift rates) of the sunspot zones.
A slight additional shift in the adopted times for sunspot cycle minima (with earlier times for small cycles) would appear to further diminish any cycle amplitude differences.
Determinations of the dates of sunspot cycle minima are not well defined.
Many investigators simply take the date of minimum in some smoothed sunspot cycle index (e. g. sunspot number, sunspot area, 10.7 cm radio flux).
Unfortunately, this can give dates that are clearly not representative of the actual cycle minima.
This problem led earlier investigators to define the date of minimum as some (undefined) average of the dates of: 1) minimum in the monthly sunspot number; 2) minimum in the smoothed monthly sunspot number; 3) maximum in the number of spotless days per month; 4) predominance of new cycle sunspot groups over old cycle sunspot groups [Waldmeier (1961), McKinnon (1987), Harvey & White (1999)].
Even neglecting the fact that the nature of the average is not defined, it is clear from the published dates for previous cycle minima that the first criterion is never used (probably due to the wide scatter it gives) and that even the simple average of the remaining criteria doesn’t give the published dates [Hathaway (2010)].
An alternative to using this definition for the dates of sunspot cycle minima is to use curve fitting to either the initial rise of activity or to the complete sunspot cycle.
Curve fitting is less sensitive to the noise associated with minimum cycle behavior (e.g. discretized data and missing data from the unseen hemisphere).
\inlineciteHathaway_etal94 described earlier attempts at fitting solar cycle activity levels (monthly sunspot numbers) to parameterized functions and arrived at a function of just two parameters (cycle starting time $t_{0}$ and cycle amplitude $R_{max}$) as the most useful function for characterizing and predicting solar cycle behavior.
This function:
$$F(t;t_{0},R_{max})=R_{max}\,\frac{2\left(\frac{t-t_{0}}{b}\right)^{3}}{\exp\left[\left(\frac{t-t_{0}}{b}\right)^{2}\right]-0.71}$$
(2)
is a skewed Gaussian with an initial rise that follows a cubic in time from the starting time (measured in months).
The width parameter, $b$, is a function of cycle amplitude $R_{max}$ that mimics the “Waldmeier Effect.”
This function is
$$b(R_{max})=27.12+25.15(100/R_{max})^{1/4}$$
(3)
Fitting $F(t;t_{0},R_{max})$ to the monthly averages of the daily International Sunspot Numbers using the Levenberg-Marquardt method [Press et al. (1986)] gives the amplitudes and starting times given by \inlineciteHathaway_etal94 and reproduced in Table 1 with the addition of results for Cycle 23.
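The starting-time determination can be sketched with NumPy alone; below, a coarse grid search stands in for the Levenberg-Marquardt fit, and a noiseless synthetic cycle replaces the actual monthly sunspot numbers:

```python
import numpy as np

def b_of(Rmax):
    # Width parameter mimicking the Waldmeier effect (Eq. 3)
    return 27.12 + 25.15 * (100.0 / Rmax) ** 0.25

def F(t, t0, Rmax):
    # Skewed-Gaussian cycle shape (Eq. 2); t and t0 in months.
    # (In practice the fit is restricted to times after the cycle start.)
    x = (t - t0) / b_of(Rmax)
    return Rmax * 2.0 * x ** 3 / (np.exp(x ** 2) - 0.71)

# Synthetic cycle: amplitude 150, starting 6 months after t = 0
t = np.arange(0.0, 140.0)
y_obs = F(t, 6.0, 150.0)

# Coarse grid search over t0 (a crude stand-in for Levenberg-Marquardt):
grid = np.arange(0.0, 12.5, 0.5)
errors = [np.sum((F(t, t0, 150.0) - y_obs) ** 2) for t0 in grid]
t0_best = float(grid[int(np.argmin(errors))])  # recovers the true starting time, 6.0
```

A real analysis would fit both $t_{0}$ and $R_{max}$ simultaneously to the observed monthly numbers, as described in the text; the point here is only that the starting time is a fitted parameter, not a date read off a smoothed minimum.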
On average the small cycles have starting times about 7 months earlier than minimum while medium cycles and large cycles have starting times about equal to minimum.
However, since minimum is determined by the behavior of both the old and the new cycles, there are substantial differences between the dates of minima and the starting times even among the medium and large cycles.
For example, Cycles 21 and 22 were both large but the minimum was 3 months earlier than the starting time in Cycle 21 and 4 months later in Cycle 22.
This is illustrated in Figure 4.
Measuring the time through each cycle relative to these starting times (rather than minimum or maximum) removes the scatter and cycle amplitude dependence on the centroid positions as shown in Figure 5.
The lack of any substantial cycle amplitude dependence on the centroid positions when time is measured relative to the curve fitted cycle starting time suggests that the equatorward drift of the sunspot zones follows a standard path or law.
This path is well represented by an exponential function with
$$\bar{\lambda}(t)=28^{\circ}\exp\left[-(t-t_{0})/90\right]$$
(4)
where time, $t$, is measured in months.
This exponential fit and the data for the small, medium, and large cycles are plotted as functions of time from the cycle starting time in the lower panel of Figure 5.
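For concreteness, the standard path of Eq. (4) can be evaluated directly; the snippet below is a straightforward transcription of the fitted exponential:

```python
import math

def sunspot_zone_latitude(t, t0=0.0):
    # Eq. (4): centroid latitude in degrees; t and t0 in months
    return 28.0 * math.exp(-(t - t0) / 90.0)

lat_start = sunspot_zone_latitude(0.0)   # 28 degrees at the cycle starting time
lat_90 = sunspot_zone_latitude(90.0)     # ~10.3 degrees (28/e) after 90 months
halving_time = 90.0 * math.log(2.0)      # ~62.4 months for the latitude to halve
```

The 90-month e-folding time implied by Eq. (4) is the same for every cycle in the 12-cycle sample, which is the sense in which the drift follows a standard law.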
Hemispheric differences in solar activity were first noted by \inlineciteSporer89 not long after the discovery of the sunspot cycle itself.
Much has been made of these differences and their possible connection to a variety of sunspot cycle phenomena.
\inlineciteNortonGallagher10 recently revisited these connections and found little evidence for any of them.
Nonetheless we are compelled to examine possible differences in the sunspot zone locations and equatorward drift relative to the hemispheric asymmetries.
We keep the same starting time for each hemisphere of each cycle as determined from the curve fitting of the sunspot numbers but separate the data by the strength of the activity in the hemisphere.
Using the data shown in \inlineciteNortonGallagher10 for the sunspot area maximum and total sunspot area for each hemisphere in each cycle we identify cycles in which the northern hemisphere dominates as Cycles 15, 16, 19, and 20, cycles in which the southern hemisphere dominates as Cycles 12, 13, 18, and 23 with Cycles 14, 17, 21, and 22 having fairly balanced hemispheric activity.
(The relevant sunspot cycle characteristics are listed in Table 1.)
This gives 8 stronger hemispheres, 8 weaker hemispheres, and 8 balanced hemispheres.
The latitude positions of the sunspot zones for the stronger hemispheres, weaker hemispheres, and balanced hemispheres are shown separately in Figure 6.
We find no significant differences in the sunspot zone latitude positions associated with hemispheric asymmetry in spite of the fact that for the unbalanced cycles the same starting time is used for both the strong and the weak hemisphere.
4 Cycle 23
Cycle 23 had a long, low, extended minimum prior to the start of Cycle 24.
This delayed start of Cycle 24 left behind the lowest smoothed sunspot number minimum and the largest number of spotless days in nearly a century.
The polar fields during this minimum were the weakest seen in the four cycle record and the flux of galactic cosmic rays was the highest seen in the six cycle record.
One explanation for both the weak polar fields and the long cycle has been suggested by flux transport dynamos [Nandy, Muñoz-Jaramillo, & Martens (2011)].
This model can produce both these characteristics if the meridional flow was faster than average during the first half of Cycle 23 and slower than average during the second half.
The meridional flow measured by the motions of magnetic elements in the near surface layers (\openciteHathawayRightmire10 and \openciteHathawayRightmire11) exhibited the opposite behavior - slow meridional flow at the beginning of Cycle 23 and fast meridional flow at the end.
Although the speed of the near surface meridional flow was used to estimate the speed of the proposed deep meridional return flow in their flux transport dynamo models, \inlineciteNandy_etal11 suggest that the variations seen near the surface are unrelated to variations at the base of the convection zone.
However, with their model the latitudinal drift of the sunspot zones during Cycle 23 should provide a direct measure of the deep meridional flow and its variations.
Figure 7 shows the latitudinal positions of the sunspot zones for Cycle 23 along with those for the full 12 cycle dataset (with $2\sigma$ error bars).
The latitudinal drift of the sunspot zones during Cycle 23 falls within the $2\sigma$ error range for the average of the last 12 cycles and follows the standard exponential given by Equation 4.
A drift rate that was 30% higher than average at the start and 30% lower than average at the end of Cycle 23 (the red line in Figure 7) as proposed by \inlineciteNandy_etal11 is inconsistent with the data.
A drift rate governed by the observed meridional flow variations in the near surface layers (Hathaway & Rightmire 2010, 2011 - the green line in Figure 7) is also inconsistent with the data for Cycle 23.
This indicates that the meridional flow is not connected to the latitudinal drift of the sunspot zones.
5 Conclusions
We find that if time is measured relative to a cycle starting time determined by fitting the monthly sunspot numbers to a parametric curve, then the latitude positions of the sunspot zones follow a standard path with time.
We find no significant variations from this path associated with sunspot cycle amplitude or hemispheric asymmetry.
This standard behavior suggests that the equatorward drift of the sunspot zones is not produced by the Sun’s meridional flow - which is observed (and theorized) to vary substantially from cycle-to-cycle.
This regularity thus questions the viability of flux transport dynamos as models of the Sun’s activity cycle.
The lack of the variations in drift rate during Cycle 23 in spite of observed and theorized variations in the meridional flow also argues against these models.
The earlier kinematic dynamo models of \inlineciteBabcock61 and \inlineciteLeighton69 may be consistent with the regularity of the sunspot zone drift due to their dependence on the fairly constant differential rotation profile.
However, it is unclear how the variability of the initial polar fields might influence the latitudinal drift in these models.
It is clear, however, that this regularity is consistent with dynamo models in which a Dynamo Wave produces the equatorward drift of the sunspot zones.
The speed of a Dynamo Wave depends on the product of the differential rotation shear and the kinetic helicity - both of which are not observed or expected to vary substantially.
Acknowledgements
The author would like to thank NASA for its support of this research through a grant
from the Heliophysics Causes and Consequences of the Minimum of Solar Cycle 23/24 Program to NASA Marshall Space Flight Center.
He is also indebted to Lisa Rightmire, Ron Moore, and an anonymous referee whose comments and suggestions improved both the figures and the manuscript.
Most importantly, he would like to thank the American taxpayers for supporting scientific research in general and this research in particular.
References
Babcock (1961)
Babcock, H. W.: 1961 Astrophys. J. 133 572.
Becker (1954)
Becker, U.: 1954 Z. Astrophys. 34 129.
Carrington (1858)
Carrington, R. C.: 1858 Mon. Not. Roy. Astron. Soc. 19 1.
Harvey & White (1999)
Harvey, K. L. & White, O. R.: 1999 J. Geophys. Res. 104 (A9) 19,759.
Hathaway (2010)
Hathaway, D. H.: 2010
Living Rev. Solar Phys. 7 1.
Hathaway et al. (2003)
Hathaway, D. H., Nandy, D., Wilson, R. M., & Reichmann, R. J.: 2003
Astrophys. J. 589 665.
Hathaway & Rightmire (2010)
Hathaway, D. H. & Rightmire, L.: 2010 Science 327 1350.
Hathaway & Rightmire (2011)
Hathaway, D. H. & Rightmire, L.: 2011 Astrophys. J. 729 80.
Hathaway, Wilson, & Reichmann (1994)
Hathaway, D. H., Wilson, R. M., & Reichmann, R. J.: 1994
Solar Phys. 151 177.
Leighton (1969)
Leighton, R. B.: 1969 Astrophys. J. 156 1.
Maunder (1903)
Maunder, E. W.: 1903 Observatory 26 329.
Maunder (1904)
Maunder, E. W.: 1904 Mon. Not. Roy. Astron. Soc. 64 747.
McKinnon (1987)
McKinnon, J. A.: 1987 Sunspot Numbers 1610-1986 (based on The Sunspot-Activity in the Years 1610-1960, by Prof. M. Waldmeier, Copyright 1961 Swiss Federal Observatory, Zurich, Switzerland) UAG Reports UAG-95, National Geophysical Data Center, NOAA, Boulder.
Nandy & Choudhuri (2002)
Nandy, D. & Choudhuri, A. R.: 2002 Science 296 1671.
Nandy, Muñoz-Jaramillo, & Martens (2011)
Nandy, D., Muñoz-Jaramillo, A., & Martens, P. C. H.: 2011
Nature 471 80.
Norton & Gallagher (2010)
Norton, A. A. & Gallagher, J. C.: 2010
Solar Phys. 261 193.
Press et al. (1986)
Press, W. H., Flannery, B. P., Teukolsky, S. A., & Vetterling, W. T.: 1986
Numerical Recipes Cambridge University Press, Cambridge, 181pp.
Spörer (1880)
Spörer, G.: 1880 Publicationen des Astrophysikalischen Observatoriums zu Potsdam
2 No. 5 1.
Spörer (1889)
Spörer, G.: 1889 Bulletin Astronomique, Serie I
6 60.
Waldmeier (1935)
Waldmeier, M.: 1935 Astron. Mitt. Zurich 14 (133) 105.
Waldmeier (1955)
Waldmeier, M.: 1955 Ergebnisse und Probleme der Sonnenforschung
Geest & Portig, Leipzig, 2nd edn.
Waldmeier (1961)
Waldmeier, M.: 1961 The Sunspot-Activity in the Years 1610-1960
Schulthess Co., Swiss Federal Observatory, Zurich.
Yoshimura (1975)
Yoshimura, H.: 1975 Astrophys. J. 201 740.
Global existence and stability of de Sitter-like solutions to the Einstein–Yang–Mills equations in spacetime dimensions $n\geq 4$
Chao Liu, Todd A. Oliynyk and Jinhua Wang
Center for Mathematical Sciences and School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, Hubei Province, China.
chao.liu.math@foxmail.com
School of Mathematical Sciences, 9 Rainforest Walk, Monash University, Clayton, VIC 3800, Australia
todd.oliynyk@monash.edu
School of Mathematical Science, Xiamen University,
Xiamen 361005, Fujian Province, China.
wangjinhua@xmu.edu.cn
Abstract.
We establish the global existence and stability to the future of non-linear perturbations of de Sitter-like solutions to the Einstein–Yang–Mills system in $n\geq 4$ spacetime dimensions. This generalizes Friedrich’s Einstein–Yang–Mills stability results in dimension $n=4$ [11] to all higher dimensions.
1. Introduction
In general relativity, the $n$-dimensional de Sitter (dS${}_{n}$) solution
is an exact, maximally symmetric solution to the vacuum Einstein equations with a positive cosmological constant $\Lambda>0$. It is one of the simplest solutions that exhibits accelerated expansion.
While realistic cosmologies are not homogeneous and isotropic, de Sitter spacetime can still be physically relevant in situations where it is non-linearly stable under perturbations globally to the future. This is because stability will guarantee that the de Sitter spacetime can be used as a model to understand the future asymptotic behavior of cosmological spacetimes that are undergoing accelerated expansions.
This is of direct physical relevance given cosmological observations of accelerated expansion in our universe and the hypothesis that this expansion is driven by a cosmological constant [23].
The future stability of de Sitter spacetime in $n=4$ spacetime dimensions was first established by Friedrich [10] using a conformal representation of the vacuum Einstein field equations with a positive cosmological constant. Friedrich, in the article [11], subsequently extended this stability result to the Einstein–Yang–Mills equations in $n=4$ spacetime dimensions. More precisely, cf. [11, Thm. 9.8], Friedrich established the global existence to the future of solutions to the Einstein–Yang–Mills equations that are generated from initial data sufficiently close to de Sitter initial data. Friedrich also established that these solutions are all asymptotically simple. It is worth noting here that, in the article [11], Friedrich also considered the case of vanishing cosmological constant $\Lambda=0$ and
prescribed initial data on past timelike infinity for $\Lambda>0$.
The main aim of this article is to extend Friedrich’s future global existence and stability result for the Einstein–Yang–Mills equations with a positive cosmological constant $\Lambda>0$ to all higher spacetime dimensions $n>4$ and provide a new global existence and stability proof for $n=4$.
1.1. Einstein–Yang–Mills equations
Before discussing the main results of this article, we first briefly recall some key concepts from Yang–Mills theory that will be needed to fix our notation and to formulate the Einstein–Yang–Mills equations; see also [6].
Let $G$ denote a compact and connected Lie group with Lie algebra $\mathcal{G}$. Due to the compactness, we lose no generality in taking $G$ to be a matrix group.
On $\mathcal{G}$, we fix an Ad-invariant, positive definite inner-product that we denote by a dot, i.e. $\phi\cdot\psi$ for $\phi,\psi\in\mathcal{G}$, and we use $|\cdot|$ to denote the associated norm.
Given an $n$-dimensional, connected Lorentzian manifold111In this article, we use abstract index notation, see §2.1. $(\widetilde{\mathcal{M}}^{n},\,\tilde{g}_{ab})$, a connection $\tilde{\omega}$ on a $G$-principal bundle over $\widetilde{\mathcal{M}}^{n}$ can be expressed in a gauge as a local $\mathcal{G}$-valued $1$-form $\tilde{A}_{a}$ on $\widetilde{\mathcal{M}}^{n}$, which is referred to as a gauge potential. The curvature of the connection $\tilde{\omega}$ is then determined locally by the $\mathcal{G}$-valued $2$-form $\tilde{F}_{ab}$ on $\widetilde{\mathcal{M}}^{n}$ defined by222The Yang–Mills curvature $\tilde{F}_{ab}$ is globally defined when viewed as taking values in the adjoint bundle.
$$\tilde{F}_{ab}=\tilde{\nabla}_{a}\tilde{A}_{b}-\tilde{\nabla}_{b}\tilde{A}_{a}+[\tilde{A}_{a},\tilde{A}_{b}],$$
(1.1)
where $[\cdot,\cdot]$ is the Lie bracket on $\mathcal{G}$, i.e. the matrix commutator bracket, and
$\tilde{\nabla}_{a}$ is the covariant derivative associated to $\tilde{g}_{ab}$. We recall also that the Yang–Mills curvature $\tilde{F}_{ab}$ automatically satisfies the Bianchi identities
$$\tilde{D}_{[a}\tilde{F}_{bc]}=0,$$
where $\tilde{D}_{a}=\tilde{\nabla}_{a}+[\tilde{A}_{a},\,\cdot]$ denotes the gauge covariant derivative of a $\mathcal{G}$-valued tensor.
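As a concrete illustration of (1.1) (not taken from the paper), consider a spatially constant $\mathfrak{su}(2)$-valued gauge potential: the derivative terms vanish and the curvature reduces to the commutator term. A minimal numpy sketch:

```python
import numpy as np

# Illustration of (1.1): for a spatially constant su(2) gauge potential the
# derivative terms vanish and the curvature is just the commutator term.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tau1, tau2, tau3 = -0.5j * sx, -0.5j * sy, -0.5j * sz  # anti-Hermitian su(2) basis

A1, A2 = tau1, tau2               # constant gauge potential components
F12 = A1 @ A2 - A2 @ A1           # F_12 = [A_1, A_2]

assert np.allclose(F12, tau3)     # [tau1, tau2] = tau3 in this basis
assert np.allclose(-(A2 @ A1 - A1 @ A2), F12)  # antisymmetry: F_21 = -F_12
```

The choice of basis normalization here is one common convention; only the commutator structure matters for the illustration.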
The Einstein–Yang–Mills equations with a positive cosmological constant $\Lambda>0$ are then given by
$$\displaystyle\tilde{G}_{ab}+\Lambda\tilde{g}_{ab}$$
$$\displaystyle=\tilde{T}_{ab},$$
(1.2)
$$\displaystyle\tilde{D}^{a}\tilde{F}_{ab}$$
$$\displaystyle=0,$$
(1.3)
$$\displaystyle\tilde{D}_{[a}\tilde{F}_{bc]}$$
$$\displaystyle=0,$$
(1.4)
where $\tilde{G}_{ab}=\tilde{R}_{ab}-\frac{1}{2}\tilde{R}\tilde{g}_{ab}$ is the Einstein tensor of the metric $\tilde{g}_{ab}$ and the stress energy tensor of a Yang–Mills field is defined by
$$\displaystyle\tilde{T}_{ab}=\tilde{F}_{a}{}^{c}\cdot\tilde{F}_{bc}-\frac{1}{4}\tilde{g}_{ab}\tilde{F}^{cd}\cdot\tilde{F}_{cd}.$$
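For $n=4$ the stress energy tensor above is trace-free, since $\tilde{g}^{ab}\tilde{T}_{ab}=\bigl(1-\tfrac{n}{4}\bigr)\tilde{F}^{cd}\cdot\tilde{F}_{cd}$. A numerical spot-check with a flat metric and a scalar-valued (abelian) field strength, purely as an illustration:

```python
import numpy as np

# Spot-check: g^{ab} T_ab = (1 - n/4) F^{cd} F_cd, so T is trace-free in n = 4.
n = 4
g = np.diag([-1.0, 1.0, 1.0, 1.0])
ginv = np.linalg.inv(g)

rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
F = M - M.T                        # antisymmetric field strength F_ab

F_up = ginv @ F @ ginv             # F^{ab}
FF = np.sum(F_up * F)              # F^{cd} F_cd

# T_ab = F_a{}^c F_bc - (1/4) g_ab F^{cd} F_cd
T = (F @ ginv) @ F.T - 0.25 * g * FF

trace_T = np.trace(ginv @ T)       # g^{ab} T_ab
assert abs(trace_T) < 1e-12        # vanishes precisely because n = 4
assert np.allclose(T, T.T)         # stress tensor is symmetric
```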
In this article, we will restrict our attention to trivial principal bundles, i.e. $\widetilde{\mathcal{M}}^{n}\times G$. The purpose of this simplification is to allow us to work with gauge potentials $\tilde{A}_{a}$ that exist globally on $\widetilde{\mathcal{M}}^{n}$. This, in turn, allows us to view the Einstein–Yang–Mills equations as equations for the fields $(\tilde{g}_{ab},\tilde{A}_{a})$ on $\widetilde{\mathcal{M}}^{n}$.
1.2. de Sitter spacetime
For the remainder of the article, we fix the physical spacetime manifold by setting333In fact, we will only make use of the future half of $\widetilde{\mathcal{M}}^{n}$ given by $[0,\infty)\times\Sigma$.
$$\widetilde{\mathcal{M}}^{n}=\mathbb{R}\times\Sigma$$
with
$$\Sigma=\mathbb{S}^{n-1}.$$
Then de Sitter spacetime444See the references [13, 28] for a more detailed introduction to de Sitter spacetime. $(\widetilde{\mathcal{M}}^{n},\,\tilde{\underline{g}}_{ab})$ is obtained by equipping $\widetilde{\mathcal{M}}^{n}$ with the de Sitter metric defined by
$$\displaystyle\tilde{\underline{g}}_{ab}=-(d\tau)_{a}(d\tau)_{b}+H^{2}\cosh^{2}(H^{-1}\tau)\underline{h}_{ab}$$
(1.5)
where the constant $H$ is determined by
$$\displaystyle H=\sqrt{\frac{(n-2)(n-1)}{2\Lambda}},$$
(1.6)
$\underline{h}_{ab}$ is the standard metric on $\mathbb{S}^{n-1}$, and $\tau$ is a Cartesian coordinate function on $\mathbb{R}$.
This spacetime plays two roles in our subsequent arguments. First, it provides an ambient spacetime manifold on which we can formulate our global existence and stability results, and second, it provides the background geometric quantities that we use to quantify the size of the metric perturbations away from the de Sitter metric.
In the analysis carried out in this article, we find it advantageous to work with a conformally rescaled version of the de Sitter metric, which we refer to as the conformal de Sitter metric, rather than the de Sitter metric itself. To define the conformal de Sitter metric, we introduce a new time function $t$ via
$$\displaystyle t=\frac{1}{H}\left(\frac{\pi}{2}-\operatorname{gd}(H^{-1}\tau)\right)$$
(1.7)
where $\operatorname{gd}(x)$, known as the Gudermannian function, is defined by
$$\operatorname{gd}(x)=\int^{x}_{0}\frac{1}{\cosh s}ds=\arctan\bigl{(}\sinh(x)\bigr{)},\quad x\in\mathbb{R}.$$
Inverting (1.7) gives
$$\displaystyle\tau=H\operatorname{gd}^{-1}\left(\frac{\pi}{2}-Ht\right)$$
(1.8)
where
$$\operatorname{gd}^{-1}(x)=\int^{x}_{0}\frac{1}{\cos t}dt=\text{arctanh}(\sin x),\quad x\in\Bigl{(}-\frac{\pi}{2},\frac{\pi}{2}\Bigr{)}.$$
From (1.8), we observe that $\tau(t)$ is monotonic, decreasing and analytic for $t\in\bigl{(}0,\frac{\pi}{H}\bigr{)}$. We further observe that (1.7) maps the infinite interval
$\tau\in(-\infty,+\infty)$ into the finite interval $t\in(0,\frac{\pi}{H})$ with a change of time orientation where the future lies in the direction of decreasing $t$ due to the monotonic, decreasing behavior of $t(\tau)$ on $\mathbb{R}$. Noting that
$t(0)=\frac{\pi}{2H}$ and $t(+\infty)=0$, we conclude that $t=0$ corresponds to future timelike infinity in de Sitter spacetime while $t=\frac{\pi}{2H}$ corresponds to $\tau=0$.
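The mapping properties of the time functions (1.7)–(1.8) described above can be checked numerically; a short Python sketch, in which the value of $H$ is arbitrary:

```python
import numpy as np

H = 1.3  # any H > 0; the value is arbitrary for this check

def gd(x):        # Gudermannian function
    return np.arctan(np.sinh(x))

def gd_inv(x):    # inverse Gudermannian on (-pi/2, pi/2)
    return np.arctanh(np.sin(x))

def t_of_tau(tau):                       # (1.7)
    return (np.pi / 2 - gd(tau / H)) / H

def tau_of_t(t):                         # (1.8)
    return H * gd_inv(np.pi / 2 - H * t)

tau = np.linspace(-8.0, 8.0, 101)
t = t_of_tau(tau)

assert np.allclose(gd_inv(gd(tau)), tau)     # gd and gd^{-1} are inverses
assert np.allclose(tau_of_t(t), tau)         # (1.7) and (1.8) are inverses
assert np.all(np.diff(t) < 0)                # t decreases as tau increases
assert np.all((t > 0) & (t < np.pi / H))     # range is (0, pi/H)
assert np.isclose(t_of_tau(0.0), np.pi / (2 * H))  # tau = 0 -> t = pi/(2H)
assert t_of_tau(30.0) < 1e-8                 # large tau -> t near 0 (future infinity)
```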
Differentiating (1.8), we find, with the help of the identity $\frac{d}{dx}(\operatorname{gd}^{-1}x)=\sec x$, that
$$(d\tau)_{a}=-H^{2}\sec\left(\frac{\pi}{2}-Ht\right)(dt)_{a}.$$
(1.9)
Then, noting the identity $\cosh(\operatorname{gd}^{-1}(x))=\sec(x)$, we see from (1.9) that the de Sitter metric (1.5) can be written as
$$\displaystyle\tilde{\underline{g}}_{ab}=e^{2\Psi}\underline{g}_{ab}$$
(1.10)
where
$$\displaystyle\Psi=-\ln\left(\frac{\sin(Ht)}{H}\right)$$
(1.11)
and
$$\underline{g}_{ab}=-H^{2}(dt)_{a}(dt)_{b}+\underline{h}_{ab}$$
is the conformal de Sitter metric. The pair $(\mathcal{M}^{n}=\bigl{(}0,\frac{\pi}{H}\bigr{)}\times\Sigma,\,\underline{g}_{ab})$ defines a spacetime that is conformal to the de Sitter spacetime where, by construction, future timelike infinity of the de Sitter spacetime is mapped to the boundary component $\{0\}\times\Sigma$ of $\mathcal{M}^{n}$.
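The conformal relation (1.10)–(1.11) can be verified componentwise: in the splitting adapted to $t$, the time-time component requires $(d\tau/dt)^{2}=e^{2\Psi}H^{2}$ by (1.9), and the spherical part requires $H^{2}\cosh^{2}(H^{-1}\tau)=e^{2\Psi}$. A numerical check, as an illustration only, with an arbitrary value of $H$:

```python
import numpy as np

H = 1.3                                 # arbitrary H > 0
t = np.linspace(0.1, np.pi / H - 0.1, 50)

tau = H * np.arctanh(np.sin(np.pi / 2 - H * t))   # (1.8)
Psi = -np.log(np.sin(H * t) / H)                  # (1.11)

# time-time component: (d tau / d t)^2 = e^{2 Psi} H^2, using (1.9)
dtau_dt = -H**2 / np.cos(np.pi / 2 - H * t)
assert np.allclose(dtau_dt**2, np.exp(2 * Psi) * H**2)

# spherical part: H^2 cosh^2(H^{-1} tau) = e^{2 Psi}
assert np.allclose(H**2 * np.cosh(tau / H)**2, np.exp(2 * Psi))
```

Both checks rely on the identities $\cosh(\operatorname{gd}^{-1}(x))=\sec(x)$ and $\cos(\pi/2-Ht)=\sin(Ht)$ noted in the text.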
Next, we define a future directed unit normal to the spatial hypersurface
$$\Sigma_{t}=\{t\}\times\Sigma$$
with respect to the conformal metric $\underline{g}_{ab}$
by
$$\displaystyle\nu_{a}=H(dt)_{a}{\quad\text{and}\quad}\nu^{a}=\underline{g}^{ab}\nu_{b}.$$
(1.12)
Using $\nu^{a}$, we set
$$\displaystyle\tensor{\underline{h}}{{}^{a}_{b}}=\tensor{\delta}{{}^{a}_{b}}+\nu^{a}\nu_{b},$$
(1.13)
which we note defines a projection operator that projects onto the $\underline{g}_{ab}$-orthogonal subspace to the vector field $\nu^{a}$. In an analogous fashion, we define future directed unit normals to the spatial hypersurface $\Sigma_{\tau}$ with respect to the de Sitter metric $\tilde{\underline{g}}_{ab}$ and the physical metric $\tilde{g}_{ab}$ by
$$\tilde{\nu}_{a}=(d\tau)_{a}{\quad\text{and}\quad}\tilde{\nu}^{a}=\tilde{\underline{g}}^{ab}\tilde{\nu}_{b},$$
(1.14)
and by
$$\tilde{T}_{a}=(-\tilde{\lambda})^{-\frac{1}{2}}\tilde{\nu}_{a}{\quad\text{and}\quad}\tilde{T}^{a}=(-\tilde{\lambda})^{-\frac{1}{2}}\tilde{g}^{ab}\tilde{\nu}_{b},\quad\text{where}\quad\tilde{\lambda}=\tilde{g}^{ab}\tilde{\nu}_{a}\tilde{\nu}_{b},$$
(1.15)
respectively.
We denote the corresponding spatial projection operators by
$$\displaystyle\tensor{\tilde{\underline{h}}}{{}^{c}_{d}}=\tensor{\delta}{{}^{c}_{d}}+\tilde{\nu}^{c}\tilde{\nu}_{d}{\quad\text{and}\quad}\tensor{\tilde{h}}{{}^{c}_{d}}=\tensor{\delta}{{}^{c}_{d}}+\tilde{T}^{c}\tilde{T}_{d},$$
(1.16)
and we use $\tensor{\tilde{h}}{{}^{c}_{d}}$ and $\tilde{T}^{a}$ to decompose the physical Yang–Mills curvature $\tilde{F}_{ap}$ into its electric and magnetic components according to
$$\displaystyle\tilde{E}_{b}=\tensor{\tilde{h}}{{}^{a}_{b}}\tilde{F}_{ap}\tilde{T}^{p}{\quad\text{and}\quad}\tilde{H}_{db}=\tensor{\tilde{h}}{{}^{c}_{d}}\tilde{F}_{ca}\tensor{\tilde{h}}{{}^{a}_{b}}.$$
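The projectors introduced above satisfy the standard algebraic identities. As a sanity check, not part of the paper's argument, here is a small numpy verification of (1.13) in coordinates where $\underline{g}_{ab}=\mathrm{diag}(-H^{2},1,\dots,1)$ and $\nu_{a}=H(dt)_{a}$:

```python
import numpy as np

H, n = 1.3, 4                           # arbitrary H and spacetime dimension
g = np.diag([-H**2] + [1.0] * (n - 1))  # conformal metric in adapted coordinates
ginv = np.linalg.inv(g)

nu_down = np.zeros(n); nu_down[0] = H   # nu_a = H (dt)_a, cf. (1.12)
nu_up = ginv @ nu_down                  # nu^a = g^{ab} nu_b

assert np.isclose(nu_up @ nu_down, -1.0)   # unit timelike normal

h = np.eye(n) + np.outer(nu_up, nu_down)   # h^a_b = delta^a_b + nu^a nu_b, (1.13)

assert np.allclose(h @ h, h)               # idempotent: a projection operator
assert np.allclose(h @ nu_up, 0.0)         # annihilates nu^a
assert np.isclose(np.trace(h), n - 1)      # projects onto the spatial directions
```

The same identities hold for the physical projectors in (1.16), with $\tilde{T}^{a}$ in place of $\nu^{a}$.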
1.3. Gauge conditions
Gauge conditions for both the gravitational and Yang–Mills fields play an essential role in our stability proof. For the physical Yang–Mills field, we employ a temporal gauge defined by
$$\tilde{A}_{a}\tilde{T}^{a}=0$$
(1.17)
where the vector field $\tilde{T}^{a}$ is defined above by (1.15), while we employ a wave gauge given by
$$\tilde{Z}^{a}=0$$
(1.18)
for the physical metric where
$$\displaystyle\tilde{Z}^{a}$$
$$\displaystyle=\tilde{X}^{a}-2(\tilde{\underline{g}}^{ac}-\tilde{g}^{ac})(d\Psi)_{c}+n(\tilde{g}^{ac}-\tilde{\underline{g}}^{ac})(d\Psi)_{c}-(\tilde{g}^{fe}-\tilde{\underline{g}}^{fe})\tilde{\underline{g}}^{ac}\tilde{\underline{g}}_{fe}(d\Psi)_{c}$$
and
$$\displaystyle\tilde{X}^{a}$$
$$\displaystyle=-\tilde{\underline{\nabla}}_{e}\tilde{g}^{ae}+\frac{1}{2}\tilde{g}^{ae}\tilde{g}_{df}\tilde{\underline{\nabla}}_{e}\tilde{g}^{df}.$$
Here, $\tilde{\underline{\nabla}}_{e}$ denotes the covariant derivative associated to the de Sitter metric $\tilde{\underline{g}}_{ab}$, see (1.5), and
$\Psi$ is the scalar function defined previously by (1.11).
1.4. Initial data and the constraint equations
Solutions to the Einstein–Yang–Mills equations will be generated from the initial data
$$(\tilde{g}_{ab},\mathcal{L}_{\tilde{\nu}}\tilde{g}_{ab},\tilde{A}_{a},\mathcal{L}_{\tilde{\nu}}\tilde{A}_{a})|_{\Sigma_{0}}=(\acute{g}_{ab},\grave{g}_{ab},\acute{A}_{a},\grave{A}_{a})$$
(1.19)
that is specified on the initial hypersurface $\Sigma_{0}=\{0\}\times\Sigma$, that is, at time $\tau=0$. As is well known, e.g. see [7, Ch. VI & VII], this initial data cannot be chosen freely, but must satisfy the following constraint equations:
$$\displaystyle\tilde{\nu}_{a}(\tilde{G}^{ab}+\Lambda\tilde{g}^{ab}-\tilde{T}^{ab})|_{\Sigma_{0}}$$
$$\displaystyle=0,$$
(1.20)
$$\displaystyle\tilde{h}^{ab}(\tilde{\nabla}_{a}\tilde{E}_{b}+[\tilde{A}_{a},\tilde{E}_{b}])|_{\Sigma_{0}}$$
$$\displaystyle=0,$$
(1.21)
where $\tilde{h}^{ab}=\tensor{\tilde{\underline{h}}}{{}^{a}_{c}}\tensor{\tilde{\underline{h}}}{{}^{b}_{d}}\tilde{g}^{cd}$.
We will always assume that our initial data satisfies these constraints. In addition,
to enforce the gauge conditions, we will further assume that the initial data is chosen so that the following gauge constraints hold:
$$\displaystyle\tilde{Z}^{a}|_{\Sigma_{0}}$$
$$\displaystyle=0,$$
(1.22)
$$\displaystyle\tilde{A}_{a}\tilde{T}^{a}|_{\Sigma_{0}}$$
$$\displaystyle=0.$$
(1.23)
1.5. Main theorem
We are now in a position to state the main result of this article in the following theorem.
Theorem 1.1.
Suppose $\Lambda>0$, $s\in\mathbb{Z}_{>\frac{n+1}{2}}$, and the initial data $\acute{g}_{ab}\in H^{s+1}(\Sigma)$, $\grave{g}_{ab}\in H^{s}(\Sigma)$, $\acute{A}_{a}\in H^{s}(\Sigma)$ with $\underline{h}^{c}{}_{a}\underline{h}^{d}{}_{b}(d\acute{A})_{cd}\in H^{s}(\Sigma)$, and $\grave{A}_{a}\in H^{s}(\Sigma)$ satisfy the constraint equations555See §2.1.3 for a definition of the $H^{s}$ norms. (1.20)–(1.23). Then there exists a constant $\sigma>0$ such that if the initial data
satisfy the smallness condition
$$\displaystyle\|(\tilde{g}^{ab}(0)-\tilde{\underline{g}}^{ab}(0),\,\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab}(0),\,\tilde{A}_{a}(0),\,\tilde{E}_{a}(0),\,\tilde{H}_{ab}(0))\|_{H^{s}}\leq\sigma,$$
then there exists a unique solution $(\tilde{g}^{ab},\tilde{A}_{a})$ to the Einstein–Yang–Mills equations (1.2)–(1.4)
on $[0,\infty)\times\Sigma$ with regularity
$$(\tilde{g}^{ab},\,\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab},\tilde{A}_{a},\tilde{E}_{a},\tilde{H}_{ab})\in C^{0}\bigl{(}[0,\infty),H^{s}(\Sigma)\bigr{)}\cap C^{1}\bigl{(}[0,\infty),H^{s-1}(\Sigma)\bigr{)}$$
that satisfies the initial conditions (1.19), and the temporal and wave gauge constraints (1.17)–(1.18) on $[0,\infty)\times\Sigma$. Moreover, there exists a constant $C>0$ such that the estimates
$$\displaystyle\|\tilde{A}_{a}(\tau)\|_{H^{s}}+\|\tilde{E}_{a}(\tau)\|_{H^{s}}+\|\tilde{H}_{ab}(\tau)\|_{H^{s}}$$
$$\displaystyle\leq C\sigma$$
and
$$\displaystyle\|\tilde{g}^{ab}(\tau)-\tilde{\underline{g}}^{ab}(\tau)\|_{H^{s}}+\|\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab}(\tau)\|_{H^{s}}$$
$$\displaystyle\leq C\left(\frac{\pi}{2}-\operatorname{gd}(H^{-1}\tau)\right)^{2}\sigma$$
hold for all $\tau\in[0,\infty)$.
1.6. Prior and related works
The vacuum de Sitter stability result of Friedrich in $n=4$ spacetime dimensions was extended to all even spacetime dimensions $n\geq 4$ in [1]; see also [14] where a gap in this stability proof
is addressed. It is worth noting that the de Sitter stability proofs from [1, 10] and the related Einstein–Yang–Mills stability proof from [11] rely on specific conformal representations of the Einstein field equations. These conformal representations exist only in certain spacetime dimensions, which leads to the dimension-dependent restrictions in these stability proofs.
The first de Sitter stability result to extend to all spacetime dimensions $n\geq 4$ was established in [24]. In that article, the future global existence of solutions to the Einstein–scalar field system that are generated from initial data that is sufficiently close to de Sitter initial data was established for a class of potentials that includes the vacuum Einstein equations with a positive cosmological constant. This stability result was later generalized to the Einstein–Maxwell–scalar field equations in [29].
Related stability results for perfect fluid Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes with a positive cosmological constant that are asymptotic to a flat spatial slicing of de Sitter were obtained in the articles [12, 17, 16, 19, 15, 20, 21, 25, 27]. We also note the related stability results from [8, 9, 31] for FLRW spacetimes with non-accelerated expansion.
1.7. Overview and proof strategy
1.7.1. Main idea
The main idea behind the proof of Theorem 1.1, which was first employed in [21] for a different matter model, is to formulate the initial value problem for a gauge reduced version of the Einstein–Yang–Mills equations as a Fuchsian initial value problem of the form
$$\displaystyle\mathbf{A}^{a}(t,\mathbf{U})\underline{\nabla}_{a}\mathbf{U}$$
$$\displaystyle=\frac{1}{t}\mathfrak{A}(t,\mathbf{U})\mathbb{P}\mathbf{U}+G(t,\mathbf{U})\quad$$
$$\displaystyle\text{in }\Bigl{(}0,\frac{\pi}{2H}\Bigr{]}\times\Sigma,$$
(1.24)
$$\displaystyle\mathbf{U}$$
$$\displaystyle=\mathbf{U}_{0}\quad$$
$$\displaystyle\text{in }\Bigl{\{}\frac{\pi}{2H}\Bigr{\}}\times\Sigma,$$
(1.25)
on the conformal de Sitter spacetime manifold $(\mathcal{M}^{n},\underline{g}_{ab})$, where we recall that $t=0$ corresponds to future timelike infinity and $t=\frac{\pi}{2H}$ corresponds to $\tau=0$. The advantage of this reformulation is that, once the coefficients of the Fuchsian equation (1.24) are shown to satisfy certain structure conditions, see §5 for details, the existence of solutions to (1.24)-(1.25) on the spacetime domain $\bigl{(}0,\frac{\pi}{2H}\bigr{]}\times\Sigma\subset\mathcal{M}^{n}$ follows, under a suitable smallness assumption on the initial data, from an application of Theorem 4.1 in §4.3, which is an adapted version of Theorem 3.8 from [4].
For technical reasons, we do not directly formulate the gauge reduced Einstein–Yang–Mills equations as a Fuchsian system. Instead, we show in §3, see, in particular, Theorems 3.1 and 3.2, that solutions of the Einstein–Yang–Mills equations that satisfy a temporal and wave gauge condition yield solutions to a Fuchsian equation of the form (1.24); see (5.1) for the actual equation. We then appeal to the local-in-time existence theory for the Einstein–Yang–Mills equations from the companion article [18] to obtain local-in-time solutions to the Fuchsian equation (1.24)
on a spacetime domain of the form $\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}\times\Sigma\subset\mathcal{M}^{n}$ for some $t_{*}\in\bigl{[}0,\frac{\pi}{2H}\bigr{)}$
that satisfies a given initial condition $\mathbf{U}=\mathbf{U}_{0}$ at $t=\frac{\pi}{2H}$. It is important to note that de Sitter initial data at $\tau=0$ corresponds to the trivial initial data $\mathbf{U}_{0}=0$; since we are only interested in perturbations of de Sitter initial data, we are therefore free to choose $\mathbf{U}_{0}$ as small as we like.
Now, the time of existence $t_{*}$ from the local-in-time existence theory may be strictly greater than zero. In order to show that $t_{*}=0$, which would correspond to solutions to the Einstein–Yang–Mills equations that exist globally to the future, we alternatively view the initial value problem (1.24)–(1.25) as a stand-alone system that, a priori, admits solutions that are not derived from solutions of the reduced Einstein–Yang–Mills equations. The point of doing so is that, under a suitable smallness assumption on the initial data $\mathbf{U}_{0}$, we can apply Theorem 4.1 to conclude the existence of solutions to (1.24) on $\bigl{(}0,\frac{\pi}{2H}\bigr{]}\times\Sigma$ that are uniformly bounded. By uniqueness, this global solution coincides on $\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}\times\Sigma$ with the local-in-time solution to (1.24) obtained from a local-in-time solution to the Einstein–Yang–Mills equations as discussed above. In this way, we obtain uniform bounds on the local solution. Because of these bounds, we can then appeal to the continuation principle from [18] to extend the local-in-time solution past $t_{*}$, and in particular, all the way to $t_{*}=0$, which yields the global existence of solutions to the Einstein–Yang–Mills equations.
The above arguments constitute the main steps in the proof of Theorem 1.1, the main result of this article. The complete proof of the theorem is given in §7.
1.7.2. Fuchsian fields
The Fuchsian formulation (5.1) of the gauge reduced Einstein–Yang–Mills equations is based on a particular choice of fields. To define these fields, we first replace the physical fields with conformally rescaled fields defined by
$$\displaystyle g_{ab}={}$$
$$\displaystyle e^{-2\Psi}\tilde{g}_{ab},$$
(1.26)
$$\displaystyle F_{ab}={}$$
$$\displaystyle e^{-\Psi}\tilde{F}_{ab},$$
(1.27)
$$\displaystyle A_{a}={}$$
$$\displaystyle e^{-\frac{\Psi}{2}}\tilde{A}_{a},$$
(1.28)
where $\Psi$ is the scalar function defined previously by (1.11). These conformal fields should be viewed as living on the conformal de Sitter spacetime $\mathcal{M}^{n}=\bigl{(}0,\frac{\pi}{H}\bigr{)}\times\Sigma$. For use below, we note that
(1.26)–(1.27) imply the relations
$$\displaystyle g^{ab}=e^{2\Psi}\tilde{g}^{ab},\quad F^{ab}=e^{3\Psi}\tilde{F}^{ab}{\quad\text{and}\quad}\tensor{F}{{}^{a}_{b}}=e^{\Psi}\tensor{\tilde{F}}{{}^{a}_{b}}.$$
(1.29)
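The scaling relations (1.29) follow directly from (1.26)–(1.27) by raising indices with $g^{ab}=e^{2\Psi}\tilde{g}^{ab}$. A quick numerical confirmation with a randomly chosen invertible metric, purely as an illustration:

```python
import numpy as np

n, Psi = 4, 0.7                    # arbitrary dimension and conformal factor
rng = np.random.default_rng(2)

M = rng.standard_normal((n, n))
gt = M + M.T + 10 * np.eye(n)      # symmetric, invertible "physical" metric g~_ab
Ft = rng.standard_normal((n, n)); Ft = Ft - Ft.T   # antisymmetric F~_ab
gt_inv = np.linalg.inv(gt)

g = np.exp(-2 * Psi) * gt          # (1.26)
F = np.exp(-Psi) * Ft              # (1.27)
g_inv = np.linalg.inv(g)

assert np.allclose(g_inv, np.exp(2 * Psi) * gt_inv)         # g^{ab} relation
assert np.allclose(g_inv @ F @ g_inv,
                   np.exp(3 * Psi) * gt_inv @ Ft @ gt_inv)  # F^{ab} relation
assert np.allclose(g_inv @ F, np.exp(Psi) * gt_inv @ Ft)    # F^a_b relation
```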
Next, we decompose the conformal metric $g_{ab}$ as
$$\displaystyle\lambda=g^{ab}\nu_{a}\nu_{b},\quad\xi^{c}=g^{ab}\nu_{a}\tensor{\underline{h}}{{}^{c}_{b}}{\quad\text{and}\quad}h^{ab}=\tensor{\underline{h}}{{}^{a}_{c}}\tensor{\underline{h}}{{}^{b}_{d}}g^{cd},$$
(1.30)
where we recall that $\nu_{a}$ is defined above by (1.12). Inspired by the Fuchsian formulation of the Einstein–Euler equations from [21, 16], we introduce a similar set of gravitational field variables defined by
$$\displaystyle m={}$$
$$\displaystyle\frac{1}{\breve{\mathtt{A}}t}(\lambda+1),$$
(1.31)
$$\displaystyle p^{a}={}$$
$$\displaystyle\frac{1}{\breve{\mathtt{B}}t}\xi^{a},$$
(1.32)
$$\displaystyle m_{d}={}$$
$$\displaystyle\underline{\nabla}_{d}\lambda-\frac{1}{\breve{\mathtt{J}}Ht}(\lambda+1)\nu_{d},$$
(1.33)
$$\displaystyle\tensor{p}{{}^{a}_{d}}={}$$
$$\displaystyle\underline{\nabla}_{d}\xi^{a}-\frac{1}{\breve{\mathtt{K}}Ht}\xi^{a}\nu_{d},$$
(1.34)
$$\displaystyle s^{ab}={}$$
$$\displaystyle\mathfrak{h}^{ab}-\underline{h}^{ab},$$
(1.35)
$$\displaystyle\tensor{s}{{}^{ab}_{d}}={}$$
$$\displaystyle\underline{\nabla}_{d}(\mathfrak{h}^{ab}-\underline{h}^{ab}),$$
(1.36)
$$\displaystyle s={}$$
$$\displaystyle q,$$
(1.37)
$$\displaystyle s_{d}={}$$
$$\displaystyle\underline{\nabla}_{d}q,$$
(1.38)
with
$$\displaystyle q={}\lambda+1+(3-n)\ln S{\quad\text{and}\quad}\mathfrak{h}^{ab}={}\frac{1}{S}h^{ab},$$
(1.39)
where $\breve{\mathtt{A}},\breve{\mathtt{B}},\breve{\mathtt{J}},\breve{\mathtt{K}}$ are constants to be determined,
$$\underline{h}^{ab}=\underline{h}^{a}{}_{c}\underline{h}^{b}{}_{d}\underline{g}^{cd},\quad S=\frac{\alpha}{\underline{\alpha}},\quad\alpha=(\det{(h^{ab}+\nu^{a}\nu^{b})})^{\frac{1}{n-1}}{\quad\text{and}\quad}\underline{\alpha}=(\det{(\underline{h}^{ab}+\nu^{a}\nu^{b})})^{\frac{1}{n-1}}.$$
(1.40)
We also find it useful in subsequent calculations to let $h_{ab}$ denote the unique symmetric tensor field satisfying
$$h_{ab}\nu^{b}=0{\quad\text{and}\quad}h^{cb}h_{ab}=\tensor{\underline{h}}{{}^{c}_{a}}.$$
For the conformal Yang–Mills fields, we employ a temporal gauge defined by
$$A_{a}\nu^{a}=0,$$
(1.41)
and we set
$$\bar{A}_{b}=A_{a}\tensor{\underline{h}}{{}^{a}_{b}}{\quad\text{and}\quad}E_{b}=-\nu^{p}F_{pa}\tensor{\underline{h}}{{}^{a}_{b}}.$$
(1.42)
Due to the temporal gauge, the one form $\bar{A}_{b}$ completely determines the gauge potential while $E_{b}$ defines the conformal electric field associated to the splitting determined by the vector field $\nu^{a}$. We further define the additional fields
$$\displaystyle\mathcal{E}^{a}={}$$
$$\displaystyle-h^{ab}E_{b},$$
(1.43)
$$\displaystyle H_{db}={}$$
$$\displaystyle\tensor{\underline{h}}{{}^{c}_{d}}F_{ca}\tensor{\underline{h}}{{}^{a}_{b}},$$
(1.44)
where we note that $H_{db}$ is the magnetic field associated to the splitting determined by the vector field $\nu^{a}$.
For later use, we record the identity
$$H_{pq}=\sqrt{\frac{\sin(Ht)}{H}}\tensor{\underline{h}}{{}^{a}_{p}}\underline{\nabla}_{a}\bar{A}_{q}-\sqrt{\frac{\sin(Ht)}{H}}\tensor{\underline{h}}{{}^{a}_{q}}\underline{\nabla}_{a}\bar{A}_{p}+[\bar{A}_{p},\bar{A}_{q}],$$
(1.45)
which is easily derived from (1.1), (1.11), (1.13), (1.27)–(1.28), (1.42) and (A.4).
We then collect the above fields into the single vector
$$\mathbf{U}=(m,\,p^{a},\,m_{d},\,\tensor{p}{{}^{a}_{d}},\,s^{ab},\,\tensor{s}{{}^{ab}_{d}},\,s,\,s_{d},\,\mathcal{E}^{a},\,E_{b},\,H_{db},\,\bar{A}_{d}).$$
(1.46)
It is this collection of fields that we will use to transform the gauge reduced Einstein–Yang–Mills equations into Fuchsian form.
1.7.3. Derivation of the Fuchsian equation
The majority of this article is devoted to the derivation of the Fuchsian formulation of the gauge reduced conformal Einstein–Yang–Mills equations given by
(5.1). The derivation is lengthy but computationally straightforward and it is split into two parts. The first part concerns the derivation of a Fuchsian formulation of the Einstein equations in the wave gauge (1.18) and it is carried out in §3.1 as well as in the appendix §A.2.1. The resulting Fuchsian equations are displayed in Theorem 3.1. The second part of the derivation is given in §3.2. There a Fuchsian formulation of the Yang–Mills equations in the temporal gauge (1.41) is obtained and the equations are displayed in Theorem 3.2. We then combine the equations from Theorems 3.1 and 3.2 to obtain the Fuchsian equation (5.1).
2. Preliminaries
In this section, we set out some conventions and notation that we will employ throughout this article.
2.1. Abstract indices and tensor conventions
For tensors, we employ abstract index notation, e.g. see [30, §2.4]. Physical fields will be distinguished with a tilde, e.g. $\tilde{g}_{ab}$, while their conformal counterparts will be denoted with the same letter but without the tilde, e.g. $g_{ab}$.
Underlined fields with a tilde, e.g. $\tilde{\underline{g}}_{ab}$, will refer to background fields that are associated with de Sitter spacetime. In line with the above notation, underlined fields without a tilde, e.g. $\underline{g}_{ab}$, will refer to background fields associated with the conformal de Sitter spacetime. In particular, we use $\tilde{\underline{\nabla}}$
and $\underline{\nabla}$ to denote the covariant derivatives associated to the background metrics $\tilde{\underline{g}}_{ab}$ and $\underline{g}_{ab}$, respectively, and likewise we use $\tensor{\tilde{\underline{R}}}{{}_{cde}^{a}}$ and $\tensor{\underline{R}}{{}_{cde}^{a}}$ to denote the associated curvature tensors. The other curvature tensor will be denoted using similar notation, e.g. $\tilde{\underline{R}}_{ab}$ and $\underline{R}_{ab}$ for the Ricci curvature tensors.
As indicated above, we use $\tilde{\nabla}$ to denote the covariant derivative associated to the physical metric $\tilde{g}_{ab}$, and in line with our conventions, we use $\nabla$ to denote the covariant derivative associated to the conformal metric $g_{ab}$. We also use $\tensor{\tilde{R}}{{}_{cde}^{a}}$ and $\tensor{R}{{}_{cde}^{a}}$ to denote curvature tensors of $\tilde{g}_{ab}$ and $g_{ab}$, respectively, and a similar notation for the other curvature tensors. We also use $\Box=g^{ab}\nabla_{a}\nabla_{b}$ to denote the wave operator associated to the conformal metric $g_{ab}$.
Unless indicated otherwise, we use the physical metric $\tilde{g}_{ab}$ to raise and lower indices on physical tensors that are not background tensors,
and the conformal metric $g_{ab}$ to raise and lower indices on tensors that are built out of the conformal fields and are not purely background tensors. For background tensors associated with the de Sitter spacetime, we use the de Sitter metric $\tilde{\underline{g}}_{ab}$ to raise and lower indices, and correspondingly, we use the conformal background metric $\underline{g}_{ab}$ to raise and lower indices on background tensor fields associated with the conformal de Sitter spacetime.
It is worth noting at this point that the one forms $\tilde{\nu}_{a}$ and $\nu_{a}$, which are derived from the background slicing of the de Sitter spacetime and its conformal counterpart, see (1.12) and (1.14), are not underlined and constitute an exception to our conventions. Any other exceptions to the above conventions will be clearly indicated when they occur.
2.1.1. Index brackets
Round and square brackets on tensor indices are used to identify the symmetric and anti-symmetric, respectively, components of a tensor. For example,
$$\displaystyle A_{[abc]}=\frac{1}{6}(A_{abc}+A_{bca}+A_{cab}-A_{acb}-A_{bac}-A_{cba}){\quad\text{and}\quad}B_{(ab)}=\frac{1}{2}(B_{ab}+B_{ba}).$$
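As a quick illustration (not from the paper), these (anti-)symmetrization conventions can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, 3))
B = rng.standard_normal((3, 3))

# A_[abc] via the six-term formula above
Aas = (A + np.transpose(A, (1, 2, 0)) + np.transpose(A, (2, 0, 1))
       - np.transpose(A, (0, 2, 1)) - np.transpose(A, (1, 0, 2))
       - np.transpose(A, (2, 1, 0))) / 6

# totally antisymmetric: swapping any pair of indices flips the sign
assert np.allclose(np.transpose(Aas, (1, 0, 2)), -Aas)
assert np.allclose(np.transpose(Aas, (0, 2, 1)), -Aas)

# B_(ab) is symmetric and B decomposes as B_(ab) + B_[ab]
Bs, Ba = (B + B.T) / 2, (B - B.T) / 2
assert np.allclose(Bs, Bs.T)
assert np.allclose(Bs + Ba, B)
```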
2.1.2. Connection coefficients
The covariant derivatives $\nabla_{a}\tensor{T}{{}^{b_{1}\cdots b_{k}}_{c_{1}\cdots c_{l}}}$ and $\underline{\nabla}_{a}\tensor{T}{{}^{b_{1}\cdots b_{k}}_{c_{1}\cdots c_{l}}}$ are related via
$$\nabla_{a}\tensor{T}{{}^{b_{1}\cdots b_{k}}_{c_{1}\cdots c_{l}}}=\underline{\nabla}_{a}\tensor{T}{{}^{b_{1}\cdots b_{k}}_{c_{1}\cdots c_{l}}}+\sum_{i}\tensor{X}{{}^{b_{i}}_{ad}}\tensor{T}{{}^{b_{1}\cdots d\cdots b_{k}}_{c_{1}\cdots c_{l}}}-\sum_{j}\tensor{X}{{}^{d}_{ac_{j}}}\tensor{T}{{}^{b_{1}\cdots b_{k}}_{c_{1}\cdots d\cdots c_{l}}}$$
where $\tensor{X}{{}^{a}_{bc}}$ is defined by
$$\tensor{X}{{}^{a}_{bc}}=-\frac{1}{2}\bigl{(}g_{ec}\underline{\nabla}_{b}g^{ae}+g_{be}\underline{\nabla}_{c}g^{ae}-g^{ae}g_{bd}g_{cf}\underline{\nabla}_{e}g^{df}\bigr{)}.$$
Contracting $\tensor{X}{{}^{a}_{bc}}$ on the covariant indices yields the vector field
$$X^{a}:=g^{bc}\tensor{X}{{}^{a}_{bc}}=-\underline{\nabla}_{e}g^{ae}+\frac{1}{2}g^{ae}g_{df}\underline{\nabla}_{e}g^{df},$$
(2.1)
which plays an important role in defining the wave gauge.
2.1.3. Sobolev norms for spacetime tensors
To define the Sobolev norms for spacetime tensors that are employed in this article, we
first consider the special case of a rank 2 covariant tensor field $S_{ab}$ on a spacetime manifold of the
form $I\times\Sigma$, where $I\subset\mathbb{R}$ is an interval and, as above, $\Sigma=\mathbb{S}^{n-1}$. Assuming that $t$ is a Cartesian coordinate on $I$, we interpret the associated coordinate vector field $\partial_{t}$ on $I$ as defining a vector field $(\partial_{t})^{a}$ on $I\times\Sigma$ that satisfies $(\partial_{t})^{a}(dt)_{a}=1$. We then decompose $S_{ab}$ into a scalar $(\partial_{t})^{a}S_{ab}(\partial_{t})^{b}$, a spatial one form $\tensor{\underline{h}}{{}^{b}_{a}}S_{bc}(\partial_{t})^{c}$, and a spatial tensor field
$\underline{h}^{c}{}_{a}S_{cd}\underline{h}^{d}{}_{b}$, where, as above, $\underline{h}_{ab}$ is the standard metric on $\mathbb{S}^{n-1}$, which we interpret as a tensor field on $I\times\Sigma$. Now since $(\partial_{t})^{a}S_{ab}(\partial_{t})^{b}$ is a scalar field, and the tensor fields $\underline{h}^{c}{}_{a}S_{cd}\underline{h}^{d}{}_{b}$ and $\underline{h}^{b}{}_{a}S_{bc}(\partial_{t})^{c}$ are purely spatial, we can, by restricting them to $\Sigma_{t}=\{t\}\times\Sigma$, naturally interpret these as tensor fields on the Riemannian manifold $(\Sigma_{t},\underline{h}_{ab})$, which are all trivially isometric for different values of $t$. For $1\leq p\leq\infty$ and $s\in\mathbb{Z}_{\geq 0}$, the $W^{s,p}$ Sobolev norm of $S_{ab}$ is then defined by
$$\|S_{ab}(t)\|_{W^{s,p}}=\|(\partial_{t})^{a}S_{ab}(\partial_{t})^{b}|_{\Sigma_{t}}\|_{W^{s,p}(\Sigma_{t})}+\|\underline{h}^{b}{}_{a}S_{bc}(\partial_{t})^{c}|_{\Sigma_{t}}\|_{W^{s,p}(\Sigma_{t})}+\|\underline{h}^{c}{}_{a}S_{cd}\underline{h}^{d}{}_{b}|_{\Sigma_{t}}\|_{W^{s,p}(\Sigma_{t})}$$
for $t\in I$, where the Sobolev norms $\|\cdot\|_{W^{s,p}(\Sigma_{t})}$ on the Riemannian manifolds $(\Sigma_{t},\underline{h}_{ab})$ are defined in the usual way, e.g. see [2, Ch. 2]. We also define
$$\|S_{ab}\|_{L^{\infty}(I,W^{s,p})}=\sup_{t\in I}\|S_{ab}(t)\|_{W^{s,p}}.$$
By identifying $\Sigma_{t}$ with $\Sigma$ via the isometry $\Sigma\ni x\longmapsto(t,x)\in\Sigma_{t}$, we can view $(\partial_{t})^{a}S_{ab}(\partial_{t})^{b}$, $\underline{h}^{b}{}_{a}S_{bc}(\partial_{t})^{c}$ and $\underline{h}^{c}{}_{a}S_{cd}\underline{h}^{d}{}_{b}$ as time-dependent tensor fields on the fixed Riemannian manifold $(\Sigma,\underline{h}_{ab})$. We then have
$$S_{ab}\in C^{\ell}(I,W^{s,p}(\Sigma))$$
provided each of the maps
$I\ni t\longmapsto(\partial_{t})^{a}S_{ab}(\partial_{t})^{b}|_{\Sigma_{t}}\in W^{s,p}(\Sigma)$,
$I\ni t\longmapsto\underline{h}^{b}{}_{a}S_{bc}(\partial_{t})^{c}|_{\Sigma_{t}}\in W^{s,p}(\Sigma)$
and
$I\ni t\longmapsto\underline{h}^{c}{}_{a}S_{cd}\underline{h}^{d}{}_{b}|_{\Sigma_{t}}\in W^{s,p}(\Sigma)$
are $\ell$-times continuously differentiable with respect to $t$.
The $W^{s,p}$ norms for general spacetime tensor fields can be defined in a similar fashion.
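To illustrate the definition, consider the simple hypothetical example $S_{ab}=\phi\,(dt)_{a}(dt)_{b}$ for a scalar field $\phi$. Since the spatial projection satisfies $\tensor{\underline{h}}{{}^{b}_{a}}(dt)_{b}=0$, only the scalar component survives:

```latex
(\partial_{t})^{a}S_{ab}(\partial_{t})^{b}=\phi,\qquad
\tensor{\underline{h}}{{}^{b}_{a}}S_{bc}(\partial_{t})^{c}=0,\qquad
\tensor{\underline{h}}{{}^{c}_{a}}S_{cd}\tensor{\underline{h}}{{}^{d}_{b}}=0,
```

so that the Sobolev norm reduces to $\|S_{ab}(t)\|_{W^{s,p}}=\|\phi(t,\cdot)|_{\Sigma_{t}}\|_{W^{s,p}(\Sigma_{t})}$.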
3. Conformal Einstein–Yang–Mills equations
The aim of this section is to transform the conformal Einstein–Yang–Mills equations into Fuchsian form in a step-by-step fashion. The Fuchsian formulation of the Yang–Mills equations derived below is new, while the Einstein equations are transformed into Fuchsian form following the method used in the articles [21, 17, 16, 19, 15, 22]. (In this article, the conformal factor, e.g. $\Psi$, is different but has the same asymptotic properties, i.e. $\sin(Ht)\sim\tan(Ht)\sim Ht$ near $t=0$; it is because of this that the same methods continue to work.)
3.1. Fuchsian formalism of the reduced conformal Einstein equations
We begin the transformation of the Einstein equations (1.2) into Fuchsian form by contracting them with $\tilde{g}^{ab}$ to get
$$\tilde{R}=\frac{2}{n-2}(n\Lambda-\tilde{T}),$$
(3.1)
where we are using $\tilde{T}=\tilde{g}^{cd}\tilde{T}_{cd}$ to denote the trace of the stress-energy tensor.
Inserting (3.1) into (1.2) yields
$$\displaystyle\tilde{R}_{ab}-\frac{2}{n-2}\Lambda\tilde{g}_{ab}=\tilde{T}_{ab}-\frac{1}{n-2}\tilde{T}\tilde{g}_{ab}.$$
(3.2)
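As a check on (3.2), assuming that the Einstein equations (1.2) take the standard form $\tilde{R}_{ab}-\frac{1}{2}\tilde{R}\tilde{g}_{ab}+\Lambda\tilde{g}_{ab}=\tilde{T}_{ab}$, which is the form consistent with the trace identity (3.1), one computes

```latex
\tilde{R}_{ab}
  =\tilde{T}_{ab}+\tfrac{1}{2}\tilde{R}\,\tilde{g}_{ab}-\Lambda\tilde{g}_{ab}
  % insert \tfrac{1}{2}\tilde{R}=\frac{n\Lambda-\tilde{T}}{n-2} from (3.1)
  =\tilde{T}_{ab}+\frac{n\Lambda-\tilde{T}}{n-2}\,\tilde{g}_{ab}-\Lambda\tilde{g}_{ab}
  =\tilde{T}_{ab}-\frac{1}{n-2}\tilde{T}\tilde{g}_{ab}+\frac{2}{n-2}\Lambda\tilde{g}_{ab},
```

which is precisely (3.2) after moving the $\Lambda$ term to the left-hand side.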
With the help of (1.6) and (3.2), a straightforward calculation
using well-known conformal transformation rules (e.g. [7, Appendix VI]) then shows that the Einstein equations (1.2) transform under the conformal change of variable
(1.26)–(1.27)
into
$$\displaystyle R^{ab}=(n-2)(\nabla^{a}\nabla^{b}\Psi-\nabla^{a}\Psi\nabla^{b}\Psi)+\left(\Box\Psi+(n-2)\nabla^{c}\Psi\nabla_{c}\Psi+\frac{n-1}{H^{2}}e^{2\Psi}\right)g^{ab}$$
$$\displaystyle\hskip 142.26378pt+T^{ab}-\frac{1}{n-2}Tg^{ab},$$
(3.3)
where $T_{ab}=F_{a}{}^{c}F_{bc}-\frac{1}{4}g_{ab}F^{cd}F_{cd}$, $T^{ab}=g^{ac}g^{bd}T_{cd}$, $T=g_{cd}T^{cd}$, and we note that $T_{ab}=\tilde{T}_{ab}$ and $T=e^{2\Psi}\tilde{T}$ by (1.26)–(1.27) and (1.29).
3.1.1. The reduced Einstein equations
The next step in transforming the Einstein equations into Fuchsian form is to select a wave gauge that will allow us to formulate the Einstein equations as a symmetric hyperbolic system and eliminate problematic singular terms.
To this end, we set
$$\displaystyle Z^{a}=X^{a}+Y^{a}$$
(3.4)
where $X^{a}$ is as defined above by (2.1) and
$$\displaystyle Y^{a}=-(n-2)\nabla^{a}\Psi+\eta^{a}\quad\text{with}\quad\eta^{a}=(n-2)\underline{\nabla}^{a}\Psi=-\frac{n-2}{\tan(Ht)}\nu^{a}.$$
(3.5)
The wave gauge that we employ for the conformal Einstein equations is then defined by the vanishing of the vector field (3.4), that is,
$$\displaystyle Z^{a}=0.$$
(3.6)
This gauge choice is preserved by the evolution and so we only need to choose initial data so that $Z^{a}$ vanishes on the initial hypersurface to ensure that it vanishes throughout the evolution. We will refer to the conformal Einstein equations in the wave gauge (3.6) as the reduced conformal Einstein equations.
Lemma 3.1.1.
The reduced conformal Einstein equations are given by
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}g^{ab}+\underline{R}^{ab}+P^{ab}+Q^{ab}+\frac{1}{n-2}X^{a}X^{b}-(n-2)\nu_{c}g^{c(a}\nu^{b)}$$
$$\displaystyle+\frac{n-2}{2\tan^{2}(Ht)}\nu^{a}(g^{bc}-\underline{g}^{bc})\nu_{c}+\frac{n-2}{2\tan^{2}(Ht)}\nu^{b}(g^{ac}-\underline{g}^{ac})\nu_{c}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}g^{ab}+\left(\frac{\lambda+1}{\sin^{2}(Ht)}+(n-2)\right)g^{ab}+g^{bd}F^{ac}F_{dc}-\frac{1}{2(n-2)}g^{ab}F^{cd}F_{cd}.$$
(3.7)
Before proving this lemma, we first observe that the corollary below follows from setting $g^{ab}=\underline{g}^{ab}$, $\lambda=-1$, and $F_{cd}=0$ in (3.7).
Corollary 3.1.2.
The Ricci tensor of the conformal de Sitter metric $\underline{g}_{ab}$ is given by
$$\underline{R}^{ab}=(n-2)\underline{h}^{ab}$$
and satisfies the relations
$$\underline{R}^{ab}\nu_{a}\nu_{b}=0,\quad\underline{R}^{ab}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}=0{\quad\text{and}\quad}\underline{R}^{ab}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}=(n-2)\underline{h}^{ef}.$$
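These relations follow directly from $\underline{R}^{ab}=(n-2)\underline{h}^{ab}$ together with the facts that $\underline{h}^{ab}$ is purely spatial, i.e. $\underline{h}^{ab}\nu_{a}=0$, and that $\tensor{\underline{h}}{{}^{e}_{a}}$ acts as the identity on spatial indices:

```latex
\underline{R}^{ab}\nu_{a}\nu_{b}=(n-2)\underline{h}^{ab}\nu_{a}\nu_{b}=0,\qquad
\underline{R}^{ab}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}
  =(n-2)\underline{h}^{ab}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}
  =(n-2)\underline{h}^{ef},
```

with the mixed relation vanishing for the same reason as the first.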
Proof of Lemma 3.1.1.
The reduced Einstein equations are obtained from the conformal Einstein equations (3.3) by adding the term $-\nabla^{(a}Z^{b)}-\frac{1}{n-2}\tensor{A}{{}^{ab}_{c}}Z^{c}$, which vanishes when the wave gauge $Z^{a}=0$ holds. The resulting equations are given by
$$\displaystyle R^{ab}-\nabla^{(a}Z^{b)}-\frac{1}{n-2}\tensor{A}{{}^{ab}_{c}}Z^{c}={}$$
$$\displaystyle T^{ab}-\frac{1}{n-2}g^{ab}g^{cd}T_{cd}+(n-2)(\nabla^{a}\nabla^{b}\Psi-\nabla^{a}\Psi\nabla^{b}\Psi)$$
$$\displaystyle+\left(\Box\Psi+(n-2)\nabla^{c}\Psi\nabla_{c}\Psi+\frac{n-1}{H^{2}}e^{2\Psi}\right)g^{ab},$$
(3.8)
where $\tensor{A}{{}^{ab}_{c}}$ is defined by
$$\displaystyle\tensor{A}{{}^{ab}_{c}}=-X^{(a}\tensor{\delta}{{}^{b)}_{c}}+Y^{(a}\tensor{\delta}{{}^{b)}_{c}},$$
and we note that
$$\displaystyle-\frac{1}{n-2}\tensor{A}{{}^{ab}_{c}}Z^{c}=\frac{1}{n-2}X^{a}X^{b}-\frac{1}{n-2}Y^{a}Y^{b}{\quad\text{and}\quad}-\nabla^{(a}Z^{b)}=-\nabla^{(a}X^{b)}-\nabla^{(a}Y^{b)}.$$
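The first identity above can be verified by a short computation, using $Z^{c}=X^{c}+Y^{c}$ and the symmetrization conventions of Section 2.1.1:

```latex
\tensor{A}{{}^{ab}_{c}}Z^{c}
  =\bigl(Y^{(a}-X^{(a}\bigr)\tensor{\delta}{{}^{b)}_{c}}\bigl(X^{c}+Y^{c}\bigr)
  =Y^{(a}X^{b)}+Y^{a}Y^{b}-X^{a}X^{b}-X^{(a}Y^{b)}
  =Y^{a}Y^{b}-X^{a}X^{b},
```

since the mixed terms $Y^{(a}X^{b)}$ and $X^{(a}Y^{b)}$ coincide; multiplying by $-\frac{1}{n-2}$ then gives the stated result.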
Recalling the decomposition (A.3) for the conformal Ricci tensor from Lemma A.1.2 and noting the identity
$$T^{ab}-\frac{1}{n-2}g^{ab}g^{cd}T_{cd}={}g^{bd}F^{ac}F_{dc}-\frac{1}{2(n-2)}g^{ab}F^{cd}F_{cd},$$
it is then straightforward to verify that the reduced conformal Einstein equations (3.8) can be expressed as
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}g^{ab}+\underline{R}^{ab}+P^{ab}+Q^{ab}+\frac{1}{n-2}X^{a}X^{b}-\nabla^{(a}Y^{b)}-\frac{1}{n-2}Y^{a}Y^{b}$$
$$\displaystyle={}$$
$$\displaystyle(n-2)(\nabla^{a}\nabla^{b}\Psi-\nabla^{a}\Psi\nabla^{b}\Psi)+\left(\Box\Psi+(n-2)\nabla^{c}\Psi\nabla_{c}\Psi+\frac{n-1}{H^{2}}\frac{H^{2}}{\sin^{2}(Ht)}\right)g^{ab}$$
$$\displaystyle+g^{bd}F^{ac}F_{dc}-\frac{1}{2(n-2)}g^{ab}F^{cd}F_{cd}.$$
(3.9)
But by (3.5), we observe with the help of the identity (A.2) from Lemma A.1.1 that
$$\displaystyle-\nabla^{(a}Y^{b)}=(n-2)\nabla^{a}\nabla^{b}\Psi-\frac{n-2}{\sin^{2}(Ht)}\nu_{c}g^{c(a}\nu^{b)}-\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}g^{ab}.$$
Using this expression and (3.5) allows us to write (3.9) as
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}g^{ab}+\underline{R}^{ab}+P^{ab}+Q^{ab}+\frac{1}{n-2}X^{a}X^{b}+(n-2)\nabla^{a}\nabla^{b}\Psi-\frac{n-2}{\sin^{2}(Ht)}\nu_{c}g^{c(a}\nu^{b)}$$
$$\displaystyle-\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}g^{ab}-\frac{1}{n-2}(-(n-2)\nabla^{a}\Psi+\eta^{a})(-(n-2)\nabla^{b}\Psi+\eta^{b})$$
$$\displaystyle={}$$
$$\displaystyle(n-2)(\nabla^{a}\nabla^{b}\Psi-\nabla^{a}\Psi\nabla^{b}\Psi)+\left(\Box\Psi+(n-2)\nabla^{c}\Psi\nabla_{c}\Psi+\frac{n-1}{H^{2}}\frac{H^{2}}{\sin^{2}(Ht)}\right)g^{ab}$$
$$\displaystyle+g^{bd}F^{ac}F_{dc}-\frac{1}{2(n-2)}g^{ab}F^{cd}F_{cd}.$$
Finally, using the identity (A.6) from Lemma A.1.3 and the relation
$$\displaystyle X^{a}\nu_{a}=-Y^{a}\nu_{a}=-\frac{n-2}{\tan(Ht)}(\lambda+1),$$
the above formulation of the reduced Einstein equations is easily seen to be equivalent to (3.7), which completes the proof.
∎
We proceed by deriving an $(n-1)+1$ decomposition of the reduced Einstein equations that is formulated in terms of the variables defined by (1.30) and (1.42). Here, one should view the gravitational fields
$\lambda+1$, $\xi^{a}$ and $h^{ab}-\underline{h}^{ab}$ as being small, and hence, representing perturbations of the background de Sitter solution. As we establish in the following corollary, each of these perturbed variables satisfies a wave equation.
Corollary 3.1.3.
Let $\lambda$, $\xi^{c}$, $h^{ab}$, $E_{b}$ and $H_{db}$ be as defined above by (1.30), (1.42) and (1.44). Then $\lambda+1$, $\xi^{a}$ and $h^{ab}-\underline{h}^{ab}$ satisfy the wave equations
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}(\lambda+1)+P^{ab}\nu_{a}\nu_{b}+Q^{ab}\nu_{a}\nu_{b}+\frac{1}{n-2}X^{a}X^{b}\nu_{a}\nu_{b}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}(\lambda+1)+\frac{(\lambda+1)^{2}}{\sin^{2}(Ht)}+\frac{n-3}{\sin^{2}(Ht)}(\lambda+1)-(n-2)(\lambda+1)$$
$$\displaystyle+\Bigl{(}\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\lambda g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
(3.10)
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}\xi^{e}+P^{ab}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}+Q^{ab}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}+\frac{1}{n-2}X^{a}X^{b}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}\xi^{e}+\frac{n-2}{2\tan^{2}(Ht)}\xi^{e}+(\lambda+1)\xi^{e}\frac{1}{\sin^{2}(Ht)}+\frac{1}{2}(n-2)\xi^{e}$$
$$\displaystyle+\Bigl{(}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\xi^{e}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
(3.11)
and
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}\left(h^{ef}-\underline{h}^{ef}\right)+\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}P^{ab}+\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}Q^{ab}+\frac{1}{n-2}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}X^{a}X^{b}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}\left(h^{ef}-\underline{h}^{ef}\right)+h^{ef}\frac{\lambda+1}{\sin^{2}(Ht)}+(n-2)(\underline{h}^{ef}-h^{ef})$$
$$\displaystyle+\Bigl{(}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}h^{ef}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
(3.12)
respectively.
Proof.
Expressing the non-linear source term in (3.7) involving $F_{ab}$ in terms of $E_{b}$ and $H_{ab}$, see (1.42) and (1.44), as
$$\displaystyle g^{bd}F^{ac}F_{dc}-\frac{1}{2(n-2)}g^{ab}F^{cd}F_{cd}$$
$$\displaystyle={}$$
$$\displaystyle\Bigl{(}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}g^{ab}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
the proof then follows from applying $\nu_{a}\nu_{b}$, $\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}$, and $\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}$ to (3.7) and employing Corollary 3.1.2 and the relation (A.1) from Lemma A.1.1.
∎
Next, we further decompose the gravitational field $h^{ab}$ in terms of the variables $q$ and $\mathfrak{h}^{ab}$ defined by (1.39). To carry out the decomposition, we introduce the projection operator
$$\displaystyle\tensor{\mathcal{L}}{{}^{ab}_{cd}}=\tensor{\delta}{{}^{a}_{c}}\tensor{\delta}{{}^{b}_{d}}-\frac{1}{n-1}h^{ab}h_{cd}.$$
(3.13)
The motivation for the additional decomposition of $h^{ab}$ is that, due to the identity
$$\displaystyle\tensor{\mathcal{L}}{{}^{ab}_{cd}}h^{cd}=0,$$
(3.14)
an application of the projection (3.13) to the wave equation (3.12) for $h^{ab}$ removes the problematic singular term
$h^{ef}\frac{\lambda+1}{\sin^{2}(Ht)}$ and results in a wave equation for $\mathfrak{h}^{ab}$. As we shall see in the following corollary, the particular form of the variable $q$ is chosen to remove problematic singular terms in its evolution equation, which also happens to be a wave equation.
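The identity (3.14) is immediate once one notes that, with $h_{ab}$ the inverse of the spatial metric $h^{ab}$ on the $(n-1)$-dimensional slices, the full trace is $h_{cd}h^{cd}=n-1$:

```latex
\tensor{\mathcal{L}}{{}^{ab}_{cd}}h^{cd}
  =h^{ab}-\frac{1}{n-1}h^{ab}\,h_{cd}h^{cd}
  =h^{ab}-\frac{n-1}{n-1}h^{ab}=0.
```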
Corollary 3.1.4.
Let $q$ and $\mathfrak{h}^{ab}-\underline{h}^{ab}$ be defined by (1.39). Then $q$ and $\mathfrak{h}^{ab}-\underline{h}^{ab}$ satisfy the wave equations
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}q-\frac{3-n}{2(n-1)}g^{cd}\underline{\nabla}_{c}h_{ef}\underline{\nabla}_{d}h^{ef}+\frac{3-n}{n-1}h_{ab}P^{ab}+\frac{3-n}{n-1}h_{ab}Q^{ab}$$
$$\displaystyle+\frac{3-n}{(n-1)(n-2)}h_{ab}X^{a}X^{b}+P^{ab}\nu_{a}\nu_{b}+Q^{ab}\nu_{a}\nu_{b}+\frac{1}{n-2}X^{a}X^{b}\nu_{a}\nu_{b}$$
$$\displaystyle={}$$
$$\displaystyle\frac{(n-2)}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}q+(\lambda+1)^{2}\frac{1}{\sin^{2}(Ht)}-(n-2)(\lambda+1)+\Bigl{(}\frac{3-n}{n-1}h_{ab}g^{bd}g^{a\hat{a}}$$
$$\displaystyle-\frac{3-n+\lambda}{2(n-2)}g^{d\hat{a}}+\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}\Bigr{)}\cdot g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
(3.15)
and
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}(\mathfrak{h}^{ab}-\underline{h}^{ab})-\frac{1}{2}g^{cd}\underline{\nabla}_{c}(S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}})\underline{\nabla}_{d}h^{ef}+S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}P^{ab}$$
$$\displaystyle+S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}Q^{ab}+\frac{1}{n-2}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}X^{a}X^{b}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}(\mathfrak{h}^{ab}-\underline{h}^{ab})-(n-2)S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{cd}}(\mathfrak{h}^{cd}-\underline{h}^{cd})$$
$$\displaystyle+S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}g^{bd}g^{a\hat{a}}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
(3.16)
respectively.
Proof.
Rather than deriving the wave equation (3.15) for $q$ directly, we first derive a wave equation for $\ln S$, where we recall that $S$ is defined by (1.40). We begin the derivation by noting from Jacobi’s identity and the definition of $\alpha$, see (1.40), that
$$(n-1)\underline{\nabla}_{d}(\ln\alpha)=h_{ab}{\partial_{d}}h^{ab}=h_{ab}\underline{\nabla}_{d}h^{ab}+\underline{h}_{ab}{\partial_{d}}\underline{h}^{ab}{\quad\text{and}\quad}(n-1)\underline{\nabla}_{d}(\ln\underline{\alpha})=\underline{h}_{ab}{\partial_{d}}\underline{h}^{ab}.$$
Using these expressions, we obtain
$$\displaystyle h_{ab}\underline{\nabla}_{d}h^{ab}=(n-1)\underline{\nabla}_{d}(\ln S),$$
which, after differentiating, yields
$$\displaystyle h_{ab}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}h^{ab}={}$$
$$\displaystyle(n-1)g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}(\ln S)-g^{cd}\underline{\nabla}_{c}h_{ab}\underline{\nabla}_{d}h^{ab}.$$
(3.17)
Contracting (3.12) with $h_{ef}/(n-1)$ while making use of (3.17) then shows that $\ln S$ satisfies
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}(\ln S)-\frac{1}{2(n-1)}g^{cd}\underline{\nabla}_{c}h_{ef}\underline{\nabla}_{d}h^{ef}+\frac{1}{n-1}h_{ab}P^{ab}+\frac{1}{n-1}h_{ab}Q^{ab}$$
$$\displaystyle\hskip 28.45274pt+\frac{1}{(n-1)(n-2)}h_{ab}X^{a}X^{b}=\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}\ln S+(\lambda+1)\frac{1}{\sin^{2}(Ht)}$$
$$\displaystyle\hskip 28.45274pt+\Bigl{(}\frac{1}{n-1}h_{ab}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
(3.18)
Noting the singular terms $(\lambda+1)\frac{1}{\sin^{2}(Ht)}$ in (3.18) and $\frac{n-3}{\sin^{2}(Ht)}(\lambda+1)$ in (3.10), the combination $(3-n)\times(3.18)+(3.10)$ removes these terms and, with the help of the definition (1.39) of $q$, leads to the wave equation (3.15) for $q$.
In order to obtain the wave equation (3.16), we first note from (1.39) and (3.13) that
$$\displaystyle\frac{1}{S}\tensor{\mathcal{L}}{{}^{ab}_{cd}}\underline{\nabla}_{e}h^{cd}=\underline{\nabla}_{e}\mathfrak{h}^{ab}.$$
(3.19)
Making use of the identities (3.14)–(3.19), it follows from applying $S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}$ to (3.12) that
$$\displaystyle\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}\mathfrak{h}^{ab}-\frac{1}{2}g^{cd}\underline{\nabla}_{c}(S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}})\underline{\nabla}_{d}h^{ef}+S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}P^{ab}$$
$$\displaystyle+S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}Q^{ab}+\frac{1}{n-2}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}X^{a}X^{b}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-2}{2\tan(Ht)}\nu^{c}\underline{\nabla}_{c}\mathfrak{h}^{ab}+(n-2)S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{cd}}(\underline{h}^{cd}-h^{cd})$$
$$\displaystyle+S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}g^{bd}g^{a\hat{a}}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
However, due to (3.14), we have
$$(n-2)S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{cd}}(\underline{h}^{cd}-h^{cd})=(n-2)S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{cd}}(\underline{h}^{cd}-S^{-1}h^{cd}),$$
and so, $\mathfrak{h}^{ab}$ satisfies the wave equation (3.16).
∎
3.1.2. A Fuchsian formulation of the reduced conformal Einstein equations
In this section, we transform the second order wave equations (3.10)–(3.11) and (3.15)–(3.16) for the gravitational fields $\lambda$, $\xi^{c}$, $q$ and $\mathfrak{h}^{ab}$ into a first order, symmetric hyperbolic Fuchsian equation that is expressed in terms of the renormalized fields (1.31)–(1.38). From (1.31)–(1.38), we observe that
$$\displaystyle\underline{\nabla}_{d}\lambda=m_{d}+\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{d}{\quad\text{and}\quad}\underline{\nabla}_{d}\xi^{a}=\tensor{p}{{}^{a}_{d}}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{a}\nu_{d}.$$
(3.20)
Using these relations and (1.12), we obtain the first order equations
$$\displaystyle\underline{\nabla}_{d}m=\left(\frac{1}{\breve{\mathtt{J}}}-1\right)\frac{1}{Ht}m\nu_{d}+\frac{1}{\breve{\mathtt{A}}t}m_{d},$$
(3.21)
$$\displaystyle\underline{\nabla}_{d}p^{a}=\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{1}{Ht}p^{a}\nu_{d}+\frac{1}{\breve{\mathtt{B}}t}\tensor{p}{{}^{a}_{d}},$$
(3.22)
and
$$\displaystyle\underline{\nabla}_{d}s^{ab}=\tensor{s}{{}^{ab}_{d}},\qquad\underline{\nabla}_{d}s=s_{d},$$
(3.23)
from differentiating (1.31), (1.32), (1.35) and (1.37). In the derivation of our Fuchsian formulation, we will need to consider these first order equations as well as the second order wave equations (3.10)–(3.11) and (3.15)–(3.16).
In the following, we use repeatedly the symmetrizing tensor $Q^{edc}$ defined by
$$\displaystyle Q^{edc}=\nu^{e}g^{dc}+\nu^{d}g^{ec}-\nu^{c}g^{ed}$$
(3.24)
to cast the reduced conformal Einstein equations in first order form. To illustrate how this tensor is employed, we consider first the wave equation (3.16). Using the symmetrizing tensor $Q^{edc}$, we observe that the principal part of (3.16), which by (1.35) is given by $g^{cd}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}$, can be expressed in the symmetric form $Q^{edc}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}$ using the relation
$$\displaystyle\nu^{e}(g^{cd}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}})=Q^{edc}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}-(\nu^{d}g^{ec}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}-\nu^{c}g^{ed}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}})$$
$$\displaystyle\hskip 28.45274pt=Q^{edc}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}-\nu^{d}g^{ec}(\underline{\nabla}_{c}\underline{\nabla}_{d}s^{ab}-\underline{\nabla}_{d}\underline{\nabla}_{c}s^{ab})=Q^{edc}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}+\nu^{d}g^{ec}(\tensor{\underline{R}}{{}_{cdf}^{a}}s^{fb}+\tensor{\underline{R}}{{}_{cdf}^{b}}s^{fa}).$$
In this way, the symmetrizing tensor $Q^{edc}$ can be used to obtain a symmetric hyperbolic formulation from the wave equation (3.16).
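The key structural property behind this construction is that $Q^{edc}$ is, by (3.24) and the symmetry of $g^{ab}$, symmetric in its first two indices,

```latex
Q^{dec}=\nu^{d}g^{ec}+\nu^{e}g^{dc}-\nu^{c}g^{ed}=Q^{edc},
```

so that the coefficient matrices multiplying the derivatives in the resulting first order system are symmetric, as required for symmetric hyperbolicity.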
With the preliminary calculations out of the way, we are now ready to complete the transformation of the reduced conformal Einstein equations into Fuchsian form. This will be done in a sequence of steps starting with the Fuchsian formulation of the wave equation (3.1.3) for $\lambda+1$. The resulting Fuchsian equation is displayed in the following lemma.
Lemma 3.1.5.
Let $\mathbf{U}$ be defined by (1.46). Then the wave equation (3.10) for the gravitational field $\lambda+1$ can be expressed in the first order Fuchsian form
$$\displaystyle-\bar{\mathbf{A}}_{1}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}+\bar{\mathbf{A}}_{1}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{1}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}+\bar{G}_{1},$$
where
$$\displaystyle\bar{\mathbf{A}}_{1}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&h^{f\hat{e}}&0\\
0&0&\breve{\mathtt{E}}(-\lambda)\end{pmatrix},\quad\bar{\mathbf{A}}_{1}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},\quad\bar{G}_{1}=\begin{pmatrix}\nu_{e}\triangle^{e}_{1}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}^{f}_{e}}\triangle^{e}_{1}(t,\mathbf{U})\\
0\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{1}=\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)(-\lambda)&0&\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H}(-\lambda)\\
0&\frac{1}{\breve{\mathtt{J}}}h^{f\hat{e}}&0\\
\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}(-\lambda)&0&\breve{\mathtt{E}}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)(-\lambda)\end{pmatrix},$$
$\breve{\mathtt{E}}$ is an arbitrary constant, and
$$\displaystyle\triangle^{e}_{1}(t,\mathbf{U})=$$
$$\displaystyle\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(n-3-\frac{1}{\breve{\mathtt{J}}}\right)\frac{\breve{\mathtt{A}}^{2}}{H^{2}}m^{2}\nu^{e}-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{d}m_{d}\nu^{e}-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{a}m_{a}\nu^{e}$$
$$\displaystyle+\frac{1}{\breve{\mathtt{J}}H}\breve{\mathtt{A}}mm_{d}\nu^{e}\nu^{d}-\frac{\breve{\mathtt{A}}^{2}}{\breve{\mathtt{J}}H^{2}}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)m^{2}\nu^{e}-\breve{\mathtt{A}}\left(\frac{1}{\breve{\mathtt{J}}}-n+2\right)\frac{1}{H}m\nu^{e}\nu^{d}m_{d}$$
$$\displaystyle-\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\nu^{e}\nu^{c}m_{c}+\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu^{e}$$
$$\displaystyle-2(n-3)\left(\frac{1}{(Ht)^{2}}-\frac{1}{\sin^{2}(Ht)}\right)\breve{\mathtt{A}}tm\nu^{e}+\frac{2(\breve{\mathtt{A}}tm)^{2}}{\sin^{2}(Ht)}\nu^{e}-2(n-2)\breve{\mathtt{A}}tm\nu^{e}$$
$$\displaystyle-2P^{ab}\nu_{a}\nu_{b}\nu^{e}-2Q^{ab}\nu_{a}\nu_{b}\nu^{e}-\frac{2}{n-2}X^{a}X^{b}\nu_{a}\nu_{b}\nu^{e}$$
$$\displaystyle+2\nu^{e}\Bigl{(}\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\lambda g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
Proof.
Using the definitions (1.31) and (1.33) of $m_{d}$ and $m$, respectively, we can express the wave equation (3.10) for $\lambda+1$ as
$$\displaystyle g^{cd}\underline{\nabla}_{c}m_{d}=$$
$$\displaystyle\frac{n-2}{\tan(Ht)}\nu^{c}m_{c}-\frac{n-2}{\tan(Ht)}\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m+\frac{2\breve{\mathtt{A}}t}{\sin^{2}(Ht)}m(n-2+\lambda)-2P^{ab}\nu_{a}\nu_{b}-2Q^{ab}\nu_{a}\nu_{b}$$
$$\displaystyle-2(n-2)\breve{\mathtt{A}}tm-\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H^{2}t}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)\lambda m-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{a}m_{a}+\frac{1}{\breve{\mathtt{J}}Ht}\lambda\nu^{a}m_{a}-\frac{2}{n-2}X^{a}X^{b}\nu_{a}\nu_{b}$$
$$\displaystyle+2\Bigl{(}\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\lambda g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
(3.25)
where in deriving this we have employed the identity
$$-\frac{1}{2\breve{\mathtt{J}}Ht}g^{cd}\nu_{d}m_{c}=-\frac{\breve{\mathtt{B}}}{2\breve{\mathtt{J}}H}p^{a}m_{a}+\frac{1}{2\breve{\mathtt{J}}Ht}\lambda\nu^{a}m_{a}.$$
Multiplying (3.25) by $\nu^{e}$ yields
$$\displaystyle\nu^{e}g^{cd}\underline{\nabla}_{c}m_{d}=$$
$$\displaystyle\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)\frac{1}{Ht}\nu^{e}\nu^{d}m_{d}-\left(\frac{1}{\breve{\mathtt{J}}}\left(n-1-\frac{1}{\breve{\mathtt{J}}}\right)-2n+6\right)\frac{\breve{\mathtt{A}}}{H^{2}t}m\nu^{e}+\triangle_{11}^{e}$$
(3.26)
where $\triangle_{11}^{e}$ is defined by
$$\displaystyle\triangle_{11}^{e}$$
$$\displaystyle=\frac{1}{\breve{\mathtt{J}}H}\breve{\mathtt{A}}m\nu^{e}\nu^{d}m_{d}-\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H^{2}}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)\breve{\mathtt{A}}m^{2}\nu^{e}-\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\nu^{e}\nu^{c}m_{c}$$
$$\displaystyle+\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu^{e}-2(n-3)\left(\frac{1}{(Ht)^{2}}-\frac{1}{\sin^{2}(Ht)}\right)\breve{\mathtt{A}}tm\nu^{e}+\frac{2(\breve{\mathtt{A}}tm)^{2}}{\sin^{2}(Ht)}\nu^{e}$$
$$\displaystyle-2(n-2)\breve{\mathtt{A}}tm\nu^{e}-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{a}m_{a}\nu^{e}-2P^{ab}\nu_{a}\nu_{b}\nu^{e}-2Q^{ab}\nu_{a}\nu_{b}\nu^{e}-\frac{2}{n-2}X^{a}X^{b}\nu_{a}\nu_{b}\nu^{e}$$
$$\displaystyle+2\nu^{e}\Bigl{(}\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\lambda g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
(3.27)
To symmetrize the system, we use the symmetrizing tensor $Q^{edc}$ defined by (3.24) as
well as the identities
$$\frac{1}{t}\nu^{d}\nu^{e}=\frac{1}{t}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})+\breve{\mathtt{A}}m\nu^{e}\nu^{d}{\quad\text{and}\quad}\frac{1}{t}\nu^{e}=-\frac{1}{t}Q^{edc}\nu_{c}\nu_{d}+\breve{\mathtt{A}}m\nu^{e},$$
(3.28)
which are a consequence of the definitions (1.30), (1.31) and (3.24).
By (3.24) and (3.26), we have
$$\displaystyle Q^{edc}\underline{\nabla}_{c}m_{d}-\left(\nu^{d}g^{ec}\underline{\nabla}_{c}m_{d}-\nu^{c}g^{ed}\underline{\nabla}_{c}m_{d}\right)=\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)\frac{1}{Ht}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})m_{d}$$
$$\displaystyle\hskip 56.9055pt+\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H^{2}t}Q^{edc}\nu_{c}\nu_{d}m+\triangle_{12}^{e},$$
(3.29)
where we note that $\frac{1}{\breve{\mathtt{J}}}\left(n-1-\frac{1}{\breve{\mathtt{J}}}\right)-2n+6=\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)$ and $\triangle_{12}^{e}$ is defined by
$$\displaystyle\triangle_{12}^{e}={}$$
$$\displaystyle\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)\frac{\breve{\mathtt{A}}}{H}m\nu^{e}\nu^{d}m_{d}-\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}^{2}}{H^{2}}m^{2}\nu^{e}+\triangle_{11}^{e},$$
with $\triangle_{11}^{e}$ given by (3.27).
Observing from
(3.20) and (3.21) that
$$\displaystyle\nu^{d}g^{ec}\underline{\nabla}_{c}m_{d}-\nu^{c}g^{ed}\underline{\nabla}_{c}m_{d}=\nu^{d}g^{ec}(\underline{\nabla}_{c}m_{d}-\underline{\nabla}_{d}m_{c})$$
$$\displaystyle={}$$
$$\displaystyle\nu^{d}g^{ec}\left[\underline{\nabla}_{c}\underline{\nabla}_{d}\lambda-\underline{\nabla}_{c}\frac{\breve{\mathtt{A}}m}{\breve{\mathtt{J}}H}\nu_{d}-\underline{\nabla}_{d}\underline{\nabla}_{c}\lambda+\underline{\nabla}_{d}\frac{\breve{\mathtt{A}}m}{\breve{\mathtt{J}}H}\nu_{c}\right]=\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}\nu^{d}g^{ec}\left(-\underline{\nabla}_{c}m\nu_{d}+\underline{\nabla}_{d}m\nu_{c}\right)$$
$$\displaystyle={}$$
$$\displaystyle\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}g^{ec}\tensor{\underline{h}}{{}^{d}_{c}}\underline{\nabla}_{d}m=\frac{1}{\breve{\mathtt{J}}Ht}g^{ec}\tensor{\underline{h}}{{}^{d}_{c}}m_{d}=\frac{1}{\breve{\mathtt{J}}Ht}(h^{ed}-\xi^{d}\nu^{e})m_{d}=-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{d}m_{d}\nu^{e}+\frac{1}{\breve{\mathtt{J}}Ht}m_{d}h^{ed}$$
$$\displaystyle={}$$
$$\displaystyle-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{d}m_{d}\nu^{e}+\frac{1}{\breve{\mathtt{J}}Ht}Q^{abc}\nu_{c}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}}m_{d},$$
we find after substituting this expression into (3.29) that
$$\displaystyle Q^{edc}\underline{\nabla}_{c}m_{d}={}$$
$$\displaystyle\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)\frac{1}{Ht}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})m_{d}$$
$$\displaystyle+\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H^{2}t}Q^{edc}\nu_{c}\nu_{d}m+\frac{1}{\breve{\mathtt{J}}Ht}Q^{fgc}\nu_{c}\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}}m_{d}+\triangle^{e}_{1}$$
(3.30)
where $\triangle^{e}_{1}$ is defined by
$$\triangle^{e}_{1}=-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{J}}H}p^{d}m_{d}\nu^{e}+\triangle_{12}^{e}.$$
To complete the derivation, the evolution equation (3.1.2) for $m_{d}$ needs to be supplemented with a first order evolution equation for $m$.
Using the identity $Q^{ebc}\nu_{b}\nu_{c}=\lambda\nu^{e}$ from Lemma A.1.4, it follows immediately from (3.21) that $m$ evolves according to
$$\displaystyle\breve{\mathtt{E}}\lambda\nu^{e}\underline{\nabla}_{e}m=-\breve{\mathtt{E}}\lambda\left(\frac{1}{\breve{\mathtt{J}}}-1\right)\frac{1}{Ht}m+\breve{\mathtt{E}}\frac{\lambda}{\breve{\mathtt{A}}t}m_{e}\nu^{e},$$
where $\breve{\mathtt{E}}$ is a constant that will be fixed later. Combining this equation with (3.1.2) yields the singular symmetric hyperbolic system
$$\displaystyle\mathbf{A}_{1}^{c}\underline{\nabla}_{c}\begin{pmatrix}m_{d}\\
m\end{pmatrix}=\frac{1}{Ht}\mathcal{B}_{1}\begin{pmatrix}m_{d}\\
m\end{pmatrix}+\begin{pmatrix}\triangle^{e}_{1}\\
0\end{pmatrix}$$
where
$$\displaystyle\mathbf{A}_{1}^{c}=\begin{pmatrix}Q^{edc}&0\\
0&\lambda\breve{\mathtt{E}}\nu^{c}\end{pmatrix}$$
and
$$\displaystyle\mathcal{B}_{1}=Q^{abc}\nu_{c}\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)(\tensor{\delta}{{}^{e}_{a}}\tensor{\delta}{{}^{d}_{b}}-\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}})+\frac{1}{\breve{\mathtt{J}}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}}&\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H}\tensor{\delta}{{}^{e}_{a}}\nu_{b}\\
\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}\tensor{\delta}{{}^{d}_{a}}\nu_{b}&\breve{\mathtt{E}}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)\nu_{b}\nu_{a}\end{pmatrix}$$
$$\displaystyle=\begin{pmatrix}Q^{abc}\nu_{c}&0\\
0&-\lambda\breve{\mathtt{E}}\end{pmatrix}\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)(\tensor{\delta}{{}^{e}_{a}}\tensor{\delta}{{}^{d}_{b}}-\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}})+\frac{1}{\breve{\mathtt{J}}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}}&\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H}\tensor{\delta}{{}^{e}_{a}}\nu_{b}\\
-\frac{H}{\breve{\mathtt{A}}}\nu^{d}&\left(\frac{1}{\breve{\mathtt{J}}}-1\right)\end{pmatrix}.$$
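As a consistency check on this factorization (a sketch, using only the identity $Q^{ebc}\nu_{b}\nu_{c}=\lambda\nu^{e}$ from Lemma A.1.4 and the normalization $\nu^{a}\nu_{a}=-1$, which we assume here), the bottom row of the factorized form reproduces the bottom row of the unfactorized expression for $\mathcal{B}_{1}$:
$$(-\lambda\breve{\mathtt{E}})\Bigl(-\frac{H}{\breve{\mathtt{A}}}\nu^{d}\Bigr)=\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}\lambda\nu^{d}=\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}Q^{dbc}\nu_{b}\nu_{c}=Q^{abc}\nu_{c}\,\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}\tensor{\delta}{{}^{d}_{a}}\nu_{b},$$
and similarly $(-\lambda\breve{\mathtt{E}})\bigl(\frac{1}{\breve{\mathtt{J}}}-1\bigr)=Q^{abc}\nu_{c}\,\breve{\mathtt{E}}\bigl(\frac{1}{\breve{\mathtt{J}}}-1\bigr)\nu_{a}\nu_{b}$, since $Q^{abc}\nu_{a}\nu_{b}\nu_{c}=\lambda\nu^{a}\nu_{a}=-\lambda$.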
Decomposing $m_{d}$ into its normal and spatial components then brings this system into the equivalent form
$$\displaystyle\bar{\mathbf{A}}_{1}^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{1}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}+\bar{G}_{1}$$
where
$$\displaystyle\bar{\mathbf{A}}_{1}^{c}=\begin{pmatrix}\nu_{e}Q^{edc}\nu_{d}&\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}Q^{edc}\nu_{d}&\tensor{\underline{h}}{{}^{f}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&\lambda\breve{\mathtt{E}}\nu^{c}\end{pmatrix},\quad\bar{G}_{1}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}\begin{pmatrix}\triangle^{e}_{1}\\
0\end{pmatrix},$$
(3.39)
and
$$\displaystyle\bar{\mathcal{B}}_{1}={}$$
$$\displaystyle\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}\mathcal{B}_{1}\begin{pmatrix}\nu_{d}&\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&1\end{pmatrix}=\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)(-\lambda)&0&\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H}(-\lambda)\\
0&\frac{1}{\breve{\mathtt{J}}}h^{f\hat{e}}&0\\
\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}(-\lambda)&0&\breve{\mathtt{E}}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)(-\lambda)\end{pmatrix}.$$
To complete the proof, we observe from Lemma A.1.4 that $\bar{\mathbf{A}}_{1}^{c}\tensor{\underline{h}}{{}^{b}_{c}}$ and $\bar{\mathbf{A}}^{0}_{1}:=\bar{\mathbf{A}}_{1}^{c}\nu_{c}$ agree with the formulas provided in the statement of the lemma.
∎
Corresponding Fuchsian formulations of the wave equations (3.1.3) and (3.1.4)–(3.1.4) for the gravitational fields $\xi^{e}$, $q$ and $\mathfrak{h}^{ab}-\underline{h}^{ab}$ can be derived using similar arguments. Detailed derivations are provided in Appendix A, see Lemmas A.2.1–A.2.3. The Fuchsian formulations for all the gravitational wave equations (3.1.3)–(3.1.3) and (3.1.4)–(3.1.4) are displayed together in the theorem below. The Fuchsian formulation of the Yang–Mills equations will be derived separately in §3.2.
Theorem 3.1.
The reduced conformal Einstein equations can be expressed in the following first order, symmetric hyperbolic Fuchsian form:
$$\displaystyle-\bar{\mathbf{A}}_{1}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}+\bar{\mathbf{A}}_{1}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{1}\begin{pmatrix}-\nu^{e}m_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e}\\
m\end{pmatrix}+\bar{G}_{1}(t,\mathbf{U}),$$
$$\displaystyle-\bar{\mathbf{A}}_{2}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}\tensor{p}{{}^{a}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}}\\
p^{a}\end{pmatrix}+\bar{\mathbf{A}}_{2}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}\tensor{p}{{}^{a}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}}\\
p^{a}\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{2}\begin{pmatrix}-\nu^{e}\tensor{p}{{}^{a}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}}\\
p^{a}\end{pmatrix}+\bar{G}_{2}(t,\mathbf{U}),$$
$$\displaystyle-\bar{\mathbf{A}}_{3}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}+\bar{\mathbf{A}}_{3}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{3}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}+\bar{G}_{3}(t,\mathbf{U}),$$
$$\displaystyle-\bar{\mathbf{A}}_{4}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}s_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e}\\
s\end{pmatrix}+\bar{\mathbf{A}}_{4}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}s_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e}\\
s\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{4}\begin{pmatrix}-\nu^{e}s_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e}\\
s\end{pmatrix}+\bar{G}_{4}(t,\mathbf{U}),$$
where
$$\displaystyle\bar{\mathbf{A}}_{1}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&\underline{h}_{f\hat{c}}h^{f\hat{e}}&0\\
0&0&\breve{\mathtt{E}}(-\lambda)\end{pmatrix},\quad\bar{\mathbf{A}}_{1}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\underline{h}_{f\hat{c}}\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathbf{A}}_{2}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&\underline{h}_{f\hat{d}}h^{f\hat{e}}&0\\
0&0&\breve{\mathtt{F}}(-\lambda)\end{pmatrix},\quad\bar{\mathbf{A}}_{2}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\underline{h}_{f\hat{d}}\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{1}=\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{J}}}\right)(-\lambda)&0&\left(2-\frac{1}{\breve{\mathtt{J}}}\right)\left(\frac{1}{\breve{\mathtt{J}}}-n+3\right)\frac{\breve{\mathtt{A}}}{H}(-\lambda)\\
0&\frac{1}{\breve{\mathtt{J}}}\underline{h}_{f\hat{c}}h^{f\hat{e}}&0\\
\breve{\mathtt{E}}\frac{H}{\breve{\mathtt{A}}}(-\lambda)&0&\breve{\mathtt{E}}\left(\frac{1}{\breve{\mathtt{J}}}-1\right)(-\lambda)\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{2}=\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)(-\lambda)&0&\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H}(-\lambda)\\
0&\frac{1}{\breve{\mathtt{K}}}\underline{h}_{f\hat{d}}\tensor{h}{{}^{f\hat{e}}}&0\\
\breve{\mathtt{F}}\frac{H}{\breve{\mathtt{B}}}(-\lambda)&0&\breve{\mathtt{F}}\left(\frac{1}{\breve{\mathtt{K}}}-1\right)(-\lambda)\end{pmatrix},$$
$$\displaystyle\bar{G}_{1}(t,\mathbf{U})=\begin{pmatrix}\nu_{e}\triangle^{e}_{1}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}_{\hat{c}e}}\triangle^{e}_{1}(t,\mathbf{U})\\
0\end{pmatrix},\qquad\bar{G}_{2}(t,\mathbf{U})=\begin{pmatrix}\nu_{e}\triangle^{ea}_{2}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}_{\hat{d}e}}\triangle^{ea}_{2}(t,\mathbf{U})\\
0\end{pmatrix},$$
$$\displaystyle\bar{\mathbf{A}}_{3}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&\underline{h}_{fa}h^{f\hat{e}}&0\\
0&0&-\lambda\end{pmatrix},$$
$$\displaystyle\bar{\mathbf{A}}_{3}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\underline{h}_{fa}\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathbf{A}}_{4}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&\underline{h}_{f\hat{a}}h^{f\hat{e}}&0\\
0&0&-\lambda\end{pmatrix},$$
$$\displaystyle\bar{\mathbf{A}}_{4}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\underline{h}_{f\hat{a}}\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{3}=\begin{pmatrix}-\lambda\left(n-2\right)&0&0\\
0&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{4}=\begin{pmatrix}-\lambda\left(n-2\right)&0&0\\
0&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{G}_{3}(t,\mathbf{U})=\begin{pmatrix}\nu_{e}\triangle^{e\hat{a}\hat{b}}_{3}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}_{ae}}\triangle^{e\hat{a}\hat{b}}_{3}(t,\mathbf{U})\\
\lambda\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\end{pmatrix},$$
$$\displaystyle\bar{G}_{4}(t,\mathbf{U})=\begin{pmatrix}\nu_{e}\triangle^{e}_{4}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}_{\hat{a}e}}\triangle^{e}_{4}(t,\mathbf{U})\\
\lambda\nu^{e}s_{e}\end{pmatrix}.$$
Here, the maps $\triangle^{e}_{1}(t,\mathbf{U})$, $\triangle^{ea}_{2}(t,\mathbf{U})$, $\triangle^{e\hat{a}\hat{b}}_{3}(t,\mathbf{U})$ and $\triangle^{e}_{4}(t,\mathbf{U})$ are as defined in Lemmas 3.1.5 and A.2.1–A.2.3, and there exist constants $\iota>0$ and $R>0$ such that these maps are analytic for
$(t,\mathbf{U})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$.
Proof.
The proof is a direct consequence of Lemmas 3.1.5 and A.2.1–A.2.3.
∎
3.2. Fuchsian formulation of the Yang–Mills equations
In this section, we turn to deriving a Fuchsian formulation of the Yang–Mills equations. Under the conformal change of variables (1.27)–(1.28), the Yang–Mills system (1.3)–(1.4) transforms into
$$\displaystyle\nabla^{a}F_{ab}$$
$$\displaystyle=-(n-3)\nabla^{a}\Psi F_{ab}-e^{\frac{\Psi}{2}}g^{ac}[A_{c},F_{ab}],$$
(3.40)
$$\displaystyle\nabla_{[a}F_{bc]}$$
$$\displaystyle=-\nabla_{[a}\Psi\cdot F_{bc]}-e^{\frac{\Psi}{2}}[A_{[a},F_{bc]}],$$
(3.41)
which we will refer to as the conformal Yang–Mills equations.
We remark that the Yang–Mills Bianchi equation (3.41) is independent of the choice of connection, and hence, we can replace it by
$$\underline{\nabla}_{[a}F_{bc]}=-\underline{\nabla}_{[a}\Psi\cdot F_{bc]}-e^{\frac{\Psi}{2}}[A_{[a},F_{bc]}].$$
Noting that
$$\displaystyle\nabla_{a}F_{bc}$$
$$\displaystyle=\underline{\nabla}_{a}F_{bc}-X^{d}_{ab}F_{dc}-X^{d}_{ac}F_{bd},$$
the conformal Yang–Mills equation (3.40)–(3.41) can, with the help of (1.11) and Lemma A.1.3, be rewritten as
$$\displaystyle g^{ba}\underline{\nabla}_{b}F_{ac}$$
$$\displaystyle=\frac{(n-3)}{\tan(Ht)}g^{ba}\nu_{a}F_{bc}+X^{d}F_{dc}+g^{ba}X^{d}_{bc}F_{ad}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}g^{ab}[A_{b},F_{ac}],$$
(3.42)
$$\displaystyle\underline{\nabla}_{[b}F_{ac]}$$
$$\displaystyle=\frac{1}{\tan(Ht)}\nu_{[b}F_{ac]}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}[A_{[b},F_{ac]}].$$
(3.43)
It is worth noting that among the above equations, the non-vanishing, dynamical equations arise from applying $\tensor{\underline{h}}{{}^{c}_{\hat{c}}}$ and $\nu^{c}$ to (3.42) and $\nu^{b}\tensor{\underline{h}}{{}^{a}_{p}}\tensor{\underline{h}}{{}^{c}_{q}}$ to (3.43),
while the non-dynamical constraint equations are obtained from applying $\tensor{\underline{h}}{{}^{b}_{d}}\tensor{\underline{h}}{{}^{a}_{p}}\tensor{\underline{h}}{{}^{c}_{q}}$ and $g^{cd}\nu_{d}$ to
(3.43) and (3.42), respectively.
In the following, we assume a temporal gauge for the conformal Yang–Mills field defined by
$$A_{d}\nu^{d}=0.$$
(3.44)
With the help of this gauge choice, we combine, in the following lemma, the conformal Yang–Mills equations (3.42)–(3.43) together with an equation for the spatial component of the gauge potential into a single system that is expressed in terms of the Yang–Mills fields
$$E_{b}=-\nu^{p}F_{pa}\tensor{\underline{h}}{{}^{a}_{b}},\quad H_{db}=\tensor{\underline{h}}{{}^{c}_{d}}F_{ca}\tensor{\underline{h}}{{}^{a}_{b}},\quad\text{and}\quad\bar{A}_{b}=A_{a}\tensor{\underline{h}}{{}^{a}_{b}}.$$
Lemma 3.2.1.
If $(E_{a},A_{b})$ solves the conformal Yang–Mills equations (3.42)–(3.43) in the temporal gauge (3.44), then the triple $(E_{d},\,H_{pq},\,\bar{A}_{s})$ defined via (1.42) and (1.44) solves
$$\displaystyle-\mathcal{A}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}E_{\hat{a}}\\
H_{l\hat{a}}\\
\bar{A}_{\hat{a}}\end{pmatrix}+\mathcal{A}^{c}_{\hat{a}}\tensor{\underline{h}}{{}^{\hat{c}}_{c}}\underline{\nabla}_{\hat{c}}\begin{pmatrix}E_{f}\\
H_{lf}\\
\bar{A}_{f}\end{pmatrix}={}$$
$$\displaystyle\frac{1}{Ht}\mathcal{B}\begin{pmatrix}E_{\hat{a}}\\
H_{l\hat{a}}\\
\bar{A}_{\hat{a}}\end{pmatrix}+\frac{1}{\sqrt{t}}\begin{pmatrix}\Xi_{1\hat{a}}\\
\Xi^{h}_{2\hat{a}}\\
\Xi_{3\hat{a}}\end{pmatrix}+\begin{pmatrix}\widehat{\Delta}_{1\hat{a}}\\
\widehat{\Delta}_{2\hat{a}}^{h}\\
\widehat{\Delta}_{3\hat{a}}\end{pmatrix},$$
(3.60)
where
$$\mathcal{A}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&h^{hl}&0\\
0&0&1\end{pmatrix},\quad\mathcal{B}=\begin{pmatrix}-(n-3)\lambda&-(n-4)\xi^{l}&0\\
0&h^{hl}&0\\
0&0&\frac{1}{2}\end{pmatrix},$$
$$\displaystyle\mathcal{A}_{\hat{a}}^{c}\tensor{\underline{h}}{{}^{\hat{c}}_{c}}$$
$$\displaystyle=\begin{pmatrix}-2\tensor{h}{{}^{f}_{\hat{a}}}g^{dc}\nu_{d}\tensor{\underline{h}}{{}^{\hat{c}}_{c}}+\tensor{\underline{h}}{{}^{\hat{c}}_{\hat{a}}}g^{dc}\nu_{d}\tensor{\underline{h}}{{}^{f}_{c}}&-\tensor{\underline{h}}{{}^{f}_{\hat{a}}}h^{l\hat{c}}&0\\
\tensor{\underline{h}}{{}^{\hat{c}}_{\hat{a}}}h^{hf}-\tensor{\underline{h}}{{}^{f}_{\hat{a}}}h^{h\hat{c}}&0&0\\
0&0&0\end{pmatrix},$$
and
$$\displaystyle\Xi_{1\hat{a}}={}$$
$$\displaystyle h^{dc}[\bar{A}_{c},H_{d\hat{a}}],\quad\Xi^{e}_{2\hat{a}}=h^{ec}([\bar{A}_{c},E_{\hat{a}}]-[\bar{A}_{\hat{a}},E_{c}]),\quad\Xi_{3\hat{a}}=E_{\hat{a}},$$
$$\displaystyle\widehat{\Delta}_{1\hat{a}}={}$$
$$\displaystyle\sqrt{t}\breve{\mathtt{B}}p^{c}(-[\bar{A}_{\hat{a}},E_{c}]+2[\bar{A}_{c},E_{\hat{a}}])-X^{d}(\nu_{d}E_{\hat{a}}+H_{d\hat{a}})-g^{bc}\tensor{X}{{}^{d}_{ba}}\tensor{\underline{h}}{{}^{a}_{\hat{a}}}(\nu_{c}E_{d}-\nu_{d}E_{c}+H_{cd})$$
$$\displaystyle+\left(\frac{1}{\tan(Ht)}-\frac{1}{Ht}\right)\left(-(n-3)\lambda E_{\hat{a}}-(n-4)\breve{\mathtt{B}}tp^{d}H_{d\hat{a}}\right)$$
$$\displaystyle+\left(\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}-\frac{1}{\sqrt{t}}\right)\left(h^{dc}[\bar{A}_{c},H_{d\hat{a}}]+t\breve{\mathtt{B}}p^{c}(2[\bar{A}_{c},E_{\hat{a}}]-[\bar{A}_{\hat{a}},E_{c}])\right),$$
$$\displaystyle\widehat{\Delta}_{2\hat{a}}^{h}={}$$
$$\displaystyle\left(\frac{1}{\tan(Ht)}-\frac{1}{Ht}\right)h^{hd}H_{d\hat{a}}+\left(\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}-\frac{1}{\sqrt{t}}\right)h^{hc}([\bar{A}_{c},E_{\hat{a}}]-[\bar{A}_{\hat{a}},E_{c}]),$$
$$\displaystyle\widehat{\Delta}_{3\hat{a}}={}$$
$$\displaystyle\left(\frac{1}{2\tan(Ht)}-\frac{1}{2Ht}\right)\bar{A}_{\hat{a}}+\left(\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}-\frac{1}{\sqrt{t}}\right)E_{\hat{a}}.$$
Moreover, there exist constants $\iota>0$ and $R>0$ such that the maps $\Xi_{i}$, $\widehat{\Delta}_{i}$, $i=1,2,3$, are analytic for $(t,\mathbf{U})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$.
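The analyticity of these maps at $t=0$ rests on the cancellation of the apparent singularities in the combinations $\frac{1}{\tan(Ht)}-\frac{1}{Ht}$ and $\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}-\frac{1}{\sqrt{t}}$ that enter $\widehat{\Delta}_{1\hat{a}}$, $\widehat{\Delta}_{2\hat{a}}^{h}$ and $\widehat{\Delta}_{3\hat{a}}$. The following symbolic sketch (an illustration only, not part of the proof) confirms that both combinations vanish as $t\searrow 0$:

```python
import sympy as sp

t, H = sp.symbols('t H', positive=True)

# The two "remainder" combinations appearing in the Delta-hat maps: each is a
# difference of a singular function and its singular part at t = 0, so the
# 1/(Ht) and 1/sqrt(t) singularities cancel.
f1 = 1/sp.tan(H*t) - 1/(H*t)
f2 = sp.sqrt(H)/sp.sqrt(sp.sin(H*t)) - 1/sp.sqrt(t)

# Both combinations extend continuously through t = 0 with value 0:
print(sp.limit(f1, t, 0, '+'))  # 0
print(sp.limit(f2, t, 0, '+'))  # 0

# Leading behaviour of the first combination: -H*t/3 + O(t^3), matching the
# Laurent expansion cot(x) = 1/x - x/3 - x^3/45 - ...
print(sp.series(f1, t, 0, 4))
```

The second combination behaves like $\frac{H^{2}t^{3/2}}{12}$ near $t=0$, so neither term contributes a genuine singularity to the source maps.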
Remark 3.2.1.
An important point regarding the equation (3.60) is that it is not equivalent to the conformal Yang–Mills equations expressed in the temporal gauge.
This is because (3.60) does not guarantee that the dynamical Yang–Mills equation obtained from applying $\nu^{c}$ to (3.42) must hold. However, it is true that if $(E_{a},\,\bar{A}_{b})$ solves the conformal Yang–Mills equations
in the temporal gauge, then the triple $(E_{a},\,H_{pq},\,\bar{A}_{b})$, where $H_{pq}$ is given in terms of $\bar{A}_{a}$ by (1.45), determines a solution of (3.60).
Proof of Lemma 3.2.1.
Multiplying (3.42) on both sides by the spatial projection tensor $\tensor{\underline{h}}{{}^{a}_{b}}$ gives
$$\displaystyle\quad Q^{edc}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}+(-\nu^{d}g^{ec}+\nu^{c}g^{ed})\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}$$
$$\displaystyle=\frac{(n-3)}{\tan(Ht)}\nu^{e}g^{dc}\nu_{c}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}+\nu^{e}X^{d}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}+\nu^{e}g^{\hat{a}\hat{b}}\tensor{X}{{}^{d}_{\hat{a}a}}F_{\hat{b}d}\tensor{\underline{h}}{{}^{a}_{b}}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}\nu^{e}g^{cd}[A_{d},F_{ca}]\tensor{\underline{h}}{{}^{a}_{b}}.$$
Noting that $-\nu^{d}g^{ea}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{c}_{b}}=-\nu^{d}g^{ef}\tensor{\underline{h}}{{}^{a}_{f}}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{c}_{b}}=-\nu^{d}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}\tensor{\underline{h}}{{}^{a}_{f}}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{c}_{b}}$ due to the anti-symmetry of the Yang–Mills field, it follows from the Yang–Mills Bianchi equation (3.43) that
$$\displaystyle(-\nu^{d}g^{ec}+\nu^{c}g^{ed})\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}=\nu^{d}g^{ec}(\underline{\nabla}_{c}F_{ad}+\underline{\nabla}_{d}F_{ca})\tensor{\underline{h}}{{}^{a}_{b}}$$
$$\displaystyle=$$
$$\displaystyle-\nu^{d}g^{ec}\underline{\nabla}_{a}F_{dc}\tensor{\underline{h}}{{}^{a}_{b}}+\frac{3}{\tan(Ht)}\nu^{d}g^{ec}\tensor{\underline{h}}{{}^{a}_{b}}\nu_{[d}F_{ca]}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}\nu^{d}g^{ec}\tensor{\underline{h}}{{}^{a}_{b}}[A_{[d},F_{ca]}]$$
$$\displaystyle=$$
$$\displaystyle-\nu^{d}g^{ea}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{c}_{b}}-\frac{1}{\tan(Ht)}(g^{ed}+\nu^{d}g^{ec}\nu_{c})F_{da}\tensor{\underline{h}}{{}^{a}_{b}}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}\nu^{d}g^{ec}\tensor{\underline{h}}{{}^{a}_{b}}[A_{[d},F_{ca]}]$$
$$\displaystyle=$$
$$\displaystyle-\nu^{d}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}\tensor{\underline{h}}{{}^{a}_{f}}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{c}_{b}}-\frac{1}{\tan(Ht)}g^{ec}\tensor{\underline{h}}{{}^{d}_{c}}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}\nu^{d}g^{ec}\tensor{\underline{h}}{{}^{a}_{b}}[A_{[d},F_{ca]}].$$
Combining the above two expressions, we arrive at
$$\displaystyle\tensor{\underline{h}}{{}^{f}_{b}}Q^{edc}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{a}_{f}}-\tensor{\underline{h}}{{}^{c}_{b}}\nu^{d}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}\underline{\nabla}_{c}F_{da}\tensor{\underline{h}}{{}^{a}_{f}}=\frac{1}{Ht}\tensor{\underline{h}}{{}^{f}_{b}}((n-3)\nu^{e}g^{dc}\nu_{c}+g^{ec}\tensor{\underline{h}}{{}^{d}_{c}})F_{da}\tensor{\underline{h}}{{}^{a}_{f}}$$
$$\displaystyle+\frac{1}{\sqrt{t}}\left(-\tensor{\underline{h}}{{}^{a}_{b}}\nu^{e}g^{cd}[A_{d},F_{ca}]+\tensor{\underline{h}}{{}^{a}_{b}}\nu^{d}g^{ec}[A_{[d},F_{ca]}]\right)+\tensor{\widehat{\Delta}}{{}^{\prime e}_{b}},$$
(3.61)
where
$$\displaystyle\tensor{\widehat{\Delta}}{{}^{\prime e}_{b}}={}$$
$$\displaystyle\nu^{e}X^{d}F_{da}\tensor{\underline{h}}{{}^{a}_{b}}+\nu^{e}g^{\hat{a}\hat{b}}\tensor{X}{{}^{d}_{\hat{a}a}}F_{\hat{b}d}\tensor{\underline{h}}{{}^{a}_{b}}+\left(\frac{1}{\tan(Ht)}-\frac{1}{Ht}\right)\left((n-3)\nu^{e}g^{dc}\nu_{c}+g^{ec}\tensor{\underline{h}}{{}^{d}_{c}}\right)F_{da}\tensor{\underline{h}}{{}^{a}_{b}}$$
$$\displaystyle+\left(\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}-\frac{1}{\sqrt{t}}\right)\tensor{\underline{h}}{{}^{a}_{b}}\left(\nu^{d}g^{ec}[A_{[d},F_{ca]}]-\nu^{e}g^{cd}[A_{d},F_{ca}]\right).$$
Then noting the decomposition
$$\displaystyle F_{da}\tensor{\underline{h}}{{}^{a}_{f}}={}$$
$$\displaystyle\begin{pmatrix}\nu_{d},\tensor{\underline{h}}{{}^{l}_{d}}\end{pmatrix}\begin{pmatrix}-\nu^{p}F_{pa}\tensor{\underline{h}}{{}^{a}_{f}}\\
\tensor{\underline{h}}{{}^{q}_{l}}F_{qa}\tensor{\underline{h}}{{}^{a}_{f}}\end{pmatrix}=\begin{pmatrix}\nu_{d},\tensor{\underline{h}}{{}^{l}_{d}}\end{pmatrix}\begin{pmatrix}E_{f}\\
H_{lf}\end{pmatrix},$$
we can act on (3.61) with
$\begin{pmatrix}\nu_{e}\\
\tensor{\underline{h}}{{}^{h}_{e}}\end{pmatrix}$ to get
$$\displaystyle\begin{pmatrix}\tensor{\underline{h}}{{}^{f}_{b}}\nu_{e}Q^{edc}\nu_{d}+\tensor{\underline{h}}{{}^{c}_{b}}\nu_{e}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}&\tensor{\underline{h}}{{}^{f}_{b}}\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{l}_{d}}\\
\tensor{\underline{h}}{{}^{f}_{b}}\tensor{\underline{h}}{{}^{h}_{e}}Q^{edc}\nu_{d}+\tensor{\underline{h}}{{}^{c}_{b}}\tensor{\underline{h}}{{}^{h}_{e}}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}&\tensor{\underline{h}}{{}^{f}_{b}}\tensor{\underline{h}}{{}^{h}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{l}_{d}}\end{pmatrix}\underline{\nabla}_{c}\begin{pmatrix}E_{f}\\
H_{lf}\end{pmatrix}$$
(3.66)
$$\displaystyle\hskip 71.13188pt=\frac{1}{Ht}\begin{pmatrix}-(n-3)\tensor{\underline{h}}{{}^{f}_{b}}\lambda&-(n-4)\tensor{\underline{h}}{{}^{f}_{b}}\xi^{l}\\
0&\tensor{\underline{h}}{{}^{f}_{b}}h^{hl}\end{pmatrix}\begin{pmatrix}E_{f}\\
H_{lf}\end{pmatrix}$$
$$\displaystyle\hskip 85.35826pt+\frac{1}{\sqrt{t}}\begin{pmatrix}\nu_{e}\nu^{d}g^{ec}[A_{[d},F_{ca]}]\tensor{\underline{h}}{{}^{a}_{b}}+g^{dc}[A_{c},F_{da}]\tensor{\underline{h}}{{}^{a}_{b}}\\
\tensor{\underline{h}}{{}^{h}_{e}}\nu^{d}g^{ec}\tensor{\underline{h}}{{}^{a}_{b}}[A_{[d},F_{ca]}]\end{pmatrix}+\begin{pmatrix}\nu_{e}\tensor{\widehat{\Delta}}{{}^{\prime e}_{b}}\\
\tensor{\underline{h}}{{}^{h}_{e}}\tensor{\widehat{\Delta}}{{}^{\prime e}_{b}}\end{pmatrix}.$$
But, since
$$\displaystyle\frac{1}{\sqrt{t}}\nu_{e}\nu^{d}g^{ec}[A_{[d},F_{ca]}]\tensor{\underline{h}}{{}^{a}_{b}}=\frac{1}{\sqrt{t}}\xi^{c}\nu^{d}([\bar{A}_{a},F_{dc}]+[\bar{A}_{c},F_{ad}])\tensor{\underline{h}}{{}^{a}_{b}}=\sqrt{t}\breve{\mathtt{B}}p^{c}(-[\bar{A}_{b},E_{c}]+[\bar{A}_{c},E_{b}]),$$
$$\displaystyle\frac{1}{\sqrt{t}}\tensor{\underline{h}}{{}^{h}_{e}}\nu^{d}g^{ec}[A_{[d},F_{ca]}]\tensor{\underline{h}}{{}^{a}_{b}}=\frac{1}{\sqrt{t}}h^{hc}(-[\bar{A}_{b},E_{c}]+[\bar{A}_{c},E_{b}])$$
and
$$\displaystyle\frac{1}{\sqrt{t}}g^{dc}[A_{c},F_{da}]\tensor{\underline{h}}{{}^{a}_{b}}=-\sqrt{t}\breve{\mathtt{B}}p^{c}\nu^{d}[\bar{A}_{c},F_{da}]\tensor{\underline{h}}{{}^{a}_{b}}+\frac{1}{\sqrt{t}}h^{dc}[\bar{A}_{c},F_{da}]\tensor{\underline{h}}{{}^{a}_{b}}=\sqrt{t}\breve{\mathtt{B}}p^{c}[\bar{A}_{c},E_{b}]+\frac{1}{\sqrt{t}}h^{dc}[\bar{A}_{c},H_{db}]$$
in the temporal gauge, equation
(3.66) becomes
$$\displaystyle\begin{pmatrix}\tensor{\underline{h}}{{}^{f}_{b}}\nu_{e}Q^{edc}\nu_{d}+\tensor{\underline{h}}{{}^{c}_{b}}\nu_{e}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}&\tensor{\underline{h}}{{}^{f}_{b}}\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{l}_{d}}\\
\tensor{\underline{h}}{{}^{f}_{b}}\tensor{\underline{h}}{{}^{h}_{e}}Q^{edc}\nu_{d}+\tensor{\underline{h}}{{}^{c}_{b}}\tensor{\underline{h}}{{}^{h}_{e}}g^{eq}\tensor{\underline{h}}{{}^{f}_{q}}&\tensor{\underline{h}}{{}^{f}_{b}}\tensor{\underline{h}}{{}^{h}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{l}_{d}}\end{pmatrix}\underline{\nabla}_{c}\begin{pmatrix}E_{f}\\
H_{lf}\end{pmatrix}$$
(3.81)
$$\displaystyle=$$
$$\displaystyle\frac{1}{Ht}\begin{pmatrix}-(n-3)\tensor{\underline{h}}{{}^{f}_{b}}\lambda&-(n-4)\tensor{\underline{h}}{{}^{f}_{b}}\xi^{l}\\
0&\tensor{\underline{h}}{{}^{f}_{b}}h^{hl}\end{pmatrix}\begin{pmatrix}E_{f}\\
H_{lf}\end{pmatrix}+\frac{1}{\sqrt{t}}\begin{pmatrix}h^{dc}[\bar{A}_{c},H_{db}]\\
h^{hc}([\bar{A}_{c},E_{b}]-[\bar{A}_{b},E_{c}])\end{pmatrix}+\begin{pmatrix}\widehat{\Delta}_{1b}\\
\tensor{\widehat{\Delta}}{{}^{h}_{2b}}\end{pmatrix},$$
where
$$\displaystyle\widehat{\Delta}_{1b}=\nu_{e}\tensor{\widehat{\Delta}}{{}^{\prime e}_{b}}+\sqrt{t}\breve{\mathtt{B}}p^{c}(-[\bar{A}_{b},E_{c}]+2[\bar{A}_{c},E_{b}]){\quad\text{and}\quad}\widehat{\Delta}^{h}_{2b}=\tensor{\underline{h}}{{}^{h}_{e}}\tensor{\widehat{\Delta}}{{}^{\prime e}_{b}}.$$
On the other hand,
$$\tilde{F}_{da}\nu^{d}=\nu^{d}\underline{\nabla}_{d}\tilde{A}_{a}-\underline{\nabla}_{a}\tilde{A}_{d}\nu^{d}+\nu^{d}[\tilde{A}_{d},\tilde{A}_{a}]$$
(3.91)
by the definition (1.1) of the Yang–Mills curvature, and so, it follows from (1.27), (1.28), (A.4) and a direct calculation that, in the temporal gauge (3.44), (3.91) reduces to
$$-\tensor{\underline{h}}{{}^{f}_{b}}\nu^{d}\underline{\nabla}_{d}(\bar{A}_{f})=\frac{1}{2Ht}\bar{A}_{b}+\frac{1}{\sqrt{t}}E_{b}+\widehat{\Delta}_{3b}$$
(3.92)
where $\widehat{\Delta}_{3b}$ is given by
$$\widehat{\Delta}_{3b}=\left(\frac{1}{2\tan(Ht)}-\frac{1}{2Ht}\right)\bar{A}_{b}+\left(\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}-\frac{1}{\sqrt{t}}\right)E_{b}.$$
Putting (3.81) and (3.92) together completes the proof.
∎
Although $\mathcal{A}^{0}$ in (3.60) is positive and symmetric, the system (3.60) is not symmetric hyperbolic due to the non-symmetry of the matrices $\mathcal{A}_{\hat{a}}^{c}\tensor{\underline{h}}{{}^{\hat{c}}_{c}}$. To remedy this defect, we supplement (3.60) with an additional equation and introduce a new variable in order to symmetrize it. We begin by appending to (3.60) the
dynamical equation for $E_{a}$ given by
$$\displaystyle g^{ba}\underline{\nabla}_{b}E_{a}$$
$$\displaystyle=\frac{\breve{\mathtt{B}}(n-3)t}{\tan(Ht)}p^{b}E_{b}+(X^{d}E_{d}+\nu^{c}g^{ba}\tensor{X}{{}^{d}_{bc}}F_{ad})-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}h^{ab}[\bar{A}_{b},E_{a}],$$
(3.93)
which is obtained by contracting (3.42) with $\nu^{c}$ to get
$$g^{ba}\underline{\nabla}_{b}F_{ac}\nu^{c}=\frac{(n-3)}{\tan(Ht)}g^{ba}\nu_{a}F_{bc}\nu^{c}+\nu^{c}(X^{d}F_{dc}+g^{ba}\tensor{X}{{}^{d}_{bc}}F_{ad})-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}g^{ab}[A_{b},F_{ac}]\nu^{c},$$
and noting
$$\displaystyle g^{ba}\nu_{a}F_{bc}\nu^{c}=g^{\hat{a}a}\tensor{\underline{h}}{{}^{b}_{\hat{a}}}\nu_{a}\tensor{\underline{h}}{{}^{d}_{b}}F_{dc}\nu^{c}=\xi^{b}E_{b}\overset{\eqref{E:V}}{=}\breve{\mathtt{B}}tp^{b}E_{b}.$$
At this point, (3.60) with (3.93) appended to it is still not symmetric hyperbolic. To complete the symmetrization, we define
$$\mathcal{E}^{a}=-g^{e\hat{b}}\tensor{\underline{h}}{{}^{a}_{e}}E_{\hat{b}},$$
and note that $E_{b}$ and $\mathcal{E}^{a}$ are related by
$$\mathcal{E}^{a}=-h^{a\hat{b}}E_{\hat{b}}{\quad\text{and}\quad}E_{b}=-\mathcal{E}^{a}g_{ab}-g^{e\hat{b}}\nu_{e}E_{\hat{b}}\nu^{a}g_{ab}.$$
(3.94)
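For completeness, the second relation in (3.94) follows from the definition of $\mathcal{E}^{a}$ by a short computation (a sketch, assuming the standard projector form $\tensor{\underline{h}}{{}^{a}_{e}}=\tensor{\delta}{{}^{a}_{e}}+\nu^{a}\nu_{e}$ for the unit timelike $\nu^{a}$ with $\nu^{a}\nu_{a}=-1$):
$$-\mathcal{E}^{a}g_{ab}-g^{e\hat{b}}\nu_{e}E_{\hat{b}}\nu^{a}g_{ab}=g^{e\hat{b}}\bigl(\tensor{\underline{h}}{{}^{a}_{e}}-\nu^{a}\nu_{e}\bigr)E_{\hat{b}}g_{ab}=g^{a\hat{b}}g_{ab}E_{\hat{b}}=\tensor{\delta}{{}^{\hat{b}}_{b}}E_{\hat{b}}=E_{b}.$$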
It can then be verified by a straightforward calculation, which we carry out in the proof of the following lemma, that the system consisting of
(3.60) and (3.93) can be cast in a symmetric hyperbolic form in terms of the variables $\mathcal{E}^{e}$, $E_{d}$, $H_{ab}$, $\bar{A}_{s}$.
Lemma 3.2.2.
If $(E_{a},\,A_{b})$ solves the conformal Yang–Mills equations (3.42)–(3.43) in the temporal gauge (3.44), then the quadruple $(\mathcal{E}^{e},E_{d},\,H_{pq},\,\bar{A}_{s})$ defined via (1.42), (1.43) and (1.44) solves the symmetric hyperbolic equation
$$\displaystyle-\check{\mathbf{A}}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}\mathcal{E}^{e}\\
E_{\hat{d}}\\
H_{\hat{a}\hat{b}}\\
\bar{A}_{s}\end{pmatrix}-\check{\mathbf{A}}^{f}\tensor{\underline{h}}{{}^{c}_{f}}\underline{\nabla}_{c}\begin{pmatrix}\mathcal{E}^{e}\\
E_{\hat{d}}\\
H_{\hat{a}\hat{b}}\\
\bar{A}_{s}\end{pmatrix}=$$
$$\displaystyle\frac{1}{Ht}\check{\mathcal{B}}\begin{pmatrix}\mathcal{E}^{e}\\
E_{\hat{d}}\\
H_{\hat{a}\hat{b}}\\
\bar{A}_{s}\end{pmatrix}+\frac{1}{\sqrt{t}}\begin{pmatrix}-\Xi_{1\hat{e}}\\
\tensor{h}{{}^{d\hat{a}}}\Xi_{1\hat{a}}\\
-\tensor{h}{{}^{a\hat{a}}}\Xi^{b}_{2\hat{a}}\\
h^{ra}\Xi_{3a}\end{pmatrix}+\begin{pmatrix}\mathfrak{D}^{\sharp}_{1\hat{e}}\\
\mathfrak{D}_{2}^{\sharp d}\\
\mathfrak{D}_{3}^{\sharp ab}\\
\mathfrak{D}_{4}^{\sharp r}\end{pmatrix},$$
(3.115)
where
$$\displaystyle\check{\mathbf{A}}^{0}={}$$
$$\displaystyle\begin{pmatrix}-\lambda\tensor{\underline{h}}{{}^{a}_{\hat{e}}}g_{ba}\tensor{\underline{h}}{{}^{b}_{e}}&-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{\hat{e}}}\xi^{\hat{d}}&0&0\\
-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\xi^{d}&\bigl{[}-\lambda h^{\hat{d}d}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{d}\xi^{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\bigr{]}&0&0\\
0&0&h^{\hat{a}a}h^{\hat{b}b}&0\\
0&0&0&h^{rs}\end{pmatrix},$$
$$\displaystyle\check{\mathbf{A}}^{f}\tensor{\underline{h}}{{}^{c}_{f}}={}$$
$$\displaystyle\begin{pmatrix}2\xi^{c}\tensor{\underline{h}}{{}^{a}_{\hat{e}}}g_{ba}\tensor{\underline{h}}{{}^{b}_{e}}&\bigl{[}2\xi^{c}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{\hat{e}}}+\tensor{\underline{h}}{{}^{c}_{\hat{e}}}\bigr{]}\xi^{\hat{d}}&-h^{\hat{a}c}\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{e}}}&0\\
\bigl{[}2\xi^{c}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\xi^{d}&\bigl{[}2\nu^{r}\nu^{s}g_{rs}\xi^{c}\xi^{\hat{d}}\xi^{d}-2\xi^{(d}h^{\hat{d})c}+2\xi^{c}h^{\hat{d}d}\bigr{]}&-h^{\hat{a}d}h^{\hat{b}c}&0\\
-h^{ac}\tensor{\underline{h}}{{}^{b}_{e}}&-h^{a\hat{d}}h^{bc}&0&0\\
0&0&0&0\end{pmatrix},$$
$$\displaystyle\check{\mathcal{B}}={}$$
$$\displaystyle\begin{pmatrix}-(n-3)\lambda\tensor{\underline{h}}{{}^{a}_{\hat{e}}}g_{ba}\tensor{\underline{h}}{{}^{b}_{e}}&0&0&0\\
0&-(n-3)\lambda h^{\hat{d}d}&0&0\\
0&0&h^{\hat{a}a}h^{\hat{b}b}&0\\
0&0&0&\frac{1}{2}h^{rs}\end{pmatrix},$$
$$\displaystyle\mathfrak{D}^{\sharp}_{1\hat{e}}(t,\mathbf{U})={}$$
$$\displaystyle-\widehat{\Delta}_{1\hat{e}}+(2\breve{\mathtt{B}}tp^{c}-\lambda\nu^{c})\tensor{\underline{h}}{{}^{\hat{a}}_{\hat{e}}}g_{\hat{a}a}E_{b}\underline{\nabla}_{c}h^{ba}+\frac{n-4}{H}\breve{\mathtt{B}}p^{a}H_{a\hat{e}}-\frac{n-3}{H}\breve{\mathtt{B}}\lambda p^{\hat{d}}\nu^{a}g_{a\hat{e}}E_{\hat{d}},$$
$$\displaystyle\mathfrak{D}_{2}^{\sharp d}(t,\mathbf{U})={}$$
$$\displaystyle\tensor{h}{{}^{d\hat{a}}}\widehat{\Delta}_{1\hat{a}}+(2\breve{\mathtt{B}}tp^{c}-\lambda\nu^{c})\breve{\mathtt{B}}t\nu^{b}g_{b\hat{a}}p^{d}E_{\hat{b}}\underline{\nabla}_{c}h^{\hat{a}\hat{b}}+\breve{\mathtt{B}}tp^{d}E_{\hat{a}}\underline{\nabla}_{c}h^{c\hat{a}}$$
$$\displaystyle+2\breve{\mathtt{B}}tp^{d}X^{b}E_{b}+2\breve{\mathtt{B}}tp^{d}\nu^{c}(g^{bs}\tensor{X}{{}^{\hat{c}}_{sc}}H_{b\hat{c}}+g^{rs}\tensor{X}{{}^{b}_{sc}}(\nu_{r}E_{b}-\nu_{b}E_{r}))$$
$$\displaystyle+2\breve{\mathtt{B}}t\frac{\breve{\mathtt{B}}(n-3)t}{\tan(Ht)}p^{d}p^{b}E_{b}-2\breve{\mathtt{B}}p^{d}\frac{\sqrt{H}t}{\sqrt{\sin(Ht)}}h^{ab}[\bar{A}_{b},E_{a}]-\frac{n-4}{H}\breve{\mathtt{B}}h^{\hat{b}d}p^{\hat{a}}H_{\hat{a}\hat{b}},$$
$$\displaystyle\mathfrak{D}_{3}^{\sharp ab}(t,\mathbf{U})={}$$
$$\displaystyle-\tensor{h}{{}^{a\hat{a}}}\widehat{\Delta}_{2\hat{a}}^{b}-h^{ac}E_{\hat{a}}\underline{\nabla}_{c}h^{b\hat{a}},\quad\mathfrak{D}_{4}^{\sharp r}(t,\mathbf{U})=h^{r\hat{a}}\widehat{\Delta}_{3\hat{a}},$$
and the maps $\Xi_{i},\,\widehat{\Delta}_{i},\,i=1,2,3$, are as defined in Lemma 3.2.1.
Moreover, there exist constants $\iota>0$ and $R>0$ such that the maps $\mathfrak{D}^{\sharp}_{1\hat{e}}(t,\mathbf{U})$, $\mathfrak{D}_{2}^{\sharp d}(t,\mathbf{U})$, $\mathfrak{D}_{3}^{\sharp ab}(t,\mathbf{U})$ and $\mathfrak{D}_{4}^{\sharp r}(t,\mathbf{U})$ are analytic for $(t,\mathbf{U})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$ and vanish for $\mathbf{U}=0$.
Remark 3.2.2.
The symmetric hyperbolic nature of (3.115) is a consequence of the symmetry of the matrices $\check{\mathbf{A}}^{0}$ and $\check{\mathbf{A}}^{f}\tensor{\underline{h}}{{}^{c}_{f}}$ in the pairs of indices $(e,\,\hat{e})$, $(d,\,\hat{d})$, $(a,\,\hat{a})$, $(b,\,\hat{b})$ and $(r,\,s)$,
and the fact that $\check{\mathbf{A}}^{0}$ is positive as can be easily verified.
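The block-diagonal structure behind this remark can be previewed numerically. The sketch below is purely illustrative: the four blocks are random symmetric positive definite stand-ins (their sizes are arbitrary choices, not the paper's actual coefficient blocks), and it simply confirms that a block-diagonal operator assembled from such blocks is itself symmetric and positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)

def spd_block(k):
    # a random symmetric positive definite k x k block:
    # m @ m.T is positive semi-definite, adding k*I makes it definite
    m = rng.standard_normal((k, k))
    return m @ m.T + k * np.eye(k)

# hypothetical stand-ins for the diagonal blocks of a matrix like A^0
blocks = [spd_block(k) for k in (3, 3, 9, 3)]
n = sum(b.shape[0] for b in blocks)
A0 = np.zeros((n, n))
i = 0
for b in blocks:
    k = b.shape[0]
    A0[i:i+k, i:i+k] = b
    i += k

# symmetry and positivity of the assembled block-diagonal operator
assert np.allclose(A0, A0.T)
assert np.linalg.eigvalsh(A0)[0] > 0
```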
Remark 3.2.3.
As was the case for (3.60), equation (3.115) is not equivalent to the conformal Yang–Mills equations expressed in the temporal gauge (3.44). This is because the relation $\mathcal{E}^{e}=-h^{ea}E_{a}$ cannot be recovered from a solution $(\mathcal{E}^{e},E_{d},H_{ab},\bar{A}_{s})$ of (3.115) even if $\mathcal{E}^{e}=-h^{ea}E_{a}$ holds initially. Consequently, we cannot guarantee that the Yang–Mills equation (3.93) will hold for a
solution of (3.115). However, it is true that if $(E_{a},\,\bar{A}_{b})$ solves the conformal Yang–Mills system in the temporal gauge, then the quadruple $(\mathcal{E}^{e},\,E_{d},\,H_{pq},\,\bar{A}_{s})$, where $\mathcal{E}^{e}=-h^{ea}E_{a}$ and $H_{pq}$ is given in terms of $\bar{A}_{a}$ via (1.45), will solve (3.115).
Proof of Lemma 3.2.2.
We start the derivation of (3.115) by considering the first equation from (3.66), which reads
$$\displaystyle\nu_{e}Q^{edc}\nu_{d}\underline{\nabla}_{c}E_{\hat{a}}+\tensor{\underline{h}}{{}^{c}_{\hat{a}}}\nu_{e}g^{e\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}+\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{\hat{b}}_{d}}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}$$
$$\displaystyle\hskip 56.9055pt=\frac{1}{Ht}\bigl{[}-(n-3)\lambda\tensor{\underline{h}}{{}^{\hat{d}}_{\hat{a}}}E_{\hat{d}}-(n-4)\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{a}}}\xi^{a}H_{a\hat{b}}\bigr{]}+\frac{1}{\sqrt{t}}\Xi_{1\hat{a}}+\widehat{\Delta}_{1\hat{a}}.$$
(3.116)
With the help of (3.94) and the identity
$$\displaystyle-\nu_{e}Q^{edc}\nu_{d}\underline{\nabla}_{c}g_{a\hat{a}}\mathcal{E}^{a}-\nu_{e}Q^{edc}\nu_{d}\underline{\nabla}_{c}g_{a\hat{a}}g^{\hat{c}\hat{b}}\nu_{\hat{c}}E_{\hat{b}}\nu^{a}=\nu_{e}Q^{edc}\nu_{d}g^{a\hat{b}}E_{\hat{b}}\underline{\nabla}_{c}g_{a\hat{a}},$$
we can write (3.116) as
$$\displaystyle\nu_{e}Q^{edc}\nu_{d}\underline{\nabla}_{c}E_{\hat{a}}=\nu_{e}Q^{edc}\nu_{d}\underline{\nabla}_{c}(-\mathcal{E}^{a}g_{a\hat{a}}-g^{\hat{c}\hat{b}}\nu_{\hat{c}}E_{\hat{b}}\nu^{a}g_{a\hat{a}})$$
$$\displaystyle=-\nu_{e}Q^{edc}\nu_{d}g_{a\hat{a}}\underline{\nabla}_{c}\mathcal{E}^{a}-\nu_{e}Q^{edc}\nu_{d}\nu_{\hat{c}}\nu^{a}g^{\hat{c}\hat{b}}g_{a\hat{a}}\underline{\nabla}_{c}E_{\hat{b}}-\nu_{e}Q^{edc}\nu_{d}\nu_{\hat{c}}\nu^{a}E_{\hat{b}}g_{a\hat{a}}\underline{\nabla}_{c}g^{\hat{c}\hat{b}}+\nu_{e}Q^{edc}\nu_{d}g^{a\hat{b}}E_{\hat{b}}\underline{\nabla}_{c}g_{a\hat{a}}.$$
Noting that $\nu_{e}Q^{edc}\nu_{d}=-2\xi^{c}+\lambda\nu^{c}$, it then follows that
$$\displaystyle(2\xi^{c}-\lambda\nu^{c})g_{a\hat{a}}\underline{\nabla}_{c}\mathcal{E}^{a}+\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{a}g_{a\hat{a}}+\tensor{\underline{h}}{{}^{c}_{\hat{a}}}\bigr{]}\nu_{e}g^{e\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}-h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{Ht}\bigl{[}-(n-3)\tensor{\underline{h}}{{}^{\hat{d}}_{\hat{a}}}\lambda E_{\hat{d}}-(n-4)\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{a}}}\breve{\mathtt{B}}tp^{a}H_{a\hat{b}}\bigr{]}+\frac{1}{\sqrt{t}}\Xi_{1\hat{a}}+\widehat{\Delta}_{1\hat{a}}$$
$$\displaystyle+(-2\xi^{c}+\lambda\nu^{c})\left(\nu_{\hat{c}}\nu^{a}g_{a\hat{a}}\underline{\nabla}_{c}g^{\hat{c}\hat{b}}E_{\hat{b}}-g^{a\hat{b}}\underline{\nabla}_{c}g_{a\hat{a}}E_{\hat{b}}\right).$$
(3.117)
Applying the projection operator $\tensor{\underline{h}}{{}^{\hat{a}}_{e}}$ to (3.117), we find, after using (3.94) to replace the $E_{\hat{d}}$ in the first singular term of the right hand side of (3.117) and noting that $\underline{\nabla}_{c}g_{ab}=-g_{a\hat{a}}g_{b\hat{b}}\underline{\nabla}_{c}g^{\hat{a}\hat{b}}$, $\xi^{a}=\breve{\mathtt{B}}tp^{a}$ (recall (1.32)) and $(n-3)\lambda g^{\hat{e}\hat{d}}\nu_{\hat{e}}\nu^{a}g_{ae}E_{\hat{d}}=(n-3)\lambda\xi^{\hat{d}}\nu^{a}g_{ae}E_{\hat{d}}=(n-3)\breve{\mathtt{B}}t\lambda p^{\hat{d}}\nu^{a}g_{ae}E_{\hat{d}}$, that
$$\displaystyle(2\xi^{c}-\lambda\nu^{c})\tensor{\underline{h}}{{}^{\hat{a}}_{e}}g_{b\hat{a}}\tensor{\underline{h}}{{}^{b}_{a}}\underline{\nabla}_{c}\mathcal{E}^{a}+\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{a}g_{a\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\xi^{\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}-h^{\hat{a}c}\underline{\nabla}_{c}H_{\hat{a}e}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-3}{Ht}\lambda g_{ae}\mathcal{E}^{a}+\frac{1}{\sqrt{t}}\Xi_{1e}-\mathfrak{D}^{\sharp}_{1e}(t,\mathbf{U})$$
(3.118)
where
$$\displaystyle\mathfrak{D}^{\sharp}_{1e}(t,\mathbf{U})=$$
$$\displaystyle-\widehat{\Delta}_{1e}-\tensor{\underline{h}}{{}^{\hat{a}}_{e}}(-2\xi^{c}+\lambda\nu^{c})\left(\nu_{\hat{c}}\nu^{a}g_{a\hat{a}}\underline{\nabla}_{c}g^{\hat{c}\hat{d}}E_{\hat{d}}+E_{\hat{d}}g_{\hat{a}b}\underline{\nabla}_{c}g^{\hat{d}b}\right)$$
$$\displaystyle+\frac{n-4}{H}\breve{\mathtt{B}}p^{a}H_{ae}-\frac{(n-3)}{H}\breve{\mathtt{B}}\lambda p^{\hat{d}}\nu^{a}g_{ae}E_{\hat{d}}.$$
Applying the projection $\tensor{\underline{h}}{{}^{e}_{\hat{b}}}$ to (3.117), we arrive at
$$\displaystyle(2\xi^{c}-\lambda\nu^{c})\tensor{\underline{h}}{{}^{\hat{a}}_{\hat{b}}}g_{b\hat{a}}\tensor{\underline{h}}{{}^{b}_{a}}\underline{\nabla}_{c}\mathcal{E}^{a}+\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{a}g_{a\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{\hat{b}}}+\tensor{\underline{h}}{{}^{c}_{\hat{b}}}\bigr{]}\xi^{\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}-h^{\hat{a}c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{n-3}{Ht}\lambda\tensor{\underline{h}}{{}^{\hat{d}}_{\hat{b}}}g_{\hat{a}\hat{d}}\tensor{\underline{h}}{{}^{\hat{a}}_{a}}\mathcal{E}^{a}+\frac{1}{\sqrt{t}}\Xi_{1\hat{b}}-\mathfrak{D}^{\sharp}_{1\hat{b}}(t,\mathbf{U}),$$
(3.119)
which determines the first component of (3.115) and can be viewed as an evolution equation for the Yang–Mills field $\mathcal{E}^{a}$.
Next, we derive an evolution equation for $E_{b}$ that will comprise the second component of (3.115). Although $E_{b}$ and $\mathcal{E}^{a}$ are not independent since they are related by (3.94), we treat them as independent fields in order to obtain a symmetric hyperbolic equation. The evolution equation for $E_{b}$ is derived from
$\tensor{\underline{h}}{{}^{c}_{a}}\nu_{e}g^{eb}\underline{\nabla}_{c}\mathcal{E}^{a}-\tensor{\underline{h}}{{}^{c}_{a}}\nu_{e}g^{eb}\underline{\nabla}_{c}\mathcal{E}^{a}-g^{b\hat{a}}\times\text{(3.116)}$, which, after using (3.94), reads
$$\displaystyle\tensor{\underline{h}}{{}^{c}_{a}}\nu_{e}g^{eb}\underline{\nabla}_{c}\mathcal{E}^{a}-\tensor{\underline{h}}{{}^{c}_{a}}\nu_{e}g^{eb}\underline{\nabla}_{c}\left(-g^{\hat{e}\hat{b}}\tensor{\underline{h}}{{}^{a}_{\hat{e}}}E_{\hat{b}}\right)-g^{b\hat{a}}\nu_{e}Q^{edc}\nu_{d}\underline{\nabla}_{c}E_{\hat{a}}-g^{b\hat{a}}\tensor{\underline{h}}{{}^{c}_{\hat{a}}}\nu_{e}g^{e\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}$$
$$\displaystyle\hskip 14.22636pt-g^{b\hat{a}}\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{\hat{b}}_{d}}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}=\frac{1}{Ht}\bigl{[}(n-3)\lambda g^{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{d}}_{\hat{a}}}E_{\hat{d}}+(n-4)g^{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{a}}}\xi^{a}H_{a\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}g^{b\hat{a}}\Xi_{1\hat{a}}-g^{b\hat{a}}\widehat{\Delta}_{1\hat{a}}.$$
A straightforward calculation then shows that the above equation is equivalent to
$$\displaystyle\tensor{\underline{h}}{{}^{c}_{a}}\nu_{e}g^{eb}\underline{\nabla}_{c}\mathcal{E}^{a}+(\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{eb}g^{d\hat{a}}-g^{b\hat{a}}\nu_{e}Q^{edc}\nu_{d}-g^{bd}\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{e\hat{a}})\underline{\nabla}_{c}E_{\hat{a}}+g^{b\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{Ht}\bigl{[}(n-3)\lambda g^{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{d}}_{\hat{a}}}E_{\hat{d}}+(n-4)g^{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{a}}}\xi^{a}H_{a\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}g^{b\hat{a}}\Xi_{1\hat{a}}-g^{b\hat{a}}\widehat{\Delta}_{1\hat{a}}-\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{eb}E_{a}\underline{\nabla}_{c}g^{da}.$$
(3.120)
From Lemma A.1.4 (i.e., (A.8)) and the identity
$$\displaystyle\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{eb}g^{d\hat{a}}=$$
$$\displaystyle\nu_{e}g^{e\hat{b}}(\tensor{\underline{h}}{{}^{b}_{\hat{b}}}-\nu^{b}\nu_{\hat{b}})g^{da}\tensor{\underline{h}}{{}^{c}_{d}}(\tensor{\underline{h}}{{}^{\hat{a}}_{a}}-\nu_{a}\nu^{\hat{a}})=(\xi^{b}-\lambda\nu^{b})(h^{c\hat{a}}-\xi^{c}\nu^{\hat{a}}),$$
we observe that
$$\displaystyle\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{eb}g^{d\hat{a}}-g^{b\hat{a}}\nu_{e}Q^{edc}\nu_{d}-g^{bd}\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{e\hat{a}}=\xi^{b}h^{c\hat{a}}-\xi^{\hat{a}}h^{cb}+2\xi^{c}h^{b\hat{a}}-3\nu^{\hat{a}}\xi^{b}\xi^{c}-\lambda\nu^{b}h^{c\hat{a}}$$
$$\displaystyle\hskip 28.45274pt+\lambda\nu^{\hat{a}}h^{cb}-\nu^{b}\xi^{\hat{a}}\xi^{c}-\lambda\nu^{c}h^{b\hat{a}}+\lambda\nu^{c}\nu^{\hat{a}}\xi^{b}+\lambda\nu^{c}\nu^{b}\xi^{\hat{a}}-\lambda^{2}\nu^{c}\nu^{b}\nu^{\hat{a}}+2\lambda\xi^{c}\nu^{\hat{a}}\nu^{b}.$$
Substituting this into (3.120) yields
$$\displaystyle\tensor{\underline{h}}{{}^{c}_{a}}\nu_{e}g^{eb}\underline{\nabla}_{c}\mathcal{E}^{a}+(\xi^{b}h^{c\hat{d}}-\xi^{\hat{d}}h^{cb}+2\xi^{c}h^{b\hat{d}}-\lambda\nu^{b}h^{c\hat{d}}-\nu^{b}\xi^{\hat{d}}\xi^{c}-\lambda\nu^{c}h^{b\hat{d}}+\lambda\nu^{c}\nu^{b}\xi^{\hat{d}})\underline{\nabla}_{c}E_{\hat{d}}+g^{b\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}$$
$$\displaystyle=\frac{1}{Ht}\bigl{[}(n-3)\lambda g^{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{d}}_{\hat{a}}}E_{\hat{d}}+(n-4)g^{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{a}}}\xi^{a}H_{a\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}g^{b\hat{a}}\Xi_{1\hat{a}}-g^{b\hat{a}}\widehat{\Delta}_{1\hat{a}}-\tensor{\underline{h}}{{}^{c}_{d}}\nu_{e}g^{eb}E_{\hat{a}}\underline{\nabla}_{c}g^{d\hat{a}},$$
where in deriving this we also used $\nu^{\hat{a}}E_{\hat{a}}=0$, which holds by (1.42).
Applying the projection $\tensor{\underline{h}}{{}^{\hat{c}}_{b}}$ to this equation gives
$$\displaystyle\xi^{\hat{c}}\tensor{\underline{h}}{{}^{c}_{a}}\underline{\nabla}_{c}\mathcal{E}^{a}+(\xi^{\hat{c}}h^{c\hat{d}}-\xi^{\hat{d}}h^{c\hat{c}}+2\xi^{c}h^{\hat{c}\hat{d}}-\lambda\nu^{c}h^{\hat{c}\hat{d}})\underline{\nabla}_{c}E_{\hat{d}}+h^{\hat{c}\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{Ht}\bigl{[}(n-3)\lambda h^{\hat{c}\hat{d}}E_{\hat{d}}+(n-4)h^{\hat{c}\hat{b}}\xi^{a}H_{a\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}\tensor{h}{{}^{\hat{c}\hat{a}}}\Xi_{1\hat{a}}-\tensor{h}{{}^{\hat{c}\hat{a}}}\widehat{\Delta}_{1\hat{a}}-\tensor{\underline{h}}{{}^{c}_{d}}\xi^{\hat{c}}E_{\hat{a}}\underline{\nabla}_{c}g^{d\hat{a}}.$$
Reformulating this equation as
$$\displaystyle\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{a}}+\tensor{\underline{h}}{{}^{c}_{a}}\bigr{]}\xi^{\hat{c}}\underline{\nabla}_{c}\mathcal{E}^{a}-(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{a}}\xi^{\hat{c}}\underline{\nabla}_{c}\mathcal{E}^{a}$$
$$\displaystyle\qquad+(\xi^{\hat{c}}h^{c\hat{d}}-\xi^{\hat{d}}h^{c\hat{c}}+2\xi^{c}h^{\hat{c}\hat{d}}-\lambda\nu^{c}h^{\hat{c}\hat{d}})\underline{\nabla}_{c}E_{\hat{d}}-h^{\hat{c}\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{Ht}\bigl{[}(n-3)\lambda h^{\hat{c}\hat{d}}E_{\hat{d}}+(n-4)h^{\hat{c}\hat{b}}\xi^{a}H_{a\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}\tensor{h}{{}^{\hat{c}\hat{a}}}\Xi_{1\hat{a}}-\tensor{h}{{}^{\hat{c}\hat{a}}}\widehat{\Delta}_{1\hat{a}}-\tensor{\underline{h}}{{}^{c}_{d}}\xi^{\hat{c}}E_{\hat{a}}\underline{\nabla}_{c}g^{d\hat{a}},$$
where we note that $h^{\hat{c}\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}$ has been rewritten as $-h^{\hat{c}\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}$. We then use (3.94) to replace $\mathcal{E}^{a}$ with $E_{b}$ in the term $-(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{a}}\xi^{\hat{c}}\underline{\nabla}_{c}\mathcal{E}^{a}$ from the above equation to get
$$\displaystyle\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\xi^{d}\underline{\nabla}_{c}\mathcal{E}^{e}$$
$$\displaystyle+\bigl{[}(2\xi^{c}-\lambda\nu^{c})\xi^{d}\nu^{b}g_{bs}h^{s\hat{d}}+(\xi^{d}h^{c\hat{d}}-\xi^{\hat{d}}h^{cd}+2\xi^{c}h^{d\hat{d}}-\lambda\nu^{c}h^{d\hat{d}})\bigr{]}\underline{\nabla}_{c}E_{\hat{d}}-h^{d\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{Ht}\bigl{[}(n-3)\lambda h^{d\hat{d}}E_{\hat{d}}+(n-4)h^{d\hat{b}}\xi^{\hat{a}}H_{\hat{a}\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}\tensor{h}{{}^{d\hat{a}}}\Xi_{1\hat{a}}-\mathfrak{D}_{2}^{\dagger d}(t,\mathbf{U})$$
where
$$\displaystyle\mathfrak{D}_{2}^{\dagger d}(t,\mathbf{U})=\tensor{h}{{}^{d\hat{a}}}\widehat{\Delta}_{1\hat{a}}+(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\xi^{d}E_{\hat{b}}\tensor{\underline{h}}{{}^{\hat{a}}_{e}}\underline{\nabla}_{c}g^{e\hat{b}}+\xi^{d}E_{\hat{a}}\tensor{\underline{h}}{{}^{c}_{\hat{d}}}\underline{\nabla}_{c}g^{\hat{d}\hat{a}}.$$
Using
$$\displaystyle\nu^{b}g_{bs}h^{s\hat{d}}=\nu^{b}g_{bs}(g^{s\hat{d}}-\lambda\nu^{s}\nu^{\hat{d}}+\xi^{s}\nu^{\hat{d}}+\xi^{\hat{d}}\nu^{s})=\nu^{\hat{d}}(1-\lambda\nu^{b}\nu^{s}g_{bs}+\nu^{b}\xi^{s}g_{bs})+\nu^{b}\nu^{s}g_{bs}\xi^{\hat{d}}$$
and $\nu^{\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}=0$ then allows us to express the above equation as
$$\displaystyle\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\xi^{d}\underline{\nabla}_{c}\mathcal{E}^{e}$$
$$\displaystyle+\bigl{[}(2\xi^{c}-\lambda\nu^{c})\xi^{d}\xi^{\hat{d}}\nu^{b}\nu^{s}g_{bs}+\xi^{d}h^{c\hat{d}}-\xi^{\hat{d}}h^{cd}+2\xi^{c}h^{d\hat{d}}-\lambda\nu^{c}h^{d\hat{d}}\bigr{]}\underline{\nabla}_{c}E_{\hat{d}}-h^{d\hat{a}}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{Ht}\bigl{[}(n-3)\lambda h^{d\hat{d}}E_{\hat{d}}+(n-4)h^{d\hat{b}}\xi^{\hat{a}}H_{\hat{a}\hat{b}}\bigr{]}-\frac{1}{\sqrt{t}}\tensor{h}{{}^{d\hat{a}}}\Xi_{1\hat{a}}-\mathfrak{D}_{2}^{\dagger d}(t,\mathbf{U}).$$
(3.121)
Next, we observe that the coefficients of $\underline{\nabla}_{c}E_{\hat{d}}$ in (3.121) can be expressed as
$$\displaystyle(2\xi^{c}-\lambda\nu^{c})\xi^{d}\xi^{\hat{d}}\nu^{b}\nu^{s}g_{bs}+\xi^{d}h^{c\hat{d}}-\xi^{\hat{d}}h^{cd}+2\xi^{c}h^{d\hat{d}}-\lambda\nu^{c}h^{d\hat{d}}$$
$$\displaystyle={}$$
$$\displaystyle\bigl{[}(2\xi^{c}-\lambda\nu^{c})\xi^{\hat{d}}\xi^{d}\nu^{b}\nu^{s}g_{bs}-2\xi^{(d}h^{\hat{d})c}+2\xi^{c}h^{\hat{d}d}-\lambda\nu^{c}h^{\hat{d}d}\bigr{]}+2\xi^{d}h^{c\hat{d}},$$
where the first term in the bracket is symmetric in the indices $d$ and $\hat{d}$ while the remaining term $2\xi^{d}h^{c\hat{d}}$ is non-symmetric in $d$ and $\hat{d}$.
The non-symmetric term needs to be addressed in order to obtain a symmetric hyperbolic equation. To handle it, we use (A.8) to write $2\xi^{d}h^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}$ as
$$\displaystyle 2\xi^{d}h^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}=2\xi^{d}(g^{c\hat{d}}-\lambda\nu^{c}\nu^{\hat{d}}+\xi^{c}\nu^{\hat{d}}+\xi^{\hat{d}}\nu^{c})\underline{\nabla}_{c}E_{\hat{d}}=2\xi^{d}g^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\nu^{c}\underline{\nabla}_{c}E_{\hat{d}},$$
where we note that the coefficient $2\xi^{d}\xi^{\hat{d}}\nu^{c}$ appearing on the right hand side is symmetric in $d$ and $\hat{d}$.
This leaves us to consider the term $2\xi^{d}g^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}$. Making use of (3.93) to express $g^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}$ as
$$g^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}=\frac{\breve{\mathtt{B}}(n-3)t}{\tan(Ht)}p^{b}E_{b}-\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}h^{ab}[\bar{A}_{b},E_{a}]+(X^{d}E_{d}+\nu^{c}g^{ba}\tensor{X}{{}^{d}_{bc}}F_{ad}),$$
we see that $2\xi^{d}h^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}$ is given by
$$\displaystyle 2\xi^{d}h^{c\hat{d}}\underline{\nabla}_{c}E_{\hat{d}}={}$$
$$\displaystyle 2\xi^{d}\frac{\breve{\mathtt{B}}(n-3)t}{\tan(Ht)}p^{b}E_{b}-2\xi^{d}\frac{\sqrt{H}}{\sqrt{\sin(Ht)}}h^{ab}[\bar{A}_{b},E_{a}]$$
$$\displaystyle+2\xi^{d}(X^{b}E_{b}+\nu^{b}g^{ra}\tensor{X}{{}^{s}_{rb}}F_{as})+2\xi^{d}\xi^{\hat{d}}\nu^{c}\underline{\nabla}_{c}E_{\hat{d}},$$
where we note now that the principal term $2\xi^{d}\xi^{\hat{d}}\nu^{c}\underline{\nabla}_{c}E_{\hat{d}}$ is symmetric in $d$ and $\hat{d}$. With the help of this expression and the above arguments, it is clear that we can write (3.121) as
$$\displaystyle\bigl{[}(2\xi^{c}-\lambda\nu^{c})\nu^{b}g_{b\hat{a}}\tensor{\underline{h}}{{}^{\hat{a}}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\xi^{d}\underline{\nabla}_{c}\mathcal{E}^{e}$$
$$\displaystyle+\bigl{[}(2\xi^{c}-\lambda\nu^{c})\xi^{\hat{d}}\xi^{d}\nu^{b}\nu^{s}g_{bs}+(2\xi^{d}\xi^{\hat{d}}\nu^{c}-\xi^{d}h^{c\hat{d}}-\xi^{\hat{d}}h^{cd}+2\xi^{c}h^{\hat{d}d}-\lambda\nu^{c}h^{\hat{d}d})\bigr{]}\underline{\nabla}_{c}E_{\hat{d}}$$
$$\displaystyle-h^{\hat{a}d}h^{\hat{b}c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}=\frac{n-3}{Ht}\lambda h^{\hat{d}d}E_{\hat{d}}-\frac{1}{\sqrt{t}}\tensor{h}{{}^{d\hat{a}}}\Xi_{1\hat{a}}-\mathfrak{D}_{2}^{\sharp d}(t,\mathbf{U}),$$
(3.122)
where
$$\displaystyle\mathfrak{D}^{\sharp d}_{2}(t,\mathbf{U})={}$$
$$\displaystyle\mathfrak{D}^{\dagger d}_{2}(t,\mathbf{U})+2\breve{\mathtt{B}}tp^{d}(X^{b}E_{b}+\nu^{b}g^{ra}\tensor{X}{{}^{s}_{rb}}F_{as})+2\breve{\mathtt{B}}tp^{d}\frac{\breve{\mathtt{B}}(n-3)t}{\tan(Ht)}p^{b}E_{b}$$
$$\displaystyle-2\breve{\mathtt{B}}p^{d}\frac{\sqrt{H}t}{\sqrt{\sin(Ht)}}h^{ab}[\bar{A}_{b},E_{a}]-\frac{n-4}{H}h^{\hat{b}d}\breve{\mathtt{B}}p^{\hat{a}}H_{\hat{a}\hat{b}},$$
which is the second component from (3.115).
Turning now to the derivation of the third equation from (3.115), we
have from the second line of (3.66) that
$$\displaystyle\tensor{\underline{h}}{{}^{h}_{e}}Q^{edc}\nu_{d}\underline{\nabla}_{c}E_{\hat{a}}+\tensor{\underline{h}}{{}^{c}_{\hat{a}}}\tensor{h}{{}^{ha}}\underline{\nabla}_{c}E_{a}+\tensor{\underline{h}}{{}^{h}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{\hat{b}}_{d}}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}=\frac{1}{Ht}\tensor{\underline{h}}{{}^{\hat{b}}_{\hat{a}}}h^{hb}H_{b\hat{b}}+\frac{1}{\sqrt{t}}\Xi^{h}_{2\hat{a}}+\widehat{\Delta}^{h}_{2\hat{a}}.$$
(3.123)
Multiplying (3.123) by $\tensor{h}{{}^{\hat{c}\hat{a}}}$, we find, with the help of the identities
$$\tensor{\underline{h}}{{}^{\hat{d}}_{e}}Q^{edc}\nu_{d}=-h^{\hat{d}c}\quad\text{and}\quad\tensor{\underline{h}}{{}^{\hat{d}}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{\hat{b}}_{d}}=-\nu^{c}h^{\hat{b}\hat{d}}$$
from Lemma A.1.4, that
$$\displaystyle-\tensor{h}{{}^{\hat{c}\hat{a}}}h^{\hat{d}c}\underline{\nabla}_{c}E_{\hat{a}}+h^{\hat{c}c}h^{\hat{d}\hat{a}}\underline{\nabla}_{c}E_{\hat{a}}-\tensor{h}{{}^{\hat{c}\hat{a}}}h^{\hat{b}\hat{d}}\nu^{c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}=\frac{1}{Ht}h^{\hat{c}\hat{b}}h^{\hat{d}d}H_{d\hat{b}}+\frac{1}{\sqrt{t}}\tensor{h}{{}^{\hat{c}\hat{a}}}\Xi^{\hat{d}}_{2\hat{a}}+\tensor{h}{{}^{\hat{c}\hat{a}}}\widehat{\Delta}^{\hat{d}}_{2\hat{a}}.$$
But by (3.94), we have
$$h^{\hat{c}c}h^{\hat{d}\hat{a}}\underline{\nabla}_{c}E_{\hat{a}}=-h^{\hat{c}c}\underline{\nabla}_{c}\mathcal{E}^{\hat{d}}-h^{\hat{c}c}E_{\hat{a}}\underline{\nabla}_{c}h^{\hat{d}\hat{a}},$$
and so, we conclude that
$$\displaystyle-h^{\hat{c}\hat{a}}h^{\hat{d}c}\underline{\nabla}_{c}E_{\hat{a}}-h^{\hat{c}c}\underline{\nabla}_{c}\mathcal{E}^{\hat{d}}-$$
$$\displaystyle\tensor{h}{{}^{\hat{c}\hat{a}}}h^{\hat{b}\hat{d}}\nu^{c}\underline{\nabla}_{c}H_{\hat{b}\hat{a}}=\frac{1}{Ht}h^{\hat{c}\hat{b}}h^{\hat{d}\hat{a}}H_{\hat{a}\hat{b}}+\frac{1}{\sqrt{t}}\tensor{h}{{}^{\hat{c}\hat{a}}}\Xi^{\hat{d}}_{2\hat{a}}-\mathfrak{D}_{3}^{\sharp\hat{c}\hat{d}}(t,\mathbf{U}),$$
where
$$\mathfrak{D}_{3}^{\sharp\hat{c}\hat{d}}(t,\mathbf{U})=-\tensor{h}{{}^{\hat{c}\hat{a}}}\widehat{\Delta}_{2\hat{a}}^{\hat{d}}-h^{\hat{c}c}E_{\hat{a}}\underline{\nabla}_{c}h^{\hat{d}\hat{a}}.$$
Noting that $h^{a\hat{b}}h^{b\hat{a}}H_{\hat{a}\hat{b}}=-h^{a\hat{a}}h^{b\hat{b}}H_{\hat{a}\hat{b}}$, we see that the above equation is equivalent to the third component from (3.115), which is given by
$$-h^{ac}\underline{\nabla}_{c}\mathcal{E}^{b}-h^{a\hat{d}}h^{bc}\underline{\nabla}_{c}E_{\hat{d}}+h^{a\hat{a}}h^{\hat{b}b}\nu^{c}\underline{\nabla}_{c}H_{\hat{a}\hat{b}}=-\frac{1}{Ht}h^{a\hat{a}}h^{\hat{b}b}H_{\hat{a}\hat{b}}+\frac{1}{\sqrt{t}}\tensor{h}{{}^{a\hat{a}}}\Xi^{b}_{2\hat{a}}-\mathfrak{D}_{3}^{\sharp ab}(t,\mathbf{U}).$$
(3.124)
The proof then follows from collecting (3.119), (3.122) and (3.124) together with the equation (3.92) for the gauge potential.
∎
The last step needed to bring the Yang–Mills equations into a form that will be favourable for our analysis involves multiplying each line of (3.115) by $\underline{h}^{\hat{b}\hat{e}}$, $\underline{h}_{df}$, $\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}$ and $\underline{h}_{or}$, respectively. The resulting first order, symmetric hyperbolic Fuchsian equation is displayed in the theorem below.
Theorem 3.2.
If $(E_{a},A_{b})$ solves the conformal Yang–Mills equations (3.42)–(3.43) in the temporal gauge (3.44), then the quadruple $(\mathcal{E}^{e},E_{d},\,H_{pq},\,\bar{A}_{s})$ defined via (1.42), (1.43) and (1.44)
solves the first order, symmetric hyperbolic Fuchsian equation
$$\displaystyle-\acute{\mathbf{A}}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}\mathcal{E}^{e}\\
E_{\hat{d}}\\
H_{\hat{a}\hat{b}}\\
\bar{A}_{s}\end{pmatrix}+\acute{\mathbf{A}}^{f}\tensor{\underline{h}}{{}^{c}_{f}}\underline{\nabla}_{c}\begin{pmatrix}\mathcal{E}^{e}\\
E_{\hat{d}}\\
H_{\hat{a}\hat{b}}\\
\bar{A}_{s}\end{pmatrix}=\frac{1}{Ht}\acute{\mathcal{B}}\begin{pmatrix}\mathcal{E}^{e}\\
E_{\hat{d}}\\
H_{\hat{a}\hat{b}}\\
\bar{A}_{s}\end{pmatrix}+\acute{G}(t,\mathbf{U}),$$
(3.137)
where
$$\displaystyle\acute{\mathbf{A}}^{0}={}$$
$$\displaystyle\begin{pmatrix}-\lambda\underline{h}^{\hat{b}\hat{e}}\tensor{\underline{h}}{{}^{a}_{\hat{b}}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}&-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s\hat{e}}}\xi^{\hat{d}}&0&0\\
-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\underline{h}_{df}\xi^{d}&\bigl{[}-\lambda h^{\hat{d}d}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{d}\xi^{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\bigr{]}\underline{h}_{df}&0&0\\
0&0&\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}h^{\hat{a}a}h^{\hat{b}b}&0\\
0&0&0&\underline{h}_{or}h^{rs}\end{pmatrix},$$
$$\displaystyle\acute{\mathbf{A}}^{f}\tensor{\underline{h}}{{}^{c}_{f}}={}$$
$$\displaystyle\begin{pmatrix}-2\xi^{c}\tensor{\underline{h}}{{}^{a}_{s}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}\underline{h}^{s\hat{e}}&-\bigl{[}2\xi^{c}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s\hat{e}}}+\tensor{\underline{h}}{{}^{c\hat{e}}}\bigr{]}\xi^{\hat{d}}&\underline{h}^{\hat{e}\hat{b}}h^{\hat{a}c}&0\\
-\bigl{[}2\xi^{c}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\underline{h}_{df}\xi^{d}&-\bigl{[}2\nu^{r}\nu^{s}g_{rs}\xi^{c}\xi^{\hat{d}}\xi^{d}-2\xi^{(d}h^{\hat{d})c}+2\xi^{c}h^{\hat{d}d}\bigr{]}\underline{h}_{df}&\underline{h}_{df}h^{\hat{a}d}h^{\hat{b}c}&0\\
\underline{h}_{e\bar{b}}\underline{h}_{a\bar{a}}h^{ac}&\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}h^{a\hat{d}}h^{bc}&0&0\\
0&0&0&0\end{pmatrix},$$
$$\displaystyle\acute{\mathcal{B}}={}$$
$$\displaystyle\begin{pmatrix}-(n-3)\lambda\underline{h}^{\hat{b}\hat{e}}\tensor{\underline{h}}{{}^{a}_{\hat{b}}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}&0&0&0\\
0&-(n-3)\lambda h^{\hat{d}d}\underline{h}_{df}&0&0\\
0&0&\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}h^{\hat{a}a}h^{\hat{b}b}&0\\
0&0&0&\frac{1}{2}\underline{h}_{or}h^{rs}\end{pmatrix},$$
$$\displaystyle\acute{G}(t,\mathbf{U})$$
$$\displaystyle=\acute{G}_{0}(t,\mathbf{U})+\frac{1}{\sqrt{t}}\acute{G}_{1}(t,\mathbf{U}),$$
$$\displaystyle\acute{G}_{0}(t,\mathbf{U})$$
$$\displaystyle=\left(\mathfrak{D}^{\hat{e}}_{1}(t,\mathbf{U}),\mathfrak{D}_{2f}(t,\mathbf{U}),\mathfrak{D}_{3\bar{a}\bar{b}}(t,\mathbf{U}),\mathfrak{D}_{4o}(t,\mathbf{U})\right)^{\text{tr}},$$
$$\displaystyle\acute{G}_{1}(t,\mathbf{U})$$
$$\displaystyle=\left(-\underline{h}^{\hat{e}e}\Xi_{1e},\underline{h}_{df}h^{d\hat{a}}\Xi_{1\hat{a}},-\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}h^{a\hat{a}}\Xi_{2\hat{a}}^{b},\underline{h}_{or}h^{ra}\Xi_{3a}\right)^{\text{tr}},$$
the maps $\Xi_{1e}$, $\Xi^{b}_{2\hat{a}}$, $\Xi_{3a}$ are as defined in Lemma 3.2.1, and the
maps $\mathfrak{D}_{1}^{\hat{e}}$, $\mathfrak{D}_{2f}$, $\mathfrak{D}_{3\bar{a}\bar{b}}$ and $\mathfrak{D}_{4o}$ are defined by
$$\displaystyle\mathfrak{D}_{1}^{\hat{e}}(t,\mathbf{U})={}$$
$$\displaystyle\underline{h}^{\hat{e}\hat{b}}\mathfrak{D}^{\sharp}_{1\hat{b}}(t,\mathbf{U}),\quad\mathfrak{D}_{2f}(t,\mathbf{U})=\underline{h}_{df}\mathfrak{D}_{2}^{\sharp d}(t,\mathbf{U}),$$
$$\displaystyle\mathfrak{D}_{3\bar{a}\bar{b}}(t,\mathbf{U})={}$$
$$\displaystyle\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}\mathfrak{D}_{3}^{\sharp ab}(t,\mathbf{U}),{\quad\text{and}\quad}\mathfrak{D}_{4o}(t,\mathbf{U})=\underline{h}_{or}\mathfrak{D}_{4}^{\sharp r}(t,\mathbf{U}),$$
respectively. Moreover, there
exist constants $\iota>0$ and $R>0$ such that the
maps $\mathfrak{D}_{1}^{\hat{e}}$, $\mathfrak{D}_{2f}$, $\mathfrak{D}_{3\bar{a}\bar{b}}$ and $\mathfrak{D}_{4o}$ are analytic for $(t,\mathbf{U})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$ and vanish for $\mathbf{U}=0$.
4. Symmetric hyperbolic Fuchsian equations
In the previous section, we derived, in Theorems 3.1 and 3.2, a Fuchsian formulation of the gauge reduced conformal Einstein–Yang–Mills equations. In this section, we quickly review the global existence theory for symmetric hyperbolic Fuchsian equations developed in [4], which is an extension of the Fuchsian existence theory from [21]; see also [3] for related results. This theory relies on the Fuchsian system satisfying a number of structural conditions, which we recall for the convenience of the reader. A tailored version of the main Fuchsian existence theorem from [4] is stated below in Theorem 4.1. In the next section, we apply this theorem to our Fuchsian formulation of the gauge reduced conformal Einstein–Yang–Mills equations. The purpose of doing so is to obtain uniform bounds and decay estimates as $t\searrow 0$ for solutions to the conformal Einstein–Yang–Mills equations, where we recall that, in the conformal picture, $t=0$ corresponds to future timelike infinity.
Rather than considering the general class of Fuchsian systems analyzed in [4], we instead consider a restricted class that encompasses our Fuchsian formulation of the Einstein–Yang–Mills system. This will allow us to simplify the presentation and application of the existence theory from [4] in our setting.
The class of Fuchsian equations and the corresponding initial value problem that we consider in this section are of the form
$$\displaystyle-\mathbf{A}^{0}(u)\nu^{c}\underline{\nabla}_{c}u+\mathbf{A}^{c}(u)\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}u={}$$
$$\displaystyle\frac{1}{t}\mathfrak{A}(u)\mathbb{P}u+G(u),\quad$$
$$\displaystyle\text{in }[T_{0},0)\times\Sigma,$$
(4.1)
$$\displaystyle u={}$$
$$\displaystyle u_{0},\quad$$
$$\displaystyle\text{in }\{T_{0}\}\times\Sigma,$$
(4.2)
where $T_{0}<0$ and $\mathbf{A}^{0}=\mathbf{A}^{c}\nu_{c}$. Here, $\Sigma$ is an $(n-1)$-dimensional closed Riemannian manifold with time-independent metric $\underline{h}_{ab}$ (in this section only, we do not assume that $\Sigma$ is $\mathbb{S}^{n-1}$, although, in light of our application to the Einstein–Yang–Mills equations, it is fine to assume this), $t$ is a Cartesian coordinate on the interval $[T_{0},0)\subset\mathbb{R}$, and $\underline{\nabla}$ is the Levi-Civita connection of the Lorentzian metric (note that we are now interpreting $\underline{h}_{ab}$ as a spacetime tensor field on $\mathcal{M}$ in the obvious manner)
$$\underline{g}_{ab}=-(dt)_{a}(dt)_{b}+\underline{h}_{ab}$$
on the spacetime manifold
$$\mathcal{M}=[T_{0},0)\times\Sigma,$$
$\nu_{c}=(dt)_{c}$ is the unit co-normal to $\Sigma$ and $\nu^{c}=\underline{g}^{cd}\nu_{d}=-(\partial/\partial t)^{c}$, $\tensor{\underline{h}}{{}^{b}_{c}}=\tensor{\delta}{{}^{b}_{c}}+\nu^{b}\nu_{c}=\tensor{\delta}{{}^{b}_{c}}-(\partial/\partial t)^{b}(dt)_{c}$ is the projection onto the $\underline{g}$-orthogonal subspace to $\nu^{c}$, and
$$u=(u_{(1)},\ldots,u_{(\ell)})$$
is a section of the vector bundle
$$\textbf{V}=\bigoplus^{\ell}_{k=1}h(T^{m_{k}}_{n_{k}}\mathcal{M})$$
over $\mathcal{M}$,
where we are using $h(T^{r}_{s}\mathcal{M})$ to denote the projection of the tensor bundle $T^{r}_{s}\mathcal{M}$ by $\tensor{\underline{h}}{{}^{b}_{c}}$, i.e.
$\tensor{S}{{}^{a_{1}\cdots a_{r}}_{b_{1}\ldots b_{s}}}\in h(T^{r}_{s}\mathcal{M})$ if and only if
$\tensor{\underline{h}}{{}^{a_{1}}_{c_{1}}}\cdots\tensor{\underline{h}}{{}^{a_{r}}_{c_{r}}}\tensor{\underline{h}}{{}^{d_{1}}_{b_{1}}}\cdots\tensor{\underline{h}}{{}^{d_{s}}_{b_{s}}}\tensor{S}{{}^{c_{1}\cdots c_{r}}_{d_{1}\ldots d_{s}}}=\tensor{S}{{}^{a_{1}\ldots a_{r}}_{b_{1}\ldots b_{s}}}$.
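The projector $\tensor{\underline{h}}{{}^{b}_{c}}=\tensor{\delta}{{}^{b}_{c}}+\nu^{b}\nu_{c}$ entering the definition of the bundles $h(T^{r}_{s}\mathcal{M})$ satisfies the usual projection identities. A minimal numerical sketch in a flat model (the flat spatial metric is an assumption made only for illustration; the paper's $\underline{h}_{ab}$ is a general Riemannian metric):

```python
import numpy as np

# Lorentzian metric g = -dt^2 + h on [T0,0) x Sigma, illustrated with h = identity
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

nu_down = np.array([1.0, 0.0, 0.0, 0.0])  # nu_c = (dt)_c
nu_up = g_inv @ nu_down                   # nu^c = g^{cd} nu_d = -(d/dt)^c

# projection h^b_c = delta^b_c + nu^b nu_c onto the g-orthogonal subspace to nu^c
P = np.eye(4) + np.outer(nu_up, nu_down)

assert np.allclose(P @ P, P)              # idempotent
assert np.allclose(P @ nu_up, 0.0)        # annihilates nu^c
assert np.isclose(nu_down @ nu_up, -1.0)  # nu^c is unit timelike
```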
The coefficients $\mathbf{A}^{c}(u)$, $\mathfrak{A}(u)$, $\mathbb{P}$ and $G(u)$ will be assumed to satisfy conditions $(1)$–$(5)$ from §4.2 below.
Remark 4.0.1.
(i)
The coefficients $\mathbf{A}^{c}$ and $G$ in (4.1) implicitly depend on the spacetime points $(t,x)\in\mathcal{M}$ via the section $u$ of $\mathbf{V}$. The dependence can be made explicit by locally representing $u$ in a vector bundle chart as $(t,x,\tilde{u}(t,x))$.
(ii)
We can naturally view $u$ as a time-dependent section of the vector bundle
$$V=\bigoplus^{\ell}_{k=1}T^{m_{k}}_{n_{k}}\Sigma$$
over $\Sigma$. Since, in this viewpoint, $u$ no longer includes $t$ in its base point, i.e. $u$ is locally of the form $(x,\tilde{u}(t,x))$ with $(t,x)\in\mathcal{M}$, we will interpret the coefficients $\mathbf{A}^{c}$ and $G$ as depending on $t$ as well as $u$, and we will write
$\mathbf{A}^{c}(t,u)$ and $G(t,u)$. This interpretation also allows us to write the Fuchsian equation (4.1) as
$$\mathbf{A}^{0}(t,u)\partial_{t}u+\mathbf{A}^{c}(t,u)\tensor{\underline{h}}{{}^{b}_{c}}\underline{D}_{b}u=\frac{1}{t}\mathfrak{A}(t,u)\mathbb{P}u+G(t,u)$$
where, as above, $\mathbf{A}^{0}=\mathbf{A}^{c}\nu_{c}$ and $\underline{D}$ denotes the Levi-Civita connection of the Riemannian metric $\underline{h}_{ab}$ on $\Sigma$.
For the remainder of this section, we will favour the interpretation of $u$ taking values in the vector bundle $V$ in line with Remark 4.0.1.(ii).
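The role of the singular term $\frac{1}{t}\mathfrak{A}(u)\mathbb{P}u$ in (4.1) can be previewed with a scalar toy model (our own illustration, not an equation from the text): taking $\mathbf{A}^{0}=1$, dropping the spatial and source terms and setting $\mathfrak{A}\mathbb{P}=\mathfrak{a}$ constant reduces (4.1) to $\partial_{t}u=\frac{\mathfrak{a}}{t}u$ on $[T_{0},0)$, with exact solution $u=u_{0}(t/T_{0})^{\mathfrak{a}}$. Solutions stay bounded as $t\to 0^{-}$ precisely when $\mathfrak{a}\geq 0$, which is the kind of structural restriction the coefficient assumptions below encode. A numerical sketch:

```python
import numpy as np

# scalar model of (4.1): du/dt = (a/t) u on [T0, 0); exact solution u = u0*(t/T0)**a
def integrate(a, u0=1.0, T0=-1.0, t_end=-1e-4, n=100000):
    ts = np.linspace(T0, t_end, n)
    u = u0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        tm = 0.5 * (t0 + t1)              # coefficient evaluated at the midpoint
        u += (t1 - t0) * (a / tm) * u     # explicit Euler step
    return u

assert integrate(0.0) == 1.0   # constant when a = 0
assert integrate(1.0) < 1e-2   # decays as t -> 0^- when a > 0
assert integrate(-0.5) > 10.0  # blows up as t -> 0^- when a < 0
```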
4.1. Symmetric linear operators and inner products
For fixed $t$, $u$ and $\eta_{b}$, $\mathbf{A}^{c}(t,u)\underline{h}^{b}{}_{c}\eta_{b}$ and $\mathbf{A}^{0}(t,u)$ define linear operators on $V$.
One of the assumptions needed for the
Fuchsian existence theory from [4] is that these operators are symmetric with respect to a given inner product on $V$. In the following, we will employ the inner product defined on elements
$$v=(v_{(1)},\ldots,v_{(\ell)}),\;u=(u_{(1)},\ldots,u_{(\ell)})\in V$$
by
$$\displaystyle\langle v,u\rangle_{\underline{h}}=$$
$$\displaystyle\sum^{\ell}_{k=1}\Bigl{(}\prod_{i=1}^{m_{k}}\underline{h}_{c_{i}b_{i}}\Bigr{)}\Bigl{(}\prod_{j=1}^{n_{k}}\underline{h}^{d_{j}a_{j}}\Bigr{)}\tensor{(v_{(k)})}{{}^{c_{1}\cdots c_{m_{k}}}_{d_{1}\ldots d_{n_{k}}}}\tensor{(u_{(k)})}{{}^{b_{1}\cdots b_{m_{k}}}_{a_{1}\ldots a_{n_{k}}}}.$$
(4.3)
For use below, we define projections $\mathbb{P}_{(j)}:V\rightarrow V$ and maps $\phi_{(j)}:\operatorname{Im}\mathbb{P}_{(j)}\rightarrow T^{m_{j}}_{n_{j}}\Sigma$ by
$$\displaystyle\mathbb{P}_{(j)}u=(0,\cdots,u_{(j)},\cdots,0)\quad\text{and}\quad\phi_{(j)}(0,\cdots,u_{(j)},\cdots,0)=u_{(j)},$$
respectively,
and we set $\widetilde{\mathbb{P}}_{(j)}=\phi_{(j)}\circ\mathbb{P}_{(j)}$. We also note using these maps that $u=\sum^{\ell}_{k=1}\phi^{-1}_{(k)}\widetilde{\mathbb{P}}_{(k)}u$.
Definition 4.1.1.
Here, $L(V)$ denotes the set of linear operators on $V$, which is isomorphic to $V^{*}\otimes V$. The transpose of $\mathbf{A}\in L(V)$, denoted $\mathbf{A}^{\text{tr}}$, is the unique element of $L(V)$ satisfying
$$\displaystyle\langle v,\mathbf{A}u\rangle_{\underline{h}}=\langle\mathbf{A}^{\text{tr}}v,u\rangle_{\underline{h}}$$
for all $u,v\in V$. Moreover, we say that $\mathbf{A}\in L(V)$ is symmetric if $\mathbf{A}^{\text{tr}}=\mathbf{A}$.
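In finite-dimensional coordinates, the content of Definition 4.1.1 is elementary linear algebra: if the fibre inner product (4.3) is represented by a symmetric positive-definite matrix $H$, so that $\langle v,u\rangle_{H}=v^{T}Hu$, then the transpose in this sense is $\mathbf{A}^{\text{tr}}=H^{-1}\mathbf{A}^{T}H$. A minimal numerical sketch (the matrices below are illustrative and not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fibre metric H (symmetric positive definite) and an arbitrary operator A.
M = rng.standard_normal((4, 4))
H = M @ M.T + 4 * np.eye(4)          # SPD by construction
A = rng.standard_normal((4, 4))

def inner(v, u):
    """Stand-in for the inner product (4.3): <v, u>_H = v^T H u."""
    return v @ H @ u

# <v, A u>_H = <A_tr v, u>_H for all u, v forces A_tr = H^{-1} A^T H.
A_tr = np.linalg.solve(H, A.T @ H)

v, u = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose(inner(v, A @ u), inner(A_tr @ v, u))

# A is symmetric w.r.t. <.,.>_H exactly when H A is symmetric as a matrix;
# symmetrizing H A therefore produces a symmetric operator in this sense.
S = np.linalg.solve(H, (A.T @ H + H @ A) / 2)
S_tr = np.linalg.solve(H, S.T @ H)
assert np.allclose(S_tr, S)
```

In particular, symmetry of $\mathbf{A}$ with respect to $\langle\cdot,\cdot\rangle_{H}$ is equivalent to symmetry of the ordinary matrix $H\mathbf{A}$.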
Given $\mathbf{A}\in L(V)$, we can represent it in block form as $\mathbf{A}=(\mathbf{A}_{(kl)})$
where the blocks $\mathbf{A}_{(kl)}$
are defined by
$$\mathbf{A}_{(kl)}=\widetilde{\mathbb{P}}_{(k)}\mathbf{A}\phi_{(l)}^{-1}.$$
Using this notation, it can then be verified by a straightforward calculation that the transpose of $\mathbf{A}=(\mathbf{A}_{(kl)})$ is given by
$$\displaystyle\tensor{\bigl{(}(\mathbf{A}^{\text{tr}})_{(lk)}\bigr{)}}{{}^{\hat{e}_{1}\cdots\hat{e}_{m_{l}}d_{1}\cdots d_{n_{k}}}_{e_{1}\cdots e_{n_{l}}c_{1}\cdots c_{m_{k}}}}=$$
$$\displaystyle\hskip 28.45274pt\tensor{\bigl{(}\mathbf{A}_{(kl)}\bigr{)}}{{}^{b_{1}\cdots b_{m_{k}}\hat{a}_{1}\cdots\hat{a}_{n_{l}}}_{a_{1}\cdots a_{n_{k}}\hat{b}_{1}\cdots\hat{b}_{m_{l}}}}\Bigl{(}\prod_{i=1}^{m_{k}}\underline{h}_{c_{i}b_{i}}\Bigr{)}\Bigl{(}\prod_{i=1}^{n_{l}}\underline{h}_{\hat{a}_{i}e_{i}}\Bigr{)}\Bigl{(}\prod_{j=1}^{n_{k}}\underline{h}^{d_{j}a_{j}}\Bigr{)}\Bigl{(}\prod_{j=1}^{m_{l}}\underline{h}^{\hat{b}_{j}\hat{e}_{j}}\Bigr{)}.$$
(4.4)
4.2. Coefficient assumptions
Here, we state an adapted version of the coefficient assumptions from [4, §$3.1$]. These assumptions need to be satisfied in order to apply the Fuchsian existence results [4]. In the following, we employ the order notation, i.e. $\mathrm{O}$ and $\mathcal{O}$, from [4, §2.4].
(1)
The map $\mathbb{P}$ is a time-independent section of the vector bundle $L(V)$ over $\Sigma$ that is covariantly constant and defines a symmetric projection operator on $V$, i.e.
$$\mathbb{P}^{2}=\mathbb{P},\quad\mathbb{P}^{\text{tr}}=\mathbb{P},\quad\partial_{t}\mathbb{P}=0\quad\text{and}\quad\underline{D}\mathbb{P}=0.$$
(2)
There exist constants $R,\kappa,\gamma_{1},\gamma_{2}>0$ such that the maps
$$\mathbf{A}^{0}\in C^{1}([T_{0},0],C^{\infty}(B_{R}(V),L(V))){\quad\text{and}\quad}\mathfrak{A}\in C^{0}([T_{0},0],C^{\infty}(B_{R}(V),L(V)))$$
satisfy $\pi(\mathbf{A}^{0}(t,v))=\pi(\mathfrak{A}(t,v))=\pi(v)$ (here and in the following, $\pi$ is used to denote a vector bundle projection map; the particular vector bundle will be clear from context, and so no confusion should arise from the use of the same symbol for all the vector bundle projections) and
$$\displaystyle\frac{1}{\gamma_{1}}\langle u,u\rangle_{\underline{h}}\leq\langle u,\mathbf{A}^{0}(t,v)u\rangle_{\underline{h}}\leq\frac{1}{\kappa}\langle u,\mathfrak{A}(t,v)u\rangle_{\underline{h}}\leq\gamma_{2}\langle u,u\rangle_{\underline{h}}$$
(4.5)
for all
$(t,u,v)\in[T_{0},0)\times V\times B_{R}(V)$.
Moreover, $\mathbf{A}^{0}$ satisfies the relations
$$\displaystyle\mathbf{A}^{0}(t,v)^{\text{tr}}=\mathbf{A}^{0}(t,v),$$
$$\displaystyle[\mathbb{P}(\pi(v)),\mathfrak{A}(t,v)]=0,$$
$$\displaystyle\mathbb{P}(\pi(v))\mathbf{A}^{0}(t,v)\mathbb{P}^{\perp}(\pi(v))=\mathrm{O}\bigl{(}\mathbb{P}(\pi(v))v\bigr{)}$$
and
$$\displaystyle\mathbb{P}^{\perp}(\pi(v))\mathbf{A}^{0}(t,v)\mathbb{P}(\pi(v))=\mathrm{O}\bigl{(}\mathbb{P}(\pi(v))v\bigr{)},$$
for all $(t,v)\in[T_{0},0)\times B_{R}(V)$, and there exist maps $\mathring{\mathbf{A}}^{0},\,\mathring{\mathfrak{A}}\in C^{0}([T_{0},0],\Gamma(L(V)))$ (here, $\Gamma(L(V))$ denotes the sections of the vector bundle $L(V)$ over $\Sigma$) satisfying
$[\mathbb{P},\mathring{\mathfrak{A}}]=0$,
and
$$\mathbf{A}^{0}(t,v)-\mathring{\mathbf{A}}^{0}(t,\pi(v))=\mathrm{O}(v){\quad\text{and}\quad}\mathfrak{A}(t,v)-\mathring{\mathfrak{A}}(t,\pi(v))=\mathrm{O}(v)$$
for all $(t,v)\in[T_{0},0)\times B_{R}(V)$.
(3)
The map $G(t,v)\in C^{0}([T_{0},0),C^{\infty}(B_{R}(V),V))$ admits an expansion of the form
$$G(t,v)=\mathring{G}(t,\pi(v))+G_{0}(t,v)+|t|^{-\frac{1}{2}}G_{1}(t,v)+|t|^{-1}G_{2}(t,v)$$
where $\mathring{G}\in C^{0}([T_{0},0],\Gamma(V))$ and the maps $G_{\ell}(t,v)\in C^{0}([T_{0},0],C^{\infty}(B_{R}(V),V))$, $\ell=0,1,2$, satisfy $\pi(G_{\ell}(t,v))=\pi(v)$ and
$$\mathbb{P}(\pi(v))G_{2}(t,v)=0$$
for all $(t,v)\in[T_{0},0]\times B_{R}(V)$. Moreover, there exist constants $\lambda_{\ell}\geq 0$, $\ell=1,2,3$, such that
$$\displaystyle G_{0}(t,v)=\mathrm{O}(v),\quad\mathbb{P}(\pi(v))G_{1}(t,v)=\mathcal{O}(\lambda_{1}v),\quad\mathbb{P}^{\perp}(\pi(v))G_{1}(t,v)=\mathcal{O}(\lambda_{2}\mathbb{P}(\pi(v))v),$$
and
$$\displaystyle\mathbb{P}^{\perp}(\pi(v))G_{2}(t,v)=\mathcal{O}\Bigl{(}\lambda_{3}R^{-1}\mathbb{P}(\pi(v))v\otimes\mathbb{P}v\Bigr{)}$$
for all $(t,v)\in[T_{0},0)\times B_{R}(V)$.
(4)
The map $\mathbf{A}^{c}\underline{h}^{b}{}_{c}\in C^{0}([T_{0},0],C^{\infty}(B_{R}(V),L(V)\otimes T\Sigma))$ satisfies $\pi(\mathbf{A}^{c}(t,v)\underline{h}^{b}{}_{c})=\pi(v)$
and
$$[\sigma_{c}(\pi(v))\mathbf{A}^{c}(t,v)]^{\text{tr}}=\sigma_{c}(\pi(v))\mathbf{A}^{c}(t,v)$$
for all $(t,v)\in[T_{0},0)\times B_{R}(V)$ and spatial one forms $\sigma_{a}\in\mathfrak{X}^{*}(\Sigma)$.
(5)
For each $(t,v)\in[T_{0},0)\times B_{R}(V)$, there exists an $s_{0}>0$ such that
$$\Theta(s)=\mathbf{A}^{0}\Bigl{(}t,v+s[\mathbf{A}^{0}(t,v)]^{-1}\Bigl{(}\frac{1}{t}\mathfrak{A}(t,v)\mathbb{P}v+G(t,v)\Bigr{)}\Bigr{)},\quad|s|<s_{0},$$
defines a smooth curve in $L(V)$. There exist
constants $\theta$ and $\beta_{\ell}\geq 0$, $\ell=0,1,\ldots,7$, such that the derivative $\Theta^{\prime}(0)$ satisfies (this condition is a reformulation of [4, §3.1.v]; it is straightforward to check that it implies the condition of [4, §3.1.v] for the Fuchsian equation (4.1) that we are considering here)
$$\displaystyle\langle v,\mathbb{P}(\pi(v))\Theta^{\prime}(0)\mathbb{P}(\pi(v))v\rangle_{\underline{h}}=\mathcal{O}\Bigl{(}\theta v\otimes v+|t|^{-\frac{1}{2}}\beta_{0}v\otimes\mathbb{P}(\pi(v))v+|t|^{-1}\beta_{1}\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v\Bigr{)},$$
$$\displaystyle\langle v,\mathbb{P}(\pi(v))\Theta^{\prime}(0)\mathbb{P}^{\perp}(\pi(v))v\rangle_{\underline{h}}=\mathcal{O}\Biggl{(}\theta v\otimes v+|t|^{-\frac{1}{2}}\beta_{2}v\otimes\mathbb{P}(\pi(v))v+\frac{|t|^{-1}\beta_{3}}{R}\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v\Biggr{)},$$
$$\displaystyle\langle v,\mathbb{P}^{\perp}(\pi(v))\Theta^{\prime}(0)\mathbb{P}(\pi(v))v\rangle_{\underline{h}}=\mathcal{O}\Biggl{(}\theta v\otimes v+|t|^{-\frac{1}{2}}\beta_{4}v\otimes\mathbb{P}(\pi(v))v+\frac{|t|^{-1}\beta_{5}}{R}\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v\Biggr{)}$$
and
$$\displaystyle\langle v,\mathbb{P}^{\perp}(\pi(v))\Theta^{\prime}(0)\mathbb{P}^{\perp}(\pi(v))v\rangle_{\underline{h}}=\mathcal{O}\Biggl{(}\theta v\otimes v+\frac{|t|^{-\frac{1}{2}}\beta_{6}}{R}v\otimes\mathbb{P}(\pi(v))v+\frac{|t|^{-1}\beta_{7}}{R^{2}}\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v\Biggr{)}$$
for all $(t,v)\in[T_{0},0)\times B_{R}(V)$.
4.3. Global existence theorem
Theorem 3.8 from [4] implies the global existence result for the Fuchsian initial value problem (4.1)–(4.2) that is stated in the following theorem. It is worth noting that the regularity required of the initial data in the theorem below is less than what is required for [4, Theorem 3.8]. This is because the spatial derivative term $\mathbf{A}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{D}_{b}$ appearing in (4.1) does not contain any $1/t$ singular terms by assumption. As a consequence, the use of [4, Lemma 3.5] can be avoided in the proof of [4, Theorem 3.8] under this assumption, which leads to the reduction in the required regularity of the initial data.
Theorem 4.1 (Global existence theorem).
Suppose $s\in\mathbb{Z}_{>\frac{n+1}{2}}$, $T_{0}<0$, $u_{0}\in H^{s}(\Sigma;V)$, the coefficient assumptions $(1)$–$(5)$ from §4.2 are satisfied, and the constants $\kappa,\gamma_{1},\lambda_{3},\beta_{0},\beta_{2k+1}$, $k=0,\ldots,3$ satisfy
$$\kappa>\frac{1}{2}\gamma_{1}\Bigl{(}\sum_{k=0}^{3}\beta_{2k+1}+2\lambda_{3}\Bigr{)}.$$
Then, there exists a constant $\delta>0$, such that if
$$\max\Bigl{\{}\|u_{0}\|_{H^{s}},\sup_{T_{0}\leq\tau<0}\|\mathring{G}(\tau)\|_{H^{s}}\Bigr{\}}\leq\delta,$$
then there is a unique solution
$$u\in C^{0}([T_{0},0),H^{s}(\Sigma;V))\cap C^{1}([T_{0},0),H^{s-1}(\Sigma;V))\cap L^{\infty}([T_{0},0),H^{s}(\Sigma;V))$$
of the initial value problem (4.1)–(4.2) such that $\mathbb{P}^{\perp}u(0):=\lim_{t\nearrow 0}\mathbb{P}^{\perp}u(t)$ exists in $H^{s-1}(\Sigma;V)$.
Moreover, the solution $u$ satisfies the energy estimate
$$\|u(t)\|_{H^{s}}^{2}+\sup_{T_{0}\leq\tau<0}\|\mathring{G}(\tau)\|^{2}_{H^{s}}-\int^{t}_{T_{0}}\frac{1}{\tau}\|\mathbb{P}u(\tau)\|^{2}_{H^{s}}d\tau\leq C(\delta,\delta^{-1})\bigl{(}\|u_{0}\|^{2}_{H^{s}}+\sup_{T_{0}\leq\tau<0}\|\mathring{G}(\tau)\|^{2}_{H^{s}}\bigr{)}$$
(4.6)
and the decay estimates
$$\displaystyle\|\mathbb{P}u(t)\|_{H^{s-1}}\lesssim\begin{cases}|t|+\lambda_{1}|t|^{\frac{1}{2}},\quad&\text{if }\zeta>1\\
|t|^{\zeta-\sigma}+\lambda_{1}|t|^{\frac{1}{2}},\quad&\text{if }\frac{1}{2}<\zeta\leq 1\\
|t|^{\zeta-\sigma},\quad&\text{if }0<\zeta\leq\frac{1}{2}\end{cases}$$
and
$$\displaystyle\|\mathbb{P}^{\perp}u(t)-\mathbb{P}^{\perp}u(0)\|_{H^{s}}\lesssim\begin{cases}|t|^{\frac{1}{2}}+|t|^{\zeta-\sigma},\quad&\text{if }\zeta>\frac{1}{2}\\
|t|^{\zeta-\sigma},\quad&\text{if }\zeta\leq\frac{1}{2}\end{cases}$$
for all $t\in[T_{0},0)$ where $\zeta=\kappa-\frac{1}{2}\gamma_{1}\beta_{1}$.
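The case distinction in the decay estimates of Theorem 4.1 is pure bookkeeping in the exponents. The following sketch (a hypothetical helper, not part of the theorem; $\sigma$ stands for the small positive constant appearing in the estimates) returns the exponent $\alpha$ for which $\|\mathbb{P}u(t)\|_{H^{s-1}}\lesssim|t|^{\alpha}$ as $t\nearrow 0$:

```python
def Pu_decay_exponent(kappa, gamma1, beta1, lam1, sigma):
    """Illustrative only: the decay exponent alpha with ||P u(t)|| <~ |t|^alpha,
    following the case distinction in Theorem 4.1, where zeta = kappa - gamma1*beta1/2."""
    zeta = kappa - 0.5 * gamma1 * beta1
    if zeta > 1:
        # Bound is |t| + lam1 |t|^{1/2}; the smaller exponent dominates as t -> 0.
        return 0.5 if lam1 > 0 else 1.0
    if zeta > 0.5:
        # Bound is |t|^{zeta - sigma} + lam1 |t|^{1/2}.
        return min(zeta - sigma, 0.5) if lam1 > 0 else zeta - sigma
    return zeta - sigma

# Example: kappa = 2, gamma1 = 1, beta1 = 1 gives zeta = 3/2 > 1.
assert Pu_decay_exponent(2.0, 1.0, 1.0, lam1=0.0, sigma=0.01) == 1.0
assert Pu_decay_exponent(2.0, 1.0, 1.0, lam1=1.0, sigma=0.01) == 0.5
```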
5. Global existence for the Fuchsian formulation of the EYM equations
In this section, we carry out one of the main steps in the proof of Theorem 1.1 by establishing the global existence of solutions to the Fuchsian equation obtained from combining the Fuchsian equations from Theorems 3.1 and 3.2, which is given by
$$\displaystyle-\widehat{\mathbf{A}}^{0}\nu^{c}\underline{\nabla}_{c}\widehat{\mathbf{U}}+\widehat{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{c}\widehat{\mathbf{U}}=\frac{1}{Ht}\widehat{\mathcal{B}}\widehat{\mathbf{U}}+\widehat{G}(t,\widehat{\mathbf{U}}),$$
(5.1)
where
$$\displaystyle\widehat{\mathbf{A}}^{0}=\begin{pmatrix}\bar{\mathbf{A}}_{1}^{0}&0&0&0&0\\
0&\bar{\mathbf{A}}_{2}^{0}&0&0&0\\
0&0&\bar{\mathbf{A}}_{3}^{0}&0&0\\
0&0&0&\bar{\mathbf{A}}_{4}^{0}&0\\
0&0&0&0&\acute{\mathbf{A}}^{0}\end{pmatrix},\quad\widehat{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}=\begin{pmatrix}\bar{\mathbf{A}}_{1}^{b}\tensor{\underline{h}}{{}^{c}_{b}}&0&0&0&0\\
0&\bar{\mathbf{A}}_{2}^{b}\tensor{\underline{h}}{{}^{c}_{b}}&0&0&0\\
0&0&\bar{\mathbf{A}}_{3}^{b}\tensor{\underline{h}}{{}^{c}_{b}}&0&0\\
0&0&0&\bar{\mathbf{A}}_{4}^{b}\tensor{\underline{h}}{{}^{c}_{b}}&0\\
0&0&0&0&\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\end{pmatrix},$$
(5.12)
$$\displaystyle\widehat{\mathcal{B}}=\begin{pmatrix}\bar{\mathcal{B}}_{1}&0&0&0&0\\
0&\bar{\mathcal{B}}_{2}&0&0&0\\
0&0&\bar{\mathcal{B}}_{3}&0&0\\
0&0&0&\bar{\mathcal{B}}_{4}&0\\
0&0&0&0&\acute{\mathcal{B}}\end{pmatrix},\quad\widehat{G}(t,\widehat{\mathbf{U}})=\begin{pmatrix}\bar{G}_{1}(t,\widehat{\mathbf{U}})\\
\bar{G}_{2}(t,\widehat{\mathbf{U}})\\
\bar{G}_{3}(t,\widehat{\mathbf{U}})\\
\bar{G}_{4}(t,\widehat{\mathbf{U}})\\
\acute{G}(t,\widehat{\mathbf{U}})\end{pmatrix},$$
(5.23)
and
$$\widehat{\mathbf{U}}=(-\nu^{e}m_{e},\tensor{\underline{h}}{{}^{e}_{\hat{e}}}m_{e},m,-\nu^{e}\tensor{p}{{}^{a}_{e}},\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}},p^{a},-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}},\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}},s^{\hat{a}\hat{b}},-\nu^{e}s_{e},\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e},s,\mathcal{E}^{e},E_{d},H_{\hat{a}\hat{b}},\bar{A}_{s})^{\text{tr}}.$$
(5.24)
While, by construction, solutions $(g_{ab},A_{b})$ of the reduced conformal Einstein–Yang–Mills equations in the temporal gauge determine solutions of the Fuchsian system (5.1) via (1.30), (1.31)–(1.38) and (1.43)–(1.44), we will, in this section, analyze general solutions to (5.1) and not, a priori, assume that they are derived from solutions of the conformal Einstein–Yang–Mills equations. The main purpose of establishing the existence of solutions to the Fuchsian system (5.1) will be to obtain global bounds on solutions of the reduced conformal Einstein–Yang–Mills equations.
The details of this argument can be found in §7.
The main global existence theorem for the Fuchsian equation (5.1) is stated below in Theorem 5.1. In order to be able to apply this theorem, we need to establish that (5.1) verifies the coefficient assumptions $(1)$–$(5)$ from §4.2. As we show below, this requires us to make certain dimension dependent choices for the free parameters $\breve{\mathtt{A}},\breve{\mathtt{B}},\breve{\mathtt{J}},\breve{\mathtt{K}},\breve{\mathtt{E}},\breve{\mathtt{F}}$ that appear in the coefficients of (5.1).
5.1. Parameter selection
The parameters $\breve{\mathtt{A}},\breve{\mathtt{B}},\breve{\mathtt{J}},\breve{\mathtt{K}},\breve{\mathtt{E}},\breve{\mathtt{F}}$ are determined by the spacetime dimension $n$ and the following decomposition for the matrix $\widehat{\mathcal{B}}$ defined above by (5.23):
$$\displaystyle\widehat{\mathcal{B}}=\mathfrak{\widehat{A}}\widehat{\mathbb{P}}=\begin{pmatrix}\mathfrak{\bar{A}}_{1}&0&0&0&0\\
0&\mathfrak{\bar{A}}_{2}&0&0&0\\
0&0&\mathfrak{\bar{A}}_{3}&0&0\\
0&0&0&\mathfrak{\bar{A}}_{4}&0\\
0&0&0&0&\mathfrak{\acute{A}}\end{pmatrix}\begin{pmatrix}\bar{\mathbb{P}}_{1}&0&0&0&0\\
0&\bar{\mathbb{P}}_{2}&0&0&0\\
0&0&\bar{\mathbb{P}}_{3}&0&0\\
0&0&0&\bar{\mathbb{P}}_{4}&0\\
0&0&0&0&\acute{\mathbb{P}}\end{pmatrix},$$
which implies, in particular, that
$\bar{\mathcal{B}}_{\ell}=\mathfrak{\bar{A}}_{\ell}\bar{\mathbb{P}}_{\ell}$, $\ell=1,\ldots,4$, and $\acute{\mathcal{B}}=\acute{\mathfrak{A}}\acute{\mathbb{P}}$. This decomposition is not unique and we have some freedom in choosing the particular form of the operators $\mathfrak{\bar{A}}_{\ell},\acute{\mathfrak{A}}$ and the projection operators $\bar{\mathbb{P}}_{\ell},\acute{\mathbb{P}}$. We fix $\mathfrak{\bar{A}}_{3}$, $\mathfrak{\bar{A}}_{4}$, $\acute{\mathfrak{A}}$, $\bar{\mathbb{P}}_{3}$, $\bar{\mathbb{P}}_{4}$ and $\acute{\mathbb{P}}$ by setting
$$\displaystyle\mathfrak{\bar{A}}_{3}=\mathfrak{\bar{A}}_{4}=\begin{pmatrix}-\lambda(n-2)&0&0\\
0&1&0\\
0&0&1\end{pmatrix},\quad\bar{\mathbb{P}}_{3}=\bar{\mathbb{P}}_{4}=\begin{pmatrix}1&0&0\\
0&0&0\\
0&0&0\end{pmatrix},\quad\acute{\mathfrak{A}}=\acute{\mathcal{B}}{\quad\text{and}\quad}\acute{\mathbb{P}}=\mathds{1},$$
respectively. The choice of the remaining operators $\mathfrak{\bar{A}}_{1}$, $\mathfrak{\bar{A}}_{2}$, $\bar{\mathbb{P}}_{1}$ and $\bar{\mathbb{P}}_{2}$ will depend on the spacetime dimension $n$, which we separate into the following three cases: $n=4$, $n\geq 6$ and $n=5$.
$(1)$ $n=4$: For $n=4$, we fix the parameters
$\breve{\mathtt{A}},\breve{\mathtt{B}},\breve{\mathtt{J}},\breve{\mathtt{K}},\breve{\mathtt{E}},\breve{\mathtt{F}}$
by setting
$$\displaystyle\breve{\mathtt{K}}=\breve{\mathtt{J}}=\frac{2}{3},\qquad\breve{\mathtt{A}}=\breve{\mathtt{B}}=2H{\quad\text{and}\quad}\breve{\mathtt{E}}=\breve{\mathtt{F}}=1.$$
Inserting these parameters into $\bar{\mathbf{A}}_{\ell}^{0}$ and $\bar{\mathcal{B}}_{\ell}$ (see Theorem 3.1), we can ensure that $\bar{\mathcal{B}}_{\ell}=\mathfrak{\bar{A}}_{\ell}\bar{\mathbb{P}}_{\ell}$ for $\ell=1,2$ by setting
$$\displaystyle\mathfrak{\bar{A}}_{1}=\mathfrak{\bar{A}}_{2}=\begin{pmatrix}-\lambda&0&0\\
0&\frac{3}{2}\underline{h}_{fd}h^{f\hat{e}}&0\\
0&0&-\lambda\end{pmatrix}{\quad\text{and}\quad}\bar{\mathbb{P}}_{1}=\bar{\mathbb{P}}_{2}=\begin{pmatrix}\frac{1}{2}&0&\frac{1}{2}\\
0&\tensor{\delta}{{}^{d}_{\hat{c}}}&0\\
\frac{1}{2}&0&\frac{1}{2}\end{pmatrix}.$$
(5.31)
Lemma 5.1.1.
There exist constants $\iota>0$, $R>0$, $\gamma_{1}>1$, and $\gamma_{2}>\frac{3}{2}$ such that
$$\frac{1}{\gamma_{1}}\mathds{1}\leq\bar{\mathbf{A}}^{0}_{\ell}\leq\mathfrak{\bar{A}}_{\ell}\leq\gamma_{2}\mathds{1}{\quad\text{and}\quad}[\mathfrak{\bar{A}}_{\ell},\,\bar{\mathbb{P}}_{\ell}]=0$$
for all $(t,\widehat{\mathbf{U}})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$ and $\ell=1,2$.
Proof.
It is straightforward to verify from (5.31) and the definitions of the matrices $\bar{\mathbf{A}}^{0}_{\ell}$ from Theorem 3.1 that $\bar{\mathbf{A}}^{0}_{\ell}\leq\mathfrak{\bar{A}}_{\ell}$ and $[\mathfrak{\bar{A}}_{\ell},\,\bar{\mathbb{P}}_{\ell}]=0$. Moreover, by taking $R>0$ sufficiently small, it is also not difficult to verify that there exist constants $\gamma_{1}>1$ and $\gamma_{2}>\frac{3}{2}$ such that $\mathds{1}\leq\gamma_{1}\bar{\mathbf{A}}^{0}_{\ell}$ and $\mathfrak{\bar{A}}_{\ell}\leq\gamma_{2}\mathds{1}$, which completes the proof.
∎
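The algebraic part of Lemma 5.1.1 can be spot-checked in a scalar toy model in which the tensorial middle block of (5.31) is replaced by its coefficient $\frac{3}{2}$ and $\lambda$ is given an illustrative negative value:

```python
import numpy as np

lam = -1.0  # lambda < 0, cf. (1.30); the value is illustrative
# Scalarized versions of (5.31): the middle tensor block is replaced by 3/2.
Afrak1 = np.diag([-lam, 1.5, -lam])
P1 = np.array([[0.5, 0.0, 0.5],
               [0.0, 1.0, 0.0],
               [0.5, 0.0, 0.5]])

assert np.allclose(P1 @ P1, P1)               # P1 is a projection
assert np.allclose(P1, P1.T)                  # and symmetric
assert np.allclose(Afrak1 @ P1, P1 @ Afrak1)  # [Afrak1, P1] = 0
```

The commutation holds because the first and third diagonal entries of $\mathfrak{\bar{A}}_{1}$ coincide, which is exactly the structure the mixing in $\bar{\mathbb{P}}_{1}$ requires.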
$(2)$ $n\geq 6$: For $n\geq 6$, we fix the parameters
$\breve{\mathtt{A}},\breve{\mathtt{B}},\breve{\mathtt{J}},\breve{\mathtt{K}},\breve{\mathtt{E}},\breve{\mathtt{F}}$
by setting
$$\displaystyle\frac{1}{\breve{\mathtt{J}}}=n-\frac{7}{2},\qquad\breve{\mathtt{E}}=1,\quad\frac{H}{\breve{\mathtt{A}}}=\sqrt{\frac{1}{2}\left(n-\frac{11}{2}\right)},$$
(5.32)
$$\displaystyle\frac{1}{\breve{\mathtt{K}}}=\frac{n-1}{2},\qquad\breve{\mathtt{F}}=1{\quad\text{and}\quad}\frac{H}{\breve{\mathtt{B}}}=\frac{n-3}{2}.$$
(5.33)
Substituting these choices into $\bar{\mathbf{A}}_{\ell}^{0}$ and $\bar{\mathcal{B}}_{\ell}$, we find that $\bar{\mathcal{B}}_{\ell}=\mathfrak{\bar{A}}_{\ell}\bar{\mathbb{P}}_{\ell}$,
$\ell=1,2$, where we have set
$$\displaystyle\mathfrak{\bar{A}}_{1}=\begin{pmatrix}\frac{3}{2}(-\lambda)&0&\sqrt{\frac{1}{2}\left(n-\frac{11}{2}\right)}(-\lambda)\\
0&\left(n-\frac{7}{2}\right)\underline{h}_{fd}h^{f\hat{e}}&0\\
\sqrt{\frac{1}{2}\left(n-\frac{11}{2}\right)}(-\lambda)&0&(n-\frac{9}{2})(-\lambda)\end{pmatrix},\quad\bar{\mathbb{P}}_{1}=\mathds{1},$$
(5.37)
$$\displaystyle\mathfrak{\bar{A}}_{2}=\begin{pmatrix}(n-3)(-\lambda)&0&0\\
0&\frac{1}{2}(n-1)\underline{h}_{fd}h^{f\hat{e}}&0\\
0&0&(n-3)(-\lambda)\end{pmatrix}{\quad\text{and}\quad}\bar{\mathbb{P}}_{2}=\begin{pmatrix}\frac{1}{2}&0&\frac{1}{2}\\
0&\tensor{\delta}{{}^{d}_{\hat{d}}}&0\\
\frac{1}{2}&0&\frac{1}{2}\end{pmatrix}.$$
(5.44)
Lemma 5.1.2.
There exist constants $\iota>0$, $R>0$, $\gamma_{1}>1$, $\gamma_{2}>2n-10$, $\tilde{\gamma}_{1}>1$ and $\tilde{\gamma}_{2}>\frac{2(n-3)}{n-1}$ such that
$$\displaystyle\frac{1}{\gamma_{1}}\mathds{1}\leq\bar{\mathbf{A}}^{0}_{1}\leq\mathfrak{\bar{A}}_{1}\leq\gamma_{2}\mathds{1},\quad\frac{1}{\tilde{\gamma}_{1}}\mathds{1}\leq\bar{\mathbf{A}}^{0}_{2}\leq\frac{2}{n-1}\mathfrak{\bar{A}}_{2}\leq\tilde{\gamma}_{2}\mathds{1}$$
and
$$\displaystyle[\mathfrak{\bar{A}}_{\ell},\bar{\mathbb{P}}_{\ell}]=0$$
for all $(t,\widehat{\mathbf{U}})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$ and $\ell=1,2$.
Proof.
By (5.37), we note, for any $\omega=(\omega_{1},\omega_{2},\omega_{3})^{\text{tr}}$, that
$$\displaystyle\langle\mathfrak{\bar{A}}_{1}\omega,\omega\rangle_{\underline{h}}=-\lambda\frac{3}{2}\omega_{1}^{2}+\left(n-\frac{7}{2}\right)h^{ab}\omega_{2a}\omega_{2b}+(-\lambda)\left(n-\frac{9}{2}\right)\omega_{3}^{2}-2\lambda\sqrt{\frac{1}{2}\left(n-\frac{11}{2}\right)}\omega_{1}\omega_{3},$$
where in deriving this we have employed the relation $\omega_{2a}=\underline{h}_{a\hat{a}}\omega_{2}^{\hat{a}}$. But, by Young’s inequality (see Lemma B.2.1)
$$2\sqrt{\frac{1}{2}\left(n-\frac{11}{2}\right)}|\omega_{1}\omega_{3}|\leq\frac{1}{2}\omega_{1}^{2}+\left(n-\frac{11}{2}\right)\omega_{3}^{2},$$
and so we deduce that
$$\displaystyle-\lambda\omega_{1}^{2}+\left(n-\frac{7}{2}\right)h^{ab}\omega_{2a}\omega_{2b}+$$
$$\displaystyle(-\lambda)\omega_{3}^{2}\leq\langle\bar{\mathfrak{A}}_{1}\omega,\omega\rangle_{\underline{h}}$$
$$\displaystyle\leq$$
$$\displaystyle-2\lambda\omega_{1}^{2}+\left(n-\frac{7}{2}\right)h^{ab}\omega_{2a}\omega_{2b}+(-\lambda)(2n-10)\omega_{3}^{2},$$
where in deriving this we have used the fact that $\lambda<0$, see (1.30).
On the other hand, from the definition of $\bar{\mathbf{A}}^{0}_{1}$ in Theorem 3.1, we observe that $\langle\bar{\mathbf{A}}^{0}_{1}\omega,\omega\rangle_{\underline{h}}=-\lambda\omega_{1}^{2}+h^{ab}\omega_{2a}\omega_{2b}+(-\lambda)\omega_{3}^{2}$. With the help of this identity, we conclude, by taking small enough $R>0$, that there exist constants $\gamma_{1}>1$ and $\gamma_{2}>2n-10$ such that
$$\frac{1}{\gamma_{1}}\mathds{1}\leq\bar{\mathbf{A}}^{0}_{1}\leq\mathfrak{\bar{A}}_{1}\leq\gamma_{2}\mathds{1}.$$
To complete the proof, we note that the other stated inequality follows from similar arguments while the relation $[\mathfrak{\bar{A}}_{\ell},\bar{\mathbb{P}}_{\ell}]=0$, $\ell=1,2$, is a direct consequence of the definitions (5.37)–(5.44).
∎
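The two-sided quadratic-form bound for $\mathfrak{\bar{A}}_{1}$ used in the proof of Lemma 5.1.2 can be spot-checked numerically on the $(\omega_{1},\omega_{3})$ block of (5.37), scalarized and with the illustrative choice $\lambda=-1$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = -1.0   # lambda < 0, cf. (1.30); the value is illustrative

for n in (6, 8, 12):
    c = 0.5 * (n - 5.5)
    # Scalarized (omega_1, omega_3) block of Afrak_1 from (5.37):
    M = -lam * np.array([[1.5,        np.sqrt(c)],
                         [np.sqrt(c), n - 4.5   ]])
    for _ in range(500):
        w = rng.standard_normal(2)
        q = w @ M @ w
        lower = -lam * (w[0] ** 2 + w[1] ** 2)
        upper = -lam * (2 * w[0] ** 2 + (2 * n - 10) * w[1] ** 2)
        # Young's inequality makes both bounds sharp (e.g. w[0] = +/- w[1]
        # when n = 6), hence the small tolerance.
        assert lower <= q + 1e-12 and q <= upper + 1e-12
```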
$(3)$ $n=5$: For $n=5$, we fix the parameters $\breve{\mathtt{K}},\,\breve{\mathtt{F}},\,\breve{\mathtt{B}}$ in the same way as in (5.33), and we correspondingly define the operators $\mathfrak{\bar{A}}_{2}$ and $\bar{\mathbb{P}}_{2}$ by (5.44).
In addition, we set
$$\displaystyle\breve{\mathtt{E}}=3,\qquad\frac{1}{\breve{\mathtt{J}}}=2{\quad\text{and}\quad}\frac{H}{\breve{\mathtt{A}}}=1,$$
and insert these parameters into $\bar{\mathbf{A}}_{1}^{0}$ and $\bar{\mathcal{B}}_{1}$. We then observe that $\bar{\mathcal{B}}_{1}=\mathfrak{\bar{A}}_{1}\bar{\mathbb{P}}_{1}$ for
$\bar{\mathfrak{A}}_{1}$ and $\bar{\mathbb{P}}_{1}$ defined by
$$\displaystyle\bar{\mathfrak{A}}_{1}=\begin{pmatrix}-\lambda&0&0\\
0&2\underline{h}_{fd}h^{f\hat{e}}&0\\
-3\lambda&0&-3\lambda\end{pmatrix}{\quad\text{and}\quad}\bar{\mathbb{P}}_{1}=\mathds{1},$$
(5.48)
respectively.
Lemma 5.1.3.
There exist constants $\iota>0$, $R>0$, $\gamma_{2}>32$, $\gamma_{1}>1$, $\tilde{\gamma}_{1}>1$, and $\tilde{\gamma}_{2}>\frac{2(n-3)}{n-1}$ such that
$$\displaystyle\frac{1}{\gamma_{1}}\mathds{1}\leq\bar{\mathbf{A}}_{1}^{0}\leq 8\bar{\mathfrak{A}}_{1}\leq\gamma_{2}\mathds{1},\quad\frac{1}{\tilde{\gamma}_{1}}\mathds{1}\leq\bar{\mathbf{A}}^{0}_{2}\leq\frac{2}{n-1}\mathfrak{\bar{A}}_{2}\leq\tilde{\gamma}_{2}\mathds{1},$$
and
$$\displaystyle[\mathfrak{\bar{A}}_{\ell},\bar{\mathbb{P}}_{\ell}]=0$$
for $(t,\widehat{\mathbf{U}})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$ and $\ell=1,2$.
Proof.
From the definition of $\bar{\mathbf{A}}^{0}_{1}$, see Theorem 3.1, and (5.48), we have
$$\displaystyle\langle\bar{\mathfrak{A}}_{1}\omega,\omega\rangle_{\underline{h}}=$$
$$\displaystyle-\lambda\omega_{1}^{2}+2h^{ab}\omega_{2a}\omega_{2b}+3(-\lambda)\omega_{3}^{2}+3(-\lambda)\omega_{1}\omega_{3}$$
and
$$\displaystyle\langle\bar{\mathbf{A}}^{0}_{1}\omega,\omega\rangle_{\underline{h}}=$$
$$\displaystyle-\lambda\omega_{1}^{2}+h^{ab}\omega_{2a}\omega_{2b}+3(-\lambda)\omega_{3}^{2}$$
for any $\omega=(\omega_{1},\omega_{2},\omega_{3})^{\text{tr}}$. In order to bound $\langle\bar{\mathfrak{A}}_{1}\omega,\omega\rangle_{\underline{h}}$, we first bound $3(-\lambda)\omega_{1}\omega_{3}$ above and below by using Young’s inequality (see Lemma B.2.1) twice with $\epsilon=\frac{\sqrt{3}}{3}$ and $\epsilon=\frac{2+\sqrt{13}}{3}$ to obtain
$$\displaystyle-\frac{3}{2}\frac{\sqrt{3}}{3}(-\lambda)\omega_{1}^{2}-\frac{3}{2}\sqrt{3}(-\lambda)\omega^{2}_{3}\leq 3(-\lambda)\omega_{1}\omega_{3}\leq\frac{3}{2}\frac{2+\sqrt{13}}{3}(-\lambda)\omega_{1}^{2}+\frac{3}{2}\frac{3}{2+\sqrt{13}}(-\lambda)\omega^{2}_{3}.$$
Using this inequality, it follows that
$$\displaystyle\Bigl{(}1-\frac{\sqrt{3}}{2}\Bigr{)}(-\lambda)\omega_{1}^{2}+2h^{ab}\omega_{2a}\omega_{2b}$$
$$\displaystyle+3\Bigl{(}1-\frac{\sqrt{3}}{2}\Bigr{)}(-\lambda)\omega_{3}^{2}\leq\langle\bar{\mathfrak{A}}_{1}\omega,\omega\rangle_{\underline{h}}$$
$$\displaystyle\leq$$
$$\displaystyle\Bigl{(}2+\frac{\sqrt{13}}{2}\Bigr{)}(-\lambda)\omega_{1}^{2}+2h^{ab}\omega_{2a}\omega_{2b}+\Bigl{(}2+\frac{\sqrt{13}}{2}\Bigr{)}(-\lambda)\omega_{3}^{2},$$
from which we deduce that $\frac{1}{8}\bar{\mathbf{A}}_{1}^{0}<\bigl{(}1-\frac{\sqrt{3}}{2}\bigr{)}\bar{\mathbf{A}}_{1}^{0}\leq\bar{\mathfrak{A}}_{1}$. By taking $R>0$ small enough, it is also clear that there exist constants $\gamma_{1}>1$ and $\gamma_{2}>32$ such that $8\bar{\mathfrak{A}}_{1}\leq\gamma_{2}\mathds{1}$ and $\mathds{1}\leq\gamma_{1}\bar{\mathbf{A}}_{1}^{0}$.
To complete the proof, we note that the second inequality follows from similar arguments while the relations $[\mathfrak{\bar{A}}_{\ell},\bar{\mathbb{P}}_{\ell}]=0$, $\ell=1,2$, are a direct consequence of the definitions of $\mathfrak{\bar{A}}_{\ell}$ and $\bar{\mathbb{P}}_{\ell}$.
∎
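The coefficient arithmetic behind the proof of Lemma 5.1.3 (Young's inequality $2|xy|\leq\epsilon x^{2}+\epsilon^{-1}y^{2}$ with $\epsilon=\frac{\sqrt{3}}{3}$ and $\epsilon=\frac{2+\sqrt{13}}{3}$) can be verified in a few lines:

```python
import math

s3, s13 = math.sqrt(3), math.sqrt(13)

# Young: 3|w1*w3| <= (3/2) * (eps * w1**2 + w3**2 / eps).
eps_lo, eps_hi = s3 / 3, (2 + s13) / 3

lower1 = 1 - 1.5 * eps_lo   # resulting coefficient of (-lambda)*w1^2, lower bound
lower3 = 3 - 1.5 / eps_lo   # resulting coefficient of (-lambda)*w3^2, lower bound
upper1 = 1 + 1.5 * eps_hi
upper3 = 3 + 1.5 / eps_hi

assert math.isclose(lower1, 1 - s3 / 2)
assert math.isclose(lower3, 3 * (1 - s3 / 2))
assert math.isclose(upper1, 2 + s13 / 2)
assert math.isclose(upper3, 2 + s13 / 2)

# The key numerical fact behind (1/8) A^0_1 < (1 - sqrt(3)/2) A^0_1:
assert lower1 > 1 / 8
```

The choice $\epsilon=\frac{2+\sqrt{13}}{3}$ is precisely the one that equalizes the two upper-bound coefficients at $2+\frac{\sqrt{13}}{2}$.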
5.2. Verification of the coefficient assumptions
With the results of the previous section in hand, we now turn to verifying that the Fuchsian equation (5.1) satisfies all of the coefficient assumptions from §4.2.
Due to the block structure of the coefficients (5.12)–(5.23), the verification of the coefficient assumptions for the conformal Einstein (i.e. the first four blocks) and the conformal Yang–Mills (i.e. the last block) components of the equation can be carried out separately.
Lemma 5.2.1.
Suppose
$\bar{\mathbf{A}}_{i}^{0},\,\bar{\mathbf{A}}_{i}^{b}\tensor{\underline{h}}{{}^{c}_{b}}$, $\bar{\mathcal{B}}_{i}$, $\bar{G}_{i}$, $i=1,\cdots,4$, and $\acute{\mathbf{A}}^{0},\,\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}$, $\acute{\mathcal{B}}$, $\acute{G}$ are defined as in Theorems 3.1 and 3.2, and $\mathfrak{\bar{A}}_{i}$, $\bar{\mathbb{P}}_{i}$, $\acute{\mathfrak{A}}$ and $\acute{\mathbb{P}}$ are defined as in §5.1 (recall that $\mathfrak{\bar{A}}_{j}$ and $\bar{\mathbb{P}}_{j}$, for $j=1,2$, vary according to the dimension $n$). Then there exist positive constants $R,\kappa,\gamma_{1},\gamma_{2}>0$ and non-negative constants $\lambda_{\ell},\theta,\beta_{k}\geq 0$, $\ell=1,2,3$ and $k=0,\ldots,7$, such that the coefficients of the Fuchsian system (5.1) satisfy the assumptions $(1)$–$(5)$ from §4.2 and the inequality
$$\kappa>\frac{1}{2}\gamma_{1}\biggl{(}\sum_{k=0}^{3}\beta_{2k+1}+2\lambda_{3}\biggr{)}$$
(5.49)
holds.
Proof.
$1.$ The conformal Einstein component. Noting that the gravitational variables used in the Fuchsian formulation of the reduced conformal Einstein equation in the articles
[21, 17, 16, 19] are essentially the same ones used in this article, the same arguments from [21, 17, 16, 19] that are employed to verify the coefficient assumptions can be adapted, with the help of Lemmas 5.1.1–5.1.3, in a straightforward manner to show that the Fuchsian formulation of the reduced Einstein equations in this article satisfies the coefficient assumptions. We omit the details.
$2.$ The conformal Yang–Mills component.
Assumptions $(2)$ & $(4)$: We begin the verification of assumptions $(2)$ and $(4)$ by establishing the symmetry of the operators $\acute{\mathbf{A}}^{0}$ and $\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\eta_{c}$. To this end, we observe that the blocks of $\acute{\mathbf{A}}^{0}$ satisfy
$$\displaystyle\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(11)}\bigr{)}}{{}^{\hat{e}}_{e}}\underline{h}^{ef}\underline{h}_{\hat{e}d}$$
$$\displaystyle=-\lambda\hat{h}_{\hat{b}e}\underline{h}^{\hat{b}\hat{e}}\underline{h}^{ef}\underline{h}_{\hat{e}d}=-\lambda\underline{h}^{f\hat{e}}\hat{h}_{\hat{e}d}=\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(11)}\bigr{)}}{{}^{f}_{d}}\quad(\hat{h}_{\hat{b}e}=\tensor{\underline{h}}{{}^{a}_{\hat{b}}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}),$$
$$\displaystyle\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(22)}\bigr{)}}{{}^{\hat{d}}_{f}}\underline{h}_{\hat{d}\hat{c}}\underline{h}^{f\hat{e}}$$
$$\displaystyle=\left(-\lambda h^{\hat{d}d}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{d}\xi^{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\right)\underline{h}_{df}\underline{h}_{\hat{d}\hat{c}}\underline{h}^{f\hat{e}}$$
$$\displaystyle=\left(-\lambda h^{\hat{d}\hat{e}}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{\hat{e}}\xi^{\hat{d}}+2\xi^{\hat{e}}\xi^{\hat{d}}\right)\underline{h}_{\hat{d}\hat{c}}=\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(22)}\bigr{)}}{{}^{\hat{e}}_{\hat{c}}},$$
$$\displaystyle\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(33)}\bigr{)}}{{}^{\hat{c}\hat{d}}_{cd}}\underline{h}_{\bar{a}\hat{c}}\underline{h}_{\bar{b}\hat{d}}\underline{h}^{c\hat{a}}\underline{h}^{d\hat{b}}$$
$$\displaystyle=\underline{h}_{ca^{\prime}}\underline{h}_{db^{\prime}}h^{a^{\prime}\hat{c}}h^{b^{\prime}\hat{d}}\underline{h}_{\bar{a}\hat{c}}\underline{h}_{\bar{b}\hat{d}}\underline{h}^{c\hat{a}}\underline{h}^{d\hat{b}}=h^{\hat{a}\hat{c}}h^{\hat{b}\hat{d}}\underline{h}_{\hat{c}\bar{a}}\underline{h}_{\hat{d}\bar{b}}=\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(33)}\bigr{)}}{{}^{\hat{a}\hat{b}}_{\bar{a}\bar{b}}},$$
$$\displaystyle\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(21)}\bigr{)}}{{}_{fe}}\underline{h}^{f\hat{d}}\underline{h}^{e\hat{e}}$$
$$\displaystyle=-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\underline{h}_{df}\xi^{d}\underline{h}^{f\hat{d}}\underline{h}^{e\hat{e}}=-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s\hat{e}}}\xi^{\hat{d}}=\tensor{\bigl{(}\acute{\mathbf{A}}^{0}_{(12)}\bigr{)}}{{}^{\hat{e}\hat{d}}}.$$
We further observe that the blocks
$\acute{\mathbf{A}}^{0}_{(13)}$, $\acute{\mathbf{A}}^{0}_{(31)}$, $\acute{\mathbf{A}}^{0}_{(23)}$, and $\acute{\mathbf{A}}^{0}_{(32)}$ all vanish and that the block $\acute{\mathbf{A}}^{0}_{(44)}$ is symmetric. We therefore conclude from (4.4) that $\acute{\mathbf{A}}^{0}$ is symmetric.
The symmetry of the operator $\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\eta_{c}$ can be established in a similar fashion from the following identities:
$$\displaystyle\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(21)}\bigr{)}}{{}_{fe}}\underline{h}^{f\hat{d}}\underline{h}^{e\hat{e}}$$
$$\displaystyle=-\bigl{[}2\xi^{c}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}+\tensor{\underline{h}}{{}^{c}_{e}}\bigr{]}\underline{h}_{df}\xi^{d}\underline{h}^{f\hat{d}}\underline{h}^{e\hat{e}}$$
$$\displaystyle=-\bigl{[}2\xi^{c}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s\hat{e}}}+\tensor{\underline{h}}{{}^{c\hat{e}}}\bigr{]}\xi^{\hat{d}}=\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(12)}\bigr{)}}{{}^{\hat{e}\hat{d}}},$$
$$\displaystyle\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(31)}\bigr{)}}{{}_{\bar{a}\bar{b}e}}\underline{h}^{\bar{a}\hat{a}}\underline{h}^{\bar{b}\hat{b}}\underline{h}^{\hat{e}e}$$
$$\displaystyle=\underline{h}_{e\bar{b}}\underline{h}_{a\bar{a}}h^{ac}\underline{h}^{\bar{a}\hat{a}}\underline{h}^{\bar{b}\hat{b}}\underline{h}^{\hat{e}e}=h^{\hat{a}c}\underline{h}^{\hat{b}\hat{e}}=\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(13)}\bigr{)}}{{}^{\hat{e}\hat{a}\hat{b}}},$$
$$\displaystyle\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(32)}\bigr{)}}{{}^{\hat{d}}_{\bar{a}\bar{b}}}\underline{h}^{\bar{a}\hat{a}}\underline{h}^{\bar{b}\hat{b}}\underline{h}_{\hat{d}f}$$
$$\displaystyle=\underline{h}_{b\bar{b}}\underline{h}_{a\bar{a}}h^{\hat{d}a}h^{bc}\underline{h}^{\bar{a}\hat{a}}\underline{h}^{\bar{b}\hat{b}}\underline{h}_{\hat{d}f}$$
$$\displaystyle=\underline{h}^{\hat{d}\hat{a}}\underline{h}^{\hat{b}c}\underline{h}_{\hat{d}f}=\underline{h}^{\hat{a}d}\underline{h}^{\hat{b}c}\underline{h}_{df}=\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(23)}\bigr{)}}{{}^{\hat{a}\hat{b}}_{f}},$$
$$\displaystyle\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(11)}\bigr{)}}{{}^{\hat{e}}_{e}}\underline{h}^{ef}\underline{h}_{\hat{e}d}$$
$$\displaystyle=-2\xi^{c}\hat{h}_{es}\underline{h}^{s\hat{e}}\underline{h}^{ef}\underline{h}_{\hat{e}d}=-2\xi^{c}\hat{h}_{\hat{e}d}\underline{h}^{f\hat{e}}=\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(11)}\bigr{)}}{{}^{f}_{d}},$$
$$\displaystyle\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(22)}\bigr{)}}{{}^{\hat{d}}_{f}}\underline{h}_{\hat{d}\hat{c}}\underline{h}^{f\hat{e}}$$
$$\displaystyle=-\bigl{(}2\nu^{r}\nu^{s}g_{rs}\xi^{c}\xi^{\hat{d}}\xi^{d}-2\xi^{(d}h^{\hat{d})c}+2\xi^{c}h^{\hat{d}d}\bigr{)}\underline{h}_{df}\underline{h}_{\hat{d}\hat{c}}\underline{h}^{f\hat{e}}$$
$$\displaystyle=-\bigl{(}2\nu^{r}\nu^{s}g_{rs}\xi^{c}\xi^{d}\xi^{\hat{e}}-2\xi^{(\hat{e}}h^{d)c}+2\xi^{c}h^{\hat{e}d}\bigr{)}\underline{h}_{d\hat{c}}=\tensor{\bigl{(}(\acute{\mathbf{A}}^{b}\underline{h}^{c}{}_{b})_{(22)}\bigr{)}}{{}^{\hat{e}}_{\hat{c}}},$$
and the vanishing of the remaining blocks of $\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\eta_{c}$.
Next, we claim that $\acute{\mathbf{A}}^{0}$ and $\acute{\mathfrak{A}}$ satisfy an inequality of the form (4.5). To see why this is the case, we consider an element
$$\vartheta=(Y^{e},Z_{\hat{d}},X_{\hat{a}\hat{b}},W_{s})\in T\Sigma\oplus T^{\ast}\Sigma\oplus T^{0}_{2}\Sigma\oplus T^{\ast}\Sigma.$$
Then by (4.3) and the definition of $\acute{\mathbf{A}}^{0}$ from Theorem 3.2, we have
$$\displaystyle\langle\vartheta,\acute{\mathbf{A}}^{0}\vartheta\rangle_{\underline{h}}=$$
$$\displaystyle-\lambda\tensor{\underline{h}}{{}^{a}_{d}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}Y^{e}Y^{d}-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\xi^{\hat{d}}Z_{\hat{d}}Y^{e}-\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\xi^{d}Y^{e}Z_{d}$$
$$\displaystyle+\bigl{[}-\lambda h^{\hat{d}d}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{d}\xi^{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\bigr{]}Z_{\hat{d}}Z_{d}+h^{\hat{a}a}h^{\hat{b}b}X_{\hat{a}\hat{b}}X_{ab}+h^{bs}W_{s}W_{b}$$
from which we find, with the help of Lemma B.2.2, that
$$\displaystyle\langle\vartheta,\acute{\mathbf{A}}^{0}\vartheta\rangle_{\underline{h}}\leq$$
$$\displaystyle-\lambda\tensor{\underline{h}}{{}^{a}_{d}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}Y^{e}Y^{d}+\epsilon\underline{h}^{\hat{d}d}Z_{\hat{d}}Z_{d}+\frac{1}{\epsilon}|\xi|^{2}_{\underline{h}}\lambda^{2}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\nu^{c}g_{c\hat{b}}\tensor{\underline{h}}{{}^{\hat{b}}_{b}}Y^{e}Y^{b}$$
$$\displaystyle+\bigl{[}-\lambda h^{\hat{d}d}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{d}\xi^{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\bigr{]}Z_{\hat{d}}Z_{d}+h^{\hat{a}a}h^{\hat{b}b}X_{\hat{a}\hat{b}}X_{ab}+h^{bs}W_{s}W_{b}$$
(5.50)
and
$$\displaystyle\langle\vartheta,\acute{\mathbf{A}}^{0}\vartheta\rangle_{\underline{h}}\geq$$
$$\displaystyle-\lambda\tensor{\underline{h}}{{}^{a}_{d}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}Y^{e}Y^{d}-\epsilon\underline{h}^{\hat{d}d}Z_{\hat{d}}Z_{d}-\frac{1}{\epsilon}|\xi|^{2}_{\underline{h}}\lambda^{2}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\nu^{c}g_{c\hat{b}}\tensor{\underline{h}}{{}^{\hat{b}}_{b}}Y^{e}Y^{b}$$
$$\displaystyle+\bigl{[}-\lambda h^{\hat{d}d}-\lambda\nu^{r}\nu^{s}g_{rs}\xi^{d}\xi^{\hat{d}}+2\xi^{d}\xi^{\hat{d}}\bigr{]}Z_{\hat{d}}Z_{d}+h^{\hat{a}a}h^{\hat{b}b}X_{\hat{a}\hat{b}}X_{ab}+h^{bs}W_{s}W_{b}$$
(5.51)
for any $\epsilon>0$, where we have set $|\xi|^{2}_{\underline{h}}=\underline{h}_{ad}\xi^{a}\xi^{d}$.
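The passage to the bounds (5.50)–(5.51) rests on a Cauchy–Schwarz and Young-type estimate for the mixed $Y$–$Z$ terms; schematically (a sketch of the estimate that we take Lemma B.2.2 to supply),
$$2\bigl{|}\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\xi^{d}Z_{d}Y^{e}\bigr{|}\leq\epsilon\underline{h}^{\hat{d}d}Z_{\hat{d}}Z_{d}+\frac{1}{\epsilon}|\xi|^{2}_{\underline{h}}\lambda^{2}\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\nu^{c}g_{c\hat{b}}\tensor{\underline{h}}{{}^{\hat{b}}_{b}}Y^{e}Y^{b},$$
which follows by first applying $(\xi^{d}Z_{d})^{2}\leq|\xi|^{2}_{\underline{h}}\underline{h}^{\hat{d}d}Z_{\hat{d}}Z_{d}$ and then the elementary inequality $2|ab|\leq\epsilon a^{2}+\epsilon^{-1}b^{2}$. The choice $\epsilon=|\xi|_{\underline{h}}$ made below balances the two terms on the right hand side.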
On the other hand,
$$\displaystyle\langle\vartheta,\acute{\mathfrak{A}}\vartheta\rangle_{\underline{h}}=$$
$$\displaystyle-(n-3)\lambda\tensor{\underline{h}}{{}^{\hat{a}}_{a}}g_{s\hat{a}}\tensor{\underline{h}}{{}^{s}_{e}}Y^{e}Y^{a}-(n-3)\lambda h^{\hat{d}d}Z_{\hat{d}}Z_{d}+h^{\hat{a}a}h^{\hat{b}b}X_{\hat{a}\hat{b}}X_{ab}+\frac{1}{2}h^{bs}W_{s}W_{b}.$$
(5.52)
By setting $\epsilon=|\xi|_{\underline{h}}$ and
taking $R>0$ small enough, it then follows from (5.50)–(5.52) that there exist constants $\kappa,\gamma_{1},\gamma_{2}>0$ such that the inequality (4.5) holds.
Finally, we note that it is
straightforward to check that the operators $\acute{\mathbf{A}}^{0}$, $\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\eta_{c}$ and $\acute{\mathfrak{A}}$ verify the remaining stated properties from the coefficient assumptions $(2)$ and $(4)$. This completes the verification of the coefficient assumptions $(2)$ and $(4)$.
Assumptions $(1)$ & $(5)$: With our choice of $\acute{\mathbb{P}}=\mathds{1}$ and hence $\acute{\mathbb{P}}^{\perp}=0$, we immediately have that $[\acute{\mathbb{P}},\,\acute{\mathfrak{A}}]=0$, and the only non-vanishing relation involving $\Theta^{\prime}(0)$ is the first one, which reads
$$\langle v,\Theta^{\prime}(0)v\rangle_{\underline{h}}=\mathcal{O}(\theta v\otimes v+|t|^{-\frac{1}{2}}\beta_{0}v\otimes v+|t|^{-1}\beta_{1}v\otimes v).$$
It is straightforward to verify that this condition on $\Theta^{\prime}(0)$ is satisfied for an appropriate choice of constants $\theta$, $\beta_{0}$ and $\beta_{1}$. Since it is also straightforward to verify the other stated properties from the coefficient assumptions $(1)$ and $(5)$, we omit the details. This completes the verification of the coefficient assumptions $(1)$ and $(5)$.
Assumption $(3)$: Noticing that the maps $\acute{G}_{0}$ and $\acute{G}_{1}$ from the expansion of
$\acute{G}$ given in Theorem 3.2 are both analytic for $(t,\mathbf{U})\in\bigl{(}-\iota,\frac{\pi}{H}\bigr{)}\times B_{R}(0)$ for $\iota,R>0$ small enough and vanish for $\mathbf{U}=0$, it follows immediately that there exist constants $\lambda_{\ell}\geq 0$, $\ell=1,2$, such that all the conditions on the map $\acute{G}$ from the coefficient assumption $(3)$ are satisfied. This completes the verification of the coefficient assumption $(3)$.
$3.$ The combined system. The above arguments yield positive constants $R,\kappa,\gamma_{1},\gamma_{2}>0$, and non-negative $\lambda_{\mathcal{l}},\theta,\beta_{\mathcal{k}}\geq 0$, $\mathcal{l}=1,2$ and $\mathcal{k}=0,\cdots,7$, such that the coefficients of the Fuchsian system (5.1) verify the assumptions $(1)$–$(5)$ from §4.2.
To complete the proof, we make a number of observations regarding the selection of some of the parameters, beginning with the observation that the term from the Fuchsian system (5.1) that corresponds to $G_{2}$ vanishes, which allows us to take $\lambda_{3}=0$. Next, we make the following observations with regard to the parameters $\beta_{2k+1}$, $k=0,1,2,3$:
(1)
The $1/t$ singular term from $\langle v,\mathbb{P}(\pi(v))\Theta^{\prime}(0)\mathbb{P}(\pi(v))v\rangle_{\underline{h}}$ is of the form $\mathcal{O}(C_{1}|t|^{-1}\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v)$ for some constant $C_{1}>0$. Thus for any $\beta_{1}>0$, we can, by choosing $R>0$ sufficiently small, ensure that $C_{1}R\leq\beta_{1}$.
(2)
The $1/t$ singular term from $\langle v,\mathbb{P}(\pi(v))\Theta^{\prime}(0)\mathbb{P}^{\perp}(\pi(v))v\rangle_{\underline{h}}$ is of the form $\mathcal{O}(C_{3}|t|^{-1}\mathbb{P}(\pi(v))v\otimes\mathbb{P}(\pi(v))v\otimes\mathbb{P}^{\perp}(\pi(v))v)$ for some constant $C_{3}>0$. As a consequence, for any $\beta_{3}>0$, we can arrange that $C_{3}R^{2}\leq\beta_{3}$ by choosing $R>0$ small. By similar considerations, for any choice of $\beta_{5}>0$, we can guarantee that $C_{5}R^{2}\leq\beta_{5}$ holds by choosing $R>0$ sufficiently small.
(3)
Noting that the term $\langle v,\mathbb{P}^{\perp}(\pi(v))\Theta^{\prime}(0)\mathbb{P}^{\perp}(\pi(v))v\rangle_{\underline{h}}$ only involves the Einstein components, similar arguments to those employed in [17, §7.1] (see also [16]) imply that this term contains no $1/t$ singularity, and as a consequence, we can take $\beta_{7}=0$.
From these considerations and the fact that the parameters $\beta_{k}$ can be chosen independently of $\kappa$ and $\gamma_{1}$, which are both positive, it is clear that we can ensure that the inequality (5.49) holds by choosing $R>0$ sufficiently small. This completes the proof.
∎
Remark 5.2.1.
Although we have established that the Fuchsian system (5.1) satisfies all of the coefficient assumptions from §4.2 for some choice of constants, we have not calculated explicit values for these constants. This is because explicit values are only required for determining the decay rates, which we will not make use of in any subsequent arguments.
Now that we have verified that the Fuchsian system (5.1) satisfies all of the coefficient assumptions from §4.2, we can, after making the time transformation $t\longmapsto-t$, apply Theorem 4.1 to obtain the global existence result contained in the following theorem.
Theorem 5.1.
Suppose $s\in\mathbb{Z}_{>\frac{n+1}{2}}$. Then there exist constants $\delta_{0}>0$ and $C>0$ such that if the initial data $\widehat{\mathbf{U}}_{0}$
satisfies $\|\widehat{\mathbf{U}}_{0}\|_{H^{s}}\leq\delta$ for any $\delta\in(0,\delta_{0}]$, then there exists a unique solution (recall that $\Sigma=\mathbb{S}^{n-1}$ and that the initial conformal time $t=\frac{\pi}{2H}$ corresponds to the physical time $\tau=0$)
$$\displaystyle\widehat{\mathbf{U}}\in C^{0}\Bigl{(}\Bigl{(}0,\frac{\pi}{2H}\Bigr{]},H^{s}(\Sigma;V)\Bigr{)}\cap C^{1}\Bigl{(}\Bigl{(}0,\frac{\pi}{2H}\Bigr{]},H^{s-1}(\Sigma;V)\Bigr{)}\cap L^{\infty}\Bigl{(}\Bigl{(}0,\frac{\pi}{2H}\Bigr{]},H^{s}(\Sigma;V)\Bigr{)}$$
to the Fuchsian equation (5.1) on $\bigl{(}0,\frac{\pi}{2H}\bigr{]}\times\Sigma$ that satisfies $\widehat{\mathbf{U}}|_{t=\frac{\pi}{2H}}=\widehat{\mathbf{U}}_{0}$.
Moreover, the solution $\widehat{\mathbf{U}}$ satisfies the energy estimate
$$\|\widehat{\mathbf{U}}(t)\|_{H^{s}}^{2}+\int_{t}^{\frac{\pi}{2H}}\frac{1}{\bar{t}}\|\widehat{\mathbb{P}}\widehat{\mathbf{U}}(\bar{t})\|^{2}_{H^{s}}d\bar{t}\leq C\delta^{2}$$
(5.53)
for all $t\in\bigl{(}0,\frac{\pi}{2H}\bigr{]}$.
5.3. Improved estimates for the conformal Yang–Mills component of $\widehat{\mathbf{U}}$
The conformal Yang–Mills fields correspond to the last four entries $(\mathcal{E}^{e},E_{d},H_{ab},\bar{A}_{s})$ of the vector $\widehat{\mathbf{U}}$ defined by (5.24). Given a solution $\widehat{\mathbf{U}}$ from Theorem 5.1 of the Fuchsian system (5.1), we establish improved bounds on the fields $(\mathcal{E}^{e},E_{d},H_{ab},\bar{A}_{s})$ by showing that the renormalized conformal Yang–Mills fields
$$\displaystyle\begin{pmatrix}\mathring{\mathcal{E}}^{e},\mathring{E}_{d},\mathring{H}_{ab},\mathring{A}_{s}\end{pmatrix}^{\text{tr}}=\operatorname{diag}\{t^{-1},t^{-1},t^{-1},t^{-\frac{1}{2}}\}\begin{pmatrix}\mathcal{E}^{e},E_{d},H_{ab},\bar{A}_{s}\end{pmatrix}^{\text{tr}},$$
(5.56)
remain bounded. The precise statement of this result is given in the following corollary.
Corollary 5.3.1.
Suppose $\widehat{\mathbf{U}}$ is a solution of the Fuchsian system (5.1) from Theorem 5.1, and that the initial data and the constants $s$, $\delta$ and $\delta_{0}$ are as given in that theorem.
Then there exists a constant $C>0$, independent of $\delta\in(0,\delta_{0}]$, such that the renormalized conformal Yang–Mills fields (5.56) are uniformly bounded by
$$\|\mathring{A}_{a}(t)\|_{H^{s}}+\|\mathring{E}_{a}(t)\|_{H^{s}}+\|\mathring{H}_{ab}(t)\|_{H^{s}}\leq C\delta$$
for all $t\in\bigl{(}0,\frac{\pi}{2H}\bigr{]}$.
Remark 5.3.1.
It is not difficult to verify that the renormalized conformal Yang–Mills fields $\mathring{E}_{d}$, $\mathring{H}_{ab}$, and $\mathring{A}_{s}$ are quantitatively equivalent to the physical Yang–Mills variables $\tilde{F}_{ab}$ and $\tilde{A}_{s}$. Because of this, Corollary 5.3.1 yields uniform bounds for the physical Yang–Mills variables.
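In particular, since (5.56) gives $\bar{A}_{s}=t^{\frac{1}{2}}\mathring{A}_{s}$, $E_{d}=t\mathring{E}_{d}$ and $H_{ab}=t\mathring{H}_{ab}$, the uniform bound of Corollary 5.3.1 can equivalently be read as decay estimates for the unrenormalized conformal fields:
$$\|\bar{A}_{a}(t)\|_{H^{s}}\leq C\delta\,t^{\frac{1}{2}},\qquad\|E_{d}(t)\|_{H^{s}}\leq C\delta\,t,\qquad\|H_{ab}(t)\|_{H^{s}}\leq C\delta\,t,$$
for all $t\in\bigl{(}0,\frac{\pi}{2H}\bigr{]}$.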
Proof of Corollary 5.3.1.
We begin by noting from (1.46) and (5.24) that $\mathbf{U}$ and $\widehat{\mathbf{U}}$ uniquely determine each other and have comparable norms. As a consequence, in the following arguments, we do not distinguish between the two. Next, by substituting (5.56) into (3.137), we get
$$\displaystyle-\acute{\mathbf{A}}^{0}\operatorname{diag}\{t,t,t,\sqrt{t}\}\nu^{c}\underline{\nabla}_{c}\mathring{\mathbf{V}}+\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\operatorname{diag}\{t,t,t,\sqrt{t}\}\underline{\nabla}_{c}\mathring{\mathbf{V}}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{Ht}\acute{\mathcal{B}}\operatorname{diag}\{t,t,t,\sqrt{t}\}\mathring{\mathbf{V}}+\acute{G}(t,\mathbf{U})-\frac{1}{H}\acute{\mathbf{A}}^{0}\operatorname{diag}\Bigl{\{}1,1,1,\frac{1}{2\sqrt{t}}\Bigr{\}}\mathring{\mathbf{V}}$$
(5.57)
where we have set
$$\mathring{\mathbf{V}}:=\begin{pmatrix}\mathring{\mathcal{E}}^{e},\mathring{E}_{\hat{d}},\mathring{H}_{\hat{a}\hat{b}},\mathring{A}_{s}\end{pmatrix}^{\text{tr}}.$$
Then multiplying (5.57) on the left by $(\operatorname{diag}\{t,t,t,\sqrt{t}\})^{-1}$ leads to
$$\displaystyle-\acute{\mathbf{A}}^{0}\nu^{c}\underline{\nabla}_{c}\mathring{\mathbf{V}}+\acute{\mathbf{A}}^{b}\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{c}\mathring{\mathbf{V}}=\frac{1}{Ht}\mathring{\mathbf{B}}\mathring{\mathbf{V}}+\mathfrak{S}(t,\mathring{\mathbf{U}})$$
(5.58)
where
$$\displaystyle\mathbf{\mathring{U}}$$
$$\displaystyle=(m,\,p^{a},\,m_{d},\,\tensor{p}{{}^{a}_{d}},\,s^{ab},\,\tensor{s}{{}^{ab}_{d}},\,s,\,s_{d},\,\mathring{\mathbf{V}})^{\text{tr}},$$
$$\displaystyle\mathring{\mathbf{B}}$$
$$\displaystyle=\begin{pmatrix}-(n-4)\lambda\underline{h}^{\hat{b}\hat{e}}\tensor{\underline{h}}{{}^{a}_{\hat{b}}}g_{ab}\tensor{\underline{h}}{{}^{b}_{e}}&0&0&0\\
0&-(n-4)\lambda h^{\hat{d}d}\underline{h}_{df}&0&0\\
0&0&0&0\\
0&0&0&0\end{pmatrix},$$
$$\displaystyle\mathfrak{S}(t,\mathbf{\mathring{U}})$$
$$\displaystyle=\begin{pmatrix}\mathfrak{S}^{\hat{e}}_{1}(t,\mathbf{\mathring{U}}),\mathfrak{S}_{2f}(t,\mathbf{\mathring{U}}),\mathfrak{S}_{3\bar{a}\bar{b}}(t,\mathbf{\mathring{U}}),\mathfrak{S}_{4o}(t,\mathbf{\mathring{U}})\end{pmatrix}^{\text{tr}}$$
and the components of $\mathfrak{S}$ are given by
$$\displaystyle\mathfrak{S}^{\hat{e}}_{1}(t,\mathbf{\mathring{U}})=$$
$$\displaystyle H^{-1}\breve{\mathtt{B}}\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s\hat{e}}}p^{\hat{d}}\mathring{E}_{\hat{d}}+t^{-1}\mathfrak{D}^{\hat{e}}_{1}(t,\mathbf{U})-t^{-\frac{3}{2}}\underline{h}^{\hat{e}e}\Xi_{1e}(E_{d},H_{ab},\bar{A}_{s}),$$
$$\displaystyle\mathfrak{S}_{2f}(t,\mathbf{\mathring{U}})=$$
$$\displaystyle H^{-1}\lambda\nu^{r}g_{rs}\tensor{\underline{h}}{{}^{s}_{e}}\underline{h}_{df}\breve{\mathtt{B}}p^{d}\mathring{\mathcal{E}}^{e}-H^{-1}\breve{\mathtt{B}}^{2}t\bigl{[}-\lambda\nu^{r}\nu^{s}g_{rs}p^{d}p^{\hat{d}}+2p^{d}p^{\hat{d}}\bigr{]}\underline{h}_{df}\mathring{E}_{\hat{d}}$$
$$\displaystyle+t^{-1}\mathfrak{D}_{2f}(t,\mathbf{U})+t^{-\frac{3}{2}}\underline{h}_{df}h^{d\hat{a}}\Xi_{1\hat{a}}(E_{d},H_{ab},\bar{A}_{s}),$$
$$\displaystyle\mathfrak{S}_{3\bar{a}\bar{b}}(t,\mathbf{\mathring{U}})=$$
$$\displaystyle t^{-1}\mathfrak{D}_{3\bar{a}\bar{b}}(t,\mathbf{U})-t^{-\frac{3}{2}}\underline{h}_{a\bar{a}}\underline{h}_{b\bar{b}}h^{a\hat{a}}\Xi_{2\hat{a}}^{b}(E_{d},H_{ab},\bar{A}_{s}),$$
$$\displaystyle\mathfrak{S}_{4o}(t,\mathbf{\mathring{U}})=$$
$$\displaystyle t^{-\frac{1}{2}}\mathfrak{D}_{4o}(t,\mathbf{U})+t^{-1}\underline{h}_{or}h^{ra}\Xi_{3a}(E_{d},H_{ab},\bar{A}_{s}).$$
In the above definitions, we note that the maps $\mathfrak{S}^{\hat{e}}_{1}(t,\mathbf{\mathring{U}})$, $\mathfrak{S}_{2f}(t,\mathbf{\mathring{U}})$, $\mathfrak{S}_{3\bar{a}\bar{b}}(t,\mathbf{\mathring{U}})$ and $\mathfrak{S}_{4o}(t,\mathbf{\mathring{U}})$, as well as the maps $\mathfrak{D}_{i}$, $i=1,\cdots,4$, and $\Xi_{j}$, $j=1,2,3$, are expressed in terms of the variables $(t,\mathbf{U})$ and $(E_{d},H_{ab},\bar{A}_{s})$. This makes sense since, for $t>0$, $\mathbf{U}$ and $\mathbf{\mathring{U}}$ determine each other, as can be seen from (1.46), (5.56) and the definition of $\mathbf{\mathring{U}}$ given above.
We now claim that the maps $\mathfrak{S}^{\hat{e}}_{1}(t,\mathbf{\mathring{U}})$, $\mathfrak{S}_{2f}(t,\mathbf{\mathring{U}})$, $\mathfrak{S}_{3\bar{a}\bar{b}}(t,\mathbf{\mathring{U}})$ and $\mathfrak{S}_{4o}(t,\mathbf{\mathring{U}})$ are analytic for $(t,\mathbf{\mathring{U}})\in(-\iota,\frac{\pi}{H})\times B_{R}(0)$ for $\iota,R>0$ sufficiently small and vanish for $\mathring{\mathbf{U}}=0$.
To see why this is the case,
we observe from Lemmas 3.2.1–3.2.2 and Theorem 3.2 that each term in $\mathfrak{D}_{i}(t,\mathbf{U})$, $i=1,2,3$, involves one of the factors $E_{a}$, $H_{bc}$, $[\bar{A}_{a},E_{b}]$ and $[\bar{A}_{a},H_{bc}]$, while each term in $\mathfrak{D}_{4}(t,\mathbf{U})$ involves either $\bar{A}_{a}$ or $E_{b}$ as a factor. From this observation, it can then be verified via a direct calculation that $t^{-1}\mathfrak{D}_{i}(t,\mathbf{U})$, $i=1,2,3$, and $t^{-\frac{1}{2}}\mathfrak{D}_{4}(t,\mathbf{U})$ are analytic in $(t,\,\mathbf{\mathring{U}})$. Analogously, each term in $\Xi_{i}$, $i=1,2$, contains one of the factors $[\bar{A}_{a},E_{b}]$ and $[\bar{A}_{a},H_{bc}]$, while $\Xi_{3a}=E_{a}$; consequently, $t^{-\frac{3}{2}}\Xi_{i}$, $i=1,2$, and $t^{-1}\Xi_{3}$ are analytic in $(t,\,\mathbf{\mathring{U}})$. For example, a calculation shows that
$$\displaystyle t^{-\frac{3}{2}}\underline{h}_{df}h^{d\hat{a}}\Xi_{1\hat{a}}=t^{-\frac{3}{2}}\underline{h}_{df}h^{d\hat{a}}h^{\hat{d}c}[\bar{A}_{c},H_{\hat{d}\hat{a}}]=$$
$$\displaystyle\underline{h}_{df}h^{d\hat{a}}h^{\hat{d}c}[\mathring{A}_{c},\mathring{H}_{\hat{d}\hat{a}}].$$
Using these types of arguments, the analytic dependence of the maps $\mathfrak{S}^{\hat{e}}_{1}(t,\mathbf{\mathring{U}})$, $\mathfrak{S}_{2f}(t,\mathbf{\mathring{U}})$, $\mathfrak{S}_{3\bar{a}\bar{b}}(t,\mathbf{\mathring{U}})$ and $\mathfrak{S}_{4o}(t,\mathbf{\mathring{U}})$ of the variables $(t,\mathbf{\mathring{U}})$ can be verified by direct calculations.
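The same bookkeeping applies to the $\mathfrak{D}$-terms. Schematically (suppressing the precise prefactors, which depend on the definitions of §3), a term of $\mathfrak{D}_{4}$ containing $E_{b}$ as a factor behaves as
$$t^{-\frac{1}{2}}(\cdots)E_{b}=t^{-\frac{1}{2}}(\cdots)t\mathring{E}_{b}=t^{\frac{1}{2}}(\cdots)\mathring{E}_{b},$$
while a term containing $\bar{A}_{a}$ behaves as $t^{-\frac{1}{2}}(\cdots)\bar{A}_{a}=(\cdots)\mathring{A}_{a}$; both expressions are manifestly analytic in $(t,\mathbf{\mathring{U}})$ down to $t=0$.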
It then follows from the above observations, the arguments from the proof of Lemma 5.2.1, and the fact that $\widehat{\mathbf{U}}$ is a solution of the Fuchsian system (5.1) from Theorem 5.1 that, for $R>0$ chosen small enough, there exist positive constants $\kappa,\gamma_{1},\gamma_{2}>0$ and non-negative constants $\lambda_{\mathcal{l}},\theta,\beta_{\mathcal{k}}\geq 0$, $\mathcal{l}=1,2,3$ and $\mathcal{k}=0,\ldots,7$, such that the Fuchsian system (5.58) satisfies the assumptions $(1)$–$(5)$ from §4.2 and the inequality (5.49). Moreover, we observe from (5.56) that
$$\displaystyle\mathring{\mathbf{V}}|_{t=\frac{\pi}{2H}}=\operatorname{diag}\Bigl{\{}\frac{2H}{\pi},\frac{2H}{\pi},\frac{2H}{\pi},\sqrt{\frac{2H}{\pi}}\Bigr{\}}\begin{pmatrix}\mathcal{E}^{e},E_{d},H_{ab},\bar{A}_{s}\end{pmatrix}^{\text{tr}}|_{t=\frac{\pi}{2H}}.$$
Since the $H^{s}$ norm of the right hand side of the above expression is bounded by $\|\widehat{\mathbf{U}}_{0}\|_{H^{s}}$, which by assumption satisfies
$\|\widehat{\mathbf{U}}_{0}\|_{H^{s}}\leq\delta$, we have that $\|\mathring{\mathbf{V}}(\frac{\pi}{2H})\|_{H^{s}}\leq C\delta$. We therefore conclude via an application of Theorem 4.1 that
$\mathring{\mathbf{V}}$ is bounded by
$\|\mathring{\mathbf{V}}(t)\|_{H^{s}}\leq C\delta$ for all $t\in\bigl{(}0,\frac{\pi}{2H}\bigr{]}$,
which completes the proof.
∎
6. Gauge transformations
In the proof of the main result, Theorem 1.1, we will need to perform a gauge transformation in order to change from a formulation of the Einstein–Yang–Mills equations that is useful for establishing the local-in-time existence of solutions to a formulation that is suited to establishing global bounds. We collect together in this section the technical results that are needed to establish that this gauge transformation is well defined.
Lemma 6.0.1.
Suppose $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ is a solution of the Einstein–Yang–Mills equations (1.2)–(1.4) in the temporal gauge (1.17),
$\tilde{\mathfrak{u}}:\widetilde{\mathcal{M}}\rightarrow G$ solves the differential equation
$$\displaystyle{\partial_{\tau}}\tilde{\mathfrak{u}}=-\tilde{A}^{\star}_{0}\tilde{\mathfrak{u}},$$
(6.1)
and let
$$\displaystyle\tilde{A}_{a}=\tilde{\mathfrak{u}}^{-1}\tilde{A}^{\star}_{a}\tilde{\mathfrak{u}}+\tilde{\mathfrak{u}}^{-1}(d\tilde{\mathfrak{u}})_{a}\quad(i.e.,\tilde{F}_{ab}=\tilde{\mathfrak{u}}^{-1}\tilde{F}^{\star}_{ab}\tilde{\mathfrak{u}}).$$
(6.2)
Then $(\tilde{g}^{ab},\,\tilde{A}_{a})$
determines a solution of Einstein–Yang–Mills equations (1.2)–(1.4) in the temporal gauge $\tilde{A}_{a}\tilde{\nu}^{a}=0$ where $\tilde{\nu}^{a}=\tilde{\underline{g}}^{ab}(d\tau)_{b}$.
Proof.
The proof is similar to [26, Theorem $2$] (see also [5, §$6$]), and follows from the observation that the condition
$$\displaystyle\tilde{\nu}^{a}(d\tilde{\mathfrak{u}})_{a}=-\tilde{\nu}^{a}\tilde{A}^{\star}_{a}\tilde{\mathfrak{u}}$$
(6.3)
is equivalent to the differential equation (6.1).
Since, by assumption, we have a solution $\tilde{\mathfrak{u}}$ to this differential equation, we can use it to define a gauge transformation under which the gauge potential transforms according to
$$\tilde{A}_{a}=\tilde{\mathfrak{u}}^{-1}\tilde{A}^{\star}_{a}\tilde{\mathfrak{u}}+\tilde{\mathfrak{u}}^{-1}(d\tilde{\mathfrak{u}})_{a}.$$
Contracting this with $\tilde{\nu}^{a}$, we find by (6.3) that $\tilde{\nu}^{a}\tilde{A}_{a}=0$.
Moreover, since solutions of the Yang–Mills equations get mapped back into solutions of the Yang–Mills equations under gauge transformations and the Yang–Mills curvature transforms as
$\tilde{F}_{ab}=\tilde{\mathfrak{u}}^{-1}\tilde{F}^{\star}_{ab}\tilde{\mathfrak{u}}$,
the proof follows.
∎
Lemma 6.0.2.
Suppose $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ is a solution of the Einstein–Yang–Mills equations (1.2)–(1.4) that satisfies the temporal and wave gauge conditions given by (1.17) and (1.18), respectively,
$\tilde{\mathfrak{u}}$ solves the differential equation (6.1), and let
$$\displaystyle g^{ab}=e^{2\Psi}\tilde{g}^{ab},\quad F_{ab}=e^{-\Psi}\tilde{F}_{ab}{\quad\text{and}\quad}A_{a}=e^{-\frac{\Psi}{2}}\tilde{A}_{a},$$
(6.4)
where $\tilde{F}_{ab}$ and $\tilde{A}_{a}$ are defined by (6.2). Then $(g^{ab},A_{a})$ determines a solution of the reduced conformal Einstein–Yang–Mills equations, consisting of (3.1.1) and (3.40)–(3.41), that satisfies the temporal gauge $A_{a}\nu^{a}=0$ and wave gauge $Z^{a}=0$ conditions.
Proof.
By Lemma 6.0.1, we know that $(\tilde{g}^{ab},\tilde{A}_{a})$ determines a solution of the Einstein–Yang–Mills equations (1.2)–(1.4) that satisfies the temporal gauge $\tilde{A}_{a}\tilde{\nu}^{a}=0$. Recalling the definitions of $\tilde{\nu}_{a}$ and $\nu_{a}$ given by (1.9) and (1.11), respectively, we observe that
$$\displaystyle\tilde{\nu}_{a}=-\frac{H}{\sin(Ht)}\nu_{a}=-e^{\Psi}\nu_{a}{\quad\text{and}\quad}\tilde{\nu}^{b}=-e^{-\Psi}\nu^{b}.$$
With the help of these relations, it then follows from (6.4) that
$0=\tilde{A}_{a}\tilde{\nu}^{a}=-e^{\frac{\Psi}{2}}A_{a}e^{-\Psi}\nu^{a}$, and hence, that $A_{a}\nu^{a}=0$. This establishes that the temporal gauge condition is satisfied.
Next, setting
$$\widehat{X}^{a}_{ef}=-\frac{1}{2}\bigl{(}\tilde{\underline{g}}_{bf}\underline{\nabla}_{e}\tilde{\underline{g}}^{ab}+\tilde{\underline{g}}_{eb}\underline{\nabla}_{f}\tilde{\underline{g}}^{ab}-\tilde{\underline{g}}^{ac}\tilde{\underline{g}}_{ed}\tilde{\underline{g}}_{fb}\underline{\nabla}_{c}\tilde{\underline{g}}^{bd}\bigr{)}=\tensor{\delta}{{}^{a}_{f}}{\partial_{e}}\Psi+\tensor{\delta}{{}^{a}_{e}}{\partial_{f}}\Psi-\underline{g}^{ac}\underline{g}_{fe}{\partial_{c}}\Psi,$$
(6.5)
where in obtaining the second equality we have used (1.10), (1.11) and the fact that $\underline{\nabla}$ is the Levi-Civita connection of $\underline{g}_{ab}$,
we observe that
$$g^{fe}\widehat{X}^{a}_{ef}=2g^{ac}{\partial_{c}}\Psi-g^{fe}\underline{g}^{ac}\underline{g}_{fe}{\partial_{c}}\Psi.$$
(6.6)
We also observe, by expressing $\tilde{X}^{a}$ in terms of the conformal metric using (6.4) and employing the relations (3.4)–(3.5) and (6.6), that
$$\displaystyle\tilde{X}^{a}=$$
$$\displaystyle-\tilde{\underline{\nabla}}_{e}\tilde{g}^{ae}+\frac{1}{2}\tilde{g}^{ae}\tilde{g}_{df}\tilde{\underline{\nabla}}_{e}\tilde{g}^{df}$$
$$\displaystyle=$$
$$\displaystyle-e^{-2\Psi}\tilde{\underline{\nabla}}_{e}g^{ae}+\frac{1}{2}e^{-2\Psi}g^{ae}g_{df}\tilde{\underline{\nabla}}_{e}g^{df}+(2-n)e^{-2\Psi}g^{ae}{\partial_{e}}\Psi$$
$$\displaystyle=$$
$$\displaystyle-e^{-2\Psi}(\underline{\nabla}_{e}g^{ae}+\widehat{X}^{a}_{ef}g^{fe}+\widehat{X}^{e}_{ef}g^{af})+\frac{1}{2}e^{-2\Psi}g^{ae}g_{df}(\underline{\nabla}_{e}g^{df}+\widehat{X}^{d}_{ec}g^{cf}+\widehat{X}^{f}_{ec}g^{dc})$$
$$\displaystyle+(2-n)e^{-2\Psi}g^{ae}{\partial_{e}}\Psi$$
$$\displaystyle=$$
$$\displaystyle-e^{-2\Psi}(\underline{\nabla}_{e}g^{ae}+\widehat{X}^{a}_{ef}g^{fe})+\frac{1}{2}e^{-2\Psi}g^{ae}g_{df}\underline{\nabla}_{e}g^{df}+(2-n)e^{-2\Psi}g^{ae}{\partial_{e}}\Psi$$
$$\displaystyle=$$
$$\displaystyle e^{-2\Psi}X^{a}-e^{-2\Psi}\widehat{X}^{a}_{ef}g^{fe}+(2-n)e^{-2\Psi}g^{ae}{\partial_{e}}\Psi$$
$$\displaystyle=$$
$$\displaystyle e^{-2\Psi}Z^{a}+2e^{-2\Psi}(\underline{g}^{ac}-g^{ac}){\partial_{c}}\Psi-ne^{-2\Psi}(g^{ac}-\underline{g}^{ac}){\partial_{c}}\Psi+e^{-2\Psi}(g^{fe}-\underline{g}^{fe})\underline{g}^{ac}\underline{g}_{fe}{\partial_{c}}\Psi.$$
Since the wave gauge condition (1.18) is satisfied by assumption, we conclude that the same is true for the wave gauge condition $Z^{a}=0$.
Now, to complete the proof, we observe that, with the help of Lemma 3.1.1, it is straightforward to verify via a direct calculation that $(g^{ab},A_{a})$, which is determined by (6.4), solves the reduced conformal Einstein–Yang–Mills equations consisting of (3.1.1) and (3.40)–(3.41).
∎
In the next proposition, we establish quantitative bounds on the gauge transformation $\tilde{\mathfrak{u}}$ from Lemma 6.0.2.
Proposition 6.0.3.
Suppose $(\tilde{g}^{ab},\tilde{A}^{\star}_{a})$ is a solution of the Einstein–Yang–Mills equations that satisfies the temporal and wave gauge conditions defined by (1.17) and (1.18), respectively, and that $\tilde{\mathfrak{u}}$ satisfies the differential equation (6.1) and the initial condition $\tilde{\mathfrak{u}}|_{\tau=0}=\mathds{1}$. Suppose further that the corresponding solution $(g^{ab},A_{a})$ of the reduced conformal Einstein–Yang–Mills equations from Lemma 6.0.2, which satisfies both the temporal gauge $A_{a}\nu^{a}=0$ and wave gauge $Z^{a}=0$ conditions, yields a solution
$$\displaystyle\widehat{\mathbf{U}}\in C^{0}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s}(\Sigma;V)\Bigr{)}\cap C^{1}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s-1}(\Sigma;V)\Bigr{)}$$
(6.7)
of the Fuchsian system (5.1) with $s>\frac{n+1}{2}$ on
$\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}\times\Sigma$ for some $t_{*}\in\bigl{[}0,\frac{\pi}{2H}\bigr{)}$ that also satisfies
$\|\widehat{\mathbf{U}}|_{t=\frac{\pi}{2H}}\|\leq\delta$ where $\delta\in(0,\delta_{0}]$ and $\delta_{0}>0$ is as given in Theorem 5.1. Then
$$\|\tilde{A}_{a}|_{\tau=0}\|_{H^{s}}=\|\tilde{A}^{\star}_{a}|_{\tau=0}\|_{H^{s}},$$
(6.8)
and there exists a constant $C>0$, independent of $\delta\in(0,\delta_{0}]$ and $t_{*}\in\bigl{[}0,\frac{\pi}{2H}\bigr{)}$, such that $\tilde{\mathfrak{u}}(\tau,x)$, $\mathfrak{u}(t,x)=\tilde{\mathfrak{u}}(\tau(t),x)$, $\tilde{A}_{a}^{\star}(\tau,x)$ and $\tilde{A}_{a}(\tau,x)$,
where $\tau$ and $t$ are related via (1.8), are bounded by
$$\displaystyle\|\tilde{\mathfrak{u}}(\tau(t))\|_{H^{s}}=\|\mathfrak{u}(t)\|_{H^{s}}\leq C,$$
(6.9)
$$\displaystyle\|\tilde{\mathfrak{u}}^{-1}(\tau(t))\|_{H^{s}}=\|\mathfrak{u}^{-1}(t)\|_{H^{s}}\leq C,$$
(6.10)
$$\displaystyle\|\tensor{\tilde{\underline{h}}}{{}^{c}_{b}}(d\tilde{\mathfrak{u}})_{c}(\tau(t))\|_{H^{s}}=\|\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}(t)\|_{H^{s}}\leq C\delta,$$
(6.11)
$$\displaystyle\|\nu^{a}(d\tilde{\mathfrak{u}})_{a}(\tau(t))\|_{H^{s}}=\|\nu^{a}(d\mathfrak{u})_{a}(t)\|_{H^{s}}\leq C\delta^{2},$$
(6.12)
and
$$\displaystyle\|\tilde{A}^{\star}_{a}(\tau(t))\|_{H^{s}}\leq C\|\tilde{A}_{a}(\tau(t))\|_{H^{s}}+C\delta$$
(6.13)
for all $t\in\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}$.
Moreover, if
$$\tilde{g}^{ab}\in\bigcap_{\ell=0}^{s}C^{\ell}([0,\tau_{*}),H^{s-\ell+1}(\Sigma)){\quad\text{and}\quad}\tilde{A}^{\star}_{a}\in\bigcap_{\ell=0}^{s}C^{\ell}([0,\tau_{*}),H^{s-\ell}(\Sigma)),$$
(6.14)
then $(g^{ab},A_{a})$ satisfies
$$g^{ab}\in\bigcap_{\ell=0}^{s}C^{\ell}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s-\ell+1}(\Sigma)\Bigr{)}{\quad\text{and}\quad}A_{a}\in\bigcap_{\ell=0}^{s}C^{\ell}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s-\ell}(\Sigma)\Bigr{)},$$
(6.15)
where $t_{*}=\frac{1}{H}\left(\frac{\pi}{2}-\operatorname{gd}(H^{-1}\tau_{*})\right)$; cf. (1.7).
Proof.
Before commencing with the proof, we remark that, in the following, the constant $C>0$, which can change from line to line, is independent of $\delta\in(0,\delta_{0}]$ and $t_{*}\in\bigl{[}0,\frac{\pi}{2H}\bigr{)}$.
$(1)$ Bounds on $\widehat{\mathbf{U}}$ and the renormalized conformal Yang–Mills fields: Since, by assumption, the solution (6.7) of the Fuchsian system (5.1) on $\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}\times\Sigma$ satisfies
$\|\widehat{\mathbf{U}}|_{t=\frac{\pi}{2H}}\|\leq\delta$ where $\delta\in(0,\delta_{0}]$, we know from the uniqueness of solutions to (5.1) that it must agree, where defined, with the solution to (5.1) from Theorem 5.1 that is generated from the same initial data. In particular, this implies that $\widehat{\mathbf{U}}$ is bounded by
$$\|\widehat{\mathbf{U}}(t)\|_{H^{s}}\leq C\delta,\quad t_{*}<t\leq\frac{\pi}{2H}.$$
(6.16)
This, in turn, implies via Corollary 5.3.1 and the uniqueness of solutions $(\widehat{\mathbf{U}},\mathring{\mathbf{V}})$ to the Fuchsian system comprised of (5.1) and (5.58) that the renormalized conformal Yang–Mills fields defined by (5.56) are bounded by
$$\|\mathring{A}_{a}(t)\|_{H^{s}}+\|\mathring{E}_{a}(t)\|_{H^{s}}+\|\mathring{H}_{ab}(t)\|_{H^{s}}\leq C\delta,\quad t_{*}<t\leq\frac{\pi}{2H}.$$
(6.17)
$(2)$ Proof of (6.9)–(6.10) and (6.11)–(6.12): Recalling that $\tilde{T}^{a}$ is defined by (1.15), we contract both sides of (6.2) with $\tilde{T}^{a}$ to get
$$\displaystyle 0=\tilde{A}^{\star}_{a}\tilde{T}^{a}=\tilde{\mathfrak{u}}\tilde{A}_{a}\tilde{T}^{a}\tilde{\mathfrak{u}}^{-1}-(d\tilde{\mathfrak{u}})_{a}\tilde{T}^{a}\tilde{\mathfrak{u}}^{-1},$$
which, after rearranging, yields
$$\tilde{T}^{a}(d\tilde{\mathfrak{u}})_{a}=\tilde{\mathfrak{u}}\tilde{A}_{a}\tilde{T}^{a}.$$
(6.18)
From (1.9), (1.26)–(1.29), (1.12)–(1.30) and (1.15), we observe that $\tilde{T}^{a}$ can be expressed as
$$\tilde{T}^{a}=(-\tilde{\lambda})^{-\frac{1}{2}}\tilde{\nu}_{b}\tilde{g}^{ab}=-e^{-\Psi}(-\lambda)^{-\frac{1}{2}}(\xi^{a}-\lambda\nu^{a}).$$
(6.19)
Using this along with the temporal gauge condition (1.41) then allows us to write (6.18) as
$$\displaystyle\tilde{T}(\tilde{\mathfrak{u}})=-(-\lambda)^{-\frac{1}{2}}e^{-\frac{\Psi}{2}}\mathfrak{u}A_{a}\xi^{a},$$
(6.20)
or equivalently, as
$$(-\lambda)\nu^{a}\underline{\nabla}_{a}\mathfrak{u}+\xi^{a}\underline{\nabla}_{a}\mathfrak{u}=\frac{\breve{\mathtt{B}}t\sqrt{Ht}}{\sqrt{\sin(Ht)}}\mathfrak{u}\mathring{A}_{a}p^{a}.$$
(6.21)
Noting that $\lambda$, $\mathring{A}_{a}$, $\xi^{a}$ and $p^{a}$ are uniformly bounded for $t\in\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}$ in $H^{s}(\Sigma)$ and that $\lambda$ is bounded away from $0$ on account of (6.16) and (6.17), and that $\frac{\breve{\mathtt{B}}t\sqrt{Ht}}{\sqrt{\sin(Ht)}}$ is also uniformly bounded for $t\in\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}$, it follows that (6.21) defines a linear symmetric hyperbolic equation for $\mathfrak{u}$, which satisfies the condition $\mathfrak{u}|_{t=\frac{\pi}{2H}}=\mathds{1}$ by assumption.
We can therefore conclude from the standard theory for linear symmetric hyperbolic equations that $\mathfrak{u}$ is bounded by
$$\|\mathfrak{u}(t)\|_{H^{s}}\leq C,\quad 0<t_{*}<t\leq\frac{\pi}{2H},$$
for some constant $C>0$. This establishes the estimate (6.9). Noting that $\mathfrak{u}^{-1}(\underline{\nabla}_{a}\mathfrak{u})\mathfrak{u}^{-1}=-\underline{\nabla}_{a}\mathfrak{u}^{-1}$, the estimate (6.10) can also be shown to hold by similar arguments.
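The identity $\mathfrak{u}^{-1}(\underline{\nabla}_{a}\mathfrak{u})\mathfrak{u}^{-1}=-\underline{\nabla}_{a}\mathfrak{u}^{-1}$ used here is just the derivative of the relation $\mathfrak{u}\,\mathfrak{u}^{-1}=\mathds{1}$:
$$0=\underline{\nabla}_{a}\bigl{(}\mathfrak{u}\,\mathfrak{u}^{-1}\bigr{)}=(\underline{\nabla}_{a}\mathfrak{u})\mathfrak{u}^{-1}+\mathfrak{u}\,\underline{\nabla}_{a}\mathfrak{u}^{-1},$$
so that, multiplying (6.21) on the left and right by $\mathfrak{u}^{-1}$, one finds that $\mathfrak{u}^{-1}$ satisfies a linear symmetric hyperbolic equation of the same form as (6.21), with the order of the multiplications reversed and the sign of the source term flipped.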
Using (6.19) to express (6.20) as
$$\displaystyle(\xi^{a}-\lambda\nu^{a})\underline{\nabla}_{a}\mathfrak{u}=e^{\frac{\Psi}{2}}\mathfrak{u}\xi^{a}A_{a},$$
(6.22)
we get from applying $\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{c}$ to this equation that
$$\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{c}\left((\xi^{a}-\lambda\nu^{a})\underline{\nabla}_{a}\mathfrak{u}\right)=\breve{\mathtt{B}}te^{\frac{\Psi}{2}}\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}p^{a}A_{a}+e^{\frac{\Psi}{2}}\mathfrak{u}\tensor{\underline{h}}{{}^{c}_{b}}\tensor{p}{{}^{a}_{c}}A_{a}+\breve{\mathtt{B}}te^{\frac{\Psi}{2}}\mathfrak{u}p^{a}\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{c}A_{a},$$
(6.23)
where in deriving this we have used the definitions (1.32) and (1.34).
Using the identities (1.45) and (1.42), we can re-express the last term in the above expression as
$$\breve{\mathtt{B}}te^{\frac{\Psi}{2}}\mathfrak{u}p^{a}\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{c}A_{a}=\breve{\mathtt{B}}te^{\frac{\Psi}{2}}\mathfrak{u}p^{a}(e^{\frac{\Psi}{2}}H_{ba}+\tensor{\underline{h}}{{}^{c}_{b}}\underline{\nabla}_{a}A_{c}-e^{\frac{\Psi}{2}}\tensor{\underline{h}}{{}^{c}_{b}}[A_{c},A_{a}]).$$
Substituting this expression into (6.23),
we find, with the help of (1.32)–(1.34), that
$$\displaystyle(\xi^{a}-\lambda\nu^{a})\underline{\nabla}_{a}\left(\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}\right)-\xi^{a}\underline{\nabla}_{a}(e^{\frac{\Psi}{2}}\mathfrak{u}A_{b})=\breve{\mathtt{B}}te^{\frac{\Psi}{2}}\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}p^{a}A_{a}+e^{\frac{\Psi}{2}}\mathfrak{u}\tensor{\underline{h}}{{}^{c}_{b}}\tensor{p}{{}^{a}_{c}}A_{a}-\tensor{\underline{h}}{{}^{c}_{b}}\tensor{p}{{}^{a}_{c}}\tensor{\underline{h}}{{}^{d}_{a}}(d\mathfrak{u})_{d}$$
$$\displaystyle\hskip 28.45274pt+\tensor{\underline{h}}{{}^{c}_{b}}m_{c}\nu^{a}\underline{\nabla}_{a}\mathfrak{u}+\breve{\mathtt{B}}t\mathfrak{u}p^{a}e^{\Psi}H_{ba}-\breve{\mathtt{B}}te^{\frac{\Psi}{2}}p^{c}\tensor{\underline{h}}{{}^{a}_{c}}(d\mathfrak{u})_{a}A_{b}-\breve{\mathtt{B}}te^{\frac{\Psi}{2}}\mathfrak{u}p^{a}[A_{b},A_{a}].$$
(6.24)
On the other hand, the evolution equations (6.22) and (3.92) for $\mathfrak{u}$ and $\bar{A}_{a}$, respectively, imply that
$$\displaystyle\lambda\nu^{d}\underline{\nabla}_{d}(e^{\frac{\Psi}{2}}\mathfrak{u}A_{b})=-\breve{\mathtt{B}}te^{\Psi}\mathfrak{u}A_{a}p^{a}A_{b}+\breve{\mathtt{B}}te^{\frac{\Psi}{2}}p^{a}\tensor{\underline{h}}{{}^{d}_{a}}(d\mathfrak{u})_{d}A_{b}-e^{\Psi}\lambda\mathfrak{u}E_{b}.$$
Adding this to (6.24), while employing the renormalized conformal Yang–Mills fields (5.56), yields
$$\displaystyle-\lambda\nu^{a}\underline{\nabla}_{a}\chi_{b}+\xi^{a}\underline{\nabla}_{a}\chi_{b}=\Delta^{\star}_{b}$$
(6.25)
where
$$\chi_{b}=\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}-e^{\frac{\Psi}{2}}\mathfrak{u}A_{b}=\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}-e^{\frac{\Psi}{2}}\sqrt{t}\mathfrak{u}\mathring{A}_{b}$$
and
$$\Delta^{\star}_{b}=\breve{\mathtt{B}}t^{\frac{3}{2}}e^{\frac{\Psi}{2}}\chi_{b}p^{a}\mathring{A}_{a}-\tensor{\underline{h}}{{}^{c}_{b}}\tensor{p}{{}^{a}_{c}}\chi_{a}-\breve{\mathtt{B}}t(-\lambda)^{-1}\tensor{\underline{h}}{{}^{c}_{b}}m_{c}p^{a}\chi_{a}+\breve{\mathtt{B}}t^{2}\mathfrak{u}p^{a}\bigl{(}e^{\Psi}\mathring{H}_{ba}-e^{\frac{\Psi}{2}}[\mathring{A}_{b},\mathring{A}_{a}]\bigr{)}-te^{\Psi}\lambda\mathfrak{u}\mathring{E}_{b}.$$
In the following, we will interpret (6.25) as a linear symmetric hyperbolic equation for $\chi_{b}$.
Next, we note that $te^{\Psi}$ is uniformly bounded for $t\in\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}$ by (1.11). Using this, it is then not difficult to verify, with the help of the bounds (6.16) and (6.17), that $\lambda$, $\xi^{a}$ and $\Delta^{\star}_{b}$ are uniformly bounded in $H^{s}(\Sigma)$ for $t\in\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}$ and that $\lambda$ is bounded away from $0$.
We also note that $\tilde{\mathfrak{u}}|_{\tau=0}=\mathfrak{u}|_{t=\frac{\pi}{2H}}=\mathds{1}$ implies that
$$\chi_{b}|_{t=\frac{\pi}{2H}}=\bigl{(}\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}\bigr{)}|_{t=\frac{\pi}{2H}}-\bigl{(}e^{\frac{\Psi}{2}}\mathfrak{u}A_{b}\bigr{)}|_{t=\frac{\pi}{2H}}=-\sqrt{H}A_{b}|_{t=\frac{\pi}{2H}}$$
from which we deduce that $\|\chi_{b}|_{t=\frac{\pi}{2H}}\|_{H^{s}}\leq C\delta$.
We can therefore conclude from the standard theory for linear symmetric hyperbolic equations that $\chi_{a}$ is bounded by
$$\|\chi_{b}(t)\|_{H^{s}}\leq C\delta,\quad 0<t_{*}<t\leq\frac{\pi}{2H},$$
for some constant $C>0$.
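Schematically, both of the bounds just obtained follow from the usual Grönwall argument for linear symmetric hyperbolic systems: writing $w$ for either $\mathfrak{u}$ or $\chi_{b}$ and integrating backwards from $t=\frac{\pi}{2H}$, one has (constants, commutator estimates and the precise symmetrizer suppressed in this sketch)

```latex
\frac{d}{dt}\|w(t)\|_{H^{s}}^{2}\geq-C\,\|w(t)\|_{H^{s}}^{2}
\quad\Longrightarrow\quad
\|w(t)\|_{H^{s}}^{2}\leq e^{C\bigl(\frac{\pi}{2H}-t\bigr)}\,
\Bigl\|w\Bigl(\frac{\pi}{2H}\Bigr)\Bigr\|_{H^{s}}^{2},
\qquad t_{*}<t\leq\frac{\pi}{2H},
```

where the constant $C$ depends on the uniform coefficient bounds established above but not on $t_{*}$.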
With the help of the identities
$$\tensor{\tilde{\underline{h}}}{{}^{c}_{b}}=\tensor{\delta}{{}^{c}_{b}}+\tilde{\nu}^{c}\tilde{\nu}_{b}=\tensor{\delta}{{}^{c}_{b}}+\nu^{c}\nu_{b}=\tensor{\underline{h}}{{}^{c}_{b}},$$
(6.26)
which are a consequence of (1.12), (1.13) and (1.15)–(1.16), it then follows from the above estimate and the bounds from Corollary 5.3.1 that
$$\|\tensor{\tilde{\underline{h}}}{{}^{c}_{b}}(d\tilde{\mathfrak{u}})_{c}\|_{H^{s}}=\|\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}\|_{H^{s}}\leq\|\chi_{b}\|_{H^{s}}+e^{\frac{\Psi}{2}}\sqrt{t}\|\mathfrak{u}\mathring{A}_{b}\|_{H^{s}}\leq C\delta,\quad 0<t_{*}<t\leq\frac{\pi}{2H}.$$
Making use of the above estimate for $\|\tensor{\underline{h}}{{}^{c}_{b}}(d\mathfrak{u})_{c}\|_{H^{s}}$ and (6.21), we obtain
$$\|\nu^{a}(d\mathfrak{u})_{a}\|_{H^{s}}\leq C\biggl{(}\|\xi^{a}\underline{\nabla}_{a}\mathfrak{u}\|_{H^{s}}+\biggl{\|}\frac{\breve{\mathtt{B}}t\sqrt{Ht}}{\sqrt{\sin(Ht)}}\mathfrak{u}\mathring{A}_{a}p^{a}\biggr{\|}_{H^{s}}\biggr{)}\leq C\delta^{2},\quad 0<t_{*}<t\leq\frac{\pi}{2H}.$$
$(3)$ Proof of (6.8): Since $\tilde{\mathfrak{u}}$ satisfies the initial condition $\tilde{\mathfrak{u}}|_{\tau=0}=\mathds{1}$ (i.e., the identity on $\Sigma_{\tau=0}$), it is clear that $(\tensor{\tilde{\underline{h}}}{{}^{c}_{b}}(d\tilde{\mathfrak{u}})_{c})|_{\tau=0}=0$. From this, we then have, with the help of (6.2) and (6.3), that
$$\tilde{A}_{b}|_{\tau=0}=\bigl{(}\tilde{\mathfrak{u}}^{-1}\tilde{A}^{\star}_{b}\tilde{\mathfrak{u}}\bigr{)}|_{\tau=0}-\tilde{\nu}_{b}\bigl{(}\tilde{\mathfrak{u}}^{-1}(d\tilde{\mathfrak{u}})_{c}\tilde{\nu}^{c}\bigr{)}|_{\tau=0}=\bigl{(}\tilde{\mathfrak{u}}^{-1}\tilde{A}^{\star}_{b}\tilde{\mathfrak{u}}\bigr{)}|_{\tau=0}+\tilde{\nu}_{b}\bigl{(}\tilde{\mathfrak{u}}^{-1}\tilde{\nu}^{a}\tilde{A}^{\star}_{a}\tilde{\mathfrak{u}}\bigr{)}|_{\tau=0}=\bigl{(}\tilde{\mathfrak{u}}^{-1}\tensor{\tilde{\underline{h}}}{{}^{c}_{b}}\tilde{A}^{\star}_{c}\tilde{\mathfrak{u}}\bigr{)}|_{\tau=0}.$$
Thus by (6.26) (i.e. $\tensor{\tilde{\underline{h}}}{{}^{c}_{b}}=\tensor{\underline{h}}{{}^{c}_{b}}$) and $\tilde{\mathfrak{u}}|_{\tau=0}=\mathds{1}$, we conclude that
$$\|\tilde{A}_{a}|_{\tau=0}\|_{H^{s}}=\|\tensor{\underline{h}}{{}^{c}_{a}}(\tilde{\mathfrak{u}}^{-1}\tilde{A}^{\star}_{c}\tilde{\mathfrak{u}})|_{\tau=0}\|_{H^{s}}=\|\tilde{A}^{\star}_{a}|_{\tau=0}\|_{H^{s}},$$
where in deriving the second equality we have used the fact that $\tilde{A}^{\star}_{a}|_{\tau=0}$ is a spatial tensor, which implies that $\|\tilde{A}^{\star}_{a}|_{\tau=0}\|_{H^{s}}=\|\tensor{\underline{h}}{{}^{c}_{a}}\tilde{A}^{\star}_{c}|_{\tau=0}\|_{H^{s}}$.
$(4)$ Proof of (6.13):
Due to the temporal gauge condition $\tilde{A}^{\star}_{a}\tilde{T}^{a}=0$ and (6.26), we deduce from the gauge transformation law (6.2)–(6.3) and the estimates (6.9)–(6.12) that
$$\displaystyle\|\tilde{A}^{\star}_{a}\nu^{a}\|_{H^{s}}$$
$$\displaystyle\leq\|\nu^{a}(d\tilde{\mathfrak{u}})_{a}\tilde{\mathfrak{u}}^{-1}\|_{H^{s}}\leq C\delta^{2}$$
and
$$\displaystyle\|\tensor{\underline{h}}{{}^{a}_{b}}\tilde{A}^{\star}_{a}\|_{H^{s}}$$
$$\displaystyle\leq\|\tensor{\underline{h}}{{}^{a}_{b}}\tilde{\mathfrak{u}}\tilde{A}_{a}\tilde{\mathfrak{u}}^{-1}\|_{H^{s}}+\|\tensor{\underline{h}}{{}^{a}_{b}}(d\tilde{\mathfrak{u}})_{a}\tilde{\mathfrak{u}}^{-1}\|_{H^{s}}$$
$$\displaystyle\leq C\|\tilde{A}_{a}\|_{H^{s}}+C\|\tensor{\tilde{\underline{h}}}{{}^{a}_{b}}(d\tilde{\mathfrak{u}})_{a}\|_{H^{s}}\leq C\bigl{(}\|\tilde{A}_{a}\|_{H^{s}}+\delta\bigr{)}$$
for all $t\in\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}$, which leads to (6.13) by the definition of the Sobolev norms given in §2.1.3.
$(5)$ Solution regularity: To complete the proof, assume that the solution $(\tilde{g}^{ab},\tilde{A}^{\star}_{a})$ of the Einstein–Yang–Mills equations satisfies (6.14). Noting by (6.2) that
$$A_{a}(t,x)=e^{-\frac{\Psi}{2}}\tilde{\mathfrak{u}}^{-1}(\tau(t),x)\tilde{A}^{\star}_{a}(\tau(t),x)\tilde{\mathfrak{u}}(\tau(t),x)+e^{-\frac{\Psi}{2}}\tilde{\mathfrak{u}}^{-1}(\tau(t),x)(d\tilde{\mathfrak{u}})_{a}(\tau(t),x),$$
it is then not difficult to verify from the definitions (6.14) and the estimates
(6.9)–(6.11) that the solution
$(g^{ab},A_{a})$ of the reduced conformal Einstein–Yang–Mills equations satisfies (6.15).
∎
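As an aside on the gauge transformation law (6.2)–(6.3) used in the proof: transformations of the form $A\mapsto g^{-1}Ag+g^{-1}dg$ compose, i.e. transforming by $\mathfrak{u}$ and then by a second gauge element agrees with transforming once by the product. A small numerical sanity check, with hypothetical $2\times 2$ matrix data standing in for the actual fields, is:

```python
import numpy as np

# Check that A -> g^{-1} A g + g^{-1} dg composes: transforming by u, then by v,
# agrees with transforming once by the product w = u v (dg via finite differences).
def gauge(A, g, dg):
    gi = np.linalg.inv(g)
    return gi @ A @ g + gi @ dg

u = lambda t: np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])   # hypothetical gauge curve
v = lambda t: np.array([[1.0, t],
                        [0.0, 1.0]])                # second hypothetical gauge curve
w = lambda t: u(t) @ v(t)

t0, h = 0.4, 1e-6
d = lambda f: (f(t0 + h) - f(t0 - h)) / (2 * h)      # centered difference at t0

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                          # a fixed connection component

A_two_steps = gauge(gauge(A, u(t0), d(u)), v(t0), d(v))
A_one_step = gauge(A, w(t0), d(w))
assert np.allclose(A_two_steps, A_one_step, atol=1e-6)
```

This composition property is what allows the gauge element relating the temporal-gauge and wave-gauge representations to be constructed in stages, as done in §6.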
7. Proof of Theorem 1.1
With the help of the technical results established in §5 and §6, we are now in a position to prove Theorem 1.1, the main result of this article.
We break the proof of this theorem into five distinct steps.
Step $1$: A local solution of the Einstein–Yang–Mills equations.
We first recall the local well-posedness result from the companion paper [18] and assume that the Einstein–Yang–Mills initial data, see (1.19), is chosen as in the statement of Theorem 1.1, which, in particular, implies that it satisfies the Einstein–Yang–Mills constraints (1.20)–(1.21) and the gauge constraints (1.22)–(1.23)
on $\Sigma_{0}=\{0\}\times\Sigma$. Then by Theorem $1.1$ from [18] there exists a $\tau_{*}>0$ and a unique solution $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ to the Einstein–Yang–Mills equations satisfying the temporal and wave gauge conditions (1.17) and (1.18), respectively, and with the regularity
$$\tilde{g}^{ab}\in\bigcap_{\ell=0}^{s}C^{\ell}([0,\tau_{*}),H^{s-\ell+1}(\Sigma)){\quad\text{and}\quad}\tilde{A}^{\star}_{a},\,\tilde{F}^{\star}_{ab}\in\bigcap_{\ell=0}^{s}C^{\ell}([0,\tau_{*}),H^{s-\ell}(\Sigma)).$$
Moreover, if
$$\left\|\left(\tilde{\nu}^{d}\tensor{\tilde{g}}{{}^{ab}_{d}},\,\tensor{\tilde{\underline{h}}}{{}^{d}_{c}}\tensor{\tilde{g}}{{}^{ab}_{d}},\,\tilde{g}^{ab},\,\tilde{E}^{\star}_{a},\,\tilde{A}^{\star}_{d},\,\tilde{H}^{\star}_{ab}\right)\right\|_{L^{\infty}([0,\tau_{*}),W^{1,\infty})}<\infty,$$
then the solution $\bigl{(}\tilde{\nu}^{d}\tensor{\tilde{g}}{{}^{ab}_{d}},\,\tensor{\tilde{\underline{h}}}{{}^{d}_{c}}\tensor{\tilde{g}}{{}^{ab}_{d}},\,\tilde{g}^{ab},\,\tilde{E}^{\star}_{a},\,\tilde{A}^{\star}_{d},\,\tilde{H}^{\star}_{ab}\bigr{)}$ can be uniquely continued, in the above temporal and wave gauge, as a classical solution with the same regularity to a larger time interval $\tau\in[0,\tau^{*})$ where $\tau^{*}\in(\tau_{*},+\infty)$.
Step $2$: A local solution of the reduced conformal EYM system.
By Lemma 6.0.2 and Proposition 6.0.3, the solution $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ from Step 1 implies the existence of a unique solution
$$(g^{ab},A_{a})\in\bigcap_{\ell=0}^{s}C^{\ell}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s-\ell+1}(\Sigma)\Bigr{)}\times\bigcap_{\ell=0}^{s}C^{\ell}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s-\ell}(\Sigma)\Bigr{)}$$
on $\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}\times\Sigma$ of the reduced conformal Einstein–Yang–Mills equations, see (3.1.1) and (3.40)–(3.41), that satisfies both the temporal and wave gauge conditions, that is, (1.41) and (3.6).
Step $3$: A local solution of the Fuchsian system (5.1).
By Theorems 3.1 and 3.2, the solution $(g^{ab},A_{a})$ of the reduced conformal Einstein–Yang–Mills equations from Step 2 determines via (1.30)–(1.44) and (5.24) a solution
$$\displaystyle\widehat{\mathbf{U}}\in C^{0}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s}(\Sigma;V)\Bigr{)}\cap C^{1}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s-1}(\Sigma;V)\Bigr{)}\cap L^{\infty}\Bigl{(}\Bigl{(}t_{*},\frac{\pi}{2H}\Bigr{]},H^{s}(\Sigma;V)\Bigr{)}$$
on $\bigl{(}t_{*},\frac{\pi}{2H}\bigr{]}\times\Sigma$
of the Fuchsian equation (5.1).
Step $4$: Fuchsian initial data bounds. By assumption, the initial data set satisfies
$$\displaystyle\|(\tilde{g}^{ab}-\tilde{\underline{g}}^{ab},\,\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab},\,\tilde{A}^{\star}_{a},\,d\tilde{A}^{\star}_{ab},\,\tilde{E}^{\star}_{b})|_{\tau=0}\|_{H^{s}}<\delta.$$
Using this bound, it is then not difficult to verify from the conformal transformations (1.26)–(1.28), the gauge transformation (6.2), where we recall that $\tilde{\mathfrak{u}}|_{\tau=0}=\mathds{1}$, and the bound (6.8) from Proposition 6.0.3
that there exist a $\delta_{0}>0$ and a constant $C>0$ such that the initial data
$$\mathbf{U}_{0}=\mathbf{U}|_{t=\frac{\pi}{2H}},$$
where $\mathbf{U}$ is defined by (1.46),
is bounded by
$$\displaystyle\|\mathbf{U}_{0}\|_{H^{s}}\leq C\|(g^{ab}-\underline{g}^{ab},\underline{\nabla}_{d}g^{ab},\,$$
$$\displaystyle A_{a},E_{a},H_{ab})|_{t=\frac{\pi}{2H}}\|_{H^{s}}\leq C\|(\tilde{g}^{ab}-\tilde{\underline{g}}^{ab},\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab},\tilde{A}_{a},\tilde{E}_{a},\tilde{H}_{ab})|_{\tau=0}\|_{H^{s}}$$
$$\displaystyle\leq$$
$$\displaystyle C\|(\tilde{g}^{ab}-\tilde{\underline{g}}^{ab},\,\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab},\,\tilde{A}^{\star}_{a},\,d\tilde{A}^{\star}_{ab},\,\tilde{E}^{\star}_{b})|_{\tau=0}\|_{H^{s}}<C\delta$$
for all $\delta\in[0,\delta_{0})$, where $\tilde{E}_{c}$ and $\tilde{H}_{ab}$ are defined by $\tilde{E}_{c}=\tensor{\tilde{\underline{h}}}{{}^{a}_{c}}\tilde{\nu}^{b}\tilde{F}_{ab}$ and
$\tilde{H}_{ab}=\tensor{\tilde{\underline{h}}}{{}^{c}_{a}}\tensor{\tilde{\underline{h}}}{{}^{d}_{b}}\tilde{F}_{cd}$,
respectively.
We further observe from the definition (5.24) of $\widehat{\mathbf{U}}$ that
$$\|\widehat{\mathbf{U}}\|_{H^{s}}\leq C\|\mathbf{U}\|_{H^{s}}$$
(7.1)
for some constant $C>0$ that is independent of $\widehat{\mathbf{U}}$ and $\mathbf{U}$. From the above two inequalities, we conclude that
$\widehat{\mathbf{U}}_{0}=\widehat{\mathbf{U}}|_{t=\frac{\pi}{2H}}$
is bounded by
$$\|\widehat{\mathbf{U}}_{0}\|_{H^{s}}\leq C\delta,\quad 0\leq\delta<\delta_{0}.$$
(7.2)
Step $5$: Global existence and stability.
Assuming that $\delta$ is sufficiently small, the bound (7.2) implies via Theorem 5.1 that $\widehat{\mathbf{U}}$ satisfies the energy estimate
$$\|\widehat{\mathbf{U}}(t)\|_{H^{s}}^{2}+\int_{t}^{\frac{\pi}{2H}}\frac{1}{\tau}\|\widehat{\mathbb{P}}\widehat{\mathbf{U}}(\tau)\|^{2}_{H^{s}}d\tau\leq C\delta^{2},\quad t_{*}<t\leq\frac{\pi}{2H},$$
(7.3)
where the constant $C$ is independent of $\delta\in[0,\delta_{0})$ and $t_{*}\in\bigl{[}0,\frac{\pi}{2H}\bigr{)}$.
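The shape of (7.3), in particular the singular integral term, can be illustrated by a scalar caricature (an illustration only, not the actual Fuchsian system (5.1)): for a constant $\kappa>0$, the model equation $\partial_{t}u=\frac{\kappa}{t}u$, integrated down from the final time $T=\frac{\pi}{2H}$, gives

```latex
u(t)=u(T)\Bigl(\frac{t}{T}\Bigr)^{\kappa},
\qquad
|u(t)|^{2}+\int_{t}^{T}\frac{1}{\tau}\,|u(\tau)|^{2}\,d\tau
=|u(T)|^{2}\Bigl(\frac{t}{T}\Bigr)^{2\kappa}
+\frac{|u(T)|^{2}}{2\kappa}\Bigl(1-\Bigl(\frac{t}{T}\Bigr)^{2\kappa}\Bigr)
\leq\Bigl(1+\frac{1}{2\kappa}\Bigr)|u(T)|^{2},
```

uniformly for $0<t\leq T$, mirroring the role played by the projected components $\widehat{\mathbb{P}}\widehat{\mathbf{U}}$ in (7.3).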
Next, we will show that the uniform bound (7.3) on $\widehat{\mathbf{U}}$ implies a corresponding uniform bound on the physical variables $\tilde{g}^{ab}-\tilde{\underline{g}}^{ab}$, $\tilde{\underline{\nabla}}_{d}\tilde{g}^{ab}$, $\tilde{F}^{\star}_{ab}$, $\tilde{A}^{\star}_{a}$. To this end, we observe from (1.26) and (1.11) that
$$\displaystyle\tilde{\underline{\nabla}}_{c}\tilde{g}^{ab}=$$
$$\displaystyle\tilde{\underline{\nabla}}_{c}(\tilde{g}^{ab}-\tilde{\underline{g}}^{ab})=\bigl{(}\tilde{\underline{\nabla}}_{c}e^{-2\Psi}\bigr{)}(g^{ab}-\underline{g}^{ab})+e^{-2\Psi}\tilde{\underline{\nabla}}_{c}(g^{ab}-\underline{g}^{ab})$$
$$\displaystyle=$$
$$\displaystyle\frac{2e^{-2\Psi}}{\tan(Ht)}\nu_{c}(g^{ab}-\underline{g}^{ab})+e^{-2\Psi}[\underline{\nabla}_{c}g^{ab}+\widehat{X}^{a}_{cd}(g^{db}-\underline{g}^{db})+\widehat{X}^{b}_{cd}(g^{ad}-\underline{g}^{ad})]$$
where we recall that $\widehat{X}^{b}_{cd}$ is defined by (6.5). Applying the $H^{s}$ norm to this expression yields
$$\displaystyle\|\tilde{\nu}^{c}\tilde{\underline{\nabla}}_{c}\tilde{g}^{ab}\|_{H^{s}}+\|\tensor{\tilde{\underline{h}}}{{}^{c}_{d}}\tilde{\underline{\nabla}}_{c}\tilde{g}^{ab}\|_{H^{s}}\leq Ct^{2}\|g^{ab}-\underline{g}^{ab}\|_{H^{s}}+e^{-2\Psi}\|\underline{\nabla}_{c}g^{ab}\|_{H^{s}}.$$
(7.4)
From (1.31)–(1.39), we also observe via a straightforward calculation that
$$\displaystyle g^{ab}-\underline{g}^{ab}=$$
$$\displaystyle\bigl{(}e^{\frac{\breve{\mathtt{A}}tm-s}{n-3}}-1\bigr{)}s^{ab}+\bigl{(}e^{\frac{\breve{\mathtt{A}}tm-s}{n-3}}-1\bigr{)}\underline{h}^{ab}-\breve{\mathtt{A}}tm\nu^{a}\nu^{b}-2\breve{\mathtt{B}}tp^{(b}\nu^{a)}$$
(7.5)
and
$$\displaystyle\underline{\nabla}_{c}g^{ab}=$$
$$\displaystyle\frac{m_{c}+\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{c}-s_{c}}{n-3}(s^{ab}-\underline{h}^{ab})e^{\frac{\breve{\mathtt{A}}tm-s}{n-3}}+e^{\frac{\breve{\mathtt{A}}tm-s}{n-3}}\tensor{s}{{}^{ab}_{c}}-\Bigl{(}m_{c}+\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{c}\Bigr{)}\nu^{a}\nu^{b}$$
$$\displaystyle-\Bigl{(}\tensor{p}{{}^{b}_{c}}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{b}\nu_{c}\Bigr{)}\nu^{a}-\Bigl{(}\tensor{p}{{}^{a}_{c}}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{a}\nu_{c}\Bigr{)}\nu^{b}.$$
(7.6)
Then, with the help of (1.11), (1.46), (7.1) and (7.4)–(7.6), we see that the uniform bound (7.3) implies that
$$\displaystyle\|(\tilde{g}^{ab}-\tilde{\underline{g}}^{ab})(\tau)\|_{H^{s+1}}+\|\tilde{\nu}^{c}\tilde{\underline{\nabla}}_{c}\tilde{g}^{ab}(\tau)\|_{H^{s}}\leq$$
$$\displaystyle Ct^{2}\|(g^{ab}-\underline{g}^{ab})(t)\|_{H^{s}}+e^{-2\Psi}\|\underline{\nabla}_{c}g^{ab}(t)\|_{H^{s}}$$
$$\displaystyle\leq$$
$$\displaystyle Ct^{2}\|\mathbf{U}(t)\|_{H^{s}}\leq C\delta t^{2}$$
(7.7)
holds for all $\tau\in[0,\tau_{*})$. We also observe as a consequence of the bound (6.13) from Proposition 6.0.3 and the bound from Corollary 5.3.1 on the renormalized conformal Yang–Mills fields (5.56) that
$$\displaystyle\|\tilde{A}^{\star}_{a}(\tau)\|_{H^{s}}+\|\tilde{E}^{\star}_{a}(\tau)\|_{H^{s}}+{}$$
$$\displaystyle\|\tilde{H}^{\star}_{ab}(\tau)\|_{H^{s}}\leq C\bigl{(}\|\tilde{A}_{a}(\tau)\|_{H^{s}}+\delta+\|\tilde{E}_{a}(\tau)\|_{H^{s}}+\|\tilde{H}_{ab}(\tau)\|_{H^{s}}\bigr{)}$$
$$\displaystyle\leq{}$$
$$\displaystyle C\bigl{(}\|\mathring{A}_{a}(t)\|_{H^{s}}+\delta+\|\mathring{E}_{a}(t)\|_{H^{s}}+\|\mathring{H}_{ab}(t)\|_{H^{s}}\bigr{)}\leq C\delta,$$
(7.8)
for all $\tau\in[0,\tau_{*})$,
where we have set
$\tilde{E}^{\star}_{c}=\tensor{\tilde{h}}{{}^{a}_{c}}\tilde{T}^{b}\tilde{F}^{\star}_{ab}$ and $\tilde{H}^{\star}_{ab}=\tensor{\tilde{h}}{{}^{c}_{a}}\tensor{\tilde{h}}{{}^{d}_{b}}\tilde{F}^{\star}_{cd}$.
Together the estimates (7.7) and (7.8) imply that
$$\|(\tilde{\nu}^{d}\tensor{\tilde{g}}{{}^{ab}_{d}},\,\tensor{\tilde{\underline{h}}}{{}^{d}_{c}}\tensor{\tilde{g}}{{}^{ab}_{d}},\,\tilde{g}^{ab},\,\tilde{E}^{\star}_{a},\,\tilde{A}^{\star}_{d},\,\tilde{H}^{\star}_{ab})\|_{L^{\infty}([0,\tau_{*}),W^{1,\infty})}<\infty.$$
(7.9)
Now, by way of contradiction, assume that $\tau_{*}$ is the maximal time of existence for the solution $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ and that it is finite, i.e. $\tau_{*}<\infty$. Then, since the bound (7.9) implies, by the continuation principle from Step 1, that the solution $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ can be continued to a larger time interval, we arrive at a contradiction. Thus we must have $\tau_{*}=\infty$, which establishes that the solution $(\tilde{g}^{ab},\,\tilde{A}^{\star}_{a})$ exists globally on $[0,\infty)\times\Sigma$ and satisfies the uniform bounds (7.9).
Appendix A Technical calculations
A.1. Geometric identities
In this appendix, we state a number of geometric identities that will be used throughout the article. Since the derivation of the identities is straightforward (although some of them are quite lengthy), we, for the most part, omit the details.
Below, we use the same notation for geometric objects as in §1.2–§1.3, §1.7.2 and §2.
Lemma A.1.1.
The vector field $\nu^{a}$ and projection operator $\tensor{\underline{h}}{{}^{a}_{b}}$ defined by (1.12) and (1.13), respectively, satisfy
$$\displaystyle\underline{\nabla}_{a}\nu_{b}=0,\quad\underline{\nabla}_{a}\nu^{b}=0,\quad\underline{\nabla}_{c}\tensor{\underline{h}}{{}^{a}_{b}}=0,$$
(A.1)
and
$$\displaystyle-2\nabla^{(a}\nu^{b)}=\mathcal{L}_{\nu}g^{ab}=\nu^{c}\underline{\nabla}_{c}g^{ab}.$$
(A.2)
Lemma A.1.2.
The Ricci tensor $R^{ab}$ of the conformal metric $g_{ab}$ can be expressed in terms of the connection $\underline{\nabla}_{a}$ and curvature $\tensor{\underline{R}}{{}_{cde}^{a}}$ of the conformal de Sitter metric $\underline{g}_{ab}$, see (1.10), by
$$\displaystyle R^{ab}=\frac{1}{2}g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}g^{ab}+\nabla^{(a}X^{b)}+\underline{R}^{ab}+P^{ab}(g^{-1})+Q^{ab}(g^{-1},\underline{\nabla}g^{-1}),$$
(A.3)
where $X$ is given by (2.1),
$$\displaystyle P^{ab}(g^{-1})=$$
$$\displaystyle-\frac{1}{2}(g^{ac}-\underline{g}^{ac})\underline{g}^{de}\tensor{\underline{R}}{{}_{cde}^{b}}-\frac{1}{2}\underline{g}^{ac}(g^{de}-\underline{g}^{de})\tensor{\underline{R}}{{}_{cde}^{b}}-\frac{1}{2}(g^{ac}-\underline{g}^{ac})(g^{de}-\underline{g}^{de})\tensor{\underline{R}}{{}_{cde}^{b}}$$
$$\displaystyle-\frac{1}{2}(g^{bc}-\underline{g}^{bc})\underline{g}^{de}\tensor{\underline{R}}{{}_{cde}^{a}}-\frac{1}{2}\underline{g}^{bc}(g^{de}-\underline{g}^{de})\tensor{\underline{R}}{{}_{cde}^{a}}-\frac{1}{2}(g^{bc}-\underline{g}^{bc})(g^{de}-\underline{g}^{de})\tensor{\underline{R}}{{}_{cde}^{a}}$$
and
$$\displaystyle Q^{ab}(g^{-1},\underline{\nabla}g^{-1})$$
$$\displaystyle={}$$
$$\displaystyle-\frac{1}{4}\bigl{(}g^{ad}g^{bf}\underline{\nabla}_{d}g_{ce}\underline{\nabla}_{f}g^{ce}+g^{ad}g^{bf}\underline{\nabla}_{f}g_{ce}\underline{\nabla}_{d}g^{ce}+g_{ef}g^{ac}\underline{\nabla}_{c}g^{bd}\underline{\nabla}_{d}g^{ef}+g_{ef}g^{bd}\underline{\nabla}_{d}g^{ac}\underline{\nabla}_{c}g^{ef}\bigr{)}$$
$$\displaystyle+\frac{1}{2}\big{(}g^{ae}g_{fc}\underline{\nabla}_{e}g^{bd}\underline{\nabla}_{d}g^{fc}+g^{ae}g^{bd}\underline{\nabla}_{e}g_{fc}\underline{\nabla}_{d}g^{fc}-\underline{\nabla}_{c}g^{ae}\underline{\nabla}_{e}g^{cb}-\underline{\nabla}_{e}g^{bc}\underline{\nabla}_{c}g^{ea}\big{)}-g^{ac}\tensor{X}{{}^{b}_{cd}}X^{d}$$
$$\displaystyle+g^{af}g^{bd}\tensor{X}{{}^{c}_{fd}}\tensor{X}{{}^{e}_{ce}}-g^{af}g^{bd}\tensor{X}{{}^{c}_{ed}}\tensor{X}{{}^{e}_{cf}}+\tensor{X}{{}^{e}_{ed}}g^{af}\underline{\nabla}_{f}g^{bd}-\tensor{X}{{}^{e}_{fd}}g^{af}\underline{\nabla}_{e}g^{bd}-\tensor{X}{{}^{e}_{fd}}g^{bd}\underline{\nabla}_{e}g^{af}.$$
Lemma A.1.3.
The conformal scalar $\Psi$ defined by (1.11) satisfies
$$\displaystyle\nabla_{a}\Psi=-\frac{1}{\tan(Ht)}\nu_{a},$$
(A.4)
$$\displaystyle\nabla_{a}\nabla_{b}\Psi=\frac{1}{\sin^{2}(Ht)}\nu_{a}\nu_{b}-\frac{1}{\tan(Ht)}\nabla_{a}\nu_{b},$$
(A.5)
$$\displaystyle\Box\Psi=\lambda\frac{1}{\sin^{2}(Ht)}+\frac{1}{\tan(Ht)}X^{c}\nu_{c},$$
(A.6)
and
$$\displaystyle g^{ab}\nabla_{a}\Psi\nabla_{b}\Psi=\lambda\frac{1}{\tan^{2}(Ht)}.$$
(A.7)
Lemma A.1.4.
The conformal metric $g^{ab}$ can be represented as
$$g^{ab}=h^{ab}+\lambda\nu^{a}\nu^{b}-2\xi^{(a}\nu^{b)}$$
(A.8)
where $\nu^{a}$, $h^{ab}$ and $\xi^{a}$ are defined by (1.12) and (1.30).
Moreover the tensor $Q^{edc}$ defined by (3.24) satisfies the following relations:
$$\displaystyle Q^{edc}\nu_{c}=h^{ed}-\lambda\nu^{e}\nu^{d},\quad Q^{edc}\tensor{\underline{h}}{{}^{a}_{d}}=\nu^{e}h^{ca}-\nu^{c}h^{ea},\quad Q^{edc}\nu_{d}=2\nu^{e}\xi^{c}-h^{ec}-\lambda\nu^{c}\nu^{e},$$
$$\displaystyle Q^{edc}\nu_{d}\nu_{c}=\lambda\nu^{e},\quad\tensor{\underline{h}}{{}^{b}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{a}_{d}}=-\nu^{c}h^{ab},\quad\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{a}_{d}}=-h^{ac},$$
$$\displaystyle\nu_{e}Q^{edc}\nu_{d}\nu_{c}=-\lambda,\quad\nu_{e}Q^{edc}\nu_{d}\tensor{\underline{h}}{{}^{a}_{c}}=-2\xi^{a},\quad\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{a}_{d}}\nu_{c}=0,$$
$$\displaystyle\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{a}_{d}}\tensor{\underline{h}}{{}^{b}_{c}}=-h^{ab},\quad\tensor{\underline{h}}{{}^{a}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{b}_{d}}\nu_{c}=h^{ab},\quad\tensor{\underline{h}}{{}^{a}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{b}_{d}}\tensor{\underline{h}}{{}^{f}_{c}}=0,$$
$$\displaystyle\tensor{\underline{h}}{{}^{a}_{e}}Q^{edc}\nu_{d}=-h^{ac},\quad\tensor{\underline{h}}{{}^{b}_{e}}Q^{edc}\nu_{d}\tensor{\underline{h}}{{}^{a}_{c}}=-h^{ab}\quad\text{and}\quad\tensor{\underline{h}}{{}^{a}_{e}}Q^{edc}\nu_{d}\nu_{c}=0.$$
Lemma A.1.5.
Suppose $\zeta$ and $\zeta_{d}=\underline{\nabla}_{d}\zeta$ satisfy the equation
$$\displaystyle\mathbf{A}^{c}\underline{\nabla}_{c}\begin{pmatrix}\zeta_{d}\\
\zeta\end{pmatrix}=\frac{1}{Ht}\mathcal{B}\begin{pmatrix}\zeta_{d}\\
\zeta\end{pmatrix}+G$$
(A.13)
where
$$\mathbf{A}^{c}=\begin{pmatrix}Q^{edc}&0\\
0&-\nu^{c}\end{pmatrix}.$$
Then $\nu^{e}\zeta_{e}$, $\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\zeta_{e}$ and $\zeta$ satisfy
$$\displaystyle\bar{\mathbf{A}}^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}\zeta_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\zeta_{e}\\
\zeta\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}\begin{pmatrix}-\nu^{e}\zeta_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\zeta_{e}\\
\zeta\end{pmatrix}+\bar{G}$$
(A.20)
where
$$\displaystyle\bar{\mathbf{A}}^{c}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&0&1\end{pmatrix}\mathbf{A}^{c}\begin{pmatrix}\nu_{d}&\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&1\end{pmatrix}=\begin{pmatrix}\nu_{e}Q^{edc}\nu_{d}&\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}Q^{edc}\nu_{d}&\tensor{\underline{h}}{{}^{f}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&-\nu^{c}\end{pmatrix},$$
(A.29)
$$\displaystyle\bar{\mathcal{B}}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}\mathcal{B}\begin{pmatrix}\nu_{d}&\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&1\end{pmatrix}\quad\text{and}\quad\bar{G}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}G.$$
(A.38)
Proof.
Noting the decomposition
$$\displaystyle\begin{pmatrix}\zeta_{d}\\
\zeta\end{pmatrix}=Z\begin{pmatrix}-\nu^{e}\zeta_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\zeta_{e}\\
\zeta\end{pmatrix},$$
where
$$\displaystyle Z=\begin{pmatrix}\nu_{d}&\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&1\end{pmatrix},$$
the proof follows from applying
$$\displaystyle Z^{\text{tr}}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}$$
to (A.13) and using Lemmas A.1.1 and A.1.4 to obtain the stated equation.
∎
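The decomposition $\zeta_{d}=\nu_{d}(-\nu^{e}\zeta_{e})+\tensor{\underline{h}}{{}^{e}_{d}}\zeta_{e}$ underlying the matrices $Z$ and $Z^{\text{tr}}$ above can be sanity-checked numerically. The following sketch uses an illustrative flat metric and a coordinate-aligned $\nu^{a}$ (hypothetical stand-ins, not the conformal de Sitter background):

```python
import numpy as np

# Numerical check of the splitting used in the proof of Lemma A.1.5:
# zeta_d = nu_d (-nu^e zeta_e) + h^e_d zeta_e, with h^c_b = delta^c_b + nu^c nu_b.
g = np.diag([-1.0, 1.0, 1.0, 1.0])            # an illustrative Lorentzian metric g_ab
nu_up = np.array([1.0, 0.0, 0.0, 0.0])        # nu^a, normalized so g_ab nu^a nu^b = -1
nu_dn = g @ nu_up                              # nu_a

h = np.eye(4) + np.outer(nu_up, nu_dn)        # h[c, b] = h^c_b = delta^c_b + nu^c nu_b

zeta = np.array([0.7, -1.2, 0.4, 2.5])        # an arbitrary covector zeta_d
reassembled = nu_dn * (-(nu_up @ zeta)) + h.T @ zeta  # nu_d(-nu^e zeta_e) + h^e_d zeta_e

assert np.isclose(nu_dn @ nu_up, -1.0)        # nu is unit timelike
assert np.allclose(h @ nu_up, 0.0)            # h^c_b annihilates nu^b
assert np.allclose(reassembled, zeta)         # the splitting reassembles zeta
```

The last assertion is exactly the statement $ZZ^{\text{tr}}$ acts as the identity on the pair $(\zeta_{d},\zeta)$ after the normal/spatial split.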
A.2. Calculations for the proof of Theorem 3.1
The following three lemmas contain the detailed calculations needed to complete the proof of Theorem 3.1.
Lemma A.2.1.
The conformal Einstein equation (3.1.3) for $\xi^{a}$ can be expressed in first order form as
$$\displaystyle-\bar{\mathbf{A}}_{2}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}\tensor{p}{{}^{a}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}}\\
p^{a}\end{pmatrix}+\bar{\mathbf{A}}_{2}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}\tensor{p}{{}^{a}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}}\\
p^{a}\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{2}\begin{pmatrix}-\nu^{e}\tensor{p}{{}^{a}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{p}{{}^{a}_{e}}\\
p^{a}\end{pmatrix}+\bar{G}_{2}$$
(A.48)
where
$$\displaystyle\bar{\mathbf{A}}_{2}^{0}=\begin{pmatrix}-\lambda&0&0\\
0&h^{f\hat{e}}&0\\
0&0&\breve{\mathtt{F}}(-\lambda)\end{pmatrix},\quad\bar{\mathbf{A}}_{2}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},\quad\bar{G}_{2}=\begin{pmatrix}\nu_{e}\triangle^{ea}_{2}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}^{f}_{e}}\triangle^{ea}_{2}(t,\mathbf{U})\\
0\end{pmatrix}$$
$$\displaystyle\bar{\mathcal{B}}_{2}=\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)(-\lambda)&0&\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H}(-\lambda)\\
0&\frac{1}{\breve{\mathtt{K}}}\tensor{h}{{}^{f\hat{e}}}&0\\
\breve{\mathtt{F}}\frac{H}{\breve{\mathtt{B}}}(-\lambda)&0&\breve{\mathtt{F}}\left(\frac{1}{\breve{\mathtt{K}}}-1\right)(-\lambda)\end{pmatrix},$$
$\breve{\mathtt{F}}$ is a constant to be determined and the map $\triangle^{ea}_{2}(t,\mathbf{U})$, which is analytic for $(t,\mathbf{U})\in\bigl{(}-\iota,\frac{\pi}{H}\bigr{)}\times B_{R}(0)$ for $\iota,R>0$ small enough, is given by
$$\displaystyle\triangle^{ea}_{2}(t,\mathbf{U})=$$
$$\displaystyle\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)\frac{\breve{\mathtt{B}}}{H^{2}}p^{a}\breve{\mathtt{A}}m\nu^{e}+\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)\frac{\breve{\mathtt{A}}}{H}m\nu^{e}\nu^{d}\tensor{p}{{}^{a}_{d}}$$
$$\displaystyle-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{d}\nu^{e}\tensor{p}{{}^{a}_{d}}-\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}}\frac{\breve{\mathtt{A}}}{H^{2}}m\nu^{e}p^{a}+\frac{1}{\breve{\mathtt{K}}H}\breve{\mathtt{A}}m\nu^{b}\nu^{e}\tensor{p}{{}^{a}_{b}}-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\nu^{e}p^{b}\tensor{p}{{}^{a}_{b}}$$
$$\displaystyle-(n-2)\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)\nu^{c}\nu^{e}\tensor{p}{{}^{a}_{c}}-(n-2)\left(\frac{1}{\tan(Ht)}-\frac{1}{Ht}\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\nu^{e}p^{a}$$
$$\displaystyle-(n-2)\left(\frac{1}{(Ht)^{2}}-\frac{1}{\tan^{2}(Ht)}\right)\breve{\mathtt{B}}t\nu^{e}p^{a}+2\breve{\mathtt{A}}\breve{\mathtt{B}}\frac{t^{2}}{\sin^{2}(Ht)}m\nu^{e}p^{a}$$
$$\displaystyle+(n-2)\breve{\mathtt{B}}t\nu^{e}p^{a}-2P^{cd}\nu^{e}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}-2Q^{cd}\nu^{e}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}-\frac{2}{n-2}X^{c}X^{d}\nu^{e}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}$$
$$\displaystyle+2\Bigl{(}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{ab}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
Proof.
Making use of the definitions (1.32) and (1.34), and noting the identity
$$\displaystyle g^{cd}\underline{\nabla}_{c}\underline{\nabla}_{d}\xi^{a}=g^{cd}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{1}{Ht}\lambda p^{a}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{b}\tensor{p}{{}^{a}_{b}}-\frac{1}{\breve{\mathtt{K}}Ht}\lambda\nu^{b}\tensor{p}{{}^{a}_{b}}$$
$$\displaystyle=g^{cd}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}-\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}}\frac{1}{H^{2}t}p^{a}+\frac{1}{\breve{\mathtt{K}}Ht}\nu^{b}\tensor{p}{{}^{a}_{b}}+\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}}\frac{1}{H^{2}}\breve{\mathtt{A}}mp^{a}-\frac{1}{\breve{\mathtt{K}}H}\breve{\mathtt{A}}m\nu^{b}\tensor{p}{{}^{a}_{b}}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{b}\tensor{p}{{}^{a}_{b}},$$
we can express the conformal Einstein equation (3.1.3) for $\xi^{e}$ as
$$\displaystyle g^{cd}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}={}$$
$$\displaystyle\frac{n-2}{\tan(Ht)}\nu^{c}\tensor{p}{{}^{a}_{c}}-\frac{n-2}{\tan(Ht)}\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{a}+\frac{n-2}{\tan^{2}(Ht)}\breve{\mathtt{B}}tp^{a}+2\breve{\mathtt{A}}\breve{\mathtt{B}}\frac{t^{2}}{\sin^{2}(Ht)}mp^{a}-\frac{1}{\breve{\mathtt{K}}Ht}\nu^{b}\tensor{p}{{}^{a}_{b}}$$
$$\displaystyle+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{1}{Ht}p^{a}-\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}}\frac{\breve{\mathtt{A}}}{H^{2}}mp^{a}+\frac{\breve{\mathtt{A}}}{\breve{\mathtt{K}}H}m\nu^{b}\tensor{p}{{}^{a}_{b}}-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{b}\tensor{p}{{}^{a}_{b}}$$
$$\displaystyle+(n-2)\breve{\mathtt{B}}tp^{a}-2P^{cd}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}-2Q^{cd}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}-\frac{2}{n-2}X^{c}X^{d}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}$$
$$\displaystyle+2\Bigl{(}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{ab}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
Multiplying this equation by $\nu^{e}$, we find, after rearranging, that
$$\displaystyle\nu^{e}g^{cd}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}={}$$
$$\displaystyle\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{Ht}\nu^{e}\nu^{d}\tensor{p}{{}^{a}_{d}}-\left(\left(n-1-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H^{2}t}\nu^{e}p^{a}+\triangle^{ea}_{21}$$
where
$$\displaystyle\triangle^{ea}_{21}={}$$
$$\displaystyle\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}}\frac{\breve{\mathtt{A}}}{H^{2}}m\nu^{e}p^{a}+\frac{\breve{\mathtt{A}}}{\breve{\mathtt{K}}H}m\nu^{b}\nu^{e}\tensor{p}{{}^{a}_{b}}-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\nu^{e}p^{b}\tensor{p}{{}^{a}_{b}}$$
$$\displaystyle-(n-2)\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)\nu^{c}\nu^{e}\tensor{p}{{}^{a}_{c}}-(n-2)\left(\frac{1}{\tan(Ht)}-\frac{1}{Ht}\right)\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\nu^{e}p^{a}$$
$$\displaystyle-(n-2)\left(\frac{1}{(Ht)^{2}}-\frac{1}{\tan^{2}(Ht)}\right)\breve{\mathtt{B}}t\nu^{e}p^{a}+2\breve{\mathtt{A}}\breve{\mathtt{B}}\frac{t^{2}}{\sin^{2}(Ht)}m\nu^{e}p^{a}$$
$$\displaystyle+(n-2)\breve{\mathtt{B}}t\nu^{e}p^{a}-2P^{cd}\nu^{e}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}-2Q^{cd}\nu^{e}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}-\frac{2}{n-2}X^{c}X^{d}\nu^{e}\nu_{c}\tensor{\underline{h}}{{}^{a}_{d}}$$
$$\displaystyle+2\Bigl{(}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{bd}g^{a\hat{a}}-\frac{1}{2(n-2)}\nu_{a}\tensor{\underline{h}}{{}^{e}_{b}}g^{ab}g^{d\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
From the above expressions and the identities (3.28), we then get
$$\displaystyle Q^{edc}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}-\left(\nu^{d}g^{ec}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}-\nu^{c}g^{ed}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}\right)=\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{Ht}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})\tensor{p}{{}^{a}_{d}}$$
$$\displaystyle\hskip 28.45274pt+\left(\left(n-1-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H^{2}t}Q^{edc}\nu_{c}\nu_{d}p^{a}+\triangle^{ea}_{22}$$
(A.49)
where
$$\displaystyle\triangle^{ea}_{22}={}$$
$$\displaystyle\left(n-2-\left(n-1-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{\breve{\mathtt{K}}}\right)\frac{\breve{\mathtt{A}}\breve{\mathtt{B}}}{H^{2}}mp^{a}\nu^{e}+\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)\frac{\breve{\mathtt{A}}}{H}m\nu^{e}\nu^{d}\tensor{p}{{}^{a}_{d}}+\triangle^{ea}_{21}.$$
Next, we observe, with the help of (3.22), that
$$\displaystyle\nu^{d}g^{ec}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}-\nu^{c}g^{ed}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}=\nu^{d}g^{ec}(\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}-\underline{\nabla}_{d}\tensor{p}{{}^{a}_{c}})$$
$$\displaystyle={}$$
$$\displaystyle\nu^{d}g^{ec}\left(\underline{\nabla}_{c}\underline{\nabla}_{d}\xi^{a}-\underline{\nabla}_{c}\frac{\breve{\mathtt{B}}p^{a}}{\breve{\mathtt{K}}H}\nu_{d}-\underline{\nabla}_{d}\underline{\nabla}_{c}\xi^{a}+\underline{\nabla}_{d}\frac{\breve{\mathtt{B}}p^{a}}{\breve{\mathtt{K}}H}\nu_{c}\right)$$
$$\displaystyle={}$$
$$\displaystyle\nu^{d}g^{ec}\left(\underline{\nabla}_{c}\underline{\nabla}_{d}\xi^{a}-\underline{\nabla}_{d}\underline{\nabla}_{c}\xi^{a}\right)+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}g^{ec}\underline{\nabla}_{c}p^{a}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}\nu_{c}\nu^{d}g^{ec}\underline{\nabla}_{d}p^{a}$$
$$\displaystyle={}$$
$$\displaystyle-\nu^{d}g^{ec}\tensor{\underline{R}}{{}_{cdb}^{a}}\breve{\mathtt{B}}tp^{b}+\frac{1}{\breve{\mathtt{K}}Ht}g^{ec}\tensor{\underline{h}}{{}^{d}_{c}}\tensor{p}{{}^{a}_{d}}=\frac{1}{\breve{\mathtt{K}}Ht}\tensor{p}{{}^{a}_{d}}h^{de}-\frac{1}{\breve{\mathtt{K}}H}\breve{\mathtt{B}}p^{d}\nu^{e}\tensor{p}{{}^{a}_{d}},$$
and that $\frac{1}{\breve{\mathtt{K}}Ht}\tensor{p}{{}^{a}_{d}}h^{de}=\frac{1}{\breve{\mathtt{K}}Ht}\tensor{p}{{}^{a}_{d}}Q^{fgc}\nu_{c}\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}}$.
Using these relations along with $n-2-\left(n-1-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{\breve{\mathtt{K}}}=\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)$ allows us to express (A.49) as
$$\displaystyle Q^{edc}\underline{\nabla}_{c}\tensor{p}{{}^{a}_{d}}=\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{Ht}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})\tensor{p}{{}^{a}_{d}}$$
$$\displaystyle\qquad+\frac{1}{\breve{\mathtt{K}}Ht}Q^{fgc}\nu_{c}\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}}\tensor{p}{{}^{a}_{d}}+\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H^{2}t}Q^{edc}\nu_{c}\nu_{d}p^{a}+\triangle^{ea}_{2}$$
(A.50)
where
$$\displaystyle\triangle^{ea}_{2}=-\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{d}\nu^{e}\tensor{p}{{}^{a}_{d}}+\triangle^{ea}_{22}.$$
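For completeness, the coefficient identity quoted before (A.50) can be verified by direct expansion:

$$\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)=n-2-\frac{1}{\breve{\mathtt{K}}}-\frac{n-2}{\breve{\mathtt{K}}}+\frac{1}{\breve{\mathtt{K}}^{2}}=n-2-\frac{n-1}{\breve{\mathtt{K}}}+\frac{1}{\breve{\mathtt{K}}^{2}}=n-2-\left(n-1-\frac{1}{\breve{\mathtt{K}}}\right)\frac{1}{\breve{\mathtt{K}}}.$$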
On the other hand, by Lemma A.1.4, we have that $Q^{ebc}\nu_{b}\nu_{c}\underline{\nabla}_{e}=\lambda\nu^{e}\underline{\nabla}_{e}$. Using this, we can write (3.22) as
$$\displaystyle\breve{\mathtt{F}}Q^{ebc}\nu_{b}\nu_{c}\underline{\nabla}_{e}p^{a}=\breve{\mathtt{F}}Q^{ebc}\nu_{b}\nu_{c}\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\frac{1}{Ht}p^{a}\nu_{e}+\breve{\mathtt{F}}Q^{ebc}\nu_{b}\nu_{c}\frac{1}{\breve{\mathtt{B}}t}\tensor{p}{{}^{a}_{e}}$$
(A.51)
where $\breve{\mathtt{F}}$ is a constant to be determined.
We then collect (A.50) and (A.51) together to get the system
$$\displaystyle\mathbf{A}_{2}^{c}\underline{\nabla}_{c}\begin{pmatrix}\tensor{p}{{}^{a}_{d}}\\
p^{a}\end{pmatrix}=\frac{1}{Ht}\mathcal{B}_{2}\begin{pmatrix}\tensor{p}{{}^{a}_{d}}\\
p^{a}\end{pmatrix}+\begin{pmatrix}\triangle^{ea}_{2}\\
0\end{pmatrix}$$
(A.58)
where
$$\displaystyle\mathbf{A}_{2}^{c}=\begin{pmatrix}Q^{edc}&0\\
0&\breve{\mathtt{F}}Q^{cbe}\nu_{b}\nu_{e}\end{pmatrix},$$
(A.61)
and
$$\displaystyle\mathcal{B}_{2}=Q^{abc}\nu_{c}\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)(\tensor{\delta}{{}^{e}_{a}}\tensor{\delta}{{}^{d}_{b}}-\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}})+\frac{1}{\breve{\mathtt{K}}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}}&\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H}\tensor{\delta}{{}^{e}_{a}}\nu_{b}\\
\breve{\mathtt{F}}\frac{H}{\breve{\mathtt{B}}}\tensor{\delta}{{}^{d}_{a}}\nu_{b}&\breve{\mathtt{F}}\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\nu_{b}\nu_{a}\end{pmatrix}$$
$$\displaystyle=\begin{pmatrix}Q^{abc}\nu_{c}&0\\
0&\breve{\mathtt{F}}Q^{abc}\nu_{b}\nu_{c}\end{pmatrix}\begin{pmatrix}\left(n-2-\frac{1}{\breve{\mathtt{K}}}\right)(\tensor{\delta}{{}^{e}_{a}}\tensor{\delta}{{}^{d}_{b}}-\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}})+\frac{1}{\breve{\mathtt{K}}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}}&\left(1-\frac{1}{\breve{\mathtt{K}}}\right)\left(\frac{1}{\breve{\mathtt{K}}}-n+2\right)\frac{\breve{\mathtt{B}}}{H}\tensor{\delta}{{}^{e}_{a}}\nu_{b}\\
\frac{H}{\breve{\mathtt{B}}}\tensor{\delta}{{}^{d}_{a}}&\left(\frac{1}{\breve{\mathtt{K}}}-1\right)\nu_{a}\end{pmatrix}.$$
To complete the proof, we apply Lemma A.1.5 to (A.58) to get the system (A.48).
∎
Lemma A.2.2.
The conformal Einstein equation (3.1.4) for $\mathfrak{h}^{ab}-\underline{h}^{ab}$ can be expressed in first order form as
$$\displaystyle-\bar{\mathbf{A}}_{3}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}+\bar{\mathbf{A}}_{3}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{3}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}+\bar{G}_{3}$$
(A.71)
where
$$\displaystyle\bar{\mathbf{A}}_{3}^{0}={}$$
$$\displaystyle\begin{pmatrix}-\lambda&0&0\\
0&h^{f\hat{e}}&0\\
0&0&-\lambda\end{pmatrix},\quad\bar{\mathbf{A}}_{3}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{3}={}$$
$$\displaystyle\begin{pmatrix}-\lambda\left(n-2\right)&0&0\\
0&0&0\\
0&0&0\end{pmatrix},\quad\bar{G}_{3}=\begin{pmatrix}\nu_{e}\triangle^{e\hat{a}\hat{b}}_{3}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}^{f}_{e}}\triangle^{e\hat{a}\hat{b}}_{3}(t,\mathbf{U})\\
\lambda\nu^{d}\tensor{s}{{}^{\hat{a}\hat{b}}_{d}}\end{pmatrix},$$
and the map $\triangle^{eab}_{3}(t,\mathbf{U})$, which is analytic for $(t,\mathbf{U})\in\bigl{(}-\iota,\frac{\pi}{H}\bigr{)}\times B_{R}(0)$ for $\iota,R>0$ small enough, is given by
$$\displaystyle\triangle^{eab}_{3}(t,\mathbf{U})={}$$
$$\displaystyle\frac{n-2}{H}\breve{\mathtt{A}}m\nu^{e}\nu^{d}\tensor{s}{{}^{ab}_{d}}-\left(\frac{n-2}{Ht}-\frac{n-2}{\tan(Ht)}\right)\nu^{c}\nu^{e}\tensor{s}{{}^{ab}_{c}}$$
$$\displaystyle+\nu^{e}g^{cd}\underline{\nabla}_{c}(S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}})\underline{\nabla}_{d}h^{gf}-2\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}}\tensor{\underline{h}}{{}^{g}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}P^{ab}-2\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}}\tensor{\underline{h}}{{}^{g}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}Q^{ab}$$
$$\displaystyle-\frac{2}{n-2}\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}}\tensor{\underline{h}}{{}^{g}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}X^{a}X^{b}-2(n-2)S^{-1}\nu^{e}\tensor{\mathcal{L}}{{}^{ab}_{cd}}s^{cd}$$
$$\displaystyle+2\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{\hat{e}f}}\tensor{\underline{h}}{{}^{\hat{e}}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}g^{bd}g^{a\hat{a}}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
Proof.
Using the definitions (1.35)–(1.36) for $\tensor{s}{{}^{ab}_{d}}$ and $s^{ab}$, we can express the conformal Einstein equation (3.1.4) for $\mathfrak{h}^{ab}-\underline{h}^{ab}$ as
$$\displaystyle g^{cd}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}={}$$
$$\displaystyle\frac{n-2}{\tan(Ht)}\nu^{c}\tensor{s}{{}^{ab}_{c}}+g^{cd}\underline{\nabla}_{c}(S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}})\underline{\nabla}_{d}h^{ef}-2S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}P^{ab}$$
$$\displaystyle-2S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}Q^{ab}-\frac{2}{n-2}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}X^{a}X^{b}-2(n-2)S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{cd}}s^{cd}$$
$$\displaystyle+2S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{ef}}\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}g^{bd}g^{a\hat{a}}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}),$$
which, in turn, implies by (3.24) that
$$\displaystyle Q^{edc}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}=\nu^{d}g^{ec}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}-\nu^{c}g^{ed}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}+\frac{n-2}{Ht}\nu^{d}\nu^{e}\tensor{s}{{}^{ab}_{d}}+\triangle^{eab}_{31}$$
(A.72)
where
$$\displaystyle\triangle^{eab}_{31}={}$$
$$\displaystyle-\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\nu^{c}\nu^{e}\tensor{s}{{}^{ab}_{c}}+\nu^{e}g^{cd}\underline{\nabla}_{c}(S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}})\underline{\nabla}_{d}h^{gf}$$
$$\displaystyle-2\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}}\tensor{\underline{h}}{{}^{g}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}P^{ab}-2\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}}\tensor{\underline{h}}{{}^{g}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}Q^{ab}$$
$$\displaystyle-\frac{2}{n-2}\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{gf}}\tensor{\underline{h}}{{}^{g}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}X^{a}X^{b}-2(n-2)S^{-1}\nu^{e}\tensor{\mathcal{L}}{{}^{ab}_{cd}}s^{cd}$$
$$\displaystyle+2\nu^{e}S^{-1}\tensor{\mathcal{L}}{{}^{ab}_{\hat{e}f}}\tensor{\underline{h}}{{}^{\hat{e}}_{a}}\tensor{\underline{h}}{{}^{f}_{b}}g^{bd}g^{a\hat{a}}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
However, due to (3.28) and the commutator identity
$$\displaystyle\nu^{d}g^{ec}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}-\nu^{c}g^{ed}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}=\nu^{d}g^{ec}(\underline{\nabla}_{c}\underline{\nabla}_{d}s^{ab}-\underline{\nabla}_{d}\underline{\nabla}_{c}s^{ab})=-\nu^{d}g^{ec}(\tensor{\underline{R}}{{}_{cdf}^{a}}s^{fb}+\tensor{\underline{R}}{{}_{cdf}^{b}}s^{fa})=0,$$
it follows that (A.72) can be equivalently written as
$$\displaystyle Q^{edc}\underline{\nabla}_{c}\tensor{s}{{}^{ab}_{d}}=\frac{n-2}{Ht}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})\tensor{s}{{}^{ab}_{d}}+\triangle^{eab}_{3}$$
(A.73)
where
$$\displaystyle\triangle^{eab}_{3}={}$$
$$\displaystyle\frac{n-2}{H}\breve{\mathtt{A}}m\nu^{e}\nu^{d}\tensor{s}{{}^{ab}_{d}}+\triangle^{eab}_{31}.$$
On the other hand, by Lemma A.1.4, we have that $Q^{ebc}\nu_{b}\nu_{c}\underline{\nabla}_{e}=\lambda\nu^{e}\underline{\nabla}_{e}$. Using this, we see from (3.23) that
$$Q^{ebc}\nu_{b}\nu_{c}\underline{\nabla}_{e}s^{fg}=Q^{ebc}\nu_{b}\nu_{c}\tensor{s}{{}^{fg}_{e}}.$$
(A.74)
Collecting (A.73) and (A.74) together gives
$$\displaystyle\mathbf{A}_{3}^{c}\underline{\nabla}_{c}\begin{pmatrix}\tensor{s}{{}^{\hat{a}\hat{b}}_{d}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}=\frac{1}{Ht}\mathcal{B}_{3}\begin{pmatrix}\tensor{s}{{}^{\hat{a}\hat{b}}_{d}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}+\begin{pmatrix}\triangle^{e\hat{a}\hat{b}}_{3}\\
Q^{ebc}\nu_{b}\nu_{c}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\end{pmatrix}$$
(A.81)
where
$$\displaystyle\mathbf{A}_{3}^{c}=\begin{pmatrix}Q^{edc}&0\\
0&Q^{cbe}\nu_{b}\nu_{e}\end{pmatrix}$$
and
$$\displaystyle\mathcal{B}_{3}=Q^{abc}\nu_{c}\begin{pmatrix}\left(n-2\right)(\tensor{\delta}{{}^{e}_{a}}\tensor{\delta}{{}^{d}_{b}}-\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}})&0\\
0&0\end{pmatrix}=\begin{pmatrix}-\lambda\left(n-2\right)\nu^{e}\nu^{d}&0\\
0&0\end{pmatrix}.$$
Then by applying Lemma A.1.5 to (A.81), we get
$$\displaystyle\bar{\mathbf{A}}_{3}^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{3}\begin{pmatrix}-\nu^{e}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\\
s^{\hat{a}\hat{b}}\end{pmatrix}+\bar{G}_{3}$$
(A.88)
where
$$\displaystyle\bar{\mathbf{A}}_{3}^{c}=\begin{pmatrix}\nu_{e}Q^{edc}\nu_{d}&\nu_{e}Q^{edc}\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}Q^{edc}\nu_{d}&\tensor{\underline{h}}{{}^{f}_{e}}Q^{edc}\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&Q^{cbe}\nu_{b}\nu_{e}\end{pmatrix},\quad\bar{G}_{3}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}\begin{pmatrix}\triangle^{e\hat{a}\hat{b}}_{3}\\
Q^{ebc}\nu_{b}\nu_{c}\tensor{s}{{}^{\hat{a}\hat{b}}_{e}}\end{pmatrix},$$
(A.97)
and
$$\displaystyle\bar{\mathcal{B}}_{3}=\begin{pmatrix}\nu_{e}&0\\
\tensor{\underline{h}}{{}^{f}_{e}}&0\\
0&1\end{pmatrix}\mathcal{B}_{3}\begin{pmatrix}\nu_{d}&\tensor{\underline{h}}{{}^{\hat{e}}_{d}}&0\\
0&0&1\end{pmatrix}=\begin{pmatrix}-\lambda\left(n-2\right)&0&0\\
0&0&0\\
0&0&0\end{pmatrix}.$$
(A.106)
We then complete the proof by setting $\bar{\mathbf{A}}_{3}^{0}=\bar{\mathbf{A}}_{3}^{c}\nu_{c}$, which allows us to write (A.88) in the stated form (A.71).
∎
Lemma A.2.3.
The conformal Einstein equation (3.1.4) for $q$ can be expressed in first order form as
$$\displaystyle-\bar{\mathbf{A}}_{4}^{0}\nu^{c}\underline{\nabla}_{c}\begin{pmatrix}-\nu^{e}s_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e}\\
s\end{pmatrix}+\bar{\mathbf{A}}_{4}^{c}\tensor{\underline{h}}{{}^{b}_{c}}\underline{\nabla}_{b}\begin{pmatrix}-\nu^{e}s_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e}\\
s\end{pmatrix}=\frac{1}{Ht}\bar{\mathcal{B}}_{4}\begin{pmatrix}-\nu^{e}s_{e}\\
\tensor{\underline{h}}{{}^{e}_{\hat{e}}}s_{e}\\
s\end{pmatrix}+\bar{G}_{4}$$
where
$$\displaystyle\bar{\mathbf{A}}_{4}^{0}={}$$
$$\displaystyle\begin{pmatrix}-\lambda&0&0\\
0&h^{f\hat{e}}&0\\
0&0&-\lambda\end{pmatrix},\quad\bar{\mathbf{A}}_{4}^{c}\tensor{\underline{h}}{{}^{b}_{c}}=\begin{pmatrix}-2\xi^{b}&-\tensor{h}{{}^{\hat{e}b}}&0\\
-\tensor{h}{{}^{fb}}&0&0\\
0&0&0\end{pmatrix},$$
$$\displaystyle\bar{\mathcal{B}}_{4}={}$$
$$\displaystyle\begin{pmatrix}-\lambda\left(n-2\right)&0&0\\
0&0&0\\
0&0&0\end{pmatrix},\quad\bar{G}_{4}=\begin{pmatrix}\nu_{e}\triangle^{e}_{4}(t,\mathbf{U})\\
\tensor{\underline{h}}{{}^{f}_{e}}\triangle^{e}_{4}(t,\mathbf{U})\\
\lambda\nu^{d}s_{d}\end{pmatrix},$$
and the map $\triangle^{e}_{4}(t,\mathbf{U})$, which is analytic for $(t,\mathbf{U})\in\bigl{(}-\iota,\frac{\pi}{H}\bigr{)}\times B_{R}(0)$ for $\iota,R>0$ small enough, is given by
$$\displaystyle\triangle^{e}_{4}(t,\mathbf{U})={}$$
$$\displaystyle\frac{(n-2)}{H}\breve{\mathtt{A}}m\nu^{e}\nu^{d}s_{d}-2P^{ab}\nu^{e}\nu_{a}\nu_{b}-2Q^{ab}\nu^{e}\nu_{a}\nu_{b}-\frac{2}{n-2}X^{a}X^{b}\nu^{e}\nu_{a}\nu_{b}$$
$$\displaystyle-\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\nu^{c}\nu^{e}s_{c}+2\nu^{e}\breve{\mathtt{A}}^{2}\frac{t^{2}}{\sin^{2}(Ht)}m^{2}-2(n-2)\nu^{e}\breve{\mathtt{A}}tm$$
$$\displaystyle+\frac{3-n}{(n-1)}\nu^{e}g^{cd}\underline{\nabla}_{c}h_{gf}\underline{\nabla}_{d}h^{gf}-\frac{2(3-n)}{n-1}\nu^{e}h_{ab}P^{ab}-\frac{2(3-n)}{n-1}\nu^{e}h_{ab}Q^{ab}$$
$$\displaystyle-\frac{2(3-n)}{(n-1)(n-2)}\nu^{e}h_{ab}X^{a}X^{b}+2\nu^{e}\Bigl{(}\frac{3-n}{n-1}h_{ab}g^{bd}g^{a\hat{a}}-\frac{3-n+\lambda}{2(n-2)}g^{d\hat{a}}$$
$$\displaystyle+\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}\Bigr{)}g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
Proof.
Using the definitions (1.31) and (1.37)–(1.38), we can express the conformal Einstein equation (3.1.4) for $q$ as
$$\displaystyle g^{cd}\underline{\nabla}_{c}s_{d}={}$$
$$\displaystyle\frac{(n-2)}{Ht}\nu^{c}s_{c}+\triangle_{4}$$
where
$$\displaystyle\triangle_{4}={}$$
$$\displaystyle-\left(\frac{1}{Ht}-\frac{1}{\tan(Ht)}\right)(n-2)\nu^{c}s_{c}+\frac{2\breve{\mathtt{A}}^{2}t^{2}}{\sin^{2}(Ht)}m^{2}-2(n-2)\breve{\mathtt{A}}tm$$
$$\displaystyle+\frac{3-n}{(n-1)}g^{cd}\underline{\nabla}_{c}h_{ef}\underline{\nabla}_{d}h^{ef}-\frac{2(3-n)}{n-1}h_{ab}P^{ab}-\frac{2(3-n)}{n-1}h_{ab}Q^{ab}$$
$$\displaystyle-\frac{2(3-n)}{(n-1)(n-2)}h_{ab}X^{a}X^{b}-2P^{ab}\nu_{a}\nu_{b}-2Q^{ab}\nu_{a}\nu_{b}-\frac{2}{n-2}X^{a}X^{b}\nu_{a}\nu_{b}$$
$$\displaystyle+2\Bigl{(}\frac{3-n}{n-1}h_{ab}g^{bd}g^{a\hat{a}}-\frac{3-n+\lambda}{2(n-2)}g^{d\hat{a}}+\nu_{a}\nu_{b}g^{bd}g^{a\hat{a}}\Bigr{)}$$
$$\displaystyle\qquad\qquad\cdot g^{c\hat{c}}(H_{\hat{a}\hat{c}}-E_{\hat{a}}\nu_{\hat{c}}+\nu_{\hat{a}}E_{\hat{c}})(H_{dc}-E_{d}\nu_{c}+\nu_{d}E_{c}).$$
With the help of (3.28) and the commutator identity
$$\nu^{d}g^{ec}\underline{\nabla}_{c}s_{d}-\nu^{c}g^{ed}\underline{\nabla}_{c}s_{d}=\nu^{d}g^{ec}(\underline{\nabla}_{c}\underline{\nabla}_{d}s-\underline{\nabla}_{d}\underline{\nabla}_{c}s)=0,$$
it then follows that
$$\displaystyle Q^{edc}\underline{\nabla}_{c}s_{d}-\left(\nu^{d}g^{ec}\underline{\nabla}_{c}s_{d}-\nu^{c}g^{ed}\underline{\nabla}_{c}s_{d}\right)=\frac{n-2}{Ht}Q^{fgc}\nu_{c}(\tensor{\delta}{{}^{e}_{f}}\tensor{\delta}{{}^{d}_{g}}-\tensor{\underline{h}}{{}^{e}_{f}}\tensor{\underline{h}}{{}^{d}_{g}})s_{d}+\triangle^{e}_{4}$$
where
$$\displaystyle\triangle^{e}_{4}={}$$
$$\displaystyle\frac{(n-2)}{H}\breve{\mathtt{A}}m\nu^{e}\nu^{d}s_{d}+\nu^{e}\triangle_{4}.$$
On the other hand, by Lemma A.1.4, we have that $Q^{ebc}\nu_{b}\nu_{c}\underline{\nabla}_{e}=\lambda\nu^{e}\underline{\nabla}_{e}$. Using this, we see from (3.23) that
$$\displaystyle Q^{ebc}\nu_{b}\nu_{c}\underline{\nabla}_{e}s=Q^{ebc}\nu_{b}\nu_{c}s_{e}.$$
Collecting together the above two equations gives
$$\displaystyle\mathbf{A}_{4}^{c}\underline{\nabla}_{c}\begin{pmatrix}s_{d}\\
s\end{pmatrix}=\frac{1}{Ht}\mathcal{B}_{4}\begin{pmatrix}s_{d}\\
s\end{pmatrix}+\begin{pmatrix}\triangle^{e}_{4}\\
Q^{ebc}\nu_{b}\nu_{c}s_{e}\end{pmatrix},$$
(A.113)
where
$$\displaystyle\mathbf{A}_{4}^{c}=\begin{pmatrix}Q^{edc}&0\\
0&Q^{cbe}\nu_{b}\nu_{e}\end{pmatrix}$$
and
$$\displaystyle\mathcal{B}_{4}=Q^{abc}\nu_{c}\begin{pmatrix}\left(n-2\right)(\tensor{\delta}{{}^{e}_{a}}\tensor{\delta}{{}^{d}_{b}}-\tensor{\underline{h}}{{}^{e}_{a}}\tensor{\underline{h}}{{}^{d}_{b}})&0\\
0&0\end{pmatrix}=\begin{pmatrix}-\lambda\left(n-2\right)\nu^{e}\nu^{d}&0\\
0&0\end{pmatrix}.$$
To complete the proof, we can then proceed in the same way as in the final step of the proof of Lemma A.2.2. ∎
Appendix B Expansions and inequalities
B.1. Expansions
We recall the well-known Neumann series expansion.
Lemma B.1.1.
If $A$ and $B$ are $n\times n$ matrices with $A$ invertible, then there exists an $\epsilon_{0}>0$ such that the map
$$(-\epsilon_{0},\epsilon_{0})\ni\epsilon\longmapsto(A+\epsilon B)^{-1}\in\mathbb{M}_{n\times n}$$
is analytic and admits the series representation
$$\displaystyle(A+\epsilon B)^{-1}=A^{-1}+\sum_{k=1}^{\infty}(-1)^{k}\epsilon^{k}(A^{-1}B)^{k}A^{-1},\quad|\epsilon|<\epsilon_{0}.$$
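As a purely illustrative aside (not part of the paper's argument), the Neumann series can be checked numerically. The minimal Python sketch below, with arbitrary $2\times 2$ example matrices, compares a truncated Neumann sum against the exact inverse:

```python
# Numeric sanity check of the Neumann series from Lemma B.1.1:
#   (A + eps*B)^{-1} = A^{-1} + sum_{k>=1} (-1)^k eps^k (A^{-1}B)^k A^{-1}.
# Plain-list 2x2 linear algebra; the matrices A, B below are arbitrary examples.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

def neumann_sum(A, B, eps, terms):
    Ainv = inv2(A)
    M = matmul(Ainv, B)                     # A^{-1} B
    total = [row[:] for row in Ainv]        # k = 0 term
    power = [[1.0, 0.0], [0.0, 1.0]]        # holds (A^{-1} B)^k
    for k in range(1, terms):
        power = matmul(power, M)
        term = matmul(power, Ainv)          # (A^{-1} B)^k A^{-1}
        for i in range(2):
            for j in range(2):
                total[i][j] += (-1) ** k * eps**k * term[i][j]
    return total

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, -1.0], [2.0, 0.5]]
eps = 0.1                                   # small enough for convergence
approx = neumann_sum(A, B, eps, 30)
exact = inv2([[A[i][j] + eps * B[i][j] for j in range(2)] for i in range(2)])
err = max(abs(approx[i][j] - exact[i][j]) for i in range(2) for j in range(2))
```

For this choice of $A$, $B$ and $\epsilon$, the truncation error after 30 terms is at the level of floating-point round-off.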
In the following proposition, we use the Neumann series expansion to derive some useful geometric expansion formulae. All of the geometric objects are as defined in §1.2 and §1.7.2.
Proposition B.1.2.
There exists a constant $\epsilon_{0}>0$ such that if $|g^{ab}-\underline{g}^{ab}|<\epsilon_{0}$, then
$$\displaystyle g_{ab}=$$
$$\displaystyle{}\underline{g}_{ab}+\mathcal{S}_{ab}(t,m,p^{d},s,s^{\hat{a}\hat{b}}),$$
(B.1)
$$\displaystyle\underline{\nabla}_{c}h^{\hat{a}\hat{d}}=$$
$$\displaystyle{}\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}(t,m,m_{d},s,s_{d},s^{ab},\tensor{s}{{}^{ab}_{d}}),$$
(B.2)
and
$$\displaystyle\underline{\nabla}_{c}g^{\hat{a}\hat{d}}=$$
$$\displaystyle{}\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}(t,m,m_{d},s,s_{d},p^{a},\tensor{p}{{}^{a}_{d}},s^{ab},\tensor{s}{{}^{ab}_{d}}),$$
(B.3)
where the $\mathcal{S}$-maps are analytic in all their variables and satisfy $\mathcal{S}(t,0)=0$.
Proof.
By (1.31), (1.37) and (1.39), we have
$$S=\exp\Bigl{(}\frac{q-(\lambda+1)}{3-n}\Bigr{)}=\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)},$$
and hence, after differentiating, we get
$$\displaystyle\underline{\nabla}_{c}S={}$$
$$\displaystyle S\underline{\nabla}_{c}\ln S=S\underline{\nabla}_{c}\Bigl{(}\frac{q-(\lambda+1)}{3-n}\Bigr{)}$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{3-n}\Bigl{(}s_{c}-m_{c}-\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{c}\Bigr{)}\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)}.$$
With the help of these relations, we then observe that
$$\displaystyle\underline{\nabla}_{c}h^{\hat{a}\hat{d}}={}$$
$$\displaystyle\underline{\nabla}_{c}(S\mathfrak{h}^{\hat{a}\hat{d}})=S\underline{\nabla}_{c}\mathfrak{h}^{\hat{a}\hat{d}}+\mathfrak{h}^{\hat{a}\hat{d}}\underline{\nabla}_{c}S$$
$$\displaystyle={}$$
$$\displaystyle\frac{1}{3-n}\Bigl{(}s_{c}-m_{c}-\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{c}\Bigr{)}\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)}s^{\hat{a}\hat{d}}$$
$$\displaystyle+\frac{1}{3-n}\Bigl{(}s_{c}-m_{c}-\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{c}\Bigr{)}\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)}\underline{h}^{\hat{a}\hat{d}}$$
$$\displaystyle+\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)}\tensor{s}{{}^{\hat{a}\hat{d}}_{c}}.$$
Letting $\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}(t,m,m_{d},s,s_{d},s^{ab},\tensor{s}{{}^{ab}_{d}})$
denote the right hand side of the above expression, it is clear that $\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}$ is analytic in all its variables and satisfies $\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}(t,0)=0$, which establishes the validity of (B.2).
Next, we observe that
$$\displaystyle g^{cd}-\underline{g}^{cd}={}$$
$$\displaystyle(\lambda+1)\nu^{c}\nu^{d}-\xi^{d}\nu^{c}-\xi^{c}\nu^{d}+(S\mathfrak{h}^{cd}-\underline{h}^{cd})$$
$$\displaystyle={}$$
$$\displaystyle\breve{\mathtt{A}}tm\nu^{c}\nu^{d}-\breve{\mathtt{B}}tp^{d}\nu^{c}-\breve{\mathtt{B}}tp^{c}\nu^{d}$$
$$\displaystyle+\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)}s^{cd}+\Bigl{[}\exp\Bigl{(}\frac{s-\breve{\mathtt{A}}tm}{3-n}\Bigr{)}-1\Bigr{]}\underline{h}^{cd}.$$
With the help of this expression, (B.1) follows from applying Lemma B.1.1 to
$g_{ab}=(g^{cd})^{-1}=\bigl{[}\underline{g}^{cd}+(g^{cd}-\underline{g}^{cd})\bigr{]}^{-1}$.
Finally, to verify the remaining expression (B.3), we differentiate $g^{ab}=h^{ab}-2\nu^{(a}\xi^{b)}+\lambda\nu^{a}\nu^{b}$ to get
$$\displaystyle\underline{\nabla}_{c}g^{ab}={}$$
$$\displaystyle\underline{\nabla}_{c}h^{ab}-2\nu^{(a}\underline{\nabla}_{c}\xi^{b)}+\nu^{a}\nu^{b}\underline{\nabla}_{c}\lambda$$
$$\displaystyle={}$$
$$\displaystyle\underline{\nabla}_{c}h^{ab}-2\nu^{(a}\Bigl{(}\tensor{p}{{}^{b)}_{c}}+\frac{\breve{\mathtt{B}}}{\breve{\mathtt{K}}H}p^{b)}\nu_{c}\Bigr{)}+\nu^{a}\nu^{b}\Bigl{(}m_{c}+\frac{\breve{\mathtt{A}}}{\breve{\mathtt{J}}H}m\nu_{c}\Bigr{)},$$
where in deriving this we have used (1.30), (1.31), (1.32) and (1.34). Letting
$\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}(t,m,m_{d},s,s_{d},p^{a},\tensor{p}{{}^{a}_{d}},s^{ab},\tensor{s}{{}^{ab}_{d}})$
denote the right hand side of the above expression, it is clear that $\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}$ is analytic in all its variables and satisfies $\tensor{\mathcal{S}}{{}^{\hat{a}\hat{d}}_{c}}(t,0)=0$, which establishes the validity of (B.3) and completes the proof. ∎
B.2. Young’s inequality
Lemma B.2.1 (Young’s inequality for scalars).
Suppose $a,b\in\mathbb{R}$ and $\epsilon>0$. Then
$$|ab|\leq\frac{\epsilon}{2}a^{2}+\frac{1}{2\epsilon}b^{2}.$$
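This is the elementary bound obtained from $\bigl(\sqrt{\epsilon}\,|a|-|b|/\sqrt{\epsilon}\bigr)^{2}\geq 0$; as an illustrative numeric spot-check (with arbitrary random samples, not part of the proof), in Python:

```python
# Spot-check of |a*b| <= (eps/2)*a^2 + (1/(2*eps))*b^2,
# which follows from (sqrt(eps)*|a| - |b|/sqrt(eps))^2 >= 0.
import random

random.seed(0)
ok = True
for _ in range(1000):
    a = random.uniform(-10.0, 10.0)
    b = random.uniform(-10.0, 10.0)
    eps = random.uniform(0.01, 10.0)
    if abs(a * b) > eps / 2 * a**2 + 1 / (2 * eps) * b**2 + 1e-12:
        ok = False
```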
Lemma B.2.2 (Young type inequalities for tensors).
Suppose $\underline{h}_{ab}\in T_{2}^{0}\Sigma$ is a positive definite metric and $\underline{h}^{ab}\in T^{2}_{0}\Sigma$ is the inverse of $\underline{h}_{cd}$, $X_{dc}\in T^{0}_{2}\Sigma$ is anti-symmetric, $Y^{a}\in T\Sigma$, $Z_{c}\in T^{\ast}\Sigma$, $A^{d}\in T\Sigma$, $A^{abc}\in T^{3}_{0}\Sigma$, $\tensor{A}{{}^{d}_{c}}\in T^{1}_{1}\Sigma$ and $\delta>0$. Then
$$\displaystyle|X_{dc}A^{d}Y^{c}|\leq$$
$$\displaystyle\frac{\delta}{2}\underline{h}^{ad}\underline{h}^{bc}X_{ab}X_{dc}+\frac{1}{2\delta}\underline{h}_{bd}A^{d}A^{b}\underline{h}_{ac}Y^{a}Y^{c},$$
(B.4)
$$\displaystyle|X_{dc}A^{adc}Z_{a}|\leq$$
$$\displaystyle\frac{\delta}{2}\underline{h}^{ad}\underline{h}^{bc}X_{ab}X_{dc}+\frac{1}{2\delta}A^{abe}A^{fdc}\underline{h}_{bd}\underline{h}_{ec}Z_{a}Z_{f},$$
(B.5)
and
$$\displaystyle|Z_{d}\tensor{A}{{}^{d}_{c}}Y^{c}|\leq$$
$$\displaystyle\frac{\delta}{2}\underline{h}^{ad}Z_{d}Z_{a}+\frac{1}{2\delta}\underline{h}_{ad}\tensor{A}{{}^{a}_{e}}\tensor{A}{{}^{d}_{b}}Y^{e}Y^{b}.$$
(B.6)
Proof.
The inequality (B.4) is a direct consequence of the calculation
$$\displaystyle\delta\underline{h}^{da}\underline{h}^{bc}X_{ab}X_{dc}\pm 2A^{d}Y^{c}X_{dc}+\frac{1}{\delta}A^{d}Y^{c}\underline{h}_{bd}\underline{h}_{ac}A^{b}Y^{a}$$
$$\displaystyle=$$
$$\displaystyle\Bigl{(}\sqrt{\delta}\underline{h}^{da}\underline{h}^{bc}X_{ab}\pm\frac{1}{\sqrt{\delta}}A^{d}Y^{c}\Bigr{)}\Bigl{(}\sqrt{\delta}X_{dc}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{\hat{b}d}\underline{h}_{\hat{a}c}A^{\hat{b}}Y^{\hat{a}}\Bigr{)}$$
$$\displaystyle=$$
$$\displaystyle\underline{h}^{da}\underline{h}^{bc}\Bigl{(}\sqrt{\delta}X_{ab}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{bf}\underline{h}_{ae}A^{e}Y^{f}\Bigr{)}\Bigl{(}\sqrt{\delta}X_{dc}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{\hat{b}d}\underline{h}_{\hat{a}c}A^{\hat{b}}Y^{\hat{a}}\Bigr{)}\geq 0,$$
where the inequality in the last line is due to positive definiteness of the metric $\underline{h}_{ab}$.
We similarly conclude the validity of the inequalities (B.5)–(B.6) from the following related calculations:
$$\displaystyle 0\leq$$
$$\displaystyle\underline{h}^{ad}\underline{h}^{bc}\Bigl{(}\sqrt{\delta}X_{ab}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{\hat{a}a}\underline{h}_{\hat{b}b}A^{e\hat{a}\hat{b}}Z_{e}\Bigr{)}\Bigl{(}\sqrt{\delta}X_{dc}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{\hat{d}d}\underline{h}_{\hat{c}c}A^{f\hat{d}\hat{c}}Z_{f}\Bigr{)}$$
$$\displaystyle=$$
$$\displaystyle\delta\underline{h}^{ad}\underline{h}^{bc}X_{ab}X_{dc}\pm 2X_{ab}A^{fab}Z_{f}+\frac{1}{\delta}A^{edc}Z_{e}\underline{h}_{\hat{d}d}\underline{h}_{\hat{c}c}A^{f\hat{d}\hat{c}}Z_{f}$$
and
$$\displaystyle 0\leq$$
$$\displaystyle\underline{h}^{dc}\Bigl{(}\sqrt{\delta}Z_{d}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{ad}\tensor{A}{{}^{a}_{e}}Y^{e}\Bigr{)}\Bigl{(}\sqrt{\delta}Z_{c}\pm\frac{1}{\sqrt{\delta}}\underline{h}_{bc}\tensor{A}{{}^{b}_{f}}Y^{f}\Bigr{)}$$
$$\displaystyle=\delta Z_{d}Z_{c}\underline{h}^{dc}\pm 2Z_{a}Y^{f}\tensor{A}{{}^{a}_{f}}+\frac{1}{\delta}\tensor{A}{{}^{a}_{e}}Y^{e}\underline{h}_{ba}\tensor{A}{{}^{b}_{f}}Y^{f}.$$
∎
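As with the scalar case, inequality (B.4) can be spot-checked numerically. The sketch below (illustrative only, not part of the proof) uses a random diagonal positive definite metric $\underline{h}_{ab}$, a random antisymmetric $X_{ab}$, and random vectors $A^{a}$, $Y^{a}$ in three dimensions:

```python
# Spot-check of the tensor Young inequality (B.4):
#   |X_{dc} A^d Y^c| <= (delta/2) h^{ad} h^{bc} X_{ab} X_{dc}
#                       + (1/(2*delta)) (h_{bd} A^d A^b)(h_{ac} Y^a Y^c)
# with h_ab taken diagonal so that h^{ab} is diagonal with reciprocal entries.
import random

random.seed(1)

def check_B4(trials=500):
    for _ in range(trials):
        d = [random.uniform(0.5, 3.0) for _ in range(3)]   # diagonal entries of h_ab
        X = [[0.0] * 3 for _ in range(3)]                  # antisymmetric X_{ab}
        for i in range(3):
            for j in range(i + 1, 3):
                X[i][j] = random.uniform(-2.0, 2.0)
                X[j][i] = -X[i][j]
        A = [random.uniform(-2.0, 2.0) for _ in range(3)]
        Y = [random.uniform(-2.0, 2.0) for _ in range(3)]
        delta = random.uniform(0.1, 5.0)
        lhs = abs(sum(X[i][j] * A[i] * Y[j] for i in range(3) for j in range(3)))
        x2 = sum(X[i][j] ** 2 / (d[i] * d[j]) for i in range(3) for j in range(3))
        a2 = sum(d[i] * A[i] ** 2 for i in range(3))
        y2 = sum(d[i] * Y[i] ** 2 for i in range(3))
        if lhs > delta / 2 * x2 + 1 / (2 * delta) * a2 * y2 + 1e-10:
            return False
    return True

all_hold = check_B4()
```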
Acknowledgement
C.L. is supported by the Fundamental Research Funds for the Central Universities, HUST: 5003011036.
J.W. is supported by NSFC (Grant No. 11701482).
References
[1]
Michael T. Anderson, Existence and stability of even-dimensional
asymptotically de Sitter spaces, Annales Henri Poincaré 6
(2005), 801–820.
[2]
Thierry Aubin, Some nonlinear problems in Riemannian geometry,
Springer-Verlag, Berlin, Heidelberg, 1998.
[3]
Florian Beyer and Todd A. Oliynyk, Relativistic perfect fluids near
Kasner singularities, preprint [arXiv:2012.03435], 2020.
[4]
Florian Beyer, Todd A. Oliynyk, and J. Arturo Olvera-Santamaría,
The Fuchsian approach to global existence for hyperbolic equations,
Communications in Partial Differential Equations 46 (2020), 1–82.
[5]
Yvonne Choquet-Bruhat, Yang-Mills-Higgs fields in three space time
dimensions, Analyse globale et physique mathématique (Colloque à la
mémoire d’Edmond Combet), Mémoires de la Société
Mathématique de France 1 (1991), 73–97.
[6]
by same author, Cosmological Yang–Mills hydrodynamics, Journal
of Mathematical Physics 33 (1992), no. 5, 1782–1785.
[7]
by same author, General relativity and the Einstein equations, Oxford
University Press, 2009.
[8]
David Fajman, Todd A. Oliynyk, and Zoe Wyatt, Stabilizing relativistic
fluids on spacetimes with non-accelerated expansion, Communications in
Mathematical Physics 383 (2021), 401–426.
[9]
Quasi-static crack propagation with a Griffith criterion using a variational discrete element method
Frédéric
Marazzato${}^{1,2,3}$, Alexandre Ern${}^{2,3}$ and Laurent Monasse${}^{4}$
${}^{1}$Department of Mathematics, Louisiana State University, Baton Rouge, LA 70803, USA
email: marazzato@lsu.edu
${}^{2}$CERMICS, Ecole des Ponts, 77455 Marne-la-Vallée, France
email: alexandre.ern@enpc.fr
${}^{3}$Inria, 2 rue Simone Iff, 75589 Paris, France
${}^{4}$Université Côte d’Azur, Inria, CNRS, LJAD, EPC COFFEE, 06108 Nice, France
email: laurent.monasse@inria.fr
Abstract
A variational discrete element method is applied to simulate quasi-static crack propagation.
Cracks are considered to propagate between the mesh cells through the mesh facets. The elastic behaviour is parametrized by the continuous mechanical parameters (Young modulus and Poisson ratio).
A discrete energetic cracking criterion coupled to a discrete kinking criterion guides the cracking process.
Two-dimensional numerical examples are presented to illustrate the robustness and versatility of the method.
1 Introduction
Discrete element methods (DEM) are popular in the modeling of granular materials, soil and rock mechanics.
DEM generally use sphere packings to discretize the domain as small spheres interacting through forces and torques [19]; the main difficulty is then to derive a suitable set of parameter values for those interactions so as to reproduce a given Young modulus $E$ and Poisson ratio $\nu$ at the macroscopic level [17, 7].
Advantages of DEM are their ability to deal with discontinuous materials, such as fractured or porous materials, as well as the
possibility to take advantage of GPU computations [30].
A first DEM parametrized only by $E$ and $\nu$ has been proposed in [25] for
elastic computations on Voronoi meshes.
In a subsequent work [22], a variational DEM has been proposed for
elasto-plasticity computations on polyhedral meshes using cell-wise reconstructions of the strains.
The numerical results reported in [22] confirmed that the
macroscopic behaviour of elastic continua is indeed correctly reproduced by the
variational DEM. The method developed in
[22] has its roots in [12], which is in fact a hybrid finite volume method. It is called a variational DEM since the method can be reinterpreted as a consistent discretization of elasto-plasticity with discrete elements. In particular, a force-displacement interpretation of the method is derived from the usual stress-strain approach. Moreover, the mass matrix is diagonal and the stencil for the gradient reconstruction is compact, as in usual DEM.
DEM for cracking have been developed in [3] and [2] with cracks propagating through the facets of the (Voronoi) mesh and using a critical stress criterion (initiation criterion).
Coupled FEM-DEM techniques for crack computations, such as [33] (2d) and [32] (3d), have been introduced to combine the efficiency of FEM for elasticity with the ability of DEM to handle cracked media.
A similar approach, but using a different reconstruction of strains based on moving least-squares interpolations, can be traced back to [5] (2d) and [31] (3d). Crack propagation can instead be based on the Griffith criterion, which relies on the computation of the stress intensity factors (SIF) at the crack tip coupled with the Irwin formula.
Virtual element methods (VEM) have been recently applied to crack propagation [16]. Cracks were allowed to cut through the polyhedral mesh cells as in the extended finite element method (XFEM) which is based on an extended space of basis functions [8] and a level-set description of the crack [24]. Phase-field methods instead smooth the crack and have been developed among others in [6] and subsequent work.
Phase-field methods are not based on SIF computations but rather on a variational formulation of cracking [13].
Furthermore, DEM using cohesive laws have been developed for fragmentation computations [23] with a view towards uniting initiation and propagation. These methods allow one to devise an initiation criterion and also to control the energy dissipation as with a Griffith criterion. The cracks still go through the mesh facets. This is also the case for similar methods of higher-order such as discontinuous Galerkin methods [14].
The main goal of the present work is to develop a variational DEM using a Griffith criterion to compute crack propagation through the mesh facets. The method supports in principle polyhedral meshes, but the present numerical experiments are restricted to triangular meshes.
The proposed method is close to [22] (where there is no cracking) but the degrees of freedom (dofs) are different.
Only cell dofs are used in the present work.
The cracking algorithm hinges on two main ingredients. The first ingredient is an approximation of the energy release rate at every vertex along the crack.
The second ingredient is a kinking criterion used to determine the next breaking facet and thus the crack path. The kinking criterion, in the spirit of [28], consists in selecting for the crack path the inner facet of the mesh that maximizes a quantity representing the local density of elastic energy.
The present work is organized as follows. Section 2 briefly recalls the equations of elasticity and cracking in a Cauchy continuum. Section 3
introduces the proposed variational DEM and presents the space discretization of the governing equations.
Moreover, a numerical test is reported to assess the convergence of the space discretization in the presence of a singularity.
Section 4 addresses the full discretization of the quasi-static cracking problem. Section 5 contains numerical results on quasi-static crack propagation problems in two space dimensions.
Finally, Section 6 draws some conclusions.
2 Governing equations for quasi-static cracking
We consider an elastic fragile material occupying the domain
$\Omega\subset\mathbb{R}^{2}$ in the reference configuration
and evolving over the finite pseudo-time interval $[0,T]$, $T>0$, under the action of a volumetric force $f$ and boundary conditions. The pseudo-time interval $[0,T]$ is discretized by means of $(K+1)$
discrete pseudo-time nodes $(t_{k})_{k\in\{0,\ldots,K\}}$ with $t_{0}:=0$ and $t_{K}:=T$.
The strain regime is restricted to small strains so that we use the linearized strain tensor $\varepsilon(u):=\frac{1}{2}(\nabla u+(\nabla u)^{\mathbf{T}})\in\mathbb{R}^{2\times 2}$, where $u$ is the $\mathbb{R}^{2}$–valued displacement field.
The material is assumed to be homogeneous and isotropic.
The stress tensor $\sigma(u)\in\mathbb{R}^{2\times 2}$ is such that
$$\sigma(u):=\mathbb{C}:\varepsilon(u),$$
(1)
where $\mathbb{C}$ is the fourth-order stiffness tensor. The elastic material is characterized by the Young modulus $E$ and the Poisson ratio $\nu$ or equivalently by the Lamé coefficients $\lambda$ and $\mu$.
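For the isotropic case, the Hooke law (1) reduces to $\sigma=\lambda\,\mathrm{tr}(\varepsilon)I+2\mu\varepsilon$. A minimal numerical sketch, using the standard conversion between $(E,\nu)$ and $(\lambda,\mu)$ (function names are ours):

```python
import numpy as np

def lame_from_E_nu(E, nu):
    """Standard conversion from Young modulus E and Poisson ratio nu
    to the Lame coefficients (lambda, mu)."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def stress(eps, lam, mu):
    """Isotropic Hooke law (1): sigma = lambda*tr(eps)*I + 2*mu*eps."""
    d = eps.shape[0]
    return lam * np.trace(eps) * np.eye(d) + 2.0 * mu * eps
```

For instance, with $E=1$ and $\nu=0.25$ one gets $\lambda=\mu=0.4$, and a unit spherical strain yields $\sigma=1.6\,I$.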
The boundary of $\Omega$ is partitioned as $\partial\Omega=\partial\Omega_{D}\cup\partial\Omega_{N}$, a Dirichlet condition is prescribed on $\partial\Omega_{D}$, and a Neumann condition on $\partial\Omega_{N}$, so that we enforce for all $k=0,\cdots,K$,
$$u=u_{D}(t_{k})\ \text{ on }\partial\Omega_{D},\qquad\sigma(u)\cdot n=g_{N}(t_{k})\ \text{ on }\partial\Omega_{N}.$$
(2)
Since cracking can occur, we denote $\Gamma(t_{k})$ the crack at the pseudo-time node $t_{k}$ and the actual domain at the pseudo-time node $t_{k}$ is
$$\Omega(t_{k}):=\Omega\setminus\Gamma(t_{k}).$$
(3)
This implies that $\partial\Omega(t_{k})=\partial\Omega_{D}\cup\partial\Omega_{N}\cup\Gamma(t_{k})$. We enforce a homogeneous Neumann condition on $\Gamma(t_{k})$ for all $k=0,\cdots,K$, i.e.,
$$\sigma(u)\cdot n=0\ \text{ on }\Gamma(t_{k}).$$
(4)
Since we are interested in crack propagation, we assume that $\Omega(0)$ already contains a crack, i.e., $\Gamma(0)\neq\emptyset$.
The crack $\Gamma(t_{k})$ is assumed to be a countably rectifiable
1–manifold for all $k=0,\cdots,K$ (see [9]).
This hypothesis ensures the almost everywhere (a.e.) existence of a normal vector $n$ and a tangent vector $\tau$ to $\Gamma(t_{k})$ at any point $\mathbf{y}\in\Gamma(t_{k})$ [29].
Figure 1 illustrates these quantities.
The stress intensity factors (SIF) at any point $\mathbf{y}\in\Gamma(t_{k})$
are usually defined for a purely elastic material as
$$\left\{\begin{aligned} &K_{1}(\mathbf{y}):=\mathop{\mathrm{lim}}\limits_{\mathbf{y}^{\prime}\to\mathbf{y}}\sigma_{nn}(\mathbf{y}^{\prime})\sqrt{2\pi d(\mathbf{y},\mathbf{y}^{\prime})},\\
&K_{2}(\mathbf{y}):=\mathop{\mathrm{lim}}\limits_{\mathbf{y}^{\prime}\to\mathbf{y}}\sigma_{n\tau}(\mathbf{y}^{\prime})\sqrt{2\pi d(\mathbf{y},\mathbf{y}^{\prime})},\\
\end{aligned}\right.$$
(5)
where $d(\cdot,\cdot)$ is the Euclidean distance in $\mathbb{R}^{2}$.
If the stresses remain bounded in the vicinity of $\mathbf{y}\in\Gamma(t_{k})$, then the SIF are null.
Using the Irwin formula, one can define the energy release rate $\mathcal{G}(\mathbf{y})$ in the plane strain hypothesis as
$$\mathcal{G}(\mathbf{y}):=\frac{1-\nu^{2}}{E}\left(K_{1}(\mathbf{y})^{2}+K_{2}(\mathbf{y})^{2}\right).$$
(6)
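The Irwin formula (6) is a direct scalar evaluation; a minimal sketch (the function name is ours):

```python
def energy_release_rate(K1, K2, E, nu):
    """Irwin formula (6), plane strain: G = (1 - nu^2)/E * (K1^2 + K2^2)."""
    return (1.0 - nu ** 2) / E * (K1 ** 2 + K2 ** 2)
```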
Admissible states are characterized by the inequality
$$\mathcal{G}(\mathbf{y})\leq\mathcal{G}_{c},\quad\forall\mathbf{y}\in\Gamma(t_{k}),$$
(7)
where $\mathcal{G}_{c}$ is a material property associated with the capacity of the material to sustain loads without locally failing and thus opening cracks.
The material remains healthy at the point $\mathbf{y}\in\Gamma(t_{k})$
if $\mathcal{G}(\mathbf{y})<\mathcal{G}_{c}$
and breaks if $\mathcal{G}(\mathbf{y})=\mathcal{G}_{c}$. The material parameter
$\mathcal{G}_{c}$ is assumed to be homogeneous for simplicity.
To formulate the governing equations for quasi-static cracking,
we consider the following functional spaces depending on the pseudo-time node $t_{k}$:
$$V_{D}(t_{k}):=\left\{v\in H^{1}(\Omega(t_{k});\mathbb{R}^{d})\ |\ v_{|\partial\Omega_{D}}=u_{D}(t_{k})\right\},\qquad V_{0}(t_{k}):=\left\{v\in H^{1}(\Omega(t_{k});\mathbb{R}^{d})\ |\ v_{|\partial\Omega_{D}}=0\right\},$$
(8)
where standard notation is used for the Hilbertian Sobolev spaces.
The weak solution is searched as a pair $(u,\Gamma)$ such that for all $k=0,\cdots,K$, $u(t_{k})\in V_{D}(t_{k})$, $\Gamma(t_{k})\subset\Omega$ is a
1–manifold satisfying the above assumptions, and
$$\left\{\begin{aligned} &a(t_{k};u(t_{k}),\tilde{v})=l(t_{k};\tilde{v}),&\quad&\forall\tilde{v}\in V_{0}(t_{k}),\\
&\mathcal{G}(\mathbf{y})\leq\mathcal{G}_{c},&\quad&\forall\mathbf{y}\in\Gamma(t_{k}).\end{aligned}\right.$$
(9)
Here we introduced the stiffness bilinear form such that for all $(v,\tilde{v})\in V_{D}(t_{k})\times V_{0}(t_{k})$,
$$a(t_{k};v,\tilde{v}):=\int_{\Omega(t_{k})}\varepsilon(v):\mathbb{C}:\varepsilon(\tilde{v}),$$
(10)
and the linear form acting on $V_{0}(t_{k})$ as follows:
$$l(t_{k};\tilde{v}):=\int_{\Omega(t_{k})}f(t_{k})\cdot\tilde{v}+\int_{\partial\Omega_{N}}g_{N}(t_{k})\cdot\tilde{v}.$$
(11)
Note that the Dirichlet condition on $\partial\Omega_{D}$ is enforced strongly, whereas the Neumann condition on $\partial\Omega_{N}\cup\Gamma(t_{k})$ is enforced weakly.
3 Space semi-discretization
In this section, we present the space semi-discretization of (9) using a variational DEM.
3.1 Discrete sets and degrees of freedom
The domain $\Omega$ is discretized with a mesh $\mathcal{T}_{h}$ of size $h$ made of polygons with straight edges.
We assume that $\Omega$ is itself a polygon so that the mesh covers $\Omega$ exactly.
We also assume that the mesh
is compatible with the initial crack position $\Gamma(0)$ and
with the partition of the boundary into the Dirichlet and Neumann parts.
Recall that the space dimension is $d=2$.
Let $\mathcal{C}$ denote the set composed of the mesh cells and, for all $k=0,\dots,K$, let $\mathcal{F}(t_{k})$ denote the set composed of the mesh facets. This set depends on the pseudo-time node $t_{k}$ since a facet $F\in\mathcal{F}(t_{k})$ is replaced, after cracking, by two boundary facets $F_{-},F_{+}\in\mathcal{F}(t_{k})$ ($F_{-},F_{+}$ are the same geometric object, but are different objects regarding the data structure since each one belongs to the boundary of a different mesh cell). The barycentre of a mesh cell $c\in\mathcal{C}$ is denoted by $\mathbf{x}_{c}$ and the barycentre of a mesh facet $F\in\mathcal{F}(t_{k})$ is denoted by $\mathbf{x}_{F}$.
Let $t_{k}$ be a pseudo-time node with $k=0,\cdots,K$.
We partition the set of mesh facets as $\mathcal{F}(t_{k})=\mathcal{F}^{i}(t_{k})\cup\mathcal{F}^{b}(t_{k})$, where
$\mathcal{F}^{i}(t_{k})$ is composed of the internal facets shared
by two mesh cells and $\mathcal{F}^{b}(t_{k})$ is the collection of the boundary facets
sitting on the boundary $\partial\Omega(t_{k})=\partial\Omega_{D}\cup\partial\Omega_{N}\cup\Gamma_{h}(t_{k})$, where $\Gamma_{h}(t_{k})$ denotes the discrete crack at $t_{k}$.
Notice that every boundary facet belongs to the boundary of only one mesh cell.
The subsets $\mathcal{F}^{i}(t_{k})$ and $\mathcal{F}^{b}(t_{k})$ depend on the pseudo-time node $t_{k}$ since, as the facet
$F\in\mathcal{F}^{i}(t_{k})$ cracks, it is replaced by the facets $F_{+},F_{-}\in\mathcal{F}^{b}(t_{k})$.
The discrete crack $\Gamma_{h}(t_{k})$ is
composed of facets belonging to a subset of $\mathcal{F}^{b}(t_{k})$.
This subset is denoted by $\mathcal{F}^{\Gamma}(t_{k})\subset\mathcal{F}^{b}(t_{k})$. We also introduce the partition between boundary facets with Neumann boundary conditions $\mathcal{F}^{b}_{N}(t_{k})$ (recall that homogeneous Neumann boundary conditions are imposed on newly created crack lips) and with Dirichlet boundary conditions $\mathcal{F}^{b}_{D}$ which does not depend on $t_{k}$. One thus has $\mathcal{F}^{b}(t_{k})=\mathcal{F}^{b}_{N}(t_{k})\cup\mathcal{F}^{b}_{D}$.
Vector-valued volumetric degrees of freedom (dofs) for a generic
displacement field $(v_{c})_{c\in\mathcal{C}}\in\mathbb{R}^{d\#(\mathcal{C})}$ are placed at the barycentre of every mesh cell $c\in\mathcal{C}$. We use the compact notation $v_{h}:=(v_{c})_{c\in\mathcal{C}}$ for the collection of all the cell dofs and we write $v_{h}\in V_{h}:=\mathbb{R}^{d\#(\mathcal{C})}$.
Figure 2 illustrates the position of the displacement
dofs.
3.2 Discrete bilinear and linear forms
The discrete stiffness bilinear form hinges on a reconstruction operator that provides a displacement value at every mesh facet by an interpolation formula from neighbouring cell
dofs. Specifically, using the cell dofs of $v_{h}\in V_{h}$ and the Dirichlet boundary conditions, we reconstruct a collection of displacements $v_{\mathcal{F}}:=(v_{F})_{F\in\mathcal{F}(t_{k})}\in\mathbb{R}^{d\#(\mathcal{F}(t_{k}))}$ on all the mesh facets.
The reconstruction operator is
denoted $\mathcal{R}(t_{k};\cdot)$ and we write
$$v_{\mathcal{F}}:=\mathcal{R}(t_{k};v_{h})\in\mathbb{R}^{d\#(\mathcal{F}(t_{k}))}.$$
(12)
The reconstruction operator depends on $t_{k}$ because of the connectivity modifications due to the crack propagation.
Let us first describe the reconstruction operator on boundary facets.
Let $F\in\mathcal{F}^{b}_{D}$ be a Dirichlet boundary facet. Then the reconstruction is simply defined by
evaluating the Dirichlet boundary condition at $\mathbf{x}_{F}$.
Let $F\in\mathcal{F}^{b}_{N}(t_{k})$ be a Neumann boundary facet.
The main idea to define $v_{F}$ is to use a barycentric combination of the cell dofs close to $F$. A similar idea has been considered for finite volume methods in
[12, Sec. 2.2] and for cell-centered Galerkin methods in [10]. We thus select a subset of neighboring cell dofs of $F$, say
$\mathcal{I}_{F}\subset\mathcal{C}$, and set
$$v_{F}:=\sum_{i\in\mathcal{I}_{F}}{\alpha_{i}(\mathbf{x}_{F})v_{i}},$$
(13)
where the $v_{i}$’s are the dofs of $v_{h}$ and
the coefficients $\alpha_{i}(\mathbf{x}_{F})$ are the barycentric coordinates of the facet barycenter
$\mathbf{x}_{F}$ in terms of the selected dof positions. For this construction to be meaningful, the points associated with the selected dofs must not all lie on the same line, so that, in particular, the cardinality of $\mathcal{I}_{F}$ is at least $(d+1)=3$.
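The reconstruction (13) can be sketched as follows for the minimal case $\#(\mathcal{I}_{F})=3$; the helper names are ours, and since the coefficients are barycentric coordinates, the reconstruction is exact for affine displacement fields:

```python
import numpy as np

def barycentric_coords(x, p0, p1, p2):
    """Barycentric coordinates of x with respect to the non-collinear
    points p0, p1, p2 (the coordinates sum to one)."""
    T = np.column_stack((p1 - p0, p2 - p0))
    a1, a2 = np.linalg.solve(T, x - p0)
    return np.array([1.0 - a1 - a2, a1, a2])

def reconstruct_facet_value(x_F, points, cell_dofs):
    """Eq. (13): v_F = sum_i alpha_i(x_F) * v_i over three selected cell dofs."""
    alpha = barycentric_coords(x_F, *points)
    return alpha @ cell_dofs
```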
Let us then describe the reconstruction for an inner facet $F\in\mathcal{F}^{i}(t_{k})$.
We use a reconstruction similar to the one presented above except that the two cells sharing the inner facet $F$ play symmetric roles. We refer to this construction as symmetric reconstruction. Specifically, let $c_{+}$ and $c_{-}$ be the two cells sharing the inner facet $F\in\mathcal{F}^{i}(t_{k})$. Then, we select $\mathcal{I}_{-}$ (resp. $\mathcal{I}_{+}$) as being composed of the cell $c_{+}$ (resp. $c_{-}$) and of all the other cells sharing an inner facet with $c_{-}$ (resp. $c_{+}$). Notice that these two sets are disjoint. We then set
$$v_{F}:=\frac{1}{2}\sum_{i\in\mathcal{I}_{-}\cup\mathcal{I}_{+}}\alpha_{i}(\mathbf{x}_{F})v_{i},$$
(14)
so that, in the case of a simplicial mesh, $2(d+1)$ dofs are used for the reconstruction (always including $c_{-}$ and $c_{+}$). Note that $\sum_{i\in\mathcal{I}_{-}}\alpha_{i}(\mathbf{x}_{F})=\sum_{i\in\mathcal{I}_{+}}\alpha_{i}(\mathbf{x}_{F})=1$ here.
Figure 3 presents an example where $c_{-}=c_{i}$, $c_{+}=c_{j}$, $\mathcal{I}_{-}=\{j,j_{2},j_{3}\}$ and $\mathcal{I}_{+}=\{i,i_{2},i_{3}\}$.
Having defined the reconstructed facet displacements, it is now possible
to devise a discrete $\mathbb{R}^{d\times d}$-valued
piecewise-constant gradient field for the displacement
that we write $G_{\mathcal{C}}(v_{\mathcal{F}}):=(G_{c}(v_{\mathcal{F}}))_{c\in\mathcal{C}}\in\mathbb{R}^{d^{2}\#(\mathcal{C})}$. Specifically, we set in every
mesh cell $c\in\mathcal{C}$,
$$G_{c}(v_{\mathcal{F}}):=\sum_{F\in\partial c}\frac{|F|}{|c|}v_{F}\otimes n_{F,c},$$
(15)
where the summation is over the facets $F$ of $c$ and $n_{F,c}$ is the outward
normal to $c$ on $F$. Note that (15) is motivated by
a Stokes formula and that for all $v_{h}\in V_{h}$, we have
$$G_{c}(\mathcal{R}(t_{k};v_{h}))=\sum_{F\in\partial c}\frac{|F|}{|c|}(\mathcal{R}(t_{k};v_{h})_{F}-v_{c})\otimes n_{F,c},$$
(16)
since $\sum_{F\in\partial c}|F|n_{F,c}=0$.
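The cellwise gradient (15) can be sketched as follows for a single cell (facet "areas" are edge lengths in 2d; all input names are illustrative):

```python
import numpy as np

def cell_gradient(facet_values, facet_areas, facet_normals, cell_volume):
    """Eq. (15): G_c = sum_{F in dc} (|F|/|c|) * outer(v_F, n_{F,c})."""
    d = facet_values.shape[1]
    G = np.zeros((d, d))
    for v_F, a_F, n_F in zip(facet_values, facet_areas, facet_normals):
        G += (a_F / cell_volume) * np.outer(v_F, n_F)
    return G
```

Consistently with the Stokes formula motivation, the formula is exact for affine fields: on the unit triangle with facet-midpoint values of $v(\mathbf{x})=\mathbf{x}$, it returns the identity matrix.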
We define a constant
linearized strain tensor in every mesh cell $c\in\mathcal{C}$ such that
$$\varepsilon_{c}(v_{\mathcal{F}}):=\frac{1}{2}(G_{c}(v_{\mathcal{F}})+G_{c}(v_{\mathcal{F}})^{\mathbf{T}})\in\mathbb{R}^{d\times d},$$
(17)
and a constant stress tensor in every mesh cell $c\in\mathcal{C}$ such that
$$\Sigma_{c}(v_{\mathcal{F}}):=\mathbb{C}:\varepsilon_{c}(v_{\mathcal{F}})\in\mathbb{R}^{d\times d}.$$
(18)
Finally, we define an additional reconstruction that is used to formulate the stabilization bilinear form in the discrete problem (see below). This operator is a cellwise nonconforming $P^{1}$ reconstruction $\mathfrak{R}_{c}$ defined for all $c\in\mathcal{C}$ by
$$\mathfrak{R}_{c}(t_{k};v_{h})(\mathbf{x}):=v_{c}+G_{c}(\mathcal{R}(t_{k};v_{h}))\cdot(\mathbf{x}-\mathbf{x}_{c}),\qquad\forall\mathbf{x}\in c.$$
(19)
3.3 Discrete problem
We set
$$\left\{\begin{aligned} V_{hD}(t_{k})&:=\{v_{h}\in V_{h}\ |\ \mathcal{R}(t_{k};v_{h})_{F}=u_{D}(t_{k};\mathbf{x}_{F}),\ \forall F\subset\partial\Omega_{D}\},\quad\forall k=0,\cdots,K,\\
V_{h0}(t_{k})&:=\{v_{h}\in V_{h}\ |\ \mathcal{R}(t_{k};v_{h})_{F}=0,\ \forall F\subset\partial\Omega_{D}\},\quad\forall k=0,\cdots,K.\end{aligned}\right.$$
(20)
The discrete stiffness bilinear form is such that
for all $(v_{h},\tilde{v}_{h})\in V_{hD}(t_{k})\times V_{h0}(t_{k})$
(compare with (10))
$$a_{h}(t_{k};v_{h},\tilde{v}_{h}):=\sum_{c\in\mathcal{C}}|c|\varepsilon_{c}(\mathcal{R}(t_{k};v_{h})):\mathbb{C}:\varepsilon_{c}(\mathcal{R}(t_{k};\tilde{v}_{h}))+s_{h}(t_{k};v_{h},\tilde{v}_{h}),$$
(21)
where the stabilization bilinear form $s_{h}$ is intended to
render $a_{h}$ coercive and is defined as
$$s_{h}(t_{k};v_{h},\tilde{v}_{h})=\sum_{F\in\mathcal{F}^{i}(t_{k})}\frac{2\mu}{h_{F}}|F|[\mathfrak{R}(t_{k};v_{h})]_{F}\cdot[\mathfrak{R}(t_{k};\tilde{v}_{h})]_{F}+\sum_{F\in\mathcal{F}^{b}_{D}}\frac{2\mu}{h_{F}}|F|[\mathfrak{R}(t_{k};v_{h})]_{F}\cdot[\mathfrak{R}(t_{k};\tilde{v}_{h})]_{F},$$
(22)
where $h_{F}$ is the diameter of the facet $F\in\mathcal{F}(t_{k})$.
For an interior facet $F\in\mathcal{F}^{i}(t_{k})$, writing $c_{-}$
and $c_{+}$ the two mesh cells sharing $F$, i.e., $F=\partial c_{-}\cap\partial c_{+}$,
and orienting $F$ by the unit normal
vector $n_{F}$ pointing from $c_{-}$ to $c_{+}$, the jump of $\mathfrak{R}(t_{k};v_{h})$ across $F$ is defined as
$$[\mathfrak{R}(t_{k};v_{h})]_{F}:=\mathfrak{R}_{c_{-}}(t_{k};v_{h})(\mathbf{x}_{F})-\mathfrak{R}_{c_{+}}(t_{k};v_{h})(\mathbf{x}_{F}).$$
(23)
The sign of the jump is irrelevant in what follows. The role of the
summation over the interior facets in (22)
is to penalize the jumps of the cell reconstruction $\mathfrak{R}$ across the interior facets.
For a Dirichlet boundary facet $F\in\mathcal{F}^{b}_{D}$, we denote by $c_{-}$ the unique mesh cell
containing $F$, we orient $F$ by the unit normal vector $n_{F}:=n_{c_{-}}$ which
points outward from $\Omega$, and we define
$$[\mathfrak{R}(t_{k};v_{h})]_{F}:=\mathcal{R}(t_{k};v_{h})_{F}-\mathfrak{R}_{c_{-}}(t_{k};v_{h})(\mathbf{x}_{F}).$$
(24)
Let us recall that for $u_{h}\in V_{hD}(t_{k})$, $\mathcal{R}(t_{k};u_{h})_{F}=u_{D}(t_{k};\mathbf{x}_{F})$ and for $v_{h}\in V_{h0}(t_{k})$, $\mathcal{R}(t_{k};v_{h})_{F}=0$.
The role of the summation over the Dirichlet boundary facets
in (22) is to
penalize the jumps between the cell reconstruction $\mathfrak{R}$ and the value interpolated in the Dirichlet boundary facets.
The bilinear form $s_{h}$ is classical in the context of discontinuous Galerkin methods (see [4, 11] for instance, see also [10] for cell-centred Galerkin methods).
It is possible to replace the coefficient $2\mu$ in (22) by $\beta\mu$ with a user-defined dimensionless parameter $\beta$ of order unity. The numerical experiments reported in [22] indicate that this choice has a marginal influence on the results.
3.4 Verification test case
This section presents a verification test case related to the convergence rate with a singularity at the crack tip.
The crack does not propagate, i.e., we consider a steady setting using the above discrete stiffness bilinear form and load linear form.
The convergence rate of the method in the presence of a singularity is tested in the case of an infinite plate under mode III (antiplane shear) loading at infinity, as presented in Figure 4.
A convergence rate of $O(h^{\frac{1}{2}})$, similar to that obtained with Lagrange $P^{1}$ finite elements, is expected.
The reference solution, close to the crack tip ($\frac{r}{a}\ll 1$), reads in polar coordinates [18, p. 28]:
$$u(r,\theta)=\frac{2\tau}{\mu}\sqrt{\frac{ar}{2}}\sin\left(\frac{\theta}{2}\right)e_{z},$$
(25)
where $\tau$ is the modulus of the antiplane shear stress imposed at infinity. The displacement defined in (25) satisfies the static equilibrium equation in strong form, i.e., $\mathrm{div}(\sigma(u))=0$. The stresses are
$$\sigma(r,\theta)=\tau\sqrt{\frac{a}{2r}}\left[\sin\left(\frac{\theta}{2}\right)e_{r}-\cos\left(\frac{\theta}{2}\right)e_{\theta}\right]\otimes e_{z}.$$
(26)
Since the domain shown in Figure 4 is symmetric with respect to the red dashed line, only its right part is considered. As the analytical solution (25) is only valid close to the crack tip, a small ball around the crack tip, corresponding to the green dashed circle in Figure 4, is meshed. The setting is presented in Figure 5. The convergence towards the analytical solution is checked on the meshed ball, with the reference solution imposed as a Dirichlet boundary condition over the whole boundary, including the crack lips.
The results of the computation, which are reported in Table 1, corroborate an $O(h^{\frac{1}{2}})$ convergence rate in the energy-norm, as expected.
We also observe an $O(h^{\frac{3}{2}})$ convergence rate in the $L^{2}$-norm.
The convergence rates are evaluated as
$$\text{order}=d\log\left(\frac{e_{1}}{e_{2}}\right)\left(\log\left(\frac{n_{2}}{n_{1}}\right)\right)^{-1},$$
(27)
where $e_{1},e_{2}$ denote the errors of the computations on meshes of sizes $h_{1},h_{2}$, and $n_{1},n_{2}$ the corresponding numbers of dofs.
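Formula (27) is a direct transcription (with $d=2$ by default); a minimal sketch:

```python
import math

def convergence_order(e1, n1, e2, n2, d=2):
    """Eq. (27): convergence rate with respect to h, using h ~ n^(-1/d)
    to relate the mesh size to the number of dofs n."""
    return d * math.log(e1 / e2) / math.log(n2 / n1)
```

For instance, halving $h$ in 2d quadruples $n$, so an error ratio of $\sqrt{2}$ between the two meshes yields the expected rate $1/2$.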
4 Quasi-static crack propagation
In this section, we formulate the discrete problem for quasi-static crack propagation. The space discretization is achieved by means of the variational DEM scheme presented in the previous section.
At every pseudo-time node $t_{k}$, the problem is solved iteratively with inner iterations enumerated by $m\in\{0,\ldots,M\}$. Since the crack can change at each inner iteration, we use the notation $\Gamma_{h}(t_{k,m})$ for the crack and the notation $\mathcal{F}^{i}(t_{k,m})$ and $\mathcal{F}^{b}(t_{k,m})$ for the partition of the mesh facets at the inner iteration $m$, with the facets located in the crack collected in the subset $\mathcal{F}^{\Gamma}(t_{k,m})$.
Each inner iteration consists in two steps. First, freezing the position of the crack, we find the discrete displacement
$u_{h}(t_{k,m})\in V_{hD}(t_{k})$ solving the quasi-static problem $a_{h}(t_{k,m};u_{h}(t_{k,m}),\tilde{v}_{h})=l_{h}(t_{k};\tilde{v}_{h})$
for all $\tilde{v}_{h}\in V_{h0}(t_{k})$ (the bilinear form $a_{h}$ depends on $t_{k,m}$ since the reconstruction operator changes as the crack propagates). Then we use the newly
computed displacement field $u_{h}(t_{k,m})$ to determine whether crack propagation occurs
and update accordingly the subsets $\mathcal{F}^{i}(t_{k,m+1})$,
$\mathcal{F}^{b}(t_{k,m+1})$, and $\mathcal{F}^{\Gamma}(t_{k,m+1})$. We iterate this procedure until there is no more crack propagation in the second step.
The inner iteration in the discrete quasi-static crack propagation scheme can thus be summarized as follows:
For all $m\in\{0,\ldots,M\}$,
$$\left\{\begin{aligned} &\textup{(i)}&\ &u_{h}(t_{k,m})\in V_{hD}(t_{k})\ \text{s.t.}\ a_{h}(t_{k,m};u_{h}(t_{k,m}),\tilde{v}_{h})=l_{h}(t_{k};\tilde{v}_{h}),\ \forall\tilde{v}_{h}\in V_{h0}(t_{k}),\\
&\textup{(ii)}&\ &(\mathcal{F}^{\Gamma}(t_{k,m+1}),\mathcal{F}^{b}(t_{k,m+1}),\mathcal{F}^{i}(t_{k,m+1}))=\texttt{CRACK\_QS}(\mathcal{F}^{\Gamma}(t_{k,m}),\mathcal{F}^{b}(t_{k,m}),\mathcal{F}^{i}(t_{k,m}),u_{h}(t_{k,m})).\end{aligned}\right.$$
(28)
The rest of this section is devoted to the description of the procedure CRACK_QS.
This procedure consists in the three consecutive steps outlined in Figure 6.
The first step involves the procedure ESTIMATE, which considers all the vertices of $\mathcal{F}^{\Gamma}(t_{k,m})$ and computes for each of these vertices an approximate energy release rate. The second step involves the procedure MARK, which flags, among all the inner facets sharing a vertex whose energy release rate exceeds the critical value $\mathcal{G}_{c}$, the facet that will actually break.
The selection is made by using a discrete kinking criterion.
The last step uses the procedure UPDATE and simply consists in updating the data structure according to the crack propagation. The procedure is repeated, starting from the recomputation of the displacement in step (i) of (28), until no facet is marked by the procedure MARK.
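The inner loop (28) can be summarized by the following schematic driver, where `solve`, `estimate`, `mark`, and `update` are callables standing in for the elastic solver and the three procedures described below (all names are ours):

```python
def quasi_static_step(mesh, t_k, solve, estimate, mark, update):
    """One pseudo-time step of (28): alternate frozen-crack elastic solves
    with crack updates until MARK flags no facet."""
    while True:
        u_h = solve(mesh, t_k)       # (i) quasi-static elastic solve
        rates = estimate(mesh, u_h)  # ESTIMATE: energy release rates
        facet = mark(mesh, rates)    # MARK: discrete Griffith + kinking criterion
        if facet is None:            # no facet exceeds G_c: step converged
            return u_h
        update(mesh, facet)          # UPDATE: facet moves into the crack
```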
4.1 Procedure ESTIMATE
Let $\mathcal{V}^{\Gamma}(t_{k,m})$ be the set of all vertices in the discrete crack $\Gamma_{h}(t_{k,m})$.
The procedure ESTIMATE computes an approximate energy release rate $\mathcal{G}_{h}(\mathbf{v})$ for all $\mathbf{v}\in\mathcal{V}^{\Gamma}(t_{k,m})$.
Let $\mathcal{F}^{\Gamma}_{\mathbf{v}}(t_{k,m})$ be the set of cracked facets sharing a vertex $\mathbf{v}\in\mathcal{V}^{\Gamma}(t_{k,m})$. (The set $\mathcal{F}^{\Gamma}_{\mathbf{v}}(t_{k,m})$ reduces to a single facet if $\mathbf{v}$ is the crack tip.)
Let $\mathcal{F}^{i}_{\mathbf{v}}(t_{k,m})$ be the set of inner facets sharing a vertex $\mathbf{v}\in\mathcal{V}^{\Gamma}(t_{k,m})$.
An approximate energy release rate for
the vertex $\mathbf{v}\in\mathcal{V}^{\Gamma}(t_{k,m})$ is evaluated as
$$\mathcal{G}_{h}(\mathbf{v}):=\max_{F\in\mathcal{F}^{\Gamma}_{\mathbf{v}}(t_{k,m})}\max_{F^{\prime}\in\mathcal{F}^{i}_{\mathbf{v}}(t_{k,m})}\pi n_{F}\cdot\{\Sigma_{h}(t_{k,m})\}_{F}\cdot[u_{h}(t_{k,m})]_{F^{\prime}},$$
(29)
where $[u_{h}]_{F}:=u_{c_{-}}-u_{c_{+}}$, $\{\Sigma_{h}\}_{F}:=\frac{1}{2}(\Sigma_{c_{-}}+\Sigma_{c_{+}})$, and $n_{F}$ is the normal vector to $F$ pointing from $c_{-}$ to $c_{+}$.
This expression is rooted in the fact that the elastic energy contained in a facet $F$ reads $\frac{1}{2}n_{F}\cdot\{\Sigma_{h}(t_{k,m})\}_{F}\cdot[u_{h}(t_{k,m})]_{F}|F|$, as motivated in [22]. The factor $\pi$ comes from the fact that the density of elastic energy per facet must be multiplied by $2\pi$ to account for the surface created by cracking (see [18, p. 48]). This is linked to the concept of the crack closure integral.
The output of the procedure ESTIMATE is the collection of approximate energy release rates $\{\mathcal{G}_{h}(\mathbf{v})\}_{\mathbf{v}\in\mathcal{V}^{\Gamma}(t_{k,m})}$.
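A sketch of the double maximization (29) at one crack vertex; the dict containers keyed by facet are a representation choice of ours:

```python
import numpy as np

def estimate_G(crack_facets, inner_facets, avg_stress, normal, jump):
    """Eq. (29) at one crack vertex: max over the crack facets F and the
    inner facets F' sharing the vertex of pi * n_F . {Sigma_h}_F . [u_h]_{F'}.
    `avg_stress[F]` is the cell-average stress, `normal[F]` the facet normal,
    and `jump[Fp]` the inter-cell displacement jump."""
    return max(
        np.pi * normal[F] @ avg_stress[F] @ jump[Fp]
        for F in crack_facets
        for Fp in inner_facets
    )
```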
4.2 Procedure MARK
The goal of the procedure MARK is to identify the unique inner facet $\mathfrak{F}\in\mathcal{F}^{i}(t_{k,m})$ through which the crack will propagate.
The criterion is based on an adaptation of the maximisation of the strain energy density which was introduced in [28].
The vertices of $\mathcal{V}^{\Gamma}(t_{k,m})$ are ordered as they break during a computation and we select the last $N$ vertices in $\mathcal{V}^{\Gamma}(t_{k,m})$ to define the subset $\mathcal{V}^{\Gamma}_{N}(t_{k,m})$. The integer parameter $N$ is set to $N=6$ in our computations; this choice gives satisfactory results while avoiding excessive branching of the crack path.
Finally, we select the vertices in $\mathcal{V}^{\Gamma}_{N}(t_{k,m})$ whose approximate energy release rate is larger than the material parameter $\mathcal{G}_{c}$:
$$\mathcal{V}^{\Gamma*}_{N}(t_{k,m}):=\{\mathbf{v}\in\mathcal{V}^{\Gamma}_{N}(t_{k,m}),\mathcal{G}_{h}(\mathbf{v})\geq\mathcal{G}_{c}\}.$$
(30)
Among all $\mathbf{v}\in\mathcal{V}^{\Gamma*}_{N}(t_{k,m})$, we select the single vertex through which the crack will propagate at $t_{k,m}$ as
$$\mathbf{z}:=\mathop{\mathrm{Argmax}}\limits_{\mathbf{v}\in\mathcal{V}^{\Gamma*}_{N}(t_{k,m})}\mathcal{G}_{h}(\mathbf{v}).$$
(31)
If there is more than one maximizer, one is picked randomly. Note that in most situations, the vertex $\mathbf{z}$ is located at the crack tip.
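The selection of Eqs. (30)-(31) can be sketched as follows; the names and containers are illustrative assumptions, not the authors' code:

```python
import random

def mark_vertex(last_N_vertices, G_h, G_c):
    """Select the propagation vertex among the N most recently broken
    crack-front vertices, following Eqs. (30)-(31)."""
    # keep only vertices whose energy release rate reaches the toughness G_c
    admissible = [v for v in last_N_vertices if G_h[v] >= G_c]
    if not admissible:
        return None  # no propagation at this pseudo-time node
    g_max = max(G_h[v] for v in admissible)
    # random tie-break among the maximizers, as stated in the text
    return random.choice([v for v in admissible if G_h[v] == g_max])
```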
Having selected the vertex $\mathbf{z}$, we now mark one facet $\mathfrak{F}\in\mathcal{F}^{i}_{\mathbf{z}}(t_{k,m})$ for cracking.
We impose only one restriction on the selection process of the facet to be broken: we limit the number of facets broken per cell to one.
This restriction is motivated by the observation that, when a facet breaks, the resulting geometric singularity creates very high stresses in its neighbourhood; these stresses would break the remaining facets of the adjacent cells and produce many spurious fragments. Limiting the number of broken facets per cell avoids this situation.
The setting is illustrated in Figure 7.
The output of the procedure MARK is the facet $\mathfrak{F}$, through which the crack will propagate, defined as
$$\mathfrak{F}:=\mathop{\mathrm{Argmax}}\limits_{\begin{subarray}{c}F\in\mathcal{F}^{i}_{\mathbf{z}}(t_{k,m})\setminus\mathcal{F}^{i}_{\mathcal{C}}(t_{k,m})\end{subarray}}\frac{1}{2}\{\Sigma_{h}(t_{k,m})\}_{F}\cdot\{\varepsilon_{h}(t_{k,m})\}_{F},$$
(32)
where $\mathcal{F}^{i}_{\mathcal{C}}(t_{k,m})$ denotes the set of inner facets contained in a cell with one facet already broken.
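A minimal sketch of the facet-marking step (32), assuming a dictionary of precomputed strain energy densities $\frac{1}{2}\{\Sigma_h\}_F\cdot\{\varepsilon_h\}_F$ per facet (data layout and names are ours):

```python
def mark_facet(facets_at_z, excluded_facets, energy_density):
    """Pick the facet to break at the selected vertex z (Eq. (32)):
    maximise the strain energy density over the inner facets at z,
    skipping facets whose cell has already lost one facet."""
    candidates = [F for F in facets_at_z if F not in excluded_facets]
    if not candidates:
        return None
    return max(candidates, key=lambda F: energy_density[F])
```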
4.3 Procedure UPDATE
The subsets $\mathcal{F}^{\Gamma}(t_{k,m+1})$, $\mathcal{F}^{i}(t_{k,m+1})$, and
$\mathcal{F}^{b}(t_{k,m+1})$ can now be updated as follows:
$$\left\{\begin{aligned} &\mathcal{F}^{\Gamma}(t_{k,m+1}):=\mathcal{F}^{\Gamma}(t_{k,m})\cup\{\mathfrak{F}\},\\
&\mathcal{F}^{i}(t_{k,m+1}):=\mathcal{F}^{i}(t_{k,m})\setminus\{\mathfrak{F}\},\\
&\mathcal{F}^{b}(t_{k,m+1}):=\mathcal{F}^{b}(t_{k,m})\cup\{\mathfrak{F}_{-},\mathfrak{F}_{+}\},\end{aligned}\right.$$
(33)
where we recall that $\mathfrak{F}_{-}$ and $\mathfrak{F}_{+}$ are the same geometric object as the inner facet $\mathfrak{F}$, but are now each one on the boundary of a single mesh cell.
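The updates (33) amount to three set operations; a sketch in Python, with the facet collections modeled as plain sets and hypothetical names:

```python
def update_facet_sets(F_gamma, F_inner, F_boundary, frak_F):
    """Set updates of Eq. (33): the broken inner facet frak_F joins the
    crack set, leaves the inner set, and its two boundary copies
    (frak_F, '-') and (frak_F, '+') join the boundary set."""
    F_gamma = F_gamma | {frak_F}
    F_inner = F_inner - {frak_F}
    F_boundary = F_boundary | {(frak_F, '-'), (frak_F, '+')}
    return F_gamma, F_inner, F_boundary
```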
Remark 1 (Update of $a_{h}$).
The updates in (33) affect the reconstruction operator used to evaluate the discrete stiffness bilinear form. Figure 8 presents a sketch of an inner facet whose reconstruction has to be recomputed after a neighbouring inner facet breaks. The purpose of recomputing the reconstruction on certain inner facets is to avoid using dof values on both sides of the crack in the same reconstruction.
5 Numerical experiments
Several numerical experiments are presented to show the versatility of the proposed numerical method. The Python scripts for these numerical experiments (available at https://github.com/marazzaf/DEM_cracking.git) use the finite element library FEniCS [21] and scipy (https://scipy.org/). Although the proposed method is able to handle polyhedral meshes, our computations only use triangular meshes, owing to the current restriction of FEniCS to simplicial meshes.
5.1 Crack speed with prescribed crack path
We consider a test case taken from [20]. The test case consists of an already cracked plate under antiplane shear loading.
The crack is forced to propagate along a straight line represented by the dashed line in Figure 9. The goal of this test case is to study the crack propagation velocity.
The dimensions of the plate are $L=5\text{m}$ and $H=1\text{m}$ and the initial length of the crack is $l_{0}=1\text{m}$.
The constant increment in boundary loading is written $\Delta u_{D}$. The material parameters are $\mu=0.2\text{Pa}$ and $\mathcal{G}_{c}=0.01\text{kN/mm}$.
We are interested in the length of the crack with respect to the cumulated boundary loading displacement $u_{D}$, where the final displacement load is $u_{D}=1\mathrm{m}$.
The reference solution for the crack speed $S$ with respect to the loading speed, taken from [20], is $\sqrt{\frac{\mu H}{\mathcal{G}_{c}}}\approx 4.47$. As this solution is only valid when $L\to\infty$, we checked that doubling the length $L$ of the strip did not lead to any significant change in the crack speeds.
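The reference value can be checked directly from the quoted parameters:

```python
import math

# shear modulus, strip height, and toughness quoted in the test case
mu, H, G_c = 0.2, 1.0, 0.01
S_ref = math.sqrt(mu * H / G_c)  # reference crack speed from [20]
print(round(S_ref, 2))  # -> 4.47
```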
The computations are performed with two structured 2d meshes of triangles with characteristic sizes $h=10$cm and $h=5$cm. Various values of $\Delta u_{D}$ are used in the two computations.
Figure 10 reports the crack length as a function of the cumulated loading displacement $u_{D}$.
One can see that the results with the two meshes are very similar.
The results with $\Delta u_{D}=10^{-3}$m and $\Delta u_{D}=10^{-2}$m are very similar and are in agreement with the analytical solution. For these two values, $\frac{\Delta u_{D}}{h}$ is less than $0.5$, so that the increment in the imposed Dirichlet condition is smaller than the mesh size. This is not the case for $\Delta u_{D}=0.1$m.
The different aspect of the curves for $\Delta u_{D}=0.1$m is explained by the fact that, with such a large increment, many facets can break within a single displacement increment, which produces the staircase shape. Nevertheless, at the end of every other displacement increment, the curve for $\Delta u_{D}=0.1$m reaches the same value as the curves computed with the smaller $\Delta u_{D}$ values.
Table 2 contains the errors of the crack speeds (computed with a least-squares fit on the two numerical computations) with respect to the analytical solution.
The agreement of the computed crack speeds with the analytical solution is very satisfactory for all $\Delta u_{D}$.
5.2 Opening mode with unknown crack path
The setting for this test case is presented in Figure 11.
The dimensions of the plate are $L=32\text{mm}$ and $H=16\text{mm}$ and the initial length of the crack is $l_{0}=4\text{mm}$.
The material parameters are $E=3.09\text{GPa}$, $\nu=0.35$ and $\mathcal{G}_{c}=300\text{kN/mm}$.
First, we use a structured mesh of size $h=0.4\text{mm}$ leading to $25,920$ dofs. The increment in boundary conditions is defined as $\Delta u_{D}=h$.
Figure 12 presents the obtained crack path. As expected, the crack propagation is unstable: once propagation starts, the crack breaks the entire sample within a single pseudo-time node $t_{k}$.
We also perform computations on two unstructured meshes of sizes $h=1.4\mathrm{mm}$ and $h=0.74\mathrm{mm}$, corresponding respectively to $2,792$ dofs and $11,044$ dofs. Neither mesh contains facets whose direction could allow a perfectly straight propagation of the crack.
The finer mesh is not a refinement of the coarser one.
Figure 13 shows the crack paths obtained on the two meshes.
The crack paths obtained are satisfactory as the propagation is rather straight and the results on the two meshes are quite similar.
5.3 Single-edge notched shear test
The setting of this test case comes from [1]. It consists of a square sample with an initial crack. The lower surface is clamped while the upper surface is loaded in shear. The two lateral sides are stress-free, as are the crack faces. Figure 14 illustrates the setting.
The crack is of initial length $l_{0}=0.5\text{mm}$ and the dimension of the sample is $H=1\text{mm}$.
The material parameters are $E=210\text{GPa}$, $\nu=0.3$ and $\mathcal{G}_{c}=2.7\cdot 10^{-3}\text{kN/mm}$. The increment of boundary load is defined as $\Delta u_{D}=10^{-6}\text{mm}$ and the final load is $u_{D,\text{final}}=0.2\text{mm}$.
Three computations are performed on unstructured meshes of size $h=2.8\cdot 10^{-2}$mm (coarse mesh), $h=1.3\cdot 10^{-2}$mm (fine mesh), and $h=7.7\cdot 10^{-3}$mm (finest mesh), leading respectively to $13,396$, $65,956$, and $210,328$ dofs.
Figure 15 shows the computed crack paths. Our results can be compared with [26] which uses a phase-field model discretized by a hybridizable discontinuous Galerkin formulation.
The computations are in satisfactory agreement with those of [26] regarding the general orientation of the crack and the number of branches.
We observe in Figure 15 that the crack propagates downwards along a somewhat curved path (with rather close predictions between the two finer meshes). The trajectory is slightly different from the one predicted in [26], where the crack propagates along a rather straight line forming a sharp angle with the initial crack. Experimental results would be needed to assess the correctness of these numerical results.
The load-displacement curves are displayed in Figure 16
along with the values of the imposed displacement and the resulting force when
the crack starts propagating.
The force is computed through an integration of the tangential component of the reconstructed normal stress $\Sigma_{h}\cdot n$ on the upper and lower surfaces of the sample. The force has also been computed through a residual method and the difference has been found to be negligible.
One can first notice that up to an imposed displacement of $9.5\mu$m, all the curves are superimposed and exactly reproduce the elastic response of the sample with the fixed initial crack. As the imposed displacement increases beyond the above value,
jumps in the load-displacement curves appear progressively. These jumps are a consequence of facets cracking, and the slope of the elastic response is reduced after each jump owing to the propagation of the crack. This explains the observed zigzag behavior of the response curves. Altogether, crack propagation thus induces a softening of the sample as expected. The load-displacement curve obtained on the coarse mesh stops at the value $u_{D}=0.2$mm for which the crack reaches the rightmost boundary of the sample. Instead, the computations on the two finer meshes support larger values for $u_{D}$ and lead to rather similar predictions. Furthermore,
one can see that the crack starts propagating around an imposed displacement of $10\mu$m, which is similar to the value reported in [1]. The value of the force, however, is different. We believe that this difference can be attributed to the sharp interface representation of the crack in the present method. To substantiate this claim, we performed some
additional computations on the finest mesh using a fixed interface position, $P^{1}$–Lagrange finite elements, and an imposed displacement $u_{D}=5\mu$m. With a sharp interface, the load is $0.13$kN (consistently with the DEM prediction on the same mesh), whereas it is $0.32$kN if there is no crack (the sample is fully sound). If instead the initial crack is represented as a damage field [borden2012p, 26] with a smoothing length $\ell=5h$, the load is close to the value reported in [1, 26], namely $0.20$kN (notice that this value is as expected in the interval $(0.13,0.32)$kN).
5.4 Notched plate with a hole
This test case comes from [26]. The material parameters are $E=6\text{GPa}$, $\nu=0.22$, and $\mathcal{G}_{c}=2.28\cdot 10^{-3}\text{kN/mm}$. We use fixed displacement increments of $\Delta u_{D}=10^{-2}\text{mm}$.
Figure 17 presents a sketch of the sample.
The dimensions of the plate are $L=65\text{mm}$ and $H=120\text{mm}$.
The two holes on the left of the sample have a diameter of $10\text{mm}$ and the hole on the right of the sample has a diameter of $20\text{mm}$. The initial length of the crack is $l_{0}=10\text{mm}$. One also has $a=20\text{mm}$, $b=55\text{mm}$, $d=69\text{mm}$ and $e=36.5\text{mm}$.
The right hole is stress-free, the lower hole is clamped, and the upper hole has an imposed displacement $u=(0,u_{D}(t_{k}))$.
We use three unstructured meshes with $h=2.8$mm, $h=1.5$mm and $h=0.78$mm having respectively $9,926$, $39,380$ and $157,340$ dofs.
Figure 18 shows the computed crack paths.
We compare our results with [26] without taking into account the secondary crack starting from the largest hole as we restrict ourselves to crack propagation and not crack initiation.
We notice that for the three computations, the crack goes towards the largest hole in a similar fashion which also seems consistent with [26].
The load-displacement curves are given in Figures 19 and 20, together with the values of the imposed displacement and the resulting force when the crack starts propagating and when the crack reaches the hole, respectively. Figure 19 focuses on imposed displacements $u_{D}$ up to 0.6mm, whereas Figure 20 explores a wider range for $u_{D}$ on the two finer meshes.
The force is computed through an integration of the vertical component of the reconstructed normal stress $\Sigma_{h}\cdot n$ on the upper left hole.
A similar behaviour of the elastic response and softening of the sample is observed as in Section 5.3. The crack starts propagating around an imposed displacement $u_{D}=0.27$mm (consistently on the three meshes), in reasonable agreement with [26, Fig. 19], whose caption indicates that propagation has started at $u_{D}=0.3$mm.
A further quantitative comparison including forces is delicate owing to the difficulties mentioned at the end of Section 5.3.
Moreover, we observe from Figure 19 that the predictions on the coarse mesh are still rather inaccurate for higher values of $u_{D}$, whereas Figure 20 indicates that the predictions on the two finer meshes are in satisfactory agreement as far as the load-displacement curves are concerned.
The predictions of the path of crack propagation are also similar on both meshes, but the value of the imposed displacement when the crack reaches the hole is different, as reflected in the caption of Figure 18.
6 Conclusions
We have presented a variational Discrete Element Method (DEM) to compute Griffith crack propagation. The crack propagates through the facets of the mesh and thus between discrete elements.
The variational DEM is a consistent discretization of a Cauchy continuum and only requires three continuum macroscopic parameters for its implementation: the Young modulus, the Poisson ratio, and the critical energy release rate. The displacement degrees of freedom are attached to the barycentre of the mesh cells.
A discrete Stokes formula is used to devise piecewise-constant gradient and linearized strain reconstructions.
An approximation of the energy release rate is computed in the procedure ESTIMATE. The procedure MARK then determines the breaking facet at each pseudo-time node $t_{k}$. Finally, the procedure UPDATE updates the necessary discrete quantities after the facet that has been marked has been broken. A convergence test in antiplane shear has confirmed the efficiency of the variational DEM discretization as well as the $\mathcal{O}(h^{\frac{1}{2}})$ convergence rate in energy norm.
The robustness of the method regarding the computation of the crack speed has been verified. Also, several numerical experiments have shown that the method can provide reasonable crack paths.
This work can be pursued in several directions. A first idea would be to adapt the present methodology to three-dimensional problems with two-dimensional cracks.
A second direction concerns the regularity of the crack surface. Indeed, in the spirit of [13], a crack should be a surface that minimizes energy. To achieve this goal, the variational DEM could be coupled to gradient flows used for surface lifting, as in [27], with the goal of moving the crack surface vertices. One would then have to verify the convergence of the discrete crack area with tools similar to [15].
A third direction for further study is to approximate cohesive cracking laws instead of a Griffith cracking law so as to enable the simulation of crack initiation as well as crack propagation. Inspiration can be found in [23] which uses a DEM with a linear cohesive law.
Finally, a last direction can be to consider an enrichment similar to [8] close to the crack tip so as to obtain a convergence with order $\mathcal{O}(h)$.
Acknowledgements
Partial support by CEA is gratefully acknowledged.
References
[1]
M. Ambati, T. Gerasimov, and L. De Lorenzis.
A review on phase-field models of brittle fracture and a new fast
hybrid formulation.
Comput. Mech., 55(2):383–405, 2015.
[2]
D. André, J. Girardot, and C. Hubert.
A novel DEM approach for modeling brittle elastic media based on
distinct lattice spring model.
Comput. Methods Appl. Mech. Eng., 350:100–122, 2019.
[3]
D. André, M. Jebahi, I. Iordanoff, J.-L. Charles, and J. Néauport.
Using the discrete element method to simulate brittle fracture in the
indentation of a silica glass with a blunt indenter.
Comput. Methods Appl. Mech. Eng., 265:136–147, 2013.
[4]
D. Arnold.
An interior penalty finite element method with discontinuous
elements.
SIAM J. Numer. Anal., 19(4):742–760, 1982.
[5]
T. Belytschko, Y. Y. Lu, and L. Gu.
Element-free Galerkin methods.
Int. J. Numer. Methods Eng., 37(2):229–256, 1994.
[6]
B. Bourdin, G. A. Francfort, and J.-J. Marigo.
Numerical experiments in revisited brittle fracture.
J. Mech. Phys. Solids, 48(4):797–826, 2000.
[7]
M. A. Celigueta, S. Latorre, F. Arrufat, and E. Oñate.
Accurate modelling of the elastic behavior of a continuum with the
discrete element method.
Comput. Mech., 60(6):997–1010, 2017.
[8]
E. Chahine, P. Laborde, and Y. Renard.
Crack tip enrichment in the XFEM using a cutoff function.
Int. J. Numer. Methods Eng., 75(6):629–646, 2008.
[9]
G. Dal Maso.
Generalised functions of bounded deformation.
J. Eur. Math. Soc., 15(5):1943–1997, 2013.
[10]
D. A. Di Pietro.
Cell centered Galerkin methods for diffusive problems.
ESAIM. M2AN, 46(1):111–144, 2012.
[11]
D. A. Di Pietro and A. Ern.
Mathematical aspects of discontinuous Galerkin methods,
volume 69.
Springer Science & Business Media, 2011.
[12]
R. Eymard, T. Gallouët, and R. Herbin.
Discretization of heterogeneous and anisotropic diffusion problems on
general nonconforming meshes SUSHI: a scheme using stabilization and hybrid
interfaces.
IMA J. Numer. Anal., 30(4):1009–1043, 2009.
[13]
G. A. Francfort and J.-J. Marigo.
Revisiting brittle fracture as an energy minimization problem.
J. Mech. Phys. Solids, 46(8):1319–1342, 1998.
[14]
P. Hansbo and K. Salomonsson.
A discontinuous Galerkin method for cohesive zone modelling.
Finite Elem. Anal. Des., 102:1–6, 2015.
[15]
K. Hildebrandt, K. Polthier, and M. Wardetzky.
On the convergence of metric and geometric properties of polyhedral
surfaces.
Geometriae Dedicata, 123(1):89–112, 2006.
[16]
A. Hussein, B. Hudobivnik, F. Aldakheel, P. Wriggers, P.-A. Guidault, and
O. Allix.
A virtual element method for crack propagation.
PAMM, 18(1):e201800104, 2018.
[17]
M. Jebahi, D. André, I. Terreros, and I. Iordanoff.
Discrete element method to model 3D continuous materials.
John Wiley & Sons, 2015.
[18]
M. Kuna.
Finite elements in fracture mechanics.
Springer, 2013.
[19]
C. Labra and E. Oñate.
High-density sphere packing for discrete element method simulations.
Commun. Numer. Methods Eng., 25(7):837–849, 2009.
[20]
T. Li, J.-J. Marigo, D. Guilbaud, and S. Potapov.
Numerical investigation of dynamic brittle fracture via gradient
damage models.
Adv. Model. Simul. Eng. Sci., 3(1):26, 2016.
[21]
A. Logg, K.-A. Mardal, G. N. Wells, et al.
Automated Solution of Differential Equations by the Finite
Element Method.
Springer, 2012.
[22]
F. Marazzato, A. Ern, and L. Monasse.
A variational discrete element method for quasistatic and dynamic
elastoplasticity.
Int. J. Numer. Methods Eng., 121(23):5295–5319, 2020.
[23]
C. Mariotti, V. Michaut, and J.-F. Molinari.
Modeling of the fragmentation by discrete element method.
In DYMAT 2009 9th Int. Conf. Mechanical and Physical Behaviour
of Materials under Dynamic Loading, pages 1523–1528, 2009.
[24]
N. Moës and T. Belytschko.
X-FEM, de nouvelles frontières pour les éléments finis.
Revue européenne des Eléments, 11(2-4):305–318, 2002.
[25]
L. Monasse and C. Mariotti.
An energy-preserving discrete element method for elastodynamics.
ESAIM. M2AN, 46:1527–1553, 2012.
[26]
A. Muixí, A. Rodríguez-Ferran, and S. Fernández-Méndez.
A hybridizable discontinuous Galerkin phase-field model for brittle
fracture with adaptive refinement.
Int. J. Numer. Methods Eng., 121(6):1147–1169, 2020.
[27]
P. Romon.
Introduction à la géométrie différentielle
discrète.
Ellipses, 2013.
[28]
G. C. Sih.
Strain-energy-density factor applied to mixed mode crack problems.
Int. J. Fract., 10(3):305–321, 1974.
[29]
L. Simon.
Lectures on geometric measure theory.
In Proceedings of the Centre for Mathematical Analysis,
Australian National University, volume 3. Australian National University
Centre for Mathematical Analysis, Canberra, 1983.
[30]
M. Spellings, R. L. Marson, J. A. Anderson, and S. C. Glotzer.
GPU accelerated discrete element method (DEM) molecular dynamics
for conservative, faceted particle simulations.
J. Comput. Phys., 334:460–467, 2017.
[31]
N. Sukumar, B. Moran, T. Black, and T. Belytschko.
An element-free Galerkin method for three-dimensional fracture
mechanics.
Comput. Mech., 20(1-2):170–175, 1997.
[32]
F. Zárate, A. Cornejo, and E. Oñate.
A three-dimensional FEM–DEM technique for predicting the evolution
of fracture in geomaterials and concrete.
Comput. Part. Mech., 5(3):411–420, 2018.
[33]
F. Zárate and E. Oñate.
A simple FEM–DEM technique for fracture prediction in materials
and structures.
Comput. Part. Mech., 2(3):301–314, 2015. |
Global phase diagram of
a spin-orbit-coupled Kondo lattice model on the honeycomb lattice
Xin Li
Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
Rong Yu
rong.yu@ruc.edu.cn
Department of Physics and Beijing Key Laboratory of Opto-electronic Functional Materials and Micro-nano Devices, Renmin University of China, Beijing 100872, China
Qimiao Si
qmsi@rice.edu
Department of Physics & Astronomy, Rice Center for Quantum Materials,
Rice University, Houston, Texas 77005,USA
(December 7, 2020)
Abstract
Motivated by the growing interest in the
novel quantum phases in materials with strong electron correlations and spin-orbit coupling,
we study the interplay
between the spin-orbit coupling, Kondo interaction, and magnetic frustration
of a Kondo lattice model
on a two-dimensional honeycomb lattice.
We
calculate the renormalized electronic structure and correlation functions
at the saddle point
based on a fermionic representation of the spin operators.
We find
a global phase diagram of the model at
half-filling, which
contains a variety of phases due to the competing interactions. In addition to a Kondo insulator,
there is a topological insulator with valence bond solid correlations in the spin sector, and two
antiferromagnetic phases.
Due to a competition between the spin-orbit coupling and Kondo interaction,
the direction of the magnetic moments
in the antiferromagnetic phases
can be either within
or perpendicular to
the lattice plane.
The latter antiferromagnetic state is topologically nontrivial for moderate and strong spin-orbit
couplings.
I Introduction
Exploring novel quantum phases and the associated phase transitions in systems with strong electron correlations
is a major subject of contemporary condensed matter physics.SpecialIssue2010 ; Sachdev2011a ; SiSteglich_Sci2010
In this context, heavy fermion (HF) compounds play a crucial role. SiSteglich_Sci2010 ; GegenwartSi_NatPhys2007 ; Lohneysen_RMP2007 ; Tsunetsugu_RMP1997 In these materials,
the coexisting itinerant electrons and local magnetic moments (from localized $f$ electrons) interact via the antiferromagnetic
exchange coupling, resulting in the
Kondo effect.Hewson_Book Meanwhile, the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, namely the exchange coupling among the local moments mediated by the itinerant electrons, competes with the Kondo effect.Doniach_Physica1977
This competition
gives rise to
a rich
phase diagram with an antiferromagnetic (AFM) quantum critical point (QCP) and various emergent phases nearby.Custers_Nat2003 ; SiSteglich_Sci2010
In the HF metals,
experiments Schroder_Nat2000 ; Paschen_Nat2004
have provided strong evidence for
local quantum criticality, Si_Nat2001 ; Coleman_JPCM2001
which is characterized by the beyond-Landau physics of Kondo destruction
at the AFM QCP. Across this local QCP, the Fermi surface jumps from large in the paramagnetic HF liquid phase to small in the AFM
phase with Kondo destruction.
A natural question is how this local QCP
connects
to the conventional spin density wave (SDW) QCP, described by the Hertz-Millis theory Hertz_1976 ; Millis_1993 .
A proposed global phase diagram
Si_PhysB2006 ; Si_PSSB2010 ; Pixley_PRL2014 ; SiPaschen
makes this connection
via the tuning
of the quantum fluctuations
in the local-moment magnetism.
Besides the HF metals, it is also interesting to know whether a similar global phase diagram can be realized in Kondo insulators (KIs), where the chemical potential is inside the Kondo hybridization gap when the electron filling is commensurate.
The KIs are nontrivial band insulators because the band gap originates from strong electron-correlation effects.
A Kondo-destruction transition
is expected to accompany the closure of the band gap.
The question that remains open is
whether the local moments immediately order or form a
different type of magnetic states, such as spin liquid
or
valence bond solid (VBS),
when the Kondo destruction takes place.
Recent years have seen extensive
studies of
the effect of a finite spin-orbit coupling (SOC) on the electronic bands.
In topological insulators (TIs), the bulk band gap opens due to a
nonzero
SOC,
and
there exist gapless surface states.
The
nontrivial topology of the bandstructure is protected by the time reversal symmetry (TRS).
Even for a system with broken TRS, the conservation of the combination of TRS and a translational symmetry can give rise to a topological antiferromagnetic insulator (T-AFMI).MongMoore_2010
In general, these TIs and T-AFMIs can be tuned to topologically trivial insulators via topological quantum phase transitions. How strong electron correlations influence the properties of these symmetry-dictated topological phases and the related phase transitions is still under active discussion.
The SOC also has important effects in
HF materials SiPaschen .
For example,
the SOC
can produce a topologically nontrivial bandstructure
and
induce exotic Kondo physics.Nakatsuji_PRL2006 ; Chen_PRB2017
It may also give
rise to a topological Kondo insulator (TKI),Dzero_PRL2012
which
has been invoked to understand
the resistivity plateau of the heavy-fermion compound SmB${}_{6}$ at low temperatures.SmB6
From
a more general perspective,
SOC provides an additional tuning parameter enriching the global phase diagram of HF
systems SiPaschen ; YamamotoSi_JLTP2010 .
Whether and how the topological nontrivial quantum phases can emerge
in this phase diagram is
a timely issue.
Recent studies have advanced a Weyl-Kondo semimetal phase Lai2018 .
Experimental evidence has come from the new heavy fermion compound
Ce${}_{3}$Bi${}_{4}$Pd${}_{3}$, which displays thermodynamic Dzsaber2017 and
zero-field Hall transport Dzsaber2018 properties that provide evidence for the salient features
of the Weyl-Kondo semimetal. These measurements respectively probe
linearly dispersing electronic excitations, with a velocity that is renormalized by several
orders of magnitude, and singularities in the Berry-curvature distribution.
This type of theoretical study is also of interest for
a Kondo lattice model
defined on
a honeycomb lattice,Feng_PRL2012
which
readily accommodates
the
SOC KaneMele_PRL2005 .
In the dilute-carrier
limit, this model supports a nontrivial Dirac-Kondo semimetal (DKSM) phase,
which can be tuned to a TKI by increasing SOC.Feng_2016
In Ref. Feng_PRL2012, it was shown that, at half-filling,
increasing the Kondo coupling induces a direct transition from a TI to a KI.
A related model, with the conduction-electron part of the Hamiltonian described by
a Haldane model Haldane1988
on the honeycomb lattice, was subsequently studied.Zhong_PRB2012
Here we investigate the global phase diagram of a
spin-orbit-coupled Kondo lattice model on the honeycomb lattice
at half-filling. We show that the
competing interactions
in this model give rise to a very rich phase diagram containing a TI, a KI, and two AFM phases.
We focus on discussing the influence of magnetic frustration
on the phase diagram.
In the TI, the local moments develop a VBS order. In the two AFM phases, the moments are ordered, respectively, in the plane of the honeycomb lattice (denoted as AFM${}_{xy}$) and perpendicular to the plane (AFM${}_{z}$).
Particularly in the AFM${}_{z}$ phase, the conduction electrons may have a topologically nontrivial bandstructure,
although the TRS is explicitly broken. This T-AFM${}_{z}$ state connects to the trivial AFM${}_{z}$ phase via a topological phase transition
as
the SOC
is reduced.
The remainder of the
paper is organized as follows. We start by introducing the model and our theoretical procedure in Sec. II.
In Sec. III we discuss the magnetic phase diagram of the Heisenberg model for the local moments.
Next we obtain the global phase diagram of the full model in Sec. IV.
In Sec. V we examine the nature of the conduction-electron bandstructures in the AFM states,
with a focus on their
topological characters. We discuss the
implications of our results
in Sec. VI.
II Model and method
The model we consider here is defined on an effective double-layer honeycomb lattice. The top layer contains conduction electrons realizing the Kane-Mele Hamiltonian KaneMele_PRL2005 . The conduction electrons are Kondo coupled to (i.e.,
experiencing an AF exchange coupling $J_{\rm{K}}$ with) the localized magnetic
moments in the bottom layer. The local moments interact among themselves through direct exchange interaction
as well as the conduction electron mediated RKKY interaction;
this interaction is described
by a simple $J_{1}$-$J_{2}$ model. Both the conduction bands and the localized bands are half-filled.
This Kondo-lattice
Hamiltonian takes the following form on the honeycomb lattice:
$$H=t\sum_{\langle ij\rangle\sigma}c^{\dagger}_{i\sigma}c_{j\sigma}+i\lambda_{\rm{so}}\sum_{\langle\langle ij\rangle\rangle\sigma\sigma^{\prime}}v_{ij}c^{\dagger}_{i\sigma}{\sigma}^{z}_{\sigma\sigma^{\prime}}c_{j\sigma^{\prime}}+J_{K}\sum_{i}{\vec{s}}_{i}\cdot{\vec{S}}_{i}+J_{1}\sum_{\langle ij\rangle}{\vec{S}}_{i}\cdot{\vec{S}}_{j}+J_{2}\sum_{\langle\langle ij\rangle\rangle}{\vec{S}}_{i}\cdot{\vec{S}}_{j},$$
(1)
where $c^{\dagger}_{i\sigma}$ creates a conduction electron at site $i$ with spin index $\sigma$, $t$ is the hopping parameter between nearest-neighboring (NN) sites, and $\lambda_{\rm{so}}$ is the strength of the SOC between next-nearest-neighboring (NNN) sites. $v_{ij}=\pm 1$, depending on the direction of the NNN hopping. ${\vec{s}}_{i}=c^{\dagger}_{i\sigma}\vec{\sigma}_{\sigma\sigma^{\prime}}c_{i\sigma^{\prime}}$ is the spin operator of the conduction electrons at site $i$, with $\vec{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})$ being the Pauli matrices. ${\vec{S}}_{i}$ refers to the spin operator of the local moments with spin size $S=1/2$. In the model considered here, $J_{\rm{K}}$, $J_{1}$, and $J_{2}$ are all AF.
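To make the conduction-electron part of the Hamiltonian concrete, the following sketch diagonalizes the spin-up Kane-Mele Bloch Hamiltonian at a Dirac point and recovers the well-known SOC-induced gap $6\sqrt{3}\lambda_{\rm so}$; the parameter values and lattice conventions are illustrative, not taken from the paper:

```python
import numpy as np

t, lam = 1.0, 0.05  # NN hopping and SOC strength (illustrative values)

# NN bond vectors (unit bond length); NNN vectors are differences of NN ones
delta = np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
b = np.array([delta[1] - delta[2], delta[2] - delta[0], delta[0] - delta[1]])

def h_spin_up(k):
    """2x2 Bloch Hamiltonian of the conduction electrons for spin up;
    the spin-down block carries the opposite sign of the SOC mass m(k)."""
    f = t * np.exp(1j * delta @ k).sum()   # NN hopping between sublattices
    m = 2 * lam * np.sin(b @ k).sum()      # NNN SOC term: +m on A, -m on B
    return np.array([[m, f], [np.conj(f), -m]])

K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])  # Dirac point of the NN model
gap = np.ptp(np.linalg.eigvalsh(h_spin_up(K)))     # equals 6*sqrt(3)*lam here
```

At $\lambda_{\rm so}=0$ the spectrum is gapless at $K$; any finite $\lambda_{\rm so}$ opens the topological gap.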
By incorporating the Heisenberg
interactions, the Kondo-lattice model we study readily captures the effect of geometrical frustration.
In addition, instead of treating the
Kondo screening and magnetic order in terms of the longitudinal and transverse components of the Kondo-exchange interactions Lacroix_prb1979 ; GMZhang ; Zhong_PRB2012 , we will treat both effects in terms of
interactions that are spin-rotationally invariant;
this will turn out to be important in mapping out the global phase diagram.
We use the spinon representation for ${\vec{S}}_{i}$, i.e., by rewriting ${\vec{S}}_{i}=f^{\dagger}_{i\sigma}\vec{\sigma}_{\sigma\sigma^{\prime}}f_{i\sigma^{\prime}}$ along with the constraint $\sum_{\sigma}f^{\dagger}_{i\sigma}f_{i\sigma}=1$, where $f^{\dagger}_{i\sigma}$ is the spinon operator. The constraint is enforced by introducing the Lagrange multiplier term $\sum_{i}\lambda_{i}(\sum_{\sigma}f^{\dagger}_{i\sigma}f_{i\sigma}-1)$ in the Hamiltonian. In order to
study both the non-magnetic and magnetic phases, we decouple the Heisenberg Hamiltonian into two channels:
$$\displaystyle J\bm{S}_{i}\cdot\bm{S}_{j}$$
(2)
$$\displaystyle=$$
$$\displaystyle xJ\bm{S}_{i}\cdot\bm{S}_{j}+(1-x)J\bm{S}_{i}\cdot\bm{S}_{j}$$
$$\displaystyle\simeq$$
$$\displaystyle x\left(\frac{J}{2}|Q_{ij}|^{2}-\frac{J}{2}Q^{*}_{ij}f_{i\alpha}^{\dagger}f_{j\alpha}-\frac{J}{2}Q_{ij}f_{j\alpha}^{\dagger}f_{i\alpha}\right)$$
$$\displaystyle+$$
$$\displaystyle(1-x)\left(-J\bm{M}_{i}\cdot\bm{M}_{j}+J\bm{M}_{j}\cdot\bm{S}_{i}+J\bm{M}_{i}\cdot\bm{S}_{j}\right)$$
Here $x$ is a parameter that is introduced in keeping with the generalized procedure of Hubbard-Stratonovich decouplings and will be fixed to conveniently describe the effect of quantum fluctuations.
The corresponding
valence bond (VB) parameter $Q_{ij}$ and sublattice magnetization $\bm{M}_{i}$ are $Q_{ij}=\langle\sum_{\alpha}f_{i\alpha}^{\dagger}f_{j\alpha}\rangle$ and $\bm{M}_{i}=\langle\bm{S}_{i}\rangle$, respectively.
Throughout this paper, we consider the two-site unit cell, thus excluding any states that break lattice translation symmetry. Under this construction, there are 3 independent VB mean fields $Q_{i}$, $i=1,2,3$, for the NN bonds and 6 independent VB mean fields $Q_{i}$, $i=4,5,...,9$, for the NNN bonds.
They are illustrated in Fig. 1.
We consider only AF exchange interactions, $J_{1}>0$ and $J_{2}>0$, and will thus only take into account AF order with $\bm{M}=\bm{M}_{i\in A}=-\bm{M}_{i\in B}$.
To take into account the Kondo hybridization and the possible magnetic order on an equal footing,
we follow the treatment of the Heisenberg interaction as outlined in Eq. 2 and
decouple the Kondo interaction as follows:
$$\displaystyle J_{K}\bm{S}_{i}\cdot\bm{s}_{i}$$
(3)
$$\displaystyle\simeq$$
$$\displaystyle y\left(\frac{J_{K}}{2}|b|^{2}-\frac{J_{K}}{2}bf^{\dagger}_{i\alpha}c_{i\alpha}-\frac{J_{K}}{2}b^{*}c_{i\alpha}^{\dagger}f_{i\alpha}\right)$$
$$\displaystyle+$$
$$\displaystyle(1-y)\left(-J_{K}\bm{M}_{i}\cdot\bm{m}_{i}+J_{K}\bm{S}_{i}\cdot\bm{m}_{i}+J_{K}\bm{s}_{i}\cdot\bm{M}_{i}\right).$$
Here we have introduced the mean-field parameter for the Kondo hybridization, $b=\langle\sum_{\alpha}c_{i\alpha}^{\dagger}f_{i\alpha}\rangle$, and the conduction electron magnetization $\bm{m}_{i}=\langle\bm{s}_{i}\rangle$. For nonzero $b$, the conduction band Kondo-hybridizes with the local moments, and the system at half filling is a KI. On the other hand, when $b$ is zero and $\bm{M}$ is nonzero, a magnetization ($\bm{m}\neq 0$) of the conduction electron band is induced by the Kondo coupling, and various AF orders can be stabilized depending on the strength of the SOC.
Just as the parameter $x$ of Eq. 2 is chosen so that a saddle-point treatment captures the quantum fluctuations in the form of spin-singlet bond parameters Pixley_PRL2014 , the parameter $y$ will be specified according to the criterion that the treatment at the same level describes the quantum fluctuations in the form of the Kondo-insulator state (see below).
III Phase diagram of the Heisenberg model for the local moments
Because of the complexity of the full Hamiltonian,
we start by setting $J_{K}=0$ and discuss the possible ground-state phases
of the $J_{1}$-$J_{2}$ Heisenberg model
for the local moments. By
treating the problem at the saddle-point level
in Eq. (2), we obtain the phase diagram in the $x$-$J_{2}/J_{1}$ plane shown in Fig.2.
Here the $x$-dependence is studied in the same spirit as that of Ref. Pixley_PRL2014,
for the Shastry-Sutherland lattice.
In the parameter regime explored, an AF ordered phase (labeled as “AFM” in the figure)
and a valence bond solid (VBS) phase are stabilized. The AF order stabilized is the two-sublattice Néel order
on the honeycomb lattice, and the VBS order refers to covering of dimer singlets with $|Q_{i}|=Q\neq 0$ for one out of the three NN bonds (e.g. $Q_{1}\neq 0,Q_{2}=Q_{3}=0$) and $|Q_{i}|=0$ for all the NNN bonds.
This VBS state spontaneously breaks the C${}_{3}$ rotational symmetry of the lattice.
We thus define the order parameter for VBS state to be $Q=|\sum_{j=1,2,3}Q_{j}e^{i(2\pi j/3)}|$.
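The $C_{3}$-breaking order parameter defined above can be written as a short function; a sketch of our own (not the paper's code), showing that $Q$ vanishes for a symmetric bond covering and reduces to the bond strength when a single NN bond dominates:

```python
import numpy as np

# VBS order parameter Q = |sum_{j=1..3} Q_j e^{i 2*pi*j/3}| built from the
# three NN bond amplitudes (Q1, Q2, Q3); nonzero Q signals C3 breaking.
def vbs_order(Q1, Q2, Q3):
    Qs = np.array([Q1, Q2, Q3])
    phases = np.exp(1j * 2 * np.pi * np.arange(1, 4) / 3)
    return abs(np.sum(Qs * phases))

print(vbs_order(0.3, 0.3, 0.3))  # C3-symmetric covering: 0
print(vbs_order(0.3, 0.0, 0.0))  # one strong bond (Q1 only): 0.3
```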
In Fig. 3 we plot the evolution of VBS and AF order parameters $Q$ and $M$
as a function of $J_{2}/J_{1}$. A direct first-order transition
(signaled by the mid-point of the jump of the order parameters) between these two phases is observed
for $x\lesssim 0.6$.
For
the sake of understanding the global phase diagram of the full Kondo-Heisenberg model,
we limit our discussion to $J_{2}/J_{1}<1$, where only the NN VBS is relevant.
A study of this model using a different decoupling scheme Liu_JPCM2016 found results that are, in the parameter regime of overlap, consistent with ours.
To fix the parameter $x$, we compare our results with those obtained for the $J_{1}$-$J_{2}$ model in previous numerical studies.
DMRG studies Ganesh_PRL2013 found that the AFM state is stabilized for $J_{2}/J_{1}<0.22$, and the VBS exists for $J_{2}/J_{1}>0.35$, while in between the nature of the ground state is still under debate. In this parameter regime, the DMRG calculations suggest a plaquette resonating valence bond (RVB) state,Ganesh_PRL2013 while other methods point to possible spin-liquid states.Clark_PRL2011 In light of these numerical results, we take $x=0.4$ in our calculations.
This leads to a direct transition from AFM to VBS at $J_{2}/J_{1}\simeq 0.27$, close to the values of phase boundaries of these two phases determined by other numerical methods.
IV Global phase diagram of the Kondo-lattice model
We now construct the global phase diagram of the full model by turning on the Kondo coupling.
For definiteness, we set $J_{1}=1$ and
consider $t=1$ and $\lambda_{so}=0.4$.
As prescribed in the previous section, we take $x=0.4$.
Similar considerations for $y$ require that its value allows for quantum fluctuations in the form of
Kondo-singlet formation. This has guided us to take
$y=0.7$
(see below). The corresponding phase diagram as a function of
$J_{K}$ and the frustration parameter $J_{2}/J_{1}$
is shown in Fig. 4.
In our calculation, the phase boundaries are determined by sweeping $J_{K}$ along multiple horizontal cuts at fixed $J_{2}/J_{1}$ values, as shown in Fig. 5. For small $J_{K}$ and large $J_{2}/J_{1}$, the local moments and the conduction electrons are still effectively decoupled. The conduction electrons form a TI for finite SOC, and the local moments are in the VBS ground state discussed in the previous section. When both $J_{K}$ and $J_{2}/J_{1}$ are small, the ground state is AFM. Due to the Kondo coupling, a finite magnetization $\bm{m}$ is induced for the conduction electrons. This opens a spin density wave (SDW) gap in the conduction band, and therefore the ground state of the system is an AFM insulator. The SOC couples the rotational symmetry in spin space to that in real space. As a consequence, the ordered moments in the AFM phase can point either along the $z$ direction (AFM${}_{z}$) or in the $x$-$y$ plane (AFM${}_{xy}$). For finite SOC, these two AFM states have different energies, which can be tuned by $J_{K}$. As shown in the phase diagram, the AFM phase contains two ordered states, the AFM${}_{z}$ and the AFM${}_{xy}$, separated by a spin reorientation transition at $J_{K}/J_{1}\approx 0.8$. For the value of SOC taken, the AFM${}_{z}$ state is topologically nontrivial, and is hence denoted as the T-AFM${}_{z}$ state. The nature of this state and the associated topological phase transition are discussed in detail in the next section.
For sufficiently large $J_{K}$, the Kondo hybridization $b$ is nonzero (see Fig. 5(a)), and the ground state is a KI. Note that for finite SOC, this KI does not host a topologically nontrivial edge state, as a consequence of the topological no-go theorem Feng_PRL2012 ; HasanKane_RMP2010 ; QiZhang_RMP2011 . In our calculation at the saddle-point level, the KI exists for $y\geq 0.6$; this provides the basis for taking $y=0.7$, as noted earlier. Going beyond the saddle-point level, the dynamical effects of the Kondo coupling will appear, and we expect the KI phase to arise for other choices of $y$ as well.
Several remarks are in order. The phase diagram, Fig. 4, has a profile similar to that of the global phase diagram proposed for Kondo insulating systems YamamotoSi_JLTP2010 ; Pixley_PRB2018 .
However, the presence
of SOC has enriched the phase diagram.
In the AF state, the ordered moment may lie either within the plane or be perpendicular to it.
These two states have very different topological properties.
We now turn to a detailed discussion of this last point.
V Topological properties of the AFM states
In this section we discuss the properties of the AFM${}_{xy}$ and AFM${}_{z}$ states, in particular to address their topological nature. For clarity, we fix $t=1$, $J_{1}=1$, and $J_{2}=0$. Since the Kondo hybridization is not essential to the nature of the AFM states, in this section we simplify the discussion by setting $y=0$.
We start by defining the order parameters of the two states:
$$\displaystyle M_{x}$$
$$\displaystyle=$$
$$\displaystyle\langle S_{f,A}^{x}\rangle=-\langle S_{f,B}^{x}\rangle,$$
(4)
$$\displaystyle M_{z}$$
$$\displaystyle=$$
$$\displaystyle\langle S_{f,A}^{z}\rangle=-\langle S_{f,B}^{z}\rangle,$$
(5)
$$\displaystyle m_{x}$$
$$\displaystyle=$$
$$\displaystyle-\langle s_{c,A}^{x}\rangle=\langle s_{c,B}^{x}\rangle,$$
(6)
$$\displaystyle m_{z}$$
$$\displaystyle=$$
$$\displaystyle-\langle s_{c,A}^{z}\rangle=\langle s_{c,B}^{z}\rangle.$$
(7)
Note that for the AFM${}_{xy}$ state we set $M_{y}=m_{y}=0$ without loss of generality.
In Fig. 6 we plot the evolution of these AFM order parameters with $J_{K}$ for a representative value of SOC, $\lambda_{so}=0.1$. Due to the large $J_{1}$ value we take, the sublattice magnetizations of the local moments are already saturated to $0.5$. Therefore, at the saddle-point level, they serve as effective (staggered) magnetic fields for the conduction electrons. The Kondo coupling then induces finite sublattice magnetizations for the conduction electrons, which increase linearly with $J_{K}$ for small $J_{K}$ values. But $m_{x}$ is generically different from $m_{z}$. This is important for the stabilization of the states.
We then discuss the energy competition between the AFM${}_{xy}$ and AFM${}_{z}$ states.
The conduction electron part of the mean-field Hamiltonian reads:
$$H_{c}=\begin{pmatrix}c_{A\uparrow}^{\dagger}&c_{A\downarrow}^{\dagger}&c_{B\uparrow}^{\dagger}&c_{B\downarrow}^{\dagger}\end{pmatrix}h_{MF}\begin{pmatrix}c_{A\uparrow}\\
c_{A\downarrow}\\
c_{B\uparrow}\\
c_{B\downarrow}\end{pmatrix}$$
(8)
with
$$h_{MF}=\begin{pmatrix}\Lambda(k)&J_{K}M_{x}/2&\epsilon(k)&\\
J_{K}M_{x}/2&-\Lambda(k)&&\epsilon(k)\\
\epsilon^{*}(k)&&-\Lambda(k)&-J_{K}M_{x}/2\\
&\epsilon^{*}(k)&-J_{K}M_{x}/2&\Lambda(k)\end{pmatrix}$$
(9)
for the AFM${}_{xy}$ state and
$$h_{MF}=\begin{pmatrix}\Lambda(k)+J_{K}M_{z}/2&&\epsilon(k)&\\
&-\Lambda(k)-J_{K}M_{z}/2&&\epsilon(k)\\
\epsilon^{*}(k)&&-\Lambda(k)-J_{K}M_{z}/2&\\
&\epsilon^{*}(k)&&\Lambda(k)+J_{K}M_{z}/2\end{pmatrix}$$
(10)
for the AFM${}_{z}$ state. Here $\Lambda(k)=2\lambda_{so}\left(\sin(k\cdot a_{1})-\sin(k\cdot a_{2})-\sin(k\cdot(a_{1}-a_{2}))\right)$, $\epsilon(k)=t(1+e^{-ik\cdot a_{1}}+e^{-ik\cdot a_{2}})$, $\epsilon^{*}(k)$ is the complex conjugate of $\epsilon(k)$, and $a_{1}=(\sqrt{3}/2,{1}/{2})$, $a_{2}=(\sqrt{3}/2,-{1}/{2})$ are the primitive vectors.
For both states the eigenvalues are doubly degenerate:
$$\displaystyle E^{c}_{\pm,xy}(k)$$
$$\displaystyle=$$
$$\displaystyle\pm\sqrt{\Lambda(k)^{2}+(J_{K}M_{x}/2)^{2}+|\epsilon(k)|^{2}}$$
(11)
$$\displaystyle E^{c}_{\pm,z}(k)$$
$$\displaystyle=$$
$$\displaystyle\pm\sqrt{(\Lambda(k)+J_{K}M_{z}/2)^{2}+|\epsilon(k)|^{2}}$$
(12)
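The doubly degenerate spectra quoted above can be checked numerically. A minimal sketch (the sample values of $\Lambda(k)$, $\epsilon(k)$, $J_{K}$, $M_{x}$, $M_{z}$ below are arbitrary, not taken from the paper) diagonalizes the $4\times 4$ matrices of Eqs. (9) and (10) and compares with Eqs. (11)-(12):

```python
import numpy as np

# Sample inputs: Lam, eps stand for Lambda(k), epsilon(k) at some fixed k.
Lam, eps, JK, Mx, Mz = 0.3, 0.7 + 0.2j, 0.5, 0.4, 0.4

# Eq. (9): AFM_xy mean-field Hamiltonian (M_x mixes the spin sectors).
h_xy = np.array([[Lam,           JK * Mx / 2,  eps,          0          ],
                 [JK * Mx / 2,  -Lam,          0,            eps        ],
                 [np.conj(eps),  0,           -Lam,         -JK * Mx / 2],
                 [0,             np.conj(eps), -JK * Mx / 2, Lam        ]])

# Eq. (10): AFM_z Hamiltonian (spin sectors stay decoupled; Lambda is shifted).
h_z = np.diag([Lam + JK * Mz / 2, -Lam - JK * Mz / 2,
               -Lam - JK * Mz / 2, Lam + JK * Mz / 2]).astype(complex)
h_z[0, 2] = h_z[1, 3] = eps
h_z[2, 0] = h_z[3, 1] = np.conj(eps)

E_xy = np.sqrt(Lam**2 + (JK * Mx / 2)**2 + abs(eps)**2)  # Eq. (11)
E_z = np.sqrt((Lam + JK * Mz / 2)**2 + abs(eps)**2)      # Eq. (12)
print(np.sort(np.linalg.eigvalsh(h_xy)))  # [-E_xy, -E_xy, +E_xy, +E_xy]
print(np.sort(np.linalg.eigvalsh(h_z)))   # [-E_z, -E_z, +E_z, +E_z]
```

The double degeneracy follows because each Hamiltonian squares to a multiple of the identity: all four of its anticommuting terms square to positive constants.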
The eigenenergies of the spinon band can be obtained in a similar way:
$$\displaystyle E^{f}_{\pm,xy}(k)$$
$$\displaystyle=$$
$$\displaystyle\pm\frac{1}{2}(3J_{1}M_{x}+J_{K}m_{x}),$$
(13)
$$\displaystyle E^{f}_{\pm,z}(k)$$
$$\displaystyle=$$
$$\displaystyle\pm\frac{1}{2}(3J_{1}M_{z}+J_{K}m_{z}).$$
(14)
The expression of total energy for either state is then
$$\displaystyle E_{tot}$$
$$\displaystyle=$$
$$\displaystyle 2\frac{1}{N_{k}}\sum_{k}E^{c}_{-}(k)+2\frac{1}{N_{k}}\sum_{k}E^{f}_{-}(k)$$
(15)
$$\displaystyle+$$
$$\displaystyle 3J_{1}|\bm{M}|^{2}+2J_{K}(\bm{M}\cdot\bm{m}).$$
The first line of the above expression comes from filling the bands up to the Fermi energy (fixed to zero here). The second line is the constant term in the mean-field decomposition. The factor of $2$ in the $k$ summation accounts for the double degeneracy of the energies. $N_{k}$ refers to the number of $k$ points in the first Brillouin zone.
By comparing the expressions for $E_{-}^{c}(k)$ in Eqs. (11) and (12), we find that the effect of adding a small $M_{x}$ is to increase the size of the gap at both of the two (inequivalent) Dirac points, thereby pushing the states further away from the Fermi energy. In contrast, adding a small $M_{z}$ enlarges the gap at one Dirac point but reduces it at the other. Therefore, the AFM${}_{xy}$ state is more favorable than the AFM${}_{z}$ state in lowering the energy of the conduction electrons, $\sum_{k}E_{-}^{c}(k)$.
On the other hand, from Eqs. (13)-(15), we see that the overall effect of adding a magnetization $\bm{m}$ of the conduction band is to increase the total energy $E_{tot}$ (the main energy increase comes from the $2J_{K}(\bm{M}\cdot\bm{m})$ term). Because $|m_{z}|<|m_{x}|$ in the self-consistent solution, as shown in Fig. 6, the energy increase of the AFM${}_{z}$ state is smaller than that of the AFM${}_{xy}$ state.
With increasing $J_{K}$ the above two effects from the magnetic orders compete, resulting in different magnetic ground states as shown in Fig. 4. This analysis is further supported by our self-consistent mean-field calculation. In Fig. 7 we plot the energy difference between these two states $\Delta E=E_{xy}-E_{z}$ as a function of $J_{K}$ at several $\lambda_{so}$ values. In the absence of SOC, the model has the spin SU(2) symmetry, and the AFM${}_{z}$ and AFM${}_{xy}$ states are degenerate with $\Delta E=0$. For finite $\lambda_{so}$, at small $J_{K}$ values, the energy gain from the $\sum_{k}E_{-}^{c}(k)$ term dominates, $\Delta E>0$, and the ground state is an AFM${}_{z}$ state. With increasing $J_{K}$, the contribution from the $2J_{K}(\bm{M}\cdot\bm{m})$ term is more important. $\Delta E$ crosses zero to be negative, and the AFM${}_{xy}$ state is eventually energetically favorable for large $J_{K}$.
Next we discuss the topological nature of the AFM${}_{z}$ and AFM${}_{xy}$ states. In the absence of the Kondo coupling $J_{K}$, the conduction electrons form a TI, which is protected by the TRS. The left- and right-moving edge states connecting the conduction and valence bands are coupled respectively to the up and down spin flavors (eigenstates of the $S^{z}$ operator) as a consequence of the SOC, and these two spin-polarized edge states do not mix.
Once the TRS is broken by the AFM order, topologically nontrivial edge states are, generically, no longer guaranteed. However, in the AFM${}_{z}$ state, the structure of the Hamiltonian for the conduction electrons is the same as that of a TI. This is clearly shown in Eq. (10): the effect of the magnetic order is only to shift $\Lambda(k)$ to $\Lambda(k)+J_{K}M_{z}/2$. In particular, the spin-up and spin-down sectors still do not mix with each other. Therefore, the two spin-polarized edge states are still well defined as in the TI, and the system is topologically nontrivial, though without the protection of the TRS.
Note that the above analysis assumes $J_{K}M_{z}\ll\Lambda(k)$, where the bulk gap between the conduction and valence bands is finite. For $J_{K}M_{z}>6\sqrt{3}\lambda_{so}/(1-y)$, the bulk gap closes at one of the inequivalent Dirac points and the system is driven to a topologically trivial phase via a topological phase transition Feng_PRL2012 .
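This gap closing can be illustrated numerically from Eq. (12). The sketch below (our own, with $y=0$ as in this section and $M_{z}=0.5$, so the critical coupling is $J_{K}=12\sqrt{3}\lambda_{so}$) scans the AFM${}_{z}$ bulk gap over a Brillouin-zone grid:

```python
import numpy as np

# Sketch: the AFM_z bulk gap of Eq. (12) closes at one Dirac point when
# J_K*M_z = 6*sqrt(3)*lambda_so (y = 0).
a1 = np.array([np.sqrt(3) / 2, 0.5])
a2 = np.array([np.sqrt(3) / 2, -0.5])

def gap_z(JK, Mz=0.5, lso=0.05, t=1.0, n=300):
    """Minimum of 2*sqrt((Lambda + JK*Mz/2)^2 + |eps|^2) over an n x n BZ grid."""
    # reciprocal vectors with a_i . b_j = 2*pi*delta_ij
    b1 = 2 * np.pi * np.array([1 / np.sqrt(3), 1.0])
    b2 = 2 * np.pi * np.array([1 / np.sqrt(3), -1.0])
    u = np.linspace(0, 1, n, endpoint=False)
    gmin = np.inf
    for x in u:
        k = np.outer(u, b2) + x * b1
        lam = 2 * lso * (np.sin(k @ a1) - np.sin(k @ a2) - np.sin(k @ (a1 - a2)))
        eps = t * (1 + np.exp(-1j * k @ a1) + np.exp(-1j * k @ a2))
        gmin = min(gmin, 2 * np.min(np.sqrt((lam + JK * Mz / 2) ** 2 + abs(eps) ** 2)))
    return gmin

JK_c = 12 * np.sqrt(3) * 0.05  # critical coupling for Mz = 0.5, lso = 0.05
print(gap_z(0.5 * JK_c))  # finite gap on the topological side
print(gap_z(JK_c))        # ~0: the gap closes at one Dirac point
```

The $n=300$ grid hits the Dirac points exactly (their fractional coordinates are $2/3$ and $1/3$), so the gap closing is resolved to machine precision.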
We also note that a similar AFM${}_{z}$ state arises in a Kondo lattice model without SOC but with
a Haldane coupling,
as analyzed in Ref. Zhong_PRB2012, .
For the AFM${}_{xy}$ state, we can examine the Hamiltonian for the conduction electrons in a similar way. As shown in Eq. (9), the transverse magnetic order $M_{x}$ mixes the spin-up and spin-down sectors. As a result, a finite hybridization gap opens between the two edge states making the system topologically trivial.
To support the above analysis, we calculate the energy spectrum of the conduction electrons in the AFM${}_{z}$ and AFM${}_{xy}$ states, as given in Eq. (9) and Eq. (10), on a finite slab of size $L_{x}\times L_{y}$, with $L_{x}=200$ and $L_{y}=40$. The boundary condition is chosen to be periodic along the $x$ direction and open, of zig-zag type, along the $y$ direction. In Fig. 8 we show the energy spectra for three different sets of parameters: (a) $\lambda_{so}=0.01$, $J_{K}=0.4$, $M_{z}=0.5$, (b) $\lambda_{so}=0.0$, $J_{K}=0.4$, $M_{z}=0.5$, and (c) $\lambda_{so}=0.0$, $J_{K}=0.8$, $M_{x}=0.5$, which respectively correspond to the topologically trivial AFM${}_{z}$ state, the topological AFM${}_{z}$ insulator, and the AFM${}_{xy}$ state. As clearly seen, gapless edge states exist only for parameter set (b), where the system is in the topological AFM${}_{z}$ state. Note that in this state the spectrum is asymmetric with respect to the Brillouin zone boundary ($k_{x}=\pi$), reflecting the explicit breaking of the TRS. Based on our analysis and numerical calculations, we construct a phase diagram, shown in Fig. 9, to illustrate the competition of these AFM states. As expected, the AFM${}_{z}$ state is stabilized for $J_{K}\lesssim 0.7$, and is topological for $J_{K}<12\sqrt{3}\lambda_{so}$ (above the red line).
VI Discussion and Conclusion
We have discussed the properties of various phases in the ground-state phase diagram of the spin-orbit-coupled Kondo lattice model on the honeycomb lattice at half filling. We have shown how the competition of SOC, Kondo interaction, and magnetic frustration stabilizes these phases. For example, in the AFM phase the moments can order either along the $z$ direction or within the $x$-$y$ plane.
In our model, the AFM
order is driven
by the RKKY interaction, and the competition of SOC and Kondo interaction dictates the direction of the ordered magnetic moments.
Throughout this work, we have discussed the phase diagram of the model at half filling. The phase diagram away from half filling is also an interesting problem. We expect that the competition between the AFM${}_{z}$ and AFM${}_{xy}$ states persists at generic fillings, but the topological features will not. Another interesting filling would be the dilute-carrier limit, where a DKSM exists and can be tuned to a TKI by increasing the SOC.Feng_2016
In this work we have considered a particular type of SOC, which is inherent in the band structure of the itinerant electrons. In real materials, there are also SOC terms that involve the magnetic ions. Such couplings will lead to models beyond the current work, and may further enrich the global phase diagram.
In conclusion, we have investigated the ground-state phase diagram of a spin-orbit-coupled Kondo-lattice model at half filling. The combination of SOC, Kondo, and RKKY interactions produces various quantum phases, including a Kondo insulator, a topological insulator with VBS spin correlations, and two AFM phases. Depending on the strength of the SOC, the magnetic moments in the AFM phase can order either perpendicular to or within the $x$-$y$ plane. We further show that the AFM${}_{z}$ state is topologically nontrivial for strong and moderate SOC, and can be tuned to a topologically trivial one via a topological phase transition by varying either the SOC or the Kondo coupling. Our results shed new light on the global phase diagram of heavy fermion materials.
Acknowledgements
We thank W. Ding, P. Goswami, S. E. Grefe, H.-H. Lai,
Y. Liu, S. Paschen, J. H. Pixley, T. Xiang, and G. M. Zhang for useful discussions.
Work at Renmin University was supported by the Ministry of Science and Technology of China, National Program on Key Research Project Grant number 2016YFA0300504, the National Science Foundation of China Grant number 11674392 and the Research Funds of Renmin University of China Grant number 18XNLG24. Work at Rice was in part supported by the NSF Grant DMR-1611392 and the Robert A. Welch Foundation Grant C-1411. Q.S. acknowledges the hospitality and support by a Ulam Scholarship from the Center for Nonlinear Studies at Los Alamos National Laboratory.
References
(1)
Special issue on Quantum Phase Transitions, J. Low Temp.
Phys. 161, 1
(2010).
(2)
S. Sachdev, Quantum Phase Transitions (Cambridge
University Press, Cambridge, 2011),
2nd ed.
(3)
Q. Si and F. Steglich, Science 329, 1161-1166 (2010).
(4)
P. Gegenwart, Q. Si, and F. Steglich, Nat. Phys. 4, 186-197 (2008).
(5)
H. von Löhneysen, A. Rosch, M. Vojta, and P. Wölfle, Rev. Mod.
Phys. 79, 1015 (2007).
(6)
H. Tsunetsugu, M. Sigrist, and K. Ueda, Rev. Mod. Phys. 69, 809 (1997).
(7)
A. C. Hewson, The Kondo Problem to Heavy Fermions,
Cambridge Univ. Press, Cambridge, England (1993).
(8)
S. Doniach, Physica B+C 91, 231-234 (1977).
(9)
J. Custers, et al., Nature 424, 524-527 (2003).
(10)
A. Schröder et al., Nature 407, 351 (2000).
(11)
S. Paschen, T. Luhmann, S. Wirth, P. Gegenwart, O. Trovarelli, C. Geibel, F. Steglich, P. Coleman, and Q. Si, Nature 432, 881 (2004).
(12)
Q. Si, S. Rabello, K. Ingersent, and J. L. Smith, Nature
413, 804 (2001).
(13)
P. Coleman, C. Pépin, Q. Si, and R. Ramazashvili, J. Phys.: Condens. Matt. 13, R723-R738 (2001).
(14)
J. A. Hertz, Phys. Rev. B 14, 1165-1184 (1976).
(15)
A. J. Millis, Phys. Rev. B 48, 7183-7196 (1993).
(16)
Q. Si, Physica B 378-380, 23-27 (2006).
(17)
Q. Si, Phys. Stat. Solid. B 247, 476-484 (2010).
(18)
J. H. Pixley, R. Yu, and Q. Si, Phys. Rev. Lett. 113, 176402 (2014).
(19)
Q. Si and S. Paschen, Phys. Stat. Solid. B 250, 425-438 (2013).
(20)
R. S. K. Mong, A. M. Essin, and J. E. Moore, Phys. Rev. B 81, 245209 (2010).
(21)
S. Nakatsuji et al., Phys. Rev. Lett. 96, 087204 (2006).
(22)
G. Chen, Phys. Rev. B 94, 205107 (2016).
(23)
M. Dzero, K. Sun, V. Galitski, and P. Coleman, Phys. Rev. Lett. 104, 106408 (2010).
(24)
A. Barla et al., Phys. Rev. Lett. 94, 166401 (2005).
(25)
S. Yamamoto and Q. Si, J. Low Temp. Phys. 161, 233-262 (2010).
(26)
H.-H. Lai, S. E. Grefe, S. Paschen, and Q. Si,
PNAS 115, 93 (2018).
(27)
S. Dzsaber et al.,
Phys. Rev. Lett. 118, 246601 (2017).
(28)
S. Dzsaber et al.
arXiv:1811.02819.
(29)
X.-Y. Feng, C.-H. Chung, J. Dai, and Q. Si, Phys. Rev. Lett. 111, 016402 (2013).
(30)
C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801
(2005).
(31)
X.-Y. Feng, H. Zhong, J. Dai, and Q. Si, arXiv:1605.02380 (2016).
(32)
F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988).
(33)
Y. Zhong, Y.-F. Wang, Y.-Q. Wang, and H.-G. Luo, Phys. Rev. B 87, 035128 (2013).
(34)
C. Lacroix and M. Cyrot,
Phys. Rev. B 20, 1969 (1979).
(35)
H. Li, H.-F. Song, and Y. Liu, EuroPhys. Lett. 116, 37005 (2016).
(36)
H. Li, Y. Liu, G.-M. Zhang, and L. Yu, J. Phys.: Condens. Matter 27, 425601 (2015).
(37)
R. Ganesh, J. van den Brink, and S. Nishimoto, Phys. Rev. Lett. 110, 127203 (2013).
(38)
B. K. Clark, D. A. Abanin, and S. L. Sondhi, Phys. Rev. Lett. 107, 087204 (2011).
(39)
M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045
(2010).
(40)
X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057-1110 (2011).
(41)
J. H. Pixley, R. Yu, S. Paschen, and Q. Si, Phys. Rev. B 98, 085110 (2018). |
Nuclear medium effects from hadronic atoms
E. Friedman
A. Gal
Racah Institute of Physics, The Hebrew University, Jerusalem 91904,
Israel
elifried@vms.huji.ac.il, avragal@vms.huji.ac.il
Abstract
The state of the art in the study of
$\pi^{-}$, $K^{-}$ and $\Sigma^{-}$ atoms, along with the in-medium nuclear
interactions deduced for these hadrons, is reviewed. A special emphasis
is placed on recent developments in $\bar{K}$–nuclear physics, where
a strongly attractive density dependent $K^{-}$–nuclear potential of order
150–200 MeV in nuclear matter emerges by fitting $K^{-}$–atom data.
This has interesting repercussions on $\bar{K}$ quasibound nuclear
states, on the composition of strange hadronic matter and on $\bar{K}$
condensation in self bound hadronic systems.
1 Introduction
Hadronic atoms have played an important role in elucidating in-medium
properties of hadron-nucleon ($hN$) interactions near threshold
[1, 2].
Hadronic atom data consist primarily of strong
interaction level shifts, widths and yields, derived from $h^{-}$-atom X-ray
transitions. These data are analyzed in terms of optical potentials
$V_{\rm opt}^{h}=t_{hN}(\rho)\rho$ which are functionals of the nuclear
density $\rho(r)$, capable of handling large data sets across the periodic
table in order to identify characteristic entities that may provide a link
between experiment and microscopic approaches. Here, $t_{hN}(\rho)$ is
a density dependent in-medium $hN$ $t$ matrix at threshold, satisfying the
low-density limit $t_{hN}(\rho)\to t_{hN}^{\rm free}$, with the free-space
$t$ matrix $t_{hN}^{\rm free}$, as $\rho\to 0$. A schematic summary of
lessons gained by analyzing the available hadronic atom data in terms of
optical potentials is given in Table 1, where the number
of data points included in these analyses is shown in parentheses in the
last column. A more exhaustive discussion of $\pi^{-}$, $\Sigma^{-}$ and $K^{-}$
atoms follows below.
2 Partial restoration of chiral symmetry from pionic atoms
Density independent optical potential global fits to pionic atom data across
the periodic table reveal an anomalous $s$-wave repulsion [1, 2],
a major component of which is due to a too repulsive isovector $\pi N$
amplitude $b_{1}$ with respect to the free-space value $b_{1}^{\rm free}$.
This is demonstrated on the l.h.s. of Fig. 1 which shows results
of global fits to pionic atom data as a function of a parameter $\gamma$,
related to the difference between neutron and proton rms radii:
$$r_{n}-r_{p}=\gamma\frac{N-Z}{A}+\delta.$$
(1)
Applying a finite-range folding with rms radius 0.9 fm to the $p$-wave part of $V_{\rm opt}^{\pi}$, the resulting $\chi^{2}$ minimum for a ‘skin’ neutron distribution is obtained at $\gamma=1.1\pm 0.1$ fm. (For a recent discussion of the role of neutron distributions in hadronic atoms, see Ref. [3].)
The r.h.s. of the figure shows results of global fits with empirical energy
dependence imposed on the $s$-wave amplitudes $b_{0}$ and $b_{1}$ [4],
and more importantly with a DD renormalization of $b_{1}$:
$$b_{1}(\rho)=\frac{b_{1}}{1-{{\sigma\rho}\over{m_{\pi}^{2}f_{\pi}^{2}}}}\,,$$
(2)
where $f_{\pi}=92.4$ MeV is the pion weak decay constant and $\sigma\approx 50$ MeV is the $\pi N$ $\sigma$ term. Eq. (2) was derived by
Weise [5] considering an in-medium extension of the
Tomozawa-Weinberg (TW) LO chiral limit for $b_{1}$ in terms of
$f_{\pi}$ [6] which is then related to the
quark condensate $<{\bar{q}}q>$:
$$b_{1}(\rho)=-\frac{\mu_{\pi N}}{8\pi f^{2}_{\pi}(\rho)}\,,\qquad\frac{f_{\pi}^{2}(\rho)}{f_{\pi}^{2}}=\frac{<\bar{q}q>_{\rho}}{<\bar{q}q>_{0}}\simeq 1-\frac{\sigma\rho}{m_{\pi}^{2}f_{\pi}^{2}}\,.$$
(3)
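For orientation, the size of this renormalization at normal nuclear density follows directly from Eq. (2). The sketch below is our own estimate, using the $f_{\pi}$ and $\sigma$ values quoted in the text plus standard values of $m_{\pi}$ and $\hbar c$ (the latter two are assumptions, not given here):

```python
# Quark-condensate reduction factor of Eqs. (2)-(3) at rho0 = 0.16 fm^-3,
# converting the density to MeV^3 with hbar*c.
hbarc = 197.327          # MeV fm
m_pi = 139.57            # MeV, charged-pion mass (assumed value)
f_pi = 92.4              # MeV, pion weak decay constant (as in the text)
sigma = 50.0             # MeV, piN sigma term (as in the text)
rho0 = 0.16 * hbarc**3   # MeV^3

ratio = 1 - sigma * rho0 / (m_pi**2 * f_pi**2)  # f_pi^2(rho0)/f_pi^2
print(ratio)      # ~0.63, i.e. roughly a 35-40% condensate reduction at rho0
print(1 / ratio)  # corresponding enhancement factor of |b1| in Eq. (2)
```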
The figure makes it evident that the magnitude of $b_{1}$ on the r.h.s.,
following the DD renormalization, is systematically smaller than that on the
l.h.s., and at the $\chi^{2}$ minimum it agrees perfectly with $b_{1}^{\rm free}$.
A similar conclusion was reached in Refs. [7, 8] from
measurements of $1s$ ‘deeply bound’ pionic atoms of Sn isotopes. The advantage
of using $1s$ ‘deeply bound’ levels is that the $p$-wave $\pi N$ interaction
plays there a secondary role, but this merit is more than offset by the
considerably increased errors associated with smaller data sets, as shown
in Table 2. The uncertainty listed in the fourth column makes
it clear that the ‘deeply bound’ atoms alone do not give conclusive evidence
for the need to renormalize $b_{1}$.
In fact, Suzuki et al. [8] considered ‘deeply bound’ $1s$
levels in three Sn isotopes together with ‘normal’ $1s$ levels in ${}^{16}$O,
${}^{20}$Ne and ${}^{28}$Si, 12 data points in total yielding
$b_{1}=-0.1149\pm 0.0074~{}m_{\pi}^{-1}$. However, this small uncertainty excludes
the uncertainty from the $p$-wave $\pi N$ potential which was held fixed in
their analysis. A more realistic uncertainty for this type of deduction is
given in column 3 of the table.
The renormalization of $b_{1}$ derived from pionic atoms is consistent with
that shown recently to be also required in low energy $\pi$-nucleus
scattering, as demonstrated in Fig. 2 for 21.5 MeV $\pi^{\pm}$
scattered off isotopes of Si, Ca, Ni and Zr at PSI [12].
3 $\Sigma$ Nuclear repulsion from $\Sigma^{-}$ atoms
A vast body of $(K^{-},\pi^{\pm})$ spectra indicate a repulsive and moderately
absorptive $\Sigma$ nuclear potential $V^{\Sigma}$, with a substantial isospin
dependence [13, 14]. These data, including recent $(\pi^{-},K^{+})$
spectra [15] and related DWIA analyses [16], provide
credible evidence that $\Sigma$ hyperons generally do not bind in nuclei.
A repulsive component of a DD $\Sigma$ nuclear potential was already deduced
in the mid 1990s from $\Sigma^{-}$ atom data [17, 18], as shown in
Fig. 3. In fact, $V_{\rm R}^{\Sigma}$ is attractive at low
densities outside the nucleus, as enforced by the observed ‘attractive’
$\Sigma^{-}$ atomic level shifts, changing into repulsion on approach of the
nuclear radius. The precise magnitude and shape of $V_{\rm R}^{\Sigma}$
within the nucleus, however, are model dependent as demonstrated by the
difference between potentials DD and F (defined in the Appendix). This
repulsion bears interesting consequences for the balance of strangeness in
the inner crust of neutron stars, primarily by delaying to higher densities,
or even aborting the appearance of $\Sigma^{-}$ hyperons, as shown in
Fig. 4.
The $G$-matrices constructed from Nijmegen soft-core potential models
have progressed throughout the years to produce $\Sigma$ repulsion in
symmetric nuclear matter, as demonstrated in Table 3 using
the parametrization
$$V_{R}^{\Sigma}=V_{0}^{\Sigma}+\frac{1}{A}\,V_{1}^{\Sigma}\,{\bf T}_{A}\cdot{\bf t}_{\Sigma}\,.$$
(4)
In the latest Nijmegen ESC08 model [20], this repulsion is dominated by repulsion in the $T=3/2$, ${}^{3}S_{1}$-${}^{3}D_{1}$ $\Sigma N$ channel, where
a strong short distance Pauli exclusion repulsion for quarks arises in SU(6)
quark-model RGM [22] and in chiral EFT [23]
calculations, as seen on the r.h.s. of Fig. 5. These model
calculations also lead to $\Sigma$ nuclear repulsion, shown in momentum space
on the l.h.s. of the figure. A strong repulsion appears also in a recent SU(3)
chiral perturbation calculation [25] which yields
$V_{0}^{\Sigma}\approx 60$ MeV. Phenomenologically $V_{0}^{\Sigma}>0$
and $V_{1}^{\Sigma}>0$, as listed in the table, and the resulting
$\Sigma$-nuclear potential $V_{R}^{\Sigma}$ is repulsive. (In the case of ${}^{4}_{\Sigma}$He, the only known quasibound $\Sigma$ hypernucleus [26, 27], the isovector term provides substantial attraction, owing to the small value of $A$, towards binding the $T=1/2$ hypernuclear configuration, while the isoscalar repulsion reduces the quasibound level width [28].)
4 $\bar{K}$-nucleus potentials from $K^{-}$ atoms
The gross features of low-energy $\bar{K}N$ physics are encapsulated
in the leading-order Tomozawa-Weinberg (TW) vector term of the chiral
effective Lagrangian [6]. The Born approximation for the
$\bar{K}$-nuclear potential $V_{\rm TW}^{\bar{K}}$ due to this TW
interaction term yields a sizable attraction:
$$V_{\rm TW}^{\bar{K}}=-\frac{3}{8f_{\pi}^{2}}\,\rho\sim-55\,\frac{\rho}{\rho_{0}}\quad({\rm MeV})$$
(5)
for $\rho_{0}=0.16$ fm${}^{-3}$. Iterating the TW term plus the less
significant NLO terms, within an in-medium coupled-channel approach
constrained by the $\bar{K}N-\pi\Sigma-\pi\Lambda$ data near the
$\bar{K}N$ threshold, roughly doubles this $\bar{K}$-nucleus attraction
[29].
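The $\sim-55$ MeV depth in Eq. (5) is easy to verify numerically; the sketch below assumes $f_{\pi}\approx 92.4$ MeV for the pion decay constant and uses $\hbar c$ to convert $\rho/f_{\pi}^{2}$ to MeV.

```python
# Numerical check of the Tomozawa-Weinberg Born-approximation depth, Eq. (5):
# V_TW = -3 rho / (8 f_pi^2), converted to MeV via (hbar c)^3.
HBARC = 197.327          # MeV fm
F_PI = 92.4              # MeV; assumed value of the pion decay constant
RHO0 = 0.16              # fm^-3, nuclear matter density

def v_tw(rho):
    """TW K-bar-nucleus potential in MeV for density rho in fm^-3."""
    return -3.0 * rho * HBARC**3 / (8.0 * F_PI**2)

print(round(v_tw(RHO0), 1))   # about -54 MeV, consistent with Eq. (5)
```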
A major uncertainty in these chirally based studies arises from fitting
the $\Lambda(1405)$ resonance by the imaginary part of the $(\pi\Sigma)_{I=0}$
amplitude calculated within the same coupled channel chiral scheme.
Yet, irrespective of this uncertainty, the $\Lambda(1405)$ which may be
viewed as a $K^{-}p$ quasibound state quickly dissolves in the nuclear medium
at low density, so that the repulsive free-space scattering length $a_{K^{-}p}$,
as a function of $\rho$, becomes attractive well below $\rho_{0}$.
Adding the weakly density dependent $I=1$ attractive scattering length
$a_{K^{-}n}$, the resulting in-medium $\bar{K}N$ isoscalar scattering length
$b_{0}(\rho)=\frac{1}{2}\,(a_{K^{-}p}(\rho)+a_{K^{-}n}(\rho))$ translates into
a strongly attractive $V^{\bar{K}}$ [30, 31]:
$$V_{R}^{\bar{K}}(\rho)\sim-\frac{2\pi}{\mu_{KN}}\,{\rm Re}\,b_{0}(\rho_{0})\,\rho_{0}\,\frac{\rho}{\rho_{0}}\approx-110\,\frac{\rho}{\rho_{0}}~~~({\rm MeV})\,.$$
(6)
Shallower potentials, $V_{R}^{\bar{K}}(\rho_{0})\sim-(40-60)$ MeV,
were obtained by imposing a Watson-like self-consistency
requirement [30, 32]. It turns out, however, that stronger
attraction, $V_{R}^{\bar{K}}(\rho_{0})\sim-(80-90)$ MeV, arises in similar
chiral approaches [33] when imposing the same requirement while
considering the energy dependence of the in-medium $\bar{K}N$ scattering
amplitude below threshold [34].
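Equation (6) can be checked the same way. The value ${\rm Re}\,b_{0}(\rho_{0})\approx 0.9$ fm used below is an illustrative assumption chosen to reproduce the quoted $\approx-110$ MeV depth, not a fitted quantity.

```python
# Check of Eq. (6): V = -(2 pi / mu_KN) Re b0 * rho, converted to MeV via (hbar c)^2.
import math

HBARC = 197.327                      # MeV fm
M_K, M_N = 493.7, 938.9              # MeV; kaon and nucleon masses
MU_KN = M_K * M_N / (M_K + M_N)      # K-bar N reduced mass
RHO0 = 0.16                          # fm^-3

def v_kbar(re_b0, rho):
    """t-rho type K-bar-nucleus potential in MeV; re_b0 in fm, rho in fm^-3."""
    return -(2.0 * math.pi * HBARC**2 / MU_KN) * re_b0 * rho

# Re b0 ~ 0.9 fm is an assumed illustrative in-medium value, not a fitted one.
print(round(v_kbar(0.9, RHO0)))      # roughly -110 MeV
```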
Comprehensive fits to the strong-interaction shifts and widths of $K^{-}$-atom
levels, begun in the mid 1990s [35], have yielded deeply attractive
and strongly absorptive density-dependent (DD) optical potentials with
nuclear-matter depth $-V_{R}^{\bar{K}}(\rho_{0})\sim(150-200)$ MeV at threshold [35].
The l.h.s. of Fig. 6 illustrates for ${}^{58}$Ni the real part
of $\bar{K}$-nucleus potentials obtained from a global fit to the data in
several models and, in parentheses, the corresponding values of $\chi^{2}$
for 65 $K^{-}$-atom data points. A model-independent Fourier-Bessel (FB)
fit [36] is also shown, within an error band. Just three terms in the
FB series, added to a $t\rho$ potential, suffice to achieve a $\chi^{2}$ as
low as 84 and to make the potential extremely deep, in agreement with the
density-dependent best-fit potentials DD and F. In particular, potential F
provides by far the best fit ever reported for any global $K^{-}$-atom data
fit [37], matching the lowest $\chi^{2}$ value reached by the FB method.
Shown on the r.h.s. of Fig. 6 are overlaps of the $4f$ atomic
radial wavefunction squared with the matter density $\rho_{m}$ in ${}^{58}$Ni
for two of the models exhibited on the l.h.s. of the figure.
The $4f$ atomic orbit is the last circular $K^{-}$
atomic orbit from which the $K^{-}$ meson undergoes nuclear absorption.
The figure demonstrates that, whereas this overlap for the shallower $t\rho$
potential peaks at nuclear density of order $10\%$ of $\rho_{0}$, it peaks at
about $60\%$ of $\rho_{0}$ for the deeper DD potential and has a secondary peak
well inside the nucleus. The double-peak structure indicates the existence of
a $K^{-}$ strong-interaction $\ell=3$ quasibound state for the DD potential.
It is clear that whereas within the $t\rho$ potential there is no sensitivity
to the interior of the nucleus, the opposite holds for the density dependent F
potential which accesses regions of full nuclear density. This owes partly
to the smaller imaginary part of F.
Given the repercussions of deeply attractive potentials on the equation
of state of dense matter, it is important to explore the stability of
these best-fit solutions to variations in the data selection and the
fitting procedure. The most obvious question to ask is whether the
resulting best-fit potentials depend strongly on the size, composition
and accuracy of the data set studied. Regarding size and composition,
following an earlier discussion [38] it has been observed
recently [39] that the DD deep potentials and the DI
relatively shallow potentials, as well as the superiority of DD to
DI in terms of quality of fit, persist upon decreasing the size of
the data set. This is demonstrated in Table 4 upon
reducing the 65 data point global set down to 15 data points from five
targets spread over the entire periodic table (C, Si, Ni, Sn, Pb).
Similar results hold for any four out of these five targets.
It makes sense, then, to repeat some of the $30{-}40$ year-old $K^{-}$-atom
measurements, making use of modern techniques, in order to acquire
a minimum size canonical set of data with reduced statistical errors
and with common systematics. For the specific set proposed in
Ref. [39], the (lower level) widths are directly measurable,
yet not so large as to make it difficult to observe the feeding
X-ray transition above the background. Similarly, the relative yields
of the upper to lower level transitions are of the order of $10\%$ and
higher. Fitting to such a data set with improved accuracy could resolve
the issue of deep vs. shallow potentials and determine how deep is ‘deep’.
Fairly new and independent evidence in favor of extremely deep
$\bar{K}$-nucleus potentials is provided by $(K^{-},n)$ and $(K^{-},p)$
spectra taken at KEK on ${}^{12}$C [40] and very recently
also on ${}^{16}$O [41] at $p_{K^{-}}=1$ GeV/c. The ${}^{12}$C
spectra are shown on the l.h.s. of Fig. 7, where the solid lines
represent calculations (outlined in Ref. [43]) using potential
depths in the range $160-190$ MeV. The dashed lines correspond to using
relatively shallow potentials of depth about 60 MeV which may be considered
excluded by these data. However, Magas et al. [44] have
recently expressed concerns that protons from reactions other than the
direct $(K^{-},p)$ reaction could explain
part of the bound-state region of the measured spectrum without invoking
a very deep $\bar{K}$-nuclear potential. A sufficiently deep potential
would allow quasibound states bound by over 100 MeV, for which the major
$\bar{K}N\to\pi\Sigma$ decay channel is blocked, resulting in relatively
narrow $\bar{K}$-nuclear states. Of course, a fairly sizable extrapolation
is involved in this case using an energy-independent potential determined
largely near threshold. Furthermore, the best-fit $V_{I}^{\bar{K}}$ imaginary
depths of $40-50$ MeV imply that $\bar{K}$-nuclear quasibound states are broad,
as studied in Refs. [37, 47].
A robust consequence of the sizable $\bar{K}$-nucleus attraction is that
$K^{-}$ condensation, when hyperon degrees of freedom are ignored, could
occur in neutron star matter at about 3 times nuclear matter density,
as shown on the r.h.s. of Fig. 7. Comparing it with
Fig. 4 for neutron stars, where strangeness instead
materializes through hyperons, one may ask whether $\bar{K}$ mesons
also condense in the presence of hyperons. This question was posed
within RMF calculations of neutron star matter long ago and answered
negatively [45, 46], but only recently was it posed
for strange hadronic matter, in Ref. [48], by calculating
multi-$\bar{K}$ nuclear configurations. Fig. 8 demonstrates
a remarkable saturation of $K^{-}$ separation energies $B_{K^{-}}$ calculated
in multi-$K^{-}$ nuclei, independently of the applied RMF model as shown on
the l.h.s. for three different nuclear RMF schemes. The r.h.s. of the
figure demonstrates that this saturation persists already in the most
straightforward $\sigma+\omega$ model, primarily owing to the repulsion
induced by the vector $\omega$ field between like $\bar{K}$ mesons.
The additional vector fields $\rho$ and $\phi$ only add repulsion,
thus strengthening the saturation. The effect of $V_{I}^{\bar{K}}$
is noticeable only below $B_{\bar{K}}\approx 100$ MeV, as seen by the
departure of the lowest green line with respect to the lowest red line.
The saturation values of $B_{K^{-}}$ do not allow conversion of hyperons
to $\bar{K}$ mesons through the strong decays $\Lambda\to p+K^{-}$ or
$\Xi^{-}\to\Lambda+K^{-}$ in multi-strange hypernuclei, which therefore remain
the lowest-energy configuration for multi-strange systems [49]. This
provides a powerful argument against $\bar{K}$ condensation in the laboratory,
under strong-interaction equilibrium conditions [48]. It does not
apply to kaon condensation in neutron stars, where equilibrium configurations
are determined by weak-interaction conditions. This work has been recently
generalized to multi-$K^{-}$ hypernuclei [50].
Appendix: density dependent optical potentials
Here we specify the functional form of two density dependent optical
potentials used in studies of hadronic atoms. For a recent application
to $K^{-}$ atoms, see Table 1 of Ref. [37].
•
The DD form is based on modifying the effective scattering length $b_{0}$
(e.g. Eq. (6)):
$$V^{h}(r)\sim-\frac{2\pi}{\mu_{hN}}\,b_{0}\,\rho(r)~~\Rightarrow~~b_{0}\rightarrow b_{0}+B_{0}\left\{\frac{\rho(r)}{\rho_{0}}\right\}^{\alpha}\,,~~\alpha>0\,,$$
(7)
where $\rho_{0}=0.16~{\rm fm}^{-3}$ is a central nuclear density.
It is possible then to respect the ‘low density limit’ by keeping $b_{0}$ fixed,
$b_{0}=b_{0}^{\rm free}$, while varying the parameters $B_{0}$ and $\alpha$.
•
The F form is based on modifying $b_{0}$ as follows:
$$b_{0}~\rightarrow~B_{0}\,F(r)+b_{0}\,[1-F(r)]~.$$
(8)
The density-like function $F(r)$ is defined as
$$F(r)=\frac{1}{e^{x}+1}\,,\qquad x=\frac{r-R_{x}}{a_{x}}\,.$$
(9)
Clearly, $F(r)\rightarrow 1$ for $r\ll R_{x}$, which defines an internal region,
and similarly $[1-F(r)]\rightarrow 1$ for $r\gg R_{x}$, which defines an external
region. Thus $R_{x}$ forms an approximate border between internal and external
regions, and if $R_{x}$ is close to the nuclear surface then the two
regions do correspond to the high-density and low-density regions of nuclei,
respectively. In global fits across the periodic table, $R_{x}$ is parametrized
as $R_{x}=R_{x0}A^{1/3}+\delta_{x}$, and the parameters $B_{0}$, $R_{x0}$ and
$\delta_{x}$ are varied in the least-squares fit, while gridding on values
of $a_{x}$ around $0.5$ fm. The parameter $b_{0}$ may be held fixed at its free
$hN$ value, but the results often depend very little on its precise value.
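For concreteness, the DD and F modifications of $b_{0}$ in Eqs. (7)-(9) can be sketched as follows; all parameter values are illustrative placeholders, not the best-fit values of Ref. [37].

```python
# Sketch of the DD and F density-dependent modifications of the effective
# scattering length b0 (Eqs. (7)-(9)). All parameter values are placeholders.
import math

RHO0 = 0.16  # fm^-3

def b_dd(rho, b0, B0, alpha):
    """DD form, Eq. (7): b0 -> b0 + B0 * (rho/rho0)^alpha."""
    return b0 + B0 * (rho / RHO0) ** alpha

def F(r, Rx, ax):
    """Density-like switching function of Eq. (9)."""
    return 1.0 / (math.exp((r - Rx) / ax) + 1.0)

def b_f(r, b0, B0, Rx, ax):
    """F form, Eq. (8): B0 in the internal region, b0 in the external one."""
    f = F(r, Rx, ax)
    return B0 * f + b0 * (1.0 - f)

# F(r) -> 1 well inside (r << Rx) and -> 0 well outside (r >> Rx):
print(round(F(0.0, Rx=4.0, ax=0.5), 3))    # close to 1
print(round(F(8.0, Rx=4.0, ax=0.5), 3))    # close to 0
```

At $r=R_{x}$ the F form gives exactly the average $(B_{0}+b_{0})/2$, which is the intended smooth interpolation between the internal and external regions.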
Acknowledgments
On the occasion of Gerry Brown’s 85th birthday Festschrift, we dedicate this
mini review to him who commissioned our two past reviews [1, 2] on
similar subjects. This work was supported in part by the SPHERE collaboration
within the HadronPhysics2 Project No. 227431 of the EU initiative FP7.
References
[1]
Batty C J, Friedman E and Gal A 1997 Phys. Rept.
287 385
[2]
Friedman E and Gal A 2007 Phys. Rept. 452 89,
and references therein
[3]
Friedman E 2009 Hyp. Int. 193 33,
and references therein
[4]
Friedman E and Gal A 2004 Phys. Lett. B 578 85
[5]
Weise W 2001 Nucl. Phys. A 690 98c
[6]
Tomozawa Y 1966 Nuovo Cimento A 46 707,
Weinberg S 1966 Phys. Rev. Lett. 17 616
[7]
Kolomeitsev E E, Kaiser N and Weise W 2003
Phys. Rev. Lett. 90 092501
[8]
Suzuki K et al. 2004 Phys. Rev. Lett.
92 072302
[9]
Friedman E and Gal A 2003 Nucl. Phys. A 724 143
[10]
Marton J 2007 Nucl. Phys. A 790 328c
[11]
Geissel H et al. 2002 Phys. Rev. Lett.
88 122301
[12]
Friedman E et al. 2004 Phys. Rev. Lett.
93 122302, 2005 Phys. Rev. C 72 034609
[13]
Dover C B, Millener D J and Gal A 1989 Phys. Rept.
184 1, and references therein
[14]
Bart S et al. [BNL E887] 1999 Phys. Rev. Lett.
83 5238
[15]
Noumi H et al. [KEK E438] 2002
Phys. Rev. Lett. 89 072301, 2003 Phys. Rev. Lett.
90 049902(E),
Saha P K et al. 2004 Phys. Rev. C 70
044613
[16]
Kohno M, Fujiwara Y, Watanabe Y, Ogata K and Kawai M 2004
Prog. Theor. Phys. 112 895,
2006 Phys. Rev. C 74
064613, Harada T and Hirabayashi Y 2005 Nucl. Phys. A 759
143, 2006 767 206
[17]
Batty C J, Friedman E and Gal A 1994 Phys. Lett. B
335 273, Prog. Theor. Phys. Suppl. 117 227
[18]
Mareš J, Friedman E, Gal A and Jennings B K 1995
Nucl. Phys. A 594 311
[19]
Schaffner-Bielich J 2010 Nucl. Phys. A 835 279, and references
therein
[20]
Rijken Th A, Nagels M M and Yamamoto Y 2010
Nucl. Phys. A 835 160, and references therein
[21]
Dover C B, Gal A and Millener D J 1984 Phys. Lett. B
138 337
[22]
Fujiwara Y, Suzuki Y and Nakamoto C 2007
Prog. Part. Nucl. Phys. 58 439, and references therein
[23]
Polinder H, Haidenbauer J and Meißner U G 2006
Nucl. Phys. A 779 244
[24]
Kohno M 2010 Phys. Rev. C 81 014003
[25]
Kaiser N 2005 Phys. Rev. C 71 068201
[26]
Hayano R S et al. 1989 Phys. Lett.
B 231 355
[27]
Nagae T et al. [BNL E905] 1998 Phys. Rev. Lett.
80 1605
[28]
Harada T 1998 Phys. Rev. Lett. 81 5287
[29]
Borasoy B, Nißler R and Weise W 2005 Eur. Phys. J.
A 25 79
[30]
Cieplý A, Friedman E, Gal A and Mareš J 2001
Nucl. Phys. A 696 173
[31]
Weise W and Härtle R 2008 Nucl. Phys. A 804
173
[32]
Ramos A and Oset E 2000 Nucl. Phys. A 671
481
[33]
Cieplý A and Smejkal J 2010 Eur. Phys. J. A
43 191
[34]
Cieplý A, Friedman E, Gal A, Gazda D and Mareš J
2011 Phys. Lett. B 702 402, see also Cieplý A, Friedman E,
Gal A and Krejčiřík V 2011 Phys. Lett. B 698 226
[35]
Friedman E, Gal A and Batty C J 1993 Phys. Lett. B
308 6, 1994 Nucl. Phys. A 579 518
[36]
Barnea N and Friedman E 2007 Phys. Rev. C 75
022202(R)
[37]
Mareš J, Friedman E and Gal A 2006 Nucl. Phys. A
770 84
[38]
Friedman E, Gal A, Mareš J and Cieplý A 1999
Phys. Rev. C 60 024314
[39]
Friedman E 2011 Int. J. Mod. Phys. A 26 468 (Proc. Int. Conf.
on Meson Physics, Krakow 2010)
[40]
Kishimoto T et al. [KEK E548] 2007 Prog. Theor.
Phys. 118 181
[41]
Kishimoto T 2009 Nucl. Phys. A 827 321c
[42]
Glendenning N K and Schaffner-Bielich J 1999 Phys. Rev. C 60
025803
[43]
Yamagata J, Nagahiro H and Hirenzaki S 2006
Phys. Rev. C 74 014604
[44]
Magas V K, Yamagata-Sekihara J, Hirenzaki S, Oset E and
Ramos A 2010 Phys. Rev. C 81 024609
[45]
Knorren R, Prakash M and Ellis P J 1995 Phys. Rev.
C 52 3470
[46]
Schaffner J and Mishustin I N 1996 Phys. Rev.
C 53 1416
[47]
Gazda D, Friedman E, Gal A and Mareš J 2007
Phys. Rev. C 76 055204
[48]
Gazda D, Friedman E, Gal A and Mareš J 2008
Phys. Rev. C 77 045206
[49]
Schaffner-Bielich J and Gal A 2000 Phys. Rev. C
62 034311, and references therein
[50]
Gazda D, Friedman E, Gal A and Mareš J 2009
Phys. Rev. C 80 035205
Weakly Supervised Object Localization using Min-Max Entropy: an Interpretable Framework
Soufiane Belharbi1 , Jérôme Rony1, Jose Dolz1, Ismail Ben Ayed1,
Luke McCaffrey2, Eric Granger1
1Laboratoire d’imagerie, de vision et d’intelligence artificielle
Dept. of Systems Engineering, École de technologie supérieure
Montreal, Canada
2Rosalind and Morris Goodman Cancer Research Centre
Dept. of Oncology, McGill University
Montreal, Canada
{soufiane.belharbi.1,jerome.rony.1}@etsmtl.net
{jose.dolz,ismail.benayed,eric.granger}@etsmtl.ca
luke.mccaffrey@mcgill.ca
Abstract
Weakly supervised object localization (WSOL) models aim to locate objects of interest in an image after being trained only on data with coarse image-level labels.
Deep learning models for WSOL typically rely on convolutional attention maps with no constraints on the regions of interest, which allows these models to select any region and makes them vulnerable to false-positive regions and inconsistent predictions. This is a serious issue in application domains such as medical image analysis, where interpretability is central to the prediction process.
To improve localization reliability, we propose a deep learning framework for WSOL with pixel-level localization. Our framework is composed of two sequential sub-networks: a localizer that localizes regions of interest, followed by a classifier that classifies these regions. Within its end-to-end training, we incorporate the prior knowledge that, in a class-agnostic setup, an image is likely to contain both relevant regions --i.e., the object of interest-- and irrelevant regions --i.e., noise and background. Based on the conditional entropy measured at the classifier level, the localizer is driven to spot relevant regions, identified by low conditional entropy, and irrelevant regions, identified by high conditional entropy. Our framework is able to recover large and even complete discriminative regions in an image using a recursive erasing algorithm that we incorporate within backpropagation during training. Moreover, the framework intrinsically handles multiple instances.
Experimental results on public datasets with medical images (GlaS colon cancer) and natural images (Caltech-UCSD Birds-200-2011) show that, compared to state-of-the-art WSOL methods, the proposed approach can provide significant improvements in terms of image-level classification and pixel-level localization. Our framework is robust to overfitting when dealing with few training samples. The performance improvements are due in large part to our framework's effectiveness at disregarding irrelevant regions. A public, reproducible PyTorch implementation is provided at https://github.com/sbelharbi/wsol-min-max-entropy-interpretability.
1 Introduction
Object localization can be considered one of the most fundamental tasks in image understanding, as it provides crucial clues for challenging visual problems such as object detection and semantic segmentation. (Object localization consists in isolating an object of interest by providing the coordinates of its surrounding bounding box; in this work, it is also understood as a task providing a pixel-level segmentation of the object, which localizes it more precisely. To avoid confusion when presenting the literature, we specify the case being considered.) Deep learning methods, and particularly convolutional neural networks (CNNs), are driving recent progress in these tasks. Nevertheless, despite their remarkable performance, a downside of these methods is the large amount of labeled data required for training; such annotation is time consuming and prone to observer variability. To overcome this limitation, weakly supervised learning (WSL) has recently emerged as a surrogate for extensive annotation of training data zhou2017brief . WSL involves scenarios where training is performed with inexact or uncertain supervision. In the context of object localization or semantic segmentation, weak supervision typically comes in the form of image-level tags KERVADEC201988 ; kim2017two ; pathak2015constrained ; teh2016attention ; wei2017object , scribbles Lin2016 ; ncloss:cvpr18 , or bounding boxes Khoreva2017 .
In WSOL, current state-of-the-art methods for object localization and semantic segmentation rely heavily on classification activation maps produced by convolutional networks to localize regions of interest zhou2016learning , which can also serve as an interpretation of the model's decision Zhang2018VisualInterp . Much work has been done in the WSOL field to alleviate the need for pixel-level annotation. Bottom-up methods rely on the input signal to locate the object of interest; they include spatial pooling techniques over activation maps zhou2016learning ; oquab2015object ; sun2016pronet ; zhang2018adversarial ; durand2017wildcat , multi-instance learning ilse2018attention , and attend-and-erase based methods SinghL17 ; wei2017object ; kim2017two ; LiWPE018CVPR ; pathak2015constrained . While such methods provide pixel-level localization, other methods, named weakly supervised object detectors, predict a bounding box instead bilen2016weakly ; kantorov2016contextlocnet ; tang2017multiple ; wan2018min ; shen2018generative . Inspired by human visual attention, top-down methods, which rely on the input signal and a selective backward signal to determine the corresponding object, have also been proposed, including special feedback layers cao2015look , backpropagation error zhang2018top , and Grad-CAM selvaraju2017grad ; ChattopadhyaySH18wacv , which uses the gradient of the object class with respect to the activation maps.
Within a class-agnostic setup, an input image often contains the object of interest among other parts such as noise, background, and other irrelevant subjects. Most of the aforementioned methods do not consider this prior and feed the entire image to the model. When the object of interest has a common shape/texture/color across images, ignoring this prior may still allow the model to localize the most discriminative part of the object easily oquab2015object ; this is the case for natural images, for instance. However, when the object can appear with different and random shapes/structures, or may have a texture/color relatively similar to the irrelevant parts, the model may easily confuse the object with the irrelevant parts. This is mainly because the network is free to select any area of the image as a region of interest as long as the selected region reduces the classification loss. Such free selection can lead to many false-positive regions and inconsistent localization. This issue can also be understood from the point of view of feature selection and sparsity tibshirani2015statistical : instead of selecting relevant features, the model is required to select a set of pixels --i.e., raw features-- representing the object of interest. Since the only constraint during this selection is to minimize the classification loss, and without other priors or pixel-level supervision, the optimization may converge to a model that selects any random subset of pixels as long as the loss is minimized. This does not guarantee that the selected pixels represent an object, nor the correct object, nor even make sense to us. (wan2018min argue that there is an inconsistency between the classification loss and the task of WSOL, and that the optimization typically reaches sub-optimal solutions with considerable randomness in them.)
From an optimization perspective, it does not matter which set of pixels is selected (with respect to interpretability); what matters is obtaining a minimal loss. In practice, in deep WSOL, this often results in localizing the smallest --i.e., sparsest-- common discriminative region of the object, such as a dog's face for the object 'dog' kim2017two ; SinghL17 ; zhou2016learning . This makes sense, since localizing the dog's face can be statistically sufficient to discriminate the object 'dog' from other objects. Once such a region is located, the classification loss may reach its minimum, and the model stops learning.
False-positive regions can be problematic in critical domains such as medical applications, where interpretability plays a central role in trusting and understanding an algorithm's prediction. To address this important issue, and motivated by the importance of using prior knowledge in learning to alleviate overfitting when training with few samples mitchell1980need ; krupka2007incorporating ; yu2007incorporating ; sbelharbiarxivsep2017 , we propose to use the aforementioned prior --i.e., an image is likely to contain both relevant and irrelevant regions-- in order to favor models that behave accordingly. To this end, we constrain the model to learn to localize both relevant and irrelevant image regions simultaneously, in an end-to-end manner, within a weakly supervised scenario where only image-level labels are used for training. We model the relevant --i.e., discriminative-- regions as the complement of the irrelevant --i.e., non-discriminative-- regions (Fig. 1). Our model is composed of two sub-models:
(1) a localizer that aims at localizing regions of interest by predicting a latent mask,
and (2) a classifier that classifies the visible content of the input image through the latent mask.
The localizer is trained, by employing the conditional entropy coverentropy2006 , to simultaneously identify
(1) relevant regions where the classifier has high confidence with respect to the image label,
and (2) irrelevant regions where the classifier is unable to decide which image label to assign.
This modeling allows the discriminative regions to pop out and be used to assign the corresponding image label, while suppressing non-discriminative areas, leading to more reliable predictions. In order to localize complete discriminative regions, we extend our proposal by training the localizer to recursively erase discriminative parts during training. To this end, we propose a recursive erasing algorithm that we incorporate within backpropagation. At each recursion, within the backpropagation, the algorithm localizes the most discriminative region, stores it, then erases it from the input image. At the end of the final recursion, the model has gathered a large extent of the object of interest, which is then fed to the classifier.
Thus, our model is driven to localize complete relevant regions while discarding irrelevant regions, resulting in more reliable object localization. Moreover, since the discriminative parts are allowed to extend over different instances, the proposed model natively handles multiple instances.
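The entropy-based driving signal described above can be sketched in a few lines. The classifier posteriors below are stand-ins, and the loss weighting $\lambda$ is an assumption of this sketch; only the min-max conditional-entropy terms follow the text.

```python
# Minimal sketch of the min-max conditional-entropy objective described above.
# The "classifier" posteriors are stand-ins; the lambda weighting is assumed.
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy H(p) = -sum p log p of a probability vector."""
    return float(-np.sum(p * np.log(p + eps)))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in posteriors for the two streams (c = 3 classes):
p_plus = softmax(np.array([4.0, 0.1, 0.2]))    # confident on X+ -> low entropy
p_minus = softmax(np.array([0.3, 0.3, 0.3]))   # undecided on X- -> high entropy

# The localizer is trained to MINIMIZE H on X+ and MAXIMIZE H on X-,
# e.g. via a combined loss (the lambda weighting is an assumption):
lam = 1.0
loss = entropy(p_plus) - lam * entropy(p_minus)
print(entropy(p_plus) < entropy(p_minus))   # True: decidable vs. undecidable
```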
The main interest of predicting a mask --i.e., localization at the pixel level, with high precision-- instead of a coarse bounding-box localization is precisely this localization precision. In some applications, such as the medical domain, object localization may require high precision, e.g., for localizing cells, boundaries, and organs, which may have unstructured shapes and varying scales that a bounding box may severely misrepresent. In such cases, pixel-level localization, as in our proposal, can be more useful.
The main contribution of this paper is a new deep learning framework for weakly supervised object localization at the pixel level. Our framework is composed of two sequential sub-networks, where the first localizes regions of interest and the second classifies them. Based on conditional entropy, the end-to-end training of this framework incorporates the prior knowledge that, in a class-agnostic setup, an image is likely to contain relevant regions (the object of interest) and irrelevant regions (noise, background). Given the conditional entropy measured at the classifier level, the localizer is driven to localize relevant regions (with low conditional entropy) and irrelevant regions (with high conditional entropy). Such localization is achieved with the main goal of providing more interpretable and reliable regions of interest.
This paper also contributes a recursive erasing algorithm that is incorporated within backpropagation, along with a practical implementation in order to obtain complete discriminative regions.
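The recursive erasing idea can be sketched on a 2D score map: at each recursion, the most discriminative location is stored in the accumulated mask and then suppressed before the next pass. The patch size and recursion count below are assumptions of this sketch, not the paper's exact algorithm.

```python
# Sketch of the recursive erasing idea: repeatedly take the most discriminative
# location of a score map, accumulate it into the mask, then erase it.
# The neighbourhood size and recursion count are assumptions of this sketch.
import numpy as np

def recursive_erase(score, n_steps=3, radius=1):
    """Accumulate the top locations of `score` over n_steps erasing passes."""
    score = score.copy()
    mask = np.zeros_like(score, dtype=bool)
    for _ in range(n_steps):
        r, z = np.unravel_index(np.argmax(score), score.shape)
        r0, r1 = max(r - radius, 0), r + radius + 1
        z0, z1 = max(z - radius, 0), z + radius + 1
        mask[r0:r1, z0:z1] = True        # store the discriminative patch
        score[r0:r1, z0:z1] = -np.inf    # erase it before the next recursion
    return mask

rng = np.random.default_rng(0)
m = recursive_erase(rng.random((8, 8)), n_steps=3, radius=1)
print(int(m.sum()))   # up to 3 * 3x3 = 27 cells; fewer if patches overlap or clip
```

Each pass forces the next recursion away from the already-covered region, so the accumulated mask grows toward a larger extent of the object, which is the behaviour described in the contribution above.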
Finally, we conduct an extensive series of experiments on two public image datasets (medical and natural scenes), where the results show the effectiveness of the proposed approach in terms of pixel-level localization, while maintaining competitive accuracy for image-level classification.
2 Background on WSOL
In this section, we briefly review state-of-the-art WSOL methods that aim at localizing objects of interest using only image-level labels as supervision.
Fully convolutional networks with spatial pooling have been shown to be effective at localizing discriminative regions zhou2016learning ; oquab2015object ; sun2016pronet ; zhang2018adversarial ; durand2017wildcat . Multi-instance learning based methods have been used within an attention framework to localize regions of interest ilse2018attention . Since neural networks often provide only small, maximally discriminative regions of the object of interest kim2017two ; SinghL17 ; zhou2016learning , SinghL17 propose to randomly hide large patches in training images in order to force the network to seek other discriminative regions and thereby recover a larger part of the object of interest. wei2017object use the attention map of a trained network to erase the most discriminative part of the original image. kim2017two use a two-phase learning stage, combining the attention maps of two networks to obtain a complete region of the object. LiWPE018CVPR propose a two-stage approach in which a first network classifies the image and provides an attention map of its most discriminative parts; this map is used to erase the corresponding parts of the input image, which is then fed to a second network to make sure that no discriminative parts are left.
Weakly supervised object detection methods have emerged as an approach for localizing regions of interest using bounding boxes instead of pixel-level masks. Such approaches rely on region proposals such as edge boxes zitnick2014edge and selective search van2011segmentation ; uijlings2013selective . In teh2016attention , the content of each proposed region is passed through an attention module, then a scoring module, to obtain an average image. bilen2016weakly propose an approach to address multi-class object localization, and many improvements of this work have since been proposed kantorov2016contextlocnet ; tang2017multiple . Other approaches rely on multi-stage training, where in the first stage a network is trained to localize and is then refined in later stages for object detection sun2016pronet ; diba2017weakly ; ge2018multi . In order to reduce the variance of box localization, wan2018min propose to minimize an entropy defined over the positions of the boxes. shen2018generative propose to use generative adversarial networks to generate the proposals in order to speed up inference, since most region-proposal techniques are time consuming.
Inspired by human visual attention, top-down methods have been proposed. In Simonyan14a ; DB15a ; zeiler2014ECCV , backpropagation error is used to visualize saliency maps over the image for the predicted class. In cao2015look , an attention map is built to identify the class-relevant regions using a feedback layer. zhang2018top propose Excitation Backprop, which passes top-down signals downwards through the network hierarchy within a probabilistic framework. Grad-CAM selvaraju2017grad generalizes CAM zhou2016learning using the derivative of the class scores with respect to each location on the feature maps, and has been further generalized in ChattopadhyaySH18wacv . In practice, top-down methods are considered visual explanatory tools, and they can be demanding in terms of computation and memory usage, even during inference.
While the aforementioned approaches have shown great success, mostly with natural images, they still lack a mechanism for modeling what is relevant and irrelevant within an image, which is crucial to determining the reliability of the regions of interest. Erase-based methods SinghL17 ; wei2017object ; kim2017two ; LiWPE018CVPR ; pathak2015constrained follow such a concept, where the non-discriminative parts are suppressed through constraints, allowing only the discriminative ones to pop out. Explicitly modeling negative evidence within the model has been shown to be effective in WSOL PariziVZF14 ; Azizpour2015SpotlightTN ; durand2016weldon ; durand2017wildcat . Among the cited literature, SinghL17 ; wei2017object ; kim2017two ; LiWPE018CVPR combined with wan2018min is probably the closest work to our proposal. Our proposal can also be seen as a supervised dropout srivastava14a : while dropout applied over the input image zeroes out pixels randomly, our proposal seeks to zero out irrelevant pixels and keep only the discriminative ones that support the image label. In that sense, our proposal mimics a discriminative gate that inhibits irrelevant and noisy regions while allowing only informative and discriminative regions to pass through.
3 The min-max entropy framework for WSOL
3.1 Notations and definitions
Let us consider a set of training samples ${\mathbb{D}=\{(\bm{X}_{i},y_{i})\}_{i=1}^{n}}$, where ${\bm{X}_{i}}$ is an input image with depth $d$, height $h$, and width $w$, a realization of the discrete random variable ${\mathbf{X}}$ with support set ${\mathcal{X}}$; ${y_{i}}$ is the image-level label (i.e., the image class), a realization of the discrete random variable ${\mathbf{y}}$ with support set ${\mathcal{Y}=\{1,\cdots,c\}}$. We define a decidable region of an image (in this context, the notion of region indicates one pixel) as any informative part of the image that allows predicting the image label. An undecidable region is any noisy, uninformative, or irrelevant part of the image that provides no indication of, or support for, the image class. To model such definitions, we consider a binary mask ${\bm{M}^{+}\in\{0,1\}^{h\times w}}$, where a location $(r,z)$ with value $1$ indicates a decidable region; otherwise, it is an undecidable region.
We model the decidability of a given location $(r,z)$ with a binary random variable ${\mathbf{M}}$. Its realization is ${\bm{m}}$, and its conditional probability ${p_{\mathbf{M}}}$ over the input image is defined as follows,
$$p_{\mathbf{M}}(\mathbf{m}=1|\bm{X},(r,z))=\begin{cases}1\quad\text{if }\bm{X}(r,z)\text{ is a decidable region}\;,\\0\quad\text{otherwise.}\end{cases}$$
(1)
We note ${\bm{M}^{-}=\bm{U}-\bm{M}^{+}\in\{0,1\}^{h\times w}}$ a binary mask indicating the undecidable regions, where ${\bm{U}=\{1\}^{h\times w}}$. We consider the undecidable region as the complement of the decidable one, so we can write ${\left\lVert\bm{M}^{+}\right\rVert_{0}+\left\lVert\bm{M}^{-}\right\rVert_{0}=h\times w}$, where ${\left\lVert\cdot\right\rVert_{0}}$ is the ${l_{0}}$ norm. Following these definitions, an input image ${\bm{X}}$ can be decomposed into two images as
${\bm{X}=\bm{X}\odot\bm{M}^{+}+\bm{X}\odot\bm{M}^{-}}$,
where ${(\cdot\odot\cdot)}$ is the Hadamard product. We note ${\bm{X}^{+}=\bm{X}\odot\bm{M}^{+}}$ and ${\bm{X}^{-}=\bm{X}\odot\bm{M}^{-}}$. ${\bm{X}^{+}}$ inherits the image-level label of ${\bm{X}}$, so we can write the pair ${(\bm{X}^{+}_{i},y_{i})}$ in the same way as ${(\bm{X}_{i},y_{i})}$. We denote by ${\bm{R}^{+}_{i}}$ and ${\bm{R}^{-}_{i}}$ the respective approximations of ${\bm{M}^{+}_{i}}$ and ${\bm{M}^{-}_{i}}$ (Sec.3.3). We are interested in modeling the true conditional distribution ${p(\mathbf{Y}|\mathbf{X})}$, where ${p(\mathbf{Y}=y_{i}|\mathbf{X}=\bm{X}_{i})=1}$, and ${\hat{p}(\mathbf{Y}|\mathbf{X})}$ is its estimate. Following the previous discussion, predicting the image label depends only on the decidable region, i.e., ${\bm{X}^{+}}$. Thus, knowing ${\bm{X}^{-}}$ does not add any knowledge to the prediction, since ${\bm{X}^{-}}$ does not contain any information about the image label. This leads to:
${p(\mathbf{Y}|\mathbf{X}=\bm{X})=p(\mathbf{Y}|\mathbf{X}=\bm{X}^{+})}$. As a consequence, the image label is conditionally independent of the undecidable region ${\bm{X}^{-}}$ given the decidable region ${\bm{X}^{+}}$ Kollergraphical2009 : ${p\models\mathbf{Y}\perp\mathbf{X}^{-}|\mathbf{X}^{+}}$, where ${\mathbf{X}^{+},\mathbf{X}^{-}}$ are the random variables modeling the decidable and undecidable regions, respectively. In the following, we provide more details on how to exploit this conditional independence property in order to estimate ${\bm{R}^{+}}$ and ${\bm{R}^{-}}$.
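The decomposition of Sec. 3.1 can be sketched in a few lines of numpy. This is a minimal illustration only; `decompose` is a hypothetical helper, not part of the paper's code, and in the full model the mask is predicted by a network rather than given.

```python
import numpy as np

def decompose(X, M_plus):
    """Split an image into decidable/undecidable parts (Sec. 3.1).

    X: (d, h, w) image; M_plus: (h, w) binary mask of decidable pixels.
    Returns (X_plus, X_minus) such that X == X_plus + X_minus.
    """
    M_minus = 1.0 - M_plus                 # complement: undecidable mask M-
    X_plus = X * M_plus[None, :, :]        # Hadamard product per channel
    X_minus = X * M_minus[None, :, :]
    return X_plus, X_minus
```

The check `X == X_plus + X_minus` mirrors the identity ${\bm{X}=\bm{X}\odot\bm{M}^{+}+\bm{X}\odot\bm{M}^{-}}$, and the two masks' $l_{0}$ norms sum to $h\times w$ by construction.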
3.2 Min-max entropy
We consider modeling the uncertainty of the model prediction over decidable, or undecidable regions using conditional entropy (CE). Let us consider the CE of ${\mathbf{Y}|\mathbf{X}=\bm{X}^{+}}$, denoted ${\mathbf{H}(\mathbf{Y}|\mathbf{X}=\bm{X}^{+})}$ and computed as coverentropy2006 ,
$$\mathbf{H}(\mathbf{Y}|\mathbf{X}=\bm{X}^{+})=-\sum_{y\in\mathcal{Y}}\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{+})\;\log\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{+})\;.$$
(2)
Since the model is required to be certain about its prediction over ${\bm{X}^{+}}$, we constrain the model to have low entropy over ${\bm{X}^{+}}$. Eq.2 reaches its minimum when the probability of one of the classes is certain, i.e., ${\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{+})=1}$ coverentropy2006 . Instead of directly minimizing Eq.2, and in order to ensure that the model predicts the correct image label, we cast a supervised learning problem using the cross-entropy between $p$ and ${\hat{p}}$, with the image-level label of ${\bm{X}}$ as supervision,
$$\mathbf{H}(p_{i},\hat{p}_{i})^{+}=-\sum_{y\in\mathcal{Y}}p(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{+}_{i})\;\log\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{+}_{i})=-\log\hat{p}(y_{i}|\bm{X}^{+}_{i})\;.$$
(3)
Eq.3 reaches its minimum under the same conditions as Eq.2, with the true image label as the prediction. We note that Eq.3 is the negative log-likelihood of the sample ${(\bm{X}_{i},y_{i})}$. In the case of ${\bm{X}^{-}}$, we consider the CE of ${\mathbf{Y}|\mathbf{X}=\bm{X}^{-}}$, denoted ${\mathbf{H}(\mathbf{Y}|\mathbf{X}=\bm{X}^{-})}$ and computed as,
$$\mathbf{H}(\mathbf{Y}|\mathbf{X}=\bm{X}^{-})=-\sum_{y\in\mathcal{Y}}\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{-})\log\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{-})\;.$$
(4)
Over irrelevant regions, the model is required to be unable to decide which image class to predict, since there is no evidence to support any class. This can be seen as high uncertainty in the model's decision. Therefore, we consider maximizing the entropy of Eq.4. The latter reaches its maximum at the uniform distribution coverentropy2006 . Thus, the inability of the model to decide is reached since each class is equiprobable. An alternative to maximizing Eq.4 is to use a supervised target distribution, since it is already known (i.e., the uniform distribution). To this end, we consider ${q}$ to be a uniform distribution,
$$q(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{-}_{i})=1/c\;,\quad\forall y\in\mathcal{Y}\;,$$
(5)
and cast a supervised learning setup using a cross-entropy between $q$ and ${\hat{p}}$ over ${\bm{X}^{-}}$,
$$\mathbf{H}(q_{i},\hat{p}_{i})^{-}=-\sum_{y\in\mathcal{Y}}q(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{-}_{i})\;\log\hat{p}(\mathbf{Y}=y|\mathbf{X}=\bm{X}^{-}_{i})=-\frac{1}{c}\sum_{y\in\mathcal{Y}}\log\hat{p}(y|\bm{X}^{-}_{i})\;.$$
(6)
The minimum of Eq.6 is reached when ${\hat{p}(\mathbf{Y}|\mathbf{X}=\bm{X}^{-}_{i})}$ is uniform; thus, Eq.4 reaches its maximum. Now, we can write the total training loss to be minimized as,
$$\min\mathop{\mathbb{E}}_{(\bm{X}_{i},y_{i})\in\mathbb{D}}\big[\mathbf{H}(p_{i},\hat{p}_{i})^{+}+\mathbf{H}(q_{i},\hat{p}_{i})^{-}\big]\;.$$
(7)
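For a single sample, Eqs. 3, 6, and 7 can be sketched as follows. This is a minimal numpy illustration with hypothetical helper names; in the actual model, the posteriors ${\hat{p}}$ are produced by the classifier ${\mathcal{C}}$, and the expectation in Eq.7 averages over the training set.

```python
import numpy as np

def loss_plus(p_hat_plus, y):
    """Eq. 3: cross-entropy over X+, i.e. the negative log-likelihood."""
    return -np.log(p_hat_plus[y])

def loss_minus(p_hat_minus):
    """Eq. 6: cross-entropy against the uniform target q(y) = 1/c."""
    return -np.log(p_hat_minus).mean()

def total_loss(p_hat_plus, p_hat_minus, y):
    """Eq. 7 for a single sample."""
    return loss_plus(p_hat_plus, y) + loss_minus(p_hat_minus)
```

Note that `loss_minus` is minimized exactly when ${\hat{p}(\mathbf{Y}|\mathbf{X}=\bm{X}^{-}_{i})}$ is uniform, at which point the entropy of Eq.4 is maximal, as stated above.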
The posterior probability ${\hat{p}}$ is modeled using a classifier ${\mathcal{C}(.\;,\bm{\theta}_{\mathcal{C}})}$ with a set of parameters ${\bm{\theta}_{\mathcal{C}}}$; it can operate either on ${\bm{X}^{+}_{i}}$ or ${\bm{X}^{-}_{i}}$. The binary mask ${\bm{R}^{+}_{i}}$ (and ${\bm{R}^{-}_{i}}$) is learned using another model ${\mathcal{M}(\bm{X}_{i};\;\bm{\theta}_{\mathcal{M}})}$ with a set of parameters ${\bm{\theta}_{\mathcal{M}}}$. In this work, both models are based on neural networks (fully convolutional networks LongSDcvpr15 in particular). The networks ${\mathcal{M}}$ and ${\mathcal{C}}$ can be seen as two parts of one single network ${\mathcal{G}}$ that localizes regions of interest using a binary mask, then classifies their content. Fig.2 illustrates the entire model.
Due to the depth of ${\mathcal{G}}$, ${\mathcal{M}}$ receives its supervised gradient based only on the error made by ${\mathcal{C}}$. In order to boost the supervised gradient at ${\mathcal{M}}$, and provide it with more hints to be able to select the most discriminative regions with respect to the image class, we propose to use a secondary classification task at the output of ${\mathcal{M}}$ to classify the input ${\bm{X}}$, following lee15apmlr . ${\mathcal{M}}$ computes the posterior probability ${\hat{p}^{s}(\bm{Y}|\bm{X})}$ which is another estimate of ${p(\bm{Y}|\bm{X})}$. To this end, ${\mathcal{M}}$ is trained to minimize the cross-entropy between $p$ and ${\hat{p}^{s}}$,
$$\mathbf{H}(p_{i},\hat{p}_{i}^{s})=-\log\hat{p}^{s}(\mathbf{Y}=y_{i}|\mathbf{X}=\bm{X}_{i})\;.$$
(8)
The total training loss to minimize is formulated as,
$$\min\mathop{\mathbb{E}}_{(\bm{X}_{i},y_{i})\in\mathbb{D}}\big[\mathbf{H}(p_{i},\hat{p}_{i})^{+}+\mathbf{H}(q_{i},\hat{p}_{i})^{-}+\mathbf{H}(p_{i},\hat{p}_{i}^{s})\big]\;.$$
(9)
3.3 Mask computation
The mask ${\bm{R}^{+}}$ is computed using the last feature maps of ${\mathcal{M}}$, which contain highly abstract discriminative activations. We denote such feature maps by a tensor ${\bm{A}_{i}\in\mathbb{R}^{c\times h^{\prime}\times w^{\prime}}}$ that contains a spatial map for each class. ${\bm{R}^{+}_{i}}$ is computed by aggregating the spatial activations of all the classes as,
$$\bm{T}_{i}=\sum_{k=1}^{c}\hat{p}^{s}(\mathbf{Y}=k|\mathbf{X}=\bm{X}_{i})*\bm{A}_{i}(k)\;,$$
(10)
where ${\bm{T}_{i}\in\mathbb{R}^{h^{\prime}\times w^{\prime}}}$ is the continuous downsampled version of ${\bm{R}^{+}_{i}}$, and ${\bm{A}_{i}(k)}$ is the feature map of the class ${k}$ of the input ${\bm{X}_{i}}$. At convergence, the posterior probability of the winning class is pushed toward $1$ while the rest is pushed down to ${0}$. This leaves only the feature map of the winning class.
${\bm{T}_{i}}$ is upscaled using interpolation555In most neural network libraries (Pytorch (pytorch.org), Chainer (chainer.org)), the upscaling operations based on interpolation/upsampling have a non-deterministic backward pass. This makes training unstable due to the non-deterministic gradient, and makes reproducibility impossible. To avoid such issues, we detach the upscaling operation from the training graph and consider it as input data for ${\mathcal{C}}$.
to ${\bm{T}{\uparrow}_{i}\in\mathbb{R}^{h\times w}}$, which has the same size as the input ${\bm{X}}$, then pseudo-thresholded using a sigmoid function to obtain a pseudo-binary ${\bm{R}^{+}_{i}}$,
$$p_{\mathbf{M}}(\mathbf{m}=1|\bm{X}_{i},(r,z))=1/(1+\exp(-\omega\times(\bm{T}{\uparrow}_{i}(r,z)-\sigma^{\prime})))\;,$$
(11)
where ${\omega}$ is a constant scalar that ensures that the sigmoid is approximately equal to $1$ when ${\bm{T}{\uparrow}_{i}(r,z)}$ is larger than ${\sigma^{\prime}}$, and approximately equal to $0$ otherwise.
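The aggregation of Eq.10 and the pseudo-thresholding of Eq.11 can be sketched as follows. This is a simplified numpy illustration with hypothetical names; the interpolation-based upscaling between the two steps (which the model detaches from the training graph) is omitted here.

```python
import numpy as np

def compute_mask(A, p_s, omega=8.0, sigma_p=0.5):
    """Sketch of Eqs. 10-11 (the interpolation upscaling step is omitted).

    A: (c, h', w') per-class feature maps A_i from the localizer M.
    p_s: (c,) auxiliary posterior p^s(Y|X) of the localizer.
    """
    # Eq. 10: aggregate the class maps, weighted by the class posteriors.
    T = np.tensordot(p_s, A, axes=1)                     # (h', w')
    # Eq. 11: pseudo-threshold with a sharp sigmoid to get a pseudo-binary R+.
    return 1.0 / (1.0 + np.exp(-omega * (T - sigma_p)))
```

At convergence, `p_s` is close to one-hot, so `T` reduces to the feature map of the winning class, as described after Eq.10.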
3.4 Object completeness using incremental recursive erasing and trust coefficients
Object classification methods tend to rely on small discriminative regions kim2017two ; SinghL17 ; zhou2016learning . Thus, ${\bm{R}^{-}}$ may still contain discriminative parts. Following SinghL17 ; kim2017two ; LiWPE018CVPR ; pathak2015constrained , and in particular wei2017object , we propose a learned, incremental, and recursive erasing approach that drives ${\mathcal{M}}$ to seek complete discriminative regions. However, unlike wei2017object , where such mining is done offline, we incorporate the erasing within the backpropagation using an efficient and practical implementation. This allows ${\mathcal{M}}$ to learn to seek discriminative parts; therefore, erasing during inference is unnecessary. Our approach consists in applying ${\mathcal{M}}$ recursively before applying ${\mathcal{C}}$, within the same forward pass. The aim of the recursion, with maximum depth $u$, is to mine more discriminative parts within the non-discriminative regions of the image masked by ${\bm{R}^{-}}$. We accumulate all discriminative parts in a temporary mask ${\bm{R}^{+,\star}}$. At each recursion step, we mine the most discriminative part that has been correctly classified by ${\mathcal{M}}$, and accumulate it in ${\bm{R}^{+,\star}}$. However, as $u$ increases, the image may run out of discriminative parts, and ${\mathcal{M}}$ is then forced, unintentionally, to consider non-discriminative parts as discriminative. To alleviate this risk, we introduce trust coefficients that control how much we trust a mined discriminative region at each step $t$ of the recursion, for each sample $i$, as follows,
$$\bm{R}^{+,\star}_{i}\coloneqq\max(\bm{R}^{+,\star}_{i},\Psi(t,i)\;\bm{R}^{+,t}_{i})\;,$$
(12)
where ${\Psi(t,i)\in\mathbb{R}^{+}}$ computes the trust of the current mask of the sample $i$ at the step $t$ as follows,
$$\forall t\geq 0,\quad\Psi(t,i)=\exp\big(\tfrac{-t}{\sigma}\big)\;\Gamma(t,i)\;,$$
(13)
where ${\exp(\frac{-t}{\sigma})}$ encodes the overall trust with respect to the current step of the recursion. Such trust is expected to decrease with the depth of the recursion bel16 . ${\sigma}$ controls the slope of the trust function. The second part of Eq.13 is computed with respect to each sample; it quantifies how much we trust the estimated mask for the current sample $i$,
$$\Gamma(t,i)=\begin{cases}\hat{p}^{s}(\mathbf{Y}=y_{i}|\mathbf{X}=\bm{X}_{i}\odot\bm{R}^{-,\star}_{i})&\text{if }\hat{y}_{i}=y_{i}\text{ and }\mathbf{H}(p_{i},\hat{p}^{s}_{i})_{t}\leq\mathbf{H}(p_{i},\hat{p}^{s}_{i})_{0}\;,\\0&\text{otherwise}\;.\end{cases}$$
(14)
In Eq.14, ${\mathbf{H}(p_{i},\hat{p}^{s}_{i})_{t}}$ is computed over ${(\bm{X}_{i}\odot\bm{R}^{-,\star}_{i})}$. Eq.14 ensures that at a step $t$, for a sample $i$, the current mask is trusted only if ${\mathcal{M}}$ correctly classifies the erased image and does not increase the loss. The first condition ensures that the accumulated discriminative regions belong to the same class, and more importantly, the true class. Moreover, it ensures that ${\mathcal{M}}$ does not change its class prediction through the erasing process. This introduces consistency between the regions mined across the steps and avoids mixing discriminative regions of different classes. The second condition ensures maintaining, at least, the same confidence in the predicted class as in the first forward pass without erasing (${t=0}$).
The trust given in this case is equal to the probability of the true class. The regions accumulator is initialized to zero, ${\bm{R}^{+,\star}_{i}=\{0\}^{h\times w}}$ at ${t=0}$, at each forward pass in ${\mathcal{G}}$. ${\bm{R}^{+,\star}_{i}}$ is not maintained across epochs; ${\mathcal{M}}$ starts over each time it processes the sample $i$. This prevents accumulating incorrect regions that may occur at the beginning of the training. In order to automate when to stop erasing, we consider a maximum recursion depth $u$. For a mini-batch, we keep erasing as long as we have not reached $u$ erasing steps and there is at least one sample with a non-zero trust coefficient (Eq.14). Once a sample is assigned a zero trust coefficient, it remains zero throughout the erasing (Eq.12)(Fig.4).
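The trust mechanism of Eqs. 12 to 14 can be sketched for one sample as follows. The helper names are hypothetical; the scalar inputs stand for quantities the model computes during the recursion (the auxiliary posterior on the erased image, the current prediction, and the loss of Eq. 8 at steps $t$ and $0$).

```python
import numpy as np

def trust(t, p_s_true, pred_correct, loss_t, loss_0, sigma=10.0):
    """Eqs. 13-14: trust Psi(t, i) of the mask mined at recursion step t.

    p_s_true: p^s of the true class over the erased image;
    pred_correct: whether the localizer still predicts y_i;
    loss_t / loss_0: Eq. 8 at step t and at the first forward (t = 0).
    """
    # Eq. 14: per-sample trust Gamma(t, i).
    gamma = p_s_true if (pred_correct and loss_t <= loss_0) else 0.0
    # Eq. 13: decay the trust with the recursion depth.
    return np.exp(-t / sigma) * gamma

def accumulate(R_star, R_t, psi):
    """Eq. 12: keep, per pixel, the most trusted mined evidence."""
    return np.maximum(R_star, psi * R_t)
```

A step that misclassifies the erased image, or that increases the loss relative to ${t=0}$, receives zero trust, so its mined region cannot enter the accumulator.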
A direct implementation of Eq.12 is not practical, since performing a recursive computation on a large model ${\mathcal{M}}$ requires memory that grows with the depth $u$. To avoid this issue, we propose a practical implementation using gradient accumulation at ${\mathcal{M}}$ through the loss of Eq.8; this implementation requires the same memory as the case without erasing (Alg.1). We provide more details in the supplementary material (Sec.A.1).
4 Results and analysis
Our experiments focus simultaneously on classification and object localization tasks. Thus, we consider datasets that provide both image-level and pixel-level labels, for evaluation on classification and object localization. In particular, the following two datasets were considered: GlaS in the medical domain, and CUB-200-2011 for natural scene images.
(1) GlaS dataset was provided in the 2015 Gland Segmentation in Colon Histology Images Challenge Contest666GlaS: warwick.ac.uk/fac/sci/dcs/research/tia/glascontest. sirinukunwattana2017gland .
The main task of the challenge is gland segmentation of microscopic images; however, image-level labels were provided as well. The dataset is composed of 165 images derived from 16 Hematoxylin and Eosin (H&E) histology sections of two grades (classes): benign and malignant. It is divided into 84 samples for training and 80 samples for testing. Images show high variation in terms of gland shape/size and overall H&E stain. In this dataset, the glands are the regions of interest that pathologists use to grade the image as benign or malignant.
(2) CUB-200-2011 dataset777CUB-200-2011: www.vision.caltech.edu/visipedia/CUB-200-2011.html WahCUB2002011 is a dataset of bird species with $11,788$ samples and $200$ species. For the sake of evaluation, and due to time limitations, we randomly selected 5 species and built a small dataset with $150$ samples for training and $111$ for testing, referred to in this work as CUB5. In this dataset, the objects of interest are the birds.
In both datasets, we take randomly $80\%$ of train samples for effective training, and $20\%$ for validation to perform early stopping. We provide the used splits and the deterministic code that generated them for both datasets.
In all the experiments, image-level labels are used during training/evaluation, while pixel-level labels are used exclusively during evaluation. The evaluation is conducted at two levels:
at the image level, where the classification error is reported, and at the pixel level, where we report the F1 score (Dice index) over the foreground (object of interest), referred to as F1${}^{+}$. When dealing with binary data, the F1 score is equivalent to the Dice index. We also report the F1 score over the background, referred to as F1${}^{-}$, in order to measure how well the model identifies irrelevant regions.
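The two pixel-level metrics can be sketched as follows. This is a minimal numpy illustration; `f1_scores` is a hypothetical helper, not the paper's evaluation code.

```python
import numpy as np

def f1_scores(pred, target):
    """F1 over foreground (F1+) and background (F1-) for binary masks."""
    def f1(a, b):
        inter = np.logical_and(a, b).sum()   # true positives
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom > 0 else 1.0
    return f1(pred, target), f1(1 - pred, 1 - target)
```

For instance, an all-ones prediction against a mask that has any background pixels yields F1${}^{-}=0$, which is why F1${}^{+}$ alone cannot expose such a degenerate baseline.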
We compare our method to different WSOL methods. The methods use a similar pre-trained backbone (resnet18 heZRS16 ) for feature extraction and differ mainly in the final pooling layer: CAM-Avg uses average pooling zhou2016learning , CAM-Max uses max pooling oquab2015object , CAM-LSE uses an approximation to the maximum sun2016pronet ; PinheiroC15cvpr , Wildcat uses the pooling in durand2017wildcat , and Deep MIL is the work of ilse2018attention with an adaptation to the multi-class setting. We use supervised segmentation with U-Net Ronneberger-unet-2015 as an upper bound of the performance for the pixel-level evaluation (Full sup.). As a basic baseline, we use a mask full of 1s of the same size as the image as a constant prediction of the objects of interest, to show that F1${}^{+}$ alone is not an efficient metric to evaluate pixel-level localization, particularly over the GlaS set (All-ones, see Tab.1). In our method, ${\mathcal{M}}$ and ${\mathcal{C}}$ share the same pre-trained backbone (resnet101 heZRS16 ) to avoid overfitting, while using durand2017wildcat as a pooling function. All methods are trained using stochastic gradient descent with momentum. In our approach, we used the same hyper-parameters over both datasets, while the other methods required adaptation to each dataset. We provide a reproducible code888https://github.com/sbelharbi/wsol-min-max-entropy-interpretability, the dataset splits, more experimental details, and visual results in the supplementary material (Sec.B).
A comparison of the results obtained by the different methods over both datasets is presented in Tab.1, with visual results illustrated in Fig.3. Tab.2 shows the impact of using our recursive erasing algorithm to mine discriminative regions. From Tab.2, we observe that using our recursive algorithm yields a large improvement in F1${}^{+}$ without degrading F1${}^{-}$. This means that the recursion allows the model to correctly localize larger portions of the object of interest without including false positive regions. In Tab.1, and compared to other WSOL methods, our method obtains a relatively similar F1${}^{+}$ score, while it obtains a large F1${}^{-}$ over GlaS, where it may be easy to obtain a high F1${}^{+}$ by predicting a mask full of 1s (Fig.3). However, a model needs to be very selective in order to obtain a high F1${}^{-}$ score, i.e., to localize the tissues (irrelevant regions), at which our model seems to excel. The CUB5 set seems to be more challenging due to the variable size (from small to big) of the birds, their view, the context/surrounding environment, and the few training samples. Our model outperforms all the WSOL methods in both F1${}^{+}$ and F1${}^{-}$ by a large margin, mainly due to its ability to discard non-discriminative regions, which leaves it only with the region of interest, in this case the bird. While our model shows improvements in localization, it is still far behind full supervision. In terms of classification, all methods obtain a low error over GlaS, which implies that it is an easy set for classification. However, and surprisingly, the other methods seem to overfit over CUB5, while our model shows robustness. The results obtained over both datasets demonstrate, compared to other WSOL methods, the effectiveness of our approach in terms of image classification and object localization, with more reliable object localization.
The visual results of our approach (Fig.3) show that the predicted regions of interest over GlaS agree with the methodology doctors use for colon cancer diagnosis, where the glands serve as a diagnostic tool. They also illustrate the ability of the model to deal with multiple instances when there are several glands within the image. Over CUB5, our model succeeds in spotting the bird's location in order to predict its category, as one would do in such a task. We notice that the head, chest, tail, or particular body spots are often the parts our model uses to decide the bird's species, which seems a reasonable strategy as well.
5 Conclusion
In this work, we have presented a novel approach for WSOL in which learning relevant and irrelevant regions is constrained within the model. Evaluated on two datasets, and compared to state-of-the-art WSOL methods, our approach showed its effectiveness in correctly localizing objects of interest with few false positive regions while maintaining a competitive classification error. This makes our approach more reliable in terms of interpretability. As future work, we consider extending our approach to handle multiple classes within the image. Different constraints can be applied over the predicted mask, such as texture properties, shape, or other region constraints. However, this requires the mask to be differentiable with respect to the model's parameters in order to train the network with such constraints. Predicting bounding boxes instead of heat maps is considered as well, since they can be more suitable in some applications where pixel-level accuracy is not required.
In Sec.B.3, we discuss a fundamental issue in erasing-based algorithms that we noticed when applying our approach over the CUB5 dataset. We arrived at the conclusion that such algorithms lack the ability to remember the locations of the already mined regions of interest, which can be problematic when there is only one instance in the image and only a small discriminative region. This can easily prevent recovering the complete discriminative region, since the remaining regions may not be discriminative enough to be spotted, as in the case of birds once the head has been erased. Assisting erasing algorithms with a memory-like mechanism, or with spatial information about the previously mined discriminative regions, may drive the network to seek discriminative regions around the previously spotted ones, since the parts of an object of interest are often closely located. Potentially, this may allow the model to spot a large portion of the object of interest in this case.
Acknowledgments
This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and the Canadian Institutes of Health Research.
References
[1]
H. Azizpour, M. Arefiyan, S. Naderi Parizi, and S. Carlsson.
Spotlight the negatives: A generalized discriminative latent model.
In BMVC 2015.
[2]
S. Belharbi, C. Chatelain, R. Hérault, and S. Adam.
Neural networks regularization through class-wise invariant
representation learning.
arXiv preprint arXiv:1709.01867, 2017.
[3]
S. Belharbi, R. Hérault, C. Chatelain, and S. Adam.
Deep multi-task learning with evolving weights.
In ESANN 2016.
[4]
H. Bilen and A. Vedaldi.
Weakly supervised deep detection networks.
In CVPR 2016.
[5]
C. Cao, X. Liu, Y. Yang, Y. Yu, J. Wang, Z. Wang, Y. Huang, L. Wang, C. Huang,
W. Xu, et al.
Look and think twice: Capturing top-down visual attention with
feedback convolutional neural networks.
In ICCV 2015.
[6]
A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian.
Grad-cam++: Generalized gradient-based visual explanations for deep
convolutional networks.
In WACV 2018.
[7]
T. M. Cover and J. A. Thomas.
Elements of Information Theory.
2006.
[8]
A. Diba, V. Sharma, A. M. Pazandeh, H. Pirsiavash, and L. Van Gool.
Weakly supervised cascaded convolutional networks.
In CVPR 2017.
[9]
T. Durand, T. Mordan, N. Thome, and M. Cord.
Wildcat: Weakly supervised learning of deep convnets for image
classification, pointwise localization and segmentation.
In CVPR 2017.
[10]
Thibaut Durand, Nicolas Thome, and Matthieu Cord.
Weldon: Weakly supervised learning of deep convolutional neural
networks.
In CVPR 2016.
[11]
W. Ge, S. Yang, and Y. Yu.
Multi-evidence filtering and fusion for multi-label classification,
object detection and semantic segmentation based on weakly supervised
learning.
In CVPR 2017.
[12]
G. Ghiasi, T.-Y. Lin, and Q. V. Le.
Dropblock: A regularization method for convolutional networks.
In NIPS 2018.
[13]
K. He, X. Zhang, S. Ren, and J. Sun.
Deep residual learning for image recognition.
In CVPR 2016.
[14]
M. Ilse, J. M. Tomczak, and M. Welling.
Attention-based deep multiple instance learning.
arXiv preprint arXiv:1802.04712, 2018.
[15]
V. Kantorov, M. Oquab, M. Cho, and I. Laptev.
Contextlocnet: Context-aware deep network models for weakly
supervised localization.
In ECCV 2016.
[16]
H. Kervadec, J. Dolz, M. Tang, E. Granger, Y. Boykov, and I. Ben Ayed.
Constrained-CNN losses for weakly supervised segmentation.
MedIA 2019.
[17]
A. Khoreva, R. Benenson, J.H. Hosang, M. Hein, and B. Schiele.
Simple does it: Weakly supervised instance and semantic segmentation.
In CVPR, 2017.
[18]
D. Kim, D. Cho, D. Yoo, and I. So Kweon.
Two-phase learning for weakly supervised object localization.
In ICCV 2017.
[19]
D. Koller and N. Friedman.
Probabilistic Graphical Models: Principles and Techniques -
Adaptive Computation and Machine Learning.
2009.
[20]
E. Krupka and N. Tishby.
Incorporating prior knowledge on features into learning.
In Artificial Intelligence and Statistics, 2007.
[21]
C. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu.
Deeply-Supervised Nets.
In ICAIS 2015.
[22]
K. Li, Z. Wu, K.-C. Peng, J. Ernst, and Y. Fu.
Tell me where to look: Guided attention inference network.
In CVPR 2018.
[23]
D. Lin, J. Dai, J. Jia, K. He, and J. Sun.
Scribblesup: Scribble-supervised convolutional networks for semantic
segmentation.
In CVPR, 2016.
[24]
J. Long, E. Shelhamer, and T. Darrell.
Fully convolutional networks for semantic segmentation.
In CVPR 2015.
[25]
T.M. Mitchell.
The need for biases in learning generalizations.
Department of Computer Science, Laboratory for Computer Science
Research, 1980.
[26]
M. Oquab, L. Bottou, I. Laptev, and J. Sivic.
Is object localization for free?-weakly-supervised learning with
convolutional neural networks.
In CVPR 2015.
[27]
S. Naderi Parizi, A. Vedaldi, A. Zisserman, and P. F. Felzenszwalb.
Automatic discovery and optimization of parts for image
classification.
In ICLR 2015.
[28]
D. Pathak, P. Krahenbuhl, and T. Darrell.
Constrained convolutional neural networks for weakly supervised
segmentation.
In ICCV 2015.
[29]
P. H. O. Pinheiro and R. Collobert.
From image-level to pixel-level labeling with convolutional networks.
In CVPR 2015.
[30]
O. Ronneberger, P. Fischer, and T. Brox.
U-net: Convolutional networks for biomedical image segmentation.
In MICCAI 2015.
[31]
R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, et al.
Grad-cam: Visual explanations from deep networks via gradient-based
localization.
In ICCV 2017.
[32]
Y. Shen, R. Ji, S. Zhang, W. Zuo, Y. Wang, and F. Huang.
Generative adversarial learning towards fast weakly supervised
detection.
In CVPR 2018.
[33]
K. Simonyan, A. Vedaldi, and A. Zisserman.
Deep inside convolutional networks: Visualising image classification
models and saliency maps.
In ICLRw 2014.
[34]
K. K. Singh and Y. J. Lee.
Hide-and-seek: Forcing a network to be meticulous for
weakly-supervised object and action localization.
In ICCV 2017.
[35]
K. Sirinukunwattana, J. P. Pluim, H. Chen, X. Qi, P.-A. Heng, Y. B. Guo, L. Y.
Wang, B. J. Matuszewski, E. Bruni, U. Sanchez, et al.
Gland segmentation in colon histology images: The glas challenge
contest.
MIA 2017.
[36]
J.T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller.
Striving for simplicity: The all convolutional net.
In ICLRw 2015.
[37]
N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov.
Dropout: A simple way to prevent neural networks from overfitting.
JMLR 2014.
[38]
C. Sun, M. Paluri, R. Collobert, R. Nevatia, and L. Bourdev.
Pronet: Learning to propose object-specific boxes for cascaded neural
networks.
In CVPR 2016.
[39]
M. Tang, A. Djelouah, F. Perazzi, Y. Boykov, and C. Schroers.
Normalized Cut Loss for Weakly-supervised CNN Segmentation.
In CVPR, 2018.
[40]
P. Tang, X. Wang, X. Bai, and W. Liu.
Multiple instance detection network with online instance classifier
refinement.
In CVPR 2017.
[41]
E. W. Teh, M. Rochan, and Y. Wang.
Attention networks for weakly supervised object localization.
In BMVC 2016.
[42]
R. Tibshirani, M. Wainwright, and T. Hastie.
Statistical learning with sparsity: the lasso and
generalizations.
Chapman and Hall/CRC, 2015.
[43]
J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders.
Selective search for object recognition.
IJCV 2013.
[44]
K. E. Van de Sande, J. R. Uijlings, T. Gevers, and A. W. Smeulders.
Segmentation as selective search for object recognition.
In ICCV 2011.
[45]
C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie.
The Caltech-UCSD Birds-200-2011 Dataset.
Technical report, California Institute of Technology, 2011.
[46]
F. Wan, P. Wei, J. Jiao, Z. Han, and Q. Ye.
Min-entropy latent model for weakly supervised object detection.
In CVPR 2018.
[47]
Y. Wei, J. Feng, X. Liang, M.-M. Cheng, Y. Zhao, and S. Yan.
Object region mining with adversarial erasing: A simple
classification to semantic segmentation approach.
In CVPR 2017.
[48]
T. Yu, T. Jan, S. Simoff, and J. Debenham.
Incorporating prior domain knowledge into inductive machine learning.
Unpublished doctoral dissertation Computer Sciences, 2007.
[49]
M. D. Zeiler and R. Fergus.
Visualizing and understanding convolutional networks.
In ECCV 2014.
[50]
J. Zhang, S. A. Bargal, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff.
Top-down neural attention by excitation backprop.
IJCV 2018.
[51]
Q.-s. Zhang and S.-c. Zhu.
Visual interpretability for deep learning: a survey.
FITEE 2018.
[52]
X. Zhang, Y. Wei, J. Feng, Y. Yang, and T. Huang.
Adversarial complementary learning for weakly supervised object
localization.
In CVPR 2018.
[53]
B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba.
Learning deep features for discriminative localization.
In CVPR 2016.
[54]
Z.-H. Zhou.
A brief introduction to weakly supervised learning.
NSR 2017.
[55]
C. L. Zitnick and P. Dollár.
Edge boxes: Locating object proposals from edges.
In ECCV 2014.
Appendix A The min-max entropy framework for WSOL
A.1 Object completeness using incremental recursive erasing and trust coefficients
In this section, we present an illustration of our proposed recursive erasing algorithm (Fig.4). Alg.1 illustrates our implementation using accumulated gradient through the backpropagation within the localizer ${\mathcal{M}}$. We note that such erasing is performed only during training.
Appendix B Results and analysis
In this section, we provide more details on our experiments and analysis, and discuss some of the drawbacks of our approach. We took many precautions to make the code reproducible for our model, up to Pytorch's terms of reproducibility. Please see the README.md file for the concerned section of the code999https://github.com/sbelharbi/wsol-min-max-entropy-interpretability. We checked reproducibility up to a precision of $10^{-16}$. All our experiments were conducted using the seed $0$. We ran all our experiments on one GPU with 12GB of memory101010Our code supports multi-GPU training and Batchnorm synchronization while supporting reproducibility., and an environment with 10GB of RAM. Finally, this section shows more visual results, analysis, training time, and drawbacks. A link to download all the predictions in high resolution over both test sets is provided.
B.1 Datasets
We provide in Fig.5 some samples from each dataset’s test set along with their mask that indicates the object of interest.
As we mentioned in Sec.4, due to time constraints, we consider a subset of the original CUB-200-2011 dataset, referred to as CUB5. To build it, we randomly select 5 classes from the original dataset. Then, we pick all the corresponding samples of each class in the provided train and test sets to build our train and test sets (CUB5). Then, we build the effective train set and the validation set by randomly taking $80\%$ and the remaining $20\%$ of the CUB5 train set, respectively. We provide the splits, and the code used to generate them. Our code generates the following classes:
1. 019.Gray_Catbird
2. 099.Ovenbird
3. 108.White_necked_Raven
4. 171.Myrtle_Warbler
5. 178.Swainson_Warbler
B.2 Experiments setup
The following is the configuration we used for our model:
Data
1. Patch size (hxw): $480\times 480$. (Patches are sampled for training; for evaluation, the entire input image is used.)
2. Patches are augmented using random rotation and horizontal/vertical flipping (for CUB5, only horizontal flipping is performed).
3. Channels are normalized using $0.5$ mean and $0.5$ standard deviation.
4. For GlaS: patches are jittered using brightness=$0.5$, contrast=$0.5$, saturation=$0.5$, hue=$0.05$.
Model
Pretrained ResNet-101 [13] as a backbone with the pooling score of [9], with our adaptation, using $5$ modalities per class. We use dropout [37] (with rate $0.75$ over GlaS and $0.85$ over CUB5) over the final map of the pooling function, right before computing the score. The high dropout rate is motivated by [34, 12]: it drops the most discriminative parts at the level of the most abstract feature representation. The dropout is not applied to the final mask, but only to the internal mask of the pooling function. As for the parameters of [9], we set $\alpha=0$, since most negative evidence is dropped, and use $kmax=kmin=0.09$, $u=0,u=4,\sigma=10,\sigma^{\prime}=0.5,\omega=8$. For evaluation, our predicted mask is binarized with a $0.5$ threshold to obtain a strictly binary mask; all the masks presented in this work follow this thresholding. Our F1${}^{+}$ and F1${}^{-}$ are computed over this binary mask.
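To make the evaluation protocol concrete, here is a minimal sketch (our illustration, not the authors' released code) of binarizing a predicted probability mask at $0.5$ and computing F1${}^{+}$ over the foreground and F1${}^{-}$ over the background:

```python
import numpy as np

def f1(pred, target):
    """Dice/F1 score between two boolean masks."""
    tp = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * tp / denom if denom else 1.0

def f1_plus_minus(prob_mask, true_mask, thresh=0.5):
    """F1+ (foreground) and F1- (background) of a probability
    mask binarized at `thresh` against a boolean ground truth."""
    pred = prob_mask >= thresh
    return f1(pred, true_mask), f1(~pred, ~true_mask)
```

Computing F1 on the complemented masks makes F1${}^{-}$ sensitive to false-positive regions that F1${}^{+}$ alone can hide when the object is small.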
Optimization
1. Stochastic gradient descent, with momentum $0.9$, with Nesterov.
2. Weight decay of $1e-5$ over the weights.
3. Learning rate of $0.001$ decayed by $0.1$ each $40$ epochs with minimum value of $1e-7$.
4. Maximum epochs of 400.
5. Batch size of $8$.
6. Early stopping over validation set using classification error as a stopping criterion.
Other WSOL methods use the following setup with respect to each dataset:
GlaS:
Data
1. Patch size (hxw): $416\times 416$.
2. Augment patch using random horizontal flip.
3. Random rotation of one of: $0,90,180,270$ (degrees).
4. Patches are jittered using brightness=$0.5$, contrast=$0.5$, saturation=$0.5$, hue=$0.05$.
Model
1. Pretrained resnet18 [13] as a backbone.
Optimization
1. Stochastic gradient descent, with momentum $0.9$, with Nesterov.
2. Weight decay of $1e-4$ over the weights.
3. 160 epochs
4. Learning rate of $0.01$ for the first $80$, and of $0.001$ for the last $80$ epochs.
5. Batch size of $32$.
6. Early stopping over validation set using classification error/loss as a stopping criterion.
CUB5:
Data
1. Patch size (hxw): $448\times 448$. (resized while maintaining the aspect ratio).
2. Augment patch using random horizontal flip.
3. Random rotation of one of: $0,90,180,270$ (degrees).
4. Random affine transformation with degrees $10$, shear $10$, scale $(0.3,1.5)$.
Model
Pretrained resnet18 [13] as a backbone.
Optimization
1. Stochastic gradient descent, with momentum $0.9$, with Nesterov.
2. Weight decay of $1e-4$ over the weights.
3. 90 epochs.
4. Learning rate of $0.01$, decayed by $0.1$ every $30$ epochs.
5. Batch size of $8$.
6. Early stopping over validation set using classification error/loss as a stopping criterion.
Running time:
Adding recursive computation to the backpropagation loop is expected to add extra computation time. Tab.3 shows the training time (of 1 run) of our model with and without recursion on identical computational resources. The observed extra computation time is mainly due to gradient accumulation (line 12 of Alg.1), which takes about the same amount of time as a parameter update (which is expensive to compute). The forward and backward passes are fast in practice, and take less time than the gradient update. We do not compare running times between the datasets, since they have different numbers/sizes of samples and different pre-processing, which is included in the reported time. Moreover, the size of the samples affects the total time spent evaluating the validation set during training.
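The interplay between recursion and gradient accumulation can be sketched as follows (a schematic illustration with our own naming, not the authors' Alg.1; the model is assumed to return classification scores together with a localization mask):

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, y, n_recursions=3, thresh=0.5):
    """One training step with recursive erasing: the loss of every
    recursion contributes gradients (accumulated by backward), and a
    single, comparatively expensive parameter update is applied at the
    end. Erasing is performed only during training."""
    optimizer.zero_grad()
    x_cur = x
    for _ in range(n_recursions):
        scores, mask = model(x_cur)            # forward: scores + localization mask
        F.cross_entropy(scores, y).backward()  # accumulate gradients (line 12 of Alg.1)
        with torch.no_grad():                  # hide the mined discriminative region
            x_cur = x_cur * (1.0 - (mask >= thresh).float())
    optimizer.step()                           # one parameter update per step
```

Accumulating with repeated `backward()` calls keeps the per-recursion cost down to a forward/backward pass, so the dominant overhead remains the single `optimizer.step()`.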
B.3 Results
In this section, we provide more visual results over the test set of each dataset. All the predictions with high resolution over the test set of both datasets can be downloaded from this Google drive link: https://drive.google.com/file/d/18K3BawR9Aqz6igK60H6IRGx-klkAwyJk/view?usp=sharing.
Over the GlaS dataset (Fig.LABEL:fig:fig1-exp-sup-mat-1, LABEL:fig:fig1-exp-sup-mat-2), the visual results clearly show how our model, with and without erasing, can handle multiple instances. Adding the erasing feature allows recovering more discriminative regions. The results over CUB5 (Fig.LABEL:fig:fig3-exp-sup-mat-4, LABEL:fig:fig3-exp-sup-mat-5, LABEL:fig:fig3-exp-sup-mat-6, LABEL:fig:fig3-exp-sup-mat-7, LABEL:fig:fig3-exp-sup-mat-8), while interesting, show a fundamental limitation of the erasing concept in the single-instance case. In the multi-instance case, if the model spots one instance and then erases it, it is likely to seek another instance, which is the expected behavior. However, in the single-instance case, where the discriminative parts are small, the first forward pass mainly spots such a small part and erases it; the leftover may then be insufficient for discrimination. For instance, in CUB5, the model often spots the head; once it is hidden, the model is unable to find other discriminative parts. A clear illustration of this issue is in Fig.LABEL:fig:fig3-exp-sup-mat-4, row 5: the model spots the head correctly but is unable to spot the body, although the body has a similar texture and is located right next to the found head. We believe that the main cause of this issue is that the erasing concept forgets where the discriminative parts are located. Erasing algorithms seem to be missing this feature, which could be helpful for localizing the entire object of interest by seeking around the found discriminative regions. In our erasing algorithm, once a region is erased, the model forgets about its location. Adding a memory-like mechanism, or constraints over the spatial distribution of the mined discriminative regions, may alleviate this issue.
It is interesting to notice the strategy used by our model to localize some types of birds. In the case of 099.Ovenbird, it relies on the texture of the chest (white dotted with black), while it localizes the white spot on the bird's neck in the case of 108.White_necked_Raven. One can notice as well that our model seems to be robust to small/occluded objects. In many cases, it was able to spot small birds in a difficult context where the bird is not salient.
Monodromy analysis of the computational power of the Ising topological quantum computer
Andre Ahlbrecht${}^{2,3}$, Lachezar S. Georgiev${}^{1,2}$ and Reinhard F. Werner${}^{2,3}$
${}^{1}$ Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 72 Tzarigradsko Chaussee, 1784 Sofia, Bulgaria
${}^{2}$ Institut für Mathematische Physik, Technische Universität Braunschweig, Mendelssohnstr. 3, 38106 Braunschweig, Germany
${}^{3}$ Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstr. 2, 30167 Hannover, Germany
Running head: L. S. Georgiev, Monodromy Analysis of the Ising TQC
Abstract
We show that all quantum gates which can be implemented by braiding of Ising anyons in the
Ising topological quantum computer preserve the $n$-qubit Pauli group. Analyzing the structure
of the Pauli group's normalizer, also known as the Clifford group, for $n\geq 3$ qubits, we prove
that the image of the braid group is a proper subgroup of the Clifford group, so that
not all Clifford gates can be implemented by braiding. We show explicitly the Clifford gates
which cannot be realized by braiding, estimating in this way the ultimate computational power
of the Ising topological quantum computer.
1 Introduction
Quantum computers are expected to be much more powerful than classical supercomputers
due to a combination of quantum phenomena such as coherent superposition, entanglement and
parallelism [1].
Topological Quantum Computers are a class of quantum computers in which information is encoded
in non-local topological quantum numbers, such as the anyon fusion channels, and quantum gates are
implemented by anyon braiding protecting in this way quantum information processing from noise
[2, 3].
In this paper we will analyze the properties of the monodromy subgroup of the braid group for the Ising
anyon TQC and will demonstrate that all quantum gates which can be executed by braiding are Clifford
gates [1, 4], which are important for fault-tolerant quantum computation,
while not all Clifford gates can be implemented by braiding Ising anyons.
1.1 What are anyons?
Anyons are particle-like collective excitations, which are supposed to exist in strongly correlated two-dimensional
electron systems, that may carry fractional electric charge (measured in shot-noise experiments with
Laughlin quantum Hall states) and obey exotic exchange statistics: due to the properties of the two-dimensional
rotation group $SO(2)$ the statistics under exchange of identical particles is governed by representations
of the braid group rather than the permutation group. The Abelian anyon many-body states belong to
one-dimensional representations of the braid group and acquire nontrivial phases when neighboring anyons
are exchanged. If the many-body states belong to a higher-dimensional representation of the braid group
the corresponding particles are called non-Abelian anyons, or plektons, and the results of particle exchanges
are not only phases but could also be more general unitary transformations [3, 5].
Anyons might be observable in fractional quantum Hall samples as well as in
high-temperature superconductors [5] and cold atoms in optical lattices (intersecting laser beams).
1.2 Fusion paths: labeling anyonic states of matter
The fact that the exchanges of non-Abelian anyons may generate non-trivial matrices acting on the many-body state
implies that in fact this state must belong to a degenerate multiplet of states with many anyons at fixed positions.
Therefore the anyon’s positions and quantum numbers are not sufficient for specifying a multi-anyon state,
i.e., some additional non-local information is necessary. It appears that in order to specify the state
of many non-Abelian anyons we need to fix the fusion channels of any two neighbors.
The reason is that the same multi-anyon configuration may correspond to different independent states (CFT blocks)
because of the possibility of multiple fusion channels.
Consider the process of fusing two anyons of type “$a$” and “$b$”, represented by some operators $\Psi_{a}$
and $\Psi_{b}$. The result is expressed by the following fusion rule
$$\Psi_{a}\times\Psi_{b}=\sum\limits_{c=1}^{g}N_{ab}^{c}\Psi_{c},$$
where $N_{ab}^{c}$ are the fusion coefficients. There are two classes of anyons:
•
Abelian: $\forall a,b\ \exists!\,c$ such that $N_{ab}^{c}\neq 0$;
•
non-Abelian: for some $a$ and $b$, $N_{ab}^{c}\neq 0$ for more than one $c$.
In TQC we are interested in information encoding with non-Abelian anyons:
because by definition there is more than one fusion channel, we can encode information in the
index of the fusion channel.
For example, for Ising anyons $\Psi_{I}(z)=\sigma(z)$, represented by the chiral spin field operator
of CFT dimension $\Delta=1/16$, which are characterized by the fusion rule
$$\sigma\times\sigma={\mathbb{I}}+\psi,$$
where $\psi$ is the Majorana fermion,
information is encoded as follows: if a pair of $\sigma$ fields is in the vacuum fusion channel we call
the state $|0\rangle$, while if it is in the Majorana channel we call the state $|1\rangle$
$$|0\rangle=(\sigma,\sigma)_{{\mathbb{I}}}\quad\longleftrightarrow\quad\sigma\times\sigma\to{\mathbb{I}},$$
$$|1\rangle=(\sigma,\sigma)_{\psi}\quad\longleftrightarrow\quad\sigma\times\sigma\to\psi$$
(the subscript of a pair denotes its fusion channel).
The important point here is that the fusion channel is independent of the details of the fusion
process: it is a topological quantity. It is also non-local, because the fusion channel is independent
of the anyon separation and is preserved even for large separations of the anyons.
Finally, it is robust and persistent: if we fuse two particles and then split them again, the fusion channel
does not change. Therefore, the message is that, in addition to the positions and all local quantum numbers
of the anyons, a multi-anyon state can be unambiguously specified
by the fusion path, i.e., the concatenation of the elementary fusion channels of each pair of neighbors
in an array of anyonic fields at fixed positions. This is conveniently presented in the form of Bratteli
diagrams (see [6] for more details).
1.3 Quantum gates: adiabatic transport of anyons
As mentioned above, we intend to execute quantum gates over our quantum register by
adiabatic exchange of anyons. The adiabatic approximation requires the system
to have a gapped Hamiltonian, i.e., a non-zero energy gap $\Delta$ between the ground state
and the excitations. The external parameters describing the transport are the anyon positions $R_{1},\ldots,R_{k}$
in the plane (pinned by trapping potentials).
We recall that we are interested in operations which keep the anyon positions fixed,
so that the final configuration is at most a permutation of the anyons in the original one.
The elementary braiding processes are realized by taking one anyon adiabatically around its neighbor
on a large time scale $t\in[0,T]$ with $T\gg\Delta^{-1}$
(for example, if the energy gap of the $\nu=5/2$ FQH state is $\Delta=500$mK then the minimum time interval
is $T_{\min}\sim 10^{-10}$s). The adiabatic theorem for a non-degenerate ground state implies that
if the system is initially in the ground state, then the final state after executing a complete loop in the
parameter space is, up to a phase, again the ground state
$$\psi^{(R_{1},\ldots,R_{k})}_{f}(z_{1},\ldots,z_{N})=\mathrm{e}^{i\phi}\,\psi^{(R_{1},\ldots,R_{k})}_{i}(z_{1},\ldots,z_{N}),$$
(1)
where $z_{i}$ are the coordinates of the electrons while $R_{j}$ are the coordinates of the anyons in the plane.
However, if the ground state is degenerate and separated by a gap from the excited states, then the adiabatic
theorem implies that the final state after traversing a complete loop is again a ground state from the same
degenerate multiplet but may be different from the original one. In this case the phase $\mathrm{e}^{i\phi}$ in
(1) must be replaced by a unitary operator acting on the multiplet [3].
Ignoring the dynamical phase $\mathrm{e}^{i\frac{1}{\hbar}\int dtE({\bf R}(t))}$ contribution to $\mathrm{e}^{i\phi}$ we will focus
on the (non-) Abelian Berry phase $\mathrm{e}^{i\alpha}$ defined by
$$\alpha=\oint d{\vec{\bf R}}\,\cdot\,\langle\psi(\vec{\bf R})|\vec{\nabla}_{\bf R}|\psi(\vec{\bf R})\rangle,\quad\vec{\bf R}=(R_{1},\ldots,R_{k})$$
(2)
There are three contributions to the Berry phase (2), which have different status with respect to the
topological protection:
•
geometrical phase which is of the type of the Aharonov–Bohm phase
•
topological phase (quasiparticle statistics, independent of the geometry)
•
monodromy of CFT wave functions with anyons
The geometrical phase is proportional to the area of the loop and is not topologically protected.
When the many-body states are described by chiral
CFT wave functions which are also orthonormal, the Berry connection is trivial [7].
Because of this, the entire effect of the adiabatic
transport is given by the explicit monodromy of the multi-valued CFT correlators. Therefore, when we construct
quantum gates for TQC, we can directly deal with the braid generators and forget about the Berry connection
induced by the adiabatic transport.
2 $n$ Ising qubits: $2n+2$ Ising anyons on antidots
Because the dimension of the computational space, spanned by the Pfaffian wave functions with $2n+2$ Ising
anyons at fixed positions $\eta_{1},\ldots\eta_{2n+2}$ in the plane [8], is $2^{n}$ we could use
the many-body states represented by these wave functions to realize $n$ Ising qubits.
Our qubit encoding scheme is roughly that we use one pair of $\sigma$ fields to represent one qubit, whose state
is $|0\rangle$ if the pair is in the fusion channel of the vacuum or $|1\rangle$ if it is in the Majorana channel.
However, since we would like to represent qubits by chiral CFT correlation functions of Ising $\sigma$ fields,
which are non-zero only if the total fermion parity inside the correlator is trivial, we need one extra pair of
$\sigma$ fields, which is inert from the viewpoint of TQC but compensates if necessary total parity of the encoded
qubits, i.e., our one-qubit states could be written as 4-pt correlators
$$|c_{1}\rangle=\langle(\sigma\sigma)_{c_{1}}(\sigma\sigma)_{c_{0}}\rangle_{\mathrm{CFT}},\quad\mathrm{with}\quad c_{0}=c_{1},$$
where the subscript of the pair $(\sigma\sigma)_{c}$ denotes its fusion channel $c$.
Similarly, we can represent $n$ qubits as a correlator of $(n+1)$ pairs of Ising anyons $\sigma$
where the last pair compensates the total fermion parity
$$|c_{1},\ldots,c_{i},\ldots,c_{n}\rangle\to\langle(\sigma\sigma)_{c_{1}}\cdots(\sigma\sigma)_{c_{i}}\cdots(\sigma\sigma)_{c_{n}}(\sigma\sigma)_{c_{0}}\rangle_{\mathrm{CFT}},$$
with $c_{i}=\pm$ being the fermion parity of the $i$-th pair of $\sigma$ fields.
This encoding scheme is illustrated in Fig. 1.
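The counting behind this encoding can be checked directly from the Ising fusion rules (a small numerical illustration of ours, not part of the text): the number of fusion paths of $2n+2$ $\sigma$ fields ending in the vacuum channel is $2^{n}$.

```python
def fusion_paths_to_vacuum(num_sigmas):
    """Count fusion paths of `num_sigmas` Ising sigma anyons that end in
    the vacuum channel, using sigma x I = sigma, sigma x psi = sigma and
    sigma x sigma = I + psi (a walk on the Bratteli diagram)."""
    counts = {"I": 1, "sigma": 0, "psi": 0}  # start from the vacuum
    for _ in range(num_sigmas):              # fuse in one sigma at a time
        counts = {
            "sigma": counts["I"] + counts["psi"],  # I x sigma, psi x sigma -> sigma
            "I": counts["sigma"],                  # sigma x sigma -> I
            "psi": counts["sigma"],                # sigma x sigma -> psi
        }
    return counts["I"]
```

This is exactly the dimension $2^{n}$ of the computational space spanned by the Pfaffian wave functions with $2n+2$ Ising anyons quoted below.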
2.1 Braid matrices: multi-anyon wave-function approach
The explicit form of the generators of the Ising representation of the braid group ${\mathcal{B}}_{2n+2}$
has been conjectured in [8] to coincide with the finite subgroup of the
$\pi/2$-rotations from SO$(2n+2)$ (more precisely, with one of the spinor representations of its double cover
Spin$(2n+2)$). More robust results have been obtained in [9, 10, 11].
However, a more natural approach, based on the direct computation of the braid matrices
by analytic continuation of the multi-anyon Pfaffian wave functions, using the different operator product expansions
of the Ising $\sigma$ fields in the Neveu–Schwarz and Ramond sectors of the Ising model, has been exploited
in [12]; this allowed one to write explicitly the generators $B_{1}^{(2n+2,\pm)},\ldots,B_{2n+1}^{(2n+2,\pm)}$
of the positive/negative-parity representations (denoted by $\pm$ in the superscript) of ${\mathcal{B}}_{2n+2}$
for an arbitrary number $n$ of qubits [12]:
$$B_{2j-1}^{(2n+2,\pm)}=\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{j-1}\otimes\left[\matrix{1&0\cr 0&i}\right]\otimes\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{n-j},\quad\textrm{for}\ \ 1\leq j\leq n,$$
(3)
$$B_{2j}^{(2n+2,\pm)}=\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{j-1}\otimes\frac{\mathrm{e}^{i\frac{\pi}{4}}}{\sqrt{2}}\left[\matrix{1&0&0&-i\cr 0&1&-i&0\cr 0&-i&1&0\cr-i&0&0&1}\right]\otimes\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{n-j-1},$$
(4)
for $n\geq 2$ and $1\leq j\leq n-1$, as well as
$$B_{2n}^{(2n+2,\pm)}=\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{n-1}\otimes\frac{\mathrm{e}^{i\frac{\pi}{4}}}{\sqrt{2}}\left[\matrix{1&-i\cr-i&1}\right].$$
(5)
The last braid generators $B_{2n+1}^{(2n+2,\pm)}$ of the $(\pm)$-parity representations of ${\mathcal{B}}_{2n+2}$
cannot be written in a similar form for general $n$ because they do not have a tensor product structure.
Yet, these diagonal matrices can be determined using Eq. (32) in [6], the results after
Eq. (24) in [13] and Proposition 2 in [12]
$$B_{2n+1}^{(2n+2,\pm)}=\frac{\mathrm{e}^{i\frac{\pi}{4}}}{\sqrt{2}}\left({\mathbb{I}}_{2^{n}}\mp i\underbrace{\sigma_{3}\otimes\cdots\otimes\sigma_{3}}_{n}\right),$$
(6)
where $\sigma_{3}$ is the third Pauli matrix.
The above equations (3), (4), (5) and (6)
provide the most explicit and compact form of the generators
of the two representations of ${\mathcal{B}}_{2n+2}$ with opposite fermion parity. Because
the Berry connection for adiabatic transport of Ising anyons is trivial [7],
these braid matrices can be ultimately used to implement topologically protected quantum gates
by adiabatic transport in the Ising TQC.
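As an independent sanity check on Eqs. (3)–(6) (a numerical sketch of ours, not part of the paper), one can observe that every generator has the uniform form $B_{k}=\frac{\mathrm{e}^{i\pi/4}}{\sqrt{2}}\left({\mathbb{I}}-i\,t_{k}\right)$ with $t_{k}$ a Pauli word, build the matrices for a small $n$, and verify the braid relations together with $B_{k}^{2}=t_{k}$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
S1 = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_1
S3 = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_3

def kron_all(mats):
    """Kronecker product of a list of matrices."""
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def ising_braid_generators(n, parity=+1):
    """Pauli words t_1..t_{2n+1} and the generators
    B_k = (e^{i pi/4}/sqrt 2)(I - i t_k) reproducing Eqs. (3)-(6);
    parity=+1/-1 selects the two representations of B_{2n+2}."""
    ts = []
    for j in range(1, n + 1):
        # B_{2j-1}: sigma_3 on qubit j  (Eq. (3))
        ts.append(kron_all([S3 if q == j else I2 for q in range(1, n + 1)]))
        if j < n:
            # B_{2j}: sigma_1 sigma_1 on qubits j, j+1  (Eq. (4))
            ts.append(kron_all([S1 if q in (j, j + 1) else I2
                                for q in range(1, n + 1)]))
    ts.append(kron_all([I2] * (n - 1) + [S1]))   # B_{2n}, Eq. (5)
    ts.append(parity * kron_all([S3] * n))       # B_{2n+1}, Eq. (6)
    c = np.exp(1j * np.pi / 4) / np.sqrt(2)
    return ts, [c * (np.eye(2 ** n) - 1j * t) for t in ts]
```

Since adjacent Pauli words anticommute and distant ones commute, the braid relations follow; the identity $B_{k}^{2}=t_{k}$ anticipates the monodromy analysis of Sec. 3.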
3 Pauli group for $n$ qubits: quantum correctable errors
One fundamental structure in any quantum information processing platform is the group of Pauli matrices, or Pauli
gates, which are part of the definition of the computational basis. They are important not only because they represent
a group of essential quantum operations but also because of the quantum error correction specifics.
The point is that there are two independent types of errors which can be considered as deviations of the
point, representing the qubit, of the Bloch sphere. Bit-flip ($\sigma_{x}$) errors are deviations along the meridians
while phase-flip ($\sigma_{z}$) errors are deviations along the parallels of the sphere.
While it is obvious that arbitrary errors can be decomposed into bit-flip and phase-flip errors, unlike in classical
error correction, the errors which could compromise a qubit are continuous quantities. Fortunately, it appears that
this continuum of (arbitrary) errors can be corrected by correcting only a discrete subset of those errors: e.g., as in
the Shor code [1].
Due to this virtue of the quantum correctability it is sufficient to consider and correct errors which belong to the Pauli
group. The $n$-qubit Pauli group is defined as the finite group containing all Pauli matrices $\sigma_{j}$ acting on any
of the qubits, including phases of $\pm i$
$${\mathcal{P}}_{n}=\left\{i^{m}\sigma_{\alpha(1)}\otimes\cdots\otimes\sigma_{\alpha(n)}\,\left|\quad\alpha(j),\ m\in\{0,1,2,3\}\right.\right\},$$
(7)
(with $\sigma_{0}={\mathbb{I}}_{2}$).
The projective Pauli group is isomorphic to ${\mathbb{Z}}_{2}^{2n}$ (see Eq. (17) in [6]), and its
center is ${\mathbb{Z}}_{4}$, so that the order of the $n$-qubit Pauli group
(7) is $|{\mathcal{P}}_{n}|=2^{2n+2}$.
3.1 $n$-qubit Clifford group: symplectic description
Because of the fundamental importance of the Pauli group, its normalizer also plays a very important role.
By definition, the normalizer of the $n$-qubit Pauli group, known as the $n$-qubit Clifford group, is the group of all
unitary $2^{n}\times 2^{n}$ matrices which preserve the Pauli group under conjugation
$${\mathcal{C}}_{n}=\left\{U\in SU(2^{n})\ |\ U^{*}\cdot{\mathcal{P}}_{n}\cdot U\subset{\mathcal{P}}_{n}\ \right\}.$$
(8)
The fact that the Clifford unitaries map Pauli operators to Pauli operators under conjugation makes them ideal for quantum error
correction, because they do not introduce new errors while correcting the existing ones.
The Pauli group is naturally a subgroup of the Clifford group (8), i.e., ${\mathcal{P}}_{n}\subset{\mathcal{C}}_{n}$.
The Clifford group is infinite, however, the projective Clifford group $[{\mathcal{C}}_{n}]\equiv{\mathcal{C}}_{n}/Z$, where $Z$ is
its center, is finite.
Furthermore, there is an interesting isomorphism [6] between the projective Clifford group $[{\mathcal{C}}_{n}]$ factorized
by the projective Pauli group $[{\mathcal{P}}_{n}]\equiv{\mathcal{P}}_{n}/{\mathbb{Z}}_{4}$ and the symplectic group $Sp_{2n}(2)$
(the group of symplectic $2n\times 2n$ matrices with elements $0$ and $1$)
$$[{\mathcal{C}}_{n}]/[{\mathcal{P}}_{n}]\simeq Sp_{2n}(2).$$
This isomorphism allows us to compute the order of the projective Clifford group
using the known order of the symplectic group $Sp_{2n}(2)$ [14] (see also Appendix A in
[6]), as well as the order of the projective Pauli group:
$$\left|{\mathcal{C}}_{n}/Z\right|=2^{n^{2}+2n}\prod_{j=1}^{n}(4^{j}-1)\,.$$
(9)
This result will be important when we try to estimate the computational power of the Ising TQC.
3.2 Braiding gates as Clifford gates
A very useful observation in the context of TQC with Ising anyons is that the $n$-qubit Pauli group completely
coincides with the monodromy subgroup of the braid group representation ${\mathcal{B}}_{2n+2}$
$${\mathcal{P}}_{n}\equiv\mathrm{Image}\left({\mathcal{M}}_{2n+2}\right).$$
(10)
Then, because the monodromy group is a normal subgroup of the braid group, ${\mathcal{M}}_{2n+2}\subset{\mathcal{B}}_{2n+2}$, i.e.,
$$\forall b\in{\mathcal{B}}_{2n+2},\ \forall m\in{\mathcal{M}}_{2n+2}:\quad b^{-1}\,m\,b\in{\mathcal{M}}_{2n+2},$$
it follows that all braiding gates are Clifford gates, i.e., the image of the braid group ${\mathcal{B}}_{2n+2}$ is a subgroup
of the Clifford group for $n$ Ising qubits
$$\mathrm{Image}({\mathcal{B}}_{2n+2})\subset{\mathcal{C}}_{n}.$$
To prove this, notice first that
${\mathcal{P}}_{n}\subset\mathrm{Image}\left({\mathcal{M}}_{2n+2}\right)$ because all Pauli gates could be expressed in terms of
the squares of the elementary braid generators, which belong to the monodromy group, i.e.,
for $1\leq j\leq 2n+1$, we have for the spinor representations generators $R_{j}^{(n+1,\pm)}$ of ${\mathcal{B}}_{2n+2}$
(which have been proven in Proposition 2 in [12] to be equivalent to our representations with
generators $B_{j}^{(2n+2,\pm)}$ derived directly from the wave-function)
$$\left(R_{2i-1}^{(n+1,+)}\right)^{2}=\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{i-1}\otimes\sigma_{3}\otimes\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{n-i},$$
(11)
$$\left(R_{2i}^{(n+1,+)}\right)^{2}=\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{i-1}\otimes\sigma_{2}\otimes\sigma_{2}\otimes\underbrace{{\mathbb{I}}_{2}\otimes\cdots\otimes{\mathbb{I}}_{2}}_{n-i-1},$$
(12)
$$\left(R_{2n}^{(n+1,\pm)}\right)^{2}=\mp\underbrace{\sigma_{3}\otimes\cdots\otimes\sigma_{3}}_{n-1}\otimes\sigma_{1},$$
(13)
$$\left(R_{2n+1}^{(n+1,\pm)}\right)^{2}=\pm\underbrace{\sigma_{3}\otimes\cdots\otimes\sigma_{3}}_{n}.$$
(14)
Therefore, the Pauli gates can be explicitly written in terms of monodromies:
$$\sigma_{2}^{(n)}=i\left(R_{2n}^{(n+1,+)}\right)^{2}\left(R_{2n+1}^{(n+1,+)}\right)^{2}\,,$$
$$\sigma_{2}^{(n-j)}=i\left(R_{2n-2j}^{(n+1,+)}\right)^{2}\sigma_{2}^{(n-j+1)},\quad 1\leq j\leq n-1.$$
On the other hand, as shown in [6], the monodromy generators $A_{ij}$, with
$1\leq i<j\leq 2n+2$, which can be presented in the form
$$A_{ij}\equiv U_{ij}^{-1}B_{i}^{2}U_{ij},\quad\mathrm{where}\quad U_{ij}=\prod_{k=i+1}^{j-1}B_{k},$$
can be expressed in terms of the Pauli generators [6] due to Eqs. (11),
(12), (13) and (14) because
$$A_{kl}^{\pm}=-(-i)^{l-k+1}\left(R_{k}^{\pm}\right)^{2}\left(R_{k+1}^{\pm}\right)^{2}\cdots\left(R_{l-2}^{\pm}\right)^{2}\left(R_{l-1}^{\pm}\right)^{2},$$
where $1\leq k<l\leq 2n+2$ and we omitted the $(n+1)$ in the superscripts.
This completes the proof of the statement that all quantum gates which can be implemented by braiding of
Ising anyons are actually Clifford gates.
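For a single qubit, this statement can be checked by brute force (our illustration; the two $2\times 2$ generators below are the $n=1$ matrices from Eqs. (3) and (5)): conjugating any Pauli matrix by a braid generator returns a Pauli matrix up to a power of $i$.

```python
import numpy as np
from itertools import product

# One-qubit braid generators for 4 Ising anyons (n = 1), Eqs. (3) and (5)
c = np.exp(1j * np.pi / 4) / np.sqrt(2)
B1 = np.diag([1.0, 1j])
B2 = c * np.array([[1, -1j], [-1j, 1]])

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def in_pauli_group(M):
    """True if M = i^m * sigma_a for some m in {0,1,2,3} and Pauli sigma_a."""
    return any(np.allclose(M, (1j ** m) * P)
               for m, P in product(range(4), paulis))

# B^dagger P B stays inside the one-qubit Pauli group for both generators
clifford_check = all(in_pauli_group(B.conj().T @ P @ B)
                     for B in (B1, B2) for P in paulis)
```

The same check extends to any $n$ using the tensor-product generators of Eqs. (3)–(6), in agreement with $\mathrm{Image}({\mathcal{B}}_{2n+2})\subset{\mathcal{C}}_{n}$.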
3.3 Orders of the image of the braid group and the Clifford group
We saw in the previous subsection that all braiding gates are Clifford gates.
Unfortunately, the converse is not true: not all Clifford gates can be implemented by
braiding of Ising anyons. To see this, let us compare the order of the Clifford group (9) with that
of the braid group ${\mathcal{B}}_{2n+2}$ in the Ising representation, which is given by [10, 6]
$$\left|\mathrm{Image}\left({\mathcal{B}}_{2n+2}\right)\right|=2^{2n+2}\,(2n+2)!,\quad n\geq 2,$$
(15)
and the order for $n=1$ (including the center) is $\left|\mathrm{Image}\left({\mathcal{B}}_{4}\right)\right|=96$,
see Ref. [15].
Using Eqs. (15) and (9) we compare in Table 1 the orders of the projective
braid and Clifford groups for a few qubits.
It is obvious that the order of the Clifford group grows faster with the number of the qubits than the image
of the braid group.
In Fig. 2 we plot the logarithm of the ratio of the order of the projective Clifford group to
the order of the image of the braid group, corresponding to $n$ qubits, as a function of the number of qubits $n$.
This logarithm still grows quadratically with $n$ which means that the order of the Clifford group
grows exponentially faster with $n$ than the order of the image of the braid group.
As can be seen from Table 1, the only exceptions are $n=1$ and $2$ for which the entire
Clifford group could be implemented by braiding [15].
Therefore, it is not possible to realize all Clifford gates for $n\geq 3$ by braiding of Ising anyons.
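The comparison behind Table 1 is easy to reproduce from Eqs. (9) and (15) (our numerical sketch; note that Eq. (15) counts the full image including global phases, so the small-$n$ values are not directly the projective orders listed in the table):

```python
from math import factorial

def clifford_order(n):
    """|C_n / Z| from Eq. (9): 2^(n^2 + 2n) * prod_{j=1}^{n} (4^j - 1)."""
    order = 2 ** (n * n + 2 * n)
    for j in range(1, n + 1):
        order *= 4 ** j - 1
    return order

def braid_image_order(n):
    """|Image(B_{2n+2})| from Eq. (15), valid for n >= 2."""
    return 2 ** (2 * n + 2) * factorial(2 * n + 2)
```

Already at $n=3$ the Clifford order ($92{,}897{,}280$) exceeds the braid-image order ($10{,}321{,}920$), and the ratio grows rapidly with $n$, consistent with the quadratic growth of the log-ratio shown in Fig. 2.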
4 Conclusions
In this paper we have demonstrated that the $n$-qubit Pauli group for TQC with Ising anyons coincides exactly
with the representation of the monodromy subgroup ${\mathcal{M}}_{2n+2}$ of the braid group ${\mathcal{B}}_{2n+2}$ describing
the exchanges of $2n+2$ Ising anyons. This implies that all braiding gates are actually Clifford gates, which is important for fault-tolerant quantum computation. However, not all Clifford gates are realizable by braiding only.
The Clifford gates typically missing from the braiding realization are the SWAP gates [6],
which simply exchange the quantum states of two qubits in an $n$-qubit quantum register [1].
This is another limitation of the Ising-anyon topological quantum computer, which has already been known
[16] to be
non-universal for quantum computation since some non-Clifford gates are not realizable by braiding.
Acknowledgments
We would like to thank Lyudmil Hadjiivanov, Holger Vogts, Sergey Bravyi, Volkher Scholz and
Johannes Guetschow for useful discussions.
L.S.G. has been supported as a Research Fellow by the Alexander von Humboldt
foundation. This work has been partially supported by the BG-NCSR under Contract
No. DO 02-257.
References
[1]
M. Nielsen and I. Chuang, Quantum Computation and Quantum Information.
Cambridge University Press, 2000.
[2]
A. Kitaev, “Fault-tolerant quantum computation by anyons,” Ann. of Phys.
(N.Y.) 303 (2003) 2.
[3]
S. Das Sarma, M. Freedman, C. Nayak, S. H. Simon, and A. Stern, “Non-Abelian
anyons and topological quantum computation,” Rev. Mod. Phys. 80
(2008) 1083, arXiv:0707.1889.
[4]
W. van Dam and M. Howard, “Tight noise thresholds for quantum computation with
perfect stabilizer operations,” Phys. Rev. Lett. 103 (2009)
170504.
[5]
A. Stern, “Anyons and the quantum Hall effect - a pedagogical review,” Ann. Phys. 323 (2008) 204–249,
arXiv:0711.4697.
[6]
A. Ahlbrecht, L. S. Georgiev, and R. F. Werner, “Implementation of Clifford
gates in the Ising-anyon topological quantum computer,” Phys. Rev. A
79 (2009) 032311, arXiv:0812.2338.
[7]
N. Read, “Non-abelian adiabatic statistics and Hall viscosity in quantum
Hall states and $p_{x}+ip_{y}$ paired superfluids,” Phys. Rev. B
79 (2009) 045308, arXiv:0805.2507.
[8]
C. Nayak and F. Wilczek, “$2n$ quasihole states realize $2^{n-1}$-dimensional
spinor braiding statistics in paired quantum Hall states,” Nucl.
Phys. B 479 (1996) 529–553,
cond-mat/9605145.
[9]
D. Ivanov, “Non-Abelian statistics of half-quantum vortices in $p$-wave
superconductors,” Phys. Rev. Lett. 86 (2001) 268–271.
[10]
N. Read, “Non-Abelian braid statistics versus projective permutation
statistics,” J. Math. Phys. 44 (2003) 558.
[11]
J. Franko, E. C. Rowell, and Z. Wang, “Extraspecial 2-groups and images of
braid group representations,” J. Knot Theory Ramifications 15 no.
4 (2006) 413, arXiv:math/0503435.
[12]
L. S. Georgiev, “Ultimate braid-group generators for exchanges of Ising
anyons,” (2008) arXiv:0812.2334.
[13]
L. S. Georgiev, “Computational equivalence of the two inequivalent spinor
representations of the braid group in the topological quantum computer based
on Ising anyons,” (2008)
arXiv:0812.2337.
[14]
R. A. Wilson, The Finite Simple Groups.
Springer, Berlin, 2007.
[15]
L. S. Georgiev, “Towards a universal set of topologically protected gates for
quantum computation with Pfaffian qubits,” Nucl. Phys. B 789
(2008) 552–590, hep-th/0611340.
[16]
M. Freedman, M. Larsen, and Z. Wang, “The two-eigenvalue problem and density
of Jones representation of braid groups,” (2000)
arXiv:math/0103200. |
On Far-Infrared and Submm Circular Polarization
B. T. Draine
Dept. of Astrophysical Sciences,
Princeton University, Princeton, NJ 08544, USA
draine@astro.princeton.edu
Abstract
Interstellar dust grains are often aligned.
If the
grain alignment direction varies along the line of sight,
the thermal emission becomes circularly-polarized.
In the diffuse interstellar medium,
the circular polarization at far-infrared
and submm wavelengths is predicted to be very small, and probably
unmeasurable. However,
circular polarization may reach detectable levels in
infrared dark clouds and protoplanetary disks. Measurement of circular
polarization could help constrain the structure of the magnetic field
in infrared dark clouds, and may shed light on the mechanisms responsible
for grain alignment in protoplanetary disks.
Keywords: infrared dark clouds (787); interstellar dust (836); protoplanetary disks (1300); radiative transfer (1335)
©2021. All rights reserved.
1 Introduction
Since the discovery of starlight polarization over 70 years ago
(Hiltner 1949; Hall 1949),
polarization has become a valuable tool for study of both the physical
properties of interstellar dust and the structure of the interstellar
magnetic field. Starlight polarization arises because
initially unpolarized starlight becomes linearly polarized as a result
of linear dichroism produced by aligned dust grains
in the interstellar medium (ISM).
While the physics of dust grain alignment is not yet fully understood,
early investigations (Davis & Greenstein 1951) showed how
spinning dust grains
could become aligned with their shortest axis
parallel to the magnetic field direction. Subsequent studies
have identified a number of important physical processes that were
initially overlooked
(see the review by Andersson et al. 2015),
but it remains clear that in the
diffuse ISM the magnetic field establishes the direction
of grain alignment, with the dust grains tending to align
with their short axes parallel to the local magnetic field.
van de Hulst (1957) noted
that if the magnetic field direction was not uniform,
starlight propagating through the dusty
ISM would become circularly
polarized.
This was further discussed by Serkowski (1962) and
Martin (1972).
The birefringence of the dusty ISM is responsible for
converting linear polarization to circular polarization
(Serkowski 1962; Martin 1972).
The strength of the resulting circular polarization depends on the
changes in the magnetic field direction and also on the optical properties
of the dust.
Circular polarization of optical light from the Crab Nebula
was observed by Martin et al. (1972).
Circular polarization
of starlight was subsequently observed by
Kemp (1972) and Kemp & Wolstencroft (1972);
the observed degree of circular polarization, $|V|/I\lesssim 0.04\%$,
was small but measurable.
As had been predicted, the circular polarization $V$ changed sign
as the wavelength varied from blue to red,
passing through zero near the wavelength $\sim$$0.55\micron$
where the linear polarization peaked (Martin & Angel 1976).
Because the circular polarization depends on the change in magnetic field
along the line of sight, it can in principle
be used to study the structure of the Galactic magnetic field.
Data for 36 stars near the Galactic
Plane suggested a systematic bending of the field for Galactic longitudes
$80^{\circ}\lesssim\ell<100^{\circ}$
(Martin & Campbell 1976).
However,
these studies do not appear to have been pursued,
presumably because
sufficiently bright and reddened stars are sparse.
In the infrared, circular polarization has been measured for bright
sources in molecular clouds
(Serkowski & Rieke 1973; Lonsdale et al. 1980; Dyck & Lonsdale 1981).
Measurements of linear and
circular polarization were used
to constrain the magnetic field structure in the
Orion molecular cloud OMC-1
(Lee & Draine 1985; Aitken et al. 2006).
Circular polarization has also been observed in the infrared
(K${}_{\rm s}$ band) in
reflection nebulae
(Kwon et al. 2014, 2016, 2018),
but in this case
scattering is important
(Fukushima et al. 2020),
making interpretation dependent
on the uncertain scattering geometry.
It was long understood that the nonspherical and aligned grains responsible
for starlight polarization must emit far-infrared radiation which would
be linearly polarized.
Observations of this polarized emission now allow the
magnetic field direction projected on the sky to be mapped in the general ISM
(see, e.g., Planck Collaboration et al. 2015a, b; Fissel et al. 2016).
Ground-based observations have provided polarization
maps for high surface-brightness regions at submm frequencies
(e.g., Dotson et al. 2010),
and the Stratospheric Observatory for Infrared Astronomy (SOFIA) is
providing polarization maps of bright regions in the far-infrared
(e.g., OMC-1: Chuss et al. 2019).
ALMA observations of mm and submm emission from protoplanetary disks
find that the radiation is often linearly polarized.
Scattering may contribute to the polarization
(Kataoka et al. 2015),
but the observed polarization directions and wavelength dependence
appear to
indicate that a substantial fraction of
the polarized radiation arises from thermal
emission from aligned dust grains
(Lee et al. 2021).
Previous theoretical discussions of circular polarization
were mainly concerned with
infrared and optical wavelengths where
initially unpolarized starlight
becomes polarized as a result of linear dichroism.
In a medium with changing polarization direction, the resulting circular
polarization is small because the linear polarization itself is
typically
only a few %, and the optical “phase shift”
produced by the aligned medium
is likewise small. At far-infrared wavelengths,
however, the radiation is already
substantially polarized when it is emitted, with linear polarizations
of 20% or more under favorable conditions
(Planck Collaboration et al. 2020).
While absorption optical depths tend to be small at long wavelengths,
the optical properties of the dust are
such that phase shift cross sections at submillimeter wavelengths
can be much larger than
absorption cross sections, raising the possibility that a medium with changing
alignment direction might exhibit measurable levels of circular polarization
at far-infrared or submm wavelengths.
The present paper discusses polarized radiative transfer in a medium with
partially aligned nonspherical grains, including both absorption and
thermal emission.
We estimate the expected degree of circular polarization
for emission from molecular clouds and protoplanetary disks.
For nearby molecular clouds, the far-infrared circular polarization is
very small, and probably unobservable.
The circular
polarization is predicted to be larger for so-called infrared dark
clouds (IRDCs), although it is still small.
For protoplanetary disks the circular polarization may be measurable,
but will depend on how the
direction of grain alignment changes in the disk.
The paper is organized as follows. The equations describing propagation of
partially-polarized radiation are presented in Section
2, and
the optics of partially-aligned dust mixtures
are summarized in Section 3.
Section 4 estimates the circularly polarized emission from
molecular clouds, including IRDCs.
Section 5 discusses the alignment of solid particles in
stratified protoplanetary disks resembling HL Tau.
If the grain alignment is due to dust-gas streaming, the emission may
be circularly-polarized.
The results are discussed in Section 6,
and summarized in Section 7.
2
Polarized Radiative Transfer
2.1 Refractive Index of a Dusty Medium
Aligned dust grains result in linear dichroism – the attenuation
coefficient depends on the linear polarization of the radiation.
Linear dichroism is
responsible for the polarization of starlight – initially unpolarized
light from a star becomes linearly polarized as the result of
polarization-dependent attenuation by aligned dust grains.
We adopt the convention that the electric field
$E\propto{\rm Re}[e^{imkz-i\omega t}]$ for a wave
propagating in the $+\hat{\bf z}$ direction, where
$k\equiv\omega/c=2\pi/\lambda$ is the wave vector
in vacuo, and $m(\omega)$
is the complex refractive index of the dusty medium.
For radiation polarized with ${\bf E}\parallel\hat{\bf e}_{j}$,
the complex refractive index is
$$m_{j}\equiv 1+m_{j}^{\prime}+im_{j}^{\prime\prime}~{}~{}~{}.$$
(1)
The real part $m_{j}^{\prime}$ describes retardation of the wave,
relative to propagation in vacuo. The phase delay $\phi$ varies as
$$\frac{d\phi_{j}}{dz}=\frac{2\pi}{\lambda}m_{j}^{\prime}=n_{d}C_{{\rm pha},j}~{}~{}~{},$$
(2)
where $n_{d}$ is the number density of dust grains, and
$C_{{\rm pha},j}$ is the “phase shift” cross section of a grain.
The imaginary part $m_{j}^{\prime\prime}$ describes attenuation of the energy
flux $F$:
$$\frac{d\ln F}{dz}=-\frac{4\pi}{\lambda}m_{j}^{\prime\prime}=-n_{d}C_{{\rm ext},j}~{}~{}~{},$$
(3)
where $C_{{\rm ext},j}$ is the extinction cross section.
2.2 Transfer Equations for the Stokes Parameters
Consider a beam of radiation characterized by the usual
Stokes vector ${\bf S}\equiv(I,Q,U,V)$.
The equations describing transfer of radiation through a dichroic
and birefringent medium with
changing magnetic field direction have
been discussed by Serkowski (1962) and
Martin (1974); our axes $\hat{\bf x}$ and $\hat{\bf y}$ correspond, respectively,
to axes 2 and 1 in Martin (1974).
These discussions assumed that the aligned grains
polarize the light by preferential attenuation of one of the polarization
modes, with circular polarization then arising from differences in propagation
speed of the linearly polarized modes.
For submicron particles, scattering is negligible
at far-infrared wavelengths, because the grain is small compared to the
wavelength.
However, the grains are themselves able to radiate,
and aligned grains will emit polarized radiation.
Let the direction of the static magnetic field ${\bf B}_{0}$ be
$$\hat{\bf b}\equiv\frac{{\bf B}_{0}}{|{\bf B}_{0}|}=(\hat{\bf n}\cos\Psi+\hat{\bf e}\sin\Psi)\sin\gamma+\hat{\bf z}\cos\gamma$$
(4)
where $\hat{\bf n}$ and $\hat{\bf e}$ are unit vectors in the North and East
directions, $\hat{\bf z}=\hat{\bf n}\times\hat{\bf e}$
is the direction of propagation, and $\sin\gamma=1$ if $\hat{\bf b}$ is in the
plane of the sky.
Let $\hat{\bf x}$ and $\hat{\bf y}$ be orthonormal vectors in the plane of the sky, with
$\hat{\bf x}$ parallel to the projection of ${\bf B}_{0}$ on the plane of the sky
(see Figure 1):
$$\displaystyle\hat{\bf x}$$
$$\displaystyle\,=\,$$
$$\displaystyle\hat{\bf n}\cos\Psi+\hat{\bf e}\sin\Psi$$
(5)
$$\displaystyle\hat{\bf y}$$
$$\displaystyle=$$
$$\displaystyle-\hat{\bf n}\sin\Psi+\hat{\bf e}\cos\Psi~{}.$$
(6)
If the dust grains are partially
aligned with their short axes tending to be parallel to ${\bf B}_{0}$, we expect
$C_{{\rm ext},y}>C_{{\rm ext},x}$.
At long wavelengths ($\lambda\gg 10\micron$) we also expect
$C_{{\rm pha},y}>C_{{\rm pha},x}$.
We assume that the dust grains themselves have no overall chirality, hence
circular dichroism and circular birefringence can be neglected so long
as the response of the magnetized plasma is negligible, which is
generally the case for $\nu\gtrsim 30\,{\rm GHz}$.
Following the notation of Martin (1974), define
$$\displaystyle\delta$$
$$\displaystyle\,\equiv\,$$
$$\displaystyle n_{d}~{}\frac{(C_{{\rm ext},y}+C_{{\rm ext},x})}{2}=\frac{2\pi}{\lambda}\left(m_{x}^{\prime\prime}+m_{y}^{\prime\prime}\right)$$
(7)
$$\displaystyle\Delta\sigma$$
$$\displaystyle\equiv$$
$$\displaystyle n_{d}~{}\frac{(C_{{\rm ext},y}-C_{{\rm ext},x})}{2}=\frac{2\pi}{\lambda}\left(m_{y}^{\prime\prime}-m_{x}^{\prime\prime}\right)$$
(8)
$$\displaystyle\Delta\epsilon$$
$$\displaystyle\equiv$$
$$\displaystyle n_{d}~{}\frac{(C_{{\rm pha},y}-C_{{\rm pha},x})}{2}=\frac{2\pi}{\lambda}\frac{\left(m_{y}^{\prime}-m_{x}^{\prime}\right)}{2}~{}.$$
(9)
If scattering is neglected,
the propagation of the Stokes parameters is given by Eq. (10) below.
(Eq. 10 conforms to the IEEE and
IAU conventions for the Stokes parameters,
Hamaker & Bregman 1996: $Q>0$ for ${\bf E}$ along the N-S direction,
$U>0$ for ${\bf E}$ along the NE-SW direction, and $V>0$ for “right-handed”
circular polarization, i.e.,
${\bf E}$ rotating in the counterclockwise direction as viewed on the sky.)
$$\frac{d}{dz}\left(\begin{array}[]{c}I\\
Q\\
U\\
V\\
\end{array}\right)=\left(\begin{array}[]{c c c c}-\delta&\Delta\sigma\cos 2\Psi&\Delta\sigma\sin 2\Psi&0\\
\Delta\sigma\cos 2\Psi&-\delta&0&\Delta\epsilon\sin 2\Psi\\
\Delta\sigma\sin 2\Psi&0&-\delta&-\Delta\epsilon\cos 2\Psi\\
0&-\Delta\epsilon\sin 2\Psi&\Delta\epsilon\cos 2\Psi&-\delta\\
\end{array}\right)\left(\begin{array}[]{c}I-B(T_{d})\\
Q\\
U\\
V\\
\end{array}\right)~{},$$
(10)
where
$B(T_{d})$ is the intensity of blackbody radiation
for dust temperature $T_{d}$.
Eq. (10)
differs from Martin (1974) only by
replacement of $I$ by $(I-B)$ on the
right-hand side to allow for thermal emission
(see also Reissl et al. 2016).
It is apparent that Eq. (10)
is consistent with thermal equilibrium blackbody radiation, with $d{\bf S}/dz=0$
for
${\bf S}=(B,0,0,0)$.
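This fixed point is easy to verify numerically. The sketch below (Python, with purely illustrative values for $\delta$, $\Delta\sigma$, $\Delta\epsilon$, and $\Psi$; none are taken from the paper) evaluates the right-hand side of Eq. (10) for ${\bf S}=(B,0,0,0)$:

```python
import numpy as np

def stokes_rhs(S, B, delta, dsig, deps, Psi):
    """Right-hand side of Eq. (10): d(I,Q,U,V)/dz for a dichroic,
    birefringent, thermally emitting medium (scattering neglected)."""
    c2, s2 = np.cos(2 * Psi), np.sin(2 * Psi)
    M = np.array([
        [-delta,     dsig * c2,  dsig * s2,  0.0],
        [dsig * c2, -delta,      0.0,        deps * s2],
        [dsig * s2,  0.0,       -delta,     -deps * c2],
        [0.0,       -deps * s2,  deps * c2, -delta],
    ])
    # The source vector is (I - B, Q, U, V), not (I, Q, U, V)
    src = np.array([S[0] - B, S[1], S[2], S[3]])
    return M @ src

# Blackbody radiation is a fixed point: dS/dz = 0 for S = (B, 0, 0, 0)
S_bb = np.array([1.0, 0.0, 0.0, 0.0])   # B = 1 in arbitrary units
rhs = stokes_rhs(S_bb, B=1.0, delta=0.3, dsig=0.05, deps=0.2, Psi=0.7)
print(rhs)                               # every component vanishes
```

The same routine can be reused to integrate Eq. (10) step by step through a cloud with varying $\Psi(z)$.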
3
Optical Properties of the Dust
We now assume that the grains can be
approximated by spheroids.
Draine & Hensley (2021a) found that observations of
starlight polarization and
far-infrared polarization appear to be
consistent with dust with oblate spheroidal shapes, with
axial ratio $b/a\approx 1.6$ providing a good fit to observations.
Let $\hat{\bf b}$ be a “special” direction in space for grain alignment: the short
axis $\hat{\bf a}_{1}$ of the grain
may be preferentially aligned either parallel or perpendicular
to $\hat{\bf b}$. For grains in the diffuse ISM, $\hat{\bf b}$ is the
magnetic field direction, and the short axis $\hat{\bf a}_{1}$ tends to be
parallel to $\hat{\bf b}$. In protostellar disks, however, other alignment
mechanisms may operate, and $\hat{\bf b}$ may not be parallel to the magnetic
field.
We approximate
the grains by oblate spheroids, spinning with short axis
$\hat{\bf a}_{1}$ parallel to the angular momentum ${\bf J}$.
For oblate spheroids,
the fractional alignment is defined to be
$$f_{\rm align}\equiv\frac{3}{2}\langle(\hat{\bf a}_{1}\cdot\hat{\bf b})^{2}\rangle-\frac{1}{2}~{}~{}~{},$$
(11)
where $\langle...\rangle$ denotes averaging over the grain population.
If ${\bf J}\parallel\hat{\bf b}$, then $f_{\rm align}\rightarrow 1$;
if ${\bf J}$ is randomly-oriented, then $f_{\rm align}=0$;
if ${\bf J}\perp\hat{\bf b}$, then $f_{\rm align}\rightarrow-\frac{1}{2}$.
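These limiting values can be checked with a short Monte Carlo over orientations; the following sketch (Python; the sample size is arbitrary) evaluates Eq. (11) for perfect, perpendicular, and isotropically random alignment of $\hat{\bf a}_{1}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_align(mean_cos2):
    """Eq. (11): f_align = (3/2) <(a1 . b)^2> - 1/2."""
    return 1.5 * mean_cos2 - 0.5

# a1 parallel to b:      <(a1.b)^2> = 1    ->  f_align = 1
# a1 perpendicular to b: <(a1.b)^2> = 0    ->  f_align = -1/2
# random orientations:   <(a1.b)^2> = 1/3  ->  f_align = 0
n = 200_000
v = rng.normal(size=(n, 3))                      # isotropic unit vectors
v /= np.linalg.norm(v, axis=1, keepdims=True)
cos2 = np.mean(v[:, 2] ** 2)                     # take b = z-hat
print(f_align(1.0), f_align(0.0), f_align(cos2))  # 1, -0.5, ~0
```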
The “modified picket fence approximation” (Draine & Hensley 2021a)
relates $\delta$, $\Delta\sigma$,
and $\Delta\epsilon$ to $f_{\rm align}$ and the angle $\gamma$:
$$\displaystyle\delta$$
$$\displaystyle~{}=~{}$$
$$\displaystyle n_{d}\left[\frac{C_{{\rm abs},a}+2C_{{\rm abs},b}}{3}+f_{\rm align}\left(\cos^{2}\gamma-\frac{1}{3}\right)\frac{\left(C_{{\rm abs},b}-C_{{\rm abs},a}\right)}{2}\right]$$
(12)
$$\displaystyle\Delta\sigma$$
$$\displaystyle=$$
$$\displaystyle n_{d}f_{\rm align}\sin^{2}\gamma~{}\frac{\left(C_{{\rm abs},b}-C_{{\rm abs},a}\right)}{2}$$
(13)
$$\displaystyle\Delta\epsilon$$
$$\displaystyle=$$
$$\displaystyle n_{d}f_{\rm align}\sin^{2}\gamma~{}\frac{\left(C_{{\rm pha},b}-C_{{\rm pha},a}\right)}{2}~{}~{}~{}.$$
(14)
In the Rayleigh limit (grain radius $a\ll\lambda$) we have
(Draine & Lee 1984)
$$\displaystyle C_{{\rm abs},j}$$
$$\displaystyle~{}=~{}$$
$$\displaystyle\frac{2\pi V}{\lambda}\frac{\epsilon_{2}}{|1+(\epsilon-1)L_{j}|^{2}}$$
(15)
$$\displaystyle C_{{\rm pha},j}$$
$$\displaystyle~{}=~{}$$
$$\displaystyle\frac{\pi V}{\lambda}\frac{\left\{(\epsilon_{1}-1)\left[1+L_{j}(\epsilon_{1}-1)\right]+\epsilon_{2}^{2}L_{j}\right\}}{|1+(\epsilon-1)L_{j}|^{2}}~{}~{}~{},$$
(16)
where $\epsilon(\lambda)\equiv\epsilon_{1}+i\epsilon_{2}$ is the complex dielectric
function of the grain material, and
$L_{a}$ and $L_{b}=(1-L_{a})/2$ are dimensionless “shape factors”
(van de Hulst 1957; Bohren & Huffman 1983) that depend on the
axial ratio of the spheroid.
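To illustrate Eqs. (15)-(16), the sketch below evaluates $C_{\rm abs}$ and $C_{\rm pha}$ for one polarization axis. The dielectric function $\epsilon=5+0.5i$ is an assumed illustrative value, not the astrodust $\epsilon(\lambda)$ of Draine & Hensley (2021b); the shape factor is the $L_{b}=0.268$ quoted below for $b/a=1.6$ oblate spheroids:

```python
import numpy as np

def rayleigh_cross_sections(eps, L, V_grain, lam):
    """C_abs and C_pha in the Rayleigh limit (Eqs. 15-16) for a spheroid
    axis with shape factor L, grain volume V_grain, and wavelength lam."""
    e1, e2 = eps.real, eps.imag
    denom = abs(1 + (eps - 1) * L) ** 2
    C_abs = (2 * np.pi * V_grain / lam) * e2 / denom
    C_pha = (np.pi * V_grain / lam) * (
        (e1 - 1) * (1 + L * (e1 - 1)) + e2**2 * L) / denom
    return C_abs, C_pha

eps = 5.0 + 0.5j                 # assumed, for illustration only
C_abs, C_pha = rayleigh_cross_sections(eps, L=0.268, V_grain=1.0, lam=1.0)
print(C_pha / C_abs)             # ~8 for these values
```

With $\epsilon_{1}$ large and $\epsilon_{2}$ modest, $C_{\rm pha}$ exceeds $C_{\rm abs}$ by nearly an order of magnitude here, illustrating why a nearly optically thin medium with a twisted alignment direction can still generate circular polarization.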
Draine & Hensley (2021b) have estimated
$\epsilon(\lambda)$ of astrodust for
different assumed axial ratios.
Figure 2 shows the
dimensionless ratios $\Delta\sigma/\delta$ and $\Delta\epsilon/\delta$
for oblate astrodust spheroids with $b/a=1.6$
($L_{a}=0.464$, $L_{b}=0.268$)
and
$f_{\rm align}=0.5$, for the case where the magnetic field is in the plane of
the sky ($\sin\gamma=1$).
The relatively high opacity that enables “astrodust” to reproduce the
observed far-infrared emission and polarization also implies that $\epsilon_{1}$
has to be fairly large at long wavelengths
(Draine & Hensley 2021b).
This causes $\Delta\epsilon/\delta$ to be relatively large, as seen in
Figure 2.
For $\lambda\gtrsim 70\micron$, oblate astrodust grains with $b/a=1.6$ have
$$\displaystyle\frac{\Delta\sigma}{\delta}$$
$$\displaystyle\,\approx\,$$
$$\displaystyle 0.38f_{\rm align}\sin^{2}\gamma$$
(17)
$$\displaystyle\frac{\Delta\epsilon}{\delta}$$
$$\displaystyle\approx$$
$$\displaystyle 9.0\left(\frac{\lambda}{\,{\rm mm}}\right)^{0.7}f_{\rm align}\sin^{2}\gamma~{}~{}~{}.$$
(18)
Eqs. (17) and (18)
neglect the weak dependence of $\delta$ on $f_{\rm align}$ and $\gamma$
(see Eq. 12); they are shown in Figure 2 for
$f_{\rm align}\sin^{2}\gamma=0.5$.
4
Circular Polarization from Interstellar Clouds
4.1 Grain Alignment
A spinning grain develops a magnetic moment from the Barnett effect
(if it has unpaired electrons)
and the Rowland effect (if it has a net charge).
For submicron grains, the resulting net magnetic
moment is large enough that
the Larmor precession period in the local interstellar magnetic field is
short compared to the timescales for other mechanisms
to change the direction of
the grain’s angular momentum ${\bf J}$. The rapid
precession of ${\bf J}$ around
the local magnetic field ${\bf B}_{0}$ and the resulting averaging of grain
optical properties
establishes ${\bf B}_{0}$ as the special direction for grain alignment –
grains will
be aligned with their short axis preferentially oriented either parallel or
perpendicular to ${\bf B}_{0}$.
Paramagnetic dissipation,
radiative torques, or systematic streaming of the grains relative to the
gas will determine whether the grains
align with their short axes preferentially
parallel or perpendicular to ${\bf B}_{0}$. Although the details of the physics of
grain alignment are not yet fully understood, it is now
clear
that grains in diffuse and translucent clouds tend to align with short axes
$\hat{\bf a}_{1}$ tending to be parallel to ${\bf B}_{0}$, i.e., with
$f_{\rm align}>0$ (see Eq. 11).
If the dust
grains are modeled by oblate spheroids with axial ratio $b/a=1.6$,
a mass-weighted alignment fraction $f_{\rm align}\approx 0.5$
can reproduce the
highest observed levels of polarization of both starlight and far-infrared
emission from dust in diffuse clouds (including
diffuse molecular clouds) (Draine & Hensley 2021a).
In dark clouds, the fractional polarization of the thermal emission is
generally lower than in diffuse clouds.
The lower fractional polarization may indicate
lower values of $f_{\rm align}$ within dark clouds,
but it could also
result from a nonuniform
magnetic field in the cloud, with the overall linear polarization fraction
reduced by beam-averaging over regions with different
polarization directions.
If the reduced values of linear polarization are
due to
changes in magnetic field direction along the line-of-sight,
the emission from the cloud could
become partly circularly-polarized. We now estimate
what levels of circular polarization might be present.
4.2
Nearby Molecular Clouds
Planck has observed linearly polarized emission from many molecular
clouds.
To estimate the levels of circular polarization that might be present,
we consider one
illustrative example, in the “RCrA-Tail” region in the R Corona Australis
molecular cloud
(see Fig. 11 in Planck Collaboration et al. 2015a).
The polarized emission in this region has a number of local maxima.
One of the polarized flux maxima coincides with a total emission peak
near
$(\ell,b)\approx(-0.9^{\circ},-18.7^{\circ})$,
with total intensity $I(353\,{\rm GHz})\approx 4\,{\rm MJy}\,{\rm sr}^{-1}$ and
linear polarization fraction $p\approx 2.5\%$.
For an assumed dust temperature
$T_{d}\approx 15\,{\rm K}$, the observed intensity
$I(353\,{\rm GHz})=4\,{\rm MJy}\,{\rm sr}^{-1}$ implies
$\tau(353\,{\rm GHz})\approx 1.3\times 10^{-4}$.
For diffuse ISM dust (see, e.g. Hensley & Draine 2021), this would
correspond to $A_{V}\approx 5\,$mag.
For simple assumptions about the angle $\Psi$ characterizing the
projection of the magnetic field on the sky, we can obtain approximate
analytic solutions to the radiative transfer equations
(10), valid for $\tau\ll 1$
(see Appendix A).
Define $d\tau^{\prime}\equiv\delta dz$.
Suppose that $T_{d}$, $(\Delta\sigma/\delta)$,
and $(\Delta\epsilon/\delta)$ are constant, and
assume that the
magnetic field direction has a smooth twist along the line of sight,
with $\Psi$ varying linearly with $\tau^{\prime}$ as $\tau^{\prime}$ varies
from $0$ to $\tau$:
$$\Psi=\Psi_{0}+\alpha\tau^{\prime}~{}~{}~{},~{}~{}~{}~{}\alpha\equiv\frac{\Delta\Psi}{\tau}~{}~{}~{}.$$
(19)
For $\tau\ll 1$,
the linear and circular polarization fractions are then
(see Appendix A)
$$\displaystyle p$$
$$\displaystyle~{}\approx~{}$$
$$\displaystyle\left(\frac{\Delta\sigma}{\delta}\right)\frac{\left[1-\cos(2\Delta\Psi)\right]^{1/2}}{\Delta\Psi}$$
(20)
$$\displaystyle\frac{V}{I}$$
$$\displaystyle\approx$$
$$\displaystyle\left(\frac{\Delta\sigma}{\delta}\right)\left(\frac{\Delta\epsilon}{\delta}\right)\frac{\tau}{2\Delta\Psi}\left[1-\frac{\sin(2\Delta\Psi)}{2\Delta\Psi}\right]~{}~{}~{}.$$
(21)
Eqs. (20) and (21) apply to the special case of
an isothermal medium with a uniform twist in the alignment direction.
If we assume diffuse cloud dust properties (Eq. 17, 18) but with
$f_{\rm align}\sin^{2}\gamma=0.075$
and a twist angle $\Delta\Psi=90^{\circ}$,
we can reproduce the observed polarization $p\approx 2.5\%$
in the RCrA-Tail region.
With these parameters, Eq. (21) predicts
circular polarization
$V/I\approx 7\times 10^{-7}(\lambda/850\micron)^{-1.1}$,
far below
current sensitivity limits.
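The numbers quoted for the RCrA-Tail follow directly from Eqs. (17)-(18) and (20)-(21); the arithmetic can be sketched as follows (Python; $\tau$, $\Delta\Psi$, and $f_{\rm align}\sin^{2}\gamma$ are the values assumed in the text, and 353 GHz corresponds to $\lambda=850\micron$):

```python
import numpy as np

f_sin2 = 0.075            # f_align * sin^2(gamma), as assumed in the text
dPsi = np.pi / 2          # 90 deg twist of the projected field
tau = 1.3e-4              # tau(353 GHz) inferred for the RCrA-Tail peak
lam_mm = 0.85             # 353 GHz <-> 850 micron

dsig = 0.38 * f_sin2                  # Eq. (17): Delta sigma / delta
deps = 9.0 * lam_mm**0.7 * f_sin2     # Eq. (18): Delta epsilon / delta

# Small-tau analytic solutions for a uniform twist (Eqs. 20-21)
p = dsig * np.sqrt(1 - np.cos(2 * dPsi)) / dPsi
VI = dsig * deps * (tau / (2 * dPsi)) * (1 - np.sin(2 * dPsi) / (2 * dPsi))

print(f"p ~ {100 * p:.1f}%")   # ~2.6%, close to the observed ~2.5%
print(f"V/I ~ {VI:.1e}")       # ~7e-7, far below current sensitivities
```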
It is clear that
measurable levels of circular polarization in the far-infrared
will require much larger optical
depths $\tau$.
4.3 Infrared Dark Clouds
Typical giant molecular clouds,
such as the Orion Molecular Cloud,
have mass surface densities resulting in
$A_{V}\approx 10{\rm\,mag}$ of extinction, and are therefore
referred to as “dark clouds”.
However, in the inner Galaxy, a number of clouds have
been observed that appear to be “dark” (i.e., opaque) in the
mid-infrared.
These “infrared dark clouds” (IRDCs) have
dust masses per area an order of magnitude larger
than “typical” giant molecular clouds.
Because of the much larger extinction in IRDCs, the circular
polarization may be much larger than in normal GMCs.
The “Brick” (G0.253+0.016) is a well-studied IRDC
(Carey et al. 1998; Longmore et al. 2012).
With an estimated mass $M>10^{5}M_{\odot}$ and high estimated density
($n_{\rm H}>10^{4}\,{\rm cm}^{-3}$), the Brick appears to be forming stars
(Marsh et al. 2016; Walker et al. 2021), although with
no signs of high-mass star formation.
It has been mapped at $70-500\micron$ by Herschel Space Observatory
(Molinari et al. 2016) and at
$220\,{\rm GHz}$ by ACT
(Guan et al. 2021).
Polarimetric maps have been made at $220\,{\rm GHz}$ by ACT, and at $850\,{\rm GHz}$
by the CSO (Dotson et al. 2010).
The Northeastern region at $(\ell,b)=(16^{\prime},2^{\prime})$
has $I(600\,{\rm GHz})\approx 5000\,{\rm MJy}\,{\rm sr}^{-1}$
(Molinari et al. 2016)
and $I(220\,{\rm GHz})\approx 90\,{\rm MJy}\,{\rm sr}^{-1}$
(Guan et al. 2021).
For an assumed
dust temperature $T_{d}\approx 20\,{\rm K}$, this
indicates optical depths
$\tau(600\,{\rm GHz})\approx 0.05$, $\tau(220\,{\rm GHz})\approx 0.005$.
Astrodust would then have
$\tau(850\,{\rm GHz})\approx 0.09$, and $\tau(353\,{\rm GHz})\approx 0.014$ –
about 100 times
larger than in the R CrA molecular cloud.
The fractional polarization is expected to be approximately independent
of frequency in the submm.
At $220\,{\rm GHz}$,
Guan et al. (2021) report
a linear polarization
of $1.8\%$
for the Northeastern end of the cloud.
The CSO polarimetry suggests a similar
fractional polarization.
While this fractional polarization is relatively small compared to the
highest values ($\sim 20\%$) observed by Planck in diffuse clouds, it is
still appreciable, requiring significant grain alignment in a substantial
fraction of the cloud volume
(i.e., not just in the surface layers of the IRDC).
The inferred average magnetic field direction $\Psi\approx 20^{\circ}$
(Guan et al. 2021)
differs by
$\sim$$60^{\circ}$ from the $\Psi\approx 80^{\circ}$ field direction
indicated by the $220\,{\rm GHz}$ polarization
outside the cloud, demonstrating that the magnetic field in this region
is far from uniform.
As a simple example,
we suppose, as we did for the RCrA-Tail region above, that
the projected field rotates
by $\Delta\Psi=90^{\circ}$ from the far side of the “Brick” to the near side.
We calculate the
circular polarization
at $850\,{\rm GHz}$ ($350\micron$)
for the estimated total optical depth $\tau(850\,{\rm GHz})=0.09$ of the Brick.
We use the estimated properties of astrodust in the diffuse ISM,
with $f_{\rm align}\sin^{2}\gamma=$$0.075$
to approximately reproduce the $\sim$$1.8\%$
polarization observed for the Brick.
Figure 3 shows the polarization state of the radiation as it propagates
through the cloud from $\tau^{\prime}=0$ to $\tau^{\prime}=\tau$. The fractional
polarization $p$ starts off at $\sim$2.9%,
dropping to $\sim$1.8% at
$\tau^{\prime}=\tau$ as the result of the assumed magnetic field twist of
$\Delta\Psi=90^{\circ}$.
The resulting $850\,{\rm GHz}$
circular polarization $V/I$ is small, only
$\sim$$0.025\%$.
Measuring such low levels of circular polarization will be challenging.
For $\Delta\epsilon/\delta\propto\lambda^{0.7}$
(see Figure 2) and absorption coefficient
$\delta\propto\lambda^{-1.8}$, the circular polarization from an IRDC
is expected to vary as $V/I\propto\lambda^{-1.1}$.
For the adopted parameters ($\Delta\Psi=90^{\circ}$,
$\tau(850\,{\rm GHz})=0.09$,
$f_{\rm align}\sin^{2}\gamma=$$0.075$), the Brick would have
$$\frac{V}{I}\approx{0.025}\%\left(\frac{350\micron}{\lambda}\right)^{1.1}~{}~{}~{}$$
(22)
While much larger than for normal GMCs, this estimate for the
circularly-polarized emission from the Brick is small, and
measuring it will be challenging.
5
Circular Polarization from Protoplanetary Disks
Protoplanetary disks can have dust surface densities well in excess of
IRDCs, raising the possibility that $\tau$ may be large enough to generate
measurable circular polarization if the grains are locally aligned
and the
alignment direction varies along the optical path.
5.1 Grain Alignment in Protoplanetary Disks
Gas densities in protoplanetary disks exceed interstellar gas densities by
many orders of magnitude.
The observed thermal emission spectra from young protoplanetary
disks appear to require that
most of the solid material be in particles
with sizes that may be as large as $\sim$mm
(Beckwith & Sargent 1991; Natta & Testi 2004; Draine 2006),
orders of magnitude larger than
in the diffuse ISM.
The physics of grain alignment in protoplanetary
disks differs substantially
from the processes in the diffuse ISM.
One important difference from interstellar clouds
is that in protoplanetary disks the Larmor precession period
for the grain sizes of interest is long
compared to the time for the grain
to undergo collisions with a mass of gas atoms equal to the grain mass
(Yang 2021).
With Larmor precession no longer important,
the magnetic field no longer determines the preferred direction
for grain alignment. Instead, the “special” direction
may be either the local direction of
gas-grain streaming – in which case, $\hat{\bf b}\parallel{\bf v}_{\rm drift}$ – or
perhaps the direction of anisotropy in the radiation field
– in which case, $\hat{\bf b}\parallel{\bf r}$. Whether grains will tend to align
with short axes $\hat{\bf a}_{1}$ parallel or perpendicular to $\hat{\bf b}$
is a separate
question.
5.1.1 Alignment by Radiative Torques?
Radiative torques resulting from outward-directed radiation provide
one
possible mechanism for grain alignment.
Starlight torques have been found to be very important for both
spinup and alignment of interstellar grains
(Draine & Weingartner 1996, 1997; Weingartner & Draine 2003; Lazarian & Hoang 2007a).
With mm-sized grains, both stellar radiation and infrared emission from the
disk may be capable of exerting systematic torques large enough to
affect the spin of the grain. However, the radiation pressure
$\sim L_{\star}/4\pi R^{2}c\approx 5\times 10^{-9}(L_{\star}/L_{\odot})(100\,{\rm AU}/R)^{2}\,{\rm erg}\,{\rm cm}^{-3}$ is small compared to the gas pressure
$\sim 8\times 10^{-5}(n_{\rm H}/10^{10}\,{\rm cm}^{-3})(T/100\,{\rm K})\,{\rm erg}\,{\rm cm}^{-3}$.
If the grain streaming velocity exceeds $\sim 10^{-4}c_{s}$, where $c_{s}$
is the sound speed, systematic torques exerted by gas atoms may dominate
radiative torques.
Studies of realistic grain geometries are needed to clarify the relative
importance of gaseous and radiative torques.
5.1.2 Alignment by Grain Drift?
The differential motion of dust and gas in three-dimensional disks
has been discussed by
Takeuchi & Lin (2002).
Grains well above or below
the midplane will sediment toward the midplane, with
${\bf v}_{\rm drift}\parallel{\bf z}_{\rm disk}$, where
$z_{\rm disk}$ is height above the midplane.
Dust grains close to the midplane will be in near-Keplerian orbits, but will
experience a “headwind”, with ${\bf v}_{\rm drift}\parallel\hat{\phi}$.
Vertical and azimuthal drift velocities will in general differ, with different
dependences on grain size and radial distance from the protostar.
Gold (1952) proposed
grain drift relative to the gas as an alignment mechanism.
For hypersonic motion,
Gold concluded that needle-shaped particles would tend to align
with their short axes perpendicular to
${\bf v}_{\rm drift}$.
Purcell (1969) analyzed spheroidal shapes, finding that significant
alignment requires hypersonic
gas-grain velocities
if the grains are treated as rigid bodies.
The degree of grain alignment
is increased when dissipative processes
within the grain are included (Lazarian 1994), but the degree of
alignment is small unless the streaming is supersonic.
Lazarian & Hoang (2007b) discussed mechanical alignment of
subsonically-drifting grains with “helicity”, arguing that helical
grains would preferentially acquire angular momentum parallel or antiparallel
to ${\bf v}_{\rm drift}$; internal dissipation would then cause the short
axis to tend to be parallel to ${\bf v}_{\rm drift}$.
Lazarian & Hoang (2007b) based their analysis on a simple geometric
model of a spheroidal grain with a single projecting panel.
More realistic irregular geometries have been considered by
Das & Weingartner (2016) and Hoang et al. (2018).
However, these studies all assumed Larmor
precession to be rapid compared to the gas-drag time, and are therefore
not directly
applicable to protoplanetary disks.
It appears possible that, averaged over
the ensemble of irregular grain shapes,
the net effect of gas-grain streaming in protoplanetary disks may
be (1) suprathermal angular momenta tending to be perpendicular to
${\bf v}_{\rm drift}$, and (2) tendency of grains to align with short axes
perpendicular to ${\bf v}_{\rm drift}$.
Below, we consider the
consequences of this conjecture.
5.2 The HL Tau Disk as an Example
ALMA has observed a number of protoplanetary disks
(e.g., Andrews et al. 2018). HL Tau remains one of the
best-observed cases: it is nearby ($\sim$$140\,{\rm pc}$), bright, and
moderately inclined ($i\approx 45^{\circ}$).
The optical depth in the disk is large, with
beam-averaged $\tau(3.1\,{\rm mm})\approx 0.13$ at $R\approx 100\,{\rm AU}$:
the observed $I_{\nu}(3.1\,{\rm mm})\approx 1.1\times 10^{3}\,{\rm MJy}\,{\rm sr}^{-1}$
at $R\approx 100\,{\rm AU}$
(Kataoka et al. 2017; Stephens et al. 2017)
implies $\tau\approx 0.13$ if
the dust temperature $T_{d}\approx 30\,{\rm K}$ (Okuzumi & Tazaki 2019).
Given that the dust is visibly concentrated in rings, and the
possibility that there may be
additional unresolved substructure, the actual optical depth of the
emitting regions at $100\,{\rm AU}$ is likely to be larger.
The polarization in HL Tau has been mapped by ALMA at $870\micron$,
$1.3\,{\rm mm}$, and $3.1\,{\rm mm}$
(Kataoka et al. 2017; Stephens et al. 2017).
The observed polarization patterns show
considerable variation from one frequency to another,
complicating interpretation.
Both intrinsic polarization from
aligned grains and polarization resulting from scattering appear
to be contributing to the overall polarization.
Mori & Kataoka (2021) argue that
polarized
emission makes a significant contribution to the polarization,
at least at $3.1\,{\rm mm}$.
The $3.1\,{\rm mm}$ polarization pattern is generally azimuthal
(Stephens et al. 2017). If due to polarized emission,
this would require that the radiating dust
grains have
short axes preferentially oriented in the radial direction.
The alignment mechanism is unclear.
Kataoka et al. (2019) favor radiative torques, with the
grain’s short axis assumed to be parallel to the radiative flux, in the
radial direction.
This would be consistent with the observation that the linear
polarization tends to be in the azimuthal direction.
If radiative torques are responsible for grain alignment in protoplanetary
disks, then we do not expect the thermal emission from the disk to be
circularly polarized, because the grains in the upper and lower layers
of the disk will tend to have the same alignment direction as the grains near
the midplane.
If there is no change in the direction of
the grain alignment along a ray, there will be no circular polarization.
Here we instead
suppose that grain alignment is dominated by gas-grain streaming due
to systematic motion of the dust grains relative to the local gas.
If we define $\hat{\bf b}\parallel{\bf v}_{\rm drift}$ we can apply the
discussion above.
As discussed above,
we conjecture that the irregular grains align with short axes tending
to be perpendicular to ${\bf v}_{\rm drift}$, thus $f_{\rm align}<0$.
As before, let $\gamma$ be the angle between the line-of-sight and
$\hat{\bf b}$,
and let $\Psi$ be
the angle (relative to north) of the projection of $\hat{\bf b}$ on the
plane of the sky.
For illustration, we take the disk to have the major axis in the E-W direction
(see Figure 5), with inclination $i$.
Thus vertical drifts correspond to $\Psi=0$ and $180^{\circ}$.
The treatment of radiative transfer developed above for magnetized
clouds can be reapplied to protoplanetary disks –
the only difference is that if the grains align with their short
axis tending to be perpendicular to ${\bf v}_{\rm drift}$
then $f_{\rm align}<0$, implying
$\Delta\sigma<0$ and $\Delta\epsilon<0$.
The direction and magnitude of ${\bf v}_{\rm drift}$ will vary with height in the disk.
${\bf v}_{\rm drift}$ may be approximately normal to the disk plane
for grains that are falling toward the midplane, whereas ${\bf v}_{\rm drift}$
will be azimuthal
for grains near the midplane, with Keplerian rotation causing them to move
faster than the pressure-supported gas disk.
Thus, grain orientations may vary both vertically
and azimuthally. With $\Psi$ varying along a ray, the emerging radiation may
be partially circularly-polarized.
The observed linear polarization of a few percent suggests that
$|\Delta\sigma/\delta|\approx$ a few %.
We do not expect $\Psi$ to vary linearly with $\tau$ as in
Eq. (A1):
the variation of $\Psi$ along the ray will depend on
the varying grain dynamics along the ray.
To investigate what levels of circular polarization might be present,
we consider an idealized model with three dust layers:
layer 2 is the dust near the midplane,
and layers 1 and 3 contain the dust below and above the midplane.
Conditions in layers 1 and 3 are assumed to be identical.
Let $\tau_{j}$ be the optical depth
through layer $j$.
Assume that $\hat{\bf b}$ is normal to the disk in layers 1 and 3, and
azimuthal in layer 2 (see Figure 5).
Thus $\Psi_{1}=\Psi_{3}$.
For small values of $\tau_{1}$, $\tau_{2}$, and $\tau_{3}$ we can approximate
the radiative transfer (see Appendix B):
$$I_{1}=B_{1}\tau_{1}\left(1-\frac{1}{2}\tau_{1}\right)e^{-\tau_{2}-\tau_{3}}$$
(23)
$$I_{2}=B_{2}\tau_{2}\left(1-\frac{1}{2}\tau_{2}\right)e^{-\tau_{3}}$$
(24)
$$I_{3}=B_{3}\tau_{3}\left(1-\frac{1}{2}\tau_{3}\right)$$
(25)
$$I\,\approx\,I_{1}+I_{2}+I_{3}$$
(26)
$$Q\approx-\left(\frac{\Delta\sigma}{\delta}\right)_{\!3}\cos(2\Psi_{3})\left[I_{1}+I_{3}-\tau_{3}(I_{1}+I_{2})\right]-\left(\frac{\Delta\sigma}{\delta}\right)_{\!2}\cos(2\Psi_{2})\left(I_{2}-\tau_{2}I_{1}\right)$$
(27)
$$U\approx-\left(\frac{\Delta\sigma}{\delta}\right)_{\!3}\sin(2\Psi_{3})\left[I_{1}+I_{3}-\tau_{3}(I_{1}+I_{2})\right]-\left(\frac{\Delta\sigma}{\delta}\right)_{\!2}\sin(2\Psi_{2})\left(I_{2}-\tau_{2}I_{1}\right)$$
(28)
$$V\approx\sin(2\Psi_{2}-2\Psi_{1})\left[\left(\frac{\Delta\epsilon}{\delta}\right)_{\!2}\left(\frac{\Delta\sigma}{\delta}\right)_{\!1}\tau_{2}I_{1}+\left(\frac{\Delta\epsilon}{\delta}\right)_{\!3}\left(\frac{\Delta\sigma}{\delta}\right)_{\!2}\tau_{3}I_{2}\right]~{}.$$
(29)
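As a quick numerical check of the three-layer approximation (Eqs. 23–29), the following Python sketch evaluates the Stokes parameters for a slab with vertical alignment in layers 1 and 3 and a tilted alignment in layer 2. The layer parameters here are illustrative stand-ins, not the Table 1 model.

```python
import math

def three_layer_stokes(B, tau, dsig, deps, Psi):
    """Approximate Stokes (I, Q, U, V) emerging from a three-layer slab
    (Eqs. 23-29).  B[j]: Planck function of layer j+1; tau[j]: optical depth;
    dsig[j] = (Delta sigma / delta); deps[j] = (Delta epsilon / delta);
    Psi[j]: alignment angle in radians.  Layers 1 and 3 are assumed
    identical, with Psi[0] == Psi[2]."""
    B1, B2, B3 = B
    t1, t2, t3 = tau
    I1 = B1 * t1 * (1 - 0.5 * t1) * math.exp(-t2 - t3)   # Eq. 23
    I2 = B2 * t2 * (1 - 0.5 * t2) * math.exp(-t3)        # Eq. 24
    I3 = B3 * t3 * (1 - 0.5 * t3)                        # Eq. 25
    I = I1 + I2 + I3                                     # Eq. 26
    Q = (-dsig[2] * math.cos(2 * Psi[2]) * (I1 + I3 - t3 * (I1 + I2))
         - dsig[1] * math.cos(2 * Psi[1]) * (I2 - t2 * I1))   # Eq. 27
    U = (-dsig[2] * math.sin(2 * Psi[2]) * (I1 + I3 - t3 * (I1 + I2))
         - dsig[1] * math.sin(2 * Psi[1]) * (I2 - t2 * I1))   # Eq. 28
    V = math.sin(2 * Psi[1] - 2 * Psi[0]) * (
        deps[1] * dsig[0] * t2 * I1
        + deps[2] * dsig[1] * t3 * I2)                   # Eq. 29
    return I, Q, U, V

# Illustrative (not Table 1) parameters: dsig, deps < 0 for f_align < 0;
# Psi = 0 in layers 1 and 3, Psi = pi/4 in layer 2.
I, Q, U, V = three_layer_stokes(
    B=(1.0, 1.0, 1.0), tau=(0.05, 0.2, 0.05),
    dsig=(-0.1, -0.1, -0.1), deps=(-0.5, -0.5, -0.5),
    Psi=(0.0, math.pi / 4, 0.0))
p_lin = math.hypot(Q, U) / I   # linear polarization fraction
```

For these numbers the predicted circular polarization is a fraction of a percent, comparable in order of magnitude to the peak $|V|/I\approx 0.2\%$ quoted below for the disk model.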
The direction and magnitude
of linear polarization at selected positions are shown
in Figure 5 for a stratified disk model with parameters
given in Table 1, viewed at inclination
$\theta_{i}=45^{\circ}$.
Figures 5c and 5d show the linear and circular polarization
as a function of azimuthal angle (in the disk plane) for this model.
In addition to accurate results from numerical integration, the results
from the analytic approximation (Eqs. 27–29)
are also plotted. The analytic
approximation is seen to provide
fair accuracy, even though $\tau_{2}=0.2$ is not small.
The circular polarization $V/I$ is quite accurate, but
in Figure 5(c),
the analytic approximation slightly
overestimates the linear polarization fraction.
However, the analytic approximations were developed for $\tau\ll 1$,
and here the total optical depth
$\tau_{1}+\tau_{2}+\tau_{3}=0.3$ is not small.
For this model, the linear polarization varies from $1.4\%$ to $3.2\%$ around
the disk, with average value $\sim$$2.5\%$.
The linear polarization tends to be close to
the azimuthal direction, with the largest
values on the major axis and the smallest values along the minor axis of
the inclined disk (see Figure 5).
The predicted circular polarization $|V|/I$
is small but perhaps detectable, with
$V/I$ varying from positive to negative from one
quadrant to another (see Figure 5),
with maxima $|V|/I\approx 0.2\%$ (see Figure 5(d)).
Stephens et al. (2017) mapped $V$ over the HL Tau disk
at $3.3\,{\rm mm}$, $1.3\,{\rm mm}$, and $870\micron$.
The $3.3\,{\rm mm}$ $V$ map does not appear to show any statistically significant
detection, with upper limits $|V/I|\lesssim 1\%$.
At $1.3\,{\rm mm}$ and $870\micron$ the NW side of the major axis may have
$V/I\approx-1\%$, but whether this is real rather than an instrumental
artifact remains unclear. In any event, the likely importance of
scattering at these shorter wavelengths will complicate interpretation.
6 Discussion
For typical molecular clouds we conclude that
the circular polarization will
be undetectably
small at the far-infrared and submm wavelengths where the clouds
radiate strongly. Probing the magnetic field structure in such clouds
using circular polarization is feasible only at shorter infrared wavelengths
where the extinction is appreciable, using
embedded infrared sources (stars, protostars).
The thermal dust emission from
so-called IR dark clouds (IRDCs) in the inner Galaxy – such
as the “Brick” – can show appreciable levels of linear polarization,
demonstrating both that there is appreciable grain alignment
and that the magnetic field structure in the cloud, while showing
evidence of rotation, is relatively coherent.
IRDCs
have large enough column densities that the resulting circular polarization
may reach detectable levels. For one position on the Brick and
plausible assumptions concerning the field,
we estimate a circular polarization
$|V/I|\approx{0.025}\%$ at $850\,{\rm GHz}$.
If the circular polarization can be detected and mapped in IRDCs,
it would provide constraints on the 3-dimensional magnetic field structure.
Unfortunately, the predicted $V/I$ is small, especially
at longer wavelengths (we expect
$V/I\propto\lambda^{-1.1}$), and detection will be
challenging.
Protoplanetary disks may offer the best opportunity to measure
circular polarization at submm wavelengths.
If there are significant
changes in the direction of grain alignment between the dust near the
midplane and dust well above and below
the midplane, linear dichroism and birefringence will
produce circular polarization.
Alignment processes in protoplanetary disks remain uncertain, but we
suggest that grain drift may cause the grains near the midplane to
be aligned with long axes
preferentially in the azimuthal direction, while grains above and below
the midplane may
be aligned with long axes
tending to be in the vertical direction (normal to the disk).
If the grains are small enough that scattering can be neglected, we calculate
the linear and circular polarization that would be expected for such
a model.
A characteristic quadrupole pattern of circular polarization is predicted
for this kind of grain alignment (see Figure 5).
Eq. (29) can be used to estimate the circular polarization
at wavelengths $\lambda\gtrsim 100\micron$ where thermal emission is
strong and the grains may be approximated by the Rayleigh limit.
We present a simple example to show the linear and circular polarization that
might be present in protoplanetary disks, such as the disk around HL Tau.
This example is not being put forward as a realistic model for HL Tau, but
simply to illustrate the possible circular polarization from dust aligned by
streaming in a stratified disk.
If observed,
this would help clarify the physical processes responsible for grain
alignment in protoplanetary disks. Absence of this circular polarization
would indicate that the preferred direction for grain
alignment in high-altitude regions is the same as the preferred direction
near the midplane, or else that grain alignment occurs
only in the midplane, or only in the upper layers.
7 Summary
1. We present the transfer equations for the Stokes parameters, including
the effects of thermal emission.
Once the properties of the medium are specified, these equations
can easily be integrated numerically.
For small optical depths, analytic solutions are given for
clouds with a uniform twist to the magnetic field, and for stratified
clouds with uniform alignment within individual strata.
2. Using the “astrodust” grain model (Draine & Hensley 2021b),
we calculate the optical properties of dust grains relevant to producing
linear and circular polarization in the far-infrared and submm.
By adjusting the assumed degree of dust alignment $f_{\rm align}$, these dust
properties may approximate the properties of dust in protoplanetary disks,
at wavelengths where scattering can be neglected.
3. At submm wavelengths, the “phase shift” cross section $C_{\rm pha}$
tends to be much larger than the absorption cross section $C_{\rm abs}$.
We estimate $C_{\rm pha}/C_{\rm abs}\approx 24(\lambda/\,{\rm mm})^{0.7}$.
4. The far-IR emission from dust in diffuse clouds, and in normal
molecular clouds, will have very low levels of circular polarization,
below current and foreseen sensitivities.
5. If the magnetic field in IRDCs has a significant systematic twist,
the emission from IRDCs may have
$V/I\approx 0.025\%\,(\lambda/350\micron)^{-1.1}$.
6. If dust grains in protoplanetary disks are aligned in different
directions in different strata, the resulting submm emission may
be circularly polarized, with peak
$V/I\approx 0.2\%\,(\lambda/350\micron)^{-1.1}$
for one simple example.
Measuring the circular polarization can constrain
the mechanisms responsible for grain alignment in protoplanetary disks.
This work was supported in part by NSF grant AST-1908123.
I thank
Chat Hull and Joseph Weingartner for helpful discussions,
and Robert Lupton for making the SM package available.
References
Aitken et al. (2006)
Aitken, D. K., Hough, J. H., & Chrysostomou, A. 2006, MNRAS, 366, 491,
doi: 10.1111/j.1365-2966.2005.09873.x
Andersson et al. (2015)
Andersson, B.-G., Lazarian, A., & Vaillancourt, J. E. 2015, ARA&A, 53,
501, doi: 10.1146/annurev-astro-082214-122414
Andrews et al. (2018)
Andrews, S. M., Huang, J., Pérez, L. M., et al. 2018, ApJ, 869,
L41, doi: 10.3847/2041-8213/aaf741
Beckwith & Sargent (1991)
Beckwith, S. V. W., & Sargent, A. I. 1991, ApJ, 381, 250,
doi: 10.1086/170646
Bohren & Huffman (1983)
Bohren, C. F., & Huffman, D. R. 1983, Absorption and Scattering of Light by
Small Particles (New York: Wiley)
Carey et al. (1998)
Carey, S. J., Clark, F. O., Egan, M. P., et al. 1998, ApJ, 508, 721,
doi: 10.1086/306438
Chuss et al. (2019)
Chuss, D. T., Andersson, B. G., Bally, J., et al. 2019, ApJ, 872, 187,
doi: 10.3847/1538-4357/aafd37
Das & Weingartner (2016)
Das, I., & Weingartner, J. C. 2016, MNRAS, 457, 1958,
doi: 10.1093/mnras/stw146
Davis & Greenstein (1951)
Davis, L. J., & Greenstein, J. L. 1951, ApJ, 114, 206,
doi: 10.1086/145464
Dotson et al. (2010)
Dotson, J. L., Vaillancourt, J. E., Kirby, L., et al. 2010, ApJS, 186,
406, doi: 10.1088/0067-0049/186/2/406
Draine (2006)
Draine, B. T. 2006, ApJ, 636, 1114, doi: 10.1086/498130
Draine & Hensley (2021a)
Draine, B. T., & Hensley, B. S. 2021a, ApJ, 919, 65,
doi: 10.3847/1538-4357/ac0050
Draine & Hensley (2021b)
—. 2021b, ApJ, 909, 94, doi: 10.3847/1538-4357/abd6c6
Draine & Lee (1984)
Draine, B. T., & Lee, H. M. 1984, ApJ, 285, 89, doi: 10.1086/162480
Draine & Weingartner (1996)
Draine, B. T., & Weingartner, J. C. 1996, ApJ, 470, 551,
doi: 10.1086/177887
Draine & Weingartner (1997)
—. 1997, ApJ, 480, 633, doi: 10.1086/304008
Dyck & Lonsdale (1981)
Dyck, H. M., & Lonsdale, C. J. 1981, in Infrared Astronomy, ed. C. G.
Wynn-Williams & D. P. Cruikshank, Vol. 96, 223–233
Fissel et al. (2016)
Fissel, L. M., Ade, P. A. R., Angilè, F. E., et al. 2016, ApJ,
824, 134, doi: 10.3847/0004-637X/824/2/134
Fukushima et al. (2020)
Fukushima, H., Yajima, H., & Umemura, M. 2020, MNRAS, 496, 2762,
doi: 10.1093/mnras/staa1718
Gold (1952)
Gold, T. 1952, MNRAS, 112, 215, doi: 10.1093/mnras/112.2.215
Guan et al. (2021)
Guan, Y., Clark, S. E., Hensley, B. S., et al. 2021, ApJ, 920, 6,
doi: 10.3847/1538-4357/ac133f
Hall (1949)
Hall, J. S. 1949, Science, 109, 166, doi: 10.1126/science.109.2825.166
Hamaker & Bregman (1996)
Hamaker, J. P., & Bregman, J. D. 1996, A&AS, 117, 161
Hensley & Draine (2021)
Hensley, B. S., & Draine, B. T. 2021, ApJ, 906, 73,
doi: 10.3847/1538-4357/abc8f1
Hiltner (1949)
Hiltner, W. A. 1949, Nature, 163, 283, doi: 10.1038/163283a0
Hoang et al. (2018)
Hoang, T., Cho, J., & Lazarian, A. 2018, ApJ, 852, 129,
doi: 10.3847/1538-4357/aa9edc
Kataoka et al. (2019)
Kataoka, A., Okuzumi, S., & Tazaki, R. 2019, ApJ, 874, L6,
doi: 10.3847/2041-8213/ab0c9a
Kataoka et al. (2017)
Kataoka, A., Tsukagoshi, T., Pohl, A., et al. 2017, ApJ, 844, L5,
doi: 10.3847/2041-8213/aa7e33
Kataoka et al. (2015)
Kataoka, A., Muto, T., Momose, M., et al. 2015, ApJ, 809, 78,
doi: 10.1088/0004-637X/809/1/78
Kemp (1972)
Kemp, J. C. 1972, ApJ, 175, L35, doi: 10.1086/180979
Kemp & Wolstencroft (1972)
Kemp, J. C., & Wolstencroft, R. D. 1972, ApJ, 176, L115,
doi: 10.1086/181036
Kwon et al. (2016)
Kwon, J., Tamura, M., Hough, J. H., Nagata, T., & Kusakabe, N. 2016,
AJ, 152, 67, doi: 10.3847/0004-6256/152/3/67
Kwon et al. (2014)
Kwon, J., Tamura, M., Hough, J. H., et al. 2014, ApJ, 795, L16,
doi: 10.1088/2041-8205/795/1/L16
Kwon et al. (2018)
Kwon, J., Nakagawa, T., Tamura, M., et al. 2018, AJ, 156, 1,
doi: 10.3847/1538-3881/aac389
Lazarian (1994)
Lazarian, A. 1994, MNRAS, 268, 713, doi: 10.1093/mnras/268.3.713
Lazarian & Hoang (2007a)
Lazarian, A., & Hoang, T. 2007a, MNRAS, 378, 910,
doi: 10.1111/j.1365-2966.2007.11817.x
Lazarian & Hoang (2007b)
—. 2007b, ApJ, 669, L77, doi: 10.1086/523849
Lee et al. (2021)
Lee, C.-F., Li, Z.-Y., Yang, H., et al. 2021, ApJ, 910, 75,
doi: 10.3847/1538-4357/abe53a
Lee & Draine (1985)
Lee, H. M., & Draine, B. T. 1985, ApJ, 290, 211, doi: 10.1086/162974
Longmore et al. (2012)
Longmore, S. N., Rathborne, J., Bastian, N., et al. 2012, ApJ, 746,
117, doi: 10.1088/0004-637X/746/2/117
Lonsdale et al. (1980)
Lonsdale, C. J., Dyck, H. M., Capps, R. W., & Wolstencroft, R. D.
1980, ApJ, 238, L31, doi: 10.1086/183251
Marsh et al. (2016)
Marsh, K. A., Ragan, S. E., Whitworth, A. P., & Clark, P. C. 2016,
MNRAS, 461, L16, doi: 10.1093/mnrasl/slw080
Martin (1972)
Martin, P. G. 1972, MNRAS, 159, 179, doi: 10.1093/mnras/159.2.179
Martin (1974)
—. 1974, ApJ, 187, 461, doi: 10.1086/152655
Martin & Angel (1976)
Martin, P. G., & Angel, J. R. P. 1976, ApJ, 207, 126,
doi: 10.1086/154476
Martin & Campbell (1976)
Martin, P. G., & Campbell, B. 1976, ApJ, 208, 727, doi: 10.1086/154656
Martin et al. (1972)
Martin, P. G., Illing, R., & Angel, J. R. P. 1972, MNRAS, 159, 191,
doi: 10.1093/mnras/159.2.191
Molinari et al. (2016)
Molinari, S., Schisano, E., Elia, D., et al. 2016, A&A, 591, A149,
doi: 10.1051/0004-6361/201526380
Mori & Kataoka (2021)
Mori, T., & Kataoka, A. 2021, ApJ, 908, 153,
doi: 10.3847/1538-4357/abd08a
Natta & Testi (2004)
Natta, A., & Testi, L. 2004, in Astr. Soc. Pac. Conf. Ser. 323, Star Formation in the
Interstellar Medium: In Honor of David Hollenbach, ed. D. Johnstone, F. C.
Adams, D. N. C. Lin, D. A. Neufeld, & E. C. Ostriker, 279
Okuzumi & Tazaki (2019)
Okuzumi, S., & Tazaki, R. 2019, ApJ, 878, 132,
doi: 10.3847/1538-4357/ab204d
Planck Collaboration et al. (2015a)
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al.
2015a, A&A, 576, A104, doi: 10.1051/0004-6361/201424082
Planck Collaboration et al. (2015b)
—. 2015b, A&A, 576, A106, doi: 10.1051/0004-6361/201424087
Planck Collaboration et al. (2020)
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641,
A12, doi: 10.1051/0004-6361/201833885
Purcell (1969)
Purcell, E. M. 1969, Physica, 41, 100, doi: 10.1016/0031-8914(69)90243-2
Reissl et al. (2016)
Reissl, S., Wolf, S., & Brauer, R. 2016, A&A, 593, A87,
doi: 10.1051/0004-6361/201424930
Serkowski (1962)
Serkowski, K. 1962, Advances in Astronomy and Astrophysics, 1, 289,
doi: 10.1016/B978-1-4831-9919-1.50009-1
Serkowski & Rieke (1973)
Serkowski, K., & Rieke, G. H. 1973, ApJ, 183, L103,
doi: 10.1086/181263
Stephens et al. (2017)
Stephens, I. W., Yang, H., Li, Z.-Y., et al. 2017, ApJ, 851, 55,
doi: 10.3847/1538-4357/aa998b
Takeuchi & Lin (2002)
Takeuchi, T., & Lin, D. N. C. 2002, ApJ, 581, 1344,
doi: 10.1086/344437
van de Hulst (1957)
van de Hulst, H. C. 1957, Light Scattering by Small Particles (New York:
John Wiley & Sons)
Walker et al. (2021)
Walker, D. L., Longmore, S. N., Bally, J., et al. 2021, MNRAS, 503,
77, doi: 10.1093/mnras/stab415
Weingartner & Draine (2003)
Weingartner, J. C., & Draine, B. T. 2003, ApJ, 589, 289,
doi: 10.1086/374597
Yang (2021)
Yang, H. 2021, ApJ, 911, 125, doi: 10.3847/1538-4357/abebde
Appendix A Uniform Twist
Assume a single dust temperature $T_{d}$.
Define $d\tau^{\prime}\equiv\delta dz$.
Suppose $\Psi$ varies linearly with $\tau$, with total twist $\Delta\Psi$:
$$\Psi(\tau^{\prime})=\Psi_{0}+\alpha\tau^{\prime}~{}~{}~{}~{},~{}~{}~{}~{}\alpha=\frac{\Delta\Psi}{\tau}~{}~{}~{}.$$
(A1)
Assuming ${\bf S}=(0,0,0,0)$ for $\tau=0$, and integrating
Eq. (10) while retaining only low-order
terms in $\tau$, we obtain:
$$I\approx B(T_{d})\tau\left\{1-\frac{1}{2}\tau-\left(\frac{\Delta\sigma}{\delta}\right)^{2}\tau\frac{[1-\cos(2\Delta\Psi)]}{4(\Delta\Psi)^{2}}\right\}$$
(A2)
$$Q\approx-\left(\frac{\Delta\sigma}{\delta}\right)B(T_{d})\tau\left(1-\tau\right)\frac{\left[\sin(2\Psi)-\sin(2\Psi_{0})\right]}{2\Delta\Psi}$$
(A3)
$$U\approx-\left(\frac{\Delta\sigma}{\delta}\right)B(T_{d})\tau\left(1-\tau\right)\frac{\left[\cos(2\Psi_{0})-\cos(2\Psi)\right]}{2\Delta\Psi}$$
(A4)
$$V\approx\left(\frac{\Delta\sigma}{\delta}\right)\left(\frac{\Delta\epsilon}{\delta}\right)\frac{B(T_{d})\tau^{2}}{2\Delta\Psi}\left\{1-\frac{1}{2}\tau+\frac{\tau}{(2\Delta\Psi)^{2}}\left[\cos(2\Delta\Psi)-1\right]-\frac{(1-\tau)}{2\Delta\Psi}\sin(2\Delta\Psi)\right\}$$
(A5)
$$p\equiv\frac{(Q^{2}+U^{2})^{1/2}}{I}\,\approx\,\frac{1}{\Delta\Psi}\left(\frac{\Delta\sigma}{\delta}\right)\frac{(1-\tau)\left[1-\cos(2\Delta\Psi)\right]^{1/2}}{1-\frac{1}{2}\tau-\tau\left(\frac{\Delta\sigma}{\delta}\right)^{2}\frac{1-\cos(2\Delta\Psi)}{4(\Delta\Psi)^{2}}}~{}~{}~{}.$$
(A6)
These results are valid for $\tau\ll 1$, and general twist angle
$\Delta\Psi$.
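For readers who wish to evaluate the uniform-twist results numerically, the following Python sketch implements Eqs. (A2)–(A5). The slab parameters (optical depth, twist angle, dichroism, and birefringence ratios) are illustrative assumptions, not values taken from the text.

```python
import math

def uniform_twist_stokes(B, tau, dsig, deps, Psi0, dPsi):
    """Stokes (I, Q, U, V) emerging from a slab with a uniform twist
    (Eqs. A2-A5), valid for tau << 1 and general total twist dPsi.
    dsig = Delta sigma / delta, deps = Delta epsilon / delta,
    Psi0 = alignment angle at tau = 0, dPsi = total twist (radians)."""
    Psi = Psi0 + dPsi  # alignment angle at the emerging surface
    I = B * tau * (1 - 0.5 * tau
                   - dsig**2 * tau
                   * (1 - math.cos(2 * dPsi)) / (4 * dPsi**2))   # Eq. A2
    Q = -dsig * B * tau * (1 - tau) * (
        math.sin(2 * Psi) - math.sin(2 * Psi0)) / (2 * dPsi)     # Eq. A3
    U = -dsig * B * tau * (1 - tau) * (
        math.cos(2 * Psi0) - math.cos(2 * Psi)) / (2 * dPsi)     # Eq. A4
    V = dsig * deps * B * tau**2 / (2 * dPsi) * (
        1 - 0.5 * tau
        + tau * (math.cos(2 * dPsi) - 1) / (2 * dPsi)**2
        - (1 - tau) * math.sin(2 * dPsi) / (2 * dPsi))           # Eq. A5
    return I, Q, U, V

# Illustrative example: a 60-degree twist through a slab with tau = 0.1
I, Q, U, V = uniform_twist_stokes(B=1.0, tau=0.1, dsig=0.07, deps=1.7,
                                  Psi0=0.0, dPsi=math.radians(60))
```

Because $V$ enters at second order in $\tau$ while $Q$ and $U$ enter at first order, the circular polarization for such parameters is far smaller than the linear polarization, as stated in the text.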
Appendix B Three Zone Model
Suppose the dust is located in three zones,
with dust temperatures $T_{d1}$, $T_{d2}$, and $T_{d3}$.
The aligned dust grains have $\Psi=\Psi_{1}$ for
$0<\tau<\tau_{1}$, $\Psi=\Psi_{2}$ for
$\tau_{1}<\tau<\tau_{1}+\tau_{2}$, and
$\Psi=\Psi_{3}$ for
$\tau_{1}+\tau_{2}<\tau<\tau_{1}+\tau_{2}+\tau_{3}$.
Suppose all $\tau_{j}\ll 1$.
Define
$$I_{1}\,\equiv\,B(T_{d1})\left[1-e^{-\tau_{1}}\right]e^{-\tau_{2}-\tau_{3}}\,\approx\,B(T_{d1})\,\tau_{1}\left[1-\frac{1}{2}\tau_{1}\right]e^{-\tau_{2}-\tau_{3}}$$
(B1)
$$I_{2}\,\equiv\,B(T_{d2})\left[1-e^{-\tau_{2}}\right]e^{-\tau_{3}}\,\approx\,B(T_{d2})\,\tau_{2}\left[1-\frac{1}{2}\tau_{2}\right]e^{-\tau_{3}}$$
(B2)
$$I_{3}\,\equiv\,B(T_{d3})\left[1-e^{-\tau_{3}}\right]\,\approx\,B(T_{d3})\,\tau_{3}\left[1-\frac{1}{2}\tau_{3}\right]$$
(B3)
If ${\bf S}=(0,0,0,0)$ for $\tau=0$, then the radiation emerging from layer 3
has
$$I\,\approx\,I_{1}+I_{2}+I_{3}$$
(B4)
$$Q\approx-\left(\frac{\Delta\sigma}{\delta}\right)_{\!1}\!\cos(2\Psi_{1})I_{1}-\left(\frac{\Delta\sigma}{\delta}\right)_{\!2}\!\cos(2\Psi_{2})[I_{2}-\tau_{2}I_{1}]-\left(\frac{\Delta\sigma}{\delta}\right)_{\!3}\!\cos(2\Psi_{3})[I_{3}-\tau_{3}(I_{1}+I_{2})]$$
(B5)
$$U\approx-\left(\frac{\Delta\sigma}{\delta}\right)_{\!1}\!\sin(2\Psi_{1})I_{1}-\left(\frac{\Delta\sigma}{\delta}\right)_{\!2}\!\sin(2\Psi_{2})[I_{2}-\tau_{2}I_{1}]-\left(\frac{\Delta\sigma}{\delta}\right)_{\!3}\!\sin(2\Psi_{3})[I_{3}-\tau_{3}(I_{1}+I_{2})]$$
(B6)
$$V\approx\left(\frac{\Delta\epsilon}{\delta}\right)_{\!2}\left(\frac{\Delta\sigma}{\delta}\right)_{\!1}\sin\left(2\Psi_{2}-2\Psi_{1}\right)\tau_{2}I_{1}+\left(\frac{\Delta\epsilon}{\delta}\right)_{\!3}\left(\frac{\Delta\sigma}{\delta}\right)_{\!1}\sin\left(2\Psi_{3}-2\Psi_{1}\right)\tau_{3}I_{1}+\left(\frac{\Delta\epsilon}{\delta}\right)_{\!3}\left(\frac{\Delta\sigma}{\delta}\right)_{\!2}\sin\left(2\Psi_{3}-2\Psi_{2}\right)\tau_{3}I_{2}~{}.$$
(B7)
Optimization of Convolutional Neural Network using Microcanonical Annealing Algorithm
Vina Ayumi1,
L.M. Rasdi Rere1,2,
Mohamad Ivan Fanany1,
Aniati Murni Arymurthy1,
1 Machine Learning and Computer Vision Laboratory,
Faculty of Computer Science, Universitas Indonesia
2 Computer System Laboratory, STMIK Jakarta STI&K
* vina.ayumi@ui.ac.id
Abstract
Convolutional neural network (CNN) is one of the most prominent architectures and algorithms in Deep Learning. It shows remarkable improvement in the recognition and classification of objects, and has proven very effective in a variety of computer vision and machine learning problems. As with other deep learning methods, however, training a CNN is interesting yet challenging. Recently, metaheuristic algorithms such as Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, and Harmony Search have been used to optimize CNNs. In this paper, another type of metaheuristic algorithm with a different strategy is proposed, i.e. Microcanonical Annealing, to optimize the Convolutional Neural Network. The performance of the proposed method is tested on the MNIST and CIFAR-10 datasets. Although the experimental results on the MNIST dataset indicate an increase in computation time (1.02x - 1.38x), the proposed method can considerably enhance the performance of the original CNN (up to 4.60%). On the CIFAR10 dataset, the current state of the art is 96.53% using fractional pooling, while the proposed method achieves 99.14%.
Keywords— Metaheuristic, Microcanonical Annealing, Convolutional Neural Network, MNIST, CIFAR10
1 Introduction
Essentially, Deep Learning (DL) is motivated by artificial intelligence (AI) research, where the objective is to replicate the capabilities of the human brain, i.e. to observe, learn, analyze, and make decisions, particularly for complex problems [14]. DL is about learning hierarchical feature representations, and it covers a variety of methods, such as neural networks, hierarchical probabilistic models, and supervised as well as unsupervised learning algorithms [28]. The current good reputation of DL is due to the decrease in the price of computer hardware, the improvement in computational processing capabilities, and advanced research in Machine Learning and Signal Processing [11].
In general, DL models can be classified into discriminative, generative, and hybrid models [11]. Recurrent neural networks (RNN), deep neural networks (DNN), and convolutional neural networks (CNN) are examples of discriminative models. Examples of generative models are the deep Boltzmann machine (DBM), the regularized autoencoder, and the deep belief network (DBN). A hybrid model is a combination of generative and discriminative models; an example is a deep CNN pre-trained using a DBN, which can perform better than a deep CNN that uses only random initialization. Among all of these DL techniques, this paper focuses on CNN.
Although DL has a good reputation for solving a wide range of learning problems, training it is challenging. A successful proposal to optimize this technique using layer-wise pre-training was made by Hinton and Salakhutdinov [7]. Other methods include Hessian-free optimization, suggested by Martens [12], and Krylov Subspace Descent, by Vinyals et al. [16].
Recently, some metaheuristic algorithms have been used to optimize DL, especially CNN. Several papers [29][15][18][20][19] report that these methods can improve the accuracy of CNN. Metaheuristics are powerful methods for solving difficult optimization problems, and they have been used in almost all research areas of engineering, science, and even industrial applications [26]. In general, these methods pursue three main objectives, i.e. solving big problems, solving problems faster, and finding robust algorithms [3]. Besides, they are not difficult to design, are flexible, and are relatively easy to apply.
Almost all metaheuristic algorithms are inspired by nature, based on phenomena in physics, biology, and ethology. Examples inspired by biology are Differential Evolution (DE), Evolution Strategy (ES), and Genetic Algorithm (GA); examples from physics are the Threshold Accepting method (TA), Microcanonical Annealing (MA), and Simulated Annealing (SA); examples from ethology are Ant Colony Optimization (ACO), Firefly Algorithm (FA), and Particle Swarm Optimization (PSO) [2]. Other metaheuristics are inspired by music, such as the Harmony Search algorithm [10].
Metaheuristics can also be classified into single-solution based metaheuristics (S-metaheuristics) and population-based metaheuristics (P-metaheuristics). Examples of S-metaheuristics are SA, TA, MA, Guided Local Search, and Tabu Search. P-metaheuristics can be divided into Swarm Intelligence (SI) and Evolutionary Computation (EC); examples of SI are FA, PSO, ACO, and Bee Colony Optimization, and examples of EC are GA, ES, and DE [2].
Of the various types of metaheuristic algorithms, in this paper we use MA, considering that S-metaheuristics are simple to implement on DL and that, to the best of our knowledge, MA has never been used to optimize a CNN.
The Microcanonical Annealing algorithm is a variant of Simulated Annealing. Using an adaptation of the Metropolis algorithm, the conventional SA algorithm aims to bring a system to equilibrium at decreasing temperatures [1]. MA, on the other hand, is based on Creutz’s microcanonical simulation technique, where the system’s evolution is controlled by its internal energy rather than by its temperature. The advantage of the Creutz algorithm over the Metropolis algorithm is that it requires neither the generation of high-quality random numbers nor the evaluation of transcendental functions, thus allowing much faster implementation. Experiments on the Creutz method indicate that it can be programmed to run an order of magnitude faster than the conventional Metropolis method for discrete systems [4].
The organization of this paper is as follows: Section 1 is the introduction; Section 2 provides an overview of Microcanonical Annealing; Section 3 describes the convolutional neural network; Section 4 presents the proposed method; Section 5 gives the experimental results; and lastly, Section 6 presents the conclusions of this paper.
2 Microcanonical Annealing
Microcanonical Annealing (MA) is a variant of simulated annealing (SA). This technique is based on the Creutz algorithm, also known as the “demon” algorithm or microcanonical Monte Carlo simulation, in which the system is allowed to reach thermodynamic equilibrium as an isolated system, so that the total energy of the system remains constant [2].
The total energy is the sum of the kinetic energy $E_{k}$ and the potential energy $E_{p}$ of the system, as in equation (1):
$$E_{total}=E_{k}+E_{p}$$
(1)
In the case of a minimization problem, the potential energy $E_{p}$ is the objective function to be minimized, and the kinetic energy plays the role that temperature plays in SA; it is forced to remain positive [2]. When the change of energy $\Delta E$ is negative, the new state is accepted and the kinetic energy increases ($E_{k}\leftarrow E_{k}-\Delta E$). Otherwise, the new state is accepted only when $\Delta E\leq E_{k}$; the energy gained in the form of potential energy is deducted from the kinetic energy, so that the total energy remains constant. The standard algorithm for MA is shown in Algorithm 1 [2].
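Under the acceptance rule described above, the algorithm can be sketched in Python as follows. The `objective` and `neighbor` functions are hypothetical placeholders for the problem-specific energy function and move generator, and the stage counts and cooling factor are illustrative assumptions, not values from the paper.

```python
import random

def microcanonical_annealing(objective, init, neighbor, e_k0=1.0,
                             n_stages=20, n_iter=200, cooling=0.8):
    """Sketch of Microcanonical Annealing (Creutz "demon" dynamics).
    The kinetic energy e_k plays the role of the temperature: a move with
    potential-energy change d_e is accepted iff d_e <= e_k, and e_k is
    updated as e_k <- e_k - d_e, so e_k + e_p stays constant within a
    stage.  Between stages e_k is reduced, mimicking SA's cooling."""
    x = init
    e_p = objective(x)
    best_x, best_e = x, e_p
    e_k = e_k0
    for _stage in range(n_stages):
        for _ in range(n_iter):
            cand = neighbor(x)
            d_e = objective(cand) - e_p
            if d_e <= e_k:            # the demon can pay for the move
                x = cand
                e_p += d_e
                e_k -= d_e            # total energy e_p + e_k conserved
                if e_p < best_e:
                    best_x, best_e = x, e_p
        e_k *= cooling                # "anneal": drain kinetic energy
    return best_x, best_e

# Toy usage: minimize f(x) = x^2 starting from x = 5
random.seed(0)
f = lambda x: x * x
nb = lambda x: x + random.uniform(-0.5, 0.5)
xbest, fbest = microcanonical_annealing(f, init=5.0, neighbor=nb)
```

Note that downhill moves ($\Delta E<0$) always satisfy $\Delta E\leq E_{k}$ and automatically increase the kinetic energy, exactly as in the prose description.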
3 Convolutional neural network
CNN is a variant of the standard multilayer perceptron (MLP). Its ability to reduce the dimension of the data, extract features sequentially, and classify within one network structure is a distinguishing advantage of this method, especially for pattern recognition, compared with conventional approaches [27].
The classical CNN by LeCun et al. [9] is an extension of the traditional MLP based on three ideas: local receptive fields, weight sharing, and spatial/temporal sub-sampling. There are two types of processing layers: convolution layers and sub-sampling layers. As demonstrated in Fig. 1, the processing layers comprise three convolution layers C1, C3, and C5, interleaved with two sub-sampling layers S2 and S4, and an output layer F6. These convolution and sub-sampling layers are arranged into planes called feature maps.
In a convolution layer, each neuron is locally linked to a small input region (local receptive field) in the preceding layer. All neurons within the same feature map receive data from different input regions until the whole input plane has been scanned, but they use the same weights (weight sharing).
The feature maps are spatially down-sampled in the sub-sampling layer, in which the map size is reduced by a factor of 2. For instance, a feature map of size 10x10 in layer C3 is sub-sampled to a corresponding feature map of size 5x5 in the subsequent layer S4. The last layer, F6, performs the classification [9].
Basically, a convolution layer is associated with a number of feature maps, a kernel size, and connections to the previous layer. Each feature map is the result of a sum of convolutions of the maps of the previous layer with their corresponding kernels (a linear filter); a bias term is then added to the map and a non-linear function is applied. The $k$-th feature map $M_{ij}^{k}$ with weights $W^{k}$ and bias $b_{k}$ is obtained using the $\tanh$ function as follows:
$$M_{ij}^{k}=\tanh((W^{k}\times x)_{ij}+b_{k})$$
(2)
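A minimal sketch of Eq. (2) for a single input plane (a real CNN sums the convolutions of several previous-layer maps before adding the bias); the 28x28 input and 5x5 kernel are chosen to match the LeNet-style layer sizes discussed above:

```python
import numpy as np

def feature_map(x, w, b):
    """Valid 2-D convolution of input x with kernel w, plus bias b,
    passed through tanh, as in Eq. (2). Single-input-plane sketch."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    m = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise product of the local receptive field and kernel
            m[i, j] = np.sum(x[i:i+kh, j:j+kw] * w) + b
    return np.tanh(m)

# a 28x28 input with a 5x5 kernel yields a 24x24 feature map
out = feature_map(np.random.randn(28, 28), np.random.randn(5, 5) * 0.1, 0.0)
```

The tanh squashing keeps every map entry in (-1, 1), which is why the subsequent layers can assume bounded activations.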
The purpose of a sub-sampling layer is to achieve spatial invariance by reducing the resolution of the feature maps, where each feature map is pooled from the corresponding feature map of the previous layer. With inputs $a_{i}^{n\times n}$, a trainable scalar $\beta$, and a trainable bias $b$, the sub-sampling function is given by the following equation:
$$a_{j}=\tanh\left(\beta\sum_{N\times N}{a_{i}^{n\times n}+b}\right)$$
(3)
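Eq. (3) can be sketched as follows, assuming non-overlapping $n\times n$ blocks with $n=2$ as in the text; in a real network $\beta$ and $b$ would be trainable parameters rather than fixed values:

```python
import numpy as np

def subsample(a, beta, b, n=2):
    """Sub-sampling of Eq. (3): sum each non-overlapping n x n block,
    scale by the trainable beta, add the trainable bias b, apply tanh."""
    h, w = a.shape
    # reshape so axis 1 and 3 index positions inside each n x n block
    blocks = a[:h - h % n, :w - w % n].reshape(h // n, n, w // n, n)
    return np.tanh(beta * blocks.sum(axis=(1, 3)) + b)

# a 10x10 map in C3 shrinks to a 5x5 map in S4, matching the text
pooled = subsample(np.random.randn(10, 10), beta=0.5, b=0.0)
```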
After several convolutions and sub-samplings, the final stage is the classification layer. The resulting feature maps feed a series of fully connected layers that execute the classification task. In this layer, each output neuron is assigned to one class label; for the CIFAR10 or MNIST data sets, the layer contains ten neurons corresponding to the ten classes.
4 Design of proposed methods
In the proposed method, the MA algorithm is used to train the CNN to find the configuration with the best accuracy, as well as to minimize the estimated error and an indicator of network complexity. This objective is realized by computing the loss function of the solution vector, i.e., the standard error on the training set. The following loss function is used in this paper:
$$f=\frac{1}{2}\left(\frac{\sum_{i=1}^{N}(x_{i}-y_{i})^{2}}{N}\right)^{0.5}$$
(4)
where $x_{i}$ is the expected output, $y_{i}$ is the actual output, and $N$ is the number of training samples. Two conditions are used as the termination criterion in this method: the first is when the maximum number of iterations has been reached, and the second is when the loss function falls below a certain constant. Either condition means that the most optimal reachable state has been achieved.
The architecture of the proposed method is i-6c-2s-12c-2s, where the number of feature maps in C1 is 6 and in C3 is 12. The kernel size for all convolution layers is 5x5, and the sub-sampling scale is 2. This is a simple CNN structure (LeNet-5), not a complex structure like AlexNet [8], SPP [6], or GoogLeNet [23]. In this paper, the architecture is designed for the MNIST dataset.
Technically, in the proposed method, the CNN computes the values of the biases and weights. These values ($x$) are used to calculate the loss function $f(x)$. The values of $x$ serve as the solution vector in MA, which is optimized by randomly adding a value $\Delta x$. Meanwhile, $f(x)$ serves as the potential energy $E_{p}$ in MA.
In the proposed method, $\Delta x$ is one of the important parameters. The accuracy is improved significantly by providing an appropriate value of $\Delta x$. As an example for one epoch, if $\Delta x=0.001\times rand$, the maximum accuracy is 87.60%, which is 5.21% greater than the original CNN (82.39%). However, if $\Delta x=0.0001\times rand$, the accuracy is 85.45%, only 3.06% greater than the original CNN.
Another important parameter of the proposed method is the neighborhood size. For example, in one epoch, if the neighborhood size is 5, 10, or 20, the accuracy values are 85.74%, 87.52%, and 88.06%, respectively, while the computation times are 98.06, 99.18, and 111.80 seconds.
Furthermore, the solution vector is updated based on the MA algorithm. Once the termination criterion has been reached, all biases and weights for all layers of the network are updated.
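The steps above can be sketched as one MA pass over the flattened weight vector; the toy quadratic loss stands in for the CNN loss of Eq. (4), and the parameter names ($\Delta x$, neighborhood size, kinetic energy) follow the text, but the values and function names here are illustrative:

```python
import numpy as np

def ma_train_step(x, e_k, loss, dx=0.001, neighborhood=10):
    """One MA optimization pass over a flattened weight vector x.

    `loss` plays the role of the potential energy E_p; `dx` and
    `neighborhood` are the perturbation size and neighborhood size
    discussed in the text. All names here are illustrative.
    """
    f_x = loss(x)
    for _ in range(neighborhood):
        candidate = x + dx * np.random.randn(x.size)   # random Delta-x
        new_f = loss(candidate)
        if new_f - f_x < e_k:          # microcanonical acceptance rule
            e_k -= new_f - f_x         # total energy f + E_k is conserved
            x, f_x = candidate, new_f
    return x, e_k

# toy objective standing in for the CNN loss over biases and weights
w = np.ones(4)
w, e_k = ma_train_step(w, e_k=100.0, loss=lambda v: float(np.sum(v ** 2)))
```

In the actual method the accepted vector would be written back into the CNN's per-layer weight and bias arrays once the termination criterion is met.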
5 Experiment and results
In this paper, two categories of experiments were conducted, based on the dataset: the first experiment used the MNIST dataset, and the second the CIFAR10 dataset. Example images from the MNIST dataset are shown in Figure 2 and from the CIFAR10 dataset in Figure 3.
5.1 Experiment using MNIST data set
The experiment for the MNIST data set was implemented in MATLAB-R2011a on Windows 10, on a PC with an Intel Core i7-4500u processor and 8 GB of RAM, with five experiments for each epoch. The base program for this experiment is the DeepLearn Toolbox from Palm [17]; in this research, its CNN code was modified with the MA algorithm.
In all experiments, the neighborhood size was set to 10, the maximum number of iterations (maxit) to 10, and the kinetic energy to 100. We also set the CNN parameters, i.e., the learning rate ($\alpha=1$) and the batch size (100).
On the MNIST dataset, all experimental results of the proposed method are compared with those of the original CNN. The results of CNN and of CNN based on MA are summarized in Table 1, for accuracy (A1, A2) and computation time (T1, T2), as well as in Figure 4 for error and Figure 5 for computation time.
In the case of 100 epochs, as shown in Figures 6 and 7, the accuracy of the original CNN is 98.65% and that of CNN by MA is 98.75%. The computation times of the two methods are 10731 seconds and 17090 seconds, respectively.
In general, the experiments conducted on the MNIST data set show that the proposed method is better than the original CNN for any given epoch. For example, in the second epoch, the accuracy of the original CNN is 89.06%, while that of CNNMA is 91.33%. The accuracy improvement of the proposed method over the original CNN varies with each epoch, ranging from 1.12% (CNNMA, 9 epochs) up to 4.60% (CNNMA, 1 epoch).
The computation time of the proposed method, compared to the original CNN, is in the range of $1.02\times$ (CNNMA, three epochs: 302.84/297.31) up to $1.38\times$ (CNNMA, eight epochs: 1062.11/768.24).
5.2 Experiment using CIFAR10 dataset
The experiment on the CIFAR10 dataset was conducted in MATLAB-R2014a on Ubuntu 14.04 LTS (64-bit), on a PC with an Intel Core i7-5820K processor, four GTX Titan X GPUs, 64.00 GB of DDR2 RAM, and a 240 GB hard disk. The base program is MatConvNet from [24]; in this paper, it was modified with the MA algorithm. The results can be seen in Fig. 8 for top-1 error and top-5 error.
The proposed method has proven very effective on the CIFAR10 dataset, with an accuracy of 99.6% for the last epoch in the top-1 error. Table II lists results from state-of-the-art approaches for comparison, including another work that fine-tunes CNNs using a metaheuristic algorithm, harmony search (HS) [21].
6 Conclusion
This paper proposed a metaheuristic, the Microcanonical Annealing algorithm, to optimize the Convolutional Neural Network. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that although MA requires more computation time, its accuracy is consistently better than that of the standard CNN without a metaheuristic. On the MNIST dataset, Microcanonical Annealing improves the accuracy of the Convolutional Neural Network for all variations of epochs, by up to 4.60%. The result obtained on the CIFAR10 dataset, an accuracy of 99.14% (top-1 error), indicates that the proposed method can compete with current state-of-the-art approaches (96.53%) in the field of image classification. For future study, finding the proper MA parameters needs to be investigated. Furthermore, application of the proposed method to other benchmark data sets needs to be explored, such as the MMI and CKP facial expression data sets, as well as ORL and ImageNet. We will also investigate the computation time of Microcanonical Annealing compared to Simulated Annealing, and examine the accuracy on the CIFAR-10 dataset using other GPU-based deep learning frameworks such as Torch, Theano, TensorFlow, and Keras with a larger number of iterations.
References
1.
S. T. Barnard.
Stereo matching by hierarchical, microcanonical annealing.
In Proceedings of International Joint Conferences on Artificial
Intelligence Organization.
2.
I. Boussaid, J. Lepagnot, and P. Siarry.
A survey on optimization metaheuristics.
Information Sciences, 237:82–117, 2013.
3.
El-Ghazali Talbi.
Metaheuristics From Design to Implementation.
John Wiley & Sons, Hoboken, New Jersey, 2009.
4.
G. Bhanot, M. Creutz, and H. Neuberger.
Microcanonical simulation of Ising systems.
Nuclear Physics, B235[FS11]:417–434, 1984.
5.
B. Graham.
Fractional max-pooling.
CoRR, abs/1412.6071, 2014.
6.
K. He, X. Zhang, S. Ren, et al.
Spatial pyramid pooling in deep convolutional networks for visual
recognition.
In: Proceedings of the ECCV.
7.
G. Hinton and R. Salakhutdinov.
Reducing the dimensionality of data with neural networks.
Science, 313:504–507.
8.
A. Krizhevsky, I. Sutskever, and G. E. Hinton.
Imagenet classification with deep convolutional neural networks.
in Proc. Advances in Neural Information Processing Systems 25,
Lake Tahoe, Nevada, 2012.
9.
Y. LeCun, K. Kavukcuoglu, and C. Farabet.
Convolutional networks and applications in vision.
in Proc. IEEE International Symposium on Circuit and Systems,
pages 253–256.
10.
K. S. Lee and Z. W. Geem.
A new meta-heuristic algorithm for continuous engineering
optimization: harmony search theory and practice.
Comput. Methods Appl. Mech. Engrg, 194:3902–3933, 2005.
11.
Li Deng and Dong Yu.
Deep Learning: Methods and Applications.
Foundation and Trends in Signal Processing, Redmond, WA 98052;
USA, 2013.
12.
J. Martens.
Deep learning via hessian-free optimization.
in Proc. The 27th International Conference on Machine Learning,
Haifa, Israel, 2010.
13.
D. Mishkin and J. Matas.
All you need is a good init.
CoRR, abs/1511.06422, 2015.
14.
M. M. Najafabadi et al.
Deep learning applications and challenges in big data analytics.
Journal of Big Data, pages 1–21, 2015.
15.
R. Oullette, M. Browne, and K. Hirasawa.
Genetic algorithm optimization of a convolution neural network for
autonomous crack detection.
Congress on Evolutionary Computation (CEC), pages 317–326.
16.
O. Vinyals and D. Povey.
Krylov subspace descent for deep learning.
in Proc. The 15th International Conference on Artificial
Intelligence and Statistics (AISTATS), La Palma, Canary Islands, 2012.
17.
R.B. Palm.
Prediction as a candidate for learning deep hierarchical model
of data.
(Master thesis), Technical University of Denmark, Denmark, 2012.
18.
L. M. R. Rere, M. I. Fanany, and A. M. Arymurthy.
Simulated annealing algorithm for deep learning.
Procedia Computer Science, 72:137–144, 2015.
19.
L. M. R. Rere, M. I. Fanany, and A. M. Arymurthy.
Metaheuristic algorithms for convolution neural network.
Computational Intelligence and Neuroscience, 2016:13, 2016.
20.
G. Rosa et al.
Fine-tuning convolution neural networks using harmony search.
Progress in Pattern Recognition, Image Analysis, Computer
Vision, and Applications, pages 683 – 690.
21.
G. Rosa, J. Papa, A. Marana, W. Scheire, and D. Cox.
Fine-tuning convolutional neural networks using harmony search.
Lecture Notes in Computer Science, 9423:683–690, 2015.
22.
J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller.
Striving for simplicity: The all convolutional net.
CoRR, abs/1412.6806, 2014.
23.
C. Szegedy, W. Liu, Y. Jia, et al.
Going deeper with convolutions.
In: Proceedings of the CVPR.
24.
A. Vedaldi and K. Lenc.
Matconvnet - convolutional neural networks for matlab.
25.
A. Vedaldi and K. Lenc.
Matconvnet - convolutional neural networks for matlab.
In Proceeding of the ACM Int. Conf. on Multimedia.
26.
Xin-She Yang.
Engineering Optimization: an introduction with metaheuristic
application.
John Wiley & Sons, Hoboken, New Jersey, 2010.
27.
Yoshua Bengio.
Learning Deep Architecture for AI, volume 2:No.1.
Foundation and Trends in Machine Learning, 2009.
28.
L. Zhang et al.
Deep learning for remote sensing image understanding.
Journal of Science (Editorial), pages 1 – 2.
29.
Y. Zhining and P. Yunming.
The genetic convolutional neural network model based on random
sample.
IJUNESST, pages 317–326.
Annual Progress Report 2003, Institute of Laser Engineering, Osaka University (2004), pp.147-150
A New Dynamical Domain Decomposition Method for Parallel Molecular Dynamics Simulation on Grid
Vasilii Zhakhovskii
Katsunobu Nishihara
Yuko Fukuda
and Shinji Shimojo
INTRODUCTION
The classical molecular dynamics (MD) approach is widely applied to the simulation of complex many-particle systems in many areas of physics, chemistry and biochemistry. Particles such as atoms and molecules interact with each other through a given force function. By step-by-step integration of Newton’s equations of motion, the trajectories of all particles are obtained. Useful information can then be extracted from the particle trajectories using a suitable averaging procedure.
Typically the interactions among the particles are described by short-range forces, hence MD simulation has a local spatial character: the motion of a given particle over a short period is determined only by the particles located in its neighborhood. This fact was used to develop successful parallel MD algorithms based on a static spatial domain decomposition (SDD) of the simulation area [1]. There are a few approaches to improve the load balancing of SDD [2],[3], which work well for simulations with a uniform density distribution and without significant flows of particles.
A large scale MD simulation requires a large number of high performance computers. Recently, Grid computing has been developed, which allows us to use many computers connected through a network. However, the performance of the computers and of the network may differ and may not be known before use. In addition, other users may run their programs on the same computer network without notice. If we use many computers located at different sites, some of the computers may be very busy at certain times because of other users. A dynamical domain decomposition method is therefore required to obtain good adaptive load balancing in heterogeneous computing environments such as the Grid. We have developed a new Lagrangian material particle dynamical domain decomposition method ($\mathrm{MPD}^{3}$) for large scale parallel MD simulation on a heterogeneous computing network, which results in a large reduction of elapsed time.
Recently the $\mathrm{MPD}^{3}$ method was applied to simulations of hydrodynamic problems such as the Richtmyer-Meshkov instability [4] and laser destruction of solids [5]. The flows of matter in both cases cause a great imbalance among processors under SDD. The new method is also applicable to simulations of any dynamical process with a strongly nonuniform density distribution accompanied by phase transitions, shock waves and cracks.
In this report we present a new $\mathrm{MPD}^{3}$ algorithm and its performance tested for real physical problems in various computing environments, such as PC clusters connected within LAN and super computers connected with Super SINET, with the use of Globus toolkit 2.0.
ALGORITHM
Let us consider many particles interacting with each other through short-range forces in an MD simulation box that is divided into subdomains. The MD simulation is carried out on many processors connected through networks, where the number of processors is $N_{p}$. Each processor calculates the MD dynamics of the particles belonging to one subdomain. We consider the following computing environment: other users’ programs and system programs are running on some of the $N_{p}$ processors, and the number of such programs may change during the simulation. Our main goal is to develop a successful algorithm that achieves a good load balance among the processors and thus reduces the elapsed time. It should be noted that the computational load of each processor may also change in time due to the dynamical processes in the subdomains. Therefore the load balance algorithm has to be iteratively adapted to the time-dependent computing environment and simulation dynamics.
We assume here that each simulation step in the algorithm can be divided into two parts, MD simulation of particle motion and exchange of particle data among the processors. The program measures both the CPU-dependent time and the elapsed time for each part. Here the CPU-dependent time is the time spent only on our simulation in each processor. If other programs run on some of the processors, the elapsed time becomes longer than the CPU-dependent time on those processors.
The normalized MD working time of the $i\!$-th processor $P(i)$ can be defined as
$$P(i)=\frac{t_{MD}(i)}{t_{eMD}(i)},\qquad 0<P(i)\leq 1$$
(1)
where $t_{MD}$ is the CPU-dependent time spent for the MD simulation without communication with other processors, and $t_{eMD}$ is the elapsed time for the MD simulation. The quantity $P(i)$ is useful to estimate how heavily other programs load the $i\!$-th processor. If there is no other program, it equals one.
Let us define the weighting factor $W(i)$ of the $i\!$-th CPU as
$$W(i)=\frac{t_{w}(i)}{P(i)t_{e}(i)},\qquad 0<W(i)\leq 1$$
(2)
Here $t_{w}$ is the CPU-dependent working time, and $t_{e}$ is the elapsed time, where both include the time spent on the MD simulation and on the communication. It is reasonable to suppose that the $i\!$-th CPU is in good balance with the $j\!$-th CPU if their weighting factors are almost equal: $W(i)\cong W(j)$.
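The two timing ratios follow directly from the four measured times; this is a minimal sketch of Eqs. (1) and (2), with the timing values chosen purely for illustration:

```python
def weighting_factor(t_md, t_e_md, t_w, t_e):
    """P(i) of Eq. (1) and W(i) of Eq. (2) from the four measured times:
    CPU and elapsed time for the MD part, and for the whole step."""
    p = t_md / t_e_md      # share of the processor the MD part actually got
    w = t_w / (p * t_e)    # weighting factor used for load balancing
    return p, w

# an otherwise idle machine: CPU time equals elapsed time, so P = W = 1
p, w = weighting_factor(t_md=2.0, t_e_md=2.0, t_w=3.0, t_e=3.0)
```

A processor that spends part of the elapsed time running other users' programs gets $P<1$, which raises its $W$ and marks it as busy for the balancing step.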
A simulation domain can be divided into $N_{p}$ simple subdomains, such as rectangular boxes, before the MD simulation, where $N_{p}$ equals the number of processors (CPUs). Each CPU calculates one subdomain. $N(i)$ is the number of particles in the $i\!$-th subdomain. We define the position of the center $\mathbf{R}(i)$ of the $i\!$-th subdomain as follows:
$$\mathbf{R}(i)=\frac{1}{N(i)}\sum_{k=1,N(i)}\mathbf{r}(k),$$
(3)
where $k$ denotes a particle number in the $i\!$-th subdomain, $\mathbf{r}(k)$ is a position of the $k\!$-th particle. For identical particles, $\mathbf{R}(i)$ is a center of mass of the $i\!$-th subdomain.
For each pair of the $i,j\!$-subdomains, we define a boundary plane between them at the midpoint
$\mathbf{R}_{1/2}(i,j)=(\mathbf{R}(i)+\mathbf{R}(j))/2$ and perpendicular to the connecting vector
$\mathbf{R}(i,j)=\mathbf{R}(i)-\mathbf{R}(j)$. After that, all of the particles near the boundary are associated with one of the $i,j\!$-subdomains. By repeating this procedure for all pairs, the simulation domain is finally divided into $N_{p}$ steady Voronoi polygons. The map of Voronoi polygons is known as the Dirichlet tessellation and is used for grid generation in computational fluid dynamics.
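The decomposition step can be sketched as a nearest-center assignment, which yields exactly the Voronoi (Dirichlet) tessellation implied by the MP centers, followed by the recentering of Eq. (3); the helper names are illustrative, and the recentering assumes every subdomain holds at least one particle:

```python
import numpy as np

np.random.seed(0)

def voronoi_assign(positions, centers):
    """Assign each particle to the subdomain whose center R(i) is nearest,
    i.e. the Voronoi (Dirichlet) tessellation implied by the centers."""
    # squared distances, shape (n_particles, n_centers)
    d2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def recenter(positions, owner, n_centers):
    """Eq. (3): move each center to the mean position of its particles
    (assumes every subdomain is non-empty)."""
    return np.array([positions[owner == i].mean(axis=0)
                     for i in range(n_centers)])

pts = np.random.rand(1000, 2)                   # particles in a unit box
centers = np.array([[0.25, 0.5], [0.75, 0.5]])  # two MP centers
owner = voronoi_assign(pts, centers)            # boundary lies at x = 0.5
new_centers = recenter(pts, owner, 2)
```

Iterating assignment and recentering until the owners stop changing reproduces the steady Voronoi polygons described in the text.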
For a system with a uniform number density, the area of each Voronoi polygon is the same and thus the number of particles is the same in each domain. Therefore the Voronoi decomposition gives a good balance only for a homogeneous computing network and a uniform density distribution. Nevertheless, the Voronoi decomposition is a good starting point for the MD simulation. If particle exchange and diffusion between the polygons are forbidden, the dynamical behavior of these polygons looks like the motion of Lagrangian particles from the hydrodynamic point of view. This is why we call the moving Voronoi polygon a material particle (MP).
To obtain a good load balance in heterogeneous computing environments, the MP centers should be adjusted time-dependently so that, for example, a busy computer calculates fewer particles than the others. We propose the following simple and efficient iterative algorithm to calculate the displacement of an MP center.
The displacement of the $i\!$-th MP center at the current $n$ simulation step can be evaluated as
$$\Delta\mathbf{R}^{n}(i)=\frac{aL(i)}{N_{n}(i)}\sum_{j=1,N_{n}(i)}(W(i)-W(j))\frac{\mathbf{R}(i,j)}{|\mathbf{R}(i,j)|},$$
(4)
where $N_{n}(i)$ is a number of neighbor MPs surrounding the $i\!$-th MP, $L(i)$ is a linear size of the $i\!$-th MP,
and $a\in[0,1]$ is an adjustable parameter of the method.
At the next MD step $(n+1)$ a new position of the $i\!$-th MP $\mathbf{R}^{n+1}(i)$ is given by
$$\mathbf{R}^{n+1}(i)=\mathbf{R}^{n}(i)+\Delta\mathbf{R}^{n}(i)$$
(5)
It is clear from Eqs.(4) and (5) that a good load balance may not be reached within a few steps. In addition, the well-balanced Voronoi decomposition may not exist for small numbers of CPUs $N_{p}$.
Nevertheless, as observed in our simulations with modest numbers of processors ($N_{p}=8,12,14,16$), well-balanced decompositions are achieved within a 10% imbalance $|W(i)-W(j)|$ between the fastest CPU and the slowest one.
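Eqs. (4) and (5) can be sketched as follows; `displace_center` and its arguments are illustrative names, and the value of the tuning parameter a is an arbitrary choice within the stated [0, 1] range:

```python
import numpy as np

def displace_center(i, centers, neighbors, w, size, a=0.5):
    """Eqs. (4)-(5): shift the i-th MP center according to the weighting
    factor differences with its neighbors.

    `neighbors` lists the MPs adjacent to i, `w` holds the weighting
    factors W, and `size` is the linear size L(i).
    """
    step = np.zeros_like(centers[i])
    for j in neighbors:
        r_ij = centers[i] - centers[j]                  # R(i,j)
        step += (w[i] - w[j]) * r_ij / np.linalg.norm(r_ij)
    return centers[i] + a * size / len(neighbors) * step

# a busy CPU (W = 1.0) next to a lightly loaded one (W = 0.5): the busy
# MP's center moves away from its neighbor, which shifts the midpoint
# boundary toward itself and hands particles over to the free CPU
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
new_c0 = displace_center(0, centers, neighbors=[1], w=[1.0, 0.5], size=1.0)
```

With these numbers the center moves from $(0,0)$ to $(-0.25,0)$, so the midpoint boundary with the neighbor shifts from $x=0.5$ to $x=0.375$ and the busy CPU's domain shrinks.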
The simplified $\mathrm{MPD}^{3}$ algorithm can be designed as follows:
0)
Initialization. Simple initial domain decomposition.
1)
Exchange particles between neighbor MPs according to the Voronoi method.
2)
Evaluate the new MP positions and iterate step 1) until every MP reaches a steady shape.
3)
Start the simulation. Exchange the MP positions, timing data, particles, etc. among the neighbor MPs. Evaluate a new desired position of the MP center using Eqs. (4) and (5).
4)
Advance MD integration step and measure all timing data:
$t_{MD}(i),t_{eMD}(i),t_{w}(i),t_{e}(i)$.
5)
Repeat steps 3) and 4).
It should be pointed out that the new $\mathrm{MPD}^{3}$ method shares features with the usual spatial decomposition method as well as with the particle decomposition algorithm [1]. In other words, the simulated particles are distributed among CPUs according to their positions in a dynamical clustered medium of subdomains/MPs that depends on the particle motion. A particle keeps its number and MP membership only for a relatively short period. To reduce cache misses, we renumber the particles inside an MP in some geometrical order, for instance using a rectangular mesh. It has been observed in our MD simulations that renumbered neighbor particles lie, on average, a short distance from each other in computer memory. The renumbering significantly improves the cache hit rate and CPU performance.
TESTING
The $\mathrm{MPD}^{3}$ method was implemented in a fully vectorized Fortran program with calls to standard MPI 1.1 subroutines. The $\mathrm{MPD}^{3}$ program was tested in various computing environments and on various physical problems.
We measured the performance of the $\mathrm{MPD}^{3}$ method in detail for two test problems, namely 1D and 2D decompositions, which demonstrate the applicability of the new method to computer clusters and Grid computing.
The first simple test is a 1D decomposition for MD simulation of a steady crystal bar at rest, using a PC cluster consisting of CPUs of different performance connected within a LAN: two dual-processor computers, 2-Xeon 2.2 GHz and 2-AMD 1.5 GHz, connected by 100 Mbps LAN, i.e. 4 CPUs in total. Communication is managed by the well-known free MPICH software (see http://www-unix.mcs.anl.gov/mpi/mpich/ ). This model task is intended to check the adaptability of $\mathrm{MPD}^{3}$ to a heterogeneous computing network. Figure 1 shows the initial decomposition on the top and the well-balanced final decomposition on the bottom. The corresponding timing data are presented in Fig.2. At the top, it can be seen how the MPs belonging to fast and slow CPUs exchange particles among themselves to achieve a good load balance. The second graph from the top shows the communication time between CPUs. Shared-memory communication is the fastest, and the line with the bandwidth of 100 Mbps also demonstrates very good performance, with a duration of less than 1% of the elapsed time shown at the bottom of Fig.2. It is interesting to note that the method is insensitive to a short random load lasting a few tens of simulation steps, as shown around 40 mdu.
Due to the adaptation process of the $\mathrm{MPD}^{3}$ method, the waiting time is reduced to half of its initial value by simulation step 50 (in MD units), as shown in the third graph from the top. The $\mathrm{MPD}^{3}$ method reduces the elapsed time from 9.1 sec/step to 6.5 sec/step.
The second test deals with MD simulations of a high-speed collision of two solid cylinders with different radii, using the $\mathrm{MPD}^{3}$ method. It is clear that a static domain decomposition would suffer a severe load imbalance because of the large flow velocity of the particles. We used two supercomputers, the NEC SX-5 of the Cybermedia Center at Osaka University and the SX-7 of the Information Synergy Center at Tohoku University, connected via Super-SINET with the use of Globus toolkit 2.0.
Figure 3 shows a set of snapshots from the beginning to the end of the simulation. Initially the simplest rectangular decomposition is chosen to divide the simulation domain among 7 SX-5 CPUs and 7 SX-7 CPUs, as shown in Fig.3.1. After 14 iterations, the steady Voronoi decomposition is established. At this moment the left cylinder consists of 8 MPs, of which 7 belong to SX-5 and 1 to SX-7, and the right cylinder consists of 6 MPs belonging to SX-7; see Fig.3.2. As already pointed out, this is the starting point of the MD simulation, after which the performance of each CPU is measured.
The $\mathrm{MPD}^{3}$ algorithm then attempts to balance the load between CPUs. In Fig.3.3, the thick black line denotes the boundary between SX-5 and SX-7 at the end of the 1st period, corresponding to (1) in Fig.4. At this time SX-5 at Osaka University is overloaded by other users while SX-7 at Tohoku University is less loaded. As indicated in the figure, this results in a reduction of the areas of the MPs belonging to SX-5. Two neighboring CPUs can be brought into balance by giving particles from the busy CPU to the free one.
The snapshot in Fig.3.4 shows a more or less uniform distribution of MP sizes. This reflects the fact that the simulation is carried out in exclusive usage mode on both computers, as shown in period (2) of Fig.4. The adaptation from the state of Fig.3.3 to that of Fig.3.4 takes about 500 simulation steps.
The next snapshot, in Fig.3.5, shows the adaptive deformation of the domains due to the two-body collision. It corresponds to the end of period (4) in Fig.4, when the simulation was performed in a local Grid environment on two different nodes of the NEC SX-5 at Osaka University connected by Gigabit Ethernet. The low-bandwidth line between the nodes still results in a long waiting time. Fig.3.6 shows a mass density map of the two bodies at the end of the simulation.
At the beginning of period (5) in Fig.4 the simulation program was recompiled for the standard MPI environment on a single node of the SX-5 machine. This results in a pronounced reduction of the CPU-dependent time. During period (6) in Fig.4, SX-5 works in exclusive usage mode. The waiting time becomes about $\sim 20\%$ of the CPU time/step and cannot be decreased further. Most probably the number of CPUs is too small to optimize the MP sizes within the frame of the Voronoi decomposition.
CONCLUSION
We have demonstrated that the $\mathrm{MPD}^{3}$ method is a highly adaptive dynamic domain decomposition algorithm for MD simulation on both PC clusters and Grid computing environments, even when other programs are running in the same environment. It has been shown that the well-balanced decomposition results from a dynamical Voronoi polygon tessellation, where each polygon is considered as a material or Lagrangian particle and its center is displaced to reach the minimum elapsed time with a good load balance. Our approach can be extended to other particle methods such as Monte Carlo, particle-in-cell, and smoothed particle hydrodynamics.
The $\mathrm{MPD}^{3}$ method works perfectly for 1D decomposition, but in the 2D case the load balance may depend on the geometrical configuration of the simulation problem and on the number of CPUs in use. We expect that $\mathrm{MPD}^{3}$ is especially suitable for large scale simulations with a large number of computers.
Although we did not check the parallel efficiency (scalability) of the $\mathrm{MPD}^{3}$ method, it should be nearly the same ($\sim 90\%$) as that of the SDD method on a uniform medium [1].
We are grateful to the Cybermedia Center at Osaka University and Information Synergy Center at Tohoku University for the organization of computer experiments.
References
[1]
S. Plimpton, J. Comput. Phys. 117, 1 (1995)
[2]
L. Nyland et al., J. Parallel and Distributed Computing 47, 125 (1997)
[3]
Y. Deng, R. F. Peierls, and C. Rivera, J. Comput. Phys. 161, 1 (2000)
[4]
V. Zhakhovskii, K. Nishihara, and M. Abe, in Inertial Fusion Science and Applications,
IFSA 2001 (Elsevier, Paris, 2002), pp. 106-110
[5]
S. I. Anisimov, V. V. Zhakhovskii, N. A. Inogamov, K. Nishihara, A. M. Oparin, and Yu. V. Petrov,
JETP Lett. 77, 606 (2003); see also the previous report.
The Message or the Messenger?
Inferring Virality and Diffusion Structure from Online Petition Signature Data
Chi Ling Chan (ORCID 0000-0003-4665-714X)1, Justin Lai (ORCID 0000-0002-2591-8673)1, Bryan Hooi (ORCID 0000-0002-5645-1754)2, Todd Davies (ORCID 0000-0001-9082-4887)1
1 Stanford University, Stanford CA, USA (callmechiling@gmail.com, jzlai@stanford.edu, davies@stanford.edu)
2 Carnegie Mellon University, Pittsburgh PA, USA (bhooi@andrew.cmu.edu)
Abstract
Goel et al. [14] examined diffusion data from Twitter to conclude that online petitions are shared more virally than other types of content. Their definition of structural virality, which measures the extent to which diffusion follows a broadcast model or is spread person to person (virally), depends on knowing the topology of the diffusion cascade. But often the diffusion structure cannot be observed directly. We examined time-stamped signature data from the Obama White House’s We the People petition platform. We developed measures based on temporal dynamics that, we argue, can be used to infer diffusion structure as well as the more intrinsic notion of virality sometimes known as infectiousness. These measures indicate that successful petitions are likely to be higher in both intrinsic and structural virality than unsuccessful petitions are. We also investigate threshold effects on petition signing that challenge simple contagion models, and report simulations for a theoretical model that are consistent with our data.
Keywords: petitions, virality, broadcast, diffusion
1 Introduction
This study infers the “virality” and diffusion structure of petitions by examining the temporal dynamics of petition signatures. Viral characteristics can be understood as the opposite of broadcast characteristics. Whereas a broadcast structure refers to large diffusion events in which a single source spreads content to a large number of people, viral diffusion refers to a cascade of sharing events each between a sender and their associates. Intuitively, a petition that exhibits virality is more likely to attract signatures, in part, through the intrinsic appeal of its message, whereas a petition that exhibits more broadcast characteristics is dependent on mass distribution by one or more well-connected senders (the messenger(s)) in order to gain signatures.
We used time- and location-coded signature data from the Obama White House’s We The People (WTP) petition site (We the People petitions from the Obama years are archived at https://petitions.obamawhitehouse.archives.gov). Whereas MoveOn.org and Change.org provide “continuous user engagement” through social media and email, WTP simply provides a static page for each petition, with standard sharing buttons. Since petitions on WTP are less likely to be broadcast, at least directly, than on other petition sites, we reasoned that they would be more likely to depend on person-to-person sharing, and hence viral characteristics, to reach the signature threshold for success [26]. We wished to infer how petitions were shared without direct access to diffusion data. The data we did have – about signatures and the success or failure of each petition – constitute the variables of greatest interest for petitions, and, as is usually the case, the true diffusion data were not observable because they involve an unknown amount of private communication (including emails, phone calls, and face-to-face dialogue). We therefore sought indirect ways to measure viral and broadcast components of diffusion.
2 Related Work
2.0.1 Virality
Recent literature has examined the virality of online petitions and their differences from other social media. Compared to other online activities, petition-signing typically requires personal endorsement/commitment before sharing. As such, researchers have noted that Twitter cascades about petitions exhibit more structural virality than those for news, pictures, and videos [14, 15]. Structural virality (SV) is a continuous measure that distinguishes between a single, large broadcast (low SV) and viral spread over multiple generations (high SV).
Structural virality is defined as the average distance between all pairs of nodes in a diffusion tree, a quantity that, for a given cascade size (number of Tweets and Re-Tweets), is minimized when all Re-Tweets are direct offspring of a single source Tweet (pure broadcast diffusion). It is maximized when each Re-Tweet is itself directly Re-Tweeted just once, indicating a string of successors influenced by each predecessor, rather than one predecessor with a great deal of influence. Goel et al. found little to no correlation between structural virality and popularity (cascade size) within any given type of shared content, including petitions (for which they found $r=.04$) [14].
2.0.2 Temporal Adoption Patterns
In the literature on diffusion and contagion, the dominant model posits an S-shaped cumulative adoption curve [9, 5, 37, 49, 42, 23], which exhibits an initial period of exponential growth that levels off when the population runs out of potential adopters (see Figure 1).
Whether this diffusion pattern applies in the online setting has been a question of interest, given the increasing prominence of online mobilization. Investigations of online network mobilizations have identified an S-shaped curve, with critical mass reached only after participants have responded to evidence from early participants [17]. However, evidence remains mixed. In a study that tracks the growth curves of 20,000 petitions on the government petitioning site in the United Kingdom, Yasseri et al. [48] noted that a petition’s fate is virtually set after the first 24 hours of its introduction, a finding echoed in a study [20] which tracked 8000 petitions on the UK petitioning site No. 10 Downing Street. The S-shaped period of stasis before reaching a critical point appears largely absent, consistent with most studies of online petitioning. These findings call into question the explanatory power of the S-curve model for online petitions. Given that research on online petitions is a relatively new area of study [46], however, there remains a lack of empirical calibration and external validity, a point acknowledged by authors of most of these studies [17, 20].
A second question has to do with the diffusion mechanism behind observed patterns. Empirical studies have largely interpreted the S-curve as evidence of social contagion [37, 5], but others suggest that the same curve could arise from broadcast distribution mechanisms such as mass media sharing [43]. It remains ambiguous whether viral diffusion is in fact the diffusion mechanism driving growth momentum, as is typically assumed in classical diffusion studies. Related research has shown that information cascades in online networks occur rarely, and studies of online petitions have found that the vast majority of signatures are dominated by a tiny fraction of massively successful petitions [48, 25, 19]. Events that ‘go viral’ are an exception rather than the rule [14]. Untangling broadcast from viral diffusion mechanisms, however, has been difficult largely because these studies have based themselves on aggregate diffusion data.
Recent studies have been able to overcome this limitation by analyzing cascade structures directly. Goel et al. analyzed cascades from a billion diffusion events on Twitter and offered a fine-grained analysis of how viral and broadcast diffusion interact [14]. They found that large diffusion events exhibit extreme diversity of structural forms, and demonstrate various mixes of viral and broadcast diffusion, such that the S-curve is but one of many combinations. Extending Goel’s analysis of viral and broadcast diffusion, we might ask related questions about how patterns develop over time: When do viral and/or broadcast diffusion set in, and how do they combine to generate the adoption patterns observed? Hale et al. observed that signatures are typically gathered via “punctuated equilibria,” specific points that trigger large cascades within a short time, resulting in leptokurtic distributions (characterized by sharp peaks in signature counts) [20], which suggest broadcast events.
2.0.3 Goal Thresholds
A related but less well-studied aspect of online petitioning and diffusion has to do with the effects of threshold requirements that define petition success. Signature thresholds are a typical feature in most online petitions and are crucial for goal-setting by campaigners. Social psychology studies demonstrate that people invest greater effort as they approach a goal [29, 8, 30], a phenomenon known as the goal-gradient hypothesis, first described by behaviorist Clark Hull (1934), who observed rats running faster as they approached a food reward in a maze [22]. For online petitions, there is suggestive evidence in Hale et al. [19] for threshold effects at the 500-signature mark (the minimum required for an official response on No. 10 Downing Street), but it remains unclear whether this can be extrapolated to other petition platforms such as WTP.
3 Data and Methods
3.1 WTP petitions data
In this observational study, we relied on an aggregated adoption dataset of 3682 publicly searchable petitions using the public API of WTP.gov, the official online petitioning platform of the White House. These petitions were created within a time period that began from the inception of the platform on September 20, 2011 and ended on March 30, 2015, as our study commenced.
Two critical thresholds for We the People should be noted. First, to be publicly listed and searchable within the site, a petition had to reach 150 signatures within 30 days. Second, to cross the second threshold for review by the White House and be distributed to policy officials for an official response, a petition had to obtain 100,000 signatures (25,000 until January 2013) within 30 days. Responses were posted and linked to the petition on WhiteHouse.gov, and emailed to all petition signers. All petitions in our dataset crossed the 150 signatures threshold, and were publicly listed (the API allowed the retrieval of only petitions that were publicly searchable).
3.2 Variables in signatures and petitions dataset
The WTP API provided data on both petitions and their individual signatures submitted via the petition site. Data on petitions included timestamp of creation, the body of text that campaigners submitted, status at the time of the API query (open/pending response/responded/closed), and the signature count. Data on signatures included the time a signature was submitted, as well as geographical details (state, Zipcode if the signature was from the U.S., country, city) of the signatory.
From the API, we assembled a dataset comprising all signatures and petitions and organized them into the following variables: 1) petition ID; 2) signature ID; 3) Unix timestamp of signature; and 4) Zipcode of signatory. This was then merged with a dataset of petitions containing the following data: 1) petition ID; 2) petition title; 3) petition description; 4) signature count; 5) signature status; and 6) Unix timestamp of creation (Unix timestamps were recoded to reflect number of days since a petition’s creation). (Our data are available at https://github.com/justinlai/petitiondata.)
For this study, a petition was considered successful if it reached the 100,000 (or 25,000) signature threshold necessary for White House review. (For an alternative perspective on e-petition “success,” see Wright (2016) [47].) Using this criterion, a large majority (98.4%) of petitions failed, and only 1.6% of all visible petitions reached the 100,000 threshold. This success rate is consistent with the predicted pattern that only a small fraction of campaigns eventually succeed [48].
3.3 Some measures of virality
Because we could not observe the diffusion network of a petition on WTP, we could not use the same measure of structural virality (SV) used by Goel et al. [14]. Furthermore, as they note, their concept of SV may have no relationship to the concept of “infectiousness,” or the probability that, in this case, a recipient of a petition announcement will sign the petition. (Indeed, for some other types of content, such as the spread of memes, initial infectiousness appears not to be a good predictor of later success [39, 45].) Therefore, we propose indirect measures of SV, and we distinguish SV from infectiousness, which we call intrinsic virality (IV). An indirect measure of SV is the exceed ratio, while IV may be assessed through the first-day, second-day signature comparison.
3.3.1 Exceed Ratios
The exceed ratio is a measure indicating the contribution of temporal peaks to a petition’s total signature count. A temporal peak is a period (e.g. day or hour) whose signature count exceeds those of both adjacent periods of the same duration. For every temporal peak, we calculate the signatures received in that peak period minus the number received in either the preceding or the following period, whichever is larger. The total exceed ratio $E_{Tot}$ is the sum of these differences across all temporal peaks, divided by the total signature count. Notationally, $E_{Tot}$, for a given petition over $T$ time periods, in which $S(i)$ signatures are obtained in period $i$, and $L$ refers to the set of all peak periods within $T$, is thus defined as:
$$E_{Tot}=\frac{\sum_{i\in L}\left(S(i)-\max[S(i-1),S(i+1)]\right)}{\sum_{i=1}^{T}S(i)}$$
The total exceed ratio is a measure of broadcast-ness, and therefore an inverse measure of structural virality. Broadcast content is more likely to rely on large diffusion events in which a period’s signature count is larger than those in its adjacent periods, whereas viral content would likely have fewer such events.
We can also calculate a global-peak-only exceed ratio $E_{GPO}$ by dividing the adjacent-periods signature difference for just the global peak period by total signatures, as an indication of the effect of the largest broadcast event.
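As a concrete illustration, both exceed ratios can be computed from a petition’s per-period signature series. The following sketch is ours (not from the paper’s codebase); in particular, it treats periods outside the series as having zero signatures, an edge-case convention the definitions above leave open:

```python
def exceed_ratios(s):
    """Return (E_Tot, E_GPO) for a per-period signature series s.

    A temporal peak is a period whose count exceeds both adjacent
    periods; periods outside the series are treated as 0 (our
    edge-case convention).
    """
    n, total = len(s), sum(s)
    if total == 0:
        return 0.0, 0.0

    def neighbor_max(i):
        left = s[i - 1] if i > 0 else 0
        right = s[i + 1] if i < n - 1 else 0
        return max(left, right)

    peaks = [i for i in range(n) if s[i] > neighbor_max(i)]
    # E_Tot: summed peak excesses over all peaks, as a share of total.
    e_tot = sum(s[i] - neighbor_max(i) for i in peaks) / total
    # E_GPO: the same excess, for the global peak period only.
    g = max(range(n), key=lambda i: s[i])
    e_gpo = (s[g] - neighbor_max(g)) / total if g in peaks else 0.0
    return e_tot, e_gpo
```

For example, the daily series [10, 2, 1, 5, 1] has peaks in periods 1 and 4, giving $E_{Tot}=12/19$ and $E_{GPO}=8/19$.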
3.3.2 First-Day, Second-Day Signature Comparison
The first-day, second-day signature comparison (FDSD) for a given petition asks whether the number of signatures received on the first day is exceeded by those on the second day. We posit this as a simple measure of intrinsic virality (IV, or infectiousness, i.e., intrinsic message appeal for signing and sharing), on the assumption that a petition with high IV will be more likely both to be passed on and to be signed by recipients than will one with lower IV. The first day is the best day to make this follow-on comparison if we assume petitions are most likely to be announced in broadcast events on their first day. If this is the case, then FDSD provides the best standard comparison across petitions, which on other days are likely to vary in whether or not they are broadcast. Since many petitions gain no traction after the first day, the FDSD also allows us to capture that lack of enthusiasm for the largest number of petitions under study.
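The comparison itself reduces to a one-line check; this sketch (function name ours) makes the convention explicit:

```python
def fdsd(s):
    """First-day, second-day comparison on a daily signature series s:
    True when day-2 signatures exceed day-1 signatures (our proxy for
    high intrinsic virality); False for petitions with under two days
    of data."""
    return len(s) >= 2 and s[1] > s[0]
```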
4 Results
4.1 Overall adoption patterns
Overall, a total of 24.5 million signatures were collected by 3682 petitions in our dataset, of which 59 (1.6%) reached the 100,000 signature threshold required for a response from the White House. Successful petitions garnered 31.8% of total signatures.
Each petition’s signature data can be plotted as an adoption curve, showing the number of signatures reached day by day. The figures in Appendix A show temporal signature histograms for randomly chosen successful and unsuccessful petitions, to give a sense of how these look. The figures in Appendix B show aggregated cumulative adoption patterns and the 30-day temporal thresholds for all the petitions (successful and unsuccessful).
4.2 Structural virality vs. broadcast events
We found that unsuccessful petitions have a 47.4% higher average total exceed ratio $E_{Tot}$ than successful petitions in the daily distribution of signatures. For the hourly distribution of signatures, this rises to 55.4% ($p<.0001$ for both comparisons by a two-tailed t-test). This suggests that successful petitions are less dependent on broadcast events (peaks) for growth, and therefore higher in structural virality.
The daily global-peak-only exceed ratio $E_{GPO}$ for successful petitions was 0.105 (sd=.11), and for unsuccessful ones was 0.155 (sd=.19). This appears to conflict with the statement by Goel et al. that “if popularity is consistently related to any one feature, it is the size of the largest broadcast,” since in our data the indicator of a larger single broadcast is higher for unsuccessful (therefore less popular) petitions ($p=.042$ by a two-tailed t-test). (In footnote 10 on p. 187, Goel et al. clarify that this statement applies to normalized and not just to absolute size [14].)
4.3 Intrinsic virality (infectiousness)
Among the 59 successful petitions, 68% had more signatures in their second day than their first day, but among the 3623 unsuccessful petitions, this percentage was only 38%. Thus, there is a clear relationship between a petition’s success and having more signatures in its second than its first day ($\chi^{2}$ test $p$-value $<10^{-5}$).
This finding agrees with our earlier intuition that poorly performing petitions tend to receive an initial burst of signatures but then decay quickly. Moreover, this measure is particularly interesting as it only relies on the first two days, and thus can be used as an early indicator of whether a petition is likely to succeed. By the reasoning in subsection 3.3, we infer that successful petitions are higher in intrinsic virality than are unsuccessful ones. This may seem like an obvious truth, but it contradicts influential models of diffusion which imply that “the largest and most viral cascades are not inherently better than those that fail to gain traction, but are simply more fortunate” [14] (citing [44]).
4.4 Additional measures: viral and broadcast diffusion across all petitions
We can also analyze our dataset as a whole, without distinguishing between petitions that do and do not pass the success threshold. Additional measures of interest for looking at virality across all petitions are described in Table 2. A petition’s ‘global peak’ is the day during which it received the most signatures (where day 1 is the day the petition was introduced). This is another indirect way to measure intrinsic virality, on the assumption that a petition with more appeal will be more likely to grow in its signature count rather than die out, and hence would be expected to have a later global peak. The dependent variable, $total$, is the total number of signatures a petition acquires over the 60 day period. Here we will refer to more and less popular petitions to indicate numerical differences, rather than the binary successful/unsuccessful categories.
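For concreteness, the shape measures just described (global peak day, number of local peaks, skewness, kurtosis, and total) can be computed from a daily series as in the sketch below. This follows our own conventions (signature-count-weighted moments over signing days, excess kurtosis, out-of-range neighbors treated as zero), which may differ in detail from the definitions behind Table 2:

```python
import numpy as np

def shape_measures(s):
    """Shape measures for a daily signature series s (a sketch).

    Skewness and kurtosis describe the distribution of signing days,
    weighted by the signature count on each day.
    """
    s = np.asarray(s, dtype=float)
    days = np.arange(1, len(s) + 1)
    total = s.sum()
    mean = (days * s).sum() / total
    var = (((days - mean) ** 2) * s).sum() / total
    skewness = (((days - mean) ** 3) * s).sum() / total / var ** 1.5
    kurt = (((days - mean) ** 4) * s).sum() / total / var ** 2 - 3  # excess
    # Local peaks: days exceeding both neighbors (0 outside the series).
    peaks = [i for i in range(len(s))
             if s[i] > (s[i - 1] if i > 0 else 0)
             and s[i] > (s[i + 1] if i < len(s) - 1 else 0)]
    return {
        "global_peak_day": int(days[np.argmax(s)]),
        "num_local_peaks": len(peaks),
        "skewness": float(skewness),
        "kurtosis": float(kurt),
        "total": int(total),
    }
```

A front-loaded series such as [10, 2, 1, 5, 1] then comes out right-skewed (a long right tail of later signatures) with negative excess kurtosis.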
4.4.1 Regression Results
We perform linear regressions of the number of signatures over our measures in Table 2. The results are shown in Table 3.
As shown in Columns $1$, $3$ and $4$ of Table 3, we find that skewness and kurtosis are also significantly correlated with signature count. Under all three model specifications, petitions with right-skewed distributions (i.e. larger right tails) tend to end up with more signatures (which indicates intrinsic virality, i.e. more signatures late in the process due to more people signing and passing on the petition), as do petitions with lower kurtosis (i.e. less sharp peaks). This latter finding would be predicted by higher structural virality, since kurtosis, like our exceed ratio, is an indicator of broadcast events (the inverse of structural virality).
As shown in column $2$ of Table 3, linear regression suggests that on average, a petition that peaks $1$ day later ends up with $262.89$ more signatures ($p\approx 1.3\times 10^{-9}$). The relationship remains about as strong, and still highly significant, when we control for the number of local peaks, and the skewness and kurtosis of the petition’s temporal distribution (column $3$) as well as when we replace the dependent variable by its logarithm (to ensure that petitions with large signature counts do not excessively influence the fitted coefficients).
Hence, we find that petitions with later global peak days tend to end up with more signatures than petitions with earlier peaks, which we take to be an indication of intrinsic virality. This finding is also illustrated in Figure 2, in which petitions are separated into those with global peaks on day $1$, day $2$, and so on; we observe that the mean number of signatures broadly increases as the peak gets later, which agrees with the regression findings.
As noted above, more popular petitions have more local peaks. However, further analysis reveals that this phenomenon occurs mainly because less popular petitions receive very few signatures after day $30$ (as discussed in our threshold section) particularly when they do not reach the $100,000$ mark, and hence have fewer local peaks. Indeed, when we consider only days $1$ to $30$ and perform another regression of $\log(total)$ against $num\_local\_peaks$, the regression coefficient becomes much smaller and no longer statistically significant (coefficient of $-0.009,p=0.437$).
4.5 Threshold Effects
Our analysis of petition adoption curves reveals: (a) a goal-gradient threshold effect, i.e. petitions start to receive fewer signatures after reaching the $100,000$ signatures mark; and (b) a temporal threshold effect, i.e. if a petition becomes $30$ days old without receiving $100,000$ signatures, it suddenly starts to receive fewer signatures. (See the figures in Appendix B.) We take this as evidence that WTP site users are paying attention to the social context when they decide to sign, which is at odds with a “simple contagion” model in which the probability of signing as a result of each new communication remains the same (see, e.g. [6, 14, 18]).
5 A Theoretical Model
In this section we describe a model that explains the observed signatures as a mixture of broadcasts (which induce a group of users to sign the petition, e.g. a news broadcast) and viral spreading of the petition from people who have just signed it to new users. Under this model, at each time step, a petition has a small probability of a broadcast occurring. Each broadcast brings in a number of users, where the number is drawn from a log-normal distribution, which allows for large variance in broadcast sizes similar to what we observe in actual data. Naturally, this distribution can be replaced by any appropriate distribution in other applications depending on the researcher’s prior beliefs.
At the same time, viral spread is happening constantly: each user who has just signed the petition has a small probability of spreading the petition to each other user who has not signed it yet. The strength of the viral spread can be parametrized by the basic reproduction number $R_{0}$, the average number of people that each signer spreads the petition to in a completely susceptible population. In addition, since we observe in the real data a fairly constant and low ‘background’ level at which users sign the petition, we also add a similar low background probability for each user in the population to sign the petition at each time step, independent of the existing broadcast and viral mechanism.
For our simulations, the probability of broadcasts was chosen to give an average of $3$ broadcasts per petition, with broadcast size following a log-normal distribution: if $X$ is a broadcast size, then $\log X\sim\mathcal{N}(\mu,\sigma^{2})$, where we use $\mu=5,\sigma=1.5$. There is always at least one broadcast, and the first broadcast occurs on day $1$. For viral spread, the initial susceptible population is set at $10000$, and $R_{0}$ is chosen from a uniform distribution between $0.7$ and $1.9$. The background level is set such that each user who has not signed the petition has a $0.002$ chance of signing it at each time step.
In this model, $R_{0}$ could be thought of as what we have called “intrinsic virality” (IV) of the message, in that it varies across petitions and measures the likelihood of signing and passing along that petition across all recipients in the population. The average broadcast size, by contrast, is assumed to be the same for each petition, since we assume that to be a feature of the population (the messengers) rather than of the petition itself.
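A minimal simulation of one petition under this model might look as follows. Parameter values follow the text (expected $3$ broadcasts with the first on day $1$, log-normal broadcast sizes with $\mu=5,\sigma=1.5$, population $10000$, $R_{0}$ uniform on $[0.7,1.9]$, background probability $0.002$); the remaining implementation details, such as approximating the per-susceptible infection probability by $R_{0}/\mathrm{pop}$ times the previous day’s signers (capped at 1), are our assumptions:

```python
import numpy as np

def simulate_petition(days=60, pop=10000, seed=0):
    """Simulate daily signatures for one petition under the
    broadcast + viral + background model (a sketch)."""
    rng = np.random.default_rng(seed)
    r0 = rng.uniform(0.7, 1.9)      # intrinsic virality of this petition
    p_spread = r0 / pop             # per-contact transmission probability
    susceptible = pop
    daily = np.zeros(days, dtype=int)

    # First broadcast on day 1; ~2 more expected over the window.
    broadcast_days = {0} | set(np.flatnonzero(rng.random(days) < 2 / days))

    new_signers = 0
    for t in range(days):
        signed_today = 0
        if t in broadcast_days:
            size = min(int(np.exp(rng.normal(5, 1.5))), susceptible)
            signed_today += size
            susceptible -= size
        # Viral spread from the previous day's signers: each susceptible
        # user is reached with prob. ~ p_spread * new_signers (capped at 1).
        if new_signers and susceptible:
            p = min(1.0, p_spread * new_signers)
            infected = int(rng.binomial(susceptible, p))
        else:
            infected = 0
        signed_today += infected
        susceptible -= infected
        # Background signing, independent of broadcasts and viral spread.
        bg = int(rng.binomial(susceptible, 0.002))
        signed_today += bg
        susceptible -= bg
        daily[t] = signed_today
        new_signers = signed_today
    return daily, r0
```

Repeating this for many seeds yields a population of simulated adoption curves whose shape measures can be regressed against totals, as in the next subsection.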
5.0.1 Replicating Empirical Findings on Shape and Success
Are the empirical findings from subsection 4.4 relating the shape of a petition’s adoption curve to its popularity also present in the simulations? If they are, this provides a possible explanation for the empirical findings; if not, the discrepancies suggest a way of improving the model. To answer this question, we simulate $5000$ petitions using the broadcast and viral model, and do a linear regression of the logarithm of the total number of signatures received by a petition against the measures of shape used earlier (in Table 3). As before, we use the logarithm of the number of signatures as the response variable to prevent outliers from excessively influencing the fit. Table 4 shows the regression coefficients when using these simulations (col. 1) compared to the original regression coefficients for the actual data (col. 2). All the coefficients for the regression on simulated data are significant, with the same sign as in the original petition data. Most are of fairly similar magnitudes, with the exception of num_local_peaks, which has a stronger effect in the actual data. However, we observed earlier that this variable is significant in the actual data largely as an artifact of the long runs of zeros for less successful petitions.
Since the simulations are based on a simple model, we can explain these regression findings. Petitions with low $R_{0}$ peak early when they receive an initial broadcast but then lose momentum extremely quickly due to the lack of strong viral spread; hence, they have earlier global peaks, short right tails, and a highly peaked distribution. Petitions with high $R_{0}$ accumulate signatures more gradually due to having stronger viral spread, then lose momentum gradually as the population runs out of users who have not signed the petition, thus having later global peaks, larger right tails, and a less peaked distribution. Since petitions with high $R_{0}$ end up with more total signatures, these account for the regression coefficients.
This does not necessarily imply that the same effects are present in the actual data. But these simulations provide a plausible explanation for the empirical findings that more successful petitions have later global peaks, more skewed and less peaked distributions. The simulation results suggest that under a simple model of viral spread, as long as different petitions have varying values of $R_{0}$ (i.e. rate of viral spread), we should expect correlations between total number of signatures and these measures of petition shape. As we have observed in our simulations, this is because higher $R_{0}$ petitions have different characteristic shapes than low $R_{0}$ petitions, and also end up with more signatures.
6 Discussion and Future Work
We have studied the temporal dynamics of adoption and diffusion patterns in online petition-signing, in order to understand what makes petitions gain traction and growth momentum. In this final section, we return to the questions that motivated this study and discuss theoretical and practical implications of our observations and modeling.
While Goel et al. noted that there is a very weak correlation between popularity and structural virality (the average distance between nodes) for petition sharing on Twitter [14], our research finds that our measures of intrinsic virality (the first-day, second-day comparison, skewness, and global peak day) are highly predictive of petition popularity/success. Our measures of exceed ratio and kurtosis further indicate that threshold-successful and/or more popular petitions are higher in structural virality. Intrinsic virality we take to be a property of a petition’s message, whereas structural virality is the inverse of broadcast-ness, which we take to be a feature of its messengers.
The fact that intrinsic virality, which is based on the appeal of the message rather than how it is spread, appears to predict success for petitions on We the People is a partial answer to a question posed by Goel et al. about the relationship between structural virality and what they call “infectiousness” (intrinsic virality). They look at different models in which infectiousness is assumed to be either fixed or varying between different messages (petitions, in our case), and remain uncommitted about which one better describes real data. Our results for the first-day, second-day comparison and other intrinsic virality measures argue that not all messages are created equal, and that early indications of a message’s intrinsic appeal, before the diffusion structure has had a chance to be determined, are correlated with eventual success for petitions. Two other findings challenging whether the assumptions of Goel et al. apply to WTP are that (a) the global-peak-only exceed ratio for daily signature totals was lower for successful than for failed petitions on We the People, which indicates that the largest (relative) broadcast event may not be the best predictor of petition popularity/success; and (b) signatures on WTP exhibit a strong threshold effect (consistent with the goal-gradient hypothesis), which is at odds with a simple contagion model, of which theirs is a special case ([14], p. 189, footnote 14).
Regression analysis over all the petitions, in subsection 4.4, indicated that more successful petitions exhibit: 1) a later global peak; 2) a larger right tail (positive skewness); 3) a less peaked distribution (lower kurtosis). Features 1 and 2 are indicative of intrinsic virality (IV), while feature 3 is indicative of structural virality (SV). Petitions that under-perform often experience early bursts of momentum at the outset, but the decay of such spikes is usually rapid. Based on our simulations, we find that a simple model combining broadcasts and viral diffusion, in which different petitions have different strengths of viral diffusion ($R_{0}$, analogous to IV), can account for these three findings, primarily due to the different characteristic shapes of high and low $R_{0}$ petitions.
Our FDSD variable finding revealed that a petition is likely to fail if the number of signatures gathered on its second day is lower than its first day, indicating it has low intrinsic virality. But previous research has shown the first day alone to be a very important predictor of petition success [20, 48]. This could be taken as evidence for the importance of the initial broadcast event, in addition to structural and intrinsic virality, for petition popularity. Absent an effective broadcast, it is highly unlikely that a viral effect will set in and bring about the necessary momentum for growth in support.
The WTP data contain location-stamped as well as time-stamped signatures. This opens the possibility of further testing for geographical diffusion effects. An initial attempt to predict petition success from the average land distance between Zipcodes in adjacent signature pairs did not uncover a difference between successful and unsuccessful petitions. However, this may be because the overall average distance confounds diffusion distance between signers (which might be lower for viral transmission) with the fact that more popular petitions are likely eventually to find an audience across larger distances than are less popular ones that fizzle early. This question awaits a good measure that can disentangle these potentially opposing effects.
7 Acknowledgements
We wish to thank Marek Hlavac for technical assistance, and Lee Ross and Howard Rheingold for timely and valuable feedback on an earlier version of this work (which was submitted by the first author as her masters thesis [7]), as well as three anonymous reviewers for their helpful comments.
Appendix A Signature Graphs for Individual Petitions
Appendix B Aggregated Temporal Signature Graphs
References
[1]
Adar, E., & Adamic, L. (2005). Tracking information epidemics in blogspace. In IEEE/WIC/ACM International Conference on Web Intelligence. IEEE Computer Society, Compiegne University of Technology, France.
[2]
Bandura, A., & Cervone, D. (1986). Differential engagement of self-reactive influences in cognitive motivation. Organizational behavior and human decision processes, 38(1), 92-113.
[3]
Bakshy, E., Hofman, J. M., Mason, W. A., & Watts, D. J. (2011). Everyone’s an influencer: quantifying influence on twitter. In Proceedings of the fourth ACM international conference on Web search and data mining (pp. 65-74). ACM.
[4]
Bakshy, E., Karrer, B., and Adamic, L. (2009). Social influence and the diffusion of user-created content. In Proceedings of the tenth ACM conference on Electronic commerce. Association of Computing Machinery, 325-334.
[5]
Bass, F. M. (1969). A new product growth for model consumer durables. Management Science, 15(5), 215-227.
[6]
Centola, D., Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology 113(3), 702-734.
[7]
Chan, C. L. (2015). Temporal dynamics of adoption and diffusion patterns in online petitioning. M.S. thesis, Stanford University.
[8]
Cheema, A., & Bagchi, R. (2011). The effect of goal visualization on goal pursuit: Implications for consumers and managers. Journal of Marketing, 75(2), 109-123.
[9]
Coleman, J., Katz, E., & Menzel, H. (1957). The diffusion of an innovation among physicians. Sociometry, 253-270.
[10]
Cryder, C. E., Loewenstein, G., & Seltman, H. (2013). Goal gradient in helping behavior. Journal of Experimental Social Psychology, 49(6), 1078-1083.
[11]
Dodds, P. S., & Watts, D. J. (2005). A generalized model of social and biological contagion. Journal of Theoretical Biology, 232(4), 587-604.
[12]
Gladwell, M. (2002). The tipping point: How little things can make a big difference. Little, Brown and Company
[13]
Gleeson, J.P., Cellai, D., Onnela, J.-P., Porter, M.A., Reed-Tsochas, F. (2013). A simple generative model of collective online behaviour. arXiv preprint arXiv:1305.7440
[14]
Goel, S., Anderson, A., Hofman, J., & Watts, D. (2016). The structural virality of online diffusion. Man. Sci., 62(1), 180-196.
[15]
Goel, S., Watts, D. J., & Goldstein, D. G. (2012). The structure of online diffusion networks. In Proceedings of the 13th ACM conference on electronic commerce (pp. 623-638). ACM.
[16]
Goldenberg, Jacob, Barak Libai, and Eitan Muller. (2001). ”Talk of the network: A complex systems look at the underlying process of word-of-mouth.” Marketing letters 12.3: 211-223.
[17]
Gonzalez-Bailon, S., Borge-Holthoefer, J., Rivero, A., Moreno, (2011). The dynamics of protest recruitment through an online network. Scientific Reports 1(197).
[18]
Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology 83, 6, 1420-1443
[19]
Hale, S. A., John, P., Margetts, H. Z., & Yasseri, T. (2014). Investigating Political Participation and Social Information Using Big Data and a Natural Experiment. In APSA 2014 Annual Meeting Paper.
[20]
Hale, S. A., Margetts, H., & Yasseri, T. (2013). Petition growth and success rates on the UK No. 10 Downing Street website. In Proceedings of the 5th Annual ACM Web Science Conference (pp. 132-138). ACM.
[21]
Heath, C., Larrick, R. P., & Wu, G. (1999). Goals as reference points. Cognitive psychology, 38(1), 79-109.
[22]
Hull, C. L. (1934). The rat’s speed-of-locomotion gradient in the approach to food. Journal of Comparative Psychology, 17(3), 393.
[23]
Iyengar, R., Van Den Bulte, C., and Valente, T. W. (2010). Opinion leadership and social contagion in new product diffusion. Marketing Science.
[24]
Jones, B. D., & Baumgartner, F. R. (2005). The politics of attention: How government prioritizes problems. University of Chicago Press.
[25]
Jungherr, A., & Jürgens, P. (2010). The Political Click: Political Participation through E?Petitions in Germany. Policy & Internet, 2(4), 131-165.
[26]
Karpf, David. (2017). Analytic Activism. Digital Listening and the New Political Strategy. Corby: Oxford University Press.
[27]
Kempe, D., Kleinberg, J., and Tardos, E. (2003). Maximizing the spread of influence through a social network. In 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association of Computing Machinery.
[28]
Kitsak, M., Gallos, L. K., Havlin, S., Liljeros, F., Muchnik, L., Stanley, H. E., & Makse, H. A. (2010). Identification of influential spreaders in complex networks. Nature Physics, 6(11), 888-893.
[29]
Kivetz, R., Urminsky, O., & Zheng, Y. (2006). The goal-gradient hypothesis resurrected: Purchase acceleration, illusionary goal progress, and customer retention. Journal of Marketing Research, 43(1), 39-58.
[30]
Koo, M., & Fishbach, A. (2012). The small-area hypothesis: Effects of progress monitoring on goal adherence. Journal of Consumer Research, 39(3), 493-509.
[31]
Leskovec, J., Singh, A. and Kleinberg, J. (2006). Patterns of influence in a recommendation network. Advances in Knowledge Discovery and Data Mining, 380-389.
[32]
Lin, Y.-R., Margolin, D., Keegan, B., Baronchelli, A., Lazer, D. (2013). : #bigbirds never die: Understanding social dynamics of emergent hashtags.
[33]
Locke, E. A. (1968). Toward a theory of task motivation and incentives.Organizational behavior and human performance, 3(2), 157-189.
[34]
Lopez-Pintado, D. and Watts, D. (2008). Social influence, binary decisions and col-
lective dynamics. Rationality and Society 20, 4, 399-443
[35]
Margetts, H.Z, John, P., Escher, T., Reissfelder, S. (2011). Social information and political participation on the Internet: An experiment. European Political Science Review 3(3), 321-344.
[36]
Margetts, H.Z., John, P., Hale, S.A., Reissfelder, S. (2013): Leadership without leaders? starters and followers in online collective action. Political Studies
[37]
Rogers, E. (1962). Diffusion of innovations. Free Press
[38]
Rothkopf, E. Z., & Billington, M. J. (1979). Goal-guided learning from text: inferring a descriptive processing model from inspection times and eye movements. Journal of educational psychology, 71(3), 310.
[39]
Salganik, M. J., Dodds, P. S., & Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762), 854-856.
[40]
Staab, S., Domingos, P., Golbeck, J., Ding, L., Finin, T., Joshi, A., & Nowak, A. (2005). Social networks applied. Intelligent Systems, IEEE, 20(1), 80-93.
[41]
Sun, E., Rosenn, I., Marlow, C., and Lento, T. (2009). Gesundheit! modeling contagion through facebook news feed. In Proc. of International AAAI Conference on Weblogs and Social Media.
[42]
Valente, T. W. (1995). Network models of the diffusion of innovations (Vol. 2, No. 2). Cresskill, NJ: Hampton Press.
[43]
Van den Bulte, C. & Lilien, G. L. (2001). Medical innovation revisited: Social contagion versus marketing effort. Amer. J. Sociol, 106, 5, 1409-1435.
[44]
Watts, D. J. (2002). A simple model of global cascades on random networks. Proceedings of the National Academy of Sciences, 99, 9, 5766-5771.
[45]
Weng, L., Menczer, F., & Ahn, Y. Y. (2014, March). Predicting Successful Memes Using Network and Community Structure. In ICWSM.
[46]
Wright, S. (2015). 9. E-petitions. Handbook of digital politics, 136.
[47]
Wright, S. (2016). âSuccessâ and online political participation: The case of Downing Street E-petitions. Information, Communication & Society, 19(6), 843-857.
[48]
Yasseri, T., Hale, S. A., & Margetts, H. (2013). Modeling the rise in internet-based petitions. arXiv preprint arXiv:1308.0239.
[49]
Young, H.P. (2009). Innovation diffusion in heterogeneous populations: Contagion, social influence, and social learning. American Economic Review 99, 5, 1899-1924 |
Indexing Highly Repetitive String Collections
Gonzalo Navarro
Center for Biotechnology and Bioengineering (CeBiB) and Millennium Institute
for Foundational Research on Data (IMFD), Department of Computer Science,
University of Chile, Beauchef 851, Santiago, Chile, gnavarro@dcc.uchile.cl
Abstract
Two decades ago, a breakthrough in indexing string collections made it
possible to represent them within their compressed space while at the same
time offering indexed search functionalities. As this new technology
permeated through applications like bioinformatics, the string collections
experienced a growth that outpaces Moore’s Law and challenges our ability
to handle them even in compressed form. It turns out, fortunately, that
many of these rapidly growing string collections are highly repetitive,
so that their information content is orders of magnitude lower than their
plain size. The statistical compression methods used for classical collections,
however, are blind to this repetitiveness, and therefore a new set of techniques
has been
developed in order to properly exploit it. The resulting indexes form a new
generation of data structures able to handle the huge repetitive string
collections that we are facing.
In this survey we cover the algorithmic developments that have led to these
data structures. We describe the distinct compression paradigms that have been
used to exploit repetitiveness,
the fundamental algorithmic ideas that form the base of all the existing
indexes, and the various structures that have been proposed,
comparing them in both theoretical and practical aspects. We conclude with
the current challenges in this fascinating field.
keywords: Text indexing, string searching, compressed data structures,
repetitive string collections.
Categories: E.1 [Data Structures]; E.2 [Data Storage Representations];
E.4 [Coding and Information Theory]: Data compaction and compression;
F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical
algorithms and problems (pattern matching, computations on discrete
structures, sorting and searching); H.2.1 [Database Management]: Physical
design (access methods); H.3.2 [Information Storage and Retrieval]:
Information storage (file organization); H.3.3 [Information Storage and
Retrieval]: Information search and retrieval (search process)
General terms: Algorithms
Funded by Basal Funds FB0001, Mideplan, Chile; by the Millennium
Institute for Foundational Research on Data, Chile; and by Fondecyt
Grant 1-200038, Chile.
Contents
1 Introduction
2 Notation and Basic Concepts
2.1 Strings
2.2 Pattern Matching
2.3 Suffix Trees and Suffix Arrays
2.4 Karp-Rabin Fingerprints
3 Compressors and Measures of Repetitiveness
3.1 The Unsuitability of Statistical Entropy
3.2 Lempel-Ziv Compression: Measures $z$ and $z_{no}$
3.2.1 The compressor
3.2.2 The measure
3.2.3 A weaker variant
3.2.4 Evaluation
3.3 Bidirectional Macro Schemes: Measure $b$
3.4 Grammar Compression: Measures $g$ and $g_{rl}$
3.4.1 Run-length grammars
3.5 Collage Systems: Measure $c$
3.6 Burrows-Wheeler Transform: Measure $r$
3.7 Lexicographic Parsing: Measure $v$
3.8 Compact Directed Acyclic Word Graphs: Measure $e$
3.9 String Attractors: Measure $\gamma$
3.10 String Complexity: Measure $\delta$
3.11 Relations
4 Accessing the Compressed Text and Computing Fingerprints
4.1 Enhanced Grammars
4.1.1 Extracting substrings
4.1.2 Karp-Rabin fingerprints
4.1.3 Extracting rule prefixes and suffixes in real time
4.1.4 Run-length grammars
4.1.5 All context-free grammars can be balanced
4.2 Block Trees and Variants
4.2.1 Extracting substrings
4.2.2 Karp-Rabin fingerprints
4.2.3 Other variants
4.3 Bookmarking
5 Parsing-Based Indexing
5.1 Geometric Structure to Track Primary Occurrences
5.1.1 Finding the intervals in $\mathcal{X}$ and $\mathcal{Y}$
5.1.2 Finding the points in the two-dimensional range
5.2 Tracking Secondary Occurrences
5.2.1 Lempel-Ziv parsing and macro schemes
5.3 Block Trees
5.4 Grammars
5.5 Resulting Tradeoffs
5.5.1 Using more space
5.5.2 History
6 Suffix-Based Indexing
6.1 Based on the BWT
6.1.1 Finding the interval
6.1.2 Locating the occurrences
6.1.3 Optimal search time
6.2 Based on the CDAWG
7 Other Queries and Models
7.1 Counting
7.2 Suffix Trees
7.3 Document Retrieval
7.3.1 Document listing
7.3.2 Document counting
7.3.3 Top-$k$ retrieval
7.4 Heuristic Indexes
7.4.1 Relative Lempel-Ziv
7.4.2 Alignments
8 Current Challenges
8.1 Practicality and Implementations
8.2 Construction and Dynamism
8.3 New Queries
A History of the Contributions to Parsing-Based Indexing
1 Introduction
Our increasing capacity for gathering and exploiting all sorts of data around us
is shaping modern society into ways that were unthinkable a couple of decades
ago. In bioinformatics, in 20 years we have stepped from sequencing the first
human genome to completing projects for sequencing 100,000 individuals (https://www.genomicsengland.co.uk/about-genomics-england/the-100000-genomes-project). Just storing such a collection requires about
70 terabytes, but a common data analysis tool like a suffix tree [Apostolico (1985)]
would require
5.5 petabytes. In astronomy, telescope networks generating terabytes per hour
are around the corner (https://www.nature.com/articles/d41586-018-01838-0). The web is estimated to have 60 billion pages, for a
total size of about 4 petabytes counting just text content (https://www.worldwidewebsize.com; see the average HTML size at https://www.keycdn.com/support/the-growth-of-web-page-size). Estimates of the yearly amount of data generated in the world
are around 1.5 exabytes (http://groups.ischool.berkeley.edu/archive/how-much-info/how-much-info.pdf).
Together with the immense opportunities brought by the data in all sorts of
areas, we face the immense challenge of efficiently storing, processing, and
analyzing such volumes of data. Approaches such as parallel and distributed
computing, secondary memory and streaming algorithms reduce time, but still
pay a price proportional to the data size in terms of amount of
computation, storage requirement, network use, energy consumption, and/or
sheer hardware. This is problematic because the growth rate of the data has
already surpassed Moore’s law
in areas like bioinformatics and astronomy [Stephens et al. (2015)]. Worse, these methods
must access the data in secondary memory, which is much slower than the main
memory. Therefore, not only do we have to cope with orders-of-magnitude larger
data volumes, but we must operate on orders-of-magnitude slower storage devices.
A promising way to curb this growth is to focus on how much actual
information those data volumes carry. It turns out that many of
the applications where the data is growing the fastest feature large degrees of
repetitiveness in the data, that is, most of the content of each
element is equal to the content of other elements. For example, let us focus on
sequence and text data. Genome repositories typically store many genomes of
the same species. Two human genomes differ by about 0.1% [Przeworski
et al. (2000)], and
Lempel-Ziv-like compression [Lempel and Ziv (1976)] on such repositories
reports compression ratios (i.e., compressed divided by uncompressed space)
around 1% [Fritz et al. (2011)]. Versioned document collections like
Wikipedia stored 10 terabytes by 2015, reporting over 20 versions per
article, with the versions (i.e., near-repetitions) growing faster than
original articles, and
1% Lempel-Ziv compression ratios (https://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia). Versioned software repositories like GitHub
stored over 20 terabytes in 2016 and also reported over 20 versions per
project (https://blog.sourced.tech/post/tab_vs_spaces and http://blog.coderstats.net/github/2013/event-types). Degrees of 40%–80% of
duplication have been observed in tweets [Tao
et al. (2013)], emails [Elsayed and
Oard (2006)],
web pages [Henzinger (2006)], and general software repositories [Kapser and
Godfrey (2005)] as well.
These sample numbers show that we can aim at 100-fold reductions in the data
representation size by using appropriate compression methods on highly
repetitive data sets. Such a reduction would allow us to handle much larger
data volumes in main memory, which is much faster. Even if the reduced data still
does not fit in main memory, we can expect a 100-fold reduction in storage,
network, hardware, and/or energy costs.
Just compressing the data, however, is not sufficient to reach this goal,
because we still need to decompress it in order to carry out any processing on
it. For the case of text documents, a number of “version control” systems, like
CVS (https://savannah.nongnu.org/projects/cvs),
SVN (https://subversion.apache.org), and
Git (https://git-scm.com),
support particular types of repetitive collections, namely versioned ones,
where documents follow a controlled structure (typically linear or hierarchical)
and the systems can track which document is a variant of which. Those systems
do a good job in reducing space while supporting direct access to any version
of any document, mainly by storing the set of “edits” that distinguish each
document from a close version that is stored in plain form.
Still, just direct access to the compressed data is not sufficient. In order to
process the data efficiently, we need data structures built on top of it.
What is needed is a more ambitious
concept, a compressed data structure [Navarro (2016)]. Such a data
structure aims not only at representing the data within space close to its
actual information content, but also, within that space, at efficiently
supporting direct access, queries, analysis, and manipulation of the data
without ever decompressing it. This is in sharp contrast with classical data
structures, which add (sometimes very significant) extra space on top of the
raw data (e.g., the suffix trees already mentioned typically use 80 times the
space of a compacted genome).
Compressed data structures are now 30 years old [Jacobson (1989)], have made their
way into applications and companies [Navarro (2016)], and include mature libraries (https://github.com/simongog/sdsl-lite). Most compressed data
structures, however, build on statistical compression [Cover and
Thomas (2006)],
which is blind to repetitiveness [Kreft and
Navarro (2013)], and therefore fail to get even
close to the compression ratios we have given for highly repetitive scenarios.
The development of compressed data structures aimed at highly repetitive data
is much more recent, and builds on variants of dictionary compression
[Cover and
Thomas (2006), Sahinalp and
Rajpoot (2003)].
In this survey we focus on pattern matching on string collections, one
of the most fundamental problems that arise when extracting information from
text data. The problem consists in, given a collection of strings, building a
data structure (called an index) so that, later, given a short query
string (the pattern), we efficiently locate the places where the pattern occurs
in the string collection. Indexed pattern matching is at the core of areas like
Information Retrieval [Büttcher et al. (2010), Baeza-Yates and
Ribeiro-Neto (2011)],
Data Mining [Liu (2007), Linstead et al. (2009), Silvestri (2010)],
Bioinformatics [Gusfield (1997), Ohlebusch (2013), Mäkinen et al. (2015)],
Multimedia Retrieval [Typke
et al. (2005), Su
et al. (2010)],
and others.
The need of compressed data structures for pattern matching was recognized
a couple of decades ago [Ferragina and
Manzini (2000), Grossi and
Vitter (2000)], and there are nowadays mature and
successful indexes [Navarro and
Mäkinen (2007), Grossi (2011)] that have made their way to applications;
see for example bioinformatic software like Bowtie (http://bowtie-bio.sourceforge.net), BWA (http://bio-bwa.sourceforge.net), or Soap2 (http://soap.genomics.org.cn). Just like the general compressed data structures,
however,
these indexes build on statistical compression, and therefore do not exploit
the high degree of repetitiveness that arises in many applications.
This challenge was explicitly recognized almost a decade later [Sirén et al. (2008)],
in the same bioinformatic context. After about another decade,
the specific challenges of text indexing
on highly repetitive string collections have become apparent, but also there
has been significant progress and important results have been reached.
These include, for example, searching in optimal time within
dictionary-compressed space, and developing new and more robust
measures of compressibility.
Our aim in this survey is to give an exhaustive, yet friendly, coverage of the
discoveries in this area. We start with the fascinating issue of
how to best measure compressibility via repetitiveness, just like the entropy
of \shortciteNSha48 is the right concept to measure compressibility via
frequency skews. Section 3 covers a number of repetitiveness
measures, from ad-hoc ones like the size of a Lempel-Ziv parse [Lempel and Ziv (1976)]
to the most recent and abstract ones based on string attractors and string
complexity [Kempa and
Prezza (2018), Kociumaka
et al. (2020)], and the relations between them.
Section 4 explores the problem of giving direct access to any
part of a string that is compressed using some of those measures, which
distinguishes a compressed data structure from sheer compression: some measures
enable compression but apparently not direct access. The more
ambitious topic of developing indexes whose size is bounded in terms of some
of those measures is developed next, building on parsings in
Section 5 and on string suffixes in
Section 6. The progress on other problems related to
pattern matching is briefly covered in Section 7.
Finally, Section 8 discusses the challenges that remain open.
2 Notation and Basic Concepts
We assume basic knowledge on algorithms, data structures, and algorithm analysis.
In this section we define some fundamental concepts on strings, preceded by
a few more general concepts and notation remarks.
Computation model
We use the RAM model of computation, where we assume the programs run
on a random-access memory where words of $w=\Theta(\log n)$ bits are accessed
and manipulated in constant time, where $n$ is the input size. All the typical
arithmetic and logical operations on the machine words are carried out in
constant time, including multiplication and bit operations.
Complexities
We will use big-$O$ notation for the time complexities, and in many cases for
the space complexities as well. Space complexities are measured in amount of
computer words, that is, $O(X)$ space means $O(X\log n)$ bits. By
$\mathrm{poly}\,x$ we mean any polynomial in $x$, that is, $x^{O(1)}$, and
$\mathrm{polylog}\,x$ denotes $\mathrm{poly}\,(\log x)$.
Logarithms will be to the base 2 by default. Within big-$O$ complexities,
$\log x$ must be understood as $\lceil\log(2+x)\rceil$, to avoid border
cases.
2.1 Strings
A string $S=S[1\mathinner{.\,.}n]$ is a sequence of symbols drawn from a set
$\Sigma$ called the alphabet. We will assume $\Sigma=[1\mathinner{.\,.}\sigma]=\{1,2,\ldots,\sigma\}$. The length of $S[1\mathinner{.\,.}n]$ is $n$, also denoted
$|S|$. We use $S[i]$ to denote the $i$-th symbol of $S$ and $S[i\mathinner{.\,.}j]=S[i]\ldots S[j]$ to denote a substring of $S$. If $i>j$, then
$S[i\mathinner{.\,.}j]=\varepsilon$, the empty string. A prefix of $S$ is a substring
of the form $S[1\mathinner{.\,.}j]$ and a suffix is a substring of the form
$S[i\mathinner{.\,.}n]$. With $SS^{\prime}$ we denote the concatenation of the strings $S$
and $S^{\prime}$, that is, the symbols of $S^{\prime}$ are appended after those of $S$.
Sometimes we identify a single symbol with a string of length 1, so that
$aS$ and $Sa$, with $a\in\Sigma$, denote concatenations as well. The string
$S[1\mathinner{.\,.}n]$ read backwards is denoted $S^{rev}=S[n]\cdots S[1]$; note that
in this case the terminator does not appear at the end of $S^{rev}$.
The lexicographic order among strings is defined as in a dictionary.
Let $a,b\in\Sigma$ and let $S$ and $S^{\prime}$ be strings. Then $aS\leq bS^{\prime}$ if
$a<b$, or if $a=b$ and $S\leq S^{\prime}$; and $\varepsilon\leq S$ for every $S$.
For technical convenience, in most cases we will assume that strings $S[1\mathinner{.\,.}n]$
are terminated with a special symbol $S[n]=\$$, which does not appear elsewhere
in $S$ nor in $\Sigma$. We assume that $\$$ is smaller than every other symbol
in $\Sigma$ to be consistent with the lexicographic order.
2.2 Pattern Matching
The indexed pattern matching problem consists in, given a sequence
$S[1\mathinner{.\,.}n]$, building a data structure (called an index) so that, later,
given a query string $P[1\mathinner{.\,.}m]$, one efficiently spots the $occ$ places in
$S$ where $P$ occurs, that is, one outputs the set
$Occ=\{i:S[i\mathinner{.\,.}i+m-1]=P\}$.
With “efficiently” we mean that, in an indexed scenario, we expect the search
times to be sublinear in $n$, typically of the
form $O((\mathrm{poly}\,m+occ)\,\mathrm{polylog}\,n)$.
The optimal search time, since we have to read the input and write the
output, is $O(m+occ)$. Since $P$ can be represented in $m\log\sigma$ bits, in
a few cases we will go further and assume that $P$ comes packed into
$O(\log_{\sigma}m)$ consecutive machine words, in which case the RAM-optimal
time is $O(m/\log_{\sigma}n+occ)$.
In general we will handle a collection of $\$$-terminated strings,
$S_{1},\ldots,S_{d}$, but we model the collection by concatenating the strings
into a single one, $S[1\mathinner{.\,.}n]=S_{1}\cdots S_{d}$, and doing pattern matching
on $S$.
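The set $Occ$ can of course be computed without any index by a linear scan; the following minimal Python sketch (an illustration of the problem statement, not part of the survey) serves as the non-indexed baseline that the structures in the following sections improve upon:

```python
def occurrences(S, P):
    """Return Occ = { i : S[i..i+m-1] = P }, with 1-based positions as in
    the text.  This brute-force scan takes O(n*m) time in the worst case,
    far from the O(poly m + occ) polylog n goal of an index."""
    m = len(P)
    return [i + 1 for i in range(len(S) - m + 1) if S[i:i + m] == P]
```

For example, `occurrences("alabaralalabarda$", "lab")` returns `[2, 10]`.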
2.3 Suffix Trees and Suffix Arrays
Suffix trees and suffix arrays are the most classical pattern
matching indexes. The suffix tree [Weiner (1973), McCreight (1976), Apostolico (1985)] is a trie
(or digital tree) containing all the suffixes of $S$. That is, every suffix of
$S$ labels a single root-to-leaf path in the suffix tree, and no node has two
distinct children labeled by the same symbol. Further, the unary paths (i.e.,
paths of nodes with a single child) are compressed into single edges labeled
by the concatenation of the contracted edge symbols. Every internal node in the
suffix tree corresponds to a substring of $S$ that appears more than once, and
every leaf corresponds to a suffix. The leaves of the suffix tree indicate the
position of $S$ where their corresponding suffixes start. Since there are $n$
suffixes in $S$, there are $n$ leaves in the suffix tree, and since there are
no nodes with single children, it has fewer than $n$ internal nodes. The suffix
tree can then be represented within $O(n)$ space, for example by representing
every string labeling edges with a couple of pointers to an occurrence of the
label in $S$. The suffix tree can also be built in linear (i.e., $O(n)$) time
[Weiner (1973), McCreight (1976), Ukkonen (1995), Farach-Colton et al. (2000)].
The suffix tree is a very popular data structure in stringology and
bioinformatics [Apostolico (1985), Crochemore and
Rytter (2002), Gusfield (1997)], supporting a large number of complex
searches (by using extra information, such as suffix links, that we omit here).
The most basic search is pattern matching: since all the occurrences of
$P$ in $S$ are prefixes of suffixes of $S$, we find them all by descending
from the root following the successive symbols of $P$. If at some point we
cannot descend by some $P[i]$, then $P$ does not occur in $S$. Otherwise, we
exhaust the symbols of $P$ at some suffix tree node $v$ or in the middle of
some edge leading to $v$. We then say that $v$ is the locus of $P$: every
leaf descending from $v$ is a suffix starting with $P$. If the children $v_{1},\ldots,v_{k}$ of every suffix tree node $v$ are stored with perfect hashing (the
keys being the first symbols of the strings labeling the edges $(v,v_{i})$), then
we reach the locus node in time $O(m)$. Further, since the suffix tree has no
unary paths, the $occ$ leaves with the occurrences of $P$ are traversed from
$v$ in time $O(occ)$. In total, the suffix tree supports pattern matching in
optimal time $O(m+occ)$. With more sophisticated structures, it supports
RAM-optimal time search, $O(m/\log_{\sigma}n+occ)$ [Navarro and
Nekrich (2017)].
A convenient way to regard the suffix tree is as the Patricia tree
[Morrison (1968)] of all the suffixes of $S$. The Patricia tree, also known as
blind trie [Ferragina and
Grossi (1999)] (their technical differences are not important
here), is a trie where we compact the unary paths and retain only the first
symbol and the length of the string labeling each edge. In this case we use
the first symbols to choose the appropriate child, and simply trust that the
omitted symbols match $P$. When arriving at the potential locus $v$ of $P$,
we jump to any leaf, which points to a potential occurrence $S[i\mathinner{.\,.}i+m-1]$ of
$P$, and compare $P$ with $S[i\mathinner{.\,.}i+m-1]$. If they match, then $v$ is the
correct locus of $P$ and all its leaves match $P$; otherwise $P$ does not
occur in $S$. A pointer from each node $v$ to a leaf descending from it is
needed in order to maintain the verification within the optimal search time.
The suffix array [Manber and
Myers (1993)] of $S[1\mathinner{.\,.}n]$ is the array $A[1\mathinner{.\,.}n]$ of the
positions of the suffixes of $S$ in lexicographic order. If the children of
the suffix tree nodes are lexicographically ordered by their first symbol,
then the suffix array corresponds to the leaves of the suffix tree.
The suffix array can be built directly, without building the suffix tree, in
linear time [Kim
et al. (2005), Ko and Aluru (2005), Kärkkäinen et al. (2006)].
All the suffixes starting with $P$ form a range in the suffix array
$A[sp\mathinner{.\,.}ep]$. We can find the range with binary search in time $O(m\log n)$,
by comparing $P$ with the
strings $S[A[i]\mathinner{.\,.}A[i]+m-1]$, so as to find the smallest and largest suffixes
that start with $P$. The search time can be reduced to $O(m+\log n)$ by using
further data structures [Manber and
Myers (1993)].
Example 2.1
Figure 1 shows the suffix tree and array of the string
$S=\mathsf{alabaralalabarda\$}$. The search for $P=\mathsf{lab}$ in the
suffix tree leads to the grayed locus node: the search in fact falls in
the middle of the edge from the parent to the locus node. The two leaves
descending from the locus contain the positions $2$ and $10$, which is where
$P$ occurs in $S$. In the suffix array, we find with binary search the interval
$A[13\mathinner{.\,.}14]$, where the answers lie.
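The suffix-array search of Example 2.1 can be sketched in a few lines of Python (an illustration, not from the survey; the array is built here by plain sorting rather than by the linear-time algorithms cited above, and Python's `'$'` conveniently sorts below all letters, matching the assumption that $\$$ is the smallest symbol):

```python
def suffix_array(S):
    # Plain-sorting construction: O(n^2 log n) worst case, fine for a demo;
    # the linear-time constructions cited in the text are used in practice.
    return sorted(range(len(S)), key=lambda i: S[i:])

def sa_range(S, A, P):
    # Binary search for the 0-based half-open interval [sp, ep) of A whose
    # suffixes start with P, comparing only the first m symbols of each
    # suffix, for O(m log n) total time as described in the text.
    m, lo, hi = len(P), 0, len(A)
    while lo < hi:                      # first suffix whose m-prefix >= P
        mid = (lo + hi) // 2
        if S[A[mid]:A[mid] + m] < P:
            lo = mid + 1
        else:
            hi = mid
    sp, hi = lo, len(A)
    while lo < hi:                      # first suffix whose m-prefix > P
        mid = (lo + hi) // 2
        if S[A[mid]:A[mid] + m] <= P:
            lo = mid + 1
        else:
            hi = mid
    return sp, lo
```

On $S=\mathsf{alabaralalabarda\$}$ and $P=\mathsf{lab}$ this returns the 0-based interval $(12,14)$, that is, $A[13\mathinner{.\,.}14]$ in the 1-based notation of Example 2.1, whose entries point to the occurrences at positions 2 and 10.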
2.4 Karp-Rabin Fingerprints
Karp and Rabin (1987) proposed a technique to compute a signature or fingerprint of a string via hashing, in a way that enables (non-indexed)
string matching in $O(n)$ average time. The signature $\kappa(Q)$ of a string
$Q[1\mathinner{.\,.}q]$ is defined as
$$\kappa(Q)=\left(\sum_{i=1}^{q}Q[i]\cdot b^{i-1}\right)\bmod p,$$
where $b$ is an integer and $p$ a prime number. It is not hard to devise the
arithmetic operations to compute the signatures of composed and decomposed
strings, that is, compute $\kappa(Q\cdot Q^{\prime})$ from $\kappa(Q)$ and
$\kappa(Q^{\prime})$, or $\kappa(Q)$ from $\kappa(Q\cdot Q^{\prime})$ and $\kappa(Q^{\prime})$, or
$\kappa(Q^{\prime})$ from $\kappa(Q\cdot Q^{\prime})$ and $\kappa(Q)$ (possibly storing some
precomputed exponents together with the signatures).
By appropriately choosing $b$
and $p$, the probability of two substrings having the same fingerprint is very
low. Further, in $O(n\log n)$ expected time, we can find a function $\kappa$
that ensures that no two strings of $S[1\mathinner{.\,.}n]$ have the same fingerprint
[Bille
et al. (2014)]. The resulting fingerprints $\kappa^{\prime}$ are collision-free
only over strings whose lengths are powers of two, thus we define
$\kappa(Q)=\langle\kappa^{\prime}(Q[1\mathinner{.\,.}2^{\lfloor\log_{2}q\rfloor}]),\kappa^{\prime}(Q[q-2^{\lfloor\log_{2}q\rfloor}+1\mathinner{.\,.}q])\rangle$.
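The composition and decomposition arithmetic mentioned above can be sketched as follows (a Python illustration, not from the survey; the choices of $b$ and $p$ are my own, and the modular inverse uses Fermat's little theorem since $p$ is prime):

```python
B, PRIME = 256, (1 << 61) - 1   # illustrative choices of b and the prime p

def kappa(Q):
    # kappa(Q) = (sum_{i=1..q} Q[i] * b^(i-1)) mod p
    return sum(ord(c) * pow(B, i, PRIME) for i, c in enumerate(Q)) % PRIME

def compose(kQ, kQ2, lenQ):
    # kappa(Q . Q') from kappa(Q), kappa(Q'), and |Q|:
    # the symbols of Q' are shifted up by b^|Q|.
    return (kQ + kQ2 * pow(B, lenQ, PRIME)) % PRIME

def left_part(kQQ2, kQ2, lenQ):
    # kappa(Q) from kappa(Q . Q'), kappa(Q'), and |Q|.
    return (kQQ2 - kQ2 * pow(B, lenQ, PRIME)) % PRIME

def right_part(kQQ2, kQ, lenQ):
    # kappa(Q') from kappa(Q . Q'), kappa(Q), and |Q|; the precomputed
    # exponent b^|Q| is inverted modulo the prime p (Fermat).
    inv = pow(pow(B, lenQ, PRIME), PRIME - 2, PRIME)
    return ((kQQ2 - kQ) * inv) % PRIME
```

For instance, `compose(kappa("abra"), kappa("cadabra"), 4)` equals `kappa("abracadabra")`, and the two decompositions recover the signatures of the halves.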
3 Compressors and Measures of Repetitiveness
In statistical compression, where the goal is to exploit frequency skew, the
so-called statistical entropy defined by Shannon (1948) offers a measure
of compressibility that is both optimal and reachable. While statistical
entropy is defined for infinite sources, it can be adapted to individual
strings. The resulting measure for individual strings, called empirical
entropy [Cover and
Thomas (2006)], turns out to be a reachable lower bound (save for
lower-order terms) to the space a semistatic statistical compressor can
achieve on that string.
Statistical entropy, however, does not adequately capture
other sources of compressibility, particularly repetitiveness. In this arena,
concepts are much less clear. Beyond the ideal but uncomputable measure of
string complexity proposed by Kolmogorov (1965), most popular measures of
(compressibility by exploiting) repetitiveness are ad-hoc, defined as the
result of particular compressors, and there is not yet a measure that is both
reachable and optimal within a reasonable set of compression techniques. Still,
many measures work well and have been used as the basis of compressed and
indexed sequence representations. In this section we describe the most relevant
concepts and measures.
3.1 The Unsuitability of Statistical Entropy
Shannon (1948) introduced a measure of compressibility that exploits the
different frequencies of the symbols emitted by a source. In its simplest
form, the source is “memoryless” and emits each symbol $a\in\Sigma$ with
a fixed probability $p_{a}$. The entropy is then defined as
$${\mathcal{H}}(\{p_{a}\})=\sum_{a\in\Sigma}p_{a}\log\frac{1}{p_{a}}.$$
When all the frequencies $p_{a}$ are equal to $1/\sigma$, the entropy is maximal,
${\mathcal{H}}=\log\sigma$. In general, the entropy decreases as the
frequencies are more skewed. This kind of entropy, which exploits frequencies,
is called statistical entropy.
In a more general form, the source may remember the last $k$ symbols emitted,
$C[1\mathinner{.\,.}k]$, and the probability $p_{a|C}$ of the next symbol $a$ may depend on
them. The entropy is defined in this case as
$${\mathcal{H}}(\{p_{a|C}\})=\sum_{C\in\Sigma^{k}}p_{C}\sum_{a\in\Sigma}p_{a|C}\log\frac{1}{p_{a|C}},$$
where $p_{C}$ is the global probability of the source emitting $C$.
Other more general kinds of sources are considered, including those that have
“infinite” memory of all the previous symbols emitted. \shortciteNSha48 shows
that any encoder of a random source of symbols with entropy $\mathcal{H}$ must
emit, on average, no less than $\mathcal{H}$ bits per symbol. The measure is
also reachable:
arithmetic coding [Witten
et al. (1987)] compresses $n$ symbols from such a source into
$n\mathcal{H}+2$ bits.
Shannon’s entropy can also be used to measure the entropy of a finite
individual sequence $S[1\mathinner{.\,.}n]$. The idea is to assume that the only
source of compressibility of the sequence is the different probabilities of
its symbols. If we take the probabilities as independent, the result is the
zeroth order empirical entropy of $S$:
$${\mathcal{H}}_{0}(S)~~=~~\sum_{a\in\Sigma}\frac{n_{a}}{n}\log\frac{n}{n_{a}},$$
where $n_{a}$ is the number of times $a$ occurs in $S$ and we assume $0\log 0=0$.
This is exactly the Shannon entropy of a memoryless source with probabilities
$p_{a}=n_{a}/n$, that is, we use the relative frequencies of the symbols in $S$
as an estimate of the probabilities of a hypothetical source that generated
$S$ (indeed, the most likely source). The string $S$ can then be encoded in
$n{\mathcal{H}}_{0}(S)+2$ bits with arithmetic coding based on the symbol
frequencies.
If we assume, instead, that the symbols in $S$ are better predicted by knowing
their $k$ preceding symbols, then we can use the $k$th order empirical
entropy of $S$ to measure its compressibility:
$${\mathcal{H}}_{k}(S)~~=~~\sum_{C\in\Sigma^{k}}\frac{n_{C}}{n}\cdot{\mathcal{H}}_{0}(S_{C}),$$
where $S_{C}$ is the sequence of the symbols following substring $C$ in $S$ and
$n_{C}=|S_{C}|$. Note that, thanks to the unique $\$$-terminator of $S$, $n_{C}$ is
also the number of times $C$ occurs in $S$ (except if $C$ corresponds
to the last $k$ symbols of $S$, but this does not affect the measure because
this substring contains $\$$, so it is unique and $n_{C}=0$), and the measure
corresponds to the Shannon entropy of a source with memory $k$. Once again, an
arithmetic coder encodes $S$ into $n{\mathcal{H}}_{k}(S)+2$ bits.
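Both empirical entropies can be computed directly from their definitions. The following Python sketch is a direct transcription (the helper names `h0` and `hk` are ours; logarithms are in base 2):

```python
from collections import Counter
from math import log2

def h0(s):
    """Zeroth order empirical entropy: sum over symbols of (n_a/n) log(n/n_a)."""
    n = len(s)
    return sum(c / n * log2(n / c) for c in Counter(s).values())

def hk(s, k):
    """kth order empirical entropy: sum over contexts C of (n_C/n) H0(S_C)."""
    n = len(s)
    followers = {}                      # context C -> string S_C of followers
    for i in range(n - k):
        c = s[i:i + k]
        followers[c] = followers.get(c, "") + s[i + k]
    return sum(len(sc) / n * h0(sc) for sc in followers.values())
```

With $k=0$ the single (empty) context yields $S_{\varepsilon}=S$, so `hk(s, 0)` equals `h0(s)`, and `hk(s, k)` never exceeds `h0(s)`.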
At this point, it is fair to wonder what prevents us from taking $k=n$, so that
$n_{C}=n_{S}=1$ if $C=S$ and $n_{C}=0$ for the other strings of length $k=n$,
and therefore $\mathcal{H}_{n}(S)=0$. We could then encode $S$ with arithmetic
coding into 2 bits!
The trick is that the encoding sizes we have given assume that the decoder
knows the distribution, that is, the probabilities $p_{a}$ or $p_{a|C}$ or, in
the case of empirical entropies, the frequencies $n_{a}$ and $n_{C}$. This may
be reasonable when analyzing the average bit rate to encode a source that
emits infinite sequences of symbols, but not when we consider actual compression
ratios of finite sequences.
Transmitting the symbol frequencies (called the model) to the decoder
(or, equivalently, storing it together with the compressed string) in plain form
requires $\sigma\log n$ bits for the zeroth order entropy, and
$\sigma^{k+1}\log n$ bits for the $k$th order entropy. With this simple
encoding of the model, we cannot hope to achieve compression for $k\geq\log_{\sigma}n$, because encoding the model then takes more space than the
uncompressed string. In fact, this is not far from the best that can be done:
\shortciteNGag06 shows that, for about that value of $k$, $n{\mathcal{H}}_{k}(S)$
falls below Kolmogorov’s complexity, and thus there is no hope of encoding $S$
within that size. In his words, “$k$th-order empirical entropy stops being
a reasonable complexity metric for almost all strings”.
With the restriction $k\leq\log_{\sigma}n$, consider now that we concatenate
two identical strings, $S\cdot S$. All the relative symbol frequencies in
$S\cdot S$ are identical to those in $S$, except for the $k-1$ substrings
$C$ that cover the concatenation point; therefore we can expect that
${\mathcal{H}}_{k}(S\cdot S)\approx{\mathcal{H}}_{k}(S)$. Indeed, it can be shown
that ${\mathcal{H}}_{k}(S\cdot S)\geq{\mathcal{H}}_{k}(S)$ [Kreft and
Navarro (2013), Lem. 2.6]. That is, the empirical entropy is
insensitive to the repetitiveness, and any compressor reaching the empirical
entropy will compress $S\cdot S$ to about twice the space it uses to compress
$S$. Instead, being aware of repetitiveness allows us to compress $S$ in any form
and then somehow state that a second copy of $S$ follows.
This explains why the compressed indexes based on statistical entropy
[Navarro and
Mäkinen (2007)] are not suitable for indexing highly repetitive string collections.
In those, the space reduction that can be obtained by spotting the
repetitiveness is much more significant than what can be obtained by exploiting
skewed frequencies.
Dictionary methods [Sahinalp and
Rajpoot (2003)], based on representing $S$ as the
concatenation of strings from a set (a “dictionary”), generally obtained from
$S$ itself, are more adequate to exploit repetitiveness: a small dictionary of
distinct substrings of $S$
should suffice if $S$ is highly repetitive. Though many dictionary methods can
be shown to converge to Shannon’s entropy, our focus here is their ability to
capture repetitiveness. In the sequel we cover various such methods, not only
as compression methods but also as ways to measure repetitiveness.
We refer the reader to \shortciteNCT06 for a deeper discussion of the concepts
of Shannon entropy and its relation to dictionary methods.
3.2 Lempel-Ziv Compression: Measures $z$ and $z_{no}$
\shortciteNLZ76 proposed a technique to measure the “complexity” of individual
strings based on their repetitiveness (in our case, complexity can be
interpreted as compressibility). The compressor LZ77
[Ziv and Lempel (1977)] and many other variants [Bell
et al. (1990)] that derived from this measure
have become very popular; they are behind compression software like zip,
p7zip, gzip, arj, etc.
3.2.1 The compressor
The original Lempel-Ziv method parses (i.e., partitions) $S[1\mathinner{.\,.}n]$ into
phrases (i.e., substrings) as follows, starting from $i\leftarrow 1$:
1.
Find the shortest prefix $S[i\mathinner{.\,.}j]$ of $S[i\mathinner{.\,.}n]$ that does not
occur in $S$ starting before position $i$.
2.
The next phrase is then $S[i\mathinner{.\,.}j]$.
3.
Set $i\leftarrow j+1$. If $i\leq n$, continue forming phrases.
This greedy parsing method can be proved to be optimal (i.e., producing the
least number of phrases) among all left-to-right parses (i.e., those where
phrases must have an occurrence starting to their left) [Lempel and Ziv (1976), Thm. 1].
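The parsing loop is short enough to transcribe directly. A quadratic-time Python sketch (the function name is ours; the `str.find` search stands in for the linear-time constructions cited in this section), which reproduces the parse of Example 3.2:

```python
def lz76_parse(s):
    """LZ76 parse: each phrase is the shortest prefix of the remaining suffix
    that has no occurrence starting at an earlier position (overlaps allowed)."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        j = i
        while j < n - 1:
            p = s.find(s[i:j + 1])   # leftmost occurrence of S[i..j]
            if p >= i:               # no occurrence starts before i: phrase found
                break
            j += 1                   # otherwise grow the phrase by one symbol
        phrases.append(s[i:j + 1])   # if we hit the end, the last phrase may repeat
        i = j + 1
    return phrases
```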
A compressor can be obtained by
encoding each phrase as a triplet: If $S[i\mathinner{.\,.}j]$ is the next phrase, then
$S[i\mathinner{.\,.}j-1]$ occurs somewhere to the left of $i$ in $S$. Let $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ be
one such occurrence (called the source of the phrase), that is, $i^{\prime}<i$.
The next triplet is then $\langle i^{\prime},j-i,S[j]\rangle$. When $j-i=0$, any
empty substring can be the source, and it is customary to assume $i^{\prime}=0$, so
that the triplet is $\langle 0,0,S[j]\rangle$.
Example 3.2
The string $S=\mathsf{alabaralalabarda\$}$ is parsed as
$\mathsf{a|l|ab|ar|alal|abard|a\$}$, where we use the vertical bar to
separate the phrases. A possible triplet encoding is
$\langle 0,0,\mathsf{a}\rangle\langle 0,0,\mathsf{l}\rangle\langle 1,1,\mathsf{b}\rangle\langle 1,1,\mathsf{r}\rangle\langle 1,3,\mathsf{l}\rangle\langle 3,4,\mathsf{d}\rangle\langle 11,1,\mathsf{\$}\rangle$.
From the triplets, we easily recover $S$ by starting with an empty string $S$
and, for each new triplet $\langle p,\ell,c\rangle$, appending
$S[p\mathinner{.\,.}p+\ell-1]$ and then $c$. (Since the source may overlap the formed
phrase, the copy of $S[p\mathinner{.\,.}p+\ell-1]$ to the end of $S$ must be done
left to right: consider recovering $S=\mathsf{a}^{n-1}\mathsf{\$}$ from the encoding
$\langle 0,0,\mathsf{a}\rangle\langle 1,n-2,\mathsf{\$}\rangle$.)
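Decoding is a direct transcription of this rule; the symbol-by-symbol left-to-right copy is exactly what makes overlapping sources work. A Python sketch (function name ours; source positions are 1-based as in the text):

```python
def lz_decode(triplets):
    """Rebuild the string from LZ77 triplets <p, l, c> (p is 1-based; p=0
    denotes the empty source)."""
    out = []
    for p, l, c in triplets:
        for t in range(l):
            out.append(out[p - 1 + t])  # left to right: the source may overlap
        out.append(c)                   # the new symbol closing the phrase
    return "".join(out)
```

Decoding the triplets of Example 3.2 recovers $\mathsf{alabaralalabarda\$}$.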
This extremely fast decompression is one of the reasons of the popularity of
Lempel-Ziv compression. Another reason is that, though not as easily as
decompression, it is also possible to carry out the compression
(i.e., the parsing) in $O(n)$ time [Rodeh
et al. (1981), Storer and
Szymanski (1982)].
Recently, there has been a lot of research on doing the parsing within little
extra space, see for example \shortciteNFIKS18 and references therein.
3.2.2 The measure
For this survey we will use a slightly different variant of the Lempel-Ziv
parsing,
which is less popular for compression but more coherent with other measures of
repetitiveness, and simplifies indexing. The parsing into phrases is redefined
as follows, also starting with $i\leftarrow 1$.
1.
Find the longest prefix $S[i\mathinner{.\,.}j]$ of $S[i\mathinner{.\,.}n]$ that
occurs in $S$ starting before position $i$.
2.
If $j\geq i$, that is, $S[i\mathinner{.\,.}j]$ is nonempty, then the next
phrase is $S[i\mathinner{.\,.}j]$, and we set $i\leftarrow j+1$.
3.
Otherwise, the next phrase is the explicit symbol $S[i]$, which
has not appeared before, and we set $i\leftarrow i+1$.
4.
If $i\leq n$, continue forming phrases.
We define the Lempel-Ziv measure of $S[1\mathinner{.\,.}n]$ as the number $z=z(S)$ of
phrases into which $S$ is parsed by this procedure.
Example 3.3
Figure 2 shows how the string $S=\mathsf{alabaralalabarda\$}$
is parsed into $z(S)=11$ phrases, $\mathsf{\framebox{$\mathsf{a}$}|\framebox{$\mathsf{l}$}|a|\framebox{$\mathsf{b}$}|a|\framebox{$\mathsf{r}$}|ala|labar|\framebox{$\mathsf{d}$}|a|\framebox{$\mathsf{\$}$}}$,
with the explicit symbols boxed.
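This variant is equally easy to transcribe. A quadratic-time Python sketch (function name ours), which reproduces the parse of Example 3.3:

```python
def lz_parse(s):
    """Parse s into z phrases: each phrase is either the longest prefix of the
    rest occurring starting earlier (overlaps allowed), or an explicit symbol."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        l = 0
        while i + l < n:
            p = s.find(s[i:i + l + 1])   # leftmost occurrence of the candidate
            if p == -1 or p >= i:        # no occurrence starting before i
                break
            l += 1
        if l > 0:
            phrases.append(s[i:i + l])   # copied phrase
            i += l
        else:
            phrases.append(s[i])         # explicit (new) symbol
            i += 1
    return phrases
```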
The two parsing variants are closely related. If the original variant forms the
phrase $S[i\mathinner{.\,.}j]$ with $j>i$, then $S[i\mathinner{.\,.}j-1]$ is the longest prefix of
$S[i\mathinner{.\,.}n]$ that appears starting to the left of $i$, so this new variant will
form the phrase $S[i\mathinner{.\,.}j-1]$ (if $i<j$) and its next phrase will be either
just $S[j]$ or a longer prefix of $S[j\mathinner{.\,.}n]$. It is not hard to see that the
compression algorithms for both variants are the same, and that the greedy
parsing is also optimal for this variant. It follows that $z^{\prime}\leq z\leq 2z^{\prime}$,
where $z^{\prime}$ is the number of phrases created with the original method. Thus,
$z$ and $z^{\prime}$ are the same in asymptotic terms. In particular, the triplet
encoding we described shows that one can encode $S$ within $O(z\log n)$ bits
(or $O(z)$ words), which makes $z$ a reachable compressibility measure.
3.2.3 A weaker variant
\shortciteNSS82 use a slightly weaker Lempel-Ziv parse, where the source
$S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ of $S[i\mathinner{.\,.}j]$ must be completely contained in
$S[1\mathinner{.\,.}i-1]$. That is, it must hold that $j^{\prime}<i$, not just $i^{\prime}<i$.
The same greedy parsing described, using this stricter condition, also
yields the least number of phrases [Storer and
Szymanski (1982), Thm. 10 with $p=1$].
The phrase encoding and decompression proceed in exactly the same way, and
linear-time parsing is also possible [Crochemore et al. (2012)].
The number of phrases obtained in this case will be called $z_{no}$, with $no$
standing for “no overlap” between phrases and their sources.
This parsing simplifies, for example, direct pattern matching on the
compressed string [Farach and
Thorup (1998)] or creating context-free grammars from the
Lempel-Ziv parse [Rytter (2003)]. (A Lempel-Ziv based index also claims
to need this restricted parsing [Kreft and Navarro (2013)], but in fact it can
handle the original parsing with no changes.) It comes with a price, however. Not only
$z_{no}(S)\geq z(S)$ holds for every string $S$ because the greedy parsings
are optimal, but also
$z_{no}$ can be $\Theta(\log n)$ times larger than $z$, for example
on the string $S=\mathsf{a}^{n-1}\mathsf{\$}$, where $z=3$ (with parsing
$\framebox{$\mathsf{a}$}|\mathsf{a}^{n-2}|\framebox{$\mathsf{\$}$}$) and $z_{no}=\Theta(\log n)$
(with parsing $\mathsf{\framebox{$\mathsf{a}$}|\mathsf{a}|\mathsf{a}^{2}|\mathsf{a}^{4}|\mathsf{a}^{8}|}\cdots$).
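In code, the only change with respect to the overlapping variant is the search range: the source must lie completely inside $S[1\mathinner{.\,.}i-1]$. A Python sketch (function name ours), which exhibits the logarithmic number of phrases on $\mathsf{a}^{n-1}\mathsf{\$}$:

```python
def lz_no_parse(s):
    """Like the z parse, but the source must be fully contained in s[0:i]."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        l = 0
        # find(sub, 0, i) only reports occurrences fully inside s[0:i]
        while i + l < n and s.find(s[i:i + l + 1], 0, i) != -1:
            l += 1
        if l > 0:
            phrases.append(s[i:i + l])   # copied phrase, non-overlapping source
            i += l
        else:
            phrases.append(s[i])         # explicit symbol
            i += 1
    return phrases
```

On $\mathsf{a}^{15}\mathsf{\$}$ the copied phrases can at most double in length each time, yielding the $\Theta(\log n)$ behavior.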
3.2.4 Evaluation
Apart from fast compression and decompression, a reason for the popularity of
Lempel-Ziv compression is that all of its variants converge to the statistical
entropy [Lempel and Ziv (1976)], even on individual strings [Kosaraju and
Manzini (2000)], though statistical
methods converge faster (i.e., their sublinear extra space over the empirical
entropy of $S[1\mathinner{.\,.}n]$ is a slower-growing function of $n$).
In particular, it holds that $z_{no}=O(n/\log_{\sigma}n)$, so the space
$O(z_{no})$ is in the worst case $O(n\log\sigma)$ bits, that is, proportional
to the plain size of $S$.
More important for us is that Lempel-Ziv captures repetitiveness. In our
preceding example of the string $S\cdot S$, we have $z(S\cdot S)\leq z(S)+1$
and $z_{no}(S\cdot S)\leq z_{no}(S)+1$
(i.e., we need at most one extra phrase to capture the second copy of $S$).
Despite the success of Lempel-Ziv compression and its frequent use as a gold
standard to quantify repetitiveness, the measure $z$ (and also $z_{no}$) has
some pitfalls:
•
It is asymmetric, that is, $z(S)$ may differ from $z(S^{rev})$.
For example, removing the terminator $\mathsf{\$}$ to avoid complications,
$\mathsf{alabaralalabarda}$ is parsed into $z=10$ phrases, whereas its reverse
$\mathsf{adrabalalarabala}$ requires only $z=9$.
•
It is not monotonic when removing prefixes, that is, $z(S^{\prime})$
can be larger than $z(S\cdot S^{\prime})$. For example, $\mathsf{aaabaaabaaa}$ is parsed
into $z=4$ phrases, $\mathsf{\framebox{$\mathsf{a}$}|aa|\framebox{$\mathsf{b}$}|aaabaaa}$, but
$\mathsf{aabaaabaaa}$ needs $z=5$, $\mathsf{\framebox{$\mathsf{a}$}|a|\framebox{$\mathsf{b}$}|aa|abaaa}$.
•
Although it is the optimal size of a left-to-right parse, $z$ is
arguably not optimal within a broader class of plausible
compressed representations. One can represent $S$ using fewer phrases by
allowing their sources to occur also to their right in $S$,
as we show with the next measure.
3.3 Bidirectional Macro Schemes: Measure $b$
\shortciteNSS82 proposed an extension of Lempel-Ziv parsing that allows sources
to be to the left or to the right of their corresponding phrases, as long as
every symbol can
eventually be decoded by following the dependencies between phrases and sources.
They called such a parse a “bidirectional macro scheme”. Analogously to the
Lempel-Ziv parsing variant we are using, a phrase is either a substring that
appears elsewhere, or an explicit symbol.
The dependencies between sources and
phrases can be expressed through a function $f$, such that $f(i)=j$ if position
$S[i]$ is to be obtained from $S[j]$; we set $f(i)=0$ if $S[i]$ is an explicit
symbol. Otherwise, if $S[i\mathinner{.\,.}j]$ is a copied phrase, then $f(i+t)=f(i)+t$
for all $0\leq t\leq j-i$, that is, $S[f(i)\mathinner{.\,.}f(j)]$ is the source of
$S[i\mathinner{.\,.}j]$. The bidirectional macro scheme is valid if, for all $1\leq i\leq n$,
there is a $k>0$ such that $f^{k}(i)=0$, that is, every position is eventually
decoded by repeatedly looking for the sources.
We call $b=b(S)$ the minimum number of phrases of a bidirectional macro
scheme for $S[1\mathinner{.\,.}n]$. It obviously holds that $b(S)\leq z(S)$ for every
string $S$ because Lempel-Ziv is just one possible bidirectional macro scheme.
Example 3.4
Figure 3 shows a bidirectional macro scheme for
$S=\mathsf{alabaralalabarda\$}$ formed by $b=10$ phrases,
$S=\mathsf{ala|\framebox{$\mathsf{b}$}|a|\framebox{$\mathsf{r}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{l}$}|alabar|\framebox{$\mathsf{d}$}|a|\framebox{$\mathsf{\$}$}}$ (we had $z=11$ for the same string,
see Figure 2).
It has function $f[1\mathinner{.\,.}n]=\langle 7,8,9,0,3,0,0,0,1,2,3,4,5,6,0,11,0\rangle$.
One can see that every symbol can eventually be
obtained from the explicit ones. For example, following the arrows in the
figure (or, similarly, iterating the function $f$),
we can obtain $S[13]=S[5]=S[3]=S[9]=S[1]=S[7]=\mathsf{a}$.
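Decoding a bidirectional macro scheme amounts to chasing $f$ until an explicit symbol is reached, memoizing the symbols discovered along the way. A Python sketch (the representation, a list for $f$ and a dictionary for the explicit symbols, is ours), which reproduces Example 3.4:

```python
def decode_bms(f, explicit):
    """f[i-1] = j means S[i] is copied from S[j] (1-based); f[i-1] = 0 means
    S[i] is the explicit symbol explicit[i]. Assumes the scheme is valid."""
    n = len(f)
    memo = [None] * n
    def sym(i):                  # 0-based position
        if memo[i] is None:
            memo[i] = explicit[i + 1] if f[i] == 0 else sym(f[i] - 1)
        return memo[i]
    return "".join(sym(i) for i in range(n))
```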
Just as for Lempel-Ziv, we can compress $S$ to $O(b)$ space by encoding the
source of each of the $b$ phrases.
It is not hard to recover $S[1\mathinner{.\,.}n]$ in $O(n)$ time from the
encoded phrases, because when we traverse $t$ positions until finding an
explicit symbol in time $O(t)$, we discover the contents of all those $t$
positions. Instead, finding the smallest bidirectional macro scheme is
NP-hard [Gallant (1982)].
This is probably the reason that made this technique less popular, although
in practice one can find bidirectional parses smaller than $z$ with some effort
[Russo
et al. (2020)].
As said, it always holds that $b\leq z$. \shortciteNGNP18latin (corrected in
\shortciteNNPO19) showed that $z=O(b\log(n/b))$ for every string family,
and that this bound is tight: there are string families where
$b=O(1)$ and $z=\Theta(\log n)$.
Measure $b$ is the smallest of those we study that is reachable, that is,
we can compress $S[1\mathinner{.\,.}n]$ to $O(b)$ words. It is also symmetric,
unlike $z$: $b(S)=b(S^{rev})$. Still, it is unknown if $b$ is monotonic,
that is, if $b(S)\leq b(S\cdot S^{\prime})$ for all $S$ and $S^{\prime}$.
3.4 Grammar Compression: Measures $g$ and $g_{rl}$
\shortciteNKY00 introduced a compression technique based on context-free
grammars (the idea can be traced back to \shortciteNRub76). Given $S[1\mathinner{.\,.}n]$,
we find a grammar that generates only the
string $S$, and use it as a compressed representation. The size of a
grammar is the sum of the lengths of the right-hand sides of the rules
(we avoid empty right-hand sides).
Example 3.5
Figure 4 shows a context-free grammar that generates only the string
$S=\mathsf{alabaralalabarda\$}$. The grammar has three rules, $A\rightarrow\mathsf{al}$, $B\rightarrow A\mathsf{abar}$, and the initial rule
$C\rightarrow BAB\mathsf{da\$}$. The sum of the lengths of the right-hand sides
of the rules is $13$, the grammar size.
Note that, in a grammar that generates
only one string, there is exactly one rule $A\rightarrow X_{1}\cdots X_{k}$ per
nonterminal $A$, where each $X_{i}$ is a terminal or a nonterminal (if there is
more than one rule for some nonterminal, the extra ones are redundant and can be eliminated).
Figure 4 also displays the parse tree of the grammar: an ordinal
labeled tree where the root is labeled with the initial symbol, the leaves
are labeled with the terminals that spell out $S$, and each internal node
is labeled with a nonterminal $A$: if $A\rightarrow X_{1}\cdots X_{k}$, then
the node has $k$ children labeled, left to right, $X_{1},\ldots,X_{k}$.
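Expanding a grammar into the string it generates is a one-line recursion over the rules. A Python sketch (the representation of rules as lists of symbols is ours), using the grammar of Example 3.5:

```python
def expand(rules, sym):
    """Expand a symbol to the string of terminals it derives."""
    if sym not in rules:                  # terminal symbol
        return sym
    return "".join(expand(rules, x) for x in rules[sym])

rules = {
    "A": ["a", "l"],                      # A -> al
    "B": ["A", "a", "b", "a", "r"],       # B -> Aabar
    "C": ["B", "A", "B", "d", "a", "$"],  # C -> BABda$ (initial rule)
}
size = sum(len(rhs) for rhs in rules.values())  # grammar size: 2 + 5 + 6 = 13
```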
\shortciteNKY00 prove that grammars that satisfy a few reasonable conditions reach the
$k$th order entropy of a source, and the same holds for the empirical entropy
of individual strings [Ochoa and
Navarro (2019)]. Their size is always $O(n/\log_{\sigma}n)$.
Grammar compression is interesting for us because repetitive strings should
have small grammars. Our associated measure of repetitiveness is then the size
$g=g(S)$ of the smallest grammar that generates only $S$.
It is known that $z_{no}\leq g=O(z_{no}\log(n/z_{no}))$ [Rytter (2003), Charikar et al. (2005)],
and even $g=O(z\log(n/z))$ [Gawrychowski (2011), Lem. 8].
Finding such smallest grammar is NP-complete, however [Storer and
Szymanski (1982), Charikar et al. (2005)].
This has not made grammar compression unpopular, thanks to several
efficient constructions that yield grammars of size $O(z_{no}\log(n/z_{no}))$
and even $O(z\log(n/z))$ [Rytter (2003), Charikar et al. (2005), Sakamoto (2005), Jeż (2015), Jeż (2016)]. Further,
heuristics like RePair [Larsson and
Moffat (2000)] or Sequitur [Nevill-Manning et al. (1994)] perform extremely
well and are preferred in practice.
Since it always holds that $z_{no}\leq g$, a natural question is why grammar
compression is interesting. One important reason is that grammars allow for
direct access to the compressed string in logarithmic time, as we will describe
in Section 4.1.
For now, a simple version illustrates its power.
A grammar construction algorithm produces balanced grammars if the height
of their parse tree is $O(\log n)$ when built on strings of length $n$. On a
balanced grammar for $S[1\mathinner{.\,.}n]$, with constant-size rules, it is very easy to
extract any symbol $S[i]$ by virtually traversing the parse tree, if one knows
the lengths of the string represented by each nonterminal. The first grammar
construction from a Lempel-Ziv parse [Rytter (2003)] had these properties, so
it was the first structure of size $O(z_{no}\log(n/z_{no}))$ with access time
$O(\log n)$.
A little more notation on grammars will be useful. We call $exp(A)$ the string
of terminals to which nonterminal $A$ expands, and $|A|=|exp(A)|$. To
simplify matters, we forbid rules of right-hand length $0$ or $1$. An
important concept will be the grammar tree, which is obtained by pruning
the parse tree: for each nonterminal $A$, only one internal node labeled $A$
is retained; all the others are converted to leaves by pruning their subtree.
Since the grammar tree will have the $k$ children of each unique nonterminal
$A\rightarrow X_{1}\cdots X_{k}$, plus the root, its total number of nodes is
$g+1$ for a grammar of size $g$.
With the grammar tree we can easily see that $z_{no}\leq g$, for
example. Consider a grammar tree where only the leftmost occurrence of every
nonterminal is an internal node. The string $S$ is then cut into at most $g$
substrings, each covered by a leaf of the grammar tree. Each leaf is either a
terminal or a pruned nonterminal. We can then define a
left-to-right parse with $g$ phrases: the phrase covered by pruned nonterminal
$A$ points to its copy below the internal node $A$, which is to its left;
terminal leaves become explicit symbols.
Since this is a left-to-right parse with no overlaps and $z_{no}$ is the least
size of such a parse, we have $z_{no}\leq g$.
Example 3.6
Figure 5 shows the grammar tree of the parse tree of
Figure 4, with $14$ nodes. It induces the left-to-right parse
$S=\mathsf{\framebox{$\mathsf{a}$}|\framebox{$\mathsf{l}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{b}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{r}$}|al|alabar|\framebox{$\mathsf{d}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{\$}$}}$.
3.4.1 Run-length grammars
To handle some anomalies that occur when compressing repetitive strings with
grammars, \shortciteNNIIBT16 proposed to enrich context-free grammars with
run-length rules, which are of the form $A\rightarrow X^{t}$, where $X$
is a terminal or a nonterminal and $t\geq 2$ is an integer. The rule is
equivalent to $A\rightarrow X\cdots X$ with $t$ repetitions of $X$, but it
is assumed to be of size $2$. Grammars that use run-length rules are called
run-length (context-free) grammars.
We call $g_{rl}=g_{rl}(S)$ the size of the smallest run-length grammar that
generates $S$. It obviously holds that $g_{rl}(S)\leq g(S)$ for every string $S$.
It can also be proved that $z(S)\leq g_{rl}(S)$ for every string $S$
[Gagie et al. (2018a)] (they claim $z\leq 2g_{rl}$ because they use a
different definition of grammar size).
An interesting connection with bidirectional macro schemes is that
$g_{rl}=O(b\log(n/b))$ (from where $z=O(b\log(n/b))$ is obtained)
[Gagie
et al. (2018a), Navarro
et al. (2019)].
There is no clear dominance between $g_{rl}$ and $z_{no}$, however: On the
string $S=\mathsf{a}^{n-1}\mathsf{\$}$ we have $g_{rl}=O(1)$ and
$z_{no}=\Theta(\log n)$ (as well as $g=\Theta(\log n)$), but there exist
string families where $g_{rl}=\Omega(z_{no}\log n/\log\log n)$
[Bille
et al. (2018)] (the weaker result $g=\Omega(z\log n/\log\log n)$ was known
before [Charikar et al. (2005), Hucke
et al. (2016)]).
The parse tree of a run-length grammar is the same as if rules $A\rightarrow X^{t}$ were written as $A\rightarrow X\cdots X$. The grammar tree, instead,
is modified to ensure it has $g_{rl}+1$ nodes if the grammar is of size
$g_{rl}$: The internal node labeled $A$ has two children, the left one is
labeled $X$ and the right one is labeled $X^{[t-1]}$. Those special marked
nodes are treated differently in the various indexes and access methods on
run-length grammars.
3.5 Collage Systems: Measure $c$
To generalize sequential pattern matching algorithms, \shortciteNKMSTSA03 proposed
an extension of run-length grammars called collage systems. These allow,
in addition, truncation rules of the form $A\rightarrow B^{[t]}$ and
$A\rightarrow\!^{[t]}B$, which are of size 2 and mean that $exp(A)$ consists
of the first or last $t$ symbols of $exp(B)$, respectively.
Example 3.7
A collage system generating $S=\mathsf{alabaralalabarda\$}$, though larger than
our grammar, is $A\rightarrow\mathsf{al}$, $B\rightarrow AA\mathsf{abar}$,
$B^{\prime}\rightarrow\!^{[6]}B$, and the initial rule
$C\rightarrow B^{\prime}B\mathsf{da\$}$.
The size of the smallest collage
system generating a string $S$ is called $c=c(S)$, and thus it obviously holds
that $c(S)\leq g_{rl}(S)$ for every string $S$. \shortciteNKMSTSA03 also proved
that $c=O(z\log z)$; a better bound $c=O(z)$ was recently proved by
\shortciteNNPO19. This is interesting because it sheds light on what must be added
to grammars in order to make them as powerful as Lempel-Ziv parses.
\shortciteNNPO19 also prove lower bounds on $c$ when restricted to what they call
internal collage systems, where $exp(A)$ must appear in $S$ for every
nonterminal $A$. This avoids collage systems that generate a huge string from
where a small string $S$ is then obtained by truncation. For internal collage
systems it holds that $b=O(c)$, and there are string families with a
separation $c=\Omega(b\log n)$. Instead, while the bound $c=O(z)$ also
holds for internal collage systems, it is unknown if there are string
families where $c=o(z)$, even for general collage systems.
3.6 Burrows-Wheeler Transform: Measure $r$
\shortciteNBW94 designed a reversible transformation with the goal of making
strings easier to compress by local methods. The Burrows-Wheeler Transform
(BWT) of $S[1\mathinner{.\,.}n]$, $S^{bwt}$, is a permutation of $S$ obtained as follows:
1.
Sort all the suffixes of $S$ lexicographically (as in the suffix
array).
2.
Collect, in increasing order, the symbol preceding each
suffix (the symbol preceding the longest suffix is taken to be $\$$).
Example 3.8
The BWT of $S=\mathsf{alabaralalabarda\$}$ is
$S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$, as shown in Figure 6
(ignore the leftmost part of the figure for now).
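Both steps can be transcribed by sorting the suffixes explicitly (practical implementations use linear-time suffix array construction instead). A Python sketch (function name ours), which reproduces Example 3.8; note that in ASCII, $\mathsf{\$}$ sorts before the letters, as required:

```python
def bwt(s):
    """BWT of s; s must end with a unique terminator smaller than all symbols."""
    n = len(s)
    # suffix array (0-based): materializing s[i:] is quadratic but simple
    sa = sorted(range(n), key=lambda i: s[i:])
    # S^bwt[j] = S[A[j]-1]; Python's s[-1] realizes the convention S[0] = S[n]
    return "".join(s[i - 1] for i in sa)
```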
It turns out that, by properly
partitioning $S^{bwt}$ and applying zeroth-order compression to each piece,
one obtains $k$th order compression of $S$ [Manzini (2001), Ferragina et al. (2005), Gog et al. (2019)]. The
reason behind this fact is that the BWT puts together all the suffixes
starting with the same context $C$ of length $k$, for any $k$. Then,
encoding the symbols preceding those suffixes is the same as encoding together
the symbols preceding the occurrences of each context $C$ in $S$. Compare
with the definition of empirical $k$th order entropy we gave in
Section 3.1 applied to the reverse of $S$, $H_{k}(S^{rev})$.
The BWT has a strong connection with the suffix array $A$ of $S$; see the
right of Figure 6: it is not hard to see that
$$S^{bwt}[j]~{}~{}=~{}~{}S[A[j]-1],$$
if we interpret $S[0]=S[n]$.
From the suffix array, which can be built in linear time,
we easily compute the BWT. The BWT is also easily reversed in linear
time [Burrows and
Wheeler (1994)]. The connection between the BWT and the suffix array
has been used to implement fast searches on
$S[1\mathinner{.\,.}n]$ in $nH_{k}(S)+o(n\log\sigma)$ bits of space [Ferragina and
Manzini (2005), Navarro and
Mäkinen (2007)]; we
see in Section 6 how this technique is implemented
when $S$ is highly repetitive.
In addition, the BWT has
interesting properties on highly repetitive strings, with connections to
the measures we have been studying. Let us define $r=r(S)$ as the number
of equal-symbol runs in $S^{bwt}$.
Since the BWT is reversible, we can represent $S$ in $O(r)$ space, by
encoding the $r$ symbols and run lengths of $S^{bwt}$. While it is not known
how to provide fast access to any $S[i]$ within this $O(r)$ space, it is
possible to provide fast pattern searches by emulating the original BWT-based
indexes [Mäkinen and
Navarro (2005), Mäkinen et al. (2010), Gagie
et al. (2020)].
Example 3.9
The BWT of $S=\mathsf{alabaralalabarda\$}$,
$S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$, has $r(S)=10$ runs. It can be
encoded by the symbols and lengths of the runs:
$(\mathsf{a},1),(\mathsf{d},1),(\mathsf{l},2),(\mathsf{\$},1),(\mathsf{l},1),(\mathsf{r},1),(\mathsf{b},2),(\mathsf{a},2),(\mathsf{r},1),(\mathsf{a},5)$,
and then we can recover $S$ from $S^{bwt}$.
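Counting and encoding the runs is a one-liner with a group-by. A Python sketch (function name ours) matching this example:

```python
from itertools import groupby

def run_encode(s):
    """Return the (symbol, length) pairs of the equal-symbol runs of s."""
    return [(c, len(list(g))) for c, g in groupby(s)]
```

Concatenating `c * l` over the pairs inverts the encoding.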
There is no direct dominance relation between BWT runs and Lempel-Ziv parses or
grammars: there are string families over binary alphabets
where $r=\Theta(n)$ [Belazzougui et al. (2015)] and thus, since
$g=O(n/\log n)$, we have $r=\Omega(g\log n)$. In others, it holds that
$z=\Omega(r\log n)$ [Prezza (2016), Navarro
et al. (2019)].
Interestingly, a relation of $r$ with bidirectional macro schemes can be proved,
$b=O(r)$ [Navarro
et al. (2019)], by noting that the BWT runs induce a bidirectional
macro scheme of size $2r$: If we map each position $S^{bwt}[j]$ starting a run
to the position $S[i]$ where $S^{bwt}[j]$ occurs, then we define the
phrases as all those explicit positions $i$ plus the (non-explicit) substrings
between those explicit positions. Since that is shown to be a valid
bidirectional scheme, it follows that $b\leq 2r$.
Example 3.10
On $S=\mathsf{alabaralalabarda\$}$, with
$S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$, the corresponding bidirectional macro
scheme is
$S=\mathsf{\framebox{$\mathsf{a}$}|\framebox{$\mathsf{l}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{b}$}|a|\framebox{$\mathsf{r}$}|a|\framebox{$\mathsf{l}$}|alaba|\framebox{$\mathsf{r}$}|\framebox{$\mathsf{d}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{\$}$}}$,
of size $13$. See the leftmost part of Figure 6.
As a final observation, we note that a drawback of $r$ as a repetitiveness
measure is that it depends on the order of the alphabet.
It is NP-hard to find the alphabet permutation that minimizes $r$ [Bentley
et al. (2019)].
Example 3.11
Replacing $\mathsf{a}$ by $\mathsf{e}$ in $S=\mathsf{alabaralalabarda\$}$
we obtain $S^{\prime}=\mathsf{elebereleleberde\$}$, whose BWT has only $r(S^{\prime})=8$ runs.
3.7 Lexicographic Parsing: Measure $v$
\shortciteNNPO19 generalized the Lempel-Ziv parsing into “ordered” parsings,
which are bidirectional macro schemes where each nonexplicit phrase equals
some substring of $S$ that is
smaller under some criterion (in Lempel-Ziv, the criterion is to start earlier
in $S$). A particularly interesting case are the so-called lexicographic
parsings, where each nonexplicit phrase $S[i\mathinner{.\,.}j]$ must have a copy
$S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ where the suffix $S[i^{\prime}\mathinner{.\,.}n]$ is lexicographically smaller than
$S[i\mathinner{.\,.}n]$. The smallest lexicographic parse of $S$ is called the lex-parse of $S$ and has $v=v(S)$ phrases. It is obtained by
processing $S$ left to right and maximizing the length of each phrase, as for
Lempel-Ziv, that is, $S[i\mathinner{.\,.}j]$ is the longest prefix of $S[i\mathinner{.\,.}n]$ that
occurs at some $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ where the suffix $S[i^{\prime}\mathinner{.\,.}n]$ is lexicographically
smaller than $S[i\mathinner{.\,.}n]$. Note that this is the same as saying that $S[i^{\prime}\mathinner{.\,.}n]$
is the suffix that lexicographically precedes $S[i\mathinner{.\,.}n]$.
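The lex-parse can thus be computed from the sorted suffixes alone: the longest prefix of $S[i\mathinner{.\,.}n]$ shared with any lexicographically smaller suffix is realized by its immediate predecessor in lexicographic order. A quadratic-time Python sketch (function name ours; linear time is achievable with the longest common prefix array):

```python
def lex_parse(s):
    """Greedy lex-parse: each phrase is the longest prefix shared with the
    lexicographically preceding suffix, or an explicit symbol."""
    n = len(s)
    order = sorted(range(n), key=lambda i: s[i:])   # suffix array
    rank = [0] * n
    for r, i in enumerate(order):
        rank[i] = r
    phrases, i = [], 0
    while i < n:
        l = 0
        if rank[i] > 0:
            j = order[rank[i] - 1]       # lexicographically preceding suffix
            while i + l < n and j + l < n and s[i + l] == s[j + l]:
                l += 1                   # common prefix length with it
        if l > 0:
            phrases.append(s[i:i + l])   # copied phrase
            i += l
        else:
            phrases.append(s[i])         # explicit symbol
            i += 1
    return phrases
```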
Example 3.12
Figure 7 gives the lex-parse of
$S=\mathsf{a|\framebox{$\mathsf{l}$}|a|\framebox{$\mathsf{b}$}|a|\framebox{$\mathsf{r}$}|ala|labar|\framebox{$\mathsf{d}$}|\framebox{$\mathsf{a}$}|\framebox{$\mathsf{\$}$}}$,
of size $v(S)=11$. For example, the first phrase is
$\mathsf{a}$ because the suffix $\mathsf{alabaralalabarda\$}$ shares a prefix
of length $1$ with its lexicographically preceding suffix, $\mathsf{abarda\$}$,
and the second phrase is the explicit symbol $\mathsf{\framebox{$\mathsf{l}$}}$ because it
shares no prefix with its lexicographically preceding suffix,
$\mathsf{da\$}$. Just like $r$, the value of $v$ depends on the alphabet
ordering, for example for $S^{\prime}=\mathsf{elebereleleberde\$}$ we have $v(S^{\prime})=10$.
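The greedy left-to-right construction of the lex-parse can be sketched as follows (our naive implementation, with 0-based indices: it sorts the suffixes explicitly and makes each phrase the longest common prefix between the current suffix and its lexicographic predecessor, or one explicit symbol when that prefix is empty):

```python
def lex_parse_size(s):
    """Number of phrases v(S) of the lex-parse, computed greedily."""
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])       # suffix array
    rank = {p: k for k, p in enumerate(sa)}          # inverse permutation

    def lcp(a, b):                                   # longest common prefix length
        l = 0
        while a + l < n and b + l < n and s[a + l] == s[b + l]:
            l += 1
        return l

    i = v = 0
    while i < n:
        k = rank[i]
        # copy from the lexicographically preceding suffix, if any prefix is shared
        i += max(1, lcp(i, sa[k - 1]) if k > 0 else 0)
        v += 1
    return v

print(lex_parse_size("alabaralalabarda$"))   # 11
print(lex_parse_size("elebereleleberde$"))   # 10
```

Running it on both example strings reproduces the values $v(S)=11$ and $v(S^{\prime})=10$ given above.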
Measure $v$ has several interesting characteristics. First, it can be computed
in linear time via the so-called longest common prefix array [Kasai
et al. (2001)].
Second, apart from $b=O(v)$ because the lex-parse is a bidirectional macro
scheme, it holds that $v=O(r)$, because the bidirectional macro scheme
induced by the runs of $S$ is also lexicographic [Navarro
et al. (2019)]. Therefore, the
lex-parse is instrumental in connecting the BWT with parsings. Note that,
although $r$ and $z$ are incomparable, $v$ is never asymptotically larger than
$r$. Navarro et al. (2019) also connect $v$ with grammars, by showing that $v\leq g_{rl}$, and therefore $v=O(b\log(n/b))$ and $v\leq nH_{k}+o(n\log\sigma)$.
It follows that there are string families where $r=\Omega(v\log n)$.
They also show string families where $v=\Omega(b\log n)$ and where
$c=\Omega(v\log n)$. Instead, it is unknown if $z$ can be $o(v)$.
3.8 Compact Directed Acyclic Word Graphs: Measure $e$
Measure $r$ is a way to expose the regularities that appear in the suffix
array of $S$ when repetitiveness arises. As seen in Section 2.3,
the suffix array
corresponds to the leaves of the suffix tree, where each suffix of $S$ labels
a path towards a distinct leaf. A Compact Directed Acyclic Word Graph (CDAWG)
[Blumer et al. (1987)] is obtained by collapsing all the leaves of the suffix tree
and minimizing it as an automaton. The suffix trees of repetitive strings tend
to have isomorphic subtrees, and these become collapsed in the CDAWG. The size
$e$ of the CDAWG of $S$, measured in terms of nodes plus edges, is then a
repetitiveness measure. The CDAWG is also built in linear time [Blumer et al. (1987)].
Example 3.13
Figure 8 shows the CDAWG of $S=\mathsf{alabaralalabarda\$}$, and
Figure 1 shows its suffix tree and array. The CDAWG has $5$ nodes
and $14$ edges, so $e=19$. Note, for example, that all the suffixes starting
with $\mathsf{r}$ are preceded by $\mathsf{a}$, all the suffixes starting with
$\mathsf{ar}$ are preceded by $\mathsf{b}$, and so on until $\mathsf{alabar}$.
This causes identical subtrees at the loci of all those substrings,
$\mathsf{r}$, $\mathsf{ar}$, $\mathsf{bar}$, $\mathsf{abar}$, $\mathsf{labar}$,
and $\mathsf{alabar}$. All those loci become the single CDAWG node that is
reachable from the root by those strings. This also relates with $r$: the loci
correspond to suffix array ranges $A[16\mathinner{.\,.}17]$, $A[8\mathinner{.\,.}9]$, $A[10\mathinner{.\,.}11]$,
$A[3\mathinner{.\,.}4]$, $A[13\mathinner{.\,.}14]$, and $A[5\mathinner{.\,.}6]$. All but the last such intervals,
consequently, fall inside BWT runs with symbols
$\mathsf{a}$, $\mathsf{b}$, $\mathsf{a}$, $\mathsf{l}$, and $\mathsf{a}$.
There are other larger identical subtrees, like those rooted by the loci of
$\mathsf{la}$ and $\mathsf{ala}$, corresponding to intervals $A[13\mathinner{.\,.}15]$ and
$A[5\mathinner{.\,.}8]$, the first of which is within a run,
$S^{bwt}[13\mathinner{.\,.}15]=\mathsf{aaa}$. Finally, the run
$S^{bwt}[13\mathinner{.\,.}17]=\mathsf{aaaaa}$ corresponds to two consecutive equal
subtrees, with the suffix array intervals $A[13\mathinner{.\,.}17]$ and $A[5\mathinner{.\,.}9]$.
This is the weakest repetitiveness measure among those we study. It always
holds that $e=\Omega(\max(z,r))$ [Belazzougui et al. (2015)] and $e=\Omega(g)$
[Belazzougui and
Cunial (2017b)]. Worse, on some string families (as simple as $\mathsf{a}^{n-1}\mathsf{\$}$) $e$ can be $\Theta(n)$ times larger than $r$ or $z$
[Belazzougui et al. (2015)] and $\Theta(n/\log n)$ times larger than $g$ [Belazzougui and
Cunial (2017b)].
The CDAWG is, on the other hand, well
suited for pattern searching, as we show in Section 6.
3.9 String Attractors: Measure $\gamma$
Kempa and Prezza (2018) proposed a new measure of repetitiveness that takes a different
approach: It is a direct measure on the string $S$ instead of the result of
a specific compression method. Their goal was to unify the existing measures
into a cleaner and more abstract characterization of the string. An
attractor of $S$ is a set $\Gamma$ of positions in $S$ such that any
substring $S[i\mathinner{.\,.}j]$ must have a copy including an element of $\Gamma$.
The more repetitive the string, the smaller the attractor needed to cover all of its substrings.
The measure is
then $\gamma=\gamma(S)$, the smallest size of an attractor $\Gamma$ of $S$.
Example 3.14
An attractor of string $S=\mathsf{alabaralalabarda\$}$
is $\Gamma=\{4,6,7,8,15,17\}$. We know that this is the smallest possible
attractor, $\gamma(S)=6$,
because it coincides with the alphabet size $\sigma$, and it must obviously
hold that $\gamma\geq\sigma$.
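The attractor property can be checked by brute force on small examples. The sketch below (ours; polynomial but far from efficient, which is fine for $n=17$) verifies that every distinct substring has some occurrence crossing a position of $\Gamma$ (positions are 1-based, as in the text):

```python
def is_attractor(s, gamma):
    """Check that every substring of s has an occurrence covering gamma."""
    n = len(s)
    for i in range(n):
        for j in range(i, n):
            sub = s[i:j + 1]
            ok, p = False, s.find(sub)
            while p != -1:                 # scan all occurrences of sub
                # occurrence covers 1-based positions p+1 .. p+len(sub)
                if any(p + 1 <= q <= p + len(sub) for q in gamma):
                    ok = True
                    break
                p = s.find(sub, p + 1)
            if not ok:
                return False
    return True

print(is_attractor("alabaralalabarda$", {4, 6, 7, 8, 15, 17}))   # True
```

Removing position $15$ (the only $\mathsf{d}$) from $\Gamma$ makes the check fail, since the substring $\mathsf{d}$ is then left uncovered.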
In general, computing the smallest attractor size of $S$ is NP-hard [Kempa and Prezza (2018)], but in exchange they show that $\gamma=O(\min(b,c,z,z_{no},r,g_{rl},g))$ (for $c$ they consider internal collage systems; recall Section 3.5). Note that,
with current knowledge, it would be sufficient to prove that $\gamma=O(b)$,
because $b$ asymptotically lower-bounds all those measures, as well as $v$.
Indeed, we can easily see that $\gamma\leq b$: given a bidirectional macro
scheme, take its explicit symbol positions as the attractor $\Gamma$. Every
substring $S[i\mathinner{.\,.}j]$ not containing an explicit symbol (i.e., a position of
$\Gamma$) is inside a phrase and thus it occurs somewhere else, in particular
at $S[f(i)\mathinner{.\,.}f(j)]$. If this new substring does not contain an explicit
position, we continue with $S[f^{2}(i)\mathinner{.\,.}f^{2}(j)]$, and so on. In a valid macro
scheme we must eventually succeed; therefore $\Gamma$ is a valid attractor.
Example 3.15
Our example attractor $\Gamma=\{4,6,7,8,15,17\}$ is derived in this way
from the bidirectional macro scheme of Figure 3.
That is, $\gamma$ is a lower bound to all the other repetitiveness measures.
We do not know, however, if it is reachable, that is, whether we can represent $S$ within $O(\gamma)$ space. Instead, Kempa and Prezza (2018) show that
$O(\gamma\log(n/\gamma))$ space suffices not only to encode $S$ but also to
provide logarithmic-time access to any $S[i]$
(Section 4.2.3).
Christiansen et al. (2019) also show how to support indexed searches within
$O(\gamma\log(n/\gamma))$ space; see Section 5.1.1. They
actually build a particular run-length grammar of that size, thus implying the
bound $g_{rl}=O(\gamma\log(n/\gamma))$. A stronger bound $g=O(\gamma\log(n/\gamma))$ is very recent (T. Kociumaka, personal communication).
3.10 String Complexity: Measure $\delta$
Our final measure of repetitiveness for a string $S$, $\delta=\delta(S)$,
is built on top of the concept of string complexity, that is, the number
$S(k)$ of distinct substrings of length $k$.
Raskhodnikova et al. (2013) define $\delta=\max\{S(k)/k:1\leq k\leq n\}$.
It is not hard to see that $\delta(S)\leq\gamma(S)$ for every string $S$
[Christiansen et al. (2019)]: Since every substring of length $k$ in $S$ has a copy
including some of its $\gamma$ attractor elements, there can be only $k\gamma$
distinct substrings, that is, $S(k)\leq k\gamma$ for all $k$.
Christiansen et al. (2019) also show how $\delta$ can be computed in linear time.
Example 3.16
For our string $S=\mathsf{alabaralalabarda\$}$ we have
$S(1)=6$, $S(2)=9$, $S(3)=10$, $S(4)=S(5)=S(6)=11$, and $S(k)=17-k+1$ for
$k>6$ (i.e., all the substrings of length over 6 are different); therefore
$\delta(S)=6$.
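The definition translates directly into code. The following sketch (ours; quadratic time via hashing all substrings, unlike the linear-time method cited above) computes $S(k)$ and $\delta$ for our example:

```python
def str_complexity(s, k):
    """S(k): number of distinct substrings of length k."""
    return len({s[i:i + k] for i in range(len(s) - k + 1)})

def delta(s):
    """delta(S) = max over k of S(k)/k."""
    return max(str_complexity(s, k) / k for k in range(1, len(s) + 1))

s = "alabaralalabarda$"
print([str_complexity(s, k) for k in range(1, 7)])   # [6, 9, 10, 11, 11, 11]
print(delta(s))                                      # 6.0
```

The maximum of $S(k)/k$ is reached at $k=1$, matching $\delta(S)=6$ above.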
Kociumaka et al. (2020) show that, for every $\delta$, there are string families
where $\gamma=\Omega(\delta\log(n/\delta))$. Although $\delta$ is strictly
stronger than $\gamma$ as a compressibility measure, they also show that it
is possible not only to represent $S$ within $O(\delta\log(n/\delta))$ space,
but also to efficiently access any symbol $S[i]$ (see
Section 4.2.3) and support indexed searches on $S$
(see Section 5.5.2) within that space. Indeed, this
space is optimal as a function of $\delta$: for every $2\leq\delta\leq n^{1-\epsilon}$ (for an arbitrary constant $0<\epsilon<1$) there are string
families that need $\Omega(\delta\log(n/\delta))$ space to be represented.
This means that we know that $o(\delta\log n)$ space is unreachable in general,
whereas it is unknown if $o(\gamma\log n)$ space can always be reached.
Raskhodnikova et al. (2013) prove that $z=O(\delta\log(n/\delta))$, and it can also be proved that $g_{rl}=O(\delta\log(n/\delta))$ (T. Kociumaka, personal communication). The same cannot be said about $g$: Kociumaka et al. (2020)
prove that there are string families where $g=\Omega(\delta\log^{2}n/\log\log n)$. This establishes another separation between $g_{rl}$ and $g$.
Very recently, the only upper bound on $r$ in terms of another repetitiveness
measure was obtained: $r=O(\delta\log^{2}n)$ [Kempa and
Kociumaka (2019)].
3.11 Relations
Figure 9 summarizes what is known about the repetitiveness
measures we have covered. Those in gray are reachable, and those in dark gray
support efficient access and indexing, as we show in the next sections. An
intriguing case is $r$, which allows for efficient indexing but not access,
as far as we know.
Note that we do not know if $\gamma$ should be grayed or not, whereas we do
know that $\delta$ must not be grayed. The smallest grayed measure is $b$,
which is, by definition, the best space we can obtain via copying substrings
from elsewhere in the string. We have shown that every bidirectional macro
scheme can be converted into an attractor of at most the same size, and thus
$\gamma\leq b$. The converse is not direct: if we take the positions
of an attractor $\Gamma$ as the explicit symbols of a bidirectional macro
scheme, and declare that the gaps are the nonexplicit phrases, the result may
not be a valid macro scheme, because it might be impossible to define a
target-to-source function $f$ without cycles.
Example 3.17
Figure 10 shows an example of this case [Kempa and
Prezza (2018)]. The string
$S=\mathsf{cdabccdabcca\$}$ has attractor $\Gamma=\{4,7,11,12,13\}$. The only possible bidirectional macro scheme with those explicit symbols has $f(3)=8$ and $f(8)=3$, which form a cycle, and thus it is not valid.
Still, it could be that one can always add $O(\gamma)$ explicit symbols to
break those cycles, in which case $\gamma$ and $b$ would be asymptotically
equivalent.
Otherwise, if there are string families where $\gamma=o(b)$, this still
does not mean that we cannot represent a string $S$ within $O(\gamma)$ space,
but that representation will not consist of just
substring copies. To definitely show that not all strings can be represented
in $O(\gamma)$ space, we should find a string family of common measure
$\gamma$ and of size $n^{\omega(\gamma)}$.
Note that it is known that $\delta\log(n/\delta)$ is a reachable measure
that is asymptotically optimal as a function of $\delta$ [Kociumaka
et al. (2020)].
That is, if we
separate the set of all strings into subsets $\mathcal{S}_{\delta}$ where the
strings have measure $\delta$, then inside each set there are families that
need $\Theta(\delta\log(n/\delta))$ space to be represented. The measure
$\delta$ is then optimal in that coarse sense. Still, we know that $b$ is
uniformly better than $\delta\log(n/\delta)$ and sometimes asymptotically
smaller; therefore it is a more refined measure.
In this line, the definitive question is: What is the least reachable
and computable measure of repetitiveness? Just like Shannon’s entropy is a
lower bound when we decide to exploit only frequencies, and bidirectional macro
schemes are a lower bound when we decide to exploit only string copies, there
may be some useful compressibility measure between those and the bottom line
of the (uncomputable) Kolmogorov complexity. As far as we know,
$\gamma$ is a candidate for that.
Other equally fascinating questions can be asked about accessing and indexing
strings: What is the least reachable measure under which we can access
and/or index the strings efficiently? Right now, $g_{rl}$ is the best known
limit for efficient access; it is unknown if one can access $S[i]$ efficiently
within $O(z_{no})$ or $O(r)$ space. Indexing can also be supported within
$O(g_{rl})$ space, but also in $O(r)$; we do not know if this is possible
within $O(z_{no})$ or $O(v)$ space. We explore this in the upcoming sections.
In practice
Some direct [Belazzougui et al. (2015), Gagie
et al. (2020), Navarro
et al. (2019), Russo
et al. (2020)] and less
direct [Mäkinen et al. (2010), Kreft and
Navarro (2013), Claude et al. (2016), Belazzougui et al. (2017)] experiments suggest that in typical
repetitive texts it holds that $b<z\approx v<g<r<e$, where “$<$”
denotes a clear difference in magnitude.
Since those
experiments have been made on different texts and it is hard to combine them,
and because no experiments on $\delta$ have been published,
Table 1 compares the measures that can be computed in
polynomial time on a sample of repetitive collections obtained from the Repetitive
Corpus of Pizza&Chili (http://pizzachili.dcc.uchile.cl/repcorpus/real).
We include an upper bound on $g$ obtained by a heuristically balanced RePair algorithm (https://www.dcc.uchile.cl/gnavarro/software/repair.tgz, directory bal/).
We also build the CDAWG on the DNA collections, where the code we have allows it (https://github.com/mathieuraffinot/locate-cdawg); the code works only on the alphabet $\Sigma=\{\mathsf{A},\mathsf{C},\mathsf{G},\mathsf{T}\}$, so for escherichia we converted the other symbols to A (we verified that this has a negligible impact on the other measures).
The table suggests $z\approx v\approx 1.5{-}2.5\cdot\delta$,
$g\approx 3{-}6\cdot\delta$,
$r\approx 7{-}11\cdot\delta$, and
$e\approx 32{-}35\cdot\delta$.
4 Accessing the Compressed Text and Computing Fingerprints
The first step beyond mere compression, and towards compressed indexing, is to
provide direct access to the compressed string without having to fully
decompress it. We wish to extract arbitrary substrings $S[i\mathinner{.\,.}j]$ in a time
that depends on $n$ only polylogarithmically. Further, some indexes also need
to efficiently compute Karp-Rabin fingerprints (Section 2.4) of
arbitrary substrings.
In this section
we cover the techniques and data structures that are used to provide these
functionalities, depending on the underlying compression method. In all cases,
any substring $S[i\mathinner{.\,.}j]$ can be computed in time $O(j-i+\log n)$ or less,
whereas the fingerprint of any $S[i\mathinner{.\,.}j]$ can be computed in time $O(\log n)$
or less. Section 4.1 shows how this is done in $O(g)$ and
even $O(g_{rl})$ space by enriching (run-length) context-free grammars, whereas
Section 4.2 shows how to do this in space $O(z\log(n/z))$, and
even $O(\delta\log(n/\delta))$, by using so-called block trees. Some indexes
require more restricted forms of access, which those data structures can
provide in less time. Section 4.3 shows another speedup
technique called bookmarking.
The experiments [Belazzougui
et al. (2019)] show that practical access data structures built
on block trees take about the same space as those built on balanced grammars
(created with RePair [Larsson and
Moffat (2000)]), but block trees grow faster as soon as the
repetitiveness decreases. On the other hand, access on block trees is more
than an order of magnitude faster than on grammars.
4.1 Enhanced Grammars
If the compressed string is represented with a context-free grammar of
size $g$ or a run-length grammar of size $g_{rl}$, we can enrich the
nonterminals with information associated with the length of the string they
expand to, so as to provide efficient access within space $O(g)$ or $O(g_{rl})$,
respectively.
For a simple start, let $A\rightarrow X_{1}\cdots X_{k}$. Then, we store
$\ell_{0}=0$, $\ell_{1}=\ell_{0}+|X_{1}|$, $\ell_{2}=\ell_{1}+|X_{2}|$, $\ldots$, $\ell_{k}=\ell_{k-1}+|X_{k}|$ associated with $A$. To extract the $i$th symbol of $exp(A)$,
we look for the predecessor of $i$ in those values, finding $j$ such that
$\ell_{j-1}<i\leq\ell_{j}$, and then seek to obtain the $i^{\prime}$th symbol of
$X_{j}$, with $i^{\prime}=i-\ell_{j-1}$. Since predecessors are computed in time
$O(\log\log n)$ [Pătraşcu and Thorup (2006)], on a grammar of height $h$ we can extract
any $S[i]$ in time $O(h\log\log n)$, which is $O(\log n\log\log n)$ if the
grammar is balanced. If the right-hands of the rules are of constant length,
then the predecessors take constant time and the extraction time drops to
$O(\log n)$, as with the simple method described in Section 3.4.
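As a sketch of this simple scheme (ours), the code below walks down a grammar using the prefix sums $\ell_{j}$, with a plain binary search standing in for the $O(\log\log n)$ predecessor structure. The example grammar $A\rightarrow\mathsf{al}$, $B\rightarrow A\mathsf{abar}$, $C\rightarrow BAB\mathsf{da\$}$ is our reconstruction, chosen to be consistent with the $\ell$ values reported in Example 4.18:

```python
from bisect import bisect_left

# Hypothetical grammar generating S = "alabaralalabarda$" (start symbol C).
rules = {"A": ["a", "l"], "B": ["A", "a", "b", "a", "r"],
         "C": ["B", "A", "B", "d", "a", "$"]}

def access(rules, start, i):
    """Return S[i] (1-based) by descending the parse tree with prefix sums."""
    size = {}
    def length(x):                         # |exp(x)|, memoized bottom-up
        if x not in rules:
            return 1                       # terminal symbol
        if x not in size:
            size[x] = sum(length(y) for y in rules[x])
        return size[x]

    sym = start
    while sym in rules:
        sums, acc = [], 0
        for y in rules[sym]:               # the ell_1 .. ell_k of this rule
            acc += length(y)
            sums.append(acc)
        j = bisect_left(sums, i)           # predecessor: ell_{j-1} < i <= ell_j
        i -= sums[j - 1] if j > 0 else 0   # offset inside the chosen child
        sym = rules[sym][j]
    return sym

print(access(rules, "C", 12))   # 'b' = S[12]
```

In a real implementation the $\ell_{j}$ arrays are precomputed once per rule; here they are rebuilt on the fly to keep the sketch short.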
Bille et al. (2015) showed how this simple idea can be extended to extract any $S[i]$ in time $O(\log n)$ from arbitrary grammars, not necessarily balanced.
They extract a heavy path from the parse tree of $S$. A heavy path starts
at the root $A\rightarrow X_{1}\cdots X_{k}$ and continues by the child $X_{j}$ with
the longest expansion, that is, with maximum $|X_{j}|$ (and breaking ties in some
deterministic way), until reaching a leaf.
We store the heavy path separately and remove all its nodes and edges from the
parse tree, which gets disconnected and becomes a forest. We then repeat the
process from each root of the forest until all the nodes are in the extracted
heavy paths.
Consider the path going through a node labeled $B$ in the parse tree, whose
last element is the terminal $exp(B)[t_{B}]$. We associate with $B$ its start and
end values relative to $t_{B}$, $s_{B}=1-t_{B}$ and $e_{B}=|B|-t_{B}$, respectively.
Note that these values will be the same wherever $B$ appears in the parse tree,
because the heavy path starting from $B$ will be identical. Further, if $C$
follows $B$ in the heavy path, then $exp(C)[t_{C}]$ is the same symbol
$exp(B)[t_{B}]$. For a heavy path rooted at $A$, the values $s_{B}$ of the nodes
we traverse downwards to the leaf, then the zero, and then the values $e_{B}$ of
the nodes we traverse upwards to $A$ again, form an increasing sequence of
positions, $P_{A}$. The search for $S[i]$ then proceeds as follows. We search
for the predecessor of $i-t_{A}$ in the sequence $P_{A}$ associated with the root
symbol $A$. Say that $B$ is followed by $C$ downwards in the path and their
starting positions are $s_{B}\leq i-t_{A}<s_{C}$, or their ending positions are
$e_{C}<i-t_{A}\leq e_{B}$. Then the search for $S[i]$ must continue as the search
for $i^{\prime}=i-t_{A}+t_{B}$ inside $B$, because $i$ is inside $exp(B)$ but not
inside $exp(C)$. With another predecessor search for
$i^{\prime}$ on the starting positions $\ell_{j}$ of the children of $B$, we find the
child $B_{j}$ by which our search continues, with $i^{\prime\prime}=i^{\prime}-\ell_{j-1}$. Note
that $B_{j}$ is the root of another heavy path, and therefore we can proceed
recursively.
Example 4.18
Figure 11 (left) shows an example for the grammar we used in
Figure 4 on our string $S=\mathsf{alabaralalabarda\$}$. The first
heavy path, extracted from the root, is $C\rightarrow B\rightarrow A\rightarrow\mathsf{l}$ (breaking ties arbitrarily). From the other $B$ of
the parse tree, which becomes a tree root after we remove the edges of
the first heavy path, we extract another heavy path: $B\rightarrow A\rightarrow\mathsf{l}$. The remaining $A$ produces the final heavy path,
$A\rightarrow\mathsf{l}$. All the other paths have only one node.
Note that $t_{C}=10$ and $t_{B}=t_{A}=2$, that is, $exp(C)[10]=exp(B)[2]=exp(A)[2]=\mathsf{l}$ is the last element in the heavy path.
Therefore, $s_{C}=-9$, $e_{C}=7$,
$s_{B}=-1$, $e_{B}=4$, $s_{A}=-1$, $e_{A}=0$. The $\ell$ values for $C$
are $0,6,8,14,15,16,17$. To find $S[12]$, we determine that $8<12\leq 14$, thus we have to descend by $B$, which follows the heavy path. We then
search for $12-t_{C}=2$ in the sequence $s_{C},s_{B},s_{A},0,e_{A},e_{B},e_{C}=-9,-1,-1,0,0,4,7$ to find
$0<2\leq 4$, meaning that we fall between $e_{A}=0$ and $e_{B}=4$. We thus
follow the search from $B$, for the new position $12-t_{C}+t_{B}=4$. Since the
$\ell$ values associated with $B$ are $0,2,3,4,5,6$, we descend by a light
edge towards the $\mathsf{b}$, which is $S[12]$.
The important property is that, if $X_{j}$ follows $A$ in the heavy path, then
all the other children $X_{j^{\prime}}$ of $A$ satisfy $|X_{j^{\prime}}|\leq|A|/2$, because
otherwise $X_{j^{\prime}}$ would have followed $A$ in the heavy path. Therefore, every
time we traverse a light edge to
switch to another heavy path, the length of the expansion of the
nonterminal is halved. As a consequence, we cannot switch more than $\log n$
times to another heavy path in our traversal from the root to the leaf that
holds $S[i]$. Since we perform two predecessor searches [Pătraşcu and Thorup (2006)] to find
the next heavy path, the total extraction cost is $O(\log n\log\log n)$, even
if the grammar is unbalanced.
Bille et al. (2015) remove the $O(\log\log n)$ factor by using a different
predecessor search data structure that, if $i$ falls between positions
$p_{j-1}$ and $p_{j}$ inside a universe of $u$ positions, then the search takes
time $O(\log(u/(p_{j}-p_{j-1})))$. This makes the successive searches on heavy
paths and children telescope to $O(\log n)$.
The other problem is that the parse tree has $\Theta(n)$ nodes, and thus we
cannot afford storing all the heavy paths. Fortunately, this is not necessary:
If $X_{j}$ is the child of $A$ with the largest $|X_{j}|$, then $X_{j}$ will follow
$A$ in every heavy path where $A$ appears. We can then store all the heavy
paths in a trie where $X_{j}$ is the parent of $A$. Heavy paths are then
read as upward paths in this trie, which has exactly one node per nonterminal
and per terminal, the latter being children of the trie root.
The trie then represents all the heavy paths within $O(g)$ space.
Bille et al. (2015) show how an $O(g)$-space data structure on the trie provides
the desired predecessor searches on upward trie paths.
Example 4.19
Figure 11 (right) shows the trie associated with our example
parse tree. Every time $B$ appears in the parse tree, the heavy path
continues by $A$, so $A$ is the parent of $B$ in the trie.
4.1.1 Extracting substrings
Bille et al. (2015) also show how to extract $S[i\mathinner{.\,.}j]$
in time $O(j-i+\log n)$. We find the path towards $i$ and the path towards $j$.
These coincide up to some node in the parse tree, from which they descend by
different children. From there, all the subtrees to the right of the chosen
path towards $i$ (from the deepest to the shallowest), and then all the
subtrees to the left of the chosen path towards $j$ (from the shallowest to the
deepest), are fully traversed in order to obtain the $j-i+1$ symbols of
$S[i\mathinner{.\,.}j]$ in optimal time (because we output all the leaves of the traversed
trees, and internal nodes have at least two children).
With a bit more sophistication, Belazzougui et al. (2015) obtain RAM-optimal time in the substring length, $O((j-i)/\log_{\sigma}n+\log n)$, among other tradeoffs. We
note that the $O(\log n)$ additive overhead is almost optimal: any structure
using $g^{O(1)}$ space requires $\Omega(\log^{1-\epsilon}n)$ time to access a
symbol, for any constant $\epsilon>0$ [Verbin and Yu (2013)].
4.1.2 Karp-Rabin fingerprints
An easy way to obtain the Karp-Rabin fingerprint of any $S[i\mathinner{.\,.}j]$ is to
obtain $\kappa(S[1\mathinner{.\,.}i-1])$ and $\kappa(S[1\mathinner{.\,.}j])$, and then operate them
as shown in Section 2.4. To
compute the fingerprint of a prefix of $S$, Bille et al. (2017) store for each
$A\rightarrow X_{1}\cdots X_{k}$ the fingerprints $\kappa_{1}=\kappa(exp(X_{1}))$,
$\kappa_{2}=\kappa(exp(X_{1})\cdot exp(X_{2}))$, $\ldots$, $\kappa_{k}=\kappa(exp(X_{1})\cdots exp(X_{k}))$. Further, for each node $B$ in a heavy path
ending at a leaf $exp(B)[t_{B}]$, we store $\kappa(exp(B)[1\mathinner{.\,.}t_{B}-1])$.
Thus, if we have to leave at $B$ a heavy path that starts in $A$, the
fingerprint of the prefix of $exp(A)$ that precedes $exp(B)$ is obtained by
combining $\kappa(exp(A)[1\mathinner{.\,.}t_{A}-1])$ and $\kappa(exp(B)[1\mathinner{.\,.}t_{B}-1])$.
In our path towards extracting $S[i]$, we can
then compose the fingerprints so as to obtain $\kappa(S[1\mathinner{.\,.}i-1])$ at the
same time. Any fingerprint
$\kappa(S[i\mathinner{.\,.}j])$ can therefore be computed in $O(\log n)$ time.
Example 4.20
In the same example of Figure 11, say we want to compute
$\kappa(S[1\mathinner{.\,.}11])$. We start with the heavy path that starts at $C$, which
we leave at $B$. For the heavy path, we have precomputed
$\kappa(exp(C)[1\mathinner{.\,.}t_{C}-1])=\kappa(\mathsf{alabarala})$ for $C$ and $\kappa(exp(B)[1\mathinner{.\,.}t_{B}-1])=\kappa(\mathsf{a})$ for $B$. By operating them, we obtain the fingerprint of
the prefix of $exp(C)$ that precedes $exp(B)$, $\kappa(\mathsf{alabaral})$.
We now descend by the second child of $B$. We have also precomputed the
fingerprints of the prefixes of $exp(B)$ corresponding to its children,
in particular the first one, $\kappa(exp(A))=\kappa(\mathsf{al})$. By
composing both fingerprints, we have $\kappa(\mathsf{alabaralal})$ as desired.
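The fingerprint composition used in these operations can be sketched arithmetically (ours; the base and modulus below are illustrative parameters of the standard polynomial Karp-Rabin scheme):

```python
MOD, BASE = (1 << 61) - 1, 257     # illustrative Karp-Rabin parameters

def prefix_fps(s):
    """fps[j] = kappa(S[1..j]) as a polynomial hash; fps[0] = kappa(empty)."""
    fps = [0]
    for ch in s:
        fps.append((fps[-1] * BASE + ord(ch)) % MOD)
    return fps

def substring_fp(fps, i, j):
    """kappa(S[i..j]) (1-based, inclusive), using the identity
    kappa(S[1..j]) = kappa(S[1..i-1]) * BASE^(j-i+1) + kappa(S[i..j])."""
    return (fps[j] - fps[i - 1] * pow(BASE, j - i + 1, MOD)) % MOD

s = "alabaralalabarda$"
fps = prefix_fps(s)
# kappa("alabar") composed from prefixes equals the directly computed hash,
# and both occurrences of "alabar" in S get the same fingerprint:
print(substring_fp(fps, 1, 6) == prefix_fps("alabar")[-1])    # True
print(substring_fp(fps, 1, 6) == substring_fp(fps, 9, 14))    # True
```

The grammar-based structure stores such prefix fingerprints per rule and per heavy path instead of per text position, which is what brings the space down to $O(g)$.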
4.1.3 Extracting rule prefixes and suffixes in real time
The typical search algorithm of compressed indexes
(Section 5) does not need to extract
arbitrary substrings, but only to expand prefixes or suffixes of nonterminals.
Gąsieniec et al. (2005) showed how one can extract prefixes or suffixes of any
$exp(A)$ in real time, that is, $O(1)$ per additional symbol. They build a
trie similar to that used to store all the heavy paths, but this time they
store leftmost paths (for prefixes) or rightmost paths (for suffixes). That
is, if $A\rightarrow X_{1}\cdots X_{k}$, then $X_{1}$ is the parent of $A$ in
the trie of leftmost paths and $X_{k}$ is the parent of $A$ in the trie of
rightmost paths.
Let us consider leftmost paths; rightmost paths are analogous. To extract the
first symbol of $exp(A)$, we go to the root of the trie, descend to the child
in the path to node $A$, and output its corresponding terminal, $a$. This takes
constant time with level ancestor queries [Bender and
Farach-Colton (2004)]. Let $B\rightarrow aB_{2}\cdots B_{r}$ be the child of $a$ in the path to $A$ (again, found with level
ancestor queries from $A$).
The next symbols are then extracted recursively from $B_{2},\ldots,B_{r}$. Once
those are exhausted, we continue with the child of $B$ in the path to $A$,
$C\rightarrow BC_{2}\cdots C_{s}$, and extract $C_{2},\ldots,C_{s}$, and so on,
until we extract all the desired characters.
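The output of this left-to-right expansion can be sketched with a plain explicit stack (ours; it reproduces the extraction order but not the constant-delay guarantee, which needs the trie and level ancestor queries). The toy grammar is the same hypothetical one we use to mirror Example 4.18:

```python
# Hypothetical grammar with exp(C) = "alabaralalabarda$".
rules = {"A": ["a", "l"], "B": ["A", "a", "b", "a", "r"],
         "C": ["B", "A", "B", "d", "a", "$"]}

def expand_prefix(rules, sym, k):
    """First k symbols of exp(sym), produced left to right.

    Worst case O(grammar height) per emitted symbol, whereas the trie-based
    method pays O(1) per symbol.
    """
    out, stack = [], [sym]
    while stack and len(out) < k:
        x = stack.pop()
        if x in rules:
            stack.extend(reversed(rules[x]))   # push children, leftmost on top
        else:
            out.append(x)                      # terminal: emit it
    return "".join(out)

print(expand_prefix(rules, "C", 5))   # 'alaba'
print(expand_prefix(rules, "B", 3))   # 'ala'
```

Suffix extraction is symmetric: push the children in left-to-right order and collect terminals from the right end.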
4.1.4 Run-length grammars
Christiansen et al. (2019, App. A) showed how the results above can be obtained on
run-length context-free grammars as well, relatively easily, by regarding
the rule $A\rightarrow X^{k}$ as $A\rightarrow X\cdots X$ and managing to
use only $O(1)$ words of precomputed data in order to simulate the desired
operations.
4.1.5 All context-free grammars can be balanced
Very recently, Ganardi et al. (2019) proved that every context-free grammar of size
$g$ can be converted into another of size $O(g)$, right-hands of size 2, and
height $O(\log n)$. While the conversion seems nontrivial at first sight, once
it is carried out we need only very simple information associated with nonterminals to
extract any $S[i\mathinner{.\,.}j]$ in time $O(j-i+\log n)$ and to
compute any fingerprint $\kappa(S[i\mathinner{.\,.}j])$ in time $O(\log n)$. It is not
known, however, if run-length grammars can be balanced in the same way.
4.2 Block Trees and Variants
Block trees [Belazzougui
et al. (2015)] are in principle built on a Lempel-Ziv parsing of
$S[1\mathinner{.\,.}n]$, of $z$ phrases. Built with a parameter $r$, they provide a way to
access any $S[i]$ in time $O(\log_{r}(n/z))$ with a data structure of size
$O(zr\log_{r}(n/z))$. For example, with $r=O(1)$, the time is $O(\log(n/z))$ and
the space is $O(z\log(n/z))$. Recall that several heuristics build grammars of
size $g=O(z\log(n/z))$, and thus block trees are not asymptotically smaller
than structures based on grammars, but they can be asymptotically faster.
The block tree is of height $\log_{r}(n/z)$. The root has $z$ children, $u_{1},\ldots,u_{z}$, which logically divide $S$ into blocks of length $n/z$, $S=S_{u_{1}}\cdots S_{u_{z}}$. Each such node $v=u_{j}$ has $r$ children,
$v_{1},\ldots,v_{r}$,
which divide its block $S_{v}$ into equal parts, $S_{v}=S_{v_{1}}\cdots S_{v_{r}}$.
The nodes $v_{i}$ have, in turn, $r$ children that subdivide their block, and so
on. After slightly less than $\log_{r}(n/z)$ levels, the blocks are of length
$\log_{\sigma}n$, and can be stored explicitly using $\log n$ bits, that is,
in constant space.
Some of the nodes $v$ can be removed because their block $S_{v}$ appears earlier
in $S$. The precise mechanism is as follows: every consecutive pair of nodes
$v_{1},v_{2}$ where the concatenation $S_{v_{1}}\cdot S_{v_{2}}$ does not appear
earlier is marked (that is, we mark $v_{1}$ and $v_{2}$). After this,
every unmarked node $v$ has an earlier occurrence, so instead of creating
its $r$ children, we replace $v$ by a leftward pointer to the first
occurrence of $S_{v}$ in $S$. This first occurrence spans in general two
consecutive nodes $v_{1},v_{2}$ at the same level of $v$, and these exist and are
marked by construction. We then make $v$ a leaf pointing to $v_{1},v_{2}$, also
recording the offset where $S_{v}$ occurs inside $S_{v_{1}}\cdot S_{v_{2}}$.
To extract $S[i]$, we determine the top-level node $v=u_{j}$ where $i$ falls and
then extract its corresponding symbol. In general, to extract $S_{v}[i]$,
there are three cases. (1) If $S_{v}$ is stored explicitly (i.e., $v$ is a node
in the last level), we access $S_{v}[i]$ directly. (2) If $v$ has $r$ children, we
determine the corresponding child $v^{\prime}$ of $v$ and the corresponding offset
$i^{\prime}$ inside $S_{v^{\prime}}$, and descend to the next level looking for $S_{v^{\prime}}[i^{\prime}]$.
(3) If $v$ points to a pair of nodes $v_{1},v_{2}$ to the left, at the same level,
then $S_{v}$ occurs inside $S_{v_{1}}\cdot S_{v_{2}}$. With the offset information,
we translate the query $S_{v}[i]$ into a query inside $S_{v_{1}}$ or inside
$S_{v_{2}}$. Since nodes $v_{1}$ and $v_{2}$ are marked, they have children, so we
are now in case (2) and can descend to the next level. Overall, we do $O(1)$
work per level of the block tree, for
a total access time of $O(\log_{r}(n/z))$.
To see that the space of this structure is $O(zr\log_{r}(n/z))$, it suffices to
show that there are $O(z)$ marked nodes per level: we charge $O(r)$ space to the
marked nodes in a level to account for the space of all the nodes in the next
level.
Note that, in a given level, there are only $z$ blocks containing Lempel-Ziv
phrase boundaries. Every pair of nodes $v_{1},v_{2}$ without phrase boundaries in
$S_{v_{1}}\cdot S_{v_{2}}$ has an earlier occurrence because it is inside a
Lempel-Ziv phrase. Thus, a node $v$ containing a phrase boundary in $S_{v}$ may
be marked and force its preceding and following nodes to be marked as well,
but all the other nodes are unmarked.
In conclusion, there can be at most $3z$ marked nodes per level.
Still, the construction is conservative, possibly preserving internal nodes
$v$ such that $S_{v}$ occurs earlier, and no other node points inside $S_{v}$ nor
inside
the block of a descendant of $v$. Such nodes are identified and converted into
leaves with a final postorder traversal [Belazzougui
et al. (2019)].
Example 4.21
Figure 12 shows a block tree for $S=\mathsf{alabaralalabarda\$}$
(we should start with $z(S)=11$ blocks, but $3$ of them make a clearer example).
To access $S[11]$ we descend by the second block ($S_{u_{2}}=\mathsf{alalab}$),
accessing $S_{u_{2}}[5]$. Since the block has children, we descend by the second
($\mathsf{lab}$), aiming for its second position. But this block has no
children because its content is replaced by an earlier occurrence of
$\mathsf{lab}$. The quest for the second position of $\mathsf{lab}$ then
becomes the quest for the third position of $\mathsf{ala}$, within the first
block of the second level. Since this block has children, we descend to its
second child ($\mathsf{a}$), aiming for its first position. This block is
explicit, so we obtain $S[11]=\mathsf{a}$.
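The three access cases can be made concrete with a minimal Python sketch. The node layout below is a partial reconstruction of the access path for $S[11]$ in the block tree of Figure 12; blocks outside that path are simply stored explicitly for brevity, so this is an illustration of the descent, not the full structure.

```python
class Node:
    """Block tree node: exactly one of text / children / pointer is set."""
    def __init__(self, length, text=None, children=None, pointer=None):
        self.len = length
        self.text = text            # case (1): explicit block (last level)
        self.children = children    # case (2): marked node with children
        self.pointer = pointer      # case (3): (v1, v2, off), the block equals
                                    #   (S_{v1} . S_{v2})[off .. off + len - 1]

def access(v, i):
    """Return S_v[i], for 1 <= i <= v.len."""
    if v.text is not None:                      # case (1): read directly
        return v.text[i - 1]
    if v.children is not None:                  # case (2): descend to a child
        for c in v.children:
            if i <= c.len:
                return access(c, i)
            i -= c.len
    v1, v2, off = v.pointer                     # case (3): translate leftwards
    g = off + i - 1                             # position in S_{v1} . S_{v2}
    return access(v1, g) if g <= v1.len else access(v2, g - v1.len)

# Fragment of the block tree of Figure 12 around the access path for S[11].
d1, d2 = Node(2, text="al"), Node(1, text="a")
c11 = Node(3, children=[d1, d2])        # ala = S[1..3], marked
c12 = Node(3, text="bar")               # bar = S[4..6]
c21 = Node(3, text="ala")               # ala = S[7..9]
c22 = Node(3, pointer=(c11, c12, 2))    # lab occurs at offset 2 of ala.bar
u2 = Node(6, children=[c21, c22])       # second top-level block, alalab
```

Accessing $S[11]$ then amounts to `access(u2, 5)`, since $S[11]$ is the fifth symbol of the second top-level block; the call follows exactly the steps of Example 4.21.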
4.2.1 Extracting substrings
By storing the first and last $\log_{\sigma}n$ symbols of every block $S_{v}$,
a chunk of $S$ of that length can also be extracted in time
$O(\log_{r}(n/z))$: we traverse the tree as for a single symbol until
the paths for the distinct symbols of the chunk diverge. At this point, the
chunk spans more than one block, and thus its content can be assembled from
the prefixes and suffixes stored for the involved blocks.
Therefore, we can extract any
$S[i\mathinner{.\,.}j]$ by chunks of $\log_{\sigma}n$ symbols, in time
$O((1+(j-i)/\log_{\sigma}n)\log_{r}(n/z))$.
4.2.2 Karp-Rabin fingerprints
\shortciteNNP18 show, on a slight variant of the block tree, that fingerprints of any
substring $S[i\mathinner{.\,.}j]$ can be computed in time $O(\log_{r}(n/z))$ by storing
some precomputed fingerprints: (1) for every top-level node $u_{j}$,
$\kappa(S_{u_{1}}\cdots S_{u_{j}})$; (2) for every internal node $v$ with children
$v_{1},\ldots,v_{r}$, $\kappa(S_{v_{1}}\cdots S_{v_{j}})$ for all $j$; and (3) for
every leaf $v$ pointing leftwards to the occurrence of $S_{v}$ inside
$S_{v_{1}}\cdot S_{v_{2}}$, $\kappa(S_{v}\cap S_{v_{1}})$. Then, by using the
composition operations of Section 2.4, the computation of
any prefix fingerprint $\kappa(S[1\mathinner{.\,.}i])$ is translated into the computation
of some $\kappa(S_{u_{j}}[1\mathinner{.\,.}i^{\prime}])$, and the computation of any
$\kappa(S_{v}[1\mathinner{.\,.}i^{\prime}])$ is translated into the computation of some
$\kappa(S_{v^{\prime}}[1\mathinner{.\,.}i^{\prime\prime}])$ at the same level (at most once per level) or
at the next level.
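The composition operations involved can be sketched as follows, for the usual polynomial fingerprint. The modulus and base below are arbitrary illustrative choices: given $\kappa(x)$ and $\kappa(y)$ we obtain $\kappa(x\cdot y)$, and conversely $\kappa(y)$ from $\kappa(x\cdot y)$ and $\kappa(x)$.

```python
P, C = (1 << 61) - 1, 256   # modulus and base: arbitrary choices for illustration

def kappa(s):
    """Polynomial Karp-Rabin fingerprint of a string."""
    h = 0
    for ch in s:
        h = (h * C + ord(ch)) % P
    return h

def compose(hx, hy, ly):
    """kappa(x . y) from kappa(x), kappa(y) and |y| = ly."""
    return (hx * pow(C, ly, P) + hy) % P

def split(hxy, hx, ly):
    """kappa(y) from kappa(x . y), kappa(x) and |y| = ly."""
    return (hxy - hx * pow(C, ly, P)) % P
```

With these two operations, a prefix fingerprint $\kappa(S[1\mathinner{.\,.}i])$ can be assembled from the precomputed fingerprints stored along a root-to-node path.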
4.2.3 Other variants
The block tree concept is not as tightly coupled to Lempel-Ziv
parsing as it might seem. \shortciteNKP18 build a similar structure on top of an
attractor $\Gamma$ of $S$, of minimal size $\gamma\leq z$. Their structure
uses $O(\gamma\log(n/\gamma))\subseteq O(z\log(n/z))$ space and extracts any
$S[i\mathinner{.\,.}j]$ in time $O((1+(j-i)/\log_{\sigma}n)\log(n/\gamma))$.
Unlike the block tree, their structure divides $S$
irregularly, defining blocks as the areas between consecutive attractor
positions. We prefer to describe the so-called $\Gamma$-tree [Navarro and
Prezza (2019)],
which is more similar to a block tree and more suitable for indexing.
The $\Gamma$-tree starts with $\gamma$ top-level nodes (i.e., in level $0$),
each representing a block of length $n/\gamma$ of $S$. The nodes of level $l$
represent blocks of length $b_{l}=n/(\gamma\cdot 2^{l})$. At each level $l$, every
node whose block is at distance less than $b_{l}$ from an attractor position is
marked. Marked nodes point to their two children in the next level, whereas
unmarked nodes $v$ become leaves pointing to a pair of nodes $v_{1},v_{2}$ at
the same level, where $S_{v}$ occurs inside $S_{v_{1}}\cdot S_{v_{2}}$ and the
occurrence contains an attractor position. Because of our marking rules,
nodes $v_{1},v_{2}$ exist and are marked.
It is easy to see that the $\Gamma$-tree has height $\log(n/\gamma)$, at most
$3\gamma$ marked nodes per level, and that it requires $O(\gamma\log(n/\gamma))$
space. This space is better than the classical block tree because $\gamma\leq z$. The $\Gamma$-tree can retrieve any $S[i]$ in time $O(\log(n/\gamma))$ and
can be enhanced to match the substring extraction time of \shortciteNKP18. As
mentioned, they can also compute Karp-Rabin fingerprints of substrings of $S$
in time $O(\log(n/\gamma))$.
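The marking rule can be illustrated with a small Python sketch; the string length and attractor positions below are made up for the example, and the sketch also lets one check the $3\gamma$ bound on marked nodes per level on this instance.

```python
# Toy instance: a string of length n with gamma = 2 attractor positions
# (values chosen only for illustration).
n, attractors = 16, [5, 12]
gamma = len(attractors)

def marked_per_level(levels):
    """Number of marked Gamma-tree nodes at each level."""
    counts = []
    for l in range(levels):
        b = n // (gamma * 2 ** l)          # block length b_l at level l
        cnt = 0
        for s in range(1, n + 1, b):       # block [s, s + b - 1]
            e = s + b - 1
            # distance from the block to the closest attractor position
            dist = min(max(s - a, a - e, 0) for a in attractors)
            cnt += dist < b                # marked iff distance < b_l
        counts.append(cnt)
    return counts
```

On this instance the marked counts per level are well below the $3\gamma=6$ bound.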
Recently, \shortciteNKNP20 showed that the original block tree is easily tuned to
use $O(\delta r\log_{r}(n/\delta))\subseteq O(zr\log_{r}(n/z))$ space (recall that
$\delta\leq\gamma\leq z$). The only change needed is to start with $\delta$
top-level blocks. It can then be seen that there are only $O(\delta)$ marked
blocks per level (though the argument is more complex than the previous ones).
The tree height is $O(\log_{r}(n/\delta))$, higher than the block tree. However,
from $z=O(\delta\log(n/\delta))$, they obtain that
$\log(n/\delta)=O(\log(n/z))$, and therefore the difference in query times
is not asymptotically relevant.
4.3 Bookmarking
\shortciteNGGKNP12 combine grammars with Lempel-Ziv parsing to speed up string
extraction over (Lempel-Ziv) phrase prefixes and suffixes, and \shortciteNGGKNP14
extend the result to fingerprinting. We present the ideas in simplified form
and on top of the stronger concept of attractors.
Assume we have split $S$ somehow into $t$ phrases, and let
$\Gamma$ be an attractor on $S$, of size $\gamma$. Using a structure
from Sections 4.1 or 4.2, we provide
access to any substring $S[i\mathinner{.\,.}i+\ell]$ in time $O(\ell+\log n)$, and
computation of its Karp-Rabin fingerprint in time $O(\log n)$. Bookmarking
enables, within $O((t+\gamma)\log\log n)$ additional
space, the extraction of phrase prefixes and suffixes in time
$O(\ell)$ and their fingerprint computation in time $O(\log\ell)$.
Let us first handle extraction. We consider only the case $\ell\leq\log n$,
since otherwise $O(\ell+\log n)$ is already $O(\ell)$. We build a string
$S^{\prime}[1\mathinner{.\,.}n^{\prime}]$ by collecting all the symbols of $S$ that are at distance at
most $\log n$ from an attractor position. It then holds that $n^{\prime}=O(\gamma\log n)$, and every phrase prefix or suffix of $S$ of length up to
$\log n$ appears in $S^{\prime}$, because it has a copy in $S$ that includes a
position of $\Gamma$. By storing a pointer from each of the $t$ phrase
prefixes or suffixes to a copy in $S^{\prime}$, using $O(t)$ space, we can
focus on extracting substrings from $S^{\prime}$.
An attractor $\Gamma^{\prime}$ on $S^{\prime}$ can be obtained by projecting the positions
of $\Gamma$. Further, if the area between two attractor positions in
$\Gamma$ is longer than $2\log n$, its prefix and suffix of length $\log n$
are concatenated in $S^{\prime}$. In that case we add the middle position to
$\Gamma^{\prime}$, to cover the possibly novel substrings. Then, $\Gamma^{\prime}$ has
a position every (at most) $\log n$ symbols of $S^{\prime}$, and it is an attractor of
size at most $2\gamma$ for $S^{\prime}$.
Example 4.22
Consider the attractor $\Gamma=\{4,6,7,8,15,17\}$ of
$S=\mathsf{ala\framebox{$\mathsf{b}$}a\framebox{$\mathsf{r}$}\framebox{$\mathsf%
{a}$}\framebox{$\mathsf{l}$}alabar\framebox{$\mathsf{d}$}a\framebox{$\mathsf{%
\$}$}}$
(the boxed symbols are the attractor positions).
Let us replace $\log n$ by $2$ for the sake of the example. It then holds that
$S^{\prime}=\mathsf{labaralalarda\$}$. The attractor we build for $\Gamma^{\prime}$ includes the
positions $\{3,5,6,7,12,14\}$, projected from $\Gamma$. In addition, since
some middle symbols of the area $S[9\mathinner{.\,.}14]=\mathsf{alabar}$ are removed and it
becomes $S^{\prime}[8\mathinner{.\,.}11]=\mathsf{alar}$, we add one more attractor in the middle,
to obtain $\Gamma^{\prime}=\{3,5,6,7,10,12,14\}$, that is,
$S^{\prime}=\mathsf{la\framebox{$\mathsf{b}$}a\framebox{$\mathsf{r}$}\framebox{%
$\mathsf{a}$}\framebox{$\mathsf{l}$}al\framebox{$\mathsf{a}$}r\framebox{$%
\mathsf{d}$}a\framebox{$\mathsf{\$}$}}$.
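The construction of $S^{\prime}$ and the projection of $\Gamma$ can be reproduced with a few lines of Python on the data of Example 4.22. This is only a sketch: the extra middle positions added for long removed areas (position $10$ in the example) are omitted here.

```python
from bisect import bisect_right

S = "alabaralalabarda$"
Gamma = [4, 6, 7, 8, 15, 17]      # attractor positions of Example 4.22
w = 2                             # plays the role of log n in the example

# S': the symbols of S at distance at most w from some attractor position.
kept = sorted({q for a in Gamma
                 for q in range(max(1, a - w), min(len(S), a + w) + 1)})
S1 = "".join(S[q - 1] for q in kept)

# Project Gamma onto S': each attractor position is itself kept, so its
# projection is its rank among the kept positions.
proj = [bisect_right(kept, a) for a in Gamma]
```

Running this yields exactly the string and the projected positions given in the example.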
As mentioned at the end of Section 3.9, we can build a run-length
grammar of size $O(\gamma^{\prime}\log(n^{\prime}/\gamma^{\prime}))=O(\gamma\log\log n)$ on $S^{\prime}$
(without the need to find the attractor, which would be NP-hard). This grammar
is, in addition, locally balanced, that is, every nonterminal whose parse tree
node has $l$ leaves is of height $O(\log l)$.
Assume we want to extract a substring $S^{\prime}[i^{\prime}\mathinner{.\,.}i^{\prime}+2^{k}]$ for some fixed $i^{\prime}$ and
$0\leq k\leq\log\log n$. We may store the lowest common ancestor $u$ in the
parse tree of the $i^{\prime}$th and $(i^{\prime}+2^{k})$th leaves. Let $v$ and $w$ be the
children of $u$ that are ancestors of those two leaves, respectively. Let
$j_{v}$ be the rank of the rightmost leaf of $v$ and $j_{w}$ that of the leftmost
leaf of $w$. Thus, we have $i^{\prime}\leq j_{v}<j_{w}\leq i^{\prime}+2^{k}$. This implies that,
if $v^{\prime}$ and $w^{\prime}$ are, respectively, the lowest common ancestors of the leaves
with rank $i^{\prime}$ and $j_{v}$, and with rank $j_{w}$ and $i^{\prime}+2^{k}$, then $v^{\prime}$ descends
from $v$ and $w^{\prime}$ descends from $w$, and both $v^{\prime}$ and $w^{\prime}$ are of height
$O(\log 2^{k})=O(k)$ because the grammar is locally balanced. We can then
extract $S^{\prime}[i^{\prime}\mathinner{.\,.}i^{\prime}+2^{k}]$ by extracting the $j_{v}-i^{\prime}+1$ rightmost leaves of $v^{\prime}$
by a simple traversal from the right, in time $O(j_{v}-i^{\prime}+k)$, then the whole
children of $u$ that are between $v$ and $w$, in time $O(2^{k})$, and finally the
$i^{\prime}+2^{k}-j_{w}+1$ leftmost leaves of $w^{\prime}$, in time $O(i^{\prime}+2^{k}-j_{w}+k)$, with a simple
traversal from the left. All this adds up to $O(2^{k})$ work. See
Figure 13.
We store the desired information on the nodes $v^{\prime}$ and $w^{\prime}$ for every position
$S^{\prime}[i^{\prime}]$ to which a phrase beginning $S[i]$ is mapped. Since this is stored for
every value $k$, we use $O(t\log\log n)$ total space. To extract $S^{\prime}[i^{\prime}\mathinner{.\,.}i^{\prime}+\ell]$ for some $0\leq\ell\leq\log n$, we choose
$k=\lceil\log\ell\rceil$ and extract the substring in time
$O(2^{k})=O(\ell)$. The arrangement for phrase suffixes is analogous.
Similarly, we can compute the Karp-Rabin signature of $S^{\prime}[i^{\prime}\mathinner{.\,.}i^{\prime}+\ell]$ by
storing the signature for every nonterminal, and combine the signatures of
(1) the $O(k)=O(\log\ell)$ subtrees that cover the area between $S^{\prime}[i^{\prime}]$ and
$S^{\prime}[j_{v}]$, (2) the children of $u$ between $v$ and $w$ (if any), and (3) the
$O(k)=O(\log\ell)$ subtrees that cover the area between $S^{\prime}[j_{w}]$ and
$S^{\prime}[i^{\prime}+\ell]$ (if any). (If $i^{\prime}+\ell<j_{w}$, we only combine the $O(\log\ell)$
subtrees that cover the area between $S^{\prime}[i^{\prime}]$ and $S^{\prime}[i^{\prime}+\ell]$.)
This can be done in $O(\log\ell)$ time, recall Section 4.1.2.
5 Parsing-Based Indexing
In this section we describe a common technique underlying a large class of
indexes for repetitive string collections. The key idea, already devised by
\shortciteNKU96, builds on the parsing induced by a compression method, which
divides
$S[1\mathinner{.\,.}n]$ into $p$ phrases, $S=S_{1}\cdots S_{p}$. The parsing is used to
classify the occurrences of any pattern $P[1\mathinner{.\,.}m]$ into two types:
•
The primary occurrences are those that cross a phrase boundary.
•
The secondary occurrences are those contained in a single phrase.
The main idea of parsing-based indexes is to first detect the primary
occurrences with
a structure using $O(p)$ space, and then obtain the secondary ones from those,
also using $O(p)$ space. The key property the parsing must satisfy is
that it allows spotting the secondary occurrences from the primary
ones within $O(p)$ space.
5.1 Geometric Structure to Track Primary Occurrences
Every primary occurrence of $P$ in $S$ can be uniquely described by
$\langle i,j\rangle$, indicating:
1.
The leftmost phrase $S_{i}$ it intersects.
2.
The position $j$ of $P$ that aligns at the end of that phrase.
A primary occurrence $\langle i,j\rangle$ then implies that
•
$P[1\mathinner{.\,.}j]$ is a suffix of $S_{i}$, and
•
$P[j+1\mathinner{.\,.}m]$ is a prefix of $S_{i+1}\cdots S_{p}$.
The idea is then to create two sets of strings:
•
$\mathcal{X}$ is the set of all the reversed phrase contents,
$X_{i}=S_{i}^{rev}$, for $1\leq i<p$, and
•
$\mathcal{Y}$ is the set of all the suffixes $Y_{i}=S_{i+1}\cdots S_{p}$,
for $1\leq i<p$.
If, for a given $j$, $P[1\mathinner{.\,.}j]^{rev}$ is a prefix of $X_{i}$ (i.e., $P[1\mathinner{.\,.}j]$
is a suffix of $S_{i}$) and $P[j+1\mathinner{.\,.}m]$ is a prefix of $Y_{i}$, then
$\langle i,j\rangle$ is a primary occurrence of $P$ in $S$. To find
them all, we lexicographically sort the strings in $\mathcal{X}$ and
$\mathcal{Y}$, and set up a bidimensional grid of size $p\times p$. The grid
has exactly $p$ points, one per row and per column: if, for some $i$, the
$x$th element of $\mathcal{X}$ in lexicographic order is $X_{i}$ and the $y$th
element of $\mathcal{Y}$ in lexicographic order is $Y_{i}$, then there is a point
at $(x,y)$ in the grid, which we label $i$.
The primary occurrences of $P$ are then found with the following procedure:
•
For each $1\leq j<m$
1.
Find the lexicographic range $[s_{x},e_{x}]$ of $P[1\mathinner{.\,.}j]^{rev}$ in
$\mathcal{X}$.
2.
Find the lexicographic range $[s_{y},e_{y}]$ of $P[j+1\mathinner{.\,.}m]$ in
$\mathcal{Y}$.
3.
Retrieve all the grid points $(x,y)\in[s_{x},e_{x}]\times[s_{y},e_{y}]$.
4.
For each retrieved point $(x,y)$ labeled $i$, report the
primary occurrence $\langle i,j\rangle$.
It is then sufficient to associate the end position $p(i)=|S_{1}\cdots S_{i}|$
with each phrase $S_{i}$, to know that the primary occurrence $\langle i,j\rangle$ must be reported at position $S[p(i)-j+1\mathinner{.\,.}p(i)-j+m]$. Or we can
simply store $p(i)$ instead of $i$ in the grid.
Example 5.23
Figure 14 shows the grid built on a parsing of
$S=\mathsf{alabaralalabarda\$}$.
Every reversed phrase appears on top, as
an $x$-coordinate, and every suffix appears on the right, as a $y$-coordinate.
Both sets of strings are lexicographically sorted and the points in the grid
connect phrases ($x$) with their following suffix ($y$). Instead of points we
draw the label, which is the number of the phrase in the $x$-coordinate.
A search for $P=\mathsf{la}$ finds its primary occurrences by searching
$\mathcal{X}$ for $P[1]^{rev}=\mathsf{l}$, which yields the range $[s_{x},e_{x}]=[7,8]$, and searching $\mathcal{Y}$ for $P[2]=\mathsf{a}$, which gives
the range $[s_{y},e_{y}]=[2,6]$. The search for the (grayed) zone $[7,8]\times[2,6]$ returns two points, with labels $2$ and $7$, meaning that $P[1]$ aligns
with the end of those phrases, precisely at positions $S[2]$ and $S[8]$.
Note that, by definition, there are no primary occurrences when $|P|=1$.
Still, it will be convenient to find all the occurrences of $P$ that lie at the
end of a phrase. To do this, we carry out the same steps above for
$j=1$, with the understanding that the lexicographic range on $\mathcal{Y}$
is $[s_{y},e_{y}]=[1,p]$.
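The whole procedure can be sketched in Python. We use binary search on the sorted sets and, in place of a real geometric structure, a naive scan over the grid points; for concreteness we take the parsing of Example 5.25, under which $P=\mathsf{al}$ has primary occurrences at $S[1\mathinner{.\,.}2]$ and $S[9\mathinner{.\,.}10]$.

```python
from bisect import bisect_left

S = "alabaralalabarda$"
# Parsing of Example 5.25 (block tree leaves); any suitable parsing works.
phrases = ["a", "l", "a", "b", "a", "r", "ala", "lab", "ar", "d", "a", "$"]
ends, pos = [], 0
for ph in phrases:                 # ends[i] = p(i+1) = |S_1 ... S_{i+1}|
    pos += len(ph)
    ends.append(pos)
p = len(phrases)

# X: reversed phrase contents; Y: the suffix following each phrase (i < p).
Xs = sorted((phrases[i][::-1], i) for i in range(p - 1))
Ys = sorted((S[ends[i]:], i) for i in range(p - 1))
rx = {i: r for r, (_, i) in enumerate(Xs)}   # grid coordinates of point i
ry = {i: r for r, (_, i) in enumerate(Ys)}

def rng(arr, q):
    """Lexicographic range of the strings in arr having prefix q."""
    # "\x7f" is larger than every symbol used in S, so it closes the range.
    return bisect_left(arr, (q,)), bisect_left(arr, (q + "\x7f",)) - 1

def primary(P):
    occs = []
    for j in range(1, len(P)):     # each cut P[1..j] | P[j+1..m]
        sx, ex = rng(Xs, P[:j][::-1])
        sy, ey = rng(Ys, P[j:])
        for i in range(p - 1):     # naive scan instead of a range structure
            if sx <= rx[i] <= ex and sy <= ry[i] <= ey:
                occs.append(ends[i] - j + 1)   # starts at S[p(i)-j+1]
    return sorted(occs)
```

The naive point scan stands in for the two-dimensional range reporting structure discussed next; only that component changes in a real index.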
The challenges are then (1) how to find the intervals in $\mathcal{X}$ and
$\mathcal{Y}$, and (2) how to find the points in the grid range.
5.1.1 Finding the intervals in $\mathcal{X}$ and $\mathcal{Y}$
A simple solution is to perform a binary search on the sets, which requires
$O(\log p)$ comparisons of strings. The desired prefixes of $X_{i}$ or
$Y_{i}$ to compare with, of length at most $m$, must be extracted from a
compressed representation of $S$: we represent $X_{i}$ and $Y_{i}$ just with
the position where they appear in $S$. By using any of the techniques
in Section 4, we extract them in
time $f_{e}=O(m/\log_{\sigma}n+\log n)$
(Section 4.1.1) or $f_{e}=O((1+m/\log_{\sigma}n)\log(n/z))$ (Section 4.2.1).
Since this is repeated for every $1\leq j<m$, all the intervals are found in
time $O(f_{e}\;m\log p)$, which is in $O((m+\log n)m\log n)$ with the first
tradeoff. This complexity can be reduced to $O(m^{2}\log n)$ on grammars, by
exploiting the fact that the phrases are defined as the leaves of the grammar
tree, and therefore we always need to extract prefixes or suffixes of
nonterminal expansions (Section 4.1.3). The same time
can be obtained with $O((p+\gamma)\log\log n)$ additional space by using
bookmarking (Section 4.3). In all cases, however, the
complexity stays quadratic in $m$: we need to search for $m-1$ prefixes/suffixes
of $P$ of length $\Theta(m)$.
The quadratic term can be removed
by using a batched search for all the suffixes $P[j+1\mathinner{.\,.}m]$ together
(or all the suffixes $P[1\mathinner{.\,.}j]^{rev}$). The technique is based on compact
tries and Karp-Rabin fingerprints [Belazzougui
et al. (2010), Gagie et al. (2014), Bille et al. (2017), Gagie
et al. (2018b), Christiansen et al. (2019)].
The idea is to represent the sets $\mathcal{X}$ and $\mathcal{Y}$ with compact
tries, storing fingerprints of edge labels instead of substrings, and then
verifying the candidates to ensure they are actual matches. The fingerprints
of all the suffixes sought are computed in time $O(m)$. The trie is actually a
version called z-fast trie [Belazzougui et al. (2009), Belazzougui
et al. (2010)], which allows searching
for a string of length $\ell$ with $O(\log\ell)$ fingerprint comparisons. Once
all the candidate results are found, a clever method does all the
verifications with one single string extraction, exploiting the fact that we
are looking for various suffixes of a single pattern. If a fingerprint
$\kappa(S[i\mathinner{.\,.}j])$
is computed in time $f_{h}$, then the lexicographic ranges of any $k$ suffixes
of $P[1\mathinner{.\,.}m]$ can be found in time $O(m+k(f_{h}+\log m)+f_{e})$. The structure
uses $O(p)$ space, and is built in $O(n\log n)$ expected time to ensure
that there are no fingerprint collisions in $S$.
As shown in Sections 4.1.2 and 4.2.2, we can
compute Karp-Rabin fingerprints in time $O(\log n)$ or $O(\log(n/z))$ using
grammars or block trees, respectively. To search for the $k=m-1$ suffixes of $P$
(or of its reverse) we then need time $O(m\log n)$.
This approach can be applied with block trees built on Lempel-Ziv
($O(z\log(n/z))$ space, and even $O(\delta\log(n/\delta))$ as shown in
Section 4.2.3), or built on attractors
($O(\gamma\log(n/\gamma))$ space). It can also be applied on grammars
($O(g)$ space), and even on run-length grammars (see
Section 4.1.4), within space
$O(g_{rl})\subseteq O(\delta\log(n/\delta))$. Combined with bookmarking
(Section 4.3), the time can be reduced to $O(m\log m)$
because $f_{h}=O(\log m)$,
yet we need $O(z\log\log n)$ or $O(g\log\log n)$ further space.
A recent twist [Christiansen and
Ettienne (2018), Christiansen et al. (2019)] is that a specific type of grammar, called
locally consistent grammar (built via various rounds of the better-known
locally consistent parsings [Cole and Vishkin (1986), Sahinalp and
Vishkin (1995), Mehlhorn et al. (1997), Batu et al. (2006)]),
can speed up the searches because there
are only $k=O(\log m)$ cuts of $P$ that deserve consideration. In a locally
consistent grammar, the subtrees of the parse tree expanding two identical
substrings $S[i\mathinner{.\,.}j]=S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ are identical except for $O(1)$ nodes
in each level. \shortciteNCEKNP19 show that locally balanced and locally consistent
grammars of size $O(\gamma\log(n/\gamma))$ can be built without the need to
compute $\gamma$ (which would be NP-hard). Further, we can obtain $f_{e}=O(m)$
time with grammars because, as explained, we extract only rule prefixes and
suffixes when searching $\mathcal{X}$ or $\mathcal{Y}$
(Section 4.1.3). They also show how to compute Karp-Rabin
fingerprints in time $O(\log^{2}m)$ (without the extra space bookmarking uses).
The time to find all the relevant intervals then decreases to
$O(m+k(f_{h}+\log m)+f_{e})\subseteq O(m)$.
5.1.2 Finding the points in the two-dimensional range
This is a well-studied geometric problem [Chan
et al. (2011)]. We can represent $p$
points on a $p\times p$ grid within $O(p)$ space, so that we can report all
the $t$ points within any given two-dimensional range in time
$O((1+t)\log^{\epsilon}p)$, for any constant $\epsilon>0$. By using slightly
more space, $O(p\log\log p)$, the time drops to $O((1+t)\log\log p)$, and
if we use $O(p\log^{\epsilon}p)$ space, the time drops to $O(\log\log p+t)$,
thus enabling constant time per occurrence reported.
If we look for the $k=m-1$ prefixes and suffixes of $P$ with $O(p)$ space, the
total search
time is $O(m\log^{\epsilon}p)\subseteq O(m\log n)$, plus $O(\log^{\epsilon}p)$
per primary occurrence. If we reduce $k$ to $O(\log m)$, the search time
drops to $O(\log m\log^{\epsilon}p)$, plus $O(\log^{\epsilon}p)$
per primary occurrence. \shortciteNCEKNP19 reduce this $O(\log m\log^{\epsilon}p)$
additive term to just $O(\log^{\epsilon}\gamma)$ by dealing with short patterns
separately.
5.2 Tracking Secondary Occurrences
The parsing method must allow us to infer all the secondary occurrences from
the primary ones. The precise method for propagating primary to secondary
occurrences depends on the underlying technique.
5.2.1 Lempel-Ziv parsing and macro schemes
The idea of primary and secondary occurrences was first devised for the
Lempel-Ziv parsing [Kärkkäinen and Ukkonen (1996)], of size $p=z$. Note that the leftmost occurrence
of any pattern $P$ cannot be secondary, because then it would be inside
a phrase that would occur earlier in $S$. Secondary occurrences can then be
obtained by finding all the phrase sources that cover each primary occurrence.
Each such source produces a secondary occurrence at the phrase that copies the
source. In turn, one must find all the phrase sources that cover these secondary
occurrences to find further secondary occurrences, and so on. All the
occurrences are found in that way.
A simple technique [Kreft and
Navarro (2013)] is to maintain all the sources $[b_{k},e_{k}]$ of
the $z$ phrases of $S$ in arrays $B[1\mathinner{.\,.}z]$ (holding all $b_{k}$s) and
$E[1\mathinner{.\,.}z]$ (holding all $e_{k}$s), both sorted by increasing endpoint $e_{k}$.
Given an occurrence $S[i\mathinner{.\,.}j]$, a successor search on $E$ finds
(in $O(\log\log n)$ time [Pătraşcu and Thorup (2006)]) the smallest endpoint $e_{k}\geq j$, and
therefore all the sources
in $E[k\mathinner{.\,.}z]$ end at or after $S[i\mathinner{.\,.}j]$ (those in $E[1\mathinner{.\,.}k-1]$ cannot cover
$S[i\mathinner{.\,.}j]$ because they end before $j$). We then want to retrieve
the values $B[l]\leq i$ with $k\leq l\leq z$, that is, the sources that
in addition start no later than $i$.
A technique to retrieve each source covering $S[i\mathinner{.\,.}j]$ in constant time is
as follows. We build a Range Minimum Query (RMQ) data structure
[Bender et al. (2005), Fischer and
Heun (2011)] on $B$, which uses $O(z)$ bits and returns, in constant
time, the minimum value in any range $B[k\mathinner{.\,.}k^{\prime}]$. We first query for
$B[k\mathinner{.\,.}z]$. Let the minimum be at $B[l]$. If $B[l]>i$, then this source
does not cover $S[i\mathinner{.\,.}j]$, and no other source does because this is the
one starting the earliest. We can therefore stop. If, instead, $B[l]\leq i$,
then we have a secondary occurrence in the target of $[b_{l},e_{l}]$. We must
report that occurrence and recursively look for other sources covering it.
In addition, we must recursively look for sources that start early enough
in $B[k\mathinner{.\,.}l-1]$ and $B[l+1\mathinner{.\,.}z]$. Since we spot an occurrence each time
we find a suitable value of $B$ in the current range, and stop as soon as
there are no further values, it is easy to see that
we spot each secondary occurrence in constant time.
Example 5.24
Figure 15 shows the search for $P=\mathsf{la}$ on the
Lempel-Ziv parse of Figure 2. There is only one primary
occurrence at $S[2\mathinner{.\,.}3]$. The sources $[b_{k},e_{k}]$ are, in increasing
order of $e_{k}$, $[1,1],[1,3],[3,3],[2,6],[11,11]$, so we have the arrays
$B=\langle 1,1,3,2,11\rangle$ and $E=\langle 1,3,3,6,11\rangle$. A
successor search for $3$ in $E$ shows that $E[2\mathinner{.\,.}5]$ contains all the
sources that finish at or after position $3$. We now use RMQs on $B$ to find
those that start at or before position $2$. The first candidate is $RMQ(2,5)=2$, which identifies the valid source $[B[2],E[2]]=[1,3]$ covering our
primary occurrence. The target of this source is $S[7\mathinner{.\,.}9]$, which contains
a secondary occurrence at $S[8\mathinner{.\,.}9]$ (the offset of the occurrence within the
target is the same as within the source). There is no other source covering
$S[8\mathinner{.\,.}9]$, so that secondary occurrence does not propagate further. We
continue with our RMQs, now on the remaining interval $B[3\mathinner{.\,.}5]$. The query
$RMQ(3,5)=4$ yields the source $[B[4],E[4]]=[2,6]$, which also covers the
primary occurrence. Its target, $S[10\mathinner{.\,.}14]$, then contains a secondary
occurrence at the same offset as in the source, in $S[10\mathinner{.\,.}11]$. Again,
no source covers this secondary occurrence. Continuing,
we must check the intervals $B[3\mathinner{.\,.}3]$ and $B[5\mathinner{.\,.}5]$. But
since both are larger than $2$, they do not cover the primary occurrence
and we are done.
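The recursion of Example 5.24 can be reproduced with a short Python sketch. The target starting positions $T$ below are reconstructed from the parse of Figure 2 (a|l|a|b|a|r|ala|labar|d|a|\$), and the constant-time RMQ is replaced by a linear scan for simplicity.

```python
from bisect import bisect_left

# Sources [b_k, e_k], sorted by increasing endpoint e_k, and the starting
# position in S of the phrase that copies each source (reconstructed).
B = [1, 1, 3, 2, 11]    # source starts
E = [1, 3, 3, 6, 11]    # source ends (increasing)
T = [3, 7, 5, 10, 16]   # start of the target (copying phrase) of each source

def report(i, j, out):
    """Collect the secondary occurrences derived from the occurrence S[i..j]."""
    k = bisect_left(E, j)                 # successor: smallest e_k >= j
    def recurse(lo, hi):
        if lo > hi:
            return
        l = min(range(lo, hi + 1), key=B.__getitem__)  # RMQ on B[lo..hi]
        if B[l] > i:
            return                        # no source in this range covers S[i..j]
        i2 = T[l] + (i - B[l])            # same offset inside the target
        out.append((i2, i2 + j - i))
        report(i2, i2 + j - i, out)       # sources covering the new occurrence
        recurse(lo, l - 1)                # other early-starting sources
        recurse(l + 1, hi)
    recurse(k, len(B) - 1)

occ = []
report(2, 3, occ)   # the primary occurrence S[2..3] of P = "la"
```

The call reports exactly the two secondary occurrences found in the example, $S[8\mathinner{.\,.}9]$ and $S[10\mathinner{.\,.}11]$.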
Thus, if we use a Lempel-Ziv parse, we require $O(z)$ additional space to
track the secondary occurrences. With $occ$ occurrences in total,
the time is $O(occ\log\log n)$, dominated by the successor searches.
Note that the scheme works well also under bidirectional macro schemes,
because it does not need that the targets be to the right of the sources.
Thus, the space can be reduced to $O(b)$.
5.3 Block Trees
The sequence of leaves in any of the block tree variants we described
partitions $S$ into a sequence of $p$ phrases (where $p$ can be as small
as $O(\delta\log(n/\delta))$, see Section 4.2.3).
Each such phrase is either explicit (if it is at the last level) or it
has another occurrence inside an internal node of the same level.
This parsing also permits applying the scheme of primary and secondary
occurrences, if we use leaves of length $1$ [Navarro and
Prezza (2019)]. It is not hard to see
that every secondary occurrence $S[i\mathinner{.\,.}j]$, with $i<j$, has a copy that
crosses a phrase boundary: $S[i\mathinner{.\,.}j]$ is inside a block $S_{v}$ that is a
leaf, thus it points to another occurrence of $S_{v}$ inside $S_{v_{1}}\cdot S_{v_{2}}$. If the copy $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ spans both children, then it is a primary
occurrence. Otherwise it falls inside $S_{v_{1}}$ or $S_{v_{2}}$, and then
it is also inside a block of the next block tree level. At this next
level, $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ may fall inside a block $S_{v^{\prime}}$ that is a leaf, and
thus it points leftward towards another occurrence of $S_{v^{\prime}}$. In this case,
$S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ is also a secondary occurrence and we will discover $S[i\mathinner{.\,.}j]$
from it; in turn $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ will be discovered from its leftward occurrence
$S[i^{\prime\prime}\mathinner{.\,.}j^{\prime\prime}]$, and so on. Instead, $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ may fall inside an internal
block $S_{v^{\prime}}=S_{v^{\prime}_{1}}\cdot S_{v^{\prime}_{2}}$. If $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ spans both children,
then it is a primary occurrence, otherwise it appears inside a block of the
next level, and so on. We continue this process until we find an occurrence of
$S[i\mathinner{.\,.}j]$ that crosses a block boundary, and thus it is
primary. Our original occurrence $S[i\mathinner{.\,.}j]$ is then found from the primary
occurrence, through a chain of zero or more intermediate secondary occurrences.
Example 5.25
Consider the parsing $S=\mathsf{a|l|a|b|a|r|ala|lab|ar|d|a|\$}$ induced by the
block tree of Figure 12, where the phrases of length 1 are the
leaves of the last level. Then $P=\mathsf{al}$ has two primary occurrences,
at $S[1\mathinner{.\,.}2]$ and $S[9\mathinner{.\,.}10]$. The sources of blocks are $[1,3]$, $[2,4]$,
and $[5,6]$. Therefore the source $[1,3]$ covers the first primary occurrence,
which then has a secondary occurrence at the target, in $S[7,8]$.
The process and data structures are then almost identical to those for the
Lempel-Ziv parsing. We collect the sources of all the leaves at all the
levels in arrays $B$ and $E$, as for Lempel-Ziv, and use them to spot,
directly or transitively, all the secondary occurrences.
In some variants, such as block trees built on attractors
(see Section 4.2.3) the sources can be before or after
the target in $S$, as in bidirectional macro schemes, and the scheme works
equally well.
5.4 Grammars
In the case of context-free grammars [Claude and
Navarro (2012)] (and also run-length
grammars), the partition of $S$ induced by the leaves of the grammar tree
induces a suitable
parsing: a secondary occurrence $S[i\mathinner{.\,.}j]$ inside a leaf labeled by
nonterminal $A$ has another occurrence $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ below the occurrence of
$A$ as an internal node of the grammar tree. If $S[i^{\prime}\mathinner{.\,.}j^{\prime}]$ is inside a
leaf labeled $B$ (with $|B|<|A|$), then there is another occurrence
$S[i^{\prime\prime}\mathinner{.\,.}j^{\prime\prime}]$ below the internal node labeled $B$, and so on. Eventually,
we find a copy crossing a phrase boundary, and this is our primary occurrence.
To each phrase boundary between grammar tree leaves $X$ and $Y$, we associate
the lowest common ancestor node $A=lca(X,Y)$ that covers both leaves. The set
$\mathcal{X}$ is then formed by the reverse expansions, $exp(X)^{rev}$, of
all the leaves $X$ of the grammar tree, whereas $\mathcal{Y}$ contains the
part of $exp(A)$ that follows $exp(X)$ (we do not need to index full suffixes).
The grid is formed in the same way, with the points indicating the corresponding
node $A$ and the offset of $exp(Y)$ inside $exp(A)$.
Once we establish that $P$ occurs inside $exp(A)$ at position $j$, we must
track $j$ upwards in the grammar tree, adjusting it at each step, until
locating the occurrence in the start symbol, which gives the position where $P$
occurs in $S$. To support this upward traversal we store, in each grammar tree
node $A$ with parent $C$, the offset of $exp(A)$ inside $exp(C)$. This is
added to $j$ when we climb from $A$ to $C$.
In addition, every other occurrence of $A$ in the grammar tree contains a
secondary occurrence of $P$, with the same offset $j$. Note that all those
other occurrences of $A$ are leaves in the grammar tree.
Each node labeled $A$ has then
a pointer to the next grammar tree node labeled $A$, forming a linked list that
must be traversed to spot all the secondary occurrences (in any desired order;
the only restriction is that the list must start at the only internal node
labeled $A$). Further, if $C$ is the parent of $A$, then any other occurrence
of $C$ in the grammar tree (which is necessarily a leaf as well) also contains a
new secondary occurrence of $P$.
The process then starts at each primary occurrence $A$ and recursively moves
to (1) the parent of $A$ (adjusting the offset $j$), and (2) the next node
labeled $A$. The recursive calls end when we reach the grammar tree root in
step (1), which occurs once per distinct secondary occurrence of $P$, and
when there is no next node to consider in step (2).
Example 5.26
Figure 16 shows how the only primary occurrence of
$P=\mathsf{al}$ in $S=\mathsf{alabaralalabarda\$}$, using the parse of
Figure 4 (and Figure 14), is propagated using the grammar
tree. The primary occurrence, $S[1\mathinner{.\,.}2]$, spans the first two leaves, and the
pointer of the grid sends us to the internal node labeled $A$, which is the
lowest common ancestor of those two leaves, with offset $1$ (indeed,
$exp(A)=\mathsf{al}$). To find its position in $S$, we go up to $B$, the parent
of $A$, where the offset is still $1$ because the offset of $A$ within $B$ is
$0$ ($exp(B)=\mathsf{alabar}$). Finally, we
reach $C$, the parent of $B$ and the tree root, where the offset is still $1$
and thus we report the primary occurrence $S[1\mathinner{.\,.}2]$.
The secondary occurrences are found by recursively following the dashed arrows
towards the other occurrences of the intermediate nonterminals. From the
internal node $A$ we reach the only other occurrence of $A$ in the grammar
tree (which is a leaf; recall that there is only one internal node per
label). This leaf has offset $6$ within its parent $C$, so the offset within
$C$ is $1+6=7$. We then move to
$C$ and report a secondary occurrence at $S[7\mathinner{.\,.}8]$. The list of the $A$s
ends there. Similarly, when we arrive at the internal node $B$, we follow the
dashed arrow towards the only other occurrence of $B$ in the grammar tree.
This has offset $8$ within its parent $C$, so when we move up to $C$ we report
the secondary occurrence $S[9\mathinner{.\,.}10]$.
Note that the bold arrows, solid and dashed, form a binary tree rooted at the
primary occurrence. The leaves that are left children are list ends and those
that are right children are secondary occurrences. Thus the total number of
nodes (and the total cost of the tracking) is proportional to the
number of occurrences reported.
\shortciteNCNspire12 show that the cost of the traversal indeed amortizes to
constant time per secondary occurrence if we ensure that every nonterminal $A$
occurs at least twice in the grammar tree (as in our example). Nonterminals
appearing only once can be
easily removed from the grammar. If we cannot modify the grammar, we can
instead make each node $A$ point not to its parent, but to its lowest
ancestor that appears at least twice in the grammar tree (or to the root,
if no such ancestor exists) [Christiansen et al. (2019)]. This ensures that we report the
$occ_{s}$ secondary occurrences in time $O(occ_{p}+occ_{s})$.
\shortciteNCEKNP19 show how the process is adapted to handle the special nodes
induced by the rules $A\rightarrow X^{k}$ of run-length grammars.
5.5 Resulting Tradeoffs
By considering the cost to find the primary and secondary occurrences, and
sticking to the best possible space in each case, it turns out that we can
find all the $occ$ occurrences of $P[1\mathinner{.\,.}m]$ in $S[1\mathinner{.\,.}n]$ either:
•
In time $O(m\log n+occ\log^{\epsilon}n)$, within
space $O(g_{rl})\subseteq O(\delta\log(n/\delta))$.
•
In time $O(m+(occ+1)\log^{\epsilon}n)$, within
space $O(\gamma\log(n/\gamma))$.
The first result is obtained by using a run-length grammar to define the parse
of $S$ (thus the grid is of size $g_{rl}\times g_{rl}$), to access nonterminal
prefixes and suffixes and compute fingerprints, and to track the secondary
occurrences. Note that finding the smallest grammar is NP-hard, but there are
ways to build run-length grammars of size $O(\delta\log(n/\delta))$, recall the
end of Section 3.10. The second result uses the improvement based
on locally consistent parsing (end of Section 5.1.1); recall that
we do not need to compute $\gamma$ (which is NP-hard too) in order to obtain it.
5.5.1 Using more space
We can combine the slightly larger grid representations
(Section 5.1.2) with bookmarking in
order to obtain improved times for the first result. We can use a bidirectional
macro scheme to define the phrases, so that the grid is of size $b\times b$,
and use the geometric data structure of size $O(b\log\log b)$ that reports the
$occ$ points in time $O((1+occ)\log\log b)$. We then use a run-length grammar
to provide direct access to $S$, and enrich it with bookmarking
(Section 4.3) to provide substring extraction and Karp-Rabin
hashes (to find the ranges in $\mathcal{X}$ and $\mathcal{Y}$) in
time $f_{e}=O(m)$ and $f_{h}=O(\log m)$, respectively, at phrase boundaries.
This adds $O((b+\gamma)\log\log n)=O(b\log\log n)$ space. The time to
find the $m-1$ ranges in $\mathcal{X}$ and $\mathcal{Y}$ is then
$O(m+m(f_{h}+\log m)+f_{e})=O(m\log m)$. The $m-1$ geometric searches take time
$O(m\log\log b+occ\log\log b)$, and the secondary occurrences are reported
in time $O(occ\log\log n)$ (Section 5.2.1).
\shortciteNGGKNP14 get rid of the $O(m\log\log b)$ term by dealing
separately with short patterns (see their Section 4.2; it adapts to our
combination of structures without any change).
Note again that it is NP-hard to find the smallest bidirectional macro scheme,
but we can build suboptimal ones from the Lempel-Ziv parse, the lex-parse, or
the BWT runs, for example (Sections 3.2, 3.6, and
3.7). Recall also that there are heuristics to build
bidirectional macro schemes smaller than $z$ [Russo
et al. (2020)].
The larger grid representation, of size $O(p\log^{\epsilon}p)$ for $p$ points,
reports primary occurrences in constant time, but to maintain that constant time
for secondary occurrences we need that the parse comes from a (run-length)
grammar (Section 5.4). We must therefore use a grid of
$g_{rl}\times g_{rl}$. The grammar
already extracts phrase (i.e., nonterminal) prefixes and suffixes in constant
time, yet bookmarking is still useful to compute fingerprints in $O(\log m)$
time. We can then search:
•
In time $O(m\log m+occ\log\log n)$, within
space $O(g_{rl}+b\log\log n)$.
•
In time $O(m\log m+occ)$, within
space $O(g_{rl}\log^{\epsilon}n)$.
Finally, using larger grids directly on the result that uses
$O(\gamma\log(n/\gamma))$ space yields the first optimal-time index
[Christiansen et al. (2019)]. We can search:
•
In time $O((m+occ)\log\log n)$, within
space $O(\gamma\log(n/\gamma)\log\log n)$.
•
In time $O(m+occ)$, within
space $O(\gamma\log(n/\gamma)\log^{\epsilon}n)$.
5.5.2 History
The generic technique we have described encompasses a large number of indexes
found in the literature. As said, \shortciteNKU96 pioneered the idea of primary
and secondary occurrences based on Lempel-Ziv. Their index is not properly a
compressed index because it stores $S$ in
plain form, and uses $O(z)$ additional space to store the grid and a mechanism
of stratified lists of source areas to find the secondary occurrences.
Figure 17 shows a diagram with the main ideas that appeared
over time and the influences between contributions. The Appendix gives a
detailed account. The best results to date are those we have given explicitly,
plus some intermediate tradeoffs given by \shortciteNCEKNP19 (see their Table I).
6 Suffix-Based Indexing
Suffix arrays and suffix trees (Section 2.3) are data structures
designed to support indexed searches. They are of size $O(n)$, but large in
practice. We next describe how their search algorithms translate into structures
of size $O(r)$ or $O(e)$, which are related to the regularities induced by
repetitiveness on suffix arrays and trees.
6.1 Based on the BWT
The suffix array search based on the BWT dates back to \shortciteNFM00,FM05, who
showed that, with appropriate data structures, $S^{bwt}$ is sufficient to
simulate a suffix array search and find the range $A[sp\mathinner{.\,.}ep]$ of the suffixes
that start with a search pattern $P$. Their method, called backward
search, consecutively finds the interval $A[sp_{i}\mathinner{.\,.}ep_{i}]$ of the suffixes
starting with $P[i\mathinner{.\,.}m]$, by starting with $[sp_{m+1}\mathinner{.\,.}ep_{m+1}]=[1\mathinner{.\,.}n]$
and then computing, for $i=m$ to $i=1$,
$$\displaystyle sp_{i}$$
$$\displaystyle=$$
$$\displaystyle C[P[i]]+rank_{P[i]}(S^{bwt},sp_{i+1}-1)+1,$$
$$\displaystyle ep_{i}$$
$$\displaystyle=$$
$$\displaystyle C[P[i]]+rank_{P[i]}(S^{bwt},ep_{i+1}),$$
where $C[c]$ is the number of occurrences in $S$ of symbols lexicographically
smaller than $c$, and $rank_{c}(S^{bwt},j)$ is the number of occurrences of $c$
in $S^{bwt}[1\mathinner{.\,.}j]$. (If at some step $sp_{i}>ep_{i}$, then $P$ does not
occur in $S$ and the backward search must stop.)
Further, if $A[j]=i$, that is, the lexicographically
$j$th smallest suffix of $S$ is $S[i\mathinner{.\,.}]$, then $A[j^{\prime}]=i-1$ for $c=S^{bwt}[j]$
and
$$j^{\prime}~{}=~{}LF(j)~{}=~{}C[c]+rank_{c}(S^{bwt},j),$$
which is called an LF-step from $j$. By performing LF-steps on the BWT
of $S$, we virtually traverse $S$ backwards.
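As an illustration, backward search can be sketched in a few lines of Python. Here $rank$ is computed by brute force over a plain string, whereas the actual indexes use compressed rank structures; the sketch reproduces the logic, not the complexity.

```python
def backward_search(bwt, C, P):
    """Return the 1-based interval [sp, ep] of suffixes starting with P,
    or None if P does not occur in S. C[c] counts symbols of S smaller
    than c; rank is brute force (real indexes use succinct structures)."""
    rank = lambda c, j: bwt[:j].count(c)   # occurrences of c in S^bwt[1..j]
    sp, ep = 1, len(bwt)
    for c in reversed(P):                  # process P[i] for i = m .. 1
        sp = C.get(c, 0) + rank(c, sp - 1) + 1
        ep = C.get(c, 0) + rank(c, ep)
        if sp > ep:
            return None                    # P does not occur in S
    return sp, ep
```

For $S=\mathsf{alabaralalabarda\$}$, with $S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$, this reproduces the interval $A[13\mathinner{.\,.}15]$ of Example 6.28.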
To understand the rationale of the backward search formula, let us start with
the backward step. Recall from Section 3.6 that $c=S^{bwt}[j]$ is the
symbol preceding the suffix $A[j]$, $c=S^{bwt}[j]=S[A[j]-1]$. The function
$j^{\prime}=LF(j)$ computes the position in $A$ of the suffix starting at $A[j]-1$. First,
the $C[c]$ suffixes that start with symbols smaller than $c$ precede position $j^{\prime}$ in $A$.
Second, the suffixes $S[A[j]-1\mathinner{.\,.}]$ are stably
sorted by $\langle S[A[j]-1],A[j]\rangle=\langle S^{bwt}[j],A[j]\rangle$,
that is, by their first symbol and breaking ties with the rank of the suffix
that follows. Therefore, $LF(j)$ adds the number $C[c]$ of suffixes starting
with symbols less than $c$ and the number $rank_{c}(S^{bwt},j)$ of suffixes that
start with $c$ up to the one we want to translate, $A[j]$.
Example 6.27
Consider $S=\mathsf{alabaralalabarda\$}$, with
$S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$, in Figure 6.
From $A[14]=10$ and $S^{bwt}[14]=\mathsf{a}$ (which correspond to the suffix
$S[10\mathinner{.\,.}]=\mathsf{labarda\$}$), we compute $LF(14)=C[\mathsf{a}]+rank_{\mathsf{a}}(S^{bwt},14)=1+5=6$. Indeed $A[6]=9=A[14]-1$,
corresponding to the suffix $\mathsf{alabarda\$}$.
Let us now consider the backward search steps. Note that we know that the
suffixes in $A[sp_{i+1}\mathinner{.\,.}ep_{i+1}]$ start with $P[i+1\mathinner{.\,.}m]$. The range
$A[sp_{i}\mathinner{.\,.}ep_{i}]$ lists the suffixes that start with $P[i\mathinner{.\,.}m]$, that is, they
start with $P[i]$ and then continue with a suffix in $A[sp_{i+1}\mathinner{.\,.}ep_{i+1}]$.
We then want to capture all the suffixes in $A[sp_{i+1}\mathinner{.\,.}ep_{i+1}]$ that are
preceded by $c=P[i]$ and map them to their corresponding position in $A$. Since
they will be mapped to a range, the backward search formula is a way to perform
all those LF-steps in one shot.
Example 6.28
Consider again $S=\mathsf{alabaralalabarda\$}$, with
$S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$, in Figure 6.
To search for $P=\mathsf{la}$, we start with the range $A[sp_{3}\mathinner{.\,.}ep_{3}]=[1\mathinner{.\,.}17]$. The first backward step, for $P[2]=\mathsf{a}$, gives $sp_{2}=C[\mathsf{a}]+rank_{\mathsf{a}}(S^{bwt},0)+1=1+0+1=2$ and $ep_{2}=C[\mathsf{a}]+rank_{\mathsf{a}}(S^{bwt},17)=1+8=9$. Indeed,
$A[2\mathinner{.\,.}9]$ is the range of all the suffixes starting with $P[2\mathinner{.\,.}2]=\mathsf{a}$. The second and final backward step, for $P[1]=\mathsf{l}$, gives
$sp_{1}=C[\mathsf{l}]+rank_{\mathsf{l}}(S^{bwt},1)+1=12+0+1=13$ and $ep_{1}=C[\mathsf{l}]+rank_{\mathsf{l}}(S^{bwt},9)=12+3=15$. Indeed, $A[13\mathinner{.\,.}15]$
is the range of the suffixes starting with $P=\mathsf{la}$, and thus the
occurrences of $P$ are at $A[13]=2$, $A[14]=10$, and $A[15]=8$.
Note that, if we knew that the suffixes in $A[2\mathinner{.\,.}9]$ preceded by
$\mathsf{l}$ were at positions $3$, $4$, and $6$, and we had computed
$LF(3)$, $LF(4)$, and $LF(6)$, we would also have obtained the interval
$A[13\mathinner{.\,.}15]$.
\shortciteNFM05 and \shortciteNFMMN07 show how to represent $S^{bwt}$ within
$nH_{k}(S)+o(n\log\sigma)$ bits of space, that is, asymptotically within the
$k$th order empirical entropy of $S$, while supporting pattern searches in time
$O(m\log\sigma+occ\log^{1+\epsilon}n)$ for any constant $\epsilon>0$.
These concepts are well covered in other surveys [Navarro and
Mäkinen (2007)], so we will not
develop them further here; we will jump directly to how to implement them when
$S$ is highly repetitive.
6.1.1 Finding the interval
\shortciteNMN05 showed how to compute $rank$ on $S^{bwt}$ when it is represented
in run-length form (i.e., as a sequence of $r$ runs). We present the results in
a more recent setup [Mäkinen et al. (2010), Gagie
et al. (2020)] that ensures $O(r)$ space. The positions
that start runs in $S^{bwt}$ are stored in a predecessor data structure that
also tells the rank of the corresponding runs. A string $S^{\prime}[1\mathinner{.\,.}r]$ stores the
symbol corresponding to each run of $S^{bwt}$, in the same order of $S^{bwt}$.
The run lengths are also stored in another array, $R[1\mathinner{.\,.}r]$, but they are
stably sorted
lexicographically by the associated symbol. More precisely, if $R[t]$ is
associated with symbol $c$, it stores the cumulative length of the runs
associated with $c$
in $R[1\mathinner{.\,.}t]$. Finally, $C^{\prime}[c]$ tells the number of runs of symbols $d$ for
all $d<c$. Then, to compute $rank_{c}(S^{bwt},j)$, we:
1.
Find the predecessor $j^{\prime}$ of $j$, so that we know that $j$ belongs to the
$k$th run in $S^{bwt}$, which starts at position $j^{\prime}\leq j$.
2.
Determine that the symbol of the current run is $c^{\prime}=S^{\prime}[k]$.
3.
Compute $p=rank_{c}(S^{\prime},k-1)$ to determine that there are $p$ runs
of $c$ before the current run.
4.
The position of the run $k-1$ in $R$ is $C^{\prime}[c]+p$: $R$ lists the $C^{\prime}[c]$
runs of symbols less than $c$, and then the $p$ runs of $c$ preceding our run
$k$ (because $R$ is stably sorted, upon ties it retains the order of
the runs in $S^{bwt}$).
5.
We then know that $rank_{c}(S^{bwt},j^{\prime}-1)=R[C^{\prime}[c]+p]$.
6.
This is the final answer if $c\not=c^{\prime}$. If $c=c^{\prime}$, then $j$ is within a
run of $c$s and thus we must add $j-j^{\prime}+1$ to the answer.
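The six steps can be sketched as follows. In this illustration the predecessor structure is replaced by a binary search (`bisect`) and $rank$ on $S^{\prime}$ by brute force, so only the logic, not the stated complexity, is reproduced.

```python
import bisect

def rlbwt_rank(starts, Sp, R, Cp, c, j):
    """rank_c(S^bwt, j) from the run-length representation.
    starts: 1-based run start positions (sorted); Sp: run symbols in BWT
    order; R: cumulative run lengths, stably sorted by symbol; Cp[c]:
    number of runs of symbols smaller than c."""
    k = bisect.bisect_right(starts, j)      # (1) j lies in the k-th run,
    jp = starts[k - 1]                      #     which starts at jp <= j
    cp = Sp[k - 1]                          # (2) symbol of the current run
    p = Sp[:k - 1].count(c)                 # (3) runs of c before run k
    # (4)+(5) R[Cp[c]+p] (1-based) equals rank_c(S^bwt, jp-1); 0 if p = 0
    ans = R[Cp[c] + p - 1] if p > 0 else 0
    if cp == c:                             # (6) j is inside a run of c's
        ans += j - jp + 1
    return ans
```

On the data of Example 6.29, `rlbwt_rank` returns $rank_{\mathsf{a}}(S^{bwt},15)=6$.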
Example 6.29
The BWT of $S=\mathsf{alabaralalabarda\$}$ has $r(S)=10$ runs (recall
Figure 6), $S^{bwt}=\mathsf{a|d|ll|\$|l|r|bb|aa|r|aaaaa}$.
The predecessor data structure then contains the run start positions,
$\langle 1,2,3,5,6,7,8,10,12,13\rangle$. The string of distinct run symbols
is $S^{\prime}[1\mathinner{.\,.}10]=\mathsf{adl\$lrbara}$. Stably sorting the runs
$\langle 1\mathinner{.\,.}10\rangle$ by symbol we obtain $\langle 4,1,8,10,7,2,3,5,6,9\rangle$ (e.g., we first list the $4$th run because its symbol is the smallest,
$\mathsf{\$}$, then we list the $3$ positions of $\mathsf{a}$ in $S^{\prime}$,
$1,8,10$, and so
on), and therefore $R=\langle 1,1,3,8,2,1,2,3,1,2\rangle$ (e.g., $R[2\mathinner{.\,.}4]=\langle 1,3,8\rangle$ because the runs of $\mathsf{a}$ are of lengths $1$, $2$,
and $5$, which cumulate to $1$, $3$, and $8$). Finally, $C^{\prime}[\mathsf{\$}]=0$,
$C^{\prime}[\mathsf{a}]=1$, $C^{\prime}[\mathsf{b}]=4$, $C^{\prime}[\mathsf{d}]=5$, $C^{\prime}[\mathsf{l}]=6$,
and $C^{\prime}[\mathsf{r}]=8$ precede the positions where the runs of each symbol start
in $R$.
To compute $rank_{a}(S^{bwt},15)$ we find the predecessor $j^{\prime}=13$ of $j=15$ and
from the same structure learn that it is the run number $k=10$. We then know
that it is a run of $\mathsf{a}$s because $S^{\prime}[10]=\mathsf{a}$. We then find
out that there are $p=2$ runs of $\mathsf{a}$s preceding it because
$rank_{\mathsf{a}}(S^{\prime},9)=2$. Further, there are $C^{\prime}[\mathsf{a}]=1$ runs of
symbols smaller than $\mathsf{a}$ in $S^{bwt}$. This means that the runs of
$\mathsf{a}$s start in $R$ after position $C^{\prime}[\mathsf{a}]=1$, and that the run
$k-1=9$ is, precisely, at $C^{\prime}[\mathsf{a}]+p=3$. With $R[3]=3$ we learn that
there
are $3$ $\mathsf{a}$s in $S^{bwt}[1\mathinner{.\,.}12]$. Finally, since we are counting
$\mathsf{a}$s and $j$ is in a run of $\mathsf{a}$s, we must add the
$j-j^{\prime}+1=15-13+1=3$ $\mathsf{a}$s in our current run. The final answer is
then $rank_{a}(S^{bwt},15)=3+3=6$.
The cost of the above procedure is dominated by the time to find the predecessor
of $j$, and the time to compute $rank$ on $S^{\prime}[1\mathinner{.\,.}r]$. Using only structures
of size $O(r)$ [Gagie
et al. (2020)], the predecessor can be computed in time
$O(\log\log(n/r))$ if there are $r$ elements in a universe $[1\mathinner{.\,.}n]$, and
$rank$ on $S^{\prime}$ can be computed in time $O(\log\log\sigma)$. This yields a total
time of $O(m\log\log(\sigma+n/r))$ to determine the range $A[sp\mathinner{.\,.}ep]$ using
backward search for $P$, in space $O(r)$ [Gagie
et al. (2020)].
6.1.2 Locating the occurrences
Once we have determined the interval $[sp\mathinner{.\,.}ep]$ where the answers lie in the
suffix array, we must output the positions $A[sp],\ldots,A[ep]$ to complete the
query. We do not have, however, the suffix array in explicit form. The classical
procedure [Ferragina and
Manzini (2005), Mäkinen et al. (2010)] is to choose a sampling step $t$ and sample the
suffix array entries that point to all the positions of the form
$S[i\cdot t+1]$, for all $0\leq i<n/t$. Then, if $A[j]$ is not sampled,
we compute $LF(j)$ and see if $A[LF(j)]$ is not sampled, and so on. Since we
sample $S$ regularly, some $A[LF^{s}(j)]$ must be sampled for $0\leq s<t$.
Since function $LF$ implicitly moves us one position backward in $S$, it holds
that $A[j]=A[LF^{s}(j)]+s$. Therefore, within $O(n/t)$ extra space, we can
report each of the occurrence positions in time $O(t\log\log(n/r))$ (the
LF-steps do not require $O(\log\log\sigma)$ time for the $rank$ on $S^{\prime}$ because
its queries are of the form $rank_{S^{\prime}[i]}(S^{\prime},i)$, for which we can just store
the answers to the $r$ distinct queries).
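A toy version of this sampled locating follows; `build_fm` is a brute-force helper of our own (quadratic suffix sorting, linear-time rank), used only to make the sketch self-contained.

```python
def build_fm(S):
    """Brute-force suffix array, BWT, and LF function for S (1-based)."""
    n = len(S)
    A = sorted(range(1, n + 1), key=lambda i: S[i - 1:])
    bwt = "".join(S[i - 2] if i > 1 else S[-1] for i in A)
    C = {c: sum(x < c for x in S) for c in set(S)}
    LF = lambda j: C[bwt[j - 1]] + bwt[:j].count(bwt[j - 1])
    return A, bwt, LF

def locate(j, LF, samples):
    """A[j] via LF-steps until a sampled entry; samples: position -> A value."""
    s = 0
    while j not in samples:
        j, s = LF(j), s + 1     # each LF-step moves one position back in S
    return samples[j] + s       # A[j] = A[LF^s(j)] + s
```

Sampling the entries of $A$ that point to every $t$-th text position, each query terminates after fewer than $t$ LF-steps.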
Though this procedure is reasonable for statistical compression, the extra
space $O(n/t)$ is usually much larger than $r$, unless we accept a significantly
high time $O(t\log\log(n/r))$ to report each occurrence. This had been a
challenge for BWT-based indexes on highly repetitive collections
until very recently [Gagie
et al. (2020)], where a way to efficiently locate the
occurrences within $O(r)$ space was devised.
\shortciteNGNP19 solve the problem of reporting $A[sp\mathinner{.\,.}ep]$ in two steps. First,
they show that the backward search can be augmented so that, at the
end, we know the value of $A[ep]$. Second, they show how to find $A[j-1]$ given
the value of $A[j]$.
The first part is not difficult. When we start with $[sp\mathinner{.\,.}ep]=[1\mathinner{.\,.}n]$, we
just need to store the value of $A[n]$. Now, assume we know $[sp_{i+1}\mathinner{.\,.}ep_{i+1}]$ and $A[ep_{i+1}]$, and compute $[sp_{i}\mathinner{.\,.}ep_{i}]$ using the backward
search formula. If the last suffix, $A[ep_{i+1}]$, is preceded by $P[i]$
(i.e., if $S^{bwt}[ep_{i+1}]=P[i]$), then the last suffix of $A[sp_{i}\mathinner{.\,.}ep_{i}]$
will be precisely $A[ep_{i}]=A[LF(ep_{i+1})]=A[ep_{i+1}]-1$, and thus we know
it. Otherwise, we must find the last occurrence of $P[i]$ in
$S^{bwt}[sp_{i+1}\mathinner{.\,.}ep_{i+1}]$, because this is the one that will be mapped
to $A[ep_{i}]$.
This can be done by storing an array $L[1\mathinner{.\,.}r]$ parallel to $R$, so that
$L[t]$ is the value of $A$ for the last entry of the run $R[t]$ refers to.
Once we determine $p$ using the backward search step described above, we have
that $A[ep_{i}]=L[C^{\prime}[c]+p]-1$.
Example 6.30
For $S=\mathsf{alabaralalabarda\$}$ and $S^{bwt}=\mathsf{adll\$lrbbaaraaaaa}$,
we have $L=\langle 1,17,12,14,13,16,11,9,7,15\rangle$. For example, $L[3]$
refers to the end of the $2$nd run of $\mathsf{a}$s, as seen in the previous
example for $R[3]$. This ends at $S^{bwt}[11]$, and $A[11]=12=L[3]$. In the
backward search for $P=\mathsf{la}$, we start with $[sp_{3}\mathinner{.\,.}ep_{3}]=[1\mathinner{.\,.}17]$,
and know that $A[17]=14$. The backward search computation then yields
$[sp_{2}\mathinner{.\,.}ep_{2}]=[2\mathinner{.\,.}9]$. Since $S^{bwt}[17]=\mathsf{a}=P[2]$, we deduce that $A[9]=14-1=13$.
A new backward step yields $[sp_{1}\mathinner{.\,.}ep_{1}]=[13\mathinner{.\,.}15]$. Since $S^{bwt}[9]=\mathsf{b}\not=P[1]$, we must consult $L$. The desired position of $L$ is
obtained with the same process to find $rank_{\mathsf{l}}(S^{bwt},9)$: $k=7$,
$p=rank_{\mathsf{l}}(S^{\prime},6)=2$, $t=C^{\prime}[\mathsf{l}]+p=6+2=8$, from where we obtain
that $A[15]=L[8]-1=9-1=8$.
For the second part, finding $A[j-1]$ from $A[j]$, let us define $d=A[j-1]-A[j]$, and assume both positions are in the same run, that is,
$S^{bwt}[j-1]=S^{bwt}[j]=c$ for some $c$. By the LF-step formula, it
is not hard to see that $LF(j-1)=LF(j)-1$, and thus $A[LF(j-1)]-A[LF(j)]=(A[j-1]-1)-(A[j]-1)=d$. (The only exception would be $A[j-1]$ or
$A[j]$ being $1$, but in that case the BWT symbol is $\mathsf{\$}$, and thus
the two positions cannot be in the same run.) This means that, as we perform LF-steps from
both positions and they stay in the same run, their difference $d$ stays the
same. Only if, after performing $s$ LF-steps, $S^{bwt}[j^{\prime}]=S^{bwt}[LF^{s}(j)]$
starts a run, we have that $S^{bwt}[j^{\prime}-1]=S^{bwt}[LF^{s}(j-1)]$ belongs to
another run. Note that, in $S$, this corresponds to having performed $s$
backward steps from $A[j]$, because $A[j^{\prime}]=A[LF^{s}(j)]=A[j]-s$ corresponds
to a run start in $S^{bwt}$.
We then store another predecessor data structure with the positions $A[j^{\prime}]$ in
$S$ that correspond to run starts in $S^{bwt}$, $S^{bwt}[j^{\prime}-1]\not=S^{bwt}[j^{\prime}]$. To the position $t=A[j^{\prime}]$ we associate $d(t)=A[j^{\prime}-1]-A[j^{\prime}]$.
To compute $A[j-1]$ from $A[j]$, we simply find the predecessor $t=A[j^{\prime}]$ of
$A[j]$ and then know that $A[j-1]=A[j]+d(t)$, because $A[j-1]-A[j]=d(t)=A[j^{\prime}-1]-A[j^{\prime}]$. (\shortciteNGNP19 store $A[j^{\prime}-1]$ instead of $d(t)$,
and thus add $s=A[j]-t$ to return $A[j-1]=A[j^{\prime}-1]+s$.)
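This step can be sketched as follows, assuming the run-start text positions and their associated $d(t)$ values are already computed (here a sorted list and a dictionary stand in for the predecessor structure).

```python
import bisect

def prev_entry(aj, run_pos, d):
    """Compute A[j-1] given aj = A[j]. run_pos: sorted text positions
    A[j'] where j' starts a BWT run; d[t] = A[j'-1] - A[j'] for t = A[j']."""
    t = run_pos[bisect.bisect_right(run_pos, aj) - 1]   # predecessor of A[j]
    return aj + d[t]
```

On Example 6.31, starting from $A[15]=8$ this yields $A[14]=8+d(7)=10$ and then $A[13]=10+d(9)=2$.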
Example 6.31
Figure 18 shows the run beginnings projected to $S$, and the
associated values $d$.
Once we find the interval $A[13\mathinner{.\,.}15]$ for $P=\mathsf{la}$ in the previous
example, and since we know that $A[15]=8$, we can compute $A[14]$ as follows.
The boxed predecessor of $S[8]$ is $S[7]$. Since $A[7]=7$, we stored $d(7)=A[6]-A[7]=2$ associated with $S[7]$, and thus we know that $A[14]=A[15]+d(7)=10$. Now, the boxed predecessor of $S[10]$ is $S[9]$. Since $A[6]=9$, we
stored $d(9)=A[5]-A[6]=-8$ associated with $S[9]$, and thus we know that $A[13]=A[14]+d(9)=2$.
Each new position is then found with a predecessor search, yielding total
search time $O(m\log\log(\sigma+n/r)+occ\log\log(n/r))$, and $O(r)$ space
[Gagie
et al. (2020)]. This index was implemented and shown to be 1–2 orders of
magnitude faster than parsing-based indexes, though up to twice as large
[Gagie
et al. (2020)]. When the collections are very repetitive, its size is still
small enough, but the index (as well as measure $r$) degrades faster than $z$
or $g$ when repetitiveness starts to decrease.
Note that the index does not provide direct access to $S$ within
$O(r)$ space, only within $O(r\log(n/r))$ space (and time $O(\ell+\log(n/r))$),
which is provided through a run-length grammar of that size and height
$O(\log(n/r))$. More interestingly, they also build grammars of
size $O(r\log(n/r))$ that provide access in time $O(\log(n/r))$ to any cell
of $A$ or $A^{-1}$.
A previous attempt to provide fast searches on top of the BWT
[Belazzougui et al. (2015)] combines it with Lempel-Ziv parsing: it uses
$O(z(S)+r(S)+r(S^{rev}))$ space and searches in time
$O(m(\log\log n+\log z)+occ\log^{\epsilon}z)$.
A careful implementation [Belazzougui et al. (2017)] proves competitive:
for example, it uses about 3 times more space than
the index of \shortciteNKN13 but is faster. (In their experiments, they include
the size of the indexes plus their working space. This penalizes
parsing-based indexes, whose recursion stack when spotting the secondary
occurrences can be large; the point is valid, however, because
suffix-based indexes do not have this problem.)
6.1.3 Optimal search time
\shortciteNKem19 generalizes the concept of BWT runs to $s$-runs, where the $s$
symbols preceding each suffix $A[j]$ must coincide. He shows that, if the BWT
of $S$ has $r$ runs, then it has $O(rs)$ $s$-runs. \shortciteNGNP19 use this idea to
define a new string $S^{*}=S^{0}\cdot S^{1}\cdots S^{s-1}$, where $S^{k}$ is formed
by discarding the first $k$ symbols of $S$ and then packing the rest into
“metasymbols” of length $s$. The metasymbols are lexicographically compared
in the same way as the string that composes them. They show that the suffix
array interval for
$P$ in $S$ and for $P^{0}$ in $S^{*}$ are the same. Since the length of $P^{0}$ is
$m^{\prime}=m/s$, one can choose $s=\log\log n$ in order to represent $S^{*}$ within
$O(rs)=O(r\log\log n)$ space and find the interval $A[sp\mathinner{.\,.}ep]$ of
$P$ in $S$ by searching for $P^{0}$ in $S^{*}$, in time $O(m^{\prime}\log\log(\sigma+n/r))=O(m)$.
In turn, the occurrences are located also in chunks of $s$, by storing in
$d(t)$ not only the information on $A[j^{\prime}-1]$, but also on $A[j^{\prime}-2],\ldots,A[j^{\prime}-s]$, in space $O(r\log\log n)$ as well. Thus, we invest a predecessor
search, time $O(\log\log(n/r))$, but in exchange retrieve $\log\log n$
occurrences. The resulting time is $O(m+\log\log(n/r)+occ)$, which is
converted into the optimal $O(m+occ)$ by handling short patterns separately.
RAM-optimal search time is also possible with this index, within
$O(rw\log\log n)$ space [Gagie
et al. (2020)]. Interestingly, RAM-optimal search
time was only obtained in the classical scenario, using $O(n)$ words of space
[Navarro and
Nekrich (2017)].
6.2 Based on the CDAWG
In principle, searching the CDAWG is as easy as searching a suffix tree
[Crochemore and
Hancart (1997)]: since any suffix can be read from the root node, we simply move
from the root using $P$ until finding its locus node (as on the suffix tree,
if we end in the middle of an edge, we move to its target node).
A first problem is that we do not store the strings that label the edges of the
CDAWG. Instead, we may store only the first symbols and the lengths of those
strings, as done for suffix trees in Section 2.3. Once we reach
the locus node, we must verify that all the skipped symbols coincide with $P$
[Crochemore and
Hancart (1997), Belazzougui and
Cunial (2017a)]. The problem is that the string $S$ is not directly
available for verification. Since $e=\Omega(g)$, however, we can in principle
build a grammar of size $O(e)$ so that we can extract a substring of $S$ of
length $m$, and thus verify the skipped symbols, in time $O(m+\log n)$;
recall Section 4.1.
To determine which position of $S$ to extract, we use the property that all
the strings arriving at a CDAWG node are suffixes of one another
[Blumer et al. (1987), Lem. 1]. Thus, we store the final position in $S$ of the
longest string arriving at each node from the root. If the longest string
arriving at the locus node $v$ ends at position $t(v)$, and
we skipped the last $l$ symbols of the last edge, then $P$ should be equal to
$S[t(v)-l-m+1\mathinner{.\,.}t(v)-l]$, so we extract that substring and compare it with $P$.
If they coincide, then every distinct path from $v$ to the final node, of total
length $L$, represents an occurrence at $S[n-L-l-m+1\mathinner{.\,.}n-L-l]$. Since every
node has at least two outgoing edges, we spend $O(1)$ amortized time per
occurrence reported [Blumer et al. (1987)].
The total search time is then $O(m+\log n+occ)$.
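The reporting phase can be sketched on a toy DAG that mimics the path lengths of Example 6.32. The node layout and edge lengths below are our own illustration, not the actual CDAWG of Figure 8.

```python
class Node:
    def __init__(self, final=False):
        self.final = final
        self.edges = []          # list of (label_length, target_node)

def report_from_locus(v, l, m, n, out, L=0):
    """Each distinct path of total label length L from the locus v to the
    final node yields an occurrence at S[n-L-l-m+1 .. n-L-l]."""
    if v.final:
        out.append(n - L - l - m + 1)
    for length, u in v.edges:
        report_from_locus(u, l, m, n, out, L + length)
```

With paths of total lengths $6$, $8$, and $14$ from the locus, and $l=0$, $m=2$, $n=17$, the reported positions are $10$, $8$, and $2$, matching the example.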
Example 6.32
Let us search for $P=\mathsf{la}$ in the CDAWG of Figure 8. We
leave the root by the arrow whose string starts with $\mathsf{l}$, and arrive
at the target node $v$ with $l=0$ (because the edge consumes the $m=2$ symbols
of $P$). The node $v$ can be associated with position $t(v)=3$, which ends an
occurrence of the longest string arriving at $v$, $\mathsf{ala}$. We then
extract $S[t(v)-l-m+1\mathinner{.\,.}t(v)-l]=S[2\mathinner{.\,.}3]=\mathsf{la}$ and verify that the
skipped symbols match $P$. We then traverse all the forward paths from $v$,
reaching the final node in three ways, with total lengths $L=8,14,6$.
Therefore, $P$ occurs in $S$ at positions $n-L-l-m+1=8,2,10$.
Another alternative, exploiting the fact that $e=\Omega(r)$, is to enrich
the CDAWG with the BWT-based index of size $O(r)$: as shown in
Section 6.1.1, we can determine in time $O(m\log\log n)$
whether $P$ occurs in $S$ or not, that is, if $sp\leq ep$ in the interval
$A[sp\mathinner{.\,.}ep]$ we compute. If $P$ occurs in $S$, we do not need the grammar to
extract and verify the skipped symbols; we can just proceed to output all
the occurrences [Belazzougui et al. (2015)]. The total time is then $O(m\log\log n+occ)$.
This variant is carefully implemented by \shortciteNBCGPR17, who show that the
structure is about two orders of magnitude faster than the index of
\shortciteNKN13, though it uses an order of magnitude more space.
It is even possible to reach optimal $O(m+occ)$ time with the CDAWG, by
exploiting the fact that the CDAWG induces a particular grammar of size $O(e)$
where there is a distinct nonterminal per string labeling a CDAWG edge
[Belazzougui and
Cunial (2017b)]. Since we need to extract a prefix of the string leading to the
locus of $P$, and this is the concatenation of several edges plus a prefix of
the last edge, the technique of Section 4.1.3 allows us
to extract the string to verify in time $O(m)$. Variants of this idea are
given by \shortciteNBCspire17 (using $O(e)$ space) and \shortciteNTGFIA17
(using $O(e(S)+e(S^{rev}))$ space).
7 Other Queries and Models
In this section we briefly cover the techniques to handle other type of
queries, apart from the fundamental one of finding all the occurrences of a
pattern in a string. We also consider indexes based on more ad-hoc scenarios
where repetitiveness arises.
7.1 Counting
A query that in principle is simpler than locating the $occ$ occurrences of a
pattern $P$ in $S$ is that of counting them, that is, determining $occ$. This
query is useful in Data Mining and Information Retrieval, for example, where
one might want to know how frequent or relevant $P$ is in $S$.
In suffix-based indexes, counting is generally a step that precedes locating:
for the latter, we find the interval $A[sp\mathinner{.\,.}ep]$ of the suffixes starting
with $P$ and then we output $A[sp],\ldots,A[ep]$. For counting, we just have
to output $occ=ep-sp+1$. As shown in Section 6.1.1, this can be easily done in $O(r)$ space and
$O(m\log n)$ time [Mäkinen et al. (2010)], or even $O(m\log\log n)$ time with stronger
predecessor data structures [Gagie
et al. (2020)]. It is also possible to count in
$O(m)$ time and $O(r\log\log n)$ space, and in the RAM-optimal
$O(\lceil m\log(\sigma)/w\rceil)$ time and $O(rw\log\log n)$ space
[Gagie
et al. (2020)].
Similarly, we can count in $O(m)$ time and $O(e)$ space using CDAWGs
(Section 6.2). By storing
in each CDAWG node $v$ the number of paths that lead from the node to the final
state, we simply find the locus $v$ of $P$ in $O(m)$ time and report this
number.
It is less obvious how to count in phrase-based indexes, as these are
oriented to reporting all the occurrences. \shortciteNNav18 observes that, in a
grammar-based index, the number of occurrences (one primary and many
secondary) triggered by each point of the grid (Section 5.1) depends
only on the point (Section 5.4), and thus one can associate
that number with the point itself. Note that this is not the case for the grid
associated with, say, the Lempel-Ziv parse, where each source may or may not
contain the primary occurrence depending on the alignment of $P$
(Section 5.2.1).
Counting on a grammar-based index then reduces to summing the number of
occurrences of all the points inside the grid ranges obtained from each
partition of $P$. With appropriate geometric data structures [Chazelle (1988)],
\shortciteNNav18 obtains $O(m^{2}+m\log^{2+\epsilon}n)$ counting time in $O(g)$
space. This time can be reduced to $O(m\log^{2+\epsilon}n)$ by searching for
all the partitions of $P$ in batch (Section 5.1.1) and computing
fingerprints on the grammar (Section 4.1.2).
\shortciteNCEKNP19 further reduce this time by using their locally consistent
grammar, which is of size $O(\gamma\log(n/\gamma))$ and requires one to test
only $O(\log m)$ partitions of $P$ (recall the end of
Section 5.1.1). Such a lower number of partitions of $P$ leads
to counting in time $O(m+\log^{2+\epsilon}n)$, once again without the need to
find the smallest attractor. This is more complex than before because theirs
is a run-length grammar, and the run-length rules challenge the observation
that the number of occurrences depends only on the points. Among other
tradeoffs, they show how to count in optimal time
$O(m)$ using $O(\gamma\log(n/\gamma)\log n)$ space.
7.2 Suffix Trees
Compressed suffix-based indexes can be enhanced in order to support full
suffix-tree functionality (recall Section 3.8). Suffix trees enable
a large number of complex analyses on the strings, and are particularly
popular in stringology [Crochemore and
Rytter (2002)] and bioinformatics [Gusfield (1997)].
Sadakane (2007a) pioneered compressed suffix trees, defining a basic set
of primitives for traversing the tree and showing how to implement them with
just $6n$ bits on top of a compressed suffix array. The required primitives
are:
•
Access to any cell of the suffix array, $A[i]$.
•
Access to any cell of the inverse suffix array, $A^{-1}[j]$.
•
Access to any cell of the longest common prefix array, where $LCP[i]$
is the length of the longest common prefix of $S[A[i]\mathinner{.\,.}]$ and $S[A[i-1]\mathinner{.\,.}]$.
•
Either:
–
The navigable topology of the suffix tree [Sadakane (2007a)], or
–
Support for range minimum queries (RMQs,
Section 5.2.1) on the $LCP$ array [Fischer
et al. (2009)].
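A minimal Python sketch (quadratic suffix array construction, for illustration only) of how these primitives combine: with $A$, its inverse, $LCP$, and a range-minimum query, the string depth of the suffix-tree LCA of two leaves is a single range minimum on $LCP$.

```python
def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])  # naive construction

def lcp_array(s, sa):
    """Kasai et al.'s algorithm: LCP[i] = lcp of suffixes sa[i-1] and sa[i]."""
    n = len(s)
    rank = [0] * n
    for i, p in enumerate(sa):
        rank[p] = i
    lcp = [0] * n
    h = 0
    for p in range(n):                 # text order, so h drops by <= 1 per step
        if rank[p] == 0:
            h = 0
            continue
        q = sa[rank[p] - 1]
        while p + h < n and q + h < n and s[p + h] == s[q + h]:
            h += 1
        lcp[rank[p]] = h
        if h:
            h -= 1
    return lcp

def string_depth_of_lca(rank, lcp, i, j):
    """String depth of the suffix-tree LCA of the leaves of suffixes i and j:
    a range-minimum query on LCP (here, a plain min)."""
    a, b = sorted((rank[i], rank[j]))
    return min(lcp[a + 1:b + 1])
```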
The idea that the differential suffix array, $DA[i]=A[i]-A[i-1]$, is
highly compressible on repetitive sequences can be traced back to González and Navarro (2007),
and it is related to the BWT runs: if $S^{bwt}[i\mathinner{.\,.}i+\ell]$ is a run, then
the LF-step formula (Section 6.1) implies that
$LF(i+k)=LF(i)+k$ for all $0\leq k\leq\ell$, and therefore
$DA[LF(i)+k]=A[LF(i)+k]-A[LF(i)+k-1]=A[LF(i+k)]-A[LF(i+k-1)]=(A[i+k]-1)-(A[i+k-1]-1)=A[i+k]-A[i+k-1]=DA[i+k]$. That is,
$DA[i+1\mathinner{.\,.}i+\ell]=DA[LF(i)+1\mathinner{.\,.}LF(i)+\ell]$ is a repetition in $DA$.
Example 7.33
In Figure 6, the run $S^{bwt}[13\mathinner{.\,.}17]=\mathsf{aaaaa}$ implies
that $LF(13+k)=LF(13)+k=5+k$ for $0\leq k\leq 4$. Therefore, $A[5+k]=A[13+k]-1$ for $0\leq k\leq 4$: $A[5\mathinner{.\,.}9]=\langle 1,9,7,5,13\rangle$ and
$A[13\mathinner{.\,.}17]=\langle 2,10,8,6,14\rangle$. This translates into a copy in
$DA[2\mathinner{.\,.}17]=\langle-1,-13,8,-10,8,-2,-2,8,-9,8,3,-13,8,-2,-2,8\rangle$:
we have $DA[14\mathinner{.\,.}17]=DA[6\mathinner{.\,.}9]$.
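The derivation above can be checked mechanically. The following Python sketch (on an arbitrary repetitive string, not the exact one of Figure 6) builds the suffix array naively and verifies that every BWT run induces a verbatim copy in $DA$.

```python
def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])  # naive, for illustration

def check_da_copies(s):
    """Verify that every BWT run of s (which must end with a unique '$')
    induces a verbatim copy in the differential suffix array DA."""
    n = len(s)
    sa = suffix_array(s)
    pos = {sa[i]: i for i in range(n)}                 # inverse suffix array
    bwt = [s[(sa[i] - 1) % n] for i in range(n)]
    da = [None] + [sa[i] - sa[i - 1] for i in range(1, n)]
    LF = lambda i: pos[(sa[i] - 1) % n]
    i = 0
    while i < n:
        j = i
        while j + 1 < n and bwt[j + 1] == bwt[i]:      # the run is bwt[i..j]
            j += 1
        for k in range(1, j - i + 1):
            # inside a run, LF is contiguous and DA is copied verbatim
            if LF(i + k) != LF(i) + k or da[LF(i) + k] != da[i + k]:
                return False
        i = j + 1
    return True
```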
The same happens with the differential versions of arrays $A^{-1}$ and $LCP$,
and also with representations of the tree topology, which also has large
identical subtrees. It is also interesting that the array $PLCP[j]=LCP[A^{-1}[j]]$ can be represented in $O(r)$ space [Fischer
et al. (2009)]. All these
regularities inspired several practical compressed suffix trees for highly
repetitive strings, where some of those components were represented using
grammars or block trees, and even with the original run-length BWT
[Mäkinen et al. (2010), Abeliuk
et al. (2013), Navarro and
Ordóñez (2016), Cáceres and
Navarro (2019)]. Though implemented and shown to be practical,
it was only recently shown [Gagie
et al. (2020)] that one can build run-length
grammars of size $O(r\log(n/r))$ to represent those differential arrays and
support most of the suffix tree primitives in time $O(\log(n/r))$.
CDAWGs are also natural candidates to implement suffix trees. Belazzougui et al. (2016)
combine run-length compressed BWTs with CDAWGs to implement several primitives
in $O(1)$ or $O(\log\log n)$ time, within space $O(e(S)+e(S^{rev}))$.
Belazzougui and Cunial (2017b) use a heavy-path decomposition of the suffix tree to expand the
set of supported primitives, in times from $O(1)$ to $O(\log n)$ and within
the same asymptotic space.
7.3 Document Retrieval
While basic pattern matching is at the core of a number of data retrieval
activities, typical Information Retrieval queries operate at the document
level, rather than at the level of individual occurrences. That is, we have a
collection of $\$$-terminated strings $S_{1},\ldots,S_{d}$, and concatenate them
all into a single string $S[1\mathinner{.\,.}n]=S_{1}\cdots S_{d}$. Given a search pattern
$P[1\mathinner{.\,.}m]$, we are interested in three basic queries:
Document listing:
List all the $docc$ documents where $P$ appears.
Document counting:
Compute $docc$, the number of documents where $P$
appears.
Top-$k$ retrieval:
List the $k$ documents where $P$ appears more
prominently, which is typically defined as having the most occurrences.
While there has been some activity in supporting document retrieval queries on
general string collections, even considering compressed indexes [Navarro (2014)],
the developments for highly repetitive string collections are still incipient.
Most of them are of a practical nature.
7.3.1 Document listing
The first document listing index [Muthukrishnan (2002)] obtained optimal $O(m+docc)$ time
and $O(n)$ space. It uses the document array $D[1\mathinner{.\,.}n]$, where $D[i]$
is the document to which $A[i]$ belongs and $A$ is the suffix array of $S$.
It uses the suffix tree of $S$ to find the interval $A[sp\mathinner{.\,.}ep]$, so
the task becomes to output the distinct values in $D[sp\mathinner{.\,.}ep]$.
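The classical algorithm can be sketched in a few lines of Python: with $C[i]$ the previous occurrence of $D[i]$ and a range-minimum query (here, a plain scan), each recursion step either reports a new document or stops, so the total work is proportional to $docc$ RMQs.

```python
def document_listing(D, sp, ep):
    """Muthukrishnan's algorithm: report the distinct values of D[sp..ep]
    using one range-minimum query per reported document."""
    last, C = {}, []
    for i, d in enumerate(D):        # C[i]: previous occurrence of D[i], or -1
        C.append(last.get(d, -1))
        last[d] = i
    out = []
    def rec(l, r):
        if l > r:
            return
        i = min(range(l, r + 1), key=lambda j: C[j])   # RMQ on C[l..r]
        if C[i] < sp:                # first occurrence of D[i] inside [sp..ep]
            out.append(D[i])
            rec(l, i - 1)
            rec(i + 1, r)
    rec(sp, ep)
    return out
```

The key invariant is that a position is reported exactly when it holds the leftmost occurrence of its document inside $[sp\mathinner{.\,.}ep]$, which is the case precisely when $C[i]<sp$.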
Claude and Munro (2013) proposed the first document listing index for repetitive string
collections. They enrich a typical grammar-based index as we have described in
Section 5 with an inverted list that stores, for
each nonterminal, the documents where its expansion appears. They then find
the primary occurrences, collect all the nonterminals involved in secondary
occurrences and, instead of listing all the occurrence positions, they merge
all the inverted lists of the involved nonterminals. To reduce space, the set
of inverted lists is also grammar-compressed, so each list to merge must be
expanded. They do not provide worst-case time or space bounds.
Gagie et al. (2017) propose two techniques, ILCP and PDL. In the former, they
create an array $ILCP[1\mathinner{.\,.}n]$ that interleaves the local $LCP$ arrays
(Section 7.2) of the documents $S_{i}$, according to the documents
pointed from $D[1\mathinner{.\,.}n]$, that is, if $D[i]=d$ and $d$ appears $k$ times in
$D[1\mathinner{.\,.}i]$, then $ILCP[i]=LCP_{i}[k]$, where $LCP_{i}$ is the LCP array of
$S_{i}$. They show that (1) $ILCP$ tends to have long runs of equal values on
repetitive string collections, and (2) the algorithm of Muthukrishnan (2002) runs
almost verbatim on top of $ILCP$ instead of $D$. This yields a document listing
index bounded by the number $\rho$ of the $ILCP$ runs.
With the help of a compressed suffix array that finds
$[sp\mathinner{.\,.}ep]$ in time $t_{s}$ and computes an arbitrary cell $A[i]$ in time $t_{a}$,
they can perform document listing in time $O(t_{s}+docc\cdot t_{a})$; for
example, we can obtain $t_{s}=O(m\log\log n)$ and $t_{a}=O(\log(n/r))$ within
$O(r\log(n/r))$ space. They have, however, only average-case bounds for
the size $O(\rho)$ of their index: if the collection can be regarded as a base
document of size $n^{\prime}$ generated at random and the other $d-1$ documents are
copies on which one applies $s$ random edits (symbol insertions, deletions, or
substitutions), then $\rho=O(n^{\prime}+s\log(n^{\prime}+s))$.
In PDL, Gagie et al. (2017) use a pruned suffix tree, where the nodes expanding
to fewer than $s$ positions of $A$ are removed. In the remaining nodes, they
store the inverted lists of the documents where the node string appears. The
lists are then grammar-compressed as by Claude and Munro (2013). Document listing is then
done by finding the locus of $P$. If it exists, the answer is precomputed there.
Otherwise, we have to build the answer by brute force from a range of only
$O(s)$ positions of $A$. By using a run-length compressed BWT index in addition
to the sampled suffix tree, we can then answer in time $O((m+s)\log\log n+docc)$. The space is $O(r)$ for the BWT-based index, plus $O(n/s)$ for the
sampled suffix tree, plus an unbounded space for the grammar-compressed lists.
Cobas and Navarro (2019) propose a simpler variant where the document array $D[1\mathinner{.\,.}n]$
is compressed using a balanced grammar. Note that $D$ is compressible for the
same reasons as the differential suffix array $DA$ (Section 7.2).
For each nonterminal $A$, they store the (also grammar-compressed) inverted
list of all the distinct document numbers to which $A$ expands. The
range $D[sp\mathinner{.\,.}ep]$ is then covered by $O(\log n)$ nonterminals, whose lists
can be expanded and merged in time $O(docc\log n)$. With a run-length
compressed BWT index, for example, they obtain time $O(m\log\log n+docc\log n)$. The run-length BWT takes $O(r)$ space and the grammar-compressed
document array $D$ takes $O(r\log(n/r))$ space, but the space
of the grammar-compressed inverted lists is not bounded.
Although some of the previous indexes offer good time bounds for document
listing, none offers worst-case space bounds. They have all been implemented, however,
and show good performance [Claude and
Munro (2013), Gagie et al. (2017), Cobas and
Navarro (2019)]. Navarro (2019) offers
the only index with a limited form of worst-case space bound: if we have a base
document of length $n^{\prime}$ and $s$ arbitrary edits applied on the other $d-1$
documents, the index is of size $O(n^{\prime}\log\sigma+s\log^{2}n)$. It performs
document listing in time $O(m\log n+docc\cdot m\log^{1+\epsilon}n)$.
The technique is to store the inverted lists inside the
components of the geometric data structure for range searches that we use to
find the primary occurrences.
7.3.2 Document counting
Sadakane (2007b) showed that the number of distinct documents where $P$ appears
can be counted in constant time once $[sp\mathinner{.\,.}ep]$ is known, by adding a data
structure of just $2n+o(n)$ bits on top of a suffix tree or array on $S$. While
this is a good solution in classical scenarios, spending even $\Theta(n)$ bits
is excessive for large highly repetitive string collections.
Gagie et al. (2017) show that the $2n$ bits of Sadakane (2007b) do inherit the
repetitiveness of the string collection, in different ways (runs, repetitions,
etc.) depending on the type of repetitiveness (versioning, documents that are
internally repetitive, etc.), and explore various ways to exploit it. They
experimentally
show that the structure can be extremely small and fast in practice.
They also build a counting structure based on ILCP, which uses $O(r+\rho)$
space and counts in time $O(m\log\log n)$, but it does not perform so well in
the experiments.
7.3.3 Top-$k$ retrieval
PDL [Gagie et al. (2017)] can be adapted for top-$k$ retrieval by sorting the
inverted lists in decreasing frequency order. One can then read only the first
$k$ documents from the inverted list of the locus of $P$, when it exists;
otherwise a brute-force solution over $O(s)$ suffix array cells is applied.
They compare PDL experimentally with other solutions (none of which is designed
for highly repetitive string collections) and find it to be very competitive.
Note that this idea is not directly applicable to other indexes that use
inverted lists [Claude and
Munro (2013), Cobas and
Navarro (2019)] because they would have to merge various
inverted lists to find the top-$k$ candidates. It is possible, however, to
obtain exact or approximate results by applying known pruning techniques
[Büttcher et al. (2010), Baeza-Yates and
Ribeiro-Neto (2011), Gagie et al. (2017)].
7.4 Heuristic Indexes
Apart from those covered in Section 3,
other compression techniques for highly repetitive string collections have been
proposed, but they are aimed at specific situations. In this section we briefly
cover a couple of those for which compressed indexes have been developed.
7.4.1 Relative Lempel-Ziv
Kuruppu et al. (2010) proposed a variant of Lempel-Ziv specialized for genome
collections of the same species, where the genomes are sufficiently similar to
each other. The idea is to create a reference string $R$ (e.g., one of the
genomes, but it is also possible to build artificial ones
[Kuruppu
et al. (2011), Kuruppu et al. (2012), Gagie
et al. (2016), Liao
et al. (2016)])
and parse every other string $S$ in the same way as
Lempel-Ziv, but with the sources being extracted only from $R$. Very good
compression ratios are reported on genome collections [Deorowicz
et al. (2015)]. Further,
if we retain direct access to any substring of $R$, we also have efficient
direct access to any substring $S[i\mathinner{.\,.}j]$ of every other string $S$
[Deorowicz and
Grabowski (2011), Ferrada
et al. (2014), Cox et al. (2016)].
The simplicity of random access to the strings also benefits indexing.
The parse-based indexing we described in Section 5
adapts very well to Relative Lempel-Ziv compression [Gagie et al. (2012), Do
et al. (2014), Navarro and
Sepúlveda (2019)].
The reference $R$ can be indexed within statistical entropy bounds [Navarro and
Mäkinen (2007)],
so that we first locate the occurrences of $P$ in it. The occurrences inside
phrases in the other strings $S$ are obtained by mapping sources in $R$
covering occurrences of $P$ to their targets in $S$, using the same mechanism
of Section 5.2.1. Finally, the occurrences that span more than
one phrase in other strings $S$ are found with a mechanism similar to the
use of the grid (Section 5.1).
Of the existing indexes, only that of Navarro and Sepúlveda (2019) is implemented. It uses
searches in time $O((m+occ)\log n)$; the others [Gagie et al. (2012), Do
et al. (2014)] obtain
slightly better complexities. Their
experiments show that an index based on Relative Lempel-Ziv
outperforms all the others in practice, but it performs well in space only
when all
the documents are very similar to each other. For example, this is not the
case for versioned document collections, where each document is very similar
to its close versions but possibly very different from distant ones.
7.4.2 Alignments
Another way to exploit having a reference $R$ and many other strings similar
to it is to build a classical or entropy-compressed index for $R$ and a
“relative” index for every other string $S$. The rationale is that the
similarity between the two strings should translate into similarities in the
index data structures.
Belazzougui et al. (2014) use this idea on BWT-based indexes
(Section 6.1). By working on the symbols that are not in the
longest common subsequence of the BWTs of $R$ and $S$, they simulate the
BWT-based index of each string $S$. This is expanded into full
suffix-tree functionality [Farruggia et al. (2018)] by adding an $LCP$ array for $S$
that is compressed using Relative Lempel-Ziv with respect to the $LCP$ array
of $R$, and managing to efficiently compute RMQs on it.
Na et al. (2016, 2018) use a different approach, also based on the
alignments of $R$ and the strings $S$. To build the (BWT-based) index, they
separate the regions that are common to all sequences from the uncommon regions
(all the rest). The main
vulnerability of this approach is that a single change in one sequence destroys
an otherwise common region. Still, their implementation is shown to
outperform a generic index [Mäkinen et al. (2010)] on genome collections, in space and
time. A similarly inspired structure providing suffix tree functionality was
also proposed [Na et al. (2013)], though not implemented. They do not give clear
space bounds, but show how to search for patterns in optimal time
$O(m+occ)$, and conjecture that most suffix tree operations can be supported.
8 Current Challenges
In this final section of the survey, we discuss what we consider the most
important current challenges in this area: (1) obtaining practical implementations,
(2) building the indexes for very large text collections, and (3)
reconsidering which queries are the most useful on highly repetitive text
collections.
8.1 Practicality and Implementations
There is usually a long way between a theoretical finding and its practical
deployment. Many decisions that are made for the sake of obtaining good
worst-case complexities, or for simplicity of presentation, are not good in
practice. Algorithm engineering is the process of modifying a theoretically
appealing idea into a competitive implementation, involving knowledge
of the detailed cost model of computers (caching, prefetching, multithreading,
etc.). Further, big-$O$ space figures ignore constants, which must be carefully
considered to obtain competitive space for the indexes. In practice, variables
like $z$, $g$, $r$, etc. are a hundredth or a thousandth of $n$, and therefore
using, say, $10z$ bytes still yields a large index.
While competitive implementations have been developed for indexes based on
Lempel-Ziv [Kreft and
Navarro (2013), Ferrada
et al. (2013), Claude et al. (2016), Ferrada
et al. (2018)],
grammars [Maruyama et al. (2013), Takabatake
et al. (2014), Claude et al. (2016), Claude
et al. (2020)],
the BWT [Mäkinen et al. (2010), Belazzougui et al. (2017), Gagie
et al. (2020)], and CDAWGs [Belazzougui et al. (2017)], the most recent
and promising theoretical developments [Bille et al. (2018), Navarro and
Prezza (2019), Christiansen et al. (2019), Kociumaka
et al. (2020)] are
yet to be implemented and tested. It is unknown how much impact these
improvements will have in practice.
Figure 19 shows, in very broad terms, the space/time tradeoffs
obtained by the implementations built on the different repetitiveness concepts.
It is made by taking the most representative values obtained across the
different experiments of the publications mentioned above, discarding too
repetitive and not repetitive enough collections. The black dots represent the
run-length BWT built on regular sampling [Mäkinen et al. (2010)], which has been a
baseline for most comparisons.
8.2 Construction and Dynamism
An important obstacle for the practical adoption of the indexes we covered is
how to build them on huge datasets. Once built, the indexes are orders of
magnitude smaller than the input and one hopes to handle them in main memory.
However, the initial step of computing the parsing, the run-length BWT, or
another small representation of a very large text collection, even if it can be
generally performed in the optimal $O(n)$ time, usually requires $O(n)$ main
memory space with a significant constant. There are various approaches that aim
to reduce those main memory requirements and/or read the text in streaming mode,
but some are still incipient.
Burrows-Wheeler Transform
The BWT is easily obtained from the
suffix array, which in turn can be built in $O(n)$ time and space
[Kim
et al. (2005), Ko and Aluru (2005), Kärkkäinen et al. (2006)]. However, the constant associated with the space is
large. Kärkkäinen et al. (2006) allow using $O(nv)$ time and $O(n/\sqrt{v})$ space for
a parameter $v$, but they still need to store the $n\log n+n\log\sigma$ bits
of the suffix array and the text. External-memory suffix array construction
requires optimal $O(Sort(n))$ I/Os and time [Farach-Colton et al. (2000), Kärkkäinen et al. (2006)].
Kempa (2019) shows how to build the run-length BWT in $O(n/\log_{\sigma}n+r\,\mathrm{polylog}\,n)$ time and working space.
With a dynamic representation of sequences that supports insertions of symbols
[Munro and
Nekrich (2015)] one can build the run-length encoded BWT incrementally by
traversing the text in reversed form; the LF-mapping formula given in
Section 6.1 shows where to insert the next text symbol. This
idea is used by Policriti and Prezza (2018) to build the run-length compressed BWT directly,
in streaming mode, in $O(n\log r)$ time and within $O(r)$ main memory space.
Ohno et al. (2018) improve its practical performance by a factor of $50$ (using
just twice the space).
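The incremental idea can be sketched in Python as follows. This naive version keeps the BWT as a plain list, so each insertion costs $O(n)$ and the symbols are assumed to be larger than the sentinel `'$'`; the actual streaming algorithm maintains a run-length encoding and pays roughly $O(\log r)$ per symbol.

```python
def bwt_right_to_left(s):
    """Build the BWT of s+'$' by prepending the symbols of s one at a time."""
    assert '$' not in s              # '$' is the unique, smallest sentinel
    bwt, p = ['$'], 0                # p: row of the full current text
    for c in reversed(s):
        # rows of suffixes starting with a symbol smaller than c
        smaller = sum(1 for x in bwt if x < c)
        # rows of suffixes c.X with X lexicographically below the current text
        occ = bwt[:p].count(c)
        bwt[p] = c                   # the old full suffix is now preceded by c
        p = smaller + occ            # rank of the new full suffix (LF-step)
        bwt.insert(p, '$')           # ...which is preceded by the sentinel
    return ''.join(bwt)
```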
Boucher et al. (2019) propose a practical method called
“prefix free parsing”, which first parses the text using a rolling hash
(a Karp-Rabin-like hash that depends on the last $\ell$ symbols read; a
phrase ends whenever the hash modulo a parameter value is zero). The result
is a dictionary
of phrases and a sequence of phrase identifiers; both are generally much
smaller than $n$ when the text is repetitive. They then build the BWT from
those elements. Their experiments show that they can build BWTs of very large
collections in reasonable time.
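A Python sketch of the parsing step (all names and parameter values are ours; the window hash is recomputed per position here instead of being rolled in $O(1)$):

```python
def prefix_free_parse(text, w=3, mod=5):
    """Prefix-free parsing sketch: a phrase ends wherever the hash of the last
    w symbols is 0 modulo `mod`; consecutive phrases overlap by those w
    trigger symbols."""
    phrases, start = [], 0
    for i in range(w - 1, len(text) - 1):
        h = 0
        for c in text[i - w + 1:i + 1]:      # a real parser rolls this hash
            h = (h * 256 + ord(c)) % mod
        if h == 0:                           # trigger: close the phrase at i
            phrases.append(text[start:i + 1])
            start = i - w + 1                # next phrase starts at the trigger
    phrases.append(text[start:])
    dictionary = sorted(set(phrases))        # the (small) phrase dictionary
    parse = [dictionary.index(p) for p in phrases]  # sequence of phrase ids
    return dictionary, parse
```

On repetitive text the same phrases reappear, so the dictionary plus the id sequence are much smaller than the text.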
Lempel-Ziv parsing
While it has been known for decades how to
obtain this parse in $O(n)$ time [Rodeh
et al. (1981), Storer and
Szymanski (1982)], these algorithms
use $\Theta(n)$ space (i.e., $\Theta(n\log n)$ bits) with a significant
constant. A long line of research
[Chen
et al. (2008), Ohlebusch and
Gog (2011), Kempa and
Puglisi (2013), Kärkkäinen et al. (2013), Goto and Bannai (2013), Goto and Bannai (2014), Kärkkäinen et al. (2014), Yamamoto et al. (2014), Fischer
et al. (2015), Kärkkäinen et al. (2016), Köppl and
Sadakane (2016), Belazzougui and
Puglisi (2016), Fischer
et al. (2018), Kempa (2019)]
has focused on using little space, reducing the constant associated
with the $O(n\log n)$ bits and even reaching $O(n\log\sigma)$ bits
[Kärkkäinen et al. (2013), Belazzougui and
Puglisi (2016), Kempa (2019)]. This is still linear in the text size, however.
Interestingly, the only method to build the Lempel-Ziv parsing in less
space ($O(z+r)$) is to build the run-length BWT first and then derive the
Lempel-Ziv parse from it [Policriti and
Prezza (2018)]. The most recent implementation of this
method [Ohno
et al. (2018)] uses 2–3 orders of magnitude less space (and just
2–4 times more time) than the previous approaches. Another
interesting development [Kärkkäinen et al. (2014)] uses external memory: with a RAM of
size $M$, it performs $O(n^{2}/M)$ I/Os and requires $2n$ bytes of disk working
space. Despite this quadratic complexity, their experiments show that this
technique can handle, in practice, much larger texts than previous
approaches.
Other approaches aim at approximating the Lempel-Ziv parse.
Fischer et al. (2015) build a $(1+\epsilon)$-approximation in $O((n/\epsilon)\log n)$
time and $O(z)$ space. Kempa and Kosolobov (2017) build the LZ-End variant [Kreft and
Navarro (2013)] in
streaming mode and $O(z+\ell)$ main memory space, where $\ell$ is the length
of the longest phrase. Valenzuela et al. (2019) use Relative Lempel-Ziv as a building
block.
Grammar construction
RePair [Larsson and
Moffat (2000)] is the heuristic that
obtains the best grammars in practice. While it computes the grammar in
$O(n)$ time and space, the constant associated with the space is significant
and prevents using it on large texts. Attempts to reduce this space have paid a
significant price in time [Bille
et al. (2017), Sakai et al. (2019), Köppl et al. (2019)]. A recent heuristic
[Gagie et al. (2019)] obtains space close to that of RePair using a semi-streaming
algorithm, but it degrades quickly as the repetitiveness decreases.
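A naive Python sketch of the RePair heuristic (quadratic time; the real algorithm uses priority queues and pair-occurrence lists to run in linear time and, as discussed above, its space constant is the practical bottleneck):

```python
from collections import Counter

def repair(text):
    """Naive RePair: repeatedly replace the most frequent pair of adjacent
    symbols by a fresh nonterminal."""
    rules, next_id = {}, 256         # nonterminals are ints >= 256
    seq = list(text)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                 # no pair repeats: the grammar is done
            break
        rules[next_id] = pair
        out, i = [], 0
        while i < len(seq):          # greedy left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_id += 1
    return seq, rules

def expand(sym, rules):
    """Recursively expand a symbol back to the text it represents."""
    if sym not in rules:             # a terminal (an original character)
        return sym
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)
```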
Various other grammar construction algorithms, for example those of Sakamoto (2005)
and Jeż (2015, 2016), build a balanced grammar that approximates the
smallest grammar within an $O(\log n)$ factor by performing a logarithmic
number of passes on the text (which halves at each pass), and could be amenable
to a semi-streamed construction.
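The pass structure can be sketched in Python (all names ours). Note that this naive aligned pairing does not guarantee the $O(\log n)$ approximation, which requires the locally consistent pairings of the cited algorithms, but it shows how the text halves at each pass:

```python
def halve(seq, rules, inv, next_id):
    """One pass: every aligned consecutive pair becomes a nonterminal,
    so the sequence length roughly halves."""
    out = []
    for i in range(0, len(seq) - 1, 2):
        pair = (seq[i], seq[i + 1])
        if pair not in inv:          # reuse the nonterminal for repeated pairs
            inv[pair] = next_id
            rules[next_id] = pair
            next_id += 1
        out.append(inv[pair])
    if len(seq) % 2:
        out.append(seq[-1])          # odd leftover symbol passes through
    return out, next_id

def build_grammar(text):
    """Logarithmic number of halving passes, down to a single start symbol."""
    rules, inv, next_id = {}, {}, 256
    seq = list(text)
    while len(seq) > 1:
        seq, next_id = halve(seq, rules, inv, next_id)
    return seq[0], rules

def expand(sym, rules):
    if sym not in rules:             # a terminal (an original character)
        return sym
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)
```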
An important line of research in this regard are the online grammar construction
algorithms [Maruyama
et al. (2012), Maruyama et al. (2013), Takabatake
et al. (2017)]. In OLCA [Maruyama
et al. (2012)], the authors build
a grammar in $O(n)$ time by reading the text in streaming mode. They obtain
an $O(\log^{2}n)$ approximation to the smallest grammar using $O(g\log^{2}n)$
space. In SOLCA [Maruyama et al. (2013)] they reduce the space to $O(g)$.
FOLCA [Takabatake
et al. (2017)] improves the space to $g\log g+o(g\log g)$ bits; the
authors also prove that the grammar built by SOLCA and FOLCA is an
$O(\log n\log^{*}n)$ approximation. Their experiments show that the grammar
is built very efficiently in time and main memory space, though the resulting
grammar is 4–5 times larger than the one generated by RePair.
Dynamism
A related challenge is dynamism, that is, the ability to modify the index when
the text changes. Although a dynamic index is clearly more practical than one
that has to be rebuilt every time, this is a difficult topic on which little
progress has been made. The online construction methods for run-length BWTs
[Policriti and
Prezza (2018), Ohno
et al. (2018)] and grammars [Maruyama et al. (2013), Takabatake
et al. (2017)] naturally allow us to add
new text at the beginning or at the end of the string. A dynamic BWT
representation allows adding or removing arbitrary documents from a text
collection [Mäkinen and
Navarro (2008)]. Supporting arbitrary modifications to the text is much
more difficult, however.
We are only aware of the work of Nishimoto et al. (2020), who build
on edit-sensitive parsing to maintain a grammar under arbitrary substring
insertions and deletions. They use $O(z\log n\log^{*}n)$ space and search in
time $O(m(\log\log n)^{2}+\log n\log^{*}n(\log n+\log m\log^{*}n)+occ\log n)$. A substring of length $\ell$ is inserted/deleted in time
$O((\ell+\log n\log^{*}n)\log^{2}n\log^{*}n)$. In practice, the search is fast
only for long patterns; Nishimoto et al. (2018) improved the search time on short
patterns.
8.3 New Queries
Counting and locating queries have been the fundamental object of study since
the beginning of compressed indexes [Ferragina and
Manzini (2000), Grossi and
Vitter (2000)]. In a highly repetitive
scenario, however, their relevance can be questioned. For example, consider
a versioned collection where we search for a pattern $P$. If $P$ appears in the
common part of many documents, the index will report all those occurrences
in every document, each time with identical context. It is not clear that the
$\Omega(occ)$ effort of producing such a large output is worthwhile. We finish with
a proposal of some query variants that can be more appropriate on highly
repetitive collections; note that some are closer to document retrieval queries
(see Section 7.3):
Contextual locating:
Instead of reporting all the positions
where $P$ occurs in $S$, list the $cocc$ distinct “contexts” where it appears,
for example, the distinct lines (separated by newlines), or the distinct
substrings of some length $\ell$ preceding and following $P$, together with one
(or all, or the number) of the positions where $P$ occurs in each context.
While it is feasible to
solve this query in time proportional to $occ$, the challenge is to solve it
in time proportional to $cocc$. Grammar, BWT, and CDAWG based indexes seem
well suited for this task.
Range document listing:
On a document collection formed by a
linear list of versions (like Wikipedia or periodic publications), list the
$nrange$ maximal ranges of versions where $P$ appears (e.g.,
“$P$ appears in the documents $45$–$57$ and $101$–$245$”). On a document
collection formed by a tree of versions (like GitHub and most versioned document
repositories), list the $nrange$ maximal subtrees such that $P$ appears in
all the subtree nodes. This makes sense because close versions should
be more similar than distant ones, and then the occurrences should cluster.
This query is solved by document listing in time
proportional to $ndoc$, the number of documents where $P$ appears
(Section 7.3.1), but the challenge is to solve it in time
proportional to $nrange$. Handling subtrees essentially boils down to handling
linear ranges if we consider preorder numbering of the documents in the tree
of versions. Document listing indexes that use inverted lists
[Claude and
Munro (2013), Gagie et al. (2017), Cobas and
Navarro (2019), Navarro (2019)] could be adapted to handle this query.
Granular document listing:
On a document collection formed by a
tree of versions, list the nodes of a certain depth where $P$ appears in
some node of their subtree (e.g., “$P$ appears somewhere in versions 1.2.x and
4.0.x”). This query allows us, for example, to determine in which “major
versions” of documents we can find $P$, without the detail of precisely in
which minor versions it appears. Again the challenge is to perform this query
in time proportional to the size of the output, and again document listing
structures could be well suited to face it.
Document restricted listing:
List the occurrences of $P$ only
within a given range or subtree of documents one is interested in (“find $P$
only within the reports of $2010$–$2013$”). This query can be combined with
any of the previous ones.
Those queries can be combined with the aim of retrieving the $k$ “most
important” results, where importance can be defined in terms of the documents
themselves (as done by search engines with Web pages) and/or in terms of the
presence of $P$ in the documents (typically favoring those where $P$ occurs
most often, as in classical Information Retrieval systems); see Navarro (2014).
In general, we expect that the techniques developed for locating pattern
occurrences on highly repetitive text collections can be used as a base to
solve these more sophisticated queries. Efficiently solving some of them can be
challenging even without repetitiveness, however. For example, document
restricted listing is related to the problem of “position-restricted substring
searching”, which is unlikely to be solvable within succinct space
[Hon
et al. (2012)].
Acknowledgements
We thank Travis Gagie and Nicola Prezza for useful comments.
References
Abeliuk
et al. (2013)
Abeliuk, A., Cánovas, R., and Navarro, G. 2013.
Practical compressed suffix trees.
Algorithms 6, 2, 319–351.
Apostolico (1985)
Apostolico, A. 1985.
The myriad virtues of subword trees.
In Combinatorial Algorithms on Words. NATO ISI Series.
Springer-Verlag, 85–96.
Baeza-Yates and
Ribeiro-Neto (2011)
Baeza-Yates, R. and Ribeiro-Neto, B. 2011.
Modern Information Retrieval, 2nd ed.
Addison-Wesley.
Batu
et al. (2006)
Batu, T., Ergün, F., and Sahinalp, S. C. 2006.
Oblivious string embeddings and edit distance approximations.
In Proc. 17th Symposium on Discrete Algorithms (SODA).
792–801.
Belazzougui
et al. (2010)
Belazzougui, D., Boldi, P., Pagh, R., and Vigna, S.
2010.
Fast prefix search in little space, with applications.
In Proc. 18th Annual European Symposium on Algorithms (ESA).
427–438.
Belazzougui et al. (2009)
Belazzougui, D., Boldi, P., Pagh, R., and Vigna,
S. 2009.
Monotone minimal perfect hashing: Searching a sorted table with
O(1) accesses.
In Proc. 20th Annual Symposium on Discrete Algorithms (SODA).
785–794.
Belazzougui
et al. (2019)
Belazzougui, D., Cáceres, M., Gagie, T., Gawrychowski, P., Kärkkäinen, J., Navarro, G., Ordóñez, A., Puglisi, S. J., and Tabei, Y. 2019.
Block trees.
Submitted.
Belazzougui and
Cunial (2017a)
Belazzougui, D. and Cunial, F. 2017a.
Fast label extraction in the CDAWG.
In Proc. 24th International Symposium on String Processing and
Information Retrieval (SPIRE). 161–175.
Belazzougui and
Cunial (2017b)
Belazzougui, D. and Cunial, F. 2017b.
Representing the suffix tree with the CDAWG.
In Proc. 28th Annual Symposium on Combinatorial Pattern Matching
(CPM). 7:1–7:13.
Belazzougui et al. (2015)
Belazzougui, D., Cunial, F., Gagie, T., Prezza, N.,
and Raffinot, M. 2015.
Composite repetition-aware data structures.
In Proc. 26th Annual Symposium on Combinatorial Pattern Matching
(CPM). 26–39.
Belazzougui et al. (2016)
Belazzougui, D., Cunial, F., Gagie, T., Prezza, N.,
and Raffinot, M. 2016.
Practical combinations of repetition-aware data structures.
CoRR abs/1604.06002.
Belazzougui et al. (2017)
Belazzougui, D., Cunial, F., Gagie, T., Prezza, N.,
and Raffinot, M. 2017.
Flexible indexing of repetitive collections.
In Proc. 13th Conference on Computability in Europe (CiE).
162–174.
Belazzougui
et al. (2015)
Belazzougui, D., Gagie, T., Gawrychowski, P., Kärkkäinen, J., Ordóñez, A., Puglisi, S. J., and Tabei, Y. 2015.
Queries on LZ-bounded encodings.
In Proc. 25th Data Compression Conference (DCC). 83–92.
Belazzougui et al. (2014)
Belazzougui, D., Gagie, T., Gog, S., Manzini, G., and Sirén, J. 2014.
Relative FM-indexes.
In Proc. 21st International Symposium on String Processing and
Information Retrieval (SPIRE). 52–64.
Belazzougui and
Puglisi (2016)
Belazzougui, D. and Puglisi, S. J. 2016.
Range predecessor and Lempel-Ziv parsing.
In Proc. 27th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA). 2053–2071.
Belazzougui
et al. (2015)
Belazzougui, D., Puglisi, S. J., and Tabei, Y. 2015.
Access, rank, select in grammar-compressed strings.
In Proc. 23rd Annual European Symposium on Algorithms (ESA).
142–154.
Bell
et al. (1990)
Bell, T. C., Cleary, J., and Witten, I. H. 1990.
Text Compression.
Prentice Hall.
Bender and
Farach-Colton (2004)
Bender, M. and Farach-Colton, M. 2004.
The level ancestor problem simplified.
Theoretical Computer Science 321, 1, 5–12.
Bender et al. (2005)
Bender, M. A., Farach-Colton, M., Pemmasani, G., Skiena,
S., and Sumazin, P. 2005.
Lowest common ancestors in trees and directed acyclic graphs.
Journal of Algorithms 57, 2, 75–94.
Bentley
et al. (2019)
Bentley, J., Gibney, D., and Thankachan, S. V. 2019.
On the complexity of BWT-runs minimization via alphabet reordering.
CoRR abs/1911.03035.
Bille et al. (2017)
Bille, P., Ettienne, M. B., Gørtz, I. L., and Vildhøj, H. W. 2017.
Time-space trade-offs for Lempel-Ziv compressed indexing.
In Proc. 28th Annual Symposium on Combinatorial Pattern Matching
(CPM). 16:1–16:17.
Bille et al. (2018)
Bille, P., Ettienne, M. B., Gørtz, I. L., and Vildhøj, H. W. 2018.
Time-space trade-offs for Lempel-Ziv compressed indexing.
Theoretical Computer Science 713, 66–77.
Bille
et al. (2018)
Bille, P., Gagie, T., Gørtz, I. L., and Prezza,
N. 2018.
A separation between RLSLPs and LZ77.
Journal of Discrete Algorithms 50, 36–39.
Bille et al. (2017)
Bille, P., Gørtz, I. L., Cording, P. H., Sach, B.,
Vildhøj, H. W., and Vind, S. 2017.
Fingerprints in compressed strings.
Journal of Computer and System Sciences 86, 171–180.
Bille
et al. (2017)
Bille, P., Gørtz, I. L., and Prezza, N. 2017.
Space-efficient Re-Pair compression.
In Proc. 27th Data Compression Conference (DCC). 171–180.
Bille
et al. (2014)
Bille, P., Gørtz, I. L., Sach, B., and Vildhøj, H. W. 2014.
Time-space trade-offs for longest common extensions.
Journal of Discrete Algorithms 25, 42–50.
Bille et al. (2015)
Bille, P., Landau, G. M., Raman, R., Sadakane, K., Rao, S. S., and Weimann, O. 2015.
Random access to grammar-compressed strings and trees.
SIAM Journal on Computing 44, 3, 513–539.
Blumer et al. (1987)
Blumer, A., Blumer, J., Haussler, D., McConnell, R. M.,
and Ehrenfeucht, A. 1987.
Complete inverted files for efficient text retrieval and analysis.
Journal of the ACM 34, 3, 578–595.
Boucher et al. (2019)
Boucher, C., Gagie, T., Kuhnle, A., Langmead, B., Manzini, G., and Mun, T. 2019.
Prefix-free parsing for building big BWTs.
Algorithms for Molecular Biology 14, 1, 13:1–13:15.
Burrows and
Wheeler (1994)
Burrows, M. and Wheeler, D. 1994.
A block sorting lossless data compression algorithm.
Tech. Rep. 124, Digital Equipment Corporation.
Büttcher et al. (2010)
Büttcher, S., Clarke, C. L. A., and Cormack, G. V.
2010.
Information Retrieval: Implementing and Evaluating Search
Engines.
MIT Press.
Cáceres and
Navarro (2019)
Cáceres, M. and Navarro, G. 2019.
Faster repetition-aware compressed suffix trees based on block trees.
In Proc. 26th International Symposium on String Processing and
Information Retrieval (SPIRE). 434–451.
Chan
et al. (2011)
Chan, T. M., Larsen, K. G., and Pătraşcu, M.
2011.
Orthogonal range searching on the RAM, revisited.
In Proc. 27th ACM Symposium on Computational Geometry (SoCG).
1–10.
Charikar et al. (2005)
Charikar, M., Lehman, E., Liu, D., Panigrahy, R., Prabhakaran, M., Sahai, A., and Shelat, A. 2005.
The smallest grammar problem.
IEEE Transactions on Information Theory 51, 7,
2554–2576.
Chazelle (1988)
Chazelle, B. 1988.
A functional approach to data structures and its use in
multidimensional searching.
SIAM Journal on Computing 17, 3, 427–462.
Chen
et al. (2008)
Chen, G., Puglisi, S. J., and Smyth, W. F. 2008.
Lempel-Ziv factorization using less time & space.
Mathematics in Computer Science 1, 605–623.
Christiansen and
Ettienne (2018)
Christiansen, A. R. and Ettienne, M. B. 2018.
Compressed indexing with signature grammars.
In Proc. 13th Latin American Symposium on Theoretical Informatics
(LATIN). 331–345.
Christiansen et al. (2019)
Christiansen, A. R., Ettienne, M. B., Kociumaka, T., Navarro, G., and Prezza, N. 2019.
Optimal-time dictionary-compressed indexes.
CoRR abs/1811.12779.
Claude et al. (2016)
Claude, F., Fariña, A., Martínez-Prieto, M., and Navarro, G. 2016.
Universal indexes for highly repetitive document collections.
Information Systems 61, 1–23.
Claude and
Munro (2013)
Claude, F. and Munro, J. I. 2013.
Document listing on versioned documents.
In Proc. 20th Symposium on String Processing and Information
Retrieval (SPIRE). 72–83.
Claude and
Navarro (2009)
Claude, F. and Navarro, G. 2009.
Self-indexed text compression using straight-line programs.
In Proc. 34th International Symposium on Mathematical
Foundations of Computer Science (MFCS). 235–246.
Claude and
Navarro (2011)
Claude, F. and Navarro, G. 2011.
Self-indexed grammar-based compression.
Fundamenta Informaticae 111, 3, 313–337.
Claude and
Navarro (2012)
Claude, F. and Navarro, G. 2012.
Improved grammar-based compressed indexes.
In Proc. 19th International Symposium on String Processing and
Information Retrieval (SPIRE). 180–192.
Claude
et al. (2020)
Claude, F., Navarro, G., and Pacheco, A. 2020.
Grammar-compressed indexes with logarithmic search time.
CoRR abs/2004.01032.
Cobas and
Navarro (2019)
Cobas, D. and Navarro, G. 2019.
Fast, small, and simple document listing on repetitive text
collections.
In Proc. 26th International Symposium on String Processing and
Information Retrieval (SPIRE). 482–498.
Cole and
Vishkin (1986)
Cole, R. and Vishkin, U. 1986.
Deterministic coin tossing with applications to optimal parallel list
ranking.
Information and Control 70, 1, 32–53.
Cover and
Thomas (2006)
Cover, T. and Thomas, J. 2006.
Elements of Information Theory, 2nd ed.
Wiley.
Cox et al. (2016)
Cox, A. J., Farruggia, A., Gagie, T., Puglisi, S. J.,
and Sirén, J. 2016.
RLZAP: Relative Lempel-Ziv with adaptive pointers.
In Proc 23rd International Symposium on String Processing and
Information Retrieval (SPIRE). 1–14.
Crochemore and
Hancart (1997)
Crochemore, M. and Hancart, C. 1997.
Automata for matching patterns.
In Handbook of Formal Languages. Springer, 399–462.
Crochemore et al. (2012)
Crochemore, M., Iliopoulos, C. S., Kubica, M., Rytter,
W., and Waleń, T. 2012.
Efficient algorithms for three variants of the LPF table.
Journal of Discrete Algorithms 11, 51–61.
Crochemore and
Rytter (2002)
Crochemore, M. and Rytter, W. 2002.
Jewels of Stringology.
World Scientific.
Deorowicz
et al. (2015)
Deorowicz, S., Danek, A., and Niemiec, M. 2015.
GDC 2: Compression of large collections of genomes.
CoRR abs/1503.01624.
Deorowicz and
Grabowski (2011)
Deorowicz, S. and Grabowski, S. 2011.
Robust relative compression of genomes with random access.
Bioinformatics 27, 2979–2986.
Do
et al. (2014)
Do, H. H., Jansson, J., Sadakane, K., and Sung,
W.-K. 2014.
Fast relative Lempel-Ziv self-index for similar sequences.
Theoretical Computer Science 532, 14–30.
Elsayed and
Oard (2006)
Elsayed, T. and Oard, D. W. 2006.
Modeling identity in archival collections of email: A preliminary
study.
In Proc. 3rd Conference on Email and Anti-Spam (CEAS).
Farach and
Thorup (1998)
Farach, M. and Thorup, M. 1998.
String matching in Lempel-Ziv compressed strings.
Algorithmica 20, 4, 388–404.
Farach-Colton et al. (2000)
Farach-Colton, M., Ferragina, P., and Muthukrishnan, S.
2000.
On the sorting-complexity of suffix tree construction.
Journal of the ACM 47, 6, 987–1011.
Farruggia et al. (2018)
Farruggia, A., Gagie, T., Navarro, G., Puglisi, S. J.,
and Sirén, J. 2018.
Relative suffix trees.
The Computer Journal 61, 5, 773–788.
Ferrada
et al. (2014)
Ferrada, H., Gagie, T., Gog, S., and Puglisi,
S. J. 2014.
Relative Lempel-Ziv with constant-time random access.
In Proc. 21st International Symposium on String Processing and
Information Retrieval (SPIRE). 13–17.
Ferrada
et al. (2013)
Ferrada, H., Gagie, T., Hirvola, T., and Puglisi,
S. J. 2013.
Hybrid indexes for repetitive datasets.
CoRR abs/1306.4037.
Ferrada
et al. (2018)
Ferrada, H., Kempa, D., and Puglisi, S. J. 2018.
Hybrid indexing revisited.
In Proc. 20th Workshop on Algorithm Engineering and Experiments
(ALENEX). 1–8.
Ferragina et al. (2005)
Ferragina, P., Giancarlo, R., Manzini, G., and Sciortino, M. 2005.
Boosting textual compression in optimal linear time.
Journal of the ACM 52, 4, 688–713.
Ferragina and
Grossi (1999)
Ferragina, P. and Grossi, R. 1999.
The string B-tree: A new data structure for string search in
external memory and its applications.
Journal of the ACM 46, 2, 236–280.
Ferragina and
Manzini (2000)
Ferragina, P. and Manzini, G. 2000.
Opportunistic data structures with applications.
In Proc. 41st IEEE Symposium on Foundations of Computer Science
(FOCS). 390–398.
Ferragina and
Manzini (2005)
Ferragina, P. and Manzini, G. 2005.
Indexing compressed texts.
Journal of the ACM 52, 4, 552–581.
Ferragina et al. (2007)
Ferragina, P., Manzini, G., Mäkinen, V., and Navarro, G. 2007.
Compressed representations of sequences and full-text indexes.
ACM Transactions on Algorithms 3, 2, article 20.
Fischer et al. (2015)
Fischer, J., Gagie, T., Gawrychowski, P., and Kociumaka, T. 2015.
Approximating LZ77 via small-space multiple-pattern matching.
In Proc. 23rd Annual European Symposium on Algorithms (ESA).
533–544.
Fischer and
Heun (2011)
Fischer, J. and Heun, V. 2011.
Space-efficient preprocessing schemes for range minimum queries on
static arrays.
SIAM Journal on Computing 40, 2, 465–492.
Fischer
et al. (2015)
Fischer, J., I, T., and Köppl, D. 2015.
Lempel Ziv computation in small space (LZ-CISS).
In Proc. 26th Annual Symposium on Combinatorial Pattern Matching
(CPM). 172–184.
Fischer
et al. (2018)
Fischer, J., I, T., Köppl, D., and Sadakane,
K. 2018.
Lempel-Ziv factorization powered by space efficient suffix trees.
Algorithmica 80, 7, 2048–2081.
Fischer
et al. (2009)
Fischer, J., Mäkinen, V., and Navarro, G. 2009.
Faster entropy-bounded compressed suffix trees.
Theoretical Computer Science 410, 51, 5354–5364.
Fritz
et al. (2011)
Fritz, M. H.-Y., Leinonen, R., Cochrane, G., and Birney, E. 2011.
Efficient storage of high throughput DNA sequencing data using
reference-based compression.
Genome Research, 734–740.
Gagie (2006)
Gagie, T. 2006.
Large alphabets and incompressibility.
Information Processing Letters 99, 6, 246–251.
Gagie et al. (2012)
Gagie, T., Gawrychowski, P., Kärkkäinen, J., Nekrich, Y., and Puglisi, S. J. 2012.
A faster grammar-based self-index.
In Proc. 6th International Conference on Language and Automata
Theory and Applications (LATA). 240–251.
Gagie et al. (2014)
Gagie, T., Gawrychowski, P., Kärkkäinen, J., Nekrich, Y., and Puglisi, S. J. 2014.
LZ77-based self-indexing with faster pattern matching.
In Proc. 11th Latin American Symposium on Theoretical
Informatics (LATIN). 731–742.
Gagie et al. (2017)
Gagie, T., Hartikainen, A., Karhu, K., Kärkkäinen, J., Navarro, G., Puglisi, S. J., and
Sirén, J. 2017.
Document retrieval on repetitive collections.
Information Retrieval 20, 253–291.
Gagie et al. (2019)
Gagie, T., I, T., Manzini, G., Navarro, G., Sakamoto, H., and Takabatake, Y. 2019.
Rpair: Scaling up repair with rsync.
In Proc. 26th International Symposium on String Processing and
Information Retrieval (SPIRE). 35–44.
Gagie
et al. (2018a)
Gagie, T., Navarro, G., and Prezza, N. 2018a.
On the approximation ratio of Lempel-Ziv parsing.
In Proc. 13th Latin American Symposium on Theoretical
Informatics (LATIN). 490–503.
Gagie
et al. (2018b)
Gagie, T., Navarro, G., and Prezza, N. 2018b.
Optimal-time text indexing in BWT-runs bounded space.
In Proc. 29th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA). 1459–1477.
Gagie
et al. (2020)
Gagie, T., Navarro, G., and Prezza, N. 2020.
Fully-functional suffix trees and optimal text searching in
BWT-runs bounded space.
Journal of the ACM 67, 1, article 2.
Gagie
et al. (2016)
Gagie, T., Puglisi, S. J., and Valenzuela, D. 2016.
Analyzing Relative Lempel-Ziv reference construction.
In Proc. 23rd International Symposium on String Processing and
Information Retrieval (SPIRE). 160–165.
Gallant (1982)
Gallant, J. K. 1982.
String compression algorithms.
Ph.D. thesis, Princeton University.
Ganardi
et al. (2019)
Ganardi, M., Jeż, A., and Lohrey, M. 2019.
Balancing straight-line programs.
CoRR abs/1902.03568.
Gasieniec et al. (2005)
Gasieniec, L., Kolpakov, R., Potapov, I., and Sant, P. 2005.
Real-time traversal in grammar-based compressed files.
In Proc. 15th Data Compression Conference (DCC). 458–458.
Gawrychowski (2011)
Gawrychowski, P. 2011.
Pattern matching in Lempel-Ziv compressed strings: Fast, simple,
and deterministic.
In Proc. 19th Annual European Symposium on Algorithms (ESA).
421–432.
Gog et al. (2019)
Gog, S., Kärkkäinen, J., Kempa, D., Petri,
M., and Puglisi, S. J. 2019.
Fixed block compression boosting in FM-indexes: Theory and
practice.
Algorithmica 81, 4, 1370–1391.
González and
Navarro (2007)
González, R. and Navarro, G. 2007.
Compressed text indexes with fast locate.
In Proc. 18th Annual Symposium on Combinatorial Pattern Matching
(CPM). 216–227.
Goto and Bannai (2013)
Goto, K. and Bannai, H. 2013.
Simpler and faster Lempel Ziv factorization.
In Proc. 23rd Data Compression Conference (DCC). 133–142.
Goto and Bannai (2014)
Goto, K. and Bannai, H. 2014.
Space efficient linear time Lempel-Ziv factorization for small
alphabets.
In Proc. 24th Data Compression Conference (DCC). 163–172.
Grossi (2011)
Grossi, R. 2011.
A quick tour on suffix arrays and compressed suffix arrays.
Theoretical Computer Science 412, 27, 2964–2973.
Grossi and
Vitter (2000)
Grossi, R. and Vitter, J. S. 2000.
Compressed suffix arrays and suffix trees with applications to text
indexing and string matching.
In Proc. 32nd ACM Symposium on Theory of Computing (STOC).
397–406.
Gusfield (1997)
Gusfield, D. 1997.
Algorithms on Strings, Trees and Sequences: Computer Science and
Computational Biology.
Cambridge University Press.
Henzinger (2006)
Henzinger, M. R. 2006.
Finding near-duplicate web pages: A large-scale evaluation of
algorithms.
In Proc. 29th Annual International ACM Conference on Research
and Development in Information Retrieval (SIGIR). 284–291.
Hon
et al. (2012)
Hon, W.-K., Shah, R., Thankachan, S. V., and Vitter, J. S. 2012.
On position restricted substring searching in succinct space.
Journal of Discrete Algorithms 17, 109–114.
Hucke
et al. (2016)
Hucke, D., Lohrey, M., and Reh, C. P. 2016.
The smallest grammar problem revisited.
In Proc. 23rd International Symposium on String Processing and
Information Retrieval (SPIRE). 35–49.
Jacobson (1989)
Jacobson, G. 1989.
Space-efficient static trees and graphs.
In Proc. 30th IEEE Symposium on Foundations of Computer Science
(FOCS). 549–554.
Jeż (2015)
Jeż, A. 2015.
Approximation of grammar-based compression via recompression.
Theoretical Computer Science 592, 115–134.
Jeż (2016)
Jeż, A. 2016.
A really simple approximation of smallest grammar.
Theoretical Computer Science 616, 141–150.
Kapser and
Godfrey (2005)
Kapser, C. and Godfrey, M. W. 2005.
Improved tool support for the investigation of duplication in
software.
In Proc. 21st IEEE International Conference on Software
Maintenance (ICSM). 305–314.
Kärkkäinen et al. (2013)
Kärkkäinen, J., Kempa, D., and Puglisi,
S. J. 2013.
Lightweight Lempel-Ziv parsing.
In Proc. 12th International Symposium on Experimental Algorithms
(SEA). 139–150.
Kärkkäinen et al. (2014)
Kärkkäinen, J., Kempa, D., and Puglisi,
S. J. 2014.
Lempel-Ziv parsing in external memory.
In Proc. 24th Data Compression Conference (DCC). 153–162.
Kärkkäinen et al. (2016)
Kärkkäinen, J., Kempa, D., and Puglisi,
S. J. 2016.
Lazy Lempel-Ziv factorization algorithms.
ACM Journal of Experimental Algorithmics 21, 1,
2.4:1–2.4:19.
Kärkkäinen et al. (2006)
Kärkkäinen, J., Sanders, P., and Burkhardt,
S. 2006.
Linear work suffix array construction.
Journal of the ACM 53, 6, 918–936.
Kärkkäinen and Ukkonen (1996)
Kärkkäinen, J. and Ukkonen, E. 1996.
Lempel-Ziv parsing and sublinear-size index structures for string
matching.
In Proc. 3rd South American Workshop on String Processing
(WSP). 141–155.
Karp and Rabin (1987)
Karp, R. M. and Rabin, M. O. 1987.
Efficient randomized pattern-matching algorithms.
IBM Journal of Research and Development 31, 2, 249–260.
Kasai
et al. (2001)
Kasai, T., Lee, G., Arimura, H., Arikawa, S., and
Park, K. 2001.
Linear-time longest-common-prefix computation in suffix arrays and
its applications.
In Proc. 12th Annual Symposium on Combinatorial Pattern Matching
(CPM). 181–192.
Kempa (2019)
Kempa, D. 2019.
Optimal construction of compressed indexes for highly repetitive
texts.
In Proc. 30th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA). 1344–1357.
Kempa and
Kociumaka (2019)
Kempa, D. and Kociumaka, T. 2019.
Resolution of the Burrows-Wheeler Transform conjecture.
CoRR abs/1910.10631.
Kempa and
Kosolobov (2017)
Kempa, D. and Kosolobov, D. 2017.
LZ-End parsing in compressed space.
In Proc. 27th Data Compression Conference (DCC). 350–359.
Kempa and
Prezza (2018)
Kempa, D. and Prezza, N. 2018.
At the roots of dictionary compression: String attractors.
In Proc. 50th Annual ACM Symposium on the Theory of Computing
(STOC). 827–840.
Kempa and
Puglisi (2013)
Kempa, D. and Puglisi, S. J. 2013.
Lempel-Ziv factorization: Simple, fast, practical.
In Proc. 15th Workshop on Algorithm Engineering and Experiments
(ALENEX). 103–112.
Kida et al. (2003)
Kida, T., Matsumoto, T., Shibata, Y., Takeda, M., Shinohara, A., and Arikawa, S. 2003.
Collage system: A unifying framework for compressed pattern matching.
Theoretical Computer Science 298, 1, 253–272.
Kieffer and
Yang (2000)
Kieffer, J. C. and Yang, E.-H. 2000.
Grammar-based codes: A new class of universal lossless source
codes.
IEEE Transactions on Information Theory 46, 3,
737–754.
Kim
et al. (2005)
Kim, D. K., Sim, J. S., Park, H., and Park, K.
2005.
Constructing suffix arrays in linear time.
Journal of Discrete Algorithms 3, 2-4, 126–142.
Ko and Aluru (2005)
Ko, P. and Aluru, S. 2005.
Space efficient linear time construction of suffix arrays.
Journal of Discrete Algorithms 3, 2-4, 143–156.
Kociumaka
et al. (2020)
Kociumaka, T., Navarro, G., and Prezza, N. 2020.
Towards a definitive measure of repetitiveness.
In Proc. 14th Latin American Symposium on Theoretical
Informatics (LATIN).
To appear.
Kolmogorov (1965)
Kolmogorov, A. N. 1965.
Three approaches to the quantitative definition of information.
Problems of Information Transmission 1, 1, 1–7.
Köppl and
Sadakane (2016)
Köppl, D. and Sadakane, K. 2016.
Lempel-Ziv computation in compressed space (LZ-CICS).
In Proc. 26th Data Compression Conference (DCC). 3–12.
Kosaraju and
Manzini (2000)
Kosaraju, R. and Manzini, G. 2000.
Compression of low entropy strings with Lempel-Ziv algorithms.
SIAM Journal on Computing 29, 3, 893–911.
Kreft and
Navarro (2011)
Kreft, S. and Navarro, G. 2011.
Self-indexing based on LZ77.
In Proc. 22nd Annual Symposium on Combinatorial Pattern Matching
(CPM). 41–54.
Kreft and
Navarro (2013)
Kreft, S. and Navarro, G. 2013.
On compressing and indexing repetitive sequences.
Theoretical Computer Science 483, 115–133.
Kuruppu et al. (2012)
Kuruppu, S., Beresford-Smith, B., Conway, T. C., and
Zobel, J. 2012.
Iterative dictionary construction for compression of large DNA data
sets.
IEEE/ACM Transactions on Computational Biology and
Bioinformatics 9, 137–149.
Kuruppu
et al. (2010)
Kuruppu, S., Puglisi, S. J., and Zobel, J. 2010.
Relative Lempel-Ziv compression of genomes for large-scale storage
and retrieval.
In Proc. 17th International Symposium on String Processing and
Information Retrieval (SPIRE). 201–206.
Kuruppu
et al. (2011)
Kuruppu, S., Puglisi, S. J., and Zobel, J. 2011.
Reference sequence construction for relative compression of genomes.
In Proc. 18th International Symposium on String Processing and
Information Retrieval (SPIRE). 420–425.
Köppl et al. (2019)
Köppl, D., I, T., Furuya, I., Takabatake, Y., Sakai, K., and Goto, K. 2019.
Re-Pair in small space.
CoRR abs/1908.04933.
Larsson and
Moffat (2000)
Larsson, J. and Moffat, A. 2000.
Off-line dictionary-based compression.
Proceedings of the IEEE 88, 11, 1722–1732.
Lempel and Ziv (1976)
Lempel, A. and Ziv, J. 1976.
On the complexity of finite sequences.
IEEE Transactions on Information Theory 22, 1,
75–81.
Liao
et al. (2016)
Liao, K., Petri, M., Moffat, A., and Wirth, A.
2016.
Effective construction of Relative Lempel-Ziv dictionaries.
In Proc. 25th International Conference on World Wide Web (WWW).
807–816.
Linstead et al. (2009)
Linstead, E., Bajracharya, S., Ngo, T., Rigor, P., Lopes, C., and Baldi, P. 2009.
Sourcerer: Mining and searching internet-scale software repositories.
Data Mining and Knowledge Discovery 18, 2, 300–336.
Liu (2007)
Liu, B. 2007.
Web Data Mining: Exploring Hyperlinks, Contents and Usage Data.
Springer.
Mäkinen et al. (2015)
Mäkinen, V., Belazzougui, D., Cunial, F., and Tomescu, A. I. 2015.
Genome-Scale Algorithm Design.
Cambridge University Press.
Mäkinen and
Navarro (2005)
Mäkinen, V. and Navarro, G. 2005.
Succinct suffix arrays based on run-length encoding.
Nordic Journal of Computing 12, 1, 40–66.
Mäkinen and
Navarro (2008)
Mäkinen, V. and Navarro, G. 2008.
Dynamic entropy-compressed sequences and full-text indexes.
ACM Transactions on Algorithms 4, 3, article 32.
Mäkinen et al. (2010)
Mäkinen, V., Navarro, G., Sirén, J., and Välimäki, N. 2010.
Storage and retrieval of highly repetitive sequence collections.
Journal of Computational Biology 17, 3, 281–308.
Manber and
Myers (1993)
Manber, U. and Myers, G. 1993.
Suffix arrays: A new method for on-line string searches.
SIAM Journal on Computing 22, 5, 935–948.
Manzini (2001)
Manzini, G. 2001.
An analysis of the Burrows-Wheeler transform.
Journal of the ACM 48, 3, 407–430.
Maruyama et al. (2011)
Maruyama, S., Nakahara, M., Kishiue, N., and Sakamoto, H. 2011.
ESP-Index: A compressed index based on edit-sensitive parsing.
In Proc. 18th International Symposium on String Processing and
Information Retrieval (SPIRE). 398–409.
Maruyama et al. (2013)
Maruyama, S., Nakahara, M., Kishiue, N., and Sakamoto, H. 2013.
ESP-index: A compressed index based on edit-sensitive parsing.
Journal of Discrete Algorithms 18, 100–112.
Maruyama
et al. (2012)
Maruyama, S., Sakamoto, H., and Takeda, M. 2012.
An online algorithm for lightweight grammar-based compression.
Algorithms 5, 2, 213–235.
Maruyama et al. (2013)
Maruyama, S., Tabei, Y., Sakamoto, H., and Sadakane, K. 2013.
Fully-online grammar compression.
In Proc. 20th International Symposium on String Processing and
Information Retrieval (SPIRE). 218–229.
McCreight (1976)
McCreight, E. 1976.
A space-economical suffix tree construction algorithm.
Journal of the ACM 23, 2, 262–272.
Mehlhorn
et al. (1997)
Mehlhorn, K., Sundar, R., and Uhrig, C. 1997.
Maintaining dynamic sequences under equality tests in polylogarithmic
time.
Algorithmica 17, 2, 183–198.
Morrison (1968)
Morrison, D. 1968.
PATRICIA – practical algorithm to retrieve information coded in
alphanumeric.
Journal of the ACM 15, 4, 514–534.
Munro and
Nekrich (2015)
Munro, J. I. and Nekrich, Y. 2015.
Compressed data structures for dynamic sequences.
In Proc. 23rd Annual European Symposium on Algorithms (ESA).
891–902.
Muthukrishnan (2002)
Muthukrishnan, S. 2002.
Efficient algorithms for document retrieval problems.
In Proc. 13th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA). 657–666.
Na et al. (2018)
Na, J. C., Kim, H., Min, S., Park, H., Lecroq,
T., Léonard, M., Mouchard, L., and Park, K.
2018.
FM-index of alignment with gaps.
Theoretical Computer Science 710, 148–157.
Na et al. (2016)
Na, J. C., Kim, H., Park, H., Lecroq, T., Léonard, M., Mouchard, L., and Park, K. 2016.
FM-index of alignment: A compressed index for similar strings.
Theoretical Computer Science 638, 159–170.
Na et al. (2013)
Na, J. C., Park, H., Crochemore, M., Holub, J., Iliopoulos, C. S., Mouchard, L., and Park, K. 2013.
Suffix tree of alignment: An efficient index for similar data.
In Proc. 24th International Workshop on Combinatorial Algorithms
(IWOCA). 337–348.
Navarro (2014)
Navarro, G. 2014.
Spaces, trees and colors: The algorithmic landscape of document
retrieval on sequences.
ACM Computing Surveys 46, 4, article 52.
Navarro (2016)
Navarro, G. 2016.
Compact Data Structures – A practical approach.
Cambridge University Press.
Navarro (2017)
Navarro, G. 2017.
A self-index on block trees.
In Proc. 24th International Symposium on String Processing and
Information Retrieval (SPIRE). 278–289.
Navarro (2019)
Navarro, G. 2019.
Document listing on repetitive collections with guaranteed
performance.
Theoretical Computer Science 777, 58–72.
Navarro and
Mäkinen (2007)
Navarro, G. and Mäkinen, V. 2007.
Compressed full-text indexes.
ACM Computing Surveys 39, 1, article 2.
Navarro and
Nekrich (2017)
Navarro, G. and Nekrich, Y. 2017.
Time-optimal top-$k$ document retrieval.
SIAM Journal on Computing 46, 1, 89–113.
Navarro and
Ordóñez (2016)
Navarro, G. and Ordóñez, A. 2016.
Faster compressed suffix trees for repetitive text collections.
Journal of Experimental Algorithmics 21, 1, article
1.8.
Navarro and
Prezza (2019)
Navarro, G. and Prezza, N. 2019.
Universal compressed text indexing.
Theoretical Computer Science 762, 41–50.
Navarro
et al. (2019)
Navarro, G., Prezza, N., and Ochoa, C. 2019.
On the approximation ratio of greedy parsings.
CoRR abs/1803.09517v2.
Navarro and
Sepúlveda (2019)
Navarro, G. and Sepúlveda, V. 2019.
Practical indexing of repetitive collections using Relative
Lempel-Ziv.
In Proc. 29th Data Compression Conference (DCC). 201–210.
Nevill-Manning et al. (1994)
Nevill-Manning, C., Witten, I., and Maulsby, D. 1994.
Compression by induction of hierarchical grammars.
In Proc. 4th Data Compression Conference (DCC). 244–253.
Nishimoto et al. (2015)
Nishimoto, T., I, T., Inenaga, S., Bannai, H., and Takeda, M. 2015.
Dynamic index, LZ factorization, and LCE queries in compressed
space.
CoRR abs/1504.06954.
Nishimoto et al. (2016)
Nishimoto, T., I, T., Inenaga, S., Bannai, H., and Takeda, M. 2016.
Fully dynamic data structure for LCE queries in compressed space.
In Proc. 41st International Symposium on Mathematical
Foundations of Computer Science (MFCS). 72:1–72:15.
Nishimoto et al. (2020)
Nishimoto, T., I, T., Inenaga, S., Bannai, H., and Takeda, M. 2020.
Dynamic index and LZ factorization in compressed space.
Discrete Applied Mathematics 274, 116–129.
Nishimoto
et al. (2018)
Nishimoto, T., Takabatake, Y., and Tabei, Y. 2018.
A dynamic compressed self-index for highly repetitive text
collections.
In Proc. 28th Data Compression Conference (DCC). 287–296.
Ochoa and
Navarro (2019)
Ochoa, C. and Navarro, G. 2019.
RePair and all irreducible grammars are upper bounded by high-order
empirical entropy.
IEEE Transactions on Information Theory 65, 5,
3160–3164.
Ohlebusch (2013)
Ohlebusch, E. 2013.
Bioinformatics Algorithms: Sequence Analysis, Genome
Rearrangements, and Phylogenetic Reconstruction.
Oldenbusch Verlag.
Ohlebusch and
Gog (2011)
Ohlebusch, E. and Gog, S. 2011.
Lempel-Ziv factorization revisited.
In Proc. 22nd Annual Symposium on Combinatorial Pattern Matching
(CPM). 15–26.
Ohno
et al. (2018)
Ohno, T., Sakai, K., Takabatake, Y., I, T., and
Sakamoto, H. 2018.
A faster implementation of online RLBWT and its application to
LZ77 parsing.
Journal of Discrete Algorithms 52-53, 18–28.
Pătraşcu and Thorup (2006)
Pătraşcu, M. and Thorup, M. 2006.
Time-space trade-offs for predecessor search.
In Proc. 38th Annual ACM Symposium on Theory of Computing
(STOC). 232–240.
Policriti and
Prezza (2018)
Policriti, A. and Prezza, N. 2018.
LZ77 computation based on the run-length encoded BWT.
Algorithmica 80, 7, 1986–2011.
Prezza (2016)
Prezza, N. 2016.
Compressed computation for text indexing.
Ph.D. thesis, University of Udine.
Przeworski
et al. (2000)
Przeworski, M., Hudson, R. R., and Rienzo, A. D. 2000.
Adjusting the focus on human variation.
Trends in Genetics 16, 7, 296–302.
Raskhodnikova et al. (2013)
Raskhodnikova, S., Ron, D., Rubinfeld, R., and Smith, A. D. 2013.
Sublinear algorithms for approximating string compressibility.
Algorithmica 65, 3, 685–709.
Rodeh
et al. (1981)
Rodeh, M., Pratt, V. R., and Even, S. 1981.
Linear algorithm for data compression via string matching.
Journal of the ACM 28, 1, 16–24.
Rubin (1976)
Rubin, F. 1976.
Experiments in text file compression.
Communications of the ACM 19, 11, 617–623.
Russo
et al. (2020)
Russo, L. M. S., Correia, A., Navarro, G., and Francisco, A. P. 2020.
Approximating optimal bidirectional macro schemes.
In Proc. 30th Data Compression Conference (DCC). 153–162.
Rytter (2003)
Rytter, W. 2003.
Application of Lempel-Ziv factorization to the approximation of
grammar-based compression.
Theoretical Computer Science 302, 1-3, 211–222.
Sadakane (2007a)
Sadakane, K. 2007a.
Compressed suffix trees with full functionality.
Theory of Computing Systems 41, 4, 589–607.
Sadakane (2007b)
Sadakane, K. 2007b.
Succinct data structures for flexible text retrieval systems.
Journal of Discrete Algorithms 5, 1, 12–22.
Sahinalp and
Rajpoot (2003)
Sahinalp, S. C. and Rajpoot, N. M. 2003.
Chapter 6: Dictionary based data compression: An algorithmic
perspective.
In Lossless Compression Handbook. Academic Press, 156–168.
Sahinalp and
Vishkin (1995)
Sahinalp, S. C. and Vishkin, U. 1995.
Data compression using locally consistent parsing.
Tech. rep., Dept. of Computer Science, University of Maryland.
Sakai et al. (2019)
Sakai, K., Ohno, T., Goto, K., Takabatake, Y., I,
T., and Sakamoto, H. 2019.
RePair in compressed space and time.
In Proc. 29th Data Compression Conference (DCC). 518–527.
Sakamoto (2005)
Sakamoto, H. 2005.
A fully linear-time approximation algorithm for grammar-based
compression.
Journal of Discrete Algorithms 3, 2–4, 416–430.
Shannon (1948)
Shannon, C. E. 1948.
A mathematical theory of communication.
Bell System Technical Journal 27, 379–423, 623–656.
Silvestri (2010)
Silvestri, F. 2010.
Mining query logs: Turning search usage data into knowledge.
Foundations and Trends in Information Retrieval 4, 1–2, 1–174.
Sirén et al. (2008)
Sirén, J., Välimäki, N., Mäkinen, V., and Navarro, G. 2008.
Run-length compressed indexes are superior for highly repetitive
sequence collections.
In Proc. 15th International Symposium on String Processing and
Information Retrieval (SPIRE). 164–175.
Stephens et al. (2015)
Stephens, Z. D., Lee, S. Y., Faghri, F., Campbell,
R. H., Chenxiang, Z., Efron, M. J., Iyer, R., Sinha,
S., and Robinson, G. E. 2015.
Big data: Astronomical or genomical?
PLoS Biology 13, 7, e1002195.
Storer and
Szymanski (1982)
Storer, J. A. and Szymanski, T. G. 1982.
Data compression via textual substitution.
Journal of the ACM 29, 4, 928–951.
Su
et al. (2010)
Su, J.-H., Huang, Y.-T., Yeh, H.-H., and Tseng,
V. S. 2010.
Effective content-based video retrieval using pattern-indexing and
matching techniques.
Expert Systems with Applications 37, 7, 5068–5085.
Takabatake
et al. (2017)
Takabatake, Y., I, T., and Sakamoto, H. 2017.
A space-optimal grammar compression.
In Proc. 25th Annual European Symposium on Algorithms (ESA).
67:1–67:15.
Takabatake
et al. (2014)
Takabatake, Y., Tabei, Y., and Sakamoto, H. 2014.
Improved ESP-index: A practical self-index for highly repetitive
texts.
In Proc. 13th International Symposium on Experimental Algorithms
(SEA). 338–350.
Takagi et al. (2017)
Takagi, T., Goto, K., Fujishige, Y., Inenaga, S., and Arimura, H. 2017.
Linear-size CDAWG: New repetition-aware indexing and grammar
compression.
CoRR abs/1705.09779.
Tao
et al. (2013)
Tao, K., Abel, F., Hauff, C., Houben, G., and
Gadiraju, U. 2013.
Groundhog day: Near-duplicate detection on Twitter.
In Proc. 22nd International World Wide Web Conference (WWW).
1273–1284.
Typke et al. (2005)
Typke, R., Wiering, F., and Veltkamp, R. 2005.
A survey of music information retrieval systems.
In Proc. 6th International Conference on Music Information
Retrieval (ISMIR). 153–160.
Ukkonen (1995)
Ukkonen, E. 1995.
On-line construction of suffix trees.
Algorithmica 14, 3, 249–260.
Valenzuela et al. (2019)
Valenzuela, D., Kosolobov, D., Navarro, G., and Puglisi, S. J. 2019.
Lempel-Ziv-like parsing in small space.
CoRR abs/1903.01909.
Verbin and Yu (2013)
Verbin, E. and Yu, W. 2013.
Data structure lower bounds on random access to grammar-compressed
strings.
In Proc. 24th Annual Symposium on Combinatorial Pattern Matching
(CPM). 247–258.
Weiner (1973)
Weiner, P. 1973.
Linear Pattern Matching Algorithms.
In Proc. 14th IEEE Symposium on Switching and Automata Theory
(FOCS). 1–11.
Witten et al. (1987)
Witten, I. H., Neal, R. M., and Cleary, J. G. 1987.
Arithmetic coding for data compression.
Communications of the ACM 30, 520–540.
Yamamoto et al. (2014)
Yamamoto, J., I, T., Bannai, H., Inenaga, S., and
Takeda, M. 2014.
Faster compact on-line Lempel-Ziv factorization.
In Proc. 31st International Symposium on Theoretical Aspects of
Computer Science (STACS). 675–686.
Ziv and Lempel (1977)
Ziv, J. and Lempel, A. 1977.
A universal algorithm for sequential data compression.
IEEE Transactions on Information Theory 23, 3,
337–343.
Appendix A History of the Contributions to Parsing-Based Indexing
Claude and Navarro (2009, 2010)
proposed the first compressed index based on
grammar
compression. Given any grammar of size $g$, their index uses $O(g)$ space to
implement the grid and the tracking of occurrences over the grammar tree, but
not yet the amortization mechanism we described. On a grammar tree of height
$h$, the index searches in time $O(m(m+h)\log n+occ\cdot h\log n)$ and
extracts a substring of length $\ell$ in time $O((\ell+h)\log n)$. The terms
$O(\log n)$ can be reduced by using more advanced data structures, but the
index was designed with practice in mind and it was actually implemented
[Claude et al. (2016)],
using a RePair construction [Larsson and
Moffat (2000)] that is heuristically balanced.
Kreft and Navarro (2011, 2013)
proposed the first compressed index based on
Lempel-Ziv, and the only one so far of size $O(z)$. Within this size, they
cannot provide access to $S$ with good time guarantees: each accessed symbol
must be traced through the chain of target-to-source dependencies. If the
maximum length of such a chain is $h\leq z$, their search time is $O(m^{2}h+(m+occ)\log z)$. The term $\log z$ could be $\log^{\epsilon}z$ by using
the geometric structure we have described but, again,
they opt for a practical version.
Binary searches in $\mathcal{X}$ and $\mathcal{Y}$ are sped up with Patricia
trees [Morrison (1968)]. A substring of length $\ell$ is extracted in time
$O(\ell\,h)$. This is the smallest implemented index; it is rather
efficient unless the patterns are too long [Kreft and
Navarro (2013), Claude et al. (2016)]. Interestingly,
it outperforms the previous index [Claude and
Navarro (2011)] both in space (as expected)
and time (not expected).
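The target-to-source tracing that dominates this index's access cost can be illustrated with a toy sketch (the naive parser and `access` function below are illustrative stand-ins, not the actual implementation of Kreft and Navarro):

```python
# Minimal sketch: extracting one symbol from an LZ77-like parse by tracing
# the chain of target-to-source dependencies. Each phrase is a triple
# (source_pos, copy_len, trailing_char); the access cost is bounded by the
# chain depth h.

def lz77_parse(s):
    """Naive quadratic LZ77-like parse into (src, copy_len, char) triples."""
    phrases, i = [], 0
    while i < len(s):
        src, length = -1, 0
        for j in range(i):
            l = 0
            while i + l < len(s) - 1 and s[j + l] == s[i + l]:
                l += 1
            if l > length:
                src, length = j, l
        phrases.append((src, length, s[i + length]))
        i += length + 1
    return phrases

def access(phrases, pos):
    """Return S[pos] by walking the phrase chain back to an explicit char."""
    start = 0
    for src, length, ch in phrases:
        if pos < start + length + 1:
            if pos == start + length:    # the explicit trailing character
                return ch
            return access(phrases, src + (pos - start))  # jump to the source
        start += length + 1
    raise IndexError(pos)

s = "abracadabra"
p = lz77_parse(s)
assert all(access(p, i) == s[i] for i in range(len(s)))
```

Each `access` call jumps strictly backwards in the text, so the recursion depth is at most the chain length $h$ mentioned above.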
Maruyama et al. (2011, 2013) and Takabatake et al. (2014)
propose another grammar index based on
“edit-sensitive parsing”, which is related to locally consistent parsing
(see the end of Section 5.1.1). This ensures that the parsing of
$P$ and of any of its occurrences in $S$ will differ only by a few ($O(\log n\log^{*}n)$) symbols in the extremes of the respective parse trees, and therefore
the internal symbols are consistent. By looking for those one captures all the
$occ_{c}$ potential occurrences, which however can be more than the
actual occurrences. Given a grammar of size $g_{e}\geq g$ built using
edit-sensitive parsing, their index takes $O(g_{e})$ space and searches in time
$O(m\log\log n\log^{*}n+occ_{c}\log m\log n\log^{*}n)$. Substrings of length
$\ell$ are extracted in time $O(\ell+\log n)$. Their index is implemented,
and outperforms that of Kreft and Navarro (2013) for $m\geq 100$.
Claude and Navarro (2012) and Claude et al. (2020)
improved the proposal of
Claude and Navarro (2010) by introducing the amortization mechanism and also using the
mechanism to extract phrase prefixes and suffixes in optimal time
(Section 4.1.3). The result is an index of size
$O(g)$ built on any grammar of size $g$, which searches in time
$O(m^{2}\log\log_{g}n+(m+occ)\log n)$. Again, this index is described with
practicality in mind; they show that with larger data structures of size
$O(g)$ one can reach search time $O(m^{2}+(m+occ)\log^{\epsilon}n)$. Any
substring of size $\ell$ can be extracted in time $O(\ell+\log n)$ with the
mechanisms seen in Section 4.1.1. An implementation of
this index [Claude
et al. (2020)] outperforms the Lempel-Ziv based index [Kreft and
Navarro (2013)]
in time, while using somewhat more space. The optimal-time extraction of
prefixes and suffixes is shown to have no practical impact on balanced grammars.
Gagie et al. (2012)
invented bookmarking to speed up substring extraction
in the structure of Kreft and Navarro (2013). They use bookmarking on a Lempel-Ziv parse,
of size $O(z\log\log z)$, which is added to a grammar of size $O(g)$ to provide
direct access to $S$. As a result, their index is of size $O(g+z\log\log z)$
and searches in time $O(m^{2}+(m+occ)\log\log n)$. Their technique is more
sophisticated than the one we present in Section 4.3, but it
would not improve the tradeoffs we obtained.
Ferrada et al. (2013, 2018)
proposed the so-called hybrid indexing.
Given a maximum pattern length $M$ that can be sought, and a suitable parse
(Lempel-Ziv, in their case) of size $z$, they form a string $S^{\prime}$ of size
$<2Mz$ by collecting the pieces $S[i-M+1\mathinner{.\,.}i+M-2]$ around all phrase
beginnings $i$ and separating them with $\$$s. Any primary occurrence in $S$ is then found in $S^{\prime}$. They then index $S^{\prime}$ using any compact index and search
it for $P$. All the occurrences of $P$ in $S^{\prime}$ that cross the middle of a piece
are primary occurrences in $S$. The mechanism of Section 5.2.1
to propagate primary to secondary occurrences is used. Patterns longer than $M$
are searched for by cutting them into chunks of length $M$ and assembling their
occurrences. Their space is then $O(Mz/\log_{\sigma}n+z)$. Though they cannot
give time guarantees, their implementation outperforms other classical indexes
[Mäkinen et al. (2010), Kreft and
Navarro (2013)] when $m$ is up to a few times $M$. The weak point of this
index shows up when $m$ is much smaller or much larger than the $M$ value chosen
at index construction time.
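The construction of $S^{\prime}$ described above can be sketched as follows (the parse is a made-up toy given directly as phrase start positions, and the function name is illustrative):

```python
# Sketch of the hybrid-indexing preprocessing: S' concatenates, separated
# by '$', the context S[i-M+1 .. i+M-2] around every phrase beginning i,
# so every primary occurrence of a pattern of length <= M crosses the
# middle of some piece.

def build_hybrid_text(S, phrase_starts, M):
    pieces = []
    for i in phrase_starts:
        lo = max(0, i - M + 1)
        hi = min(len(S), i + M - 1)   # one past S[i+M-2]
        pieces.append(S[lo:hi])
    return "$".join(pieces)           # total length < 2*M*len(phrase_starts)

S = "abracadabra_abracadabra"
starts = [0, 12]                      # toy parse: two phrase beginnings
Sp = build_hybrid_text(S, starts, M=4)
assert "a_ab" in Sp                   # a primary occurrence crossing i = 12
```

The resulting $S^{\prime}$ is then handed to any compact index; occurrences of $P$ crossing the middle of a piece map back to primary occurrences in $S$.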
Gagie et al. (2014)
extended bookmarking to include fingerprinting as
well (Section 4.3, again more sophisticated than our
presentation), and invented the technique of using fingerprinting to remove
the $O(m^{2})$ term that appeared in all previous indexes. In the way they
present their index, the space is $O(z\log n)$ and the search
time is $O(m\log m+occ\log\log n)$. (Their actual space is
$O(z(\log^{*}n+\log(n/z)+\log\log z))$, which they convert to
$O(z\log(n/z))$ by assuming a small enough alphabet and using
$z=O(n/\log_{\sigma}n)$.)
Nishimoto et al. (2015, 2020)
propose the first dynamic compressed index
(i.e., one can modify $S$ without rebuilding the index from scratch). It is
based on edit-sensitive parsing, and they manage to remove the term $occ_{c}$
in the previous index [Takabatake
et al. (2014)] by finding stronger properties of the
encoding of $P$ via its parse tree. Their search time is
$O(m(\log\log n)^{2}+\log m\log n\log^{*}n(\log n+\log m\log^{*}n)+occ\log n)$.
Bille et al. (2017, 2018)
improve upon the result of
Gagie et al. (2014). They propose for the first time the batched search
for the pattern prefixes and suffixes, recall
Section 5.1.1. They also speed up the searches by storing more
points in the grid: if we store the points $S[i],\ldots,S[i+\tau-1]$ for every
phrase starting at $S[i]$, then we need to check only one every $\tau$
partitions of $P$, that is, we check $m/\tau$ partitions. This leads to various
tradeoffs, which in simplified form are:
$O(z\log(n/z)\log\log z)$ space and $O((m+occ)\log\log n)$ time,
$O(z(\log(n/z)+\log\log z))$ space and $O((m+occ)\log^{\epsilon}n)$ time,
$O(z(\log(n/z)+\log\log z)\log\log z)$ space and $O(m+occ\log\log n)$ time, and
$O(z(\log(n/z)+\log^{\epsilon}n))$ space and
$O(m+occ\log^{\epsilon}n)$ time.
The last two reach for the first time linear complexity in $m$.
They also show how to extract a substring of length $\ell$ in time
$O(\ell+\log(n/z))$.
Navarro (2017) and Navarro and Prezza (2019)
build a compressed index based on block
trees (Section 4.2), which are used to provide both access
and a suitable
parse of $S$. They reuse the idea of the grid and the mechanism to propagate
secondary occurrences. By using a block tree built on attractors [Navarro and
Prezza (2019)],
they obtain $O(\gamma\log(n/\gamma))$ space and $O(m\log n+occ\log^{\epsilon}n)$
search time. They called this index “universal” because it was the first
one built on a general measure of compressibility (attractors) instead of on
specific compressors like grammars or Lempel-Ziv. For example, if one builds
a bidirectional macro scheme of size $b$ (on which no index has been proposed),
one can use it as an upper bound to $\gamma$ and have a functional index
of size $O(b\log(n/b))$.
Christiansen and Ettienne (2018)
were the first to show that, using a locally consistent
parsing, only $O(\log m)$ partitions of $P$ need be considered (see the end of
Section 5.1.1). Building on
the grammar-based index of Claude and Navarro (2012) and on batched pattern searches
(Section 5.1.1), they obtain an index using
$O(z(\log(n/z)+\log\log z))$ space and $O(m+\log^{\epsilon}(z\log(n/z))+occ(\log\log n+\log^{\epsilon}z))\subseteq O(m+(1+occ)\log^{\epsilon}n)$
time (this corrected time is given in the journal version
[Christiansen et al. (2019)]), thus offering another tradeoff
with time linear in $m$.
Christiansen et al. (2019)
rebuild the result of Christiansen and Ettienne (2018) on top of
attractors, like Navarro and Prezza (2019). They use a slightly different run-length
grammar, which is proved to be of size $O(\gamma\log(n/\gamma))$, and
a better mechanism to track secondary occurrences within constant amortized
time [Claude and
Navarro (2012)]. Their index, of size $O(\gamma\log(n/\gamma))$,
then searches in time $O(m+\log^{\epsilon}\gamma+occ\log^{\epsilon}(\gamma\log(n/\gamma)))\subseteq O(m+(1+occ)\log^{\epsilon}n)$.
By enlarging the index to size $O(\gamma\log(n/\gamma)\log^{\epsilon}n)$,
they reach for the first time optimal time in parsing-based indexes,
$O(m+occ)$. Several other intermediate tradeoffs are obtained too.
Interestingly, they obtain this space in terms of $\gamma$ without
the need to find the smallest attractor, which makes the index implementable
(they use measure $\delta$, Section 3.10, to approximate $\gamma$).
Finally, they extend the current results on indexes
based on grammars to run-length grammars, thus reaching an index of size
$O(g_{rl})$ that searches in time $O(m\log n+occ\log^{\epsilon}n)$.
Kociumaka et al. (2020)
prove that the original block trees [Belazzougui
et al. (2015)]
are not only of size $O(z\log(n/z))$, but also $O(\delta\log(n/\delta))$.
They then show that the universal index of Navarro and Prezza (2019) can also be
represented in space $O(\delta\log(n/\delta))$ and is directly implementable
within this space. The search time, $O(m\log n+occ\log^{\epsilon}n)$, is also obtained in space
$O(g_{rl})$ [Christiansen et al. (2019)], which as explained can be proved to be in
$O(\delta\log(n/\delta))$, though there is no efficient way to obtain a
run-length grammar of the optimal size $g_{rl}$.
Variational Minimization of Orbital-dependent Density Functionals
Cheol-Hwan Park${}^{1,2}$
chpark77@mit.edu
Andrea Ferretti${}^{2,3,4}$
Ismaila Dabo${}^{5}$
Nicolas Poilvert${}^{1}$
Nicola Marzari${}^{1,2}$
${}^{1}$Department of Materials Science and Engineering, Massachusetts
Institute of Technology, Cambridge, Massachusetts 02139, USA
${}^{2}$Department of Materials, University of Oxford, Oxford OX1 3PH, UK
${}^{3}$INFM-S3 & Physics Department, University of Modena and Reggio Emilia,
Modena, Italy
${}^{4}$Centro S3, CNR–Istituto Nanoscienze, I-41125 Modena, Italy
${}^{5}$Université Paris-Est, CERMICS, Project Micmac ENPC-INRIA,
6 & 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2, France
(December 3, 2020)
Abstract
Density-functional theory has been one of the most successful approaches ever to
address the electronic-structure problem; nevertheless, since its
implementations are by necessity approximate, they can suffer from a number of fundamental
qualitative shortcomings, often rooted in the remnant electronic self-interaction
present in the approximate energy functionals adopted.
Functionals that strive to correct for such self-interaction errors, such as those
obtained by imposing the Perdew-Zunger self-interaction
correction [Phys. Rev. B 23, 5048 (1981)]
or the generalized Koopmans’ condition
[Phys. Rev. B 82, 115121 (2010)],
become orbital dependent or orbital-density dependent, and provide a very promising avenue to
go beyond density-functional theory,
especially when studying electronic, optical and
dielectric properties, charge-transfer excitations, and molecular dissociations.
Unlike conventional density functionals, these functionals
are not invariant under unitary transformations of occupied electronic
states, which leave the total charge density intact, and this added complexity
has greatly inhibited both their development and their practical applicability.
Here, we first recast the minimization
problem for non-unitary invariant energy functionals
into the language of ensemble density-functional theory [Phys. Rev. Lett. 79, 1337 (1997)],
decoupling the variational search into an inner loop of unitary
transformations that minimize the energy at fixed orbital subspace, and an outer-loop evolution of
the orbitals in the space orthogonal to the occupied manifold.
Then, we show that the potential energy surface in the inner loop is far from
being a convex parabola in the early stages of the minimization,
so that minimization schemes based on this assumption are unstable,
and we present an approach to overcome this difficulty.
The overall formulation allows for a stable, robust, and
efficient variational minimization of non-unitary-invariant functionals, essential to
study complex materials and molecules, and to investigate the bulk thermodynamic limit,
where orbitals converge typically to localized Wannier functions.
In particular, using maximally localized Wannier functions as an initial guess
can greatly reduce the computational cost needed to reach the energy minimum,
while leaving the convergence efficiency unaffected or even improving it.
I. Introduction
Density functional theory (DFT) HohenbergKohn64 ; KohnSham65
has become the basis of much computational materials
science today, thanks to its predictive accuracy in describing
ground-state properties directly from first principles.
While DFT is in principle exact, any practical implementation
requires an approximation to the exact form of the energy functional.
For many years, local or semi-local approximations to the exchange-correlation
energy, such as the local density approximation
(LDA) CeperleyAlder80 ; perdew_zunger
or the generalized gradient approximation PerdewBurkeErnzerhof96
have been
successfully applied to a wealth of different systems marzari_mrs .
Still, these
approximations lead to some dramatic failures, including
the overestimation of the dielectric response, incorrect
chemical reaction barriers for reactions involving strongly localized orbitals kulik:prl ; zhou:prb ,
incorrect energies of dissociating molecular species, and
incorrect excitation energies of charge-transfer complexes,
to name a few cohen_insights_2008 .
Key to these failures is the
self-interaction error of approximate
DFT cohen_insights_2008 ; perdew_zunger ,
where the electrostatic and exchange-correlation contributions
to the effective energy of the entire charge distribution are
not “purified” from this spurious self interaction of an
individual electron with itself.
To address this issue,
Perdew and Zunger (PZ) first introduced an elegant solution to this problem,
where a self-interaction correction (SIC) is added to the total energy calculated from
approximate DFT (e.g. within the LDA CeperleyAlder80 ; perdew_zunger ),
but practical applications have remained
scarce svane:prl ; hughes_lanthanide_2007 ; stengel_spaldin ; capelle ; kuemmel ; Baumeier ; Filippetti ; ruzsinszky_density_2007 ; suraud ; sanvito .
An important property of DFT with local or semi-local exchange-correlation functionals
is the invariance of the total energy with respect to unitary transformation
of the occupied electronic states. However, SIC-DFT does not
have this invariance property, and in fact finding the
optimal unitary transformation
given a set of orbital wavefunctions is crucial to the numerically
consistent minimization of density functionals with
SIC pederson_local-density_1984 ; svane:1996 ; svane:2000 ; goedecker_umrigar ; vydrov_scuseria ; stengel_spaldin ; klupfel .
In this paper, we focus on the variational minimization of
energy functionals that do not satisfy unitary invariance
in order to provide a stable, robust, and efficient
determination of the electronic structure in this
challenging case. In particular,
we adopt the formulation of ensemble DFT eDFT to decouple the
variational minimization into an inner loop of unitary
transformations and an outer loop of evolution for the occupied manifold, and
suggest optimal strategies for the dynamics of unitary transformations.
In the solid-state limit, this dynamics gives rise to a localized Wannier representation
for the electronic states, and we assess their relation with
maximally localized Wannier functions
(MLWFs) marzari_vanderbilt ; souza_marzari_vanderbilt
as obtained in the absence of SIC.
The remainder of the paper is organized as follows.
In Sec. II, DFT with SIC is briefly reviewed, the method
of inner-loop minimization is explained,
and the issue of using MLWFs as an initial guess
for the wavefunctions is discussed.
In Sec. III, we present and discuss the results.
First, we present results on how the total energy varies with
the unitary transformation of the occupied electronic states.
Second, we discuss the stability and efficiency of our method
for inner-loop minimization.
Finally, we show how the calculated total energy converges
both as a function of the outer-loop iterations and
as a function of the CPU time and discuss the optimal
scheme for total energy minimization of energy functionals with SIC.
We then summarize our findings in Sec. IV.
II. Methodology
A. Background
For simplicity, we consider in the following the wavefunctions to be real;
however, the discussion can straightforwardly be extended to
complex wavefunctions.
The total energy of the interacting electron system from Kohn-Sham DFT
within the LDA is given by KohnSham65
$$E_{\rm LDA}[\{\psi_{\sigma i}\}]=-\sum_{\sigma}\sum_{i=1}^{N}\frac{1}{2}\int\psi_{\sigma i}({\bf r})\nabla^{2}\psi_{\sigma i}({\bf r})\,d{\bf r}+\int V_{\rm ext}({\bf r})\rho({\bf r})\,d{\bf r}+\frac{1}{2}\int\int\frac{\rho({\bf r})\rho({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}\,d{\bf r}\,d{\bf r}^{\prime}+\int\epsilon_{\rm xc}^{\rm LDA}(\rho({\bf r}))\,\rho({\bf r})\,d{\bf r}\,,$$
(1)
where $\sigma$ is the spin index, the band index $i$ runs through
the $N$ occupied electronic states, and
$\rho({\bf r})=\sum_{\sigma}\sum_{i=1}^{N}|\psi_{\sigma i}({\bf r})|^{2}$
is the total charge density.
The first term on the right hand side of Eq. (1) is
the kinetic energy, the second term the interaction energy between electrons
and the ion cores, the third term the Hartree interaction energy,
and the last term the exchange-correlation energy.
This energy functional $E_{\rm LDA}[\{\psi_{\sigma i}\}]$
is invariant under the following unitary transformation
$$\psi^{\prime}_{\sigma i}({\bf r})=\sum_{j=1}^{N}\psi_{\sigma j}({\bf r})\,O_{\sigma ji}$$
(2)
for an arbitrary unitary matrix $O_{\sigma}$
since the total charge density $\rho({\bf r})$
and the kinetic energy [Eq. (1)] are
invariant under this transformation.
Given that the wavefunctions are real, we consider $O_{\sigma}$ to be an
orthogonal matrix, i.e., real and satisfying
$O_{\sigma}^{\rm t}O_{\sigma}=I$, where $I$
is the $N\times N$ identity matrix.
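As a quick numerical illustration (a toy discretization, not part of the paper's method), one can verify that the total density is invariant under an orthogonal mixing of the orbitals as in Eq. (2), while the individual orbital densities are not:

```python
import numpy as np

# Toy check: the total density rho(r) = sum_i |psi_i(r)|^2 is invariant
# under psi'_i = sum_j psi_j O_ji with O orthogonal, but each |psi_i|^2
# changes. The random "orbitals" stand in for discretized wavefunctions.

rng = np.random.default_rng(0)
psi = rng.standard_normal((200, 3))      # columns: real orbitals on a grid

theta = 0.7                               # rotate orbitals 0 and 1
O = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

psi_p = psi @ O                           # psi'_{r i} = sum_j psi_{r j} O_{j i}
rho   = (psi ** 2).sum(axis=1)            # total density on the grid
rho_p = (psi_p ** 2).sum(axis=1)

assert np.allclose(rho, rho_p)                             # invariant
assert not np.allclose(psi[:, 0] ** 2, psi_p[:, 0] ** 2)   # orbitals change
```

This is exactly why an orbital-density-dependent term such as $E_{\rm SIC}$ can vary under the transformation even though $E_{\rm LDA}$ does not.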
For some density functionals with SIC perdew_zunger ; dabo:NK ,
the total energy $E_{\rm total}[\{\psi_{\sigma i}\}]$ is given by
$$E_{\rm total}[\{\psi_{\sigma i}\}]=E_{\rm LDA}[\{\psi_{\sigma i}\}]+E_{\rm SIC}[\{\rho_{\sigma i}\}]\,,$$
(3)
where $\rho_{\sigma i}({\bf r})=|\psi_{\sigma i}({\bf r})|^{2}$.
$E_{\rm SIC}[\{\rho_{\sigma i}\}]$, and hence
$E_{\rm total}[\{\psi_{\sigma i}\}]$,
is in general not invariant under orthogonal transformations,
because it depends not only on the total charge
density $\rho({\bf r})$, which is invariant under orthogonal
or unitary transformations, but also on the individual orbital
charge densities $\rho_{\sigma i}({\bf r})$.
This can be seen by considering how the SIC energy varies
under the orthogonal transformation of Eq. (2).
To this end, it is useful to recall that
an orthogonal matrix $O_{\sigma}$ can be written as
$$O_{\sigma}=e^{A_{\sigma}}$$
(4)
where $A_{\sigma}$ is an antisymmetric matrix; if we further consider
the case where the norm of $A_{\sigma}$ is much smaller than unity,
we can approximate
$$O_{\sigma}\approx I+A_{\sigma}\,.$$
(5)
Therefore, the transformed wavefunctions are given by
$$\psi^{\prime}_{\sigma j}({\bf r})\approx\psi_{\sigma j}({\bf r})+\sum_{i=1}^{N}\psi_{\sigma i}({\bf r})A_{\sigma ij}\,,$$
(6)
from which
$$\frac{\partial\rho_{\sigma j}({\bf r})}{\partial A_{\sigma ij}}=2\psi_{\sigma j}({\bf r})\psi_{\sigma i}({\bf r})\,,$$
(7)
and (using the antisymmetry of $A_{\sigma}$)
$$\frac{\partial\rho_{\sigma i}({\bf r})}{\partial A_{\sigma ij}}=-2\psi_{\sigma j}({\bf r})\psi_{\sigma i}({\bf r})\,.$$
(8)
Finally, if we define the SIC potential
$$v^{\rm SIC}_{\sigma i}({\bf r})=\frac{\delta E_{\rm SIC}}{\delta\rho_{\sigma i}({\bf r})}\,,$$
(9)
we obtain the gradient of SIC energy with respect to the
transformation matrix elements
$$G_{\sigma ij}\equiv\frac{\partial E_{\rm SIC}}{\partial A_{\sigma ij}}=2\int\psi_{\sigma i}({\bf r})\left[v^{\rm SIC}_{\sigma j}({\bf r})-v^{\rm SIC}_{\sigma i}({\bf r})\right]\psi_{\sigma j}({\bf r})\,d{\bf r}\,,$$
(10)
which is a result originally obtained by
Pederson et al. pederson_local-density_1984 .
Note that this gradient matrix $G_{\sigma}$ is also antisymmetric,
just like $A_{\sigma}$.
Therefore, at an energy minimum, the wavefunctions satisfy
$$0=\int\psi_{\sigma i}({\bf r})\left[v^{\rm SIC}_{\sigma j}({\bf r})-v^{\rm SIC}_{\sigma i}({\bf r})\right]\psi_{\sigma j}({\bf r})\,d{\bf r}\,,$$
(11)
which was referred to as the “localization condition”
by Pederson et al. pederson_local-density_1984 .
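The structure of Eqs. (10) and (11) can be illustrated on a toy 1D grid (the Gaussian "orbitals" and the linear stand-ins for $v^{\rm SIC}_{\sigma i}$ below are made up; the point is only the form of $G$ and its exact antisymmetry):

```python
import numpy as np

# Toy evaluation of the gradient of Eq. (10) on a uniform 1D grid,
# dropping the spin index for brevity.

x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
psi = np.array([np.exp(-(x - c) ** 2) for c in (-1.0, 0.0, 1.0)])
psi /= np.sqrt((psi ** 2).sum(axis=1, keepdims=True) * dx)   # normalize

v = np.array([c * x for c in (0.0, 1.0, 2.0)])   # stand-ins for v^SIC_i

N = psi.shape[0]
G = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        # G_ij = 2 * integral psi_i (v_j - v_i) psi_j dr
        G[i, j] = 2.0 * np.sum(psi[i] * (v[j] - v[i]) * psi[j]) * dx

assert np.allclose(G, -G.T)   # antisymmetric, like A_sigma
# At a minimum G vanishes: the "localization condition" of Eq. (11).
```

Swapping $i$ and $j$ flips the sign of the integrand, which is why $G$ inherits the antisymmetry of $A_{\sigma}$.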
To date, the most widely used SIC scheme is PZ
SIC perdew_zunger (and its few refinements, e.g.,
Refs. Filippetti ; lundin_eriksson ; davezac ).
In the PZ scheme, the SIC energy is given by
$$E^{\rm PZ}_{\rm SIC}[\{\rho_{\sigma i}\}]=-\sum_{\sigma}\sum_{i=1}^{N}\frac{1}{2}\int\int\frac{\rho_{\sigma i}({\bf r})\rho_{\sigma i}({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}\,d{\bf r}\,d{\bf r}^{\prime}-\sum_{\sigma}\sum_{i=1}^{N}\int\epsilon_{\rm xc}^{\rm LDA}(\rho_{\sigma i}({\bf r}))\,\rho_{\sigma i}({\bf r})\,d{\bf r}\,.$$
(12)
The rationale underlying PZ SIC is both simple and beautiful:
correct the total energy by subtracting
the spurious contributions from the interaction of an electron
with itself, i.e., the Hartree, exchange, and
correlation self-interaction energies. Hence PZ SIC is exact for one-electron
systems, or in the limit where the total charge density can
be decomposed into non-overlapping one-electron charge density contributions.
Recently, an alternative scheme suitable for many-electron systems
based on the generalized Koopmans condition koopmans
was introduced in Ref. dabo:NK .
In brief, one can start from Janak’s theorem janak ,
which states that in DFT
the orbital energy $\epsilon_{\sigma i}(f)$ of a state
with fractional occupation $f_{\sigma i}=f$ is
$$\epsilon_{\sigma i}(f)=\left.\frac{dE_{\sigma i}(f^{\prime})}{df^{\prime}}\right|_{f^{\prime}=f}\,,$$
(13)
where $E_{\sigma i}$ is the Kohn-Sham total energy minimized under the
constraint $f_{\sigma i}=f$. If there were no self-interaction,
the orbital energy of a state $\epsilon_{\sigma i}(f)$ would
not change upon varying its own occupation $f$.
In other words, for a self-interaction-free functional,
$$\epsilon_{\sigma i}(f)={\rm constant}\,\,\,(0\leq f\leq 1)\,.$$
(14)
Alternatively, using Janak’s theorem janak ,
this can be rewritten as
$$\Delta E^{\rm Koopmans}_{\sigma i}(f)\equiv E_{\sigma i}(f_{\sigma i})-E_{\sigma i}(0)=f_{\sigma i}\,\epsilon_{\sigma i}(f)\quad(0\leq f\leq 1)\,,$$
(15)
which is equivalent to the generalized Koopmans theorem dabo:NK ,
telling us that the total energy
varies linearly with the fractional occupation $f_{\sigma i}$.
In conventional DFT, however, Eq. (14)
or Eq. (15) does not hold and instead,
$$\Delta E_{\sigma i}\equiv E_{\sigma i}(f_{\sigma i})-E_{\sigma i}(0)=\int_{0}^{f_{\sigma i}}\epsilon_{\sigma i}(f^{\prime})\,df^{\prime}\,.$$
(16)
From Eqs. (15) and (16),
the non-Koopmans (NK) energy $\Pi_{\sigma i}(f)$, i.e.,
the deviation from linearity of the energy versus occupation,
can be defined as dabo:NK
$$\Pi_{\sigma i}(f)=\Delta E^{\rm Koopmans}_{\sigma i}(f)-\Delta E_{\sigma i}=\int_{0}^{f_{\sigma i}}\left[\epsilon_{\sigma i}(f)-\epsilon_{\sigma i}(f^{\prime})\right]\,df^{\prime}\,.$$
(17)
From this result, the SIC energy term based on the generalized
Koopmans theorem has been defined as
$$E_{\rm SIC}^{\rm NK}[\{\rho_{\sigma i}\}]=\sum_{\sigma}\sum_{i=1}^{N}\Pi_{\sigma i}(f_{\rm ref})\,,$$
(18)
where $f_{\rm ref}$ is a reference occupation factor
(for many-electron systems,
$f_{\rm ref}=\frac{1}{2}$ was shown to be the best choice dabo:NK ).
The total energy versus (fractional) number of electrons relation
calculated by exact DFT should be piecewise linear
with slope discontinuities at integral electron occupations piecewise ;
however, within the LDA, this energy versus occupation relation is
piecewise convex cohen_insights_2008 .
The LDA deviation from the piecewise linearity
is the main reason for the failures of approximate DFTs cohen_insights_2008 .
The new SIC functional [Eq. (18)] is introduced to cure this
pathology and to recover the piecewise linearity of exact DFT dabo:NK .
The (bare) NK SIC discussed above and its screened version
describe some of the most important material properties,
such as ionization energies and
electron affinities, better than PZ SIC.
We refer the reader to Ref. dabo:NK
for the details of NK SIC.
B. Implementation
In order to implement a variational minimization of the total energy
functional,
we adopt the same strategy as the ensemble-DFT approach eDFT ,
decoupling the dynamics of orbital rotations in the occupied subspace
and that of orbital evolution in the manifold orthogonal to the occupied
subspace. In explicit terms, we minimize the SIC energy through
$$\min_{\{\psi^{\prime}_{\sigma i}\}}E_{\rm SIC}[\{\psi^{\prime}_{\sigma i}\}]=\min_{\{\psi_{\sigma i}\}}\left(\min_{\{O_{\sigma}\}}E_{\rm SIC}[\{\psi_{\sigma i}\},\{O_{\sigma}\}]\right)\,,$$
(19)
where $\{\psi^{\prime}_{\sigma i}\}$ and $\{\psi_{\sigma i}\}$ are
connected by an orthogonal transformation $\{O_{\sigma}\}$ [Eq. (2)].
Minimization over the orbital wavefunctions $\{\psi_{\sigma i}\}$
and that over the orthogonal transformations
$\{O_{\sigma}\}$ (inside the round parentheses in
Eq. (19)) correspond to
the outer-loop and inner-loop minimizations,
respectively; i.e.,
given the orbital wavefunctions,
an optimal orthogonal transformation is sought, and then
the orbital wavefunctions are evolved. This process is
repeated until convergence.
Ensemble-DFT minimization has also been discussed
in studying the SIC problem by Stengel and Spaldin stengel_spaldin
and by Klüpfel, Klüpfel, and Jónsson klupfel .
The main focus here is on inner-loop minimization.
The gradient matrix $G_{\sigma ij}=\partial E_{\rm SIC}/\partial A_{\sigma ij}$
in Eq. (10) is
antisymmetric and real; hence, $-i\,G_{\sigma}$ is Hermitian
(and purely imaginary). Therefore, $-i\,G_{\sigma}$ can be
diagonalized as
$$-i\,G_{\sigma}=U_{\sigma}^{\dagger}\,D_{\sigma}\,U_{\sigma}\,,$$
(20)
or,
$$G_{\sigma}=i\,U_{\sigma}^{\dagger}\,D_{\sigma}\,U_{\sigma}\,,$$
(21)
where $U_{\sigma}$ is a unitary matrix and
$$D_{\sigma ij}=\lambda_{\sigma i}\,\delta_{ij}$$
(22)
a real diagonal matrix.
From Eq. (21),
we evolve the matrix $A_{\sigma}$ along the energy gradient with a
step of size $l$
$$\Delta A_{\sigma}=-l\,G_{\sigma}=-i\,l\,U_{\sigma}^{\dagger}\,D_{\sigma}\,U_{\sigma}\,,$$
(23)
calculate the updated orthogonal matrix
$$O_{\sigma}=e^{\Delta A_{\sigma}}=U_{\sigma}^{\dagger}\,e^{-i\,l\,D_{\sigma}}\,U_{\sigma}\,,$$
(24)
and then transform the wavefunctions accordingly.
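A minimal numerical sketch of this update (a toy gradient matrix and an arbitrary step size; note that `numpy.linalg.eigh` places $U$ rather than $U^{\dagger}$ on the left, so the factors appear transposed with respect to Eqs. (21) and (24)):

```python
import numpy as np

# Given a real antisymmetric gradient G, diagonalize the Hermitian matrix
# -iG and build the orthogonal step O = exp(-l G) from its eigenpairs,
# mirroring Eqs. (20)-(24).

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
G = 0.5 * (M - M.T)                       # antisymmetric, like Eq. (10)

lam, U = np.linalg.eigh(-1j * G)          # -iG = U diag(lam) U^dagger
l = 0.2                                   # illustrative step size
O = (U @ np.diag(np.exp(-1j * l * lam)) @ U.conj().T).real

assert np.allclose(O.T @ O, np.eye(4))    # exp of antisymmetric is orthogonal
```

Diagonalizing once gives the matrix exponential for any step $l$ essentially for free, which is convenient for the line search described next.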
Here we use the steepest-descent method for the inner-loop minimization,
but one could employ other methods such as damped dynamics
or conjugate gradients.
In each of the inner-loop steps, we evaluate the SIC energy
with two different sets of wavefunctions: first
by using the given wavefunctions [$E_{\rm SIC}(l=0)$]
and second by using the wavefunctions transformed by $O_{\sigma}$
in Eq. (24) with a trial step $l=l_{\rm trial}$
[$E_{\rm SIC}(l=l_{\rm trial})$].
In addition, the gradient at $l=0$ reads
$$\left.\frac{dE_{\rm SIC}(l)}{dl}\right|_{l=0}=\frac{1}{2}\sum_{\sigma ij}\left[\frac{\partial E_{\rm SIC}}{\partial A_{\sigma ij}}\,\frac{d\Delta A_{\sigma ij}}{dl}\right]_{l=0}=-\frac{1}{2}\sum_{\sigma ij}|G_{\sigma ij}|^{2}\,,$$
(25)
where we have used Eqs. (10) and (23),
and the fact that only half of the matrix elements of $G_{\sigma}$
are independent.
Thus, knowing
$E_{\rm SIC}(l=0)$, $E_{\rm SIC}(l=l_{\rm trial})$, and $dE_{\rm SIC}(l)/dl|_{l=0}$,
we can fit a parabola to $E_{\rm SIC}(l)$,
yielding the optimal step $l=l_{\rm optimal}$
and the energy minimum $E_{\rm SIC}(l=l_{\rm optimal})$.
This completes one inner-loop iteration. We then
use the transformed wavefunctions
to calculate the gradient [Eq. (10)]
and repeat iterations until the SIC energy converges.
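The parabolic fit just described can be sketched as follows (the energy profile and numbers below are made up; only the fitting logic mirrors the text):

```python
# Fit E(l) = a l^2 + b l + c through E(0), the slope dE/dl|_{l=0} of
# Eq. (25), and E(l_trial), then step to the parabola's minimum.

def parabolic_step(E0, slope0, E_trial, l_trial):
    c, b = E0, slope0
    a = (E_trial - c - b * l_trial) / l_trial ** 2
    if a <= 0.0:               # no parabolic minimum along this direction;
        return l_trial         # fall back to the trial step
    return -b / (2.0 * a)

E = lambda l: l * l - l        # toy energy profile with minimum at l = 0.5
slope0 = -1.0                  # dE/dl at l = 0 for the toy profile
l_opt = parabolic_step(E(0.0), slope0, E(0.3), 0.3)

assert abs(l_opt - 0.5) < 1e-9
assert E(l_opt) < E(0.0)       # the step lowers the energy
```

In the actual scheme the returned step would additionally be capped, as described next, so that the quadratic model is only trusted where it is valid.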
For optimal convergence, we set the step size
based on the highest frequency component
of the gradient matrix, i.e.,
$$l=\gamma\,l_{\rm c}\,\,\,\,\,\,\left(l_{\rm c}=\frac{\pi}{\lambda_{\max}}\right)\,,$$
(26)
where $\gamma$ is a constant of order $\sim 0.1$ and
$\lambda_{\rm max}$ the maximum eigenvalue of $D_{\sigma}$,
$$\lambda_{\max}=\,\max_{\sigma i}\,\lambda_{\sigma i}\,.$$
(27)
The critical step $l_{\rm c}$ should be interpreted
as the step at which the transformed wavefunctions
become appreciably different from the original wavefunctions.
Therefore, when we evolve the wavefunctions using a step much
larger than $l_{\rm c}$, fitting $E_{\rm SIC}$ versus
$l$ with a parabola will not be successful.
Imposing the constraint $l=\gamma\,l_{\rm c}$ [Eq. (26)]
when necessary is the key part of our method:
(i) We set the trial step of the first iteration of the inner-loop minimization
according to Eq. (26). (In subsequent iterations, the trial step
$l_{\rm trial}$ is set to twice the optimal step of the previous iteration.)
By setting the initial trial step based on the eigenspectrum
of the gradient matrix, we make
the inner-loop process unaffected by the absolute
magnitude of the SIC energy gradient with respect to
the orthogonal transformation [Eq. (10)].
(ii) When the calculated optimal
step is larger than $\gamma\,l_{\rm c}$, we set $l_{\rm optimal}=\gamma\,l_{\rm c}$.
This procedure has proven instrumental when the $E_{\rm SIC}(l)$ versus $l$
relation cannot be fitted well by a parabola;
in such cases, the calculated $l_{\rm optimal}$ can be much larger
than $l_{\rm c}$.
A similar scaling method based on the highest frequency component
of the gradient matrix was used in finding the
MLWFs marzari_vanderbilt ; Mostofi2008685 .
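The step-size policy of points (i) and (ii) can be summarized in a short sketch; the function and variable names are ours, and `grad_eigvals` stands for the eigenvalues $\lambda_{\sigma i}$ of $D_{\sigma}$:

```python
import numpy as np

GAMMA = 0.1  # constant of order 0.1, as in Eq. (26)

def step_policy(grad_eigvals, l_optimal_fit, prev_optimal=None):
    """Return (l_trial, l_optimal): the trial step is gamma*l_c on the first
    inner-loop iteration and twice the previous optimal step afterwards;
    the fitted optimal step is capped at gamma*l_c [Eqs. (26) and (27)]."""
    lam_max = np.max(np.abs(grad_eigvals))
    l_c = np.pi / lam_max                          # critical step, Eq. (26)
    l_trial = GAMMA * l_c if prev_optimal is None else 2.0 * prev_optimal
    l_optimal = min(l_optimal_fit, GAMMA * l_c)    # cap of point (ii)
    return l_trial, l_optimal
```

Because $l_{\rm c}$ is recomputed from the current gradient spectrum, the policy is insensitive to the absolute magnitude of the SIC energy gradient, as discussed above.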
II.3 MLWFs as an initial guess for the wavefunctions
SIC tends to localize the orbital wavefunctions [note, e.g., that
the Hartree
term in Eq. (12) will be more negative if the state becomes
more localized]. Therefore, it is natural to consider using some localized
basis functions as an initial guess for the wavefunctions of density
functionals with SIC.
To this end, employing
MLWFs marzari_vanderbilt ; souza_marzari_vanderbilt
represents a very promising initial-guess strategy.
Although the possibility of using MLWFs in this regard was
recently suggested tsemekhman ,
no literature is available on the merit of that scheme.
We address this issue in conjunction with the inner-loop minimization
method discussed in the previous subsection.
II.4 Computational details
We performed DFT calculations with norm-conserving
pseudopotentials TroullierMartins91
in the LDA perdew_zunger using
the Car-Parrinello (CP) code of the Quantum ESPRESSO
distribution baroni:2006_Espresso
with the inner-loop minimization described in
the previous subsections, and a conventional damped dynamics algorithm
for the outer-loop minimization.
We have performed calculations on both PZ SIC perdew_zunger
and NK SIC dabo:NK .
Except when investigating the effect of using
MLWFs as an initial guess for the wavefunctions, we have
started the calculations from LDA wavefunctions with arbitrary
phases (they are not LDA eigenstates).
We performed calculations on a rather large molecule,
C${}_{20}$ fullerene.
A supercell geometry was used with the minimum distance
between the carbon atoms in neighboring supercells larger
than 6.7 Å.
The Coulomb interaction is truncated to prevent
spurious interaction between periodic replicas in different
supercells dabo:Coulomb ; dabo:arXiv .
III Results and Discussion
In order to find an optimal strategy for the minimization
of SIC DFT, it is important to know how the energy varies
with orthogonal transformations.
We first show the energy variation along the direction
in the orthogonal transformation space parallel to the gradient
[Eq. (10)] of the energy, i.e.,
$E_{\rm SIC}(l)$ versus $l$,
where $l$ is a step representing the amount of
orthogonal rotation as defined in Eq. (23).
Figure 1(a) shows the results for PZ SIC
at a few different stages during the inner-loop minimization.
We see that $E_{\rm SIC}^{\rm PZ}(l)$ initially
varies slowly with $l$, then, in the middle
of the inner-loop minimization, varies
fast, and finally, toward the end of the minimization,
varies slowly again. There is no single length scale
of $l$ that consistently describes the variation
of $E_{\rm SIC}(l)$ during the entire inner-loop
minimization. The speed of the energy variation
at different stages of the inner-loop minimization with
respect to $l$ near $l=0$ can however be very well explained
by $\lambda_{\rm max}$ [Eq. (27)],
which is the fastest frequency component of the gradient matrix
[Eq. (10)], as shown in Fig. 1(b).
We can draw similar conclusions for NK SIC as
shown in Figs. 1(c) and 1(d).
However, there are a few points that are worth mentioning.
First, the magnitude of NK SIC energy is several times
smaller than that of PZ SIC energy [Figs. 1(a) and 1(c)].
Second, $\lambda_{\rm max}$, or the main driving force for
orthogonal transformation near $l=0$, for NK SIC is also
much smaller than that for PZ SIC, although eventually both
of them converge to zero at energy minima.
Because of these differences between different SIC functionals,
it is clear that determining the trial step $l_{\rm trial}$
based on $\lambda_{\rm max}$ will be very useful, even more so because
$\lambda_{\rm max}$ is also affected by the arbitrary initial phases
of the wavefunctions,
as will be discussed later (Fig. 6).
Based on the previous discussion, we now show,
in Fig. 2, $E_{\rm SIC}(l)$
as a function of the scaled step
$$l_{\rm scaled}\,\equiv\,{l}\,/\,{l_{\rm c}}\,,$$
(28)
i.e.,
$l$ in units of $l_{\rm c}$.
For both PZ SIC and NK SIC, the energy variation
length scale
near $l=0$ through the entire process of the inner-loop
minimization is $\sim 0.5$ in units of $l_{\rm scaled}$.
The results confirm
that indeed a natural length scale for $l$ that should be
used in the inner-loop minimization is the $l_{\rm c}$
defined in Eq. (26).
Note also that in both PZ SIC and
NK SIC, at the initial stages of the inner-loop iterations,
the energy profile cannot be well fitted by a parabola.
This trend is dramatic especially for NK SIC,
where the $E_{\rm SIC}(l)$ versus $l$ (or $l_{\rm scaled}$)
relation is concave, not convex, at $l=0$.
This can be best understood using a simple system:
a carbon atom, which has, in our pseudopotential
calculations, two orbitals
($2s$ and $2p$), i.e., it is a two-level system.
The PZ SIC energy $E_{\rm SIC}^{\rm PZ}(l)$ versus
$l_{\rm scaled}$ is shown in Fig. 3.
The profile is sinusoidal with a period of 0.5,
rather than parabolic, throughout the entire minimization.
Notably, the period 0.5 in units of $l_{\rm scaled}$
is similar to the previously discussed length scale
for C${}_{20}$ fullerene.
The shape of the curve does not change as we proceed
in the inner-loop minimization;
the only variation is that the minimum of
the curve moves toward the origin ($l_{\rm scaled}=0$).
We can understand this behavior as follows. The gradient
matrix in Eq. (10) for a carbon atom
is of the form
$$G=\left(\begin{array}[]{cc}0&c\\
-c&0\end{array}\right)=i\,c\,\sigma_{y}\,,$$
(29)
where $c$ is a real constant
and $\sigma_{y}$ is the Pauli matrix.
(We drop the spin index for simplicity.)
Assuming, without loss of generality, that $c>0$,
the maximum eigenvalue of $G$ is
$$\lambda_{\rm max}=c$$
(30)
and the orthogonal transformation matrix
[Eqs. (23) and (24)]
is given by
$$O=e^{-lG}=\cos\,(lc)\,I\,-\,i\,\sin\,(lc)\,\sigma_{y}\,,$$
(31)
or, using $l_{\rm scaled}$ [Eq. (28)],
$$O=\cos\,(\pi\,l_{\rm scaled})\,I\,-\,i\,\sin\,(\pi\,l_{\rm scaled})\,\sigma_{y}\,.$$
(32)
In particular, when $l_{\rm scaled}=0.5$, $O=-i\,\sigma_{y}$,
and, under this orthogonal transformation $O$, $\psi_{1}^{\prime}=-\psi_{2}$ and $\psi_{2}^{\prime}=\psi_{1}$,
i.e., $O$ just exchanges the two orbital wavefunctions
(plus a trivial sign change).
When the original wavefunctions $\psi_{1}$ and $\psi_{2}$
correspond to the maximum SIC energy configuration,
the new set of wavefunctions $\psi_{1}^{\prime}$ and $\psi_{2}^{\prime}$
will correspond also to the SIC energy maximum.
Therefore, the period of $E_{\rm SIC}(l)$ versus $l_{\rm scaled}$
will be 0.5 in agreement with our calculation [Fig. 3].
(The shape of the curve is not exactly sinusoidal
and varies slightly with the kind of SIC used.)
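The period of 0.5 can be checked directly by exponentiating the gradient matrix of Eq. (29); the value $c=0.7$ below is an arbitrary illustration, not a calculated quantity:

```python
import numpy as np
from scipy.linalg import expm

c = 0.7                                  # arbitrary gradient magnitude
G = np.array([[0.0, c], [-c, 0.0]])      # Eq. (29): G = i*c*sigma_y
lam_max = c                              # Eq. (30)
l_c = np.pi / lam_max                    # critical step, Eq. (26)

# Orthogonal transformation at l_scaled = 0.5, i.e. l = 0.5*l_c [Eq. (31)]
O_half = expm(-0.5 * l_c * G)
# O_half equals [[0, -1], [1, 0]] = -i*sigma_y: it maps (psi1, psi2) to
# (-psi2, psi1), exchanging the two orbitals up to a sign, so E_SIC(l)
# repeats with period 0.5 in l_scaled, independently of the value of c.
```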
For this example,
which part of the sinusoidal-like curve
one starts the inner-loop minimization from
depends on the initial orbital wavefunctions
(and an arbitrary rotation of them).
If we start
from the LDA eigenstates, the SIC energy is at its maximum
(roughly speaking, the LDA eigenstates are the most delocalized and the
SIC energy is highest) and the inner-loop minimization starts from
the top of the sinusoidal-like curve, and hence
(i) the driving force for the orthogonal transformation is extremely
weak (zero at the maximum) and (ii) $E_{\rm SIC}(l)$ versus
$l_{\rm scaled}$
is concave. For these reasons, if we do not properly scale
$l$, or if we do not constrain $l$ during the inner-loop minimization
process, a minimization based on the assumption that the
energy profile is convex parabolic may become unstable or extremely slow.
This discussion is also relevant to other systems,
as we have seen in the case of C${}_{20}$ fullerene.
Figure 4(a) compares the performance of the inner-loop
minimization for the case of PZ SIC. In one case (dashed or blue curve),
we take the
optimal step size $l_{\rm optimal}$ obtained by fitting
$E_{\rm SIC}^{\rm PZ}(l)$ versus $l$ with a parabola
through three calculated quantities: $E_{\rm SIC}^{\rm PZ}(l=0)$,
$E_{\rm SIC}^{\rm PZ}(l=l_{\rm trial})$, and
$dE_{\rm SIC}^{\rm PZ}(l)/dl|_{l=0}$.
In the other case (solid or red curve),
if the calculated $l_{\rm optimal}$ is larger than $\gamma\,l_{\rm c}$
(with $\gamma=0.1$) [Eq. (26)],
we set $l_{\rm optimal}=\gamma\,l_{\rm c}$.
Evidently, using this constraint based on $l_{\rm c}$,
or $\lambda_{\rm max}$, makes the inner-loop minimization process
more stable and faster.
(In both cases, the trial step of the first iteration was set
to $l_{\rm trial}=\gamma\,l_{\rm c}$.)
The difference between using and not using
this $\lambda_{\rm max}$ constraint is dramatic for NK SIC
[Fig. 4(b)].
This again is due to (i) the small gradient of the SIC energy
with respect to the variation of the orthogonal transformation,
and (ii) the non-convex-parabolic dependence of
$E_{\rm SIC}(l)$ on $l$.
Until now, our focus was on the inner-loop minimization. Now
we look at the entire minimization procedure including the outer loop.
In order to find an optimal minimization strategy, we have performed
our calculations by restricting the number of inner-loop minimization
iterations per each outer-loop iteration to be less than or
equal to $n_{\rm max}$.
(However, not every outer-loop iteration will require $n_{\rm max}$
inner-loop iterations because the SIC energy may be converged
earlier during inner-loop minimization. We exit the inner loop
if the energy difference between consecutive iterations is lower
than the energy convergence threshold of $10^{-5}$ Ry.)
The case without inner-loop minimization is denoted by $n_{\rm max}=0$.
Figure 5(a) shows the convergence of the PZ SIC energy
for various
choices of $n_{\rm max}$. In all cases where the inner-loop minimization
routine is used (i.e., $n_{\rm max}>0$),
the total number of outer-loop iterations necessary
to achieve the same level of convergence is much smaller than
that when no inner-loop minimization is used.
This, however, does not necessarily mean that the total computation
time is reduced. In Fig. 5(b), we show the CPU
time dependence of the SIC energy (the results include both the
inner-loop and outer-loop minimization iterations).
Surprisingly, in all cases other than $n_{\rm max}=1$,
inner-loop minimization actually slows down the computation
for PZ SIC. When we set $n_{\rm max}=1$, i.e., when
the number of inner-loop iterations per outer-loop iteration
is restricted to 1, we find roughly a twofold improvement
in CPU time over the case without inner-loop minimization.
The case of NK SIC is very different.
Figures 5(c) and 5(d) show that
inner-loop minimization reduces not only the required
number of outer-loop iterations but also the CPU time significantly.
In particular, the CPU time is reduced by a factor of $\sim 20$ when we perform
inner-loop minimization, and is rather insensitive to $n_{\rm max}$.
These results on PZ SIC and NK SIC confirm that the presented
method works regardless of the absolute magnitude of
the SIC energy gradient with respect to the orthogonal transformation
[Eq. (10)].
The method can be applied to density functionals with other kinds
of SIC. For example, SIC with
screening, for which the
total energy is given by
$$E_{\rm total}=E_{\rm LDA}+\alpha\,E_{\rm SIC}\,\,\,(\alpha<1)\,,$$
(33)
will have a SIC energy gradient smaller in magnitude than
that of the unscreened version of SIC ($\alpha=1$), making our method
even more useful.
Note that the relative CPU times of the different
calculations shown in Fig. 5 at different stages
of the minimization are affected only by the ratio of the
CPU time for one inner-loop iteration to that for one outer-loop iteration.
Therefore, the relative CPU time is rather insensitive to the
complexity of the system studied, and in that sense is meaningful.
(The absolute CPU time is also strongly affected by the complexity of
the system, the performance and number of processors, etc.)
In our case, one inner-loop iteration for PZ SIC
takes 3.6 times as long as one outer-loop iteration
and one inner-loop iteration for NK SIC takes
2.0 times as long as one outer-loop iteration.
Finally, we discuss how useful it is to use
MLWFs marzari_vanderbilt ; souza_marzari_vanderbilt
as an initial guess for
the wavefunctions klupfel .
The following description is relevant for both
PZ SIC [Figs. 6(a) and 6(b)] and
NK SIC [Figs. 6(c) and 6(d)]
and whether or not the inner-loop minimization is employed.
Figure 6 shows that when MLWFs are used,
the initial total energy is lower than when LDA wavefunctions with
arbitrary phases are used. On the other hand, the slope of
$\log[({\rm current\,\,\,total\,\,\,energy})-({\rm converged\,\,\,total\,\,\,energy})]$
versus either the number of outer-loop iterations
or the relative CPU time is not very different in the two cases.
Therefore, it is advantageous to use MLWFs as an initial guess
for the wavefunctions; however, the lower the energy convergence
threshold the smaller the relative advantage.
IV Conclusions
In summary, we have developed a variational, stable and
efficient approach for the total-energy
minimization of unitary-variant functionals,
as they appear in self-interaction corrected formulations,
with a focus
on properly minimizing the energy by unitary transformations
of the occupied manifold.
In particular, we have shown that the energy changes along the
gradient direction can be very different from being convex parabolic,
and suggested the use of the maximum frequency component of
the gradient matrix in determining optimal rotations for
the inner-loop minimization.
When maximally localized Wannier functions are used as an initial
guess for the wavefunctions,
the initial energy decreases significantly from that
corresponding to wavefunctions with arbitrary phases; however, the
logarithmic energy convergence rate remains similar in the two cases.
We expect that the results will be useful for investigating
the physical properties of complex materials and big molecules
with self-interaction corrected density functional theory.
We thank Peter Klüpfel and Simon Klüpfel for fruitful discussions.
CHP acknowledges financial support from
Intel Corporation.
References
(1)
P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
(2)
W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
(3)
D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. 45, 566 (1980).
(4)
J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
(5)
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865
(1996).
(6)
N. Marzari, Bull. Mater. Res. Soc. 31, 681 (2006).
(7)
H. J. Kulik, M. Cococcioni, D. A. Scherlis, and N. Marzari, Phys. Rev. Lett.
97, 103001 (2006).
(8)
F. Zhou, M. Cococcioni, C. A. Marianetti, D. Morgan, and G. Ceder, Phys. Rev. B
70, 235121 (2004).
(9)
A. J. Cohen, P. Mori-Sanchez, and W. Yang, Science 321, 792 (2008).
(10)
A. Svane and O. Gunnarsson, Phys. Rev. Lett. 65, 1148 (1990).
(11)
I. D. Hughes, M. Dane, A. Ernst, W. Hergert, M. Luders, J. Poulter, J. B.
Staunton, A. Svane, Z. Szotek, and W. M. Temmerman, Nature 446, 650
(2007).
(12)
M. Stengel and N. A. Spaldin, Phys. Rev. B 77, 155106 (2008).
(13)
D. Vieira and K. Capelle, J. Chem. Theor. Comput. 6, 3319 (2010).
(14)
T. Koerzdoerfer, M. Mundt, and S. Kuemmel, Phys. Rev. Lett. 100, 133004
(2008).
(15)
B. Baumeier, P. Kruger, and J. Pollmann, Phys. Rev. B 73, 195205
(2006).
(16)
A. Filippetti and N. A. Spaldin, Phys. Rev. B 67, 125109 (2003).
(17)
A. Ruzsinszky, J. P. Perdew, G. I. Csonka, O. A. Vydrov, and G. E. Scuseria, J.
Chem. Phys. 126, 104102 (2007).
(18)
C. A. Ullrich, P. G. Reinhard, and E. Suraud, Phys. Rev. A 62, 053202
(2000).
(19)
C. Toher and S. Sanvito, Phys. Rev. Lett. 99, 056801 (2007).
(20)
M. R. Pederson, R. A. Heaton, and C. C. Lin, J. Chem. Phys. 80, 1972
(1984).
(21)
A. Svane, Phys. Rev. B 53, 4275 (1996).
(22)
A. Svane, W. M. Temmerman, Z. Szotek, J. Laegsgaard, and H. Winter, Int. J.
Quant. Chem. 77, 799 (2000).
(23)
S. Goedecker and C. J. Umrigar, Phys. Rev. A 55, 1765 (1997).
(24)
O. A. Vydrov and G. E. Scuseria, J. Chem. Phys. 121, 8187 (2004).
(25)
P. Klüpfel, S. Klüpfel, and H. Jónsson,
http://vefir.hi.is/para10/extab/para10-paper-150.pdf.
(26)
N. Marzari, D. Vanderbilt, and M. C. Payne, Phys. Rev. Lett. 79, 1337
(1997).
(27)
N. Marzari and D. Vanderbilt, Phys. Rev. B 56, 12847 (1997).
(28)
I. Souza, N. Marzari, and D. Vanderbilt, Phys. Rev. B 65, 035109
(2001).
(29)
I. Dabo, A. Ferretti, N. Poilvert, Y. Li, N. Marzari, and M. Cococcioni, Phys.
Rev. B 82, 115121 (2010).
(30)
U. Lundin and O. Eriksson, Int. J. Quantum Chem. 81, 247 (2001).
(31)
M. d'Avezac, M. Calandra, and F. Mauri, Phys. Rev. B 71, 205210
(2005).
(32)
T. Koopmans, Physica 1, 104 (1934).
(33)
J. F. Janak, Phys. Rev. B 18, 7165 (1978).
(34)
J. P. Perdew, R. G. Parr, and J. L. Balduz, Jr., Phys. Rev. Lett. 49,
1691 (1982).
(35)
A. A. Mostofi, J. R. Yates, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari,
Comput. Phys. Comm. 178, 685 (2008).
(36)
K. Tsemekhman, E. Bylaska, and H. Jónsson,
http://users.physik.fu-berlin.de/~ag-gross/oep-workshop/Talks/OEP05Tsemekhman_Talk.pdf.
(37)
N. Troullier and J. L. Martins, Phys. Rev. B 43, 1993 (1991).
(38)
P. Giannozzi et al., J. Phys.: Cond. Mat. 21, 395502 (2009).
(39)
I. Dabo, B. Kozinsky, N. E. Singh-Miller, and N. Marzari, Phys. Rev. B 77, 115139 (2008).
(40)
Y. Li and I. Dabo, arXiv:1107.2047, submitted. |
Coupled quantum wires
D. Makogon, N. de Jeu, and C. Morais Smith
Institute for Theoretical Physics, University of
Utrecht, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands.
(December 7, 2020)
Abstract
We study a set of crossed 1D systems, which are coupled with each
other via tunnelling at the crossings. We begin with the simplest
case with no electron-electron interactions and find that besides
the expected level splitting, bound states can emerge. Next, we
include an external potential and electron-electron interactions,
which are treated within the Hartree approximation. Then, we write
down a formal general solution to the problem, giving additional
details for the case of a symmetric external potential.
Concentrating on the case of a single crossing, we were able to
explain recent experiments on crossed metallic and semiconducting
nanotubes [J. W. Janssen, S. G. Lemay, L. P. Kouwenhoven,
and C. Dekker, Phys. Rev. B 65, 115423 (2002)], which
showed the presence of localized states in the region of crossing.
PACS numbers: 73.21.Hb, 73.22.-f, 73.23.Hk, 73.43.Jn
I Introduction
Physics in 1D systems manifests a number of peculiar phenomena,
such as spin-charge separation, conductance
quantization,BLandauer and anomalous low-temperature
behavior in the presence of a backscattering impurity.Kane1
It is reasonable to expect that more complex structures
composed of crossed 1D systems, such as crossings and arrays,
should exhibit some particular features as well. Although the
transport properties of crossed 1D systems and their arrays have
been thoroughly studied both theoreticallyKomnik and
experimentallyGao ; Fuhrer ; Postma , the electronic structure
of these systems is much less understood and the interpretation of
existing experimental results is challenging. Recent scanning
tunnelling microscopy (STM) experiments on a metallic carbon
nanotube crossed with a semiconducting oneJanssen have
shown the existence of localized states at
the crossing which are not due to disorder. However, these localized states do not appear
systematically in all experiments, i.e., the effect is highly
dependent on the nature of the carbon nanotubes (metallic or
semiconducting), of the barrier formed at the crossing, etc.
Aiming at clarifying this problem, we present in this paper a
detailed study of tunnelling effects between crossed 1D systems in
the presence of potential barriers for massive quasiparticle
excitations. Because effects of electron-electron interactions can
be reasonably incorporated in a random phase approximation
(RPA),DzLarkin ; DasSarma we study a simpler model,
accounting for electron-electron interactions only within the Hartree
approximation. The outline of this paper is as follows: in
section II we introduce the model that we are going to use to
describe the array of crossed nanowires. In section III we
consider the particular case of free electrons and write down
explicit solutions for the cases of one and four crossings. Section
IV contains a formal general solution, with additional details given
for the case of a symmetric external potential. We demonstrate the
effect of tunnelling on the electronic structure of single
crossings in Section V and qualitatively discuss different
possibilities depending on the external potential. Section VI
contains quantitative analysis and comparison with available
experimental data of the electronic structure of single crossing
for different values of parameters. Our conclusions and open
questions are presented in Section VII.
II The Model
We consider a system composed of two layers of crossed quantum
wires with interlayer coupling. The upper layer has a set of
parallel horizontal wires described by fermionic fields
$\psi_{j}(x)$, whereas the lower layer contains only vertical
parallel wires described by the fields $\varphi_{i}(y)$. The wires
cross at the points $(x_{i},y_{j})$, with $i,j\in Z$, and the distance
between
layers is $d$, with $\min(|x_{i}-x_{i+1}|,|y_{j}-y_{j+1}|)\gg d$; see Fig. 1.
The partition function of the system reads
$$Z=\int d[\psi_{j}]d[\psi^{*}_{j}]d[\varphi_{i}]d[\varphi_{i}^{*}]e^{-S/\hbar},$$
(1)
with the total action given by
$$S=S_{0}+S_{\rm sct}+S_{\rm int}.$$
(2)
The first term accounts for the kinetic energy and external
potential $V^{\rm ext}_{j}(x)$, which can be different in each wire and may arise, e.g.,
due to a lattice deformation, when one wire is built on top of
another,
$$S_{0}=\sum_{j}\int_{0}^{\hbar\beta}d\tau\int dx\,\psi_{j}^{*}(x,\tau)G^{-1}_{jx}\psi_{j}(x,\tau)+\sum_{i}\int_{0}^{\hbar\beta}d\tau\int dy\,\varphi_{i}^{*}(y,\tau)G^{-1}_{iy}\varphi_{i}(y,\tau),$$
(3)
where
$$G^{-1}_{jx}=\hbar\frac{\partial}{\partial\tau}-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V^{\rm ext}_{j}(x)-\mu_{x},\qquad G^{-1}_{iy}=\hbar\frac{\partial}{\partial\tau}-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dy^{2}}+V^{\rm ext}_{i}(y)-\mu_{y}.$$
(4)
Here, $\mu_{x,y}$ denotes the chemical potential in the upper
($\mu_{x}$) or lower ($\mu_{y}$) layer.
The second term of Eq. (2) describes scattering at
the crossings $(x_{i},y_{j})$,
$$S_{\rm sct}=\sum_{ij}\int_{0}^{\hbar\beta}d\tau H_{ij},$$
(5)
where
$$H_{ij}=\left[\psi_{j}^{*}(x_{i},\tau)\quad\varphi^{*}_{i}(y_{j},\tau)\right]\left(\begin{array}[]{cc}U_{ij}&T_{ij}\\
T^{*}_{ij}&\tilde{U}_{ij}\end{array}\right)\left[\begin{array}[]{c}\psi_{j}(x_{i},\tau)\\
\varphi_{i}(y_{j},\tau)\end{array}\right].$$
Notice that the matrix element $U_{ij}$ describing intra-layer
contact scattering can, in principle, be different from
$\tilde{U}_{ij}$, but both must be real. On the other hand, the
contact tunnelling (inter-layer) coefficient between the two
crossed wires $T_{ij}$ can be a complex number, since the only
constraint is that the matrix above must be Hermitian.
The third term in Eq. (2) accounts for electron-electron
interactions,
$$S_{\rm int}=\frac{1}{2}\sum_{j}\int_{0}^{\hbar\beta}d\tau\int_{0}^{\hbar\beta}d\tau^{\prime}\int dx\int dx^{\prime}\,\psi_{j}^{*}(x,\tau)\psi_{j}^{*}(x^{\prime},\tau^{\prime})V^{\rm e-e}(x-x^{\prime})\psi_{j}(x,\tau)\psi_{j}(x^{\prime},\tau^{\prime})+\frac{1}{2}\sum_{i}\int_{0}^{\hbar\beta}d\tau\int_{0}^{\hbar\beta}d\tau^{\prime}\int dy\int dy^{\prime}\,\varphi_{i}^{*}(y,\tau)\varphi_{i}^{*}(y^{\prime},\tau^{\prime})V^{\rm e-e}(y-y^{\prime})\varphi_{i}(y,\tau)\varphi_{i}(y^{\prime},\tau^{\prime}).$$
(6)
III Free electrons case
We start by considering a very simplified case, namely, free
electrons (no electron-electron interaction, $V^{\rm e-e}(x)=0$
and no external potential, $V^{\rm ext}_{j}(x)=0$). Moreover, we
assume $\tilde{U}_{ji}=U_{ji}=0$ and put $\mu_{x}=\mu_{y}=\mu$. The
interlayer tunnelling is assumed to be equal at each crossing
point $T_{ij}=T$ and to have a real and positive value. In such a
case, the partition function consists of only Gaussian integrals.
We can then integrate out the quantum fluctuations, which reduces
the problem to just solving the equations of motion. Considering a
real time evolution and performing a Fourier transformation in the
time variable, we are left with the following equations of motion
for the fields:
$$\left(-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}-E\right)\psi_{j}(x)+T\sum_{l}\delta(x-x_{l})\varphi_{l}(y_{j})=0,$$
$$\left(-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dy^{2}}-E\right)\varphi_{i}(y)+T\sum_{l}\delta(y-y_{l})\psi_{l}(x_{i})=0,$$
(7)
where $m$ denotes the electron mass and $E$ is the energy of an
electron state. Firstly, we evaluate the solutions for the case of
free electrons without tunnelling and then we investigate how the
addition of tunnelling changes the results. The solution for the
free electron case consists of symmetric and antisymmetric
normalized modes,
$$\psi_{s}(x)=\frac{1}{\sqrt{L}}\cos(k_{s}x),\qquad\psi_{a}(x)=\frac{1}{\sqrt{L}}\sin(k_{a}x),$$
(8)
respectively. The corresponding momenta $k_{s}$ and $k_{a}$ depend on
the boundary conditions: with open boundary conditions $k_{s}=\pi(2n+1)/2L$ and $k_{a}=\pi n/L$, and with periodic boundary conditions
$k_{s}=k_{a}=\pi n/L$, for a wire of length $2L$ and integer $n$. To
find the solution for the case with tunnelling $T\neq 0$, we have
to solve Eqs. (7). These equations are linear;
therefore, the solution consists of homogeneous and
inhomogeneous parts,
$$\psi_{j}(x)=\psi_{j}^{\rm hom}(x)+\psi_{j}^{\rm inh}(x),$$
(9)
which are
$$\psi_{j}^{\rm hom}(x)=A_{j}e^{ikx}+B_{j}e^{-ikx},$$
(10)
$$\psi_{j}^{\rm inh}(x)=\frac{Tm}{\hbar^{2}k}\sum_{l}\varphi_{l}(y_{j})\sin(k|x-x_{l}|).$$
(11)
Imposing open boundary conditions, $\psi_{j}(\pm L)=0$, we find
$$\displaystyle A_{j}e^{ikL}+B_{j}e^{-ikL}+\psi_{j}^{\rm inh}(L)$$
$$\displaystyle=$$
$$\displaystyle 0,$$
$$\displaystyle A_{j}e^{-ikL}+B_{j}e^{ikL}+\psi_{j}^{\rm inh}(-L)$$
$$\displaystyle=$$
$$\displaystyle 0.$$
(12)
Writing the above equations in a matrix notation and inverting
yields
$$\left(\begin{array}[]{c}A_{j}\\
B_{j}\end{array}\right)=\frac{-1}{2i\sin(2kL)}\left(\begin{array}[]{cc}e^{ikL}&-e^{-ikL}\\
-e^{-ikL}&e^{ikL}\end{array}\right)\left(\begin{array}[]{c}\psi_{j}^{\rm inh}(L)\\
\psi_{j}^{\rm inh}(-L)\end{array}\right).$$
Substituting explicitly the expression for $\psi_{j}^{\rm inh}(\pm L)$ given by Eq. (11) and using the mathematical identity
$$\left(e^{ikx}\quad e^{-ikx}\right)\left(\begin{array}[]{cc}e^{ikL}&-e^{-ikL}\\
-e^{-ikL}&e^{ikL}\end{array}\right)\left(\begin{array}[]{c}\sin(kL-kx_{l})\\
\sin(kL+kx_{l})\end{array}\right)=\cos\left(2kL\right)\cos(kx-kx_{l})-\cos(kx+kx_{l}),$$
leads, after simplifications, to the solution
$$\psi_{j}(x)=-T\sum_{l}G(x,x_{l})\varphi_{l}(y_{j}),\qquad\varphi_{i}(y)=-T\sum_{l}G(y,y_{l})\psi_{l}(x_{i}),$$
(13)
where, for open boundary conditions,
$$G_{o}(x_{i},x_{j},E)\equiv\frac{m}{\hbar^{2}k\sin(2kL)}\left[\cos(kx_{i}+kx_{j})-\cos(2kL-k|x_{i}-x_{j}|)\right],$$
(14)
and the energy $E$ is related to $k$ as $E=\hbar^{2}k^{2}/2m$.
Similar calculations can be performed for the case of periodic
boundary conditions, yielding Eq. (13) with
$$G_{p}(x_{i},x_{j},E)\equiv\frac{m}{\hbar^{2}k\sin(kL)}\cos(kL-k|x_{i}-x_{j}|).$$
(15)
III.1 Two crossed wires
In particular, for the simplest case of a single horizontal and a
single vertical wire, with just one crossing at $(x_{0},y_{0})$, the
solution is:
$$\psi(x)=-TG(x,x_{0},E)\varphi(y_{0}),\qquad\varphi(y)=-TG(y,y_{0},E)\psi(x_{0}).$$
(16)
By substituting $(x,y)=(x_{0},y_{0})$, we find that at the crossing
point
$$\psi(x_{0})=-TG(x_{0},x_{0},E)\varphi(y_{0}),\qquad\varphi(y_{0})=-TG(y_{0},y_{0},E)\psi(x_{0}).$$
(17)
The consistency condition requires that
$$\left|\begin{array}[]{cc}1&TG(x_{0},x_{0},E)\\
TG(y_{0},y_{0},E)&1\end{array}\right|=0,$$
(18)
or
$$T^{2}G(x_{0},x_{0},E)G(y_{0},y_{0},E)=1.$$
(19)
The solution is even simpler if $(x_{0},y_{0})=(0,0)$. Then, for open
boundary conditions, the symmetric modes are
$$\psi(x)=\frac{\varphi(0)Tm}{\hbar^{2}k\cos(kL)}\sin(kL-k|x|),\qquad\varphi(y)=\frac{\psi(0)Tm}{\hbar^{2}k\cos(kL)}\sin(kL-k|y|),$$
and the antisymmetric modes are left unchanged in comparison with
Eqs. (8). Also,
$$G(0,0,E)=\frac{m\tan(kL)}{\hbar^{2}k},$$
(20)
and the secular equation
(19) becomes
$$\left[\frac{Tm\tan(kL)}{\hbar^{2}k}\right]^{2}=1,$$
(21)
which splits into two transcendental equations
$$k^{+}=-\frac{Tm}{\hbar^{2}}\tan(k^{+}L),\qquad k^{-}=\frac{Tm}{\hbar^{2}}\tan(k^{-}L).$$
The first equation describes the shifted energies of the
scattering states, whereas the second has an additional bound-state
solution with $E<0$ if $T>T_{0}=\hbar^{2}/mL$. The appearance
of the bound state is exclusively due to the presence of
tunnelling. For an electron in a wire of length $2L=10^{3}$ nm the
corresponding value is $T_{0}=7.62\times 10^{-5}$ eV$\cdot$nm and
for quasiparticles the value of $T_{0}$ is typically larger,
inversely proportional to their effective mass. Defining then
$\kappa\equiv-ik^{-}$ and taking the thermodynamic limit
$L\rightarrow\infty$, we find $|\kappa|=Tm/\hbar^{2}$ with the
corresponding bound state energy
$$E=-\frac{T^{2}m}{2\hbar^{2}},$$
(22)
and the wave function given by
$$\psi(x)=\frac{\sqrt{|\kappa|}}{2}e^{-|\kappa x|}.$$
(23)
The factor $1/2$ instead of $1/\sqrt{2}$ comes from the fact that
now an electron can tunnel into the other wire, where its
wavefunction satisfies $\varphi(0)=-\psi(0)$.
Eqs. (22) and (23) hold for both
open and periodic boundary conditions. Since the threshold value
$T_{0}$ is quite small, the bound state should exist for a typical
crossing with relatively good contact. However, the energy of the
state is extremely small, $E\sim 10^{-8}$ eV if $T\sim T_{0}$.
Qualitatively similar results were found by numerical
computationSchult ; Carini of the ground-state energy of an
electron trapped at the intersection of a cross formed by two
quantum wires of finite width.
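The bound-state condition can also be checked numerically at finite $L$: with $k=i\kappa$, the second transcendental equation becomes $\kappa=(Tm/\hbar^{2})\tanh(\kappa L)$, which has a nonzero root only for $T>T_{0}=\hbar^{2}/mL$. A sketch in units $\hbar=m=1$ (our choice):

```python
import numpy as np
from scipy.optimize import brentq

hbar = m = 1.0  # illustrative units

def bound_state_energy(T, L):
    """Solve kappa = (T*m/hbar**2) * tanh(kappa*L) for the bound state of a
    single crossing; returns None below the threshold T0 = hbar**2/(m*L)."""
    A = T * m / hbar**2
    if A * L <= 1.0:
        return None                        # no bound state for T <= T0
    kappa = brentq(lambda q: q - A * np.tanh(q * L), 1e-12, 2.0 * A)
    return -hbar**2 * kappa**2 / (2.0 * m)
```

For large $L$ the root approaches $\kappa=Tm/\hbar^{2}$ and the energy approaches the thermodynamic-limit value of Eq. (22), $E=-T^{2}m/2\hbar^{2}$.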
III.2 Four crossed wires
For the case of two wires in the upper layer and two in the lower layer, there are four
crossings. In this case, the self-consistent equations read
$$\left[\begin{array}[]{c}\psi_{1}(x_{1})\\
\psi_{1}(x_{2})\\
\psi_{2}(x_{1})\\
\psi_{2}(x_{2})\end{array}\right]=M(x_{1},x_{2},E)\left[\begin{array}[]{c}\varphi_{1}(y_{1})\\
\varphi_{1}(y_{2})\\
\varphi_{2}(y_{1})\\
\varphi_{2}(y_{2})\end{array}\right]$$
(24)
and
$$\left[\begin{array}[]{c}\varphi_{1}(y_{1})\\
\varphi_{1}(y_{2})\\
\varphi_{2}(y_{1})\\
\varphi_{2}(y_{2})\end{array}\right]=M(y_{1},y_{2},E)\left[\begin{array}[]{c}%
\psi_{1}(x_{1})\\
\psi_{1}(x_{2})\\
\psi_{2}(x_{1})\\
\psi_{2}(x_{2})\end{array}\right],$$
(25)
where
$$M(x_{1},x_{2},E)=-T\left(\begin{array}[]{cccc}G(x_{1},x_{1},E)&0&G(x_{1},x_{2}%
,E)&0\\
G(x_{1},x_{2},E)&0&G(x_{2},x_{2},E)&0\\
0&G(x_{1},x_{1},E)&0&G(x_{1},x_{2},E)\\
0&G(x_{1},x_{2},E)&0&G(x_{2},x_{2},E)\end{array}\right).$$
(26)
The secular equation then has the form
$$\det[M(x_{1},x_{2},E)M(y_{1},y_{2},E)-I]=0,$$
(27)
which yields a rather complicated transcendental equation ($I$ is
the identity matrix). The spectral equation for bound states $E<0$
can be significantly simplified in the thermodynamic limit
$L\rightarrow\infty$. Then, with $k=i\kappa$, for both open and
periodic boundary conditions, the matrix elements become
$$G(x_{i},x_{j},E)=\frac{m}{\hbar^{2}|\kappa|}e^{-|\kappa(x_{i}-x_{j})|}$$
(28)
and the secular equation in Eq. (27) has 4
solutions with negative energy described by
$$E=-\frac{T^{2}m}{2\hbar^{2}}(1-a_{1}-a_{2}+a_{1}a_{2}),\qquad E=-\frac{T^{2}m}{2\hbar^{2}}(1+a_{1}-a_{2}-a_{1}a_{2}),$$
$$E=-\frac{T^{2}m}{2\hbar^{2}}(1-a_{1}+a_{2}-a_{1}a_{2}),\qquad E=-\frac{T^{2}m}{2\hbar^{2}}(1+a_{1}+a_{2}+a_{1}a_{2}).$$
Here, $a_{1}\equiv e^{-|\kappa(x_{2}-x_{1})|}$, $a_{2}\equiv e^{-|\kappa(y_{2}-y_{1})|}$, and $E=-\hbar^{2}\kappa^{2}/2m$ (notice the
implicit dependence of $a_{1}$ and $a_{2}$ on $E$). The value of $a_{i}$
depends exponentially on the distance between the crossing points.
In the limit $|x_{2}-x_{1}|,|y_{2}-y_{1}|\rightarrow\infty$ we have
$a_{1},a_{2}\rightarrow 0$, which corresponds to four independent
crossings with the bound state energy $E=-T^{2}m/{2\hbar^{2}}$, the
same value as we found in the previous case (see Eq. (22)).
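Since the four energies factorize as $E=-(T^{2}m/2\hbar^{2})(1\pm a_{1})(1\pm a_{2})$, combining them with $E=-\hbar^{2}\kappa^{2}/2m$ gives the implicit condition $\kappa=(Tm/\hbar^{2})\sqrt{(1+s_{1}a_{1})(1+s_{2}a_{2})}$, $s_{i}=\pm 1$, which can be solved by fixed-point iteration. The parameter values below are hypothetical, chosen only for illustration:

```python
import math

hbar2_over_m = 0.0762   # eV·nm² for the bare electron mass (an assumption)
T = 1.0e-3              # eV·nm, hypothetical tunnelling amplitude
d1, d2 = 100.0, 100.0   # nm, hypothetical separations |x2-x1| and |y2-y1|

def kappa_branch(s1, s2, n_iter=500):
    """Solve κ = (Tm/ħ²)·sqrt((1+s1·a1)(1+s2·a2)) with a_i = exp(-κ d_i)."""
    kappa = T/hbar2_over_m           # isolated-crossing starting value
    for _ in range(n_iter):
        a1, a2 = math.exp(-kappa*d1), math.exp(-kappa*d2)
        kappa = (T/hbar2_over_m)*math.sqrt((1 + s1*a1)*(1 + s2*a2))
    return kappa

# the four bound-state energies E = -ħ²κ²/2m, one per sign branch
E = {(s1, s2): -hbar2_over_m*kappa_branch(s1, s2)**2/2
     for s1 in (1, -1) for s2 in (1, -1)}
E_iso = -T**2/(2*hbar2_over_m)   # isolated-crossing energy, Eq. (22)
```

The fully symmetric branch lies below the isolated-crossing energy and the fully antisymmetric branch above it, with the mixed branches degenerate for $d_{1}=d_{2}$.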
III.3 A regular lattice of crossed wires
Consider now a regular square lattice, with lattice constant $a$.
Then, one has $x_{l}=al$ and $y_{j}=aj$. From symmetry arguments, the
wave functions should be $\psi_{j}(x)=\psi_{0}(x)e^{iK_{y}aj}$ and
$\varphi_{l}(y)=\varphi_{0}(y)e^{iK_{x}al}$. After substituting
them into Eq. (13) and using Eq. (28) we
find
$$\psi_{j}(x)=-T\varphi_{0}(y_{j})\frac{me^{iK_{x}l_{x}a}}{\hbar^{2}\kappa}\left[\frac{\sinh(\kappa x-\kappa al_{x})e^{iK_{x}a}}{\cosh(\kappa a)-\cos(K_{x}a)}-\frac{\sinh(\kappa x-\kappa(l_{x}+1)a)}{\cosh(\kappa a)-\cos(K_{x}a)}\right],$$
$$\varphi_{l}(y)=-T\psi_{0}(x_{l})\frac{me^{iK_{y}l_{y}a}}{\hbar^{2}\kappa}\left[\frac{\sinh(\kappa y-\kappa al_{y})e^{iK_{y}a}}{\cosh(\kappa a)-\cos(K_{y}a)}-\frac{\sinh(\kappa y-\kappa(l_{y}+1)a)}{\cosh(\kappa a)-\cos(K_{y}a)}\right],$$
where $l_{x},l_{y}\in\mathbb{Z}$ are such that $al_{x}\leq x<a(l_{x}+1)$ and
$al_{y}\leq y<a(l_{y}+1)$. Therefore,
$\psi_{j}(x_{l})=\psi_{0}(0)e^{i(K_{x}al+K_{y}aj)}$ and
$\varphi_{l}(y_{j})=\varphi_{0}(0)e^{i(K_{x}al+K_{y}aj)}$, with
$\psi_{0}(0)$ and $\varphi_{0}(0)$ related by
$$\psi_{0}(0)=-T\frac{m}{\hbar^{2}\kappa}\frac{\sinh(\kappa a)}{\cosh(\kappa a)-\cos(K_{x}a)}\;\varphi_{0}(0),$$
$$\varphi_{0}(0)=-T\frac{m}{\hbar^{2}\kappa}\frac{\sinh(\kappa a)}{\cosh(\kappa a)-\cos(K_{y}a)}\;\psi_{0}(0).$$
(29)
Thus, the spectral equation reads
$$1=\frac{(mT)^{2}}{(\hbar^{2}\kappa)^{2}}\frac{\sinh^{2}(\kappa a)}{[\cosh(\kappa a)-\cos(K_{x}a)][\cosh(\kappa a)-\cos(K_{y}a)]}.$$
By performing an analytic continuation $k=i\kappa$ in Eq. (29), we find equations similar to those obtained
previously by Kazymyrenko and Douçot (Kazymyrenko) when
studying scattering states in a lattice. The spectral equation
describes a band formed by bound states with energies $-T/a<E<0$.
The momenta $K_{x}$ and $K_{y}$ run in the interval
$-\pi<K_{x}a,K_{y}a<\pi$ if $T\geq T_{f}=2\hbar^{2}/ma$ or inside the
region $|\sin(K_{x}a/2)\sin(K_{y}a/2)|\leq T/T_{f}$ if $T<T_{f}$. Similar
results were calculated (Dickinson),
estimated (CastroNeto), and measured (Zhou) in the context
of hybridization between vertical and horizontal stripe modes in
high-$T_{c}$ superconductors.
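The existence conditions for lattice bound states can be checked directly from the spectral equation by bisection. The lattice constant below is hypothetical, and the bare electron mass is assumed:

```python
import math

hbar2_over_m = 0.0762           # eV·nm², bare electron mass (assumption)
a = 10.0                        # nm, hypothetical lattice constant
T_f = 2.0*hbar2_over_m/a        # threshold T_f = 2ħ²/ma

def f(kappa, Kx, Ky, T):
    """Spectral equation as f(κ) = 0: RHS of the equation minus 1."""
    u = kappa*a
    return (T/(hbar2_over_m*kappa))**2*math.sinh(u)**2/(
        (math.cosh(u) - math.cos(Kx*a))*(math.cosh(u) - math.cos(Ky*a))) - 1.0

def kappa_root(Kx, Ky, T):
    """Bisect for the bound-state κ; None if no solution at this (Kx, Ky)."""
    lo, hi = 1e-8, 50.0/a       # f(κ→0) fixes existence, f(large κ) → -1
    if f(lo, Kx, Ky, T) < 0.0:
        return None
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if f(mid, Kx, Ky, T) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

At the zone corner $K_{x}a=K_{y}a=\pi$ the $\kappa\rightarrow 0$ limit of the spectral function equals $(T/T_{f})^{2}$, so a solution exists there only for $T\geq T_{f}$, while at $K_{x}=K_{y}=0$ a solution exists for any $T>0$.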
IV A more general case
Now we consider a more general model, which takes into account the
presence of an inhomogeneous potential $V^{\rm ext}_{j}(x)$
arising from possible lattice deformations, and includes
electron-electron interactions $V^{\rm e-e}(x)$, which will be
treated at a mean field level, within the Hartree approximation
$V^{\rm e-e}_{{\rm H}j}(x)$. Each crossing $(x_{i},y_{j})$ is
considered as a scattering point with tunnelling $T_{ij}$ and
scattering potential $U_{ij}$. The corresponding equations of
motion then read
$$D_{jx}\psi_{j}(x)+\sum_{l}[U_{lj}\psi_{j}(x_{l})+T_{lj}\varphi_{l}(y_{j})]\delta(x-x_{l})=0,$$
$$D_{iy}\varphi_{i}(y)+\sum_{l}[\tilde{U}_{il}\;\varphi_{i}(y_{l})+T^{*}_{il}\psi_{l}(x_{i})]\delta(y-y_{l})=0,$$
where
$$D_{jx}=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V_{j}(x)-E,\qquad D_{iy}=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dy^{2}}+V_{i}(y)-E,$$
with $V_{j}(x)=V^{\rm ext}_{j}(x)+V^{\rm e-e}_{{\rm H}j}(x)$.
This model is solved most easily through the Green’s function
satisfying
$$D_{jx_{1}}G_{j}(x_{1},x_{2},E)=\delta(x_{1}-x_{2})$$
with
$$G_{j}(x_{1},x_{2},E)=G_{j}^{*}(x_{2},x_{1},E),$$
and the corresponding open boundary conditions,
$$G_{j}(x_{1},L,E)=0,\quad G_{j}(x_{1},-L,E)=0,$$
or the periodic ones
$$\displaystyle G_{j}(x_{1},L,E)=G_{j}(x_{1},-L,E),$$
$$\displaystyle{G_{j}}^{\prime}(x_{1},L,E)={G_{j}}^{\prime}(x_{1},-L,E),$$
where the prime denotes the derivative with respect to $x_{1}$. Note
that we consider real time Green’s function for a particular wire
(not the whole system). The solution to the model is
$$\psi_{j}(x)=-\sum_{l}[U_{lj}\psi_{j}(x_{l})+T_{lj}\varphi_{l}(y_{j})]G_{j}(x,x_{l},E),$$
$$\varphi_{i}(y)=-\sum_{l}[\tilde{U}_{il}\;\varphi_{i}(y_{l})+T^{*}_{il}\psi_{l}(x_{i})]G_{i}(y,y_{l},E),$$
(30)
which we require to be normalized
$$\sum_{l}\left(\int|\psi_{l}(x)|^{2}dx+\int|\varphi_{l}(y)|^{2}dy\right)=1.$$
(31)
The self-consistency condition for the values of the wavefunctions at the
crossing points $(x_{i},y_{j})$ yields the equations
$$\sum_{l}[(U_{lj}G_{j}(x_{i},x_{l},E)+\delta_{il})\psi_{j}(x_{l})+T_{lj}G_{j}(x_{i},x_{l},E)\varphi_{l}(y_{j})]=0,$$
$$\sum_{l}[(\tilde{U}_{il}\;G_{i}(y_{j},y_{l},E)+\delta_{jl})\varphi_{i}(y_{l})+T^{*}_{il}G_{i}(y_{j},y_{l},E)\psi_{l}(x_{i})]=0.$$
(32)
To find nontrivial solutions for the fields $\psi_{j}(x)$ and
$\varphi_{i}(y)$, the system of homogeneous equations in Eq. (32) has to be linearly dependent and hence the solution
is represented by the null space of the system. This means that
after writing the equations in a matrix form, the determinant of
the matrix should be zero, thus leading to a spectral equation for
$E$. Moreover, bound state solutions in the thermodynamic limit
$L\rightarrow\infty$ satisfy both open and periodic boundary
conditions, since $\psi(\pm L)\rightarrow 0$ and $\psi^{\prime}(\pm L)\rightarrow 0$.
To better understand the dependence of the Green’s function $G_{j}(x_{i},x_{l},E)$ on $E$, we represent it through the
solutions of the homogeneous equation,
$$D_{jx}\psi_{j}(x)=0.$$
(33)
We omit the index $j$ in what follows for simplicity. The most
general and common representation, which holds for any static
potential, reads as follows:
$$G(x_{1},x_{2},E)=\sum_{n}\frac{\psi_{\varepsilon_{n}}^{*}(x_{1})\psi_{%
\varepsilon_{n}}(x_{2})}{\varepsilon_{n}-E}.$$
(34)
Here, the function $\psi_{\varepsilon}(x)$ is the solution of the
homogeneous equation
$$\left(-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V(x)-\varepsilon\right)\psi_{%
\varepsilon}(x)=0,$$
(35)
and the spectrum $\{\varepsilon_{n}\}$ is obtained by imposing the
corresponding boundary conditions. Notice that in the present representation of $G(x_{1},x_{2},E)$ the
functions $\psi_{\varepsilon_{n}}(x)$ have to be orthonormal. By
writing $G(x_{1},x_{2},E)$ in the form given in Eq. (34),
the following identity arises
$$\int dx^{\prime}G(x_{1},x^{\prime},E)G(x^{\prime},x_{2},E)=\frac{\partial G(x_%
{1},x_{2},E)}{\partial E}.$$
(36)
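The identity (36) follows term by term from the orthonormality of the $\psi_{\varepsilon_{n}}$. A quick numerical check for a hard-wall box (truncated to $N$ modes; the quadrature is essentially exact here thanks to the discrete orthogonality of the sine modes):

```python
import math

# Particle in a box x ∈ [-L, L] with open boundary conditions; units with
# ħ²/2m = 1 and L = 1, so ε_n = (nπ/2)².
L, N, M = 1.0, 25, 4001
dx = 2*L/(M - 1)
xs = [-L + i*dx for i in range(M)]

def psi(n, x):
    return math.sqrt(1.0/L)*math.sin(n*math.pi*(x + L)/(2*L))

eps = [(n*math.pi/(2*L))**2 for n in range(1, N + 1)]

def G(x1, x2, E):
    return sum(psi(n+1, x1)*psi(n+1, x2)/(eps[n] - E) for n in range(N))

E, x1, x2 = -0.7, 0.3, -0.2
lhs = dx*sum(G(x1, x, E)*G(x, x2, E) for x in xs)        # ∫ G G dx'
rhs = sum(psi(n+1, x1)*psi(n+1, x2)/(eps[n] - E)**2 for n in range(N))  # ∂G/∂E
```

The two sides agree to floating-point precision, as Eq. (36) requires.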
The case $x_{1}=x_{2}=0$ for free electrons is illustrated in Fig. 3, where Eq. (20) is plotted. If some
external potential is present, $G(x_{0},x_{0},E)$ has the same form
but the positions of the poles are shifted and the corresponding
values are different.
If no regularization is used, the calculations for $E>0$ must be
performed at finite $L$: as $L\rightarrow\infty$ the energy spacing
between the modes vanishes and the poles on the positive real
half axis merge into a branch cut. This behavior can be
readily seen in the example of Eq. (20), where one can perform an
analytic continuation $k\rightarrow k+ik^{\prime}$. Then, in the limit $L\rightarrow\infty$, $\tan(kL+ik^{\prime}L)\rightarrow i\,{\rm sgn}(k^{\prime})$, and the function $G(x_{0},x_{0})$ changes sign as one
goes from the upper to the lower complex half plane for $k\neq 0$.
Now we represent the Green’s function through the solutions of the
homogeneous equation
$$\left(-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V(x)-E\right)\psi(x)=0.$$
(37)
This is a second-order differential equation and therefore has
two linearly independent solutions, which we call $\psi_{1}(x)$
and $\psi_{2}(x)$. The Green’s function is then
$$G(x_{1},x_{2},E)=\left\{\begin{array}[]{ll}A_{-}\psi_{1}(x_{1})+B_{-}\psi_{2}(x_{1}),&x_{1}\leq x_{2}\\
A_{+}\psi_{1}(x_{1})+B_{+}\psi_{2}(x_{1}),&x_{1}>x_{2}\end{array}\right.,$$
(38)
where the expressions for the coefficients
$A_{-},B_{-},A_{+},B_{+}$ (functions of $x_{2}$) are derived in
Appendix A. In particular, for a symmetric potential $V(x)$, we
can choose a symmetric solution $\psi_{s}(x)$ and an antisymmetric
solution $\psi_{a}(x)$ as the linearly independent pair, i.e.,
$\psi_{1}(x)=\psi_{s}(x)$ and $\psi_{2}(x)=\psi_{a}(x)$. Thus we find
$$G(x,0,E)=\frac{m\psi_{a}(L)}{\hbar^{2}{\psi_{a}}^{\prime}(0)}\left[\frac{\psi_%
{s}(x)}{\psi_{s}(L)}-\frac{\psi_{a}(|x|)}{\psi_{a}(L)}\right]$$
(39)
and
$$G(0,0,E)=\frac{m\psi_{s}(0)}{\hbar^{2}{\psi_{a}}^{\prime}(0)}\frac{\psi_{a}(L)%
}{\psi_{s}(L)}.$$
(40)
To obtain the results in the thermodynamic limit $L\rightarrow\infty$, it is useful to rewrite $G(x_{1},x_{2})$ using quantities
which do not depend on $L$ explicitly. For example,
$$G(x,0,E)=G(0,0,E)\frac{\psi_{s}(x)}{\psi_{s}(0)}-\frac{m}{\hbar^{2}}\frac{\psi%
_{a}(|x|)}{{\psi_{a}}^{\prime}(0)}.$$
(41)
After substitution of Eqs. (8) into Eq. (38) and simplification, for the case of noninteracting
electrons we find
$$G(x_{1},x_{2},E)=\frac{m}{\hbar^{2}k\sin(2kL)}[\cos(kx_{1}+kx_{2})-\cos(2kL-k|x_{1}-x_{2}|)],$$
which is the same expression as in the previous section (see Eq. (14)). This is an a posteriori justification for using
the same symbol $G(x_{1},x_{2},E)$ in the first section. The case
of a harmonic potential is considered in Appendix B.
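This equivalence is easy to verify numerically: with $\psi_{s}(x)=\cos(kx)$ and $\psi_{a}(x)=\sin(kx)$, the construction of Eq. (39) must reproduce the explicit result of Eq. (14) at $x_{2}=0$ for every $x$ (units $\hbar=m=1$; the values of $k$ and $L$ below are arbitrary):

```python
import math

k, L = 0.83, 4.0   # arbitrary wavenumber and half-length

def G_wavefn(x):                      # Eq. (39) with ψ_s = cos, ψ_a = sin
    return (math.sin(k*L)/k)*(math.cos(k*x)/math.cos(k*L)
                              - math.sin(k*abs(x))/math.sin(k*L))

def G_explicit(x1, x2):               # Eq. (14)
    return (math.cos(k*(x1 + x2))
            - math.cos(2*k*L - k*abs(x1 - x2)))/(k*math.sin(2*k*L))
```

Expanding $\cos(2kL-k|x|)$ shows the two expressions are identical term by term.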
V A single crossing
Now we apply our results, including tunnelling and an external
potential, to the simplest case of two crossed wires, aiming to compare
our findings with experiments. Using the general solution given by
Eq. (30), and considering $T=T^{*}$, we can write
$$\psi(x)=-[U\psi(x_{0})+T\varphi(y_{0})]G_{1}(x,x_{0},E),$$
$$\varphi(y)=-[\tilde{U}\;\varphi(y_{0})+T\psi(x_{0})]G_{2}(y,y_{0},E).$$
By substituting $(x,y)=(x_{0},y_{0})$, we find that at the crossing
point
$$[1+UG_{1}(x_{0},x_{0},E)]\psi(x_{0})+TG_{1}(x_{0},x_{0},E)\varphi(y_{0})=0,$$
$$[1+\tilde{U}G_{2}(y_{0},y_{0},E)]\varphi(y_{0})+TG_{2}(y_{0},y_{0},E)\psi(x_{0})=0.$$
The consistency condition requires that
$$\left|\begin{array}[]{cc}1+UG_{1}(x_{0},x_{0},E)&TG_{1}(x_{0},x_{0},E)\\
TG_{2}(y_{0},y_{0},E)&1+\tilde{U}G_{2}(y_{0},y_{0},E)\end{array}\right|=0,$$
(42)
or
$$0=[1+UG_{1}(x_{0},x_{0},E)][1+\tilde{U}G_{2}(y_{0},y_{0},E)]-T^{2}G_{1}(x_{0},x_{0},E)G_{2}(y_{0},y_{0},E).$$
The meaning of this equation becomes clearer in the symmetric
case, when $U=\tilde{U}$ and $G_{1}(x_{0},x_{0},E)=G_{2}(y_{0},y_{0},E)=G$.
In this case, it reduces to a quadratic equation, which bears two
solutions,
$$G_{+}=\frac{-1}{U+T},\qquad G_{-}=\frac{-1}{U-T}.$$
Notice that they differ by the sign in front of the tunnelling
amplitude $T$, which shifts the potential $U$. This symmetry
effectively reduces the problem to one dimension with an effective potential
$U_{\rm eff}\delta(x-x_{0})$. Hence, we have
$$\psi(x_{0})=\varphi(y_{0}),\quad\;\;U_{\rm eff}^{+}=U+T,$$
$$\psi(x_{0})=-\varphi(y_{0}),\quad U_{\rm eff}^{-}=U-T.$$
(43)
The shift of the energy levels in a wire due to the presence of
the $\delta$ potential can be visualized with the help of the
Green’s function expansion, where one has
$$G(x_{0},x_{0},E)=\sum_{n}\frac{|\psi_{\varepsilon_{n}}(x_{0})|^{2}}{%
\varepsilon_{n}-E}=\frac{-1}{U_{\rm eff}}.$$
(44)
In the case with $U_{\rm eff}=0$, the energies are exactly those
of the poles and, therefore, remain unshifted. However, since $G(x_{0},x_{0},E)=-1/U_{\rm eff}$, the curve actually describes how the
energies of the modes change as we keep increasing $-1/U_{\rm eff}$ from $-\infty$ if $U_{\rm eff}>0$ or decreasing $-1/U_{\rm eff}$ from $+\infty$ if $U_{\rm eff}<0$. In the latter case, we
can run into the region with $E<0$, which would correspond to the
appearance of a bound state. Nevertheless, to obtain an exact
solution, it is more convenient to work with the expression for $G(x_{0},x_{0},E)$ in terms of the wave functions,
$$G(0,0,E)=\frac{m\psi_{s}(0)}{\hbar^{2}{\psi_{a}}^{\prime}(0)}\frac{\psi_{a}(L)%
}{\psi_{s}(L)}=\frac{-1}{U_{\rm eff}},$$
(45)
where we assumed $x_{0}=0$ for simplicity.
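For free wires the exact solution of the matching condition is immediate: with $G(0,0,E)=m/\hbar^{2}\kappa$ for $E<0$, setting $G=-1/U_{\rm eff}$ gives a bound state only for $U_{\rm eff}<0$, with $\kappa=-U_{\rm eff}m/\hbar^{2}$. A minimal sketch (bare electron mass assumed) showing that the $U=0$, antisymmetric branch recovers Eq. (22):

```python
# Symmetric crossing of two free wires, reduced to U_eff δ(x-x0), Eq. (43).
hbar2_over_m = 0.0762   # eV·nm² (bare electron mass, an assumption)

def bound_energy(U_eff):
    if U_eff >= 0:
        return None                      # G(0,0,E) > 0 for E < 0: no solution
    kappa = -U_eff/hbar2_over_m          # from G = m/(ħ²κ) = -1/U_eff
    return -hbar2_over_m*kappa**2/2.0    # E = -U_eff² m / 2ħ²

U, T = 0.0, 7.62e-5     # U = 0 recovers the pure-tunnelling crossing
E_minus = bound_energy(U - T)   # antisymmetric branch: bound, Eq. (22)
E_plus  = bound_energy(U + T)   # symmetric branch: unbound for U = 0
```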
VI Comparison with experiments
Now, we will compare our theoretical findings with experimental
results. We concentrate mostly on the analysis of a system
consisting of two crossed single wall carbon nanotubes (SWNTs): a
metallic one on top of a semiconducting (MS) one (Janssen). In its
unperturbed state, the band structure of a SWNT can be understood
by considering the electronic structure of graphene. Due to its
cylindrical shape, the transverse momentum of one-particle
excitations in a SWNT has to be quantized, whereas the
longitudinal momentum may vary continuously. Combining this
condition with the assumption that the electronic structure is not
very different from that of graphene, one finds two different
situations, depending on the topology of the SWNT: there are no
gapless modes and the nanotube is semiconducting, or two gapless
modes are present and the nanotube is called metallic. Analyzing
the spectroscopic measurements performed along the metallic
nanotube (see Fig. 4) and comparing with the unperturbed
electronic structure,
one notices two main changes. First, a small quasi gap opens
around the Fermi energy level $\varepsilon_{F}$ between
$\varepsilon_{F}-0.2$ eV and $\varepsilon_{F}+0.3$ eV in the spectrum
of the massless modes (corresponding to zero transverse momentum).
Second, two peaks are visible at $\varepsilon_{0}=\varepsilon_{F}-0.3$
eV and $\varepsilon_{1}=\varepsilon_{F}-0.6$ eV in the region around
the crossing, corresponding to localized states between the Fermi
energy and the van Hove singularity at $\varepsilon_{\rm vH}=\varepsilon_{F}-0.8$ eV. Such states are not visible above the
Fermi energy, thus suggesting that the electron-hole symmetry is
broken by the presence of some external potential. The latter may
appear due to lattice distortions and the formation of a Schottky
barrier at the contact between the nanotubes
(Odintsov; OdintsovYo). In the following, we show that if the
potential is strong enough, localized states can form in the
spectrum of the massive mode corresponding to the van Hove
singularity with energy $\varepsilon=\varepsilon_{\rm vH}-E$.
Therefore, the observed localized states should have $E_{0}=-0.5$ eV
and $E_{1}=-0.2$ eV.
To incorporate in a more complete way the effects of the Schottky
barrier and lattice deformation, we assume $V^{\rm ext}(x)$ to
have a Lorentzian shape,
$$V^{\rm ext}(x)=-\frac{\tilde{V}}{1+x^{2}/b^{2}}.$$
(46)
Firstly, we study the influence of this potential alone on the
electronic structure, i.e., we assume no tunnelling ($T=0$)
and no electron-electron interactions. Exact numerical
solution of the Schrödinger equation shows that an approximation
of the potential in Eq. (46) by the harmonic one
does not change the solution qualitatively. Therefore, we consider
$V^{\rm ext}(x)\approx-\tilde{V}(1-x^{2}/b^{2})$, which describes a
harmonic oscillator with frequency
$\omega=\sqrt{2\tilde{V}/mb^{2}}$ and corresponding spectra
$E_{n}=-\tilde{V}+(n+1/2)\sqrt{2\hbar^{2}\tilde{V}/mb^{2}}$ for
$E_{n}<0$. Moreover, it is reasonable to assume that the strength
of the barrier $\tilde{V}$ is of the same order as the energy of
the bound states and that the potential is localized on the same
length scale as the localized states. Hence, we take
$\tilde{V}=0.7$ eV and $b=4$ nm. It follows then from our
calculations that the difference between neighboring energy levels
is quite small and there are many bound states present in the case
when $m$ is the actual electron mass. However, assuming $m$ to be
an effective electron mass, with $m=0.025\;m_{e}$, which is of the
same order as the experimentally estimated values $m=0.037\;m_{e}$ (JarilloHerrero) and $m=0.06\;m_{e}$ (Radosavljevic),
we find exactly two pronounced bound states: the first one has
$E=-0.5$ eV and is described by the symmetric wavefunction
$\psi_{s}(x)$ as shown in Fig. 5,
whereas the other has $E=-0.2$ eV and is described by the
antisymmetric wavefunction $\psi_{a}(x)$, see Fig. 6.
Considering Fig. 5, we observe that the localization
size of the state is around $10$ nm, which agrees well with the
experimental data. On the other hand, the state shown in Fig. 6 vanishes exactly at the crossing and is rather
spread out, a behavior which is not observed experimentally. Besides
these two, a number of other states are also present in the
vicinity of the van Hove singularity with $E>-0.1$ eV.
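As an independent numerical sketch (not the authors' code), the bound states of the Lorentzian well of Eq. (46) with the quoted parameters can be found by a Numerov shooting method; it should yield two pronounced levels near $-0.5$ and $-0.2$ eV:

```python
import math

Vt, b = 0.7, 4.0              # eV and nm, as quoted in the text
c = 0.0381/0.025              # ħ²/2m in eV·nm² for m = 0.025 m_e

def V(x):
    return -Vt/(1.0 + (x/b)**2)

def shoot(E, X=30.0, h=0.02):
    """Numerov-integrate ψ'' = [(V-E)/c] ψ from -X with ψ(-X)=0; return ψ(X)."""
    n = int(round(2*X/h))
    kk = [(V(-X + i*h) - E)/c for i in range(n + 1)]
    pm, p = 0.0, 1e-12
    for i in range(1, n):
        pn = (2*(1 + 5*h*h*kk[i]/12)*p - (1 - h*h*kk[i-1]/12)*pm) \
             / (1 - h*h*kk[i+1]/12)
        pm, p = p, pn
    return p

# scan for sign changes of ψ(X) and bisect each bracket to an eigenvalue
levels, Ep, fp = [], None, None
E = -0.69
while E < -0.05:
    fE = shoot(E)
    if fp is not None and fE*fp < 0:
        lo, hi, flo = Ep, E, fp
        for _ in range(40):
            mid = 0.5*(lo + hi)
            fm = shoot(mid)
            if fm*flo < 0:
                hi = mid
            else:
                lo, flo = mid, fm
        levels.append(0.5*(lo + hi))
    Ep, fp = E, fE
    E += 0.01
```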
Secondly, we take into account electron-electron interactions to
consider other possibilities to obtain two pronounced bound
states. Unfortunately, our approach only allows us to incorporate
electron-electron interactions at the mean-field level by using
the Hartree selfconsistent approximation
$$V^{\rm e-e}_{H}(x)=\int dx^{\prime}V^{\rm e-e}(x-x^{\prime})n(x^{\prime}),$$
(47)
where $n(x)$ is the electron density, given by
$$n(x)=\sum_{k}|\psi_{k}(x)|^{2}n_{F}(\varepsilon_{k}-\mu).$$
(48)
Here the summation $k$ goes over energy levels and
$n_{F}(\varepsilon)$ is the Fermi distribution. Although it is known
that in 1D systems quantum fluctuations play an extremely
important role, we nevertheless start with the mean-field
approximation as a first step to incorporate them in RPA.
Moreover, we believe that their presence does not qualitatively
change the obtained results. To render the numerical calculation
simpler, we consider a delta-like interaction potential, which
leads to
$$V^{\rm e-e}_{H}(x)=V_{0}n(x).$$
(49)
By estimating the effective interaction strength $V_{0}\sim 2\pi\hbar v_{F}$ from Luttinger liquid theory, we obtain
$V_{0}\sim 3.4$ eV$\cdot$nm for $v_{F}=8.2\times 10^{7}$
cm/s (Lemay). Suppose that the lowest energy state with
$E=-0.5$ eV is occupied by an electron with a certain spin. Then,
there is a possibility to add to the same state an electron with
an opposite spin. However, due to the repulsive Coulomb
interaction the energy of the two-electron state becomes $E=-0.2$
eV for $V_{0}=3.15$ eV$\cdot$nm. The corresponding self consistent
solution is presented in Fig. 7.
The state has the same shape as in Fig. 5, but is a bit
more spread. By comparing the density of states (DOS) distribution
with scanning tunnelling spectroscopy (STS) data for the
crossing,Janssen we observe that the inclusion of
electron-electron interactions (Fig. 7) provides a
much better agreement between theory and experiment for the
$E=-0.2$ eV bound state than in the previous case (Fig. 6).
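The size of this interaction-induced shift can be estimated without the full self-consistent solution. To first order, the Hartree shift of a doubly occupied level with a delta interaction is $\Delta E\approx V_{0}\int|\psi|^{4}dx$; assuming a Gaussian ansatz of width $\sigma$ (my assumption, not the authors' numerics), $\int|\psi|^{4}dx=1/(\sigma\sqrt{2\pi})$, so the observed $0.3$ eV shift fixes $\sigma$:

```python
import math

# Hedged first-order estimate: width σ that reproduces the 0.3 eV Hartree
# shift for a Gaussian state and a delta interaction of strength V0.
V0, dE = 3.15, 0.3                       # eV·nm and eV, values from the text
sigma = V0/(math.sqrt(2*math.pi)*dE)     # nm, ≈ 4 nm
```

The resulting $\sigma\approx 4$ nm is consistent with the $\sim 10$ nm localization size seen in the data, which supports the interpretation of the second peak as a Coulomb-shifted level.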
Thirdly, we take into account tunnelling between the wires.
Qualitatively, this leads to the splitting of energy levels and
redistribution of charge density in the wires, thus effectively
reducing the strength of electron-electron interactions. Since we
have no information about the electronic structure of the
semiconducting nanotube, to make a quantitative estimation we
assume that the effective mass is equal in both wires and that the
potential is also the same. In such a case, from symmetry arguments the
electron density should be evenly distributed in both wires even
for a very weak tunnelling. Therefore, the electron-electron
interactions should be twice as strong as in the case without
tunnelling, namely, $V_{0}=6.3$ eV$\cdot$nm to achieve the same
energy value. Moreover, if the tunnelling coefficient is large
enough, the splitting of the energy levels becomes significant and
detectable. We can estimate the coefficient $T$, if we assume that
it is of the same order for MS, metallic-metallic (MM), and
semiconducting-semiconducting (SS) nanotube junctions. The SS and
MM junctions have an Ohmic voltage-current dependence, characterized
by the conductance $G$. Moreover, we can estimate the transmission
coefficient of the junction as $G/G_{0}\sim(T/2\pi\hbar v_{F})^{2}$, for $G/G_{0}\ll 1$. For MM junctions experimental
measurements (Dickinson) typically yield $G/G_{0}\sim 10^{-2}$,
thus corresponding to $T\sim 0.34$ eV$\cdot$nm. For example, for
$T=0.28$ eV$\cdot$nm and $\tilde{V}=0.44$ eV in Eq. (46), without electron-electron interactions
we find that the system has two bound states.
The lowest energy bound state with $E=-0.5$ eV is shown in Fig. 8.
Compared with Fig. 5, the state has a peak exactly at
the crossing, corresponding to a local increase of the DOS. The
other bound state with $E=-0.2$ eV is shown in Fig. 9.
Contrary to the previous case, the state has a dip at the
crossing, corresponding to a local decrease of the DOS. However,
this local change in the DOS is too small to be observable in the
present experimental data. If we now include electron-electron
interactions with $V_{0}=3.15$ eV$\cdot$nm and add a second electron
with different spin to the system, we find that the new state has
$E=-0.267$ eV and acquires the shape shown in Fig. 10.
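The two order-of-magnitude estimates used above, $V_{0}\sim 2\pi\hbar v_{F}$ and $T$ from $G/G_{0}\sim(T/2\pi\hbar v_{F})^{2}$, are simple to reproduce:

```python
import math

hbar = 6.582e-16                 # eV·s
v_F = 8.2e7 * 1.0e7              # 8.2×10⁷ cm/s expressed in nm/s
V0_scale = 2*math.pi*hbar*v_F    # ≈ 3.4 eV·nm, the quoted V0 scale
T = V0_scale*math.sqrt(1.0e-2)   # ≈ 0.34 eV·nm for G/G0 ~ 10⁻²
```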
The last result suggests that there are yet other possible
interpretations of the experimental results. Firstly, if the
potential in the metallic SWNT is significantly decreased due to
screening effects but a Schottky barrier in the semiconducting
SWNT can reach considerable values, sufficient for the formation
of the bound states, then the latter are also going to be present
in the metallic SWNT due to tunnelling between SWNTs. Secondly,
there is still a possibility to find a bound state existing purely
due to tunnelling, i.e., without external potential, as was shown
in Eq. (23), and a second bound state may arise with
different energy due to Coulomb repulsion between electrons with
different spins. However, this is most probably not the case we
have in the experiments, because due to electron-hole symmetry
such states would exist also above the Fermi energy, a result
which is not observed experimentally.
VII Conclusions
We presented several possibilities to explain the observed
localized states at the crossing of metallic and semiconducting
nanotubes.Janssen All of them require the existence of an
external potential in the metallic and/or semiconducting SWNT to
break the electron-hole symmetry, since the localized states were
seen only below the Fermi energy. Most probably, such a potential
comes from a Schottky barrier and the effect of lattice
distortions is minimal, since such localized states were, up to
now, observed only for MS crossings and not for MM or SS ones.
Moreover, the effective mass of quasiparticle excitations should
be of order $m=0.025\;m_{e}$, where $m_{e}$ is the actual electron
mass, to generate only a few bound states localized on a region of
approximately $10$ nm with energies of order $0.5$ eV. The best
agreement with the experimental data is obtained by assuming that
the second bound state has a different energy due to the Coulomb
repulsion between electrons with different spins. The role of
tunnelling in the observed electronic structure is not clear and
allows for many interpretations. To avoid such ambiguity, the
electronic structure of the semiconducting nanotube should be
measured as well. Moreover, to be sure that the available STS
measurements indeed represent the electronic structure of the
nanotube and are free of artifacts introduced by the STM tip
(LeRoy), several measurements at different tip heights should
be performed.
VIII Acknowledgments
We are very grateful to S. G. Lemay for useful discussions.
Appendix A
Here we consider the Green’s function as a function of one
variable $x_{1}$ and fix $x_{2}$ for a moment. Since $G(x_{1},x_{2},E)$
is the Green’s function, we require it to satisfy proper boundary
conditions $G(\pm L,x_{2},E)=0$, be continuous $G(x_{2}-0,x_{2},E)=G(x_{2}+0,x_{2},E)$, and also $G^{\prime}(x_{2}-0,x_{2},E)-G^{\prime}(x_{2}+0,x_{2},E)=2m/\hbar^{2}$. Substituting Eq. (38) into
the above requirements one finds
$$P\left(\begin{array}[]{c}A_{+}\\
A_{-}\\
B_{+}\\
B_{-}\end{array}\right)=\frac{2m}{\hbar^{2}}\left(\begin{array}[]{c}0\\
0\\
0\\
1\end{array}\right),$$
(50)
where
$$P\equiv\left(\begin{array}[]{cccc}\psi_{1}(L)&0&\psi_{2}(L)&0\\
0&\psi_{1}(-L)&0&\psi_{2}(-L)\\
\psi_{1}(x_{2})&-\psi_{1}(x_{2})&\psi_{2}(x_{2})&-\psi_{2}(x_{2})\\
-\psi_{1}^{\prime}(x_{2})&\psi_{1}^{\prime}(x_{2})&-\psi_{2}^{\prime}(x_{2})&%
\psi_{2}^{\prime}(x_{2})\end{array}\right).$$
(51)
Multiplying Eq. (50) by the matrix $P^{-1}$ we find
$$\left(\begin{array}[]{c}A_{+}\\
A_{-}\\
B_{+}\\
B_{-}\end{array}\right)=C\left(\begin{array}[]{c}\psi_{2}(L)[\psi_{2}(-L)\psi_%
{1}(x_{2})-\psi_{1}(-L)\psi_{2}(x_{2})]\\
\psi_{2}(-L)[\psi_{2}(L)\psi_{1}(x_{2})-\psi_{1}(L)\psi_{2}(x_{2})]\\
-\psi_{1}(L)[\psi_{2}(-L)\psi_{1}(x_{2})-\psi_{1}(-L)\psi_{2}(x_{2})]\\
-\psi_{1}(-L)[\psi_{2}(L)\psi_{1}(x_{2})-\psi_{1}(L)\psi_{2}(x_{2})]\end{array%
}\right),$$
where
$$C\equiv\frac{2m}{\hbar^{2}W_{r}}[\psi_{1}(L)\psi_{2}(-L)-\psi_{1}(-L)\psi_{2}(%
L)]^{-1}.$$
The Wronskian
$$W_{r}\equiv\psi_{1}(x_{2})\psi_{2}^{\prime}(x_{2})-\psi_{2}(x_{2})\psi_{1}^{%
\prime}(x_{2}),$$
is nonzero for linearly independent functions, and its value does
not depend on the point $x_{2}$.
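The closed-form coefficients can be verified directly: for free-particle solutions $\psi_{1}=\cos(kx)$, $\psi_{2}=\sin(kx)$ the resulting Green's function must vanish at $\pm L$, be continuous at $x_{2}$, and have the prescribed derivative jump (units $\hbar=m=1$, so the jump is $2$; the values of $k$, $L$, $x_{2}$ below are arbitrary):

```python
import math

k, L, x2 = 0.9, 3.0, 0.7
p1 = lambda x: math.cos(k*x)          # ψ1
p2 = lambda x: math.sin(k*x)          # ψ2
dp1 = lambda x: -k*math.sin(k*x)
dp2 = lambda x: k*math.cos(k*x)

Wr = p1(x2)*dp2(x2) - p2(x2)*dp1(x2)      # Wronskian (here = k)
D = p1(L)*p2(-L) - p1(-L)*p2(L)
C = 2.0/(Wr*D)                            # 2m/(ħ² W_r) × [...]⁻¹ with ħ=m=1
a = p2(-L)*p1(x2) - p1(-L)*p2(x2)
b = p2(L)*p1(x2) - p1(L)*p2(x2)
Ap, Am = C*p2(L)*a, C*p2(-L)*b
Bp, Bm = -C*p1(L)*a, -C*p1(-L)*b

Gm = lambda x: Am*p1(x) + Bm*p2(x)        # branch x1 <= x2
Gp = lambda x: Ap*p1(x) + Bp*p2(x)        # branch x1 > x2
```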
Appendix B
Suppose that Eq. (37) has a solution $\psi(x)$ which is
neither symmetric nor antisymmetric. For a symmetric
potential, $\psi(-x)$ is then also a solution, and the two are
linearly independent. We can therefore compose a symmetric
solution $\psi_{s}(x)=(\psi(x)+\psi(-x))/2$ and an antisymmetric one
$\psi_{a}(x)=(\psi(x)-\psi(-x))/2$. In particular, for a
harmonic potential $V(x)=m\omega^{2}x^{2}/2$, one can find such a
solution
$$\psi(x)=e^{-\frac{m\omega x^{2}}{2\hbar}}H\left(\frac{E}{\hbar\omega}-\frac{1}%
{2},\sqrt{\frac{m\omega}{\hbar}}x\right),$$
(52)
where $H(\nu,x)$ is the Hermite function, which reduces to the
Hermite polynomial for integer $\nu$. It follows then that
$$\psi_{s}(0)=2^{\frac{E}{\hbar\omega}-\frac{1}{2}}\frac{\sqrt{\pi}}{\Gamma(\frac{3}{4}-\frac{E}{2\hbar\omega})}$$
(53)
and
$$\psi_{a}^{\prime}(0)=-2^{\frac{E}{\hbar\omega}}\sqrt{\frac{2\pi\omega m}{\hbar}}\frac{1}{\Gamma(\frac{1}{4}-\frac{E}{2\hbar\omega})}.$$
(54)
Moreover, in the thermodynamic limit $L\rightarrow\infty$,
$$G(0,0,E)=\frac{1}{2\hbar}\sqrt{\frac{m}{\omega\hbar}}\frac{\Gamma(\frac{1}{4}-\frac{E}{2\hbar\omega})}{\Gamma(\frac{3}{4}-\frac{E}{2\hbar\omega})}.$$
(55)
As $\omega\rightarrow 0$ for $E<0$, Eq. (55) asymptotically approaches
the free-particle expression,
$$G(0,0,E)\rightarrow\frac{1}{\hbar}\sqrt{\frac{-m}{2E}}.$$
(56)
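Eq. (55) can be checked against the eigenfunction sum of Eq. (34), where only even levels contribute at the origin, and against the $\omega\rightarrow 0$ limit of Eq. (56). Note that the Gamma arguments must carry $E/(2\hbar\omega)$; with $E/\hbar\omega$ the limit would miss Eq. (56) by a factor $\sqrt{2}$. Units $\hbar=m=1$:

```python
import math

def G55(E, w):
    """Eq. (55); lgamma avoids overflow for small ω (large Gamma arguments)."""
    a, b = 0.25 - E/(2*w), 0.75 - E/(2*w)
    return 0.5*math.sqrt(1.0/w)*math.exp(math.lgamma(a) - math.lgamma(b))

def G_series(E, w, K=200000):
    """Σ_n |ψ_n(0)|²/(ε_n - E): only even n = 2k survive, with
    |ψ_2k(0)|² = sqrt(w/π)·C(2k,k)/4^k and ε_2k = (2k+1/2)w."""
    t, s = 1.0, 0.0              # t_k = C(2k,k)/4^k, updated iteratively
    for k in range(K):
        s += t/((2*k + 0.5)*w - E)
        t *= (2*k + 1)/(2*k + 2)
    return math.sqrt(w/math.pi)*s
```

For $E=-0.5$, $\omega=1$ the closed form gives exactly $\sqrt{\pi}/2$, matching the (slowly converging) series, and for $\omega\rightarrow 0$ it tends to the free value $\sqrt{-m/2E}/\hbar=1$.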
References
(1)
M. Büttiker, Y. Imry, R. Landauer, and
S. Pinhas, Phys. Rev. B 31, 6207 (1985).
(2)
C. L. Kane and M. P. A. Fisher, Phys. Rev. Lett. 68, 1220 (1992).
(3)
A. Komnik and R. Egger, Phys. Rev. Lett. 80, 2881 (1997).
(4)
B. Gao, A. Komnik, R. Egger, D. C. Glattli, and A. Bachtold, Phys. Rev. Lett. 92, 216804 (2004).
(5)
M. S. Fuhrer, J. Nygard, L. Shih, M. Forero, Young-Gui Yoon, M. S. C. Mazzoni,
Hyoung Joon Choi, Jisoon Ihm, Steven G. Louie, A. Zettl, and
Paul L. McEuen, Science 288, 494 (2000).
(6)
H. W. Ch. Postma, M. de Jonge, Zhen Yao, and C. Dekker, Phys. Rev. B 62, 10653 (2000).
(7)
J. W. Janssen, S. G. Lemay, L. P. Kouwenhoven, and C. Dekker, Phys. Rev. B
65, 115423 (2002).
(8)
I. E. Dzyaloshinskii and A. I. Larkin, Zh. Eksp. Teor. Fiz. 65, 411 (1973) [Sov. Phys. JETP 38, 202
(1974)].
(9)
S. Das Sarma and E. H. Hwang, Phys. Rev. B
54, 1936 (1996).
(10)
R. L. Schult, D. G. Ravenhall, and H. W. Wyld, Phys. Rev. B
39, 5476 (1989).
(11)
J. P. Carini, J. T. Londergan, K. Mullen,
and D. P. Murdock, Phys. Rev. B 46, 15538 (1992).
(12)
K. Kazymyrenko, B. Douçot, Phys. Rev. B
71, 075110 (2005).
(13)
P. H. Dickinson and S. Doniach, Phys. Rev. B
47, 11447 (1993).
(14)
A. H. Castro Neto and F. Guinea, Phys. Rev. Lett. 80, 4040 (1998).
(15)
X. J. Zhou, P. Bogdanov, S. A. Kellar, T. Noda, H. Eisaki, S. Uchida, Z. Hussain, and Z.X. Shen, Science 286, 268 (1999).
(16)
A. Odintsov, Phys. Rev. Lett. 85, 150 (2000).
(17)
A. Odintsov and Hideo
Yoshioka, Phys. Rev. B 59, 10457 (1999).
(18)
P. Jarillo-Herrero, S. Sapmaz, C. Dekker, L. P. Kouwenhoven,
and H. S. J. van der Zant, Nature 427, 389 (2004).
(19)
M. Radosavljevic, J. Appenzeller, Ph. Avouris, and J. Knoch,
Appl. Phys. Lett. 84, 3693 (2004).
(20)
S. G. Lemay, J. W. Janssen, M. van den Hout, M. Mooij, M. J. Bronikowski,
P. A. Willis, R. E. Smalley, L. P. Kouwenhoven, and C. Dekker, Nature (London) 412, 617 (2001).
(21)
B. J. LeRoy, I. Heller, V. K. Pahilwani, C. Dekker, and S. G. Lemay, Nano Lett. 7, 2937 (2007). |
Confinement of hydrodynamic modes on a free surface and their quantum
analogs
M. Torres (Electronic address: manolo@iec.csic.es)
Instituto de Física Aplicada, Consejo Superior de
Investigaciones Científicas, Serrano 144, 28006 Madrid, Spain.
J. P. Adrados
Instituto de Física Aplicada, Consejo Superior de
Investigaciones Científicas,
Serrano 144, 28006 Madrid, Spain.
P. Cobo
A. Fernández
Instituto de Acústica, Consejo Superior de
Investigaciones Científicas, Serrano 144, 28006
Madrid, Spain
C. Guerrero
Laboratorio de Física de Sistemas Pequeños y
Nanotecnología, Consejo Superior de Investigaciones
Científicas, Serrano 144, 28006 Madrid, Spain.
G. Chiappe
Departamento de Física Aplicada and Unidad
Asociada del Consejo Superior de Investigaciones Científicas,
Universidad de Alicante, San Vicente del Raspeig, Alicante 03690,
Spain.
E. Louis
Departamento de Física Aplicada and Unidad
Asociada del Consejo Superior de Investigaciones Científicas,
Universidad de Alicante, San Vicente del Raspeig, Alicante 03690,
Spain.
J.A. Miralles
Departamento de Física Aplicada and Unidad
Asociada del Consejo Superior de Investigaciones Científicas,
Universidad de Alicante, San Vicente del Raspeig, Alicante 03690,
Spain.
J. A. Vergés
Departamento de Teoría de la Materia Condensada,
Instituto de Ciencia de Materiales de Madrid, Consejo Superior de
Investigaciones Científicas, Cantoblanco, Madrid 28049, Spain.
J.L. Aragón (Electronic address: aragon@fata.unam.mx)
Centro de Física Aplicada y Tecnología
Avanzada, Universidad Nacional Autónoma de
México, Apartado Postal 1-1010, Querétaro 76000, México.
(November 25, 2020)
Abstract
A subtle procedure to confine hydrodynamic modes on the free surface of a
fluid is presented here. The experiment consists of a square vessel,
with an immersed square central well, vibrating vertically so that
the surface waves generated by the meniscus at the vessel boundary
interfere with the bound states of the well. This is a classical
analog of a quantum well in which some fundamental phenomena, such as
the binding of states and the interference between free waves and bound
states, can be visualized and controlled. The above-mentioned
interference leads to a novel hydrodynamic transition from quasiperiodic
to periodic patterns. Tight-binding numerical calculations are performed
here to show that our results could be transferred, for the first time, to the
design of quantum confinements exhibiting electronic quasiperiodic surface
states and their rational approximants.
PACS numbers: 47.35.+i, 47.20.-k, 47.54.+r, 71.23.Ft
The interest in experiments on classical analogs, which accurately
model the salient features of some quantum systems or other
fundamental wave phenomena, was first raised by the acoustic
experiments of Maynard [1], where the analogy between
the acoustic and Schrödinger equations is invoked. More
generally, a hydrodynamic analogy has also been used to
describe the flow of electrons in quantum semiconductor devices
[2]. Experiments with liquid surface waves have later been
reported that presented abstract concepts, such as Bloch
states, domain walls, and band gaps in periodic systems
[3, 4], Bloch-like states in quasiperiodic systems
[5], and novel findings such as the superlensing effect
[6], in a visually clear manner. On the other hand, the
correspondence between the shallow-water wave equation and the
acoustic wave equation has also been demonstrated
[7, 4]. Such correspondences could be exploited to
investigate and address quantum effects formally similar to those
observed in quantum corrals [8], where an optical analogy
has already been proposed [9], and in grain boundaries or
simple surface steps [10]. Our main goal is to build
the hydrodynamic analog of the interference between the bound states of
a finite quantum well and free states. To this end, we realized an
experimental setup consisting of a square vessel with a single
square well drilled at its bottom; both squares are concentric, and the
well diagonals are parallel to the box sides. The immersed well was
expected to act as a weak potential binding surface waves, depending
on the liquid depth. When the vessel vibrated vertically
with amplitudes such that the Faraday instability was prevented,
this geometry produced three kinds of linear or weakly nonlinear
patterns on the free surface of the liquid. The first pattern is a
sort of bound state restricted to the surface area occupied by the
immersed well, which acts as a weak potential binding standing plane
waves. The second pattern is produced by the meniscus at the vessel
walls [11] and can invade the region of the well,
depending on the liquid depth and the vessel vibration frequency.
Finally, the last pattern arises from the interference between the
vessel meniscus waves and the bound states of the well. The observed
patterns depend on the liquid depth $h_{1}$, which plays the role of
an order parameter by controlling the amplitudes of the bound states
inside the immersed well.
Our main observation is summarized in Fig. 2(a), which
clearly shows the binding of the surface wave produced by the
drilled well when the vibration amplitude is 60 $\mu$m. This pattern
will be detailed below, but the physics of its origin can be
discussed as follows. The bound states arise from an inertial
hydrodynamic instability, balanced by the liquid surface tension
[12], that grows over the square well region
[5]. The bound-state amplitudes increase with increasing $1/a^{2}$, where $a^{2}=T/\rho g$ defines the capillary length $a$, $T$ is
the liquid surface tension, $\rho$ is the liquid density, and $g=g_{0}\pm\alpha\omega^{2}$ is the effective gravity; here $g_{0}$ is the
acceleration due to gravity, $\alpha$ is the vessel vibration
amplitude, and $\omega$ is the angular vibration frequency of the
vessel. On the contrary, the meniscus wave amplitude depends on the
variation of the meniscus volume during each vessel oscillation and
grows accordingly when $a^{2}$ increases [11, 12]. In
our experiment $\alpha$ is about 60 $\mu$m, and the meniscus wave
amplitude reaches a maximum at a vessel vibration frequency of 64
Hz. On the other hand, the vessel vibration frequency represents the
wave-state energy. The frequency and the wavelength are related
through the well-known dispersion relation of gravity-capillary
waves [3, 4, 5, 12]. The present experiment
is essentially monochromatic, as is the case in the optical analogs
[9] of quantum corrals. To show that Fig.
1(a) is not a Faraday pattern, we present in Fig. 1(b) a snapshot of
the system vibrating at about 70 $\mu$m,
when the Faraday instability is actually triggered in the square well.
Figure 1(b) shows a larger wavelength that matches
the period doubling characteristic of a Faraday wave pattern.
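The gravity-capillary dispersion relation invoked above can be evaluated numerically. The sketch below is an illustration, not the authors' code: it uses the finite-depth relation $\omega^{2}=(gk+Tk^{3}/\rho)\tanh(kh)$ with nominal surface-tension and density values for ethanol, neglects the effective-gravity modulation, and solves for the wavenumber at the 50 Hz driving frequency by bisection, showing how the two depths $h_{1}$ and $h_{2}$ give slightly different wavenumbers $k_{1}$ and $k_{2}$.

```python
import math

def omega(k, h, T=22.3e-3, rho=789.0, g=9.81):
    """Angular frequency (rad/s) of a gravity-capillary wave of wavenumber k
    (rad/m) on a layer of depth h (m). Defaults are nominal ethanol values."""
    return math.sqrt((g * k + T * k**3 / rho) * math.tanh(k * h))

def wavenumber(f, h, k_lo=1.0, k_hi=1e5, tol=1e-9):
    """Invert the dispersion relation by bisection: find k with omega(k, h) = 2*pi*f.
    Valid because omega is monotonically increasing in k."""
    target = 2 * math.pi * f
    for _ in range(200):
        k_mid = 0.5 * (k_lo + k_hi)
        if omega(k_mid, h) < target:
            k_lo = k_mid
        else:
            k_hi = k_mid
        if k_hi - k_lo < tol:
            break
    return 0.5 * (k_lo + k_hi)

# Depths from the experiment: h1 = 1.2 mm outside, h2 = h1 + d over the 2 mm well.
k1 = wavenumber(50.0, 1.2e-3)   # outer region
k2 = wavenumber(50.0, 3.2e-3)   # over the immersed well
print(k1, k2, abs(k1 - k2) / k1)
```

With these nominal parameters the relative difference between $k_{1}$ and $k_{2}$ comes out at the few-percent level, consistent with the "about 2%" quoted below for Case II.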
We chose the square vessel and oriented square well
configuration to verify that the immersed well confined wave states.
We used a square methacrylate box with side $L$ of 8 cm, in which a
single square well with depth $d$ of 2 mm and side $l$ of 3.5 cm was
drilled at its bottom. The bottom of the vessel was covered with a
shallow ethanol layer of depth $h_{1}$; the liquid depth over the well
was then $h_{2}=h_{1}+d$.
As already mentioned, the vessel vibration amplitude was 60
$\mu$m, below the Faraday instability threshold set at about 70
$\mu$m. The vessel vibrated vertically at a single frequency lying
within the range from 35 to 60 Hz; an optimum frequency value was 50
Hz. Depending on the depth $h_{1}$, our experimental results can be
separated into three cases.
Case I. For $h_{1}$ lower than 1 mm, the experiment shows two
square lattices rotated 45${}^{o}$ with respect to each other, namely, the lattice of
the immersed square well is separated from the vessel square lattice
(Fig. 1(a)). The external wave amplitudes are lower than
the wave amplitudes within the well and, furthermore, the external
wave reflection at the well step [13, 14, 15] is
strong. For very shallow liquid layers, waves are only present
within the well at the vibration amplitudes of the experiment.
Case II. At a vessel vibration frequency of 50 Hz, when
$h_{1}$ is 1.2 mm, a quasicrystalline standing-wave pattern appears
inside the immersed square region, whereas the outer wave pattern is
a square network (Fig. 2(a)). The immersed square well
acts as a weak potential and binds standing plane waves with
wave vectors $k_{2}\mathbf{u}^{\prime}_{x}$ and $k_{2}\mathbf{u}^{\prime}_{y}$, parallel to the well sides. Nevertheless, it is transparent to
the standing waves of the square box, which tunnel through the immersed well
region under the experimental conditions. Inside the square
region, the vessel eigenstates have wave vectors $k_{2}\mathbf{u}_{x}$
and $k_{2}\mathbf{u}_{y}$, parallel to the outer box sides. Thus, the
standing state inside the region of the square well is finally described by
$\psi_{2}(k_{2})+\psi^{\prime}_{2}(k_{2})=A_{2}[\exp(\imath k_{2}x)+\exp(\imath k_{2}y)]+A^{\prime}_{2}[\exp(\imath k_{2}x^{\prime})+\exp(\imath k_{2}y^{\prime})]$.
Equal phases along $x$ and $y$, and along
$x^{\prime}$ and $y^{\prime}$, respectively, were assumed, and $|A_{2}|=|A^{\prime}_{2}|$ for the mentioned liquid depth and vibration
frequency. The interference of the standing patterns $\psi_{2}$ and $\psi^{\prime}_{2}$ increases the symmetry in the well from square
crystalline to octagonal quasicrystalline. This pattern matches well
with the outer one, described by $\psi_{1}(k_{1})=A_{1}[\exp(\imath k_{1}x)+\exp(\imath k_{1}y)]$. According to the dispersion relation for
gravity-capillary waves described elsewhere
[3, 4, 5, 12], the difference between $k_{1}$
and $k_{2}$ is about 2%, and the refraction bending of about 1.1${}^{o}$
at the boundary of the central window is negligible. Furthermore,
the external wave reflection at the well step
[13, 14, 15] is also negligible with such
parameters. On the other hand, slender outgoing evanescent waves are
emitted at the boundary of the well, and they play an important role
in the matching between patterns. It is important to remark that,
through the well-known dispersion relation [3, 4, 5, 12],
the parameters of the experiment rule out that the
wave inside the inner well is a subharmonic Faraday wave and the
outer wave a harmonic meniscus wave; if this were the case,
then $k_{1}$ and $k_{2}$ would be very different.
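The superposition written above is easy to visualize numerically. The following sketch (an illustration, not the authors' fitting code; the wavenumber value is nominal and the evanescent boundary corrections are omitted) evaluates $\psi_{2}+\psi^{\prime}_{2}$ on a grid, with the primed axes rotated by 45${}^{o}$. As the ratio $|A^{\prime}_{2}/A_{2}|$ is reduced, the pattern crosses over from octagonal quasicrystalline to square crystalline, mimicking the depth-driven transition described in Case III.

```python
import numpy as np

def well_pattern(k2, A2, A2p, n=256, size=0.035):
    """Real part of the standing state psi2 + psi2' inside the square well.
    x', y' are the well axes, rotated 45 degrees from the vessel axes x, y."""
    x = np.linspace(-size / 2, size / 2, n)
    X, Y = np.meshgrid(x, x)
    Xp = (X + Y) / np.sqrt(2.0)   # rotated coordinates
    Yp = (Y - X) / np.sqrt(2.0)
    psi = A2 * (np.exp(1j * k2 * X) + np.exp(1j * k2 * Y))
    psi += A2p * (np.exp(1j * k2 * Xp) + np.exp(1j * k2 * Yp))
    return psi.real

k2 = 1440.0                              # wavenumber over the well, rad/m (nominal)
octagonal = well_pattern(k2, 1.0, 1.0)   # Case II: |A2| = |A2'|, octagonal symmetry
square = well_pattern(k2, 1.0, 0.0)      # bound state suppressed: square pattern only
# The octagonal pattern has eight equivalent Fourier peaks; the square one, four.
print(octagonal.shape, square.max())
```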
Case III. When $h_{1}$ is increased, the potential of the
immersed well appears increasingly weak to the system, and
$A^{\prime}_{2}$ decreases accordingly. Under such conditions,
transitional patterns from a quasicrystalline form to a crystalline
one appear gradually in the hydrodynamic window. Figure
2(b) shows a transitional pattern corresponding to a
liquid depth $h_{1}$ of 1.5 mm and an excitation frequency of 50 Hz.
Using crystallographic techniques of image processing
described elsewhere [5], the Fourier transforms of the
experimental patterns are calculated (Figs. 2(a) and
2(b)) and used to depict Fig. 3,
which shows the fast decay of $|A^{\prime}_{2}/A_{2}|^{2}$ with
increasing $h_{1}$. Figure 4 shows four frames of the
numerical simulation of the quasicrystal-crystal transition in the
hydrodynamic central square window. The complete movie is available
as Auxiliary Material.
The appearance of a quasiperiodic pattern inside the immersed square
region in Case II confirms that the square well is binding
standing plane waves. Such a pattern can only be produced by the
interference of two square patterns rotated by 45${}^{o}$; namely, the
pattern of bound standing waves of the square well is transparent to
the pattern of standing meniscus waves of the vessel if adequate
conditions on the experimental parameters are fulfilled. On the
other hand, the square pattern of the vessel is only observed beyond
the region of the central square well. Moreover, there are
evanescent waves coming out of the boundary of the square well. The
observation of such waves also confirms the analogy with a quantum
well [16].
The description of the observed standing wave patterns given in Case
II is the same as that used to describe the stationary states of a
particle moving in a square-well potential. Using these
functions, we show a numerical simulation of the observed
quasicrystalline pattern in Fig. 3(b). A trial-and-error
numerical method was used to fit the boundaries, and a
better match is obtained if slender outgoing evanescent waves
emitted at the boundary of the well are included.
To test the viability of a quantum scenario analogous to our
experimental results, we numerically studied the quantum confinement
of a double square well by using a tight-binding Hamiltonian on an $L\times L$ cluster of the square lattice with a single atomic orbital
per lattice site [17],
$$\hat{H}=\sum_{m,n}\epsilon_{m,n}|m,n\rangle\langle m,n|-\sum_{\langle mn;m^{\prime}n^{\prime}\rangle}t_{m,n;m^{\prime}n^{\prime}}|m,n\rangle\langle m^{\prime},n^{\prime}|,$$
where $|m,n\rangle$ represents an atomic orbital at site $(m,n)$
and $\epsilon_{m,n}$ its energy. In order to simulate the inner
square, we have explored several possibilities. Outside the inner
square, the energies of all orbitals are taken equal to zero, and
the hopping energies between nearest-neighbor sites (the symbol $\langle\,\rangle$
denotes that the sum is restricted to nearest neighbors) are all
taken equal to 1. Inside the inner square, we changed either
the orbital or the hopping energies. Some illustrative
results are shown in Fig. 5. Figures 5(a)
and (b) correspond to wavefunctions close to the band bottom,
i.e., long wavelengths and high linearity. In Fig.
5(a) the octagonal symmetry within the inner square is
clearly visible. Figure 5(b) illustrates a purely quantum effect,
namely, that the influence of the outer square is visible
even on a wavefunction localized in the inner square. Although one
cannot expect a one-to-one correspondence between the experiments
discussed here and this simple quantum simulation, the results
clearly suggest that similar effects could be observed in a suitable
quantum system. Figure 5(c) shows a peculiar zero-energy
state that is not located at the band bottom. In this state
the hopping energy outside the square is equal to 1, whereas it is
0.7 inside the square, and the general pattern conspicuously
resembles our experiment. The Fourier spectrum of the inner square
pattern has been computed, and it corresponds to a rational
approximant of an octagonal quasicrystal generated by the following
vectors: $(2/\sqrt{5},0)$, $(0,2/\sqrt{5})$, $2(2/\sqrt{5},1/\sqrt{5})$, $2(2/\sqrt{5},-1/\sqrt{5})$, $2(1/\sqrt{5},2/\sqrt{5})$, $2(-1/\sqrt{5},2/\sqrt{5})$. The Fourier spectra of Figs.
5(a) and (b) correspond to quasicrystalline states.
This suggests that the hopping energy could be a parameter to induce
quasicrystal-crystal transitions in confined quantum states.
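A minimal version of this tight-binding calculation can be sketched as follows. This is an illustration under stated assumptions: the cluster size is kept small, boundaries are open, and the inner-square hopping is set to the value 0.7 used for Fig. 5(c); it builds the Hamiltonian of the displayed equation as a dense matrix and diagonalizes it with NumPy.

```python
import numpy as np

def tight_binding(L=20, l_in=8, t_out=1.0, t_in=0.7, eps_in=0.0):
    """Dense tight-binding Hamiltonian on an L x L open cluster.
    Sites inside a centered l_in x l_in square get on-site energy eps_in,
    and bonds with both ends inside it get hopping t_in; everywhere else
    eps = 0 and t = t_out, as in the text."""
    lo, hi = (L - l_in) // 2, (L + l_in) // 2
    inside = lambda m, n: lo <= m < hi and lo <= n < hi
    idx = lambda m, n: m * L + n
    H = np.zeros((L * L, L * L))
    for m in range(L):
        for n in range(L):
            i = idx(m, n)
            if inside(m, n):
                H[i, i] = eps_in
            for dm, dn in ((1, 0), (0, 1)):       # nearest-neighbor bonds
                mp, nq = m + dm, n + dn
                if mp < L and nq < L:
                    j = idx(mp, nq)
                    t = t_in if inside(m, n) and inside(mp, nq) else t_out
                    H[i, j] = H[j, i] = -t
    return H

H = tight_binding()
energies, states = np.linalg.eigh(H)
# Columns of `states` are the wavefunctions; reshaping one to (L, L) and
# plotting |psi|^2 reproduces maps analogous to Fig. 5.
print(energies[0], energies[-1])
```

With zero on-site energies the lattice is bipartite, so the spectrum is symmetric about zero; the low-lying eigenvectors correspond to the long-wavelength states discussed above.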
In conclusion, we have described a hydrodynamic experiment that
gives rise to the confinement of wave states on a free surface. It
constitutes a striking macroscopic experimental scenario that
models some salient features of a quantum well and stimulates, for the
first time, the study and visualization of confined quasiperiodic
quantum states.
Acknowledgements.
Technical support from C. Zorrilla and S. Tehuacanero is gratefully acknowledged.
This work has been partially supported by the Spanish MCYT (BFM20010202 and
MAT2002-04429), the Mexicans DGAPA-UNAM (IN-108502-3) and CONACyT (D40615-F),
the Argentineans UBACYT (x210 and x447) and Fundación Antorchas, and the
University of Alicante.
References
[1] S. He and J.D. Maynard, Phys. Rev. Lett. 62, 1888 (1989); J.D. Maynard, Rev. Mod. Phys. 73, 401 (2001).
[2] C.L. Gardner, SIAM J. Appl. Math. 54, 409 (1994).
[3] M. Torres, J.P. Adrados, and F.R. Montero de Espinosa, Nature (London) 398, 114 (1999).
[4] M. Torres, J.P. Adrados, F.R. Montero de Espinosa, D. García-Pablos, and J. Fayos, Phys. Rev. E 63, 11204 (2000).
[5] M. Torres, J.P. Adrados, J.L. Aragón, P. Cobo, and S. Tehuacanero, Phys. Rev. Lett. 90, 114501 (2003). See also Physical Review Focus: http://focus.aps.org/story/v11/st11
[6] X. Hu, Y. Shen, X. Liu, R. Fu, and J. Zi, Phys. Rev. E 69, 030201(R) (2004); M. Peplow, Nature (London) 428, 713 (2004).
[7] T. Chou, J. Fluid Mech. 369, 333 (1998).
[8] M.F. Crommie, C.P. Lutz, and D.M. Eigler, Nature (London) 363, 524 (1993); M.F. Crommie, C.P. Lutz, and D.M. Eigler, Science 262, 218 (1993).
[9] G. Colas des Francs et al., Phys. Rev. Lett. 86, 4950 (2001); C. Chicanne et al., Phys. Rev. Lett. 88, 097402 (2002).
[10] Y. Hasegawa and P. Avouris, Phys. Rev. Lett. 71, 1071 (1993).
[11] S. Douady, J. Fluid Mech. 221, 383 (1990).
[12] L.D. Landau and E.M. Lifshitz, Fluid Mechanics (Pergamon Press, London, 1959).
[13] H. Lamb, Hydrodynamics (Cambridge University Press, Cambridge, 1932).
[14] E.F. Bartholomeusz, Proc. Cambridge Phil. Soc. 54, 106 (1958).
[15] J.W. Miles, J. Fluid Mech. 28, 755 (1967).
[16] A. Messiah, Mécanique Quantique (Dunod, Paris, 1962).
[17] E. Cuevas, E. Louis, and J.A. Vergés, Phys. Rev. Lett. 77, 1970 (1996).
Predicting Ergonomic Risks During Indoor Object Manipulation Using Spatiotemporal Convolutional Networks
Behnoosh Parsa${}^{1}$, Ekta U. Samani${}^{1}$, Rose Hendrix${}^{1}$, Shashi M. Singh${}^{2}$, Santosh Devasia${}^{1}$, and Ashis G. Banerjee${}^{3}$
*This work was supported in part by a generous gift from Amazon Robotics.
${}^{1}$B. Parsa, E. Samani, R. Hendrix, and S. Devasia are with the Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA ({behnoosh, ektas, rmhend, devasia}@uw.edu).
${}^{2}$S. M. Singh is with the Department of Mechanical Engineering, Indian Institute of Technology Gandhinagar, Palaj 382355, Gujarat, India (shashi.singh@iitgn.ac.in).
${}^{3}$A. G. Banerjee is with the Department of Industrial & Systems Engineering and the Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA (ashisb@uw.edu).
Abstract
Automated real-time prediction of the ergonomic risks of manipulating objects is a key unsolved challenge in developing effective human-robot collaboration systems for logistics and manufacturing applications. We present a foundational paradigm to address this challenge by formulating the problem as one of action segmentation from RGB-D camera videos. Spatial features are first learned using a deep convolutional model from the video frames, which are then fed sequentially to temporal convolutional networks to semantically segment the frames into a hierarchy of actions, which are either ergonomically safe, require monitoring, or need immediate attention. For performance evaluation, in addition to an open-source kitchen dataset, we collected a new dataset comprising twenty individuals picking up and placing objects of varying weights to and from cabinet and table locations at various heights. Results show very high (87-94)% F1 overlap scores among the ground truth and predicted frame labels for videos lasting over two minutes and comprising a large number of actions.
Index Terms: Deep Learning in Robotics and Automation, Human-Centered Automation, Computer Vision for Automation, Action Segmentation, Ergonomic Safety
I Introduction
One of the key considerations for viable human-robot collaboration (HRC) in industrial settings is safety. This consideration is particularly important when a robot operates in close proximity to humans and assists them with certain tasks in increasingly automated factories and warehouses. Therefore, it is not surprising that a lot of effort has gone into identifying and implementing suitable HRC safety measures [1]. Typically, the efforts focus on designing collaborative workspaces to minimize interferences between human and robot activities [2], installing redundant safety protocols and emergency robot activity termination mechanisms through multiple sensors [2], and developing both predictive and reactive collision avoidance strategies [3]. These efforts have resulted in the expanded acceptance and use of industrial robots, both mobile and stationary, leading to increased operational flexibility, productivity, and quality [4].
However, the ergonomic safety of the collaborating humans is an extremely important related topic that has not received much attention until recently. Unlike other commonly considered safety measures, a lack of ergonomic safety does not lead to immediate injury concerns or fatality risks. It does, however, cause or increase the likelihood of causing long-term injuries in the form of musculoskeletal disorders (MSDs) [5]. According to a recent report by the U.S. Bureau of Labor Statistics, there were 349,050 reported cases of MSDs in 2016 in the U.S. alone [6], leading to tens of billions of dollars in healthcare costs.
Therefore, we need to develop collaborative workspaces that minimize the ergonomic risks due to repetitive and/or physically strenuous tasks involving awkward human postures such as bending, stretching, and twisting. Keeping this need in mind, researchers have started working on defining suitable ergonomic safety measures, formulating cost functions for the collaborating robots (co-bots) to optimize over the joint human-robot action spaces, using the right fidelity human motion model, and learning to deal with modeling errors and operational uncertainties (see the various topics presented at a recently organized workshop [7]).
Here, we focus on developing an automated real-time ergonomic risk assessment framework for indoor object manipulation. The goal of this framework is to enable the co-bots to take over a majority of the risky manipulation actions, allowing the humans to engage in more cognitively challenging tasks or supervisory control activities. However, the conventional ergonomic risk assessment methods are based on observations and self-reports, making them error-prone, time consuming, and labor-intensive [8]. Alternatively, RGB-D sensors are placed everywhere in the workspaces and motion sensors are attached to different body parts of the co-workers [9], leading to expensive and unwieldy solutions. Recently, computer vision techniques are being applied on both images and videos [10], and interactive 3D simulation followed by automated post-processing is being employed [11] to realize efficient and convenient solutions. We adopt the final approach in this work without relying on any kind of post-processing.
Unsurprisingly, quite a few researchers have used deep learning over the past couple of years to assess the risks of performing occupational tasks. In particular, this area of research has become quite popular in the construction industry. Recent works include that by Fang et al. [12], who presented a Faster Region-Convolutional Neural Network (Faster R-CNN) method for automated detection of non-hardhat-use from far-field surveillance videos. Ding et al. [13] used a hybrid deep learning method to determine unsafe human behaviors at construction sites. Fang et al. [14] developed another Faster R-CNN method to identify workers who were not wearing harnesses from a large database of construction sites images. Outside of the construction sector, Abobakr et al. [15] employed deep residual CNNs to predict the joint angles of manufacturing workers from individual camera depth images. Mehrizi et al. [16] proposed a multi-view based deep perceptron approach for markerless 3D pose estimation in the context of object lifting tasks. While all these works present useful advances and report promising performances, they do not provide a general-purpose framework to assess or predict the ergonomic risks for any representative set of object manipulation tasks commonly performed in the industry.
Here, we present a first of its kind end-to-end deep learning system and a new evaluation dataset to address this shortcoming for indoor manipulation. Our learning methods are adapted from video action detection and segmentation literature. In action detection, a sparse set of action segments are computed, where the segments are defined by start times, end times, and class labels. For example, Singh et al. [17] proposed the use of a Bi-directional Long Short Term Memory (Bi-LSTM) layer after feature extraction from a multi-stream CNN to reduce detection errors. In action segmentation, an action is predicted for every video frame. Representative works include that by Fathi et al. [18], who showed that state changes at the start and end frames of actions provided good segmentation performance. Kuehne et al. [19] used reduced Fisher vectors for visual representation of every frame, which were then fitted to Gaussian mixture models. Huang et al. [20] presented a temporal classification framework in the case of weakly supervised action labeling. While there are overlaps between these two forms of video processing, our problem closely resembles the latter. We, therefore, employ state-of-the-art segmentation methods that have been shown to work well on challenging datasets.
II Ergonomic Risk Assessment Model
We use a well-established ergonomic model, known as the rapid entire body assessment (REBA) model [21], which is popularly used in the industry. The REBA model assigns scores to the human poses, within a range of 1-15, on a frame-by-frame basis by accounting for the joint motions and angles, load conditions, and activity repetitions. An action with an overall score of less than 3 is labeled as ergonomically safe, a score between 3 and 7 is deemed medium risk that requires monitoring, and every other action is considered high risk that needs immediate attention.
Skeletal information for the TUM Kitchen dataset is available in the bvh (Biovision Hierarchy) file format. We use the bvh parser from the MoCap Toolbox [22] in MATLAB to read this information as the XYZ coordinates of thirty-three markers (joints and end sites) corresponding to every frame. For the UW IOM dataset, the positions of twenty-five different joints are recorded directly in the global coordinate system for each frame using the Kinect sensor, with the help of a toolbox [23] that links Kinect and MATLAB. For every frame, the vectors corresponding to the different body segments, such as the forearm, upper arm, leg, thigh, lower half spine, and upper half spine, are computed. The extension, flexion, and abduction (as applicable) of the various body segments are computed by projecting the two body-segment vectors that constitute the angle onto the plane of motion. These angles of extension, flexion, and abduction are used to assign the trunk, neck, leg, upper arm, lower arm, and wrist scores [21].
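The angle computation described above can be written compactly with NumPy. The helper below is an illustrative sketch, not the released toolbox code, and the segment vectors in the example are hypothetical: it projects two body-segment vectors onto a given plane of motion and returns the angle between the projections in degrees.

```python
import numpy as np

def projected_angle(seg_a, seg_b, plane_normal):
    """Angle (degrees) between two body-segment vectors after projecting
    both onto the plane of motion defined by its unit normal."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    proj = lambda v: v - np.dot(v, n) * n          # remove out-of-plane component
    a, b = proj(np.asarray(seg_a, float)), proj(np.asarray(seg_b, float))
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Example: trunk flexion in the sagittal plane (normal along the x axis).
upper_spine = np.array([0.0, 0.3, 0.9])   # hypothetical segment vector
vertical = np.array([0.0, 0.0, 1.0])
flexion = projected_angle(upper_spine, vertical, plane_normal=[1.0, 0.0, 0.0])
print(round(flexion, 1))   # about 18.4 degrees of forward bending
```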
We define three different thresholds as part of our implementation, namely, the zero threshold, the binary threshold, and the abduction threshold. The zero threshold is used for trunk bending, such that any trunk bending angle smaller than this value is regarded as no bending. The binary threshold determines whether the trunk is twisted and/or side-flexed; trunk twisting and trunk side flexion smaller than this value are ignored. The abduction threshold, though similar to the binary threshold, is defined separately for shoulder abduction, considering its considerably larger allowable range (about 150°) as compared with the smaller allowable range of trunk twisting. Due to the non-availability of rotation information for the neck, we assume that the neck is twisted whenever the trunk is twisted. The nature of the performed actions does not involve arm rotations, so they are ignored while computing the upper arm score.
The computed trunk, neck, leg, upper arm, lower arm, and wrist scores are used to assign the REBA score on a frame-by-frame basis using lookup tables [21]. The REBA scores assigned for each frame are then aggregated over actions and participants. This aggregated value is considered as the final REBA score for that particular action.
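The final per-action labeling can be sketched as follows. This is a simplified illustration: the full REBA lookup tables [21] are not reproduced, the aggregation is a plain mean over the pooled frame scores of an action, and the action names in the example are hypothetical.

```python
from statistics import mean

def risk_label(score):
    """Map an aggregated REBA score (range 1-15) to the three risk classes
    used in this work."""
    if score < 3:
        return "safe"
    if score <= 7:
        return "medium"        # requires monitoring
    return "high"              # needs immediate attention

def aggregate(frame_scores_by_action):
    """frame_scores_by_action: dict mapping action -> list of per-frame REBA
    scores pooled over participants. Returns action -> (mean score, label)."""
    return {action: (mean(scores), risk_label(mean(scores)))
            for action, scores in frame_scores_by_action.items()}

# Hypothetical pooled frame scores for two action classes.
scores = {"stand": [1, 1, 2, 2, 1], "box-bend-pick-up-low": [8, 9, 10, 9]}
print(aggregate(scores))
```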
III Deep Learning Models
III-A Spatial Features Extraction
We adapt two variants of VGG16 convolutional neural network models [24] for spatial feature extraction. The first model is based on the VGG16 model that is pre-trained on the ImageNet database [25]. The second model involves fine-tuning the last two convolutional layers of the VGG16 base. In both the models, the flattened output of the last convolutional layer is connected to a fully connected layer with a drop-out of 0.5 and then fed into a classifier. We always use ReLU and softmax as the activation functions, and Adam [26] as the optimizer.
We also use a simplified form of the pose-based CNN (P-CNN) features [27] that only consider the full images and not the different image patches. Optical flow [28] is first computed for each consecutive pair of RGB datasets, and the flow map is stored as an image [29]. A motion network, introduced in [29], containing five convolutional and three fully-connected layers, is then used to extract frame descriptors for all the optical flow images. Subsequently, the VGG-f network [30], pre-trained on ImageNet, is used to extract another set of frame descriptors for all the RGB images. The VGG-f network also contains five convolutional and three fully connected layers. The two sets of frame descriptors are put together as arrays in the same sequence as that of the video frames to construct motion-based and appearance-based video descriptors, respectively. The appearance and motion-based video descriptors are then normalized and concatenated to form the final video descriptor (spatial features).
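The final step of the simplified P-CNN pipeline, forming the video descriptor from the per-frame appearance and motion descriptors, amounts to stacking, normalizing, and concatenating arrays. A minimal sketch, with placeholder dimensions standing in for the VGG-f and motion-network outputs:

```python
import numpy as np

def video_descriptor(appearance, motion):
    """Stack per-frame descriptors in video order, L2-normalize each stream,
    and concatenate into the final spatial-feature array.
    appearance, motion: lists of 1-D per-frame feature vectors."""
    def stream(frames):
        arr = np.stack(frames)                        # (n_frames, dim)
        norms = np.linalg.norm(arr, axis=1, keepdims=True)
        return arr / np.maximum(norms, 1e-12)         # guard against zero vectors
    return np.concatenate([stream(appearance), stream(motion)], axis=1)

# Toy example: 4 frames, 6-dim appearance and 5-dim motion descriptors.
rng = np.random.default_rng(0)
desc = video_descriptor(list(rng.normal(size=(4, 6))),
                        list(rng.normal(size=(4, 5))))
print(desc.shape)   # (4, 11)
```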
III-B Video Segmentation Methods
We use two kinds of temporal convolutional networks (TCNs), both of which use encoder-decoder architectures to capture long-range temporal patterns in videos [31]. The first network, referred to as the encoder-decoder TCN, or ED-TCN, uses a hierarchy of temporal convolution, pooling, and upsampling layers. The network does not have a large number of layers, but each layer includes a set of long convolutional filters. We use the ReLU activation function and a categorical cross-entropy loss function with RMSProp [32] as the optimizer. The second network, termed the dilated TCN, or D-TCN, adds dilated upsampling and skip connections between the layers; we use a gated activation function inspired by WaveNet [33], with the Adam optimizer. We also use two other segmentation methods for comparison purposes. The first is Bi-LSTM [34], a recurrent neural network commonly used for analyzing sequential data streams. The second is the support vector machine, or SVM, which is extremely popular for classification problems.
III-C Video Segmentation Performance Metrics
In addition to frame-based accuracy, which is the percentage of frames labeled correctly in a sequence as compared to the (manually annotated) ground truth, we report the edit-score and the F1 overlap score to evaluate the performance of the various methods. The edit-score [35] applies the Levenshtein distance to the predicted segment sequence to measure how well the ordering of actions is preserved. The F1 overlap score [35] combines classification precision and recall to reduce the sensitivity of the predictions to minor temporal shifts between the predicted and ground-truth values, as such shifts might be caused by subjective variability among the annotators.
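Both segmentation metrics can be computed directly from the frame-label sequences. The sketch below follows the standard definitions cited in [35]; the 10% overlap threshold and the toy label sequences are illustrative assumptions. It first collapses frames into segments, then computes the segmental edit-score via the Levenshtein distance and the F1 overlap score via IoU matching.

```python
def segments(frame_labels):
    """Collapse a per-frame label sequence into (label, start, end) segments."""
    segs, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segs.append((frame_labels[start], start, i))
            start = i
    return segs

def edit_score(pred, truth):
    """Segmental edit-score: 1 minus the normalized Levenshtein distance
    between the two segment-label sequences, in percent."""
    p = [s[0] for s in segments(pred)]
    t = [s[0] for s in segments(truth)]
    d = list(range(len(t) + 1))            # single-row dynamic program
    for i in range(1, len(p) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(t) + 1):
            cur = min(d[j] + 1, d[j - 1] + 1,
                      prev + (p[i - 1] != t[j - 1]))
            prev, d[j] = d[j], cur
    return 100.0 * (1.0 - d[-1] / max(len(p), len(t)))

def f1_overlap(pred, truth, tau=0.1):
    """F1@tau: a predicted segment is a true positive if its IoU with an
    unmatched ground-truth segment of the same label exceeds tau."""
    P, T = segments(pred), segments(truth)
    used, tp = set(), 0
    for lab, s, e in P:
        best, best_j = 0.0, None
        for j, (lab2, s2, e2) in enumerate(T):
            if lab2 != lab or j in used:
                continue
            iou = (max(0, min(e, e2) - max(s, s2)) /
                   (max(e, e2) - min(s, s2)))
            if iou > best:
                best, best_j = iou, j
        if best > tau:
            tp += 1
            used.add(best_j)
    prec = tp / len(P) if P else 0.0
    rec = tp / len(T) if T else 0.0
    return 200.0 * prec * rec / (prec + rec) if prec + rec else 0.0

truth = ["walk"] * 10 + ["reach"] * 5 + ["pick-up"] * 5
pred = ["walk"] * 9 + ["reach"] * 7 + ["pick-up"] * 4
print(edit_score(pred, truth), f1_overlap(pred, truth))
```

Note that the shifted but correctly ordered prediction above scores perfectly on both metrics, whereas frame-based accuracy would penalize every misaligned frame; this is exactly the insensitivity to minor temporal shifts that motivates these metrics.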
IV System Architecture and Datasets
IV-A System Architecture
We develop an end-to-end automated ergonomic risk prediction system as shown in Fig. 1. The Figure shows that the prediction works in two stages. In the first stage (top half of the Figure), which only needs to be done once for a given dataset, ergonomic risk labels are computed for each object manipulation action class based on the skeletal models extracted from the RGB-D camera videos. Simultaneously, the videos are annotated carefully to assign an action label to each and every frame. These two types of labels are then used to learn a modified VGG16 model for the entire set of available videos. In the second stage (bottom half of the Figure), during training, the exact sequence of video frames is fed to the learned VGG16 model to extract useful spatial features. The array of extracted features is then fed in the same order to train a TCN on how to segment the videos by identifying the similarities and changes in the features corresponding to actions executions and transitions, respectively. For testing, a similar procedure is followed except that the trained TCN is now employed to segment unlabeled videos into semantically meaningful actions with known ergonomic risk categories. It is possible to replace the VGG16 model by another deep neural network model that also accounts for human motions (e.g., P-CNN) to achieve slightly better segmentation performance at the expense of longer training and prediction times.
IV-B Datasets
IV-B1 TUM Kitchen Dataset
The TUM Kitchen dataset [36] consists of nineteen videos, recorded at twenty-five frames per second by four different monocular cameras, numbered 0 to 3. Each video captures regular actions performed by an individual in a kitchen, involving walking, picking up, and placing utensils to and from cabinets, drawers, and tables. The average duration of the videos is about two minutes. The dataset also includes skeletal models of the individual obtained through 3D reconstruction of the camera images. These models are constructed using a markerless full-body motion tracking system that relies on hierarchical sampling for Bayesian estimation and layered observation modeling to handle environmental occlusions [37]. We categorize the actions into twenty-one classes or labels, where each label follows a two-tier hierarchy, with the first tier indicating a motion verb (close, open, pick-up, place, reach, stand, twist, and walk) and the second tier denoting the location (cabinet, drawer) or mode of object manipulation (do not hold, hold with one hand, and hold with both hands).
IV-B2 University of Washington Indoor Object Manipulation Dataset
Considering the dearth of suitable videos capturing object manipulation actions involving awkward poses and repetitions, we collected a new University of Washington Indoor Object Manipulation (UW IOM) dataset using an IRB-approved study. The dataset comprises videos of twenty participants within the age group of 18-25 years, of which fifteen are male and five are female. The videos are recorded using a Kinect Sensor for Xbox One at an average rate of twelve frames per second. Each participant carries out the same set of tasks in terms of picking up six objects (three empty boxes and three identical rods) from three different vertical racks, placing them on a table, putting them back on the racks from where they are picked up, and then walking out of the scene carrying the box from the middle rack. The boxes are manipulated with both hands, while the rods are manipulated with only one hand. The above tasks are repeated in the same sequence three times, such that the duration of every video is approximately three minutes. We categorize the actions into seventeen labels, where each label follows a four-tier hierarchy. The first tier indicates whether the box or the rod is manipulated, the second tier denotes human motion (walk, stand, and bend), the third tier captures the type of object manipulation if applicable (reach, pick-up, place, and hold), and the fourth tier represents the relative height of the surface where manipulation takes place (low, medium, and high). Representative snapshots from one of the videos are shown in Fig. 2. The UW IOM dataset is available for free download and use at: https://data.mendeley.com/datasets/xwzzkxtf9s/draft?a=c81c8954-6cad-4888-9bec-6e7e09782a01.
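To make the four-tier hierarchy concrete, the following sketch shows how a composite label could be assembled from its tiers. The helper name and the exact label strings are hypothetical, for illustration only; the dataset's actual label spellings may differ.

```python
def make_uw_iom_label(obj, motion, manipulation=None, height=None):
    """Compose a four-tier UW IOM-style action label, dropping tiers that do not apply."""
    tiers = [obj, motion]
    if manipulation:
        tiers.append(manipulation)  # tier 3: reach, pick-up, place, or hold
    if height:
        tiers.append(height)        # tier 4: low, medium, or high
    return "-".join(tiers)

make_uw_iom_label("box", "bend", "pick-up", "low")  # four tiers apply
make_uw_iom_label("rod", "walk")                    # tiers 3 and 4 omitted
```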
V Experimental Details and Results
V-A Implementation Details
For each participant (video), we first compute the REBA score for all the frames. The zero threshold is set to 5°, the binary threshold to 10°, and the abduction threshold to 30° for these computations. For the UW IOM dataset, we then compute the median of the REBA scores assigned to all the frames belonging to a particular action. We then take the median over all the participants to determine the final REBA score for that action.
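The two-level median aggregation described above can be sketched in a few lines of pure Python. The data layout (a per-participant mapping from action label to the framewise REBA scores of that action) is assumed for illustration:

```python
from statistics import median

def action_score_per_participant(frame_scores_by_action):
    """Median REBA score over all frames of each action, for one participant."""
    return {action: median(scores)
            for action, scores in frame_scores_by_action.items()}

def final_action_scores(per_participant_scores):
    """Median over participants of the per-participant action medians."""
    actions = per_participant_scores[0].keys()
    return {a: median(p[a] for p in per_participant_scores) for a in actions}
```

The outer median makes the final per-action score robust to individual participants with unusual postures.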
The framewise skeletal information available for the TUM Kitchen dataset has a variable lag with respect to the video frames, i.e., the skeleton does not lie exactly on the human pose in the RGB image. Therefore, aggregating over actions and participants according to the RGB image annotations does not result in meaningful REBA scores. We, therefore, reduce the length of both the video annotations of the RGB frames and the framewise REBA scores to 100 using a constant step size of (number of frames)/100 for every video. We then compute the average REBA score for every action in a particular video using the reduced video annotations and framewise scores. The maximum score assigned to a particular action among all the videos is taken as the final REBA score for that action.
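A sketch of the length-100 reduction and the max-over-videos aggregation follows. Integer subsampling with a constant step of len(values)//100 is one plausible reading of the procedure above, stated here as an assumption:

```python
def reduce_to_100(values):
    """Subsample a framewise sequence to 100 entries with a constant step."""
    step = max(1, len(values) // 100)
    return values[::step][:100]

def final_scores_over_videos(per_video_action_scores):
    """Maximum of the per-video average REBA score for each action."""
    final = {}
    for video in per_video_action_scores:
        for action, score in video.items():
            final[action] = max(final.get(action, float("-inf")), score)
    return final
```

Subsampling both the annotations and the scores with the same step keeps the two reduced sequences aligned, which is what makes per-action averaging meaningful despite the skeleton lag.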
The pre-trained VGG16 model for spatial feature extraction is trained for 200 epochs with 300 steps per epoch on the TUM Kitchen dataset, and for 300 epochs with 300 steps per epoch on the UW IOM dataset, with a step size of 1e-5. The fine-tuned model is trained for the same numbers of epochs, with 300 steps per epoch for the TUM Kitchen dataset and 500 steps per epoch for the UW IOM dataset, and a step size of 1e-7. The total numbers of training and validation samples for the TUM Kitchen dataset are 24,052 and 5,290, respectively. For the UW IOM dataset, we train over 27,539 samples and validate over 6,052 samples. The models are learned using the open-source TensorFlow machine learning software library [38] with the Python-based Keras [39] neural network library as the backend. To implement the simplified P-CNN model, we modify the MATLAB package provided with [27].
We evaluate the performance of the four segmentation methods by splitting our datasets into five splits, in each of which the videos are assigned randomly to mutually exclusive training and test sets of fixed sizes. For both TCN methods, training is terminated after 500 epochs in each of the splits, as the validation accuracy stops improving afterward. We use a learning rate of 0.001 for both methods. D-TCN includes five stacks, each with three layers, and a set of {32, 64, 96} filters in each of the three layers. The filter duration, defined as the mean segment duration for the shortest class from the training set, is chosen to be 10 seconds. Similarly, training for Bi-LSTM is terminated after 200 epochs for each split, as the validation accuracy does not change any further. Bi-LSTM uses the Adam optimizer with a learning rate of 0.001, a softmax activation function, and a categorical cross-entropy loss function. We choose a linear kernel to train the SVM and use squared hinge loss as the loss function. All training and testing are done on a workstation running the Windows 10 operating system, equipped with a 3.7 GHz 8-core Intel Xeon W-2145 CPU, a ZOTAC GeForce GTX 1080 Ti GPU, and 64 GB RAM.
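D-TCN's building block is a dilated (causal) temporal convolution. The following pure-Python sketch of a single scalar channel is illustrative only, not the authors' implementation; real stacks apply this per filter channel with learned weights:

```python
def dilated_causal_conv1d(x, weights, dilation):
    """Causal 1-D convolution with the given dilation on a scalar sequence.

    The output at time t only sees inputs at t, t-d, t-2d, ..., so stacking
    layers with growing dilation widens the temporal receptive field
    exponentially while keeping the per-layer cost linear in sequence length.
    """
    k = len(weights)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i in range(k):
            j = t - i * dilation
            if j >= 0:
                acc += weights[i] * x[j]
        out.append(acc)
    return out
```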
V-B Results
V-B1 Ergonomic Risk Assessment Labels
For the TUM Kitchen dataset, fifteen actions are labeled as medium risk, while the remaining six are deemed high risk. The high-risk actions are associated with closing, opening, and reaching motions, although there is no perfect correspondence due to a lack of fidelity of the skeletal models on which the risk scores are based.
For the UW IOM dataset, three actions are labeled as low risk, eleven are considered medium risk, and the remaining three are identified as high risk. The high-risk actions include picking up a box from the top rack and placing objects (box and rod) on the top rack. Walking without holding any object, walking while holding a box, and picking up a rod from the mid-level rack while standing are regarded as low-risk, i.e., safe actions. Fig. 2 shows the corresponding ergonomic risk labels for the different actions depicted in the video snapshots.
V-B2 Video Segmentation Outcomes
Table I provides a quantitative performance assessment of the two variants of our segmentation method on the TUM Kitchen dataset for camera # 2 videos. Both variants perform satisfactorily with respect to all three performance measures. In fact, the ED-TCN method achieves an F1 overlap score of almost 88%, which, to the best of our knowledge, has not been previously reported for any action segmentation problem with more than twenty labels. Our TCN methods also outperform Bi-LSTM and SVM substantially. For comparison, it is interesting to note that the validation accuracy of image classification is 82.80% and 73.46% using the pre-trained and fine-tuned VGG16 models, respectively.
Fig. 3 demonstrates that, regardless of whether the spatial features are extracted using a pre-trained or fine-tuned VGG16 architecture, both TCN methods are able to segment the frames into the correct (or, more precisely, the same as the manually annotated) actions substantially better than Bi-LSTM and SVM. In fact, the global frame-by-frame classification accuracy is very high, between 86% and 91%, using the TCN methods. Furthermore, both TCN methods almost always predict the correct sequence of actions, unlike the other two widely-used classification methods.
The difference in performance between the TCN and the other two segmentation methods is even more pronounced for the UW IOM dataset, which includes a larger variety of object manipulation actions. As shown in Table II, SVM performs rather poorly, particularly with respect to the edit score and F1 overlap values, owing to over-segmentation and sequence prediction errors. Bi-LSTM performs somewhat better, with the best results obtained using the spatial features generated from a simplified form of P-CNN. Interestingly, ED-TCN performs substantially better than D-TCN regardless of the spatial feature extraction method being used. This finding is also consistent with the results for the grocery shopping, gaze tracking, and salad preparation datasets presented in [31]. This is most likely due to the ability of ED-TCN to identify fine-grained actions without causing over-segmentation by modeling long-term temporal dependencies through max pooling over large time windows. In fact, the edit scores for ED-TCN are close to 90% and the F1 overlap values exceed 93% when we use the fine-tuned VGG16 and P-CNN models. The performance measures are almost identical between the two models, with P-CNN yielding marginally better results. The validation accuracy is 75.97% and 73.86% for the pre-trained and fine-tuned VGG16 models, respectively, which is similar to the values for the TUM Kitchen dataset. Fig. 4 reinforces these observations on a representative UW IOM dataset video.
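The temporal max pooling that the encoder-decoder TCN relies on can be illustrated in a few lines. This is a toy single-channel sketch; in ED-TCN the operations run per filter channel with convolutions in between:

```python
def temporal_max_pool(x, width):
    """Encoder step: max over non-overlapping windows of the given width."""
    return [max(x[i:i + width]) for i in range(0, len(x), width)]

def temporal_upsample(x, width):
    """Decoder step: repeat each pooled value to restore the original length."""
    return [v for v in x for _ in range(width)]
```

Pooling over a wide window lets one decision summarize a long stretch of frames, which is what suppresses the spurious short segments that cause over-segmentation.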
If we only use the spatial features, the image classification validation accuracy is either comparable to (for the TUM Kitchen dataset) or lower than (for the UW IOM dataset) the video segmentation test accuracy. Noting that validation accuracy is typically greater than test accuracy for any supervised learning problem, we would expect segmentation accuracy to be much lower than the reported values in the absence of the temporal neural networks. On the other hand, segmentation performance depends considerably on the choice of the spatial feature extraction model, particularly in the case of the more challenging UW IOM dataset. This reinforces the intuition that both spatial and temporal characteristics are important in analyzing long-duration human action videos.
It is not surprising to observe that the TCN methods perform better using edit score and F1 overlap score as the measure instead of global accuracy. As also reported in [31], accuracy is susceptible to erroneous and subjective manual annotation of the video frames, particularly during the transitions from one action to the next, where identifying the exact frame when one action ends and the next one begins is often open to individual interpretation. Both edit score and F1 score are more robust to such annotation issues as compared to accuracy, and, therefore, serve as better indicators of true system performance.
To further evaluate the general applicability of our action segmentation methods, we consider two additional test scenarios: TUM Kitchen videos taken from camera # 1, and a truncated UW IOM dataset comprising only one sequence of object manipulation actions per participant. Table III reports the action segmentation outcomes using just the fine-tuned VGG16 model since it yields better results than the pretrained VGG16 model on our regular test datasets. The trends are more or less the same as in our regular datasets. The actual measures are almost identical for the complete and truncated UW IOM dataset. Thus, our methods seem to be robust to sample size, provided all the actions are covered adequately with a sufficient number of instances in the training set, and the actions occur in the same sequence in all the videos. The actual measures for our TCN methods are only slightly lower for the different TUM Kitchen dataset. Thus, performances appear to be independent of the manner (camera orientation) in which the videos are recorded. The validation accuracy is equal to 76.81% and 75.28% for the different TUM Kitchen and the truncated UW IOM dataset, respectively, which are nearly identical to the corresponding values for the regular TUM Kitchen and complete UW IOM datasets.
V-B3 System Computation Times
In addition to characterizing the goodness of action segmentation, we are interested in knowing how long it takes to learn the spatial feature extraction models, to train the segmentation methods, and to compute the framewise action labels during testing.
The learning times for the pre-trained and fine-tuned VGG16 models are 20,844.11 seconds and 30,564.39 seconds, respectively, for the complete UW IOM dataset. As expected, the learning time for the fine-tuned VGG16 model is somewhat lower for the truncated UW IOM dataset, at 25,414.24 seconds. For the TUM Kitchen dataset, the corresponding value is 31,753.18 seconds.
Using the fine-tuned VGG16 model, for the complete UW IOM dataset, the overall training times are 252.73 $\pm$ 0.85, 237.76 $\pm$ 0.72, 2,172.23 $\pm$ 11.22, and 60.54 $\pm$ 1.54 seconds across the five data splits for the D-TCN, ED-TCN, Bi-LSTM, and SVM methods, respectively. The corresponding testing times are 0.10, 0.10, 1.09, and 0.09 seconds (the standard errors are negligible), respectively, for an average of 8,261 frames, which implies that real-time action prediction is highly feasible. These values are almost identical using the pre-trained VGG16 model.
In case of the truncated UW IOM dataset, using the fine-tuned VGG16 model, the overall training times are 113.3 $\pm$ 1.52, 90.37 $\pm$ 0.72, 754.79 $\pm$ 14.16, and 13.76 $\pm$ 0.59 seconds across the five data splits for the D-TCN, ED-TCN, Bi-LSTM, and SVM methods, respectively. The corresponding testing times are 0.04, 0.03, 0.40, and 0.04 seconds (the standard errors are again negligible), respectively, for an average number of 2,916 frames.
For the TUM Kitchen dataset, the overall training times are 91.19 $\pm$ 1.04, 74.53 $\pm$ 0.65, 619.21 $\pm$ 1.82, and 15.68 $\pm$ 0.67 seconds across the five data splits for the D-TCN, ED-TCN, Bi-LSTM, and SVM methods, respectively. The corresponding testing times are 0.03, 0.02, 0.33, and 0.03 seconds (negligible standard errors), respectively, for an average number of 6,311 frames.
We further note that the TCN methods also have acceptable training times of the order of a few minutes for reasonably large datasets. This characteristic enables our system to adapt quickly to changing object manipulation tasks. On the other hand, the training times are considerably larger for Bi-LSTM, similar to the results reported in [31].
VI Discussion
For the more challenging UW IOM dataset, we observe that our TCN methods demonstrate better segmentation performance when spatial features are extracted using the fine-tuned VGG16 model instead of the pre-trained VGG16 model. Consequently, we decided to use P-CNN features to examine whether additional spatial features would further facilitate learning the temporal aspects of the videos for the action segmentation methods. As introduced in [27], P-CNN features are descriptors for video clips that are restricted to only one action per clip. All the frame features of a video clip are aggregated over time using different schemes that result in a single descriptor comprising information about the action in that clip. However, our goal is to process full-length videos with multiple actions. A single time-aggregated descriptor for an entire sequence of multiple actions is of little value to us, as time aggregation results in the loss of important information about the sequence of actions as well as the transitions between the different actions. Hence, we skip the time aggregation step to obtain a video descriptor of the same length as the number of frames in the full-length video.
Also, P-CNN features are originally generated by stacking normalized time-aggregated descriptors for ten different patches, i.e., five patches of the RGB image (namely, full body, upper body, left hand, right hand, and full image) and corresponding five patches of the optical flow image. These patches are cropped from the RGB and optical flow frames, respectively, using the relevant body joint positions. The missing parts in the patches are filled with gray pixels, before resizing them as necessary for the CNN input layer. This filling step is done using a scale factor available along with the joint positions for the dataset used in [27]. Such a scale factor is, however, not available for our TUM Kitchen and UW IOM datasets. On experimenting with various common values for this scale factor, we observe that it needs to be different for every video as each participant has a somewhat different body structure. Therefore, we only use the full image patches in our simplified form of P-CNN to avoid this issue.
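The simplification described in the two paragraphs above (keeping only the full-image RGB and optical-flow patches, and skipping time aggregation) amounts to a per-frame concatenation of two feature streams. A hedged sketch, with hypothetical per-frame feature lists standing in for the CNN outputs:

```python
def per_frame_descriptors(rgb_feats, flow_feats):
    """Concatenate full-image RGB and optical-flow features frame by frame.

    No time aggregation is applied, so the descriptor keeps one entry per
    frame and preserves the order of actions and their transitions.
    """
    assert len(rgb_feats) == len(flow_feats), "one feature vector per frame"
    return [r + f for r, f in zip(rgb_feats, flow_feats)]
```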
VII Conclusions
In this letter, we present an end-to-end deep learning system to accurately predict the ergonomic risks during indoor object manipulation using camera videos. Our system comprises effective spatial feature extraction and sequential feeding of the extracted features to temporal neural networks for real-time segmentation into meaningful actions.
The segmentation methods work well with just standard (RGB) camera videos, irrespective of how the spatial features are extracted, provided depth cameras are used to generate reliable ergonomic risk scores for all the possible actions corresponding to a known object manipulation environment. This makes our system suitable for widespread deployment in factories and warehouses by eliminating the need for expensive RGB-D cameras that would have to be mounted on mobile co-bots or installed at a large number of fixed locations.
In the future, we intend to further enhance our system to segment the videos satisfactorily, when either the actions are not always performed in the same sequence, or, the same set of actions are not carried out by all the humans. We plan to use the spatiotemporal correlations among the manipulated objects and their affordances, within, potentially, a generative deep learning model, for this purpose. Subsequently, we aim to use our system as the initial step for automated action recognition, where the goal would be to infer the future actions of a human given a set of executed actions.
References
[1]
A. M. Zanchettin, N. M. Ceriani, P. Rocco, H. Ding, and B. Matthias, “Safety
in human-robot collaborative manufacturing environments: Metrics and
control,” IEEE Trans. Autom. Sci. Eng., vol. 13, no. 2, pp.
26 754–26 772, Apr. 2016.
[2]
G. Michalos, S. Makris, P. Tsarouchi, T. Guasch, D. Kontovrakis, and
G. Chryssolouris, “Design considerations for safe human-robot collaborative
workplaces,” Procedia CIRP, vol. 37, pp. 248–253, Dec. 2015.
[3]
S. Robla-Gómez, V. M. Becerra, J. R. Llata, E. Gonzalez-Sarabia,
C. Torre-Ferrero, and J. Perez-Oria, “Working together: A review on safe
human-robot collaboration in industrial environments,” IEEE Access,
vol. 5, pp. 26 754–26 773, Nov. 2017.
[4]
I. Maurtua, A. Ibarguren, J. Kildal, L. Susperregi, and B. Sierra,
“Human-robot collaboration in industrial applications: Safety, interaction
and trust,” Int. J. Adv. Robot. Syst., vol. 14, no. 4, pp. 1–10,
Jul. 2017.
[5]
M. Helander, A guide to human factors and ergonomics. CRC Press, 2005.
[6]
“Back injuries prominent in work-related musculoskeletal disorder cases in
2016,” Aug 2018. [Online]. Available:
https://www.bls.gov/opub/ted/2018/back-injuries-prominent-in-work-related-musculoskeletal-disorder-cases-in-2016.htm
[7]
“ICRA 2018 workshop.” [Online]. Available:
https://hri.iit.it/news/organizations/icra-2018-workshop
[8]
P. Spielholz, B. Silverstein, M. Morgan, H. Checkoway, and J. Kaufman,
“Comparison of self-report, video observation and direct measurement methods
for upper extremity musculoskeletal disorder physical risk factors,”
Ergonomics, vol. 44, no. 6, pp. 588–613, Jun. 2001.
[9]
C. Li and S. Lee, “Computer vision techniques for worker motion analysis to
reduce musculoskeletal disorders in construction,” in Comput. Civil
Eng., 2011, pp. 380–387.
[10]
A. Golabchi, S. Han, J. Seo, S. Han, S. Lee, and M. Al-Hussein, “An automated
biomechanical simulation approach to ergonomic job analysis for workplace
design,” J. Construction Eng. Manag., vol. 141, no. 8, p. 04015020,
Jan. 2015.
[11]
X. Li, S. Han, M. Gül, and M. Al-Hussein, “Automated post-3D
visualization ergonomic analysis system for rapid workplace design in modular
construction,” Autom. Construction, vol. 98, pp. 160–174, Jan. 2019.
[12]
Q. Fang, H. Li, X. Luo, L. Ding, H. Luo, T. Rose, and W. An, “Detecting
non-hardhat-use by a deep learning method from far-field surveillance
videos,” Autom. Construction, vol. 85, pp. 1–9, Jan. 2018.
[13]
L. Ding, W. Fang, H. Luo, P. E. Love, B. Zhong, and X. Ouyang, “A deep hybrid
learning model to detect unsafe behavior: integrating convolution neural
networks and long short-term memory,” Autom. Construction, vol. 86,
pp. 118–124, Feb. 2018.
[14]
W. Fang, L. Ding, H. Luo, and P. E. Love, “Falls from heights: A computer
vision-based approach for safety harness detection,” Autom.
Construction, vol. 91, pp. 53–61, Feb. 2018.
[15]
A. Abobakr, D. Nahavandi, J. Iskander, M. Hossny, S. Nahavandi, and M. Smets,
“A kinect-based workplace postural analysis system using deep residual
networks,” in IEEE Int. Syst. Eng. Symp., 2017, pp. 1–6.
[16]
R. Mehrizi, X. Peng, Z. Tang, X. Xu, D. Metaxas, and K. Li, “Toward
marker-free 3D pose estimation in lifting: A deep multi-view solution,”
in IEEE Int. Conf. Autom. Face Gesture Recognit., 2018, pp. 485–491.
[17]
B. Singh, T. K. Marks, M. Jones, O. Tuzel, and M. Shao, “A multi-stream
bi-directional recurrent neural network for fine-grained action detection,”
in IEEE Conf. Comput. Vis. Pattern Recognit, 2016, pp. 1961–1970.
[18]
A. Fathi and J. M. Rehg, “Modeling actions through state changes,” in
IEEE Conf. Comput. Vis. Pattern Recognit, 2013, pp. 2579–2586.
[19]
H. Kuehne, J. Gall, and T. Serre, “An end-to-end generative framework for
video segmentation and recognition,” in IEEE Winter Conf. Appl.
Comput. Vis., 2016, pp. 1–8.
[20]
D.-A. Huang, L. Fei-Fei, and J. C. Niebles, “Connectionist temporal modeling
for weakly supervised action labeling,” in Eur. Conf. Comput. Vis.,
2016, pp. 137–153.
[21]
S. Hignett and L. McAtamney, “Rapid entire body assessment,” in
Handbook of Human Factors and Ergonomics Methods. CRC Press, 2004, pp. 97–108.
[22]
B. Burger and P. Toiviainen, “MoCap toolbox - A Matlab toolbox for
computational analysis of movement data,” in Sound Music Comput.
Conf., 2013, pp. 172–178.
[23]
J. R. Terven and D. M. Córdova-Esparza, “Kin2. A Kinect 2 toolbox for
MATLAB,” Sci. Comput. Prog., vol. 130, pp. 97–106, 2016.
[24]
K. Simonyan and A. Zisserman, “Very deep convolutional networks for
large-scale image recognition,” in Int. Conf. Learning
Representations, 2014.
[25]
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A
large-scale hierarchical image database,” in IEEE Conf. Comput. Vis.
Pattern Recognit., 2009, pp. 248–255.
[26]
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in
Int. Conf. Learning Representations, 2014.
[27]
G. Chéron, I. Laptev, and C. Schmid, “P-CNN: Pose-based CNN features
for action recognition,” in IEEE Int. Conf. Comput. Vis., 2015, pp.
3218–3226.
[28]
T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow
estimation based on a theory for warping,” in Eur. Conf. Comput.
Vis., 2004, pp. 25–36.
[29]
G. Gkioxari and J. Malik, “Finding action tubes,” in IEEE Conf. Comput.
Vis. Pattern Recognit., 2015, pp. 759–768.
[30]
K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil
in the details: Delving deep into convolutional nets,” in British
Mach. Vis. Conf., 2014.
[31]
C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager, “Temporal
convolutional networks for action segmentation and detection,” in IEEE
Conf. Comput. Vis. Pattern Recognit., 2017, pp. 156–165.
[32]
“CS University of Toronto CSC321 Winter 2014 - lecture six slides.”
[Online]. Available: http://www.cs.toronto.edu/~tijmen/csc321
[33]
A. Van Den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves,
N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “Wavenet: A generative
model for raw audio,” in Speech Synthesis Workshop, 2016.
[34]
A. Graves, S. Fernández, and J. Schmidhuber, “Bidirectional LSTM
networks for improved phoneme classification and recognition,” in Int.
Conf. Artif. Neural Networks, 2005, pp. 799–804.
[35]
C. Lea, R. Vidal, and G. D. Hager, “Learning convolutional action primitives
for fine-grained action recognition,” in IEEE Int. Conf. Robot.
Autom., 2016, pp. 1642–1649.
[36]
M. Tenorth, J. Bandouch, and M. Beetz, “The TUM kitchen data set of everyday
manipulation activities for motion tracking and action recognition,” in
IEEE Int. Conf. Comput. Vis. Workshops, 2009, pp. 1089–1096.
[37]
J. Bandouch and M. Beetz, “Tracking humans interacting with the environment
using efficient hierarchical sampling and layered observation models,” in
IEEE Int. Conf. Comput. Vis. Workshops, 2009, pp. 2040–2047.
[38]
Tensorflow, “tensorflow/tensorflow.” [Online]. Available:
https://github.com/tensorflow/tensorflow
[39]
Keras-Team, “keras-team/keras,” Feb. 2019. [Online]. Available:
https://github.com/keras-team/keras |
Imane Guellil: Aston University, Birmingham, UK; Folding Space, Birmingham, UK. Email: i.guellil@aston.ac.uk
Ahsan Adeel: School of Mathematics and Computer Science, University of Wolverhampton, UK
Faical Azouaou, Mohamed Boubred, Yousra Houichi, Akram Abdelhaq Moumna: Laboratoire des Méthodes de Conception des Systèmes, Ecole nationale Supérieure d’Informatique, BP 68M, 16309, Oued-Smar, Algiers, Algeria. http://www.esi.dz
Sexism detection: The first corpus in Algerian dialect with a code-switching in Arabic/French and English
Imane Guellil
Ahsan Adeel
Faical Azouaou
Mohamed Boubred
Yousra Houichi
Akram Abdelhaq Moumna
Abstract
In this paper, an approach for hate speech detection against women in the Arabic community on social media (e.g., YouTube) is proposed. In the literature, similar works have been presented for other languages such as English. However, to the best of our knowledge, not much work has been conducted for the Arabic language. A new hate speech corpus (Arabic_fr_en) is developed using three different annotators. For corpus validation, three different machine learning algorithms are used, including a deep Convolutional Neural Network (CNN), a long short-term memory (LSTM) network, and a Bi-directional LSTM (Bi-LSTM) network. Simulation results demonstrate the best performance of the CNN model, which achieved an F1-score of up to 86% for the unbalanced corpus, as compared to LSTM and Bi-LSTM.
Keywords: Hate speech detection; Arabic language; Sexism detection; Deep learning
1 Introduction
In the literature, several definitions of hate speech are adopted. However, the definition of Nockleby nockleby2000hate has largely been used schmidt2017survey ; zhang2018detecting ; zhang2018hate . According to Nockleby, hate speech is commonly defined as "any communication that disparages or defames a person or a group based on some characteristic such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristic". To illustrate how this hate can be presented in textual exchanges, Schmidt et al. schmidt2017survey presented a few examples, such as 'Go fucking kill yourself and die already a useless, ugly pile of shit scumbag' and 'Hope one of those bitches falls over and breaks her leg'.
Based on the recent survey of Schmidt et al. schmidt2017survey , in this paper the term hate speech is used, as opposed to abusive speech andrusyak2018detection ; gorrell2018twits , offensive language risch2018fine ; pitsilis2018detecting ; puiu2019semeval or cyberbullying dadvar2018cyberbullying ; van2018automatic , which have also been widely used in the literature. Chetty and Alathur chetty2018hate categorised hate speech into four groups: gendered hate speech (including any form of misogyny, sexism, etc.), religious hate speech (including any kind of religious discrimination, such as between Islamic sects, anti-Christian, anti-Hinduism, etc.), racist hate speech (including any sort of racial offence, tribalism, xenophobia, etc.) and disability hate speech (including any sort of offence to an individual suffering from a health condition) al2019detection .
With the online proliferation of hate speech, a significant number of research studies have been presented in the last few years. The majority of these studies detect general hate speech burnap2014hate ; davidson2017automated ; wiegand2018inducing or focus on detecting sexism and racism on social media waseem2016hateful ; pitsilis2018detecting ; kshirsagar2018predictive . In contrast, only a few studies saha2018hateminers focus on the detection of hate speech against women (and only by distinguishing between hateful and non-hateful comments). Moreover, almost all studies are dedicated to English, even though other languages such as Arabic, one of the four most used languages on the Internet guellil2018approche ; guellil2018arabizi , remain largely unexplored. To bridge this gap, in this paper we propose a novel approach to detect hate speech against women in the Arabic community.
2 Background
2.1 Hate speech
The research literature adopts different definitions of hate speech. However, the definition of nockleby2000hate has recently been largely used by many authors, such as de2018automatic ; schmidt2017survey ; zhang2018detecting ; madisetty2018aggression and zhang2018hate . According to Nockleby, "Hate speech is commonly defined as any communication that disparages or defames a person or a group based on some characteristic such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics" nockleby2000hate . To illustrate how this hate can be presented in textual exchanges, schmidt2017survey provided some examples:
•
Go fucking kill yourself and die already a useless, ugly pile of shit scumbag.
•
The Jew Faggot Behind The Financial Collapse.
•
Hope one of those bitches falls over and breaks her leg.
Based on the recent survey of schmidt2017survey , we decided to use the term hate speech (which is the most commonly used) rather than other terms present in the literature for the same phenomenon, such as abusive speech andrusyak2018detection ; gorrell2018twits , offensive language risch2018fine ; pitsilis2018detecting ; puiu2019semeval or cyberbullying dadvar2018cyberbullying ; van2018automatic . According to Chetty and Alathur chetty2018hate , hate speech is categorised into four categories: gendered hate speech, religious hate speech, racist hate speech and disability hate speech. Gendered hate speech includes any form of misogyny, sexism, etc. Religious hate speech includes any kind of religious discrimination, such as between Islamic sects, anti-Christian, anti-Hinduism, etc. Racist hate speech includes any sort of racial offence, tribalism, xenophobia, etc. Disability hate speech includes any sort of offence to an individual suffering from a health condition which limits some of their life activities al2019detection .
2.2 Arabic in social media
Arabic is one of the six official languages of the United Nations (http://www.un.org/en/sections/about-un/official-languages/) eisele2010multiun ; ziemski2016united ; guellil2020arabic . It is the official language of 22 countries and is spoken by more than 400 million speakers. Arabic is also recognised as the 4th most used language on the Internet al2016prototype ; boudad2017sentiment . The works in the literature habash2010introduction ; farghaly2009arabic ; harrat2017maghrebi ; guellil2019arabic ; guellil2020arautosenti classify Arabic into three main varieties: 1) Classical Arabic (CA), the form of Arabic used in literary texts. The Quran (a scripture which, according to Muslims, contains the verbatim words of Allah: over 77,000 words revealed through Archangel Gabriel to Prophet Muhammad over 23 years beginning in 610 CE, divided into 114 chapters of varying sizes, each divided into verses, for a total of 6,243 verses sharaf2012qursim ) is considered to be the highest form of CA text sharaf2012qurana . 2) Modern Standard Arabic (MSA), used for writing as well as formal conversations. 3) Dialectal Arabic, used in daily-life communication, informal exchanges, etc. boudad2017sentiment . However, Arabic speakers on social media, discussion forums and Short Messaging Service (SMS) often use a non-standard romanisation called 'Arabizi' darwish2014arabizi ; bies2014transliteration ; guellil2020role . For example, the Arabic sentence \<rAny fr.hAnT>, which means I am happy, is written in Arabizi as 'rani fer7ana'. Hence, Arabizi is Arabic text written using Latin characters, numerals and some punctuation darwish2014arabizi ; guellil2018arabizi . Moreover, most Arabic speakers are bilingual: the Mashreq side (Egypt, Gulf, etc.) often uses English and the Maghreb side (Tunisia, Algeria, etc.) often uses French as a second language. This linguistic richness amplifies a well-known social media phenomenon, code-switching. Therefore, Arabic pages also contain messages such as "\<rAny> super \<fr.hAnT>" or "\<rAny> very \<fr.hAnT>", meaning I am very happy. In addition, messages written purely in French or English are also possible.
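As an aside, the numerals in Arabizi (like the '7' in 'rani fer7ana') conventionally stand for Arabic letters with no Latin equivalent. A minimal, illustrative mapping (not the paper's method, and not exhaustive):

```python
# Common Arabizi numeral-to-letter correspondences (illustrative subset).
ARABIZI_DIGITS = {
    "2": "\u0621",  # hamza
    "3": "\u0639",  # ayn
    "5": "\u062e",  # kha
    "7": "\u062d",  # hha
    "9": "\u0642",  # qaf
}

def expand_arabizi_digits(text: str) -> str:
    """Replace Arabizi numerals with the Arabic letters they encode."""
    return "".join(ARABIZI_DIGITS.get(ch, ch) for ch in text)

print(expand_arabizi_digits("rani fer7ana"))
```

Real transliteration (as in guellil2018arabizi) is context-sensitive and far richer than this single-character substitution.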
Many works have been proposed to deal with Arabic and Arabizi darwish2014arabizi ; guellil2017arabizi . Extracting opinions and analysing sentiment and emotion is an emerging research area for Arabic and its dialects guellil2017approche ; guellil2018sentialg ; imane2019set . However, few studies are dedicated to analysing extreme negative sentiment such as hate speech. Arabic hate speech detection is a relatively new research area in which we were able to collect only a few works. These works are described in more detail in the following section.
3 Related work
3.1 Hate speech detection
3.1.1 General hate speech detection
Burnap and Williams burnap2014hate investigated the spread of hate speech after the murder of Lee Rigby in the UK. The authors collected 450,000 tweets and randomly picked 2,000 for manual annotation by CrowdFlower (CF) workers (https://www.figure-eight.com/). Each tweet was annotated by 4 annotators; the final dataset contains 1,901 annotated tweets. The authors used three classification algorithms, and the best classification result, an F1-score of up to 0.77, was achieved with BLR. Davidson et al. davidson2017automated distinguished between hateful and offensive speech using an LR classifier. The authors automatically extracted a set of tweets, of which 24,802 randomly selected ones were manually annotated by CF workers. Their model achieved an F1-score of 0.90 but generalised poorly, with up to 40% misclassification. Wiegand et al. wiegand2018inducing also focused on the detection of abusive language. The authors used several features and lexical resources to build an abusive-word lexicon; the constructed lexicon was then used in an SVM classifier. This work used publicly available datasets razavi2010offensive ; warner2012detecting ; waseem2016hateful .
It is to be noted that all the aforementioned studies were conducted on English. A few recent studies address other languages, such as Italian del2017hate , German koffer2018discussing , Indonesian alfina2017hate and Russian andrusyak2018detection . However, only a limited number of studies have focused on hate speech detection in Arabic. Abozinadah et al. abozinadah2015detection evaluated different machine learning algorithms to detect abusive Arabic tweets. The authors manually selected and annotated 500 accounts associated with the abusive extracted tweets and used three classification algorithms. The best results were obtained with the Naïve Bayes (NB) classifier, with an F1-score of up to 0.90. Mubarak et al. mubarak2017abusive focused on the detection and classification of obscene and offensive Arabic tweets using the Log Odds Ratio (LOR). For evaluation, the authors manually annotated 100 tweets and obtained an F1-score of up to 0.60. Haidar et al. haidar2017multilingual proposed a system to detect and stop cyberbullying on social media. The authors manually annotated a dataset of 35,273 tweets from the Middle East region (notably Lebanon, Syria, the Gulf area and Egypt). For classification, the authors used SVM and NB, obtaining the best results with SVM, an F1-score of up to 0.93. More recently, Alakrot et al. alakrot2018dataset described the step-by-step construction of an offensive dataset of Arabic YouTube comments. The authors extracted 167,549 comments from 150 YouTube videos; 16,000 randomly picked comments were annotated by 3 annotators. Finally, Albadi et al. albadi2018they addressed the detection of religious Arabic hate speech. The authors manually annotated 6,136 tweets (5,569 used for training and 567 for testing); for feature extraction, AraVec soliman2017aravec was used. Guellil et al. guellil2020detecting also proposed a corpus for detecting hate speech against politicians. This Arabic/Algerian-dialect corpus includes 5,000 YouTube comments manually annotated by three annotators. For feature extraction, the authors relied on both word2vec and fastText.
For classification, both shallow and deep learning algorithms were used.
3.1.2 Sexism detection (Hate speech against women)
Waseem et al. waseem2016hateful used an LR classifier to detect sexism and racism on social media. The authors manually annotated a dataset of 16,914 tweets: 3,383 with sexist content, 1,972 with racist content, and 11,559 with neither. To build the dataset, the authors used the Twitter API to extract tweets containing keywords related to women. They achieved an F1-score of up to 0.73. The work of Waseem et al. waseem2016hateful is considered a benchmark by many researchers al2019detection ; pitsilis2018detecting ; kshirsagar2018predictive . The idea of Pitsilis et al. pitsilis2018detecting is to employ a neural network solution composed of multiple Long Short-Term Memory (LSTM) based classifiers to detect sexism and racism on social media; across many experiments, their best F1-score was 0.93. Kshirsagar et al. kshirsagar2018predictive also focused on racism and sexism detection with a neural network approach; in this work, the authors used word embeddings for feature extraction combined with a Multi-Layer Perceptron (MLP) classifier, achieving an F1-score of up to 0.71. Saha et al. saha2018hateminers presented a model to detect hate speech against women. The authors extracted features with algorithms such as bag-of-words (BOW), TF-IDF and sentence embeddings, combined with classifiers such as LR, XGBoost and CatBoost; the best F1-score, 0.70, was achieved with the LR classifier. Zhang et al. zhang2018detecting proposed a hybrid model combining CNN and LSTM to detect hate speech, applied to 7 datasets of which 5 are publicly available waseem2016hateful ; waseem2016you ; gamback2017using ; park2017one ; davidson2017automated .
3.2 Motivation and contribution
Hate speech detection on social media is a relatively new but important topic. There are very few publicly available corpora, mostly dedicated to English; even for English, fewer than 10 resources are publicly available. More recently, researchers have presented work in other languages, including German, Italian and Arabic. However, most of this work focuses on detecting general hate speech rather than hate against a specific community. For Arabic, only 5 research studies are presented in the literature, mainly focused on Twitter. This paper focuses on YouTube, the second biggest social media platform after Facebook, with 1.8 billion users kallas2017top ; alakrot2018dataset . The major contributions of this study are: the development of a novel corpus of hate speech against women containing MSA and Algerian dialect, written in Arabic, Arabizi, French and English, comprising 5,000 manually annotated comments; and, for corpus validation, hate speech classification with three deep learning algorithms (CNN, LSTM, and Bi-LSTM) using feature-extraction algorithms such as word2vec and fastText.
4 Methodology
Figure 1 illustrates the general steps for constructing the annotated corpus for classifying hate speech regarding women. This figure includes some examples as well.
4.1 Dataset creation
4.1.1 Data collection
YouTube comments on videos about women are used. Feminine adjectives such as \<jmylT> meaning beautiful, \<jAy.hT> meaning stupid or \<klbT> meaning a dog are targeted. A YouTube video is identified by a unique identifier (video_id): for example, the video with id "TJ2WfhfbvZA" is a radio programme about unfaithful women, and the video with id "_VimCUVXwaQ" gives women advice on becoming beautiful. Three annotators manually reviewed the videos returned for these keywords and selected 335 video_ids. We used the YouTube Data API (https://developers.google.com/youtube/v3/) and a Python script to automatically extract the comments of each video_id together with their replies. In the end, we collected 373,984 comments over the period between February and March 2019; we call this corpus Corpus_Youtube_women.
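The collection step above (YouTube Data API, pagination over comment pages) can be sketched as follows. `fetch_page` is a hypothetical stand-in for the real API call (with the real API, it would wrap a `commentThreads` request and read the response's `nextPageToken`); the loop itself is the part being illustrated:

```python
from typing import Callable, Iterator, List, Optional, Tuple

def collect_comments(
    video_id: str,
    fetch_page: Callable[[str, Optional[str]], Tuple[List[str], Optional[str]]],
) -> Iterator[str]:
    """Yield every comment for `video_id`, following pagination tokens."""
    token: Optional[str] = None
    while True:
        # Each call returns one page of comments plus a continuation token.
        comments, token = fetch_page(video_id, token)
        yield from comments
        if token is None:  # no next-page token: last page reached
            break
```

Running this over the 335 selected video_ids and flattening the results would yield the raw Corpus_Youtube_women.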
4.1.2 Data annotation
For the annotation, we randomly selected 5,000 comments. The annotation was done by three annotators, native speakers of Arabic and its dialects. The annotators worked separately and had one week to manually annotate the selected comments with two labels, 1 (hate) and 0 (non-hate). The following points illustrate the main aspects of the annotation guideline:
•
The annotators should classify each comment containing injurious, hateful, abusive, vulgar or offensive language against women as a comment containing hate.
•
The annotators should be as objective as they can. Even if they approve of a comment, they should label it as containing hate speech if it is offensive against women.
•
To obtain a system dealing with all types of comments, the annotators were asked to annotate all 5,000 comments, even those about football or other topics unrelated to women. Such comments were to be annotated with 0 plus the additional label w (meaning without interest).
•
When facing a situation where they really doubted the right label, the annotators were asked to use the label p (for problem) rather than a label they were not convinced of.
At the beginning of the annotation process, we received many questions, such as: 1) Does the hate have to be addressed to women, and how should a message containing hate against men be classified? 2) Must hateful comments explicitly contain terms indicating hate, or should the annotators also handle irony? For the first question, we specified that the comments have to be addressed to women; any other comment has to be labelled 0. For the second question, we asked the annotators to also consider irony and sarcasm.
After completion of the annotation process, we concentrated on the comments receiving the same label from all annotators, and constructed two datasets. The first (Corpus_1) contains the 3,798 comments annotated with the same label (0 or 1) by all three annotators. Among these, 792 comments (20.85%) are annotated as hateful and 3,006 as non-hateful; hence, this corpus is very unbalanced. The second (Corpus_2) is a balanced version of Corpus_1: we randomly picked 1,006 comments labelled as non-hateful together with all the comments annotated as hateful, yielding a balanced corpus of 1,798 comments.
To better illustrate the annotated data, Figure 2 shows some comments (extracted from YouTube) in the column message, with the following three columns giving the label assigned by each annotator. When all the annotators agree, the message is kept in the corpus; otherwise it is removed. Removed messages are not used for training (we proceed in this manner to increase precision; since we also plan to extend this corpus automatically in the future, precision is crucial). If a message is annotated as hateful by all the annotators, the column hate receives 1; otherwise it receives 0.
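The corpus-construction rule described above (keep only unanimous comments, then down-sample the non-hateful class) can be sketched as follows; the function names and data layout are illustrative, not the authors' code:

```python
import random

def filter_full_agreement(rows):
    """rows: (comment, a1, a2, a3) tuples; keep unanimous ones with their label."""
    return [(comment, a1) for comment, a1, a2, a3 in rows if a1 == a2 == a3]

def balance(kept, n_non_hateful, seed=0):
    """Keep all hateful comments plus a random sample of non-hateful ones."""
    hateful = [r for r in kept if r[1] == 1]
    non_hateful = [r for r in kept if r[1] == 0]
    rng = random.Random(seed)
    return hateful + rng.sample(non_hateful, n_non_hateful)
```

With the paper's figures, `filter_full_agreement` yields Corpus_1 (792 + 3,006 comments) and `balance(kept, 1006)` yields the 1,798-comment Corpus_2.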
4.2 Hate speech detection
4.2.1 Features extraction
We use two different algorithms for feature extraction: word2vec mikolov2013efficient and fastText joulin2016bag . We use word2vec with the classic methods and fastText with the deep learning methods. Word2vec offers two architectures for computing continuous vector representations, Skip-Gram (SG) and Continuous Bag-Of-Words (CBOW). The former predicts the context words from a given source word, while the latter does the inverse, predicting a word given its context window mikolov2013efficient . Like word2vec, fastText models are based on either the SG or the CBOW architecture. The key difference between fastText and word2vec is the use of n-grams: word2vec learns vectors only for complete words found in the training corpus, whereas fastText also learns vectors for the n-grams found within each word grave2018learning . In this work we rely on both representations (SG and CBOW) of word2vec and fastText.
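The word2vec/fastText difference described above can be made concrete: fastText represents each word by its character n-grams (delimited with boundary markers `<` and `>`) plus the word itself, so it can build vectors even for out-of-vocabulary words. A minimal sketch of that n-gram extraction (illustrative, not the library's internal code):

```python
def fasttext_ngrams(word: str, n_min: int = 3, n_max: int = 6) -> set:
    """Character n-grams of `word` with boundary markers, plus the full word."""
    marked = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.add(marked[i : i + n])
    grams.add(marked)  # the complete word is kept as its own token
    return grams
```

A fastText word vector is then (roughly) the sum of the vectors of these units, which is why morphologically rich, noisily spelled text like dialectal Arabic and Arabizi benefits from it.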
For the word2vec model, we used the Gensim toolkit (https://radimrehurek.com/gensim/models/word2vec.html). For fastText, we used the fastText library released by Facebook on GitHub (https://github.com/facebookresearch/fastText). For both word2vec and fastText, we used a context window of 10 words to produce CBOW and SG representations of length 300. We trained the word2vec/fastText models on the corpus Corpus_Youtube_women.
4.2.2 Classification
To compare results, we use both classic and deep learning classification methods. As classic methods, we use five classification algorithms: GaussianNB (GNB), LogisticRegression (LR), RandomForest (RF), SGDClassifier (SGD, with loss='log' and penalty='l1') and LinearSVC (LSVC, with C=1e1). Their implementation was inspired by the classification setup of Altowayan et al. altowayan2016word . For deep learning classification, we use three models: CNN, LSTM and Bi-LSTM. Each model has six layers. The first layer is a randomly initialised word embedding layer that turns the words of a sentence into a feature map; the weights of its embedding matrix are calculated with fastText (in both SG and CBOW variants). This layer is followed by a CNN/LSTM/BiLSTM layer that scans the feature map (depending on the model). These layers use 300 filters with a width of 7, meaning that each filter is trained to detect a certain pattern in a 7-gram window of words. Global max-pooling is applied to the output of the CNN/LSTM/BiLSTM layer to take the maximum score of each pattern; the pooling layer reduces the dimensionality of the representations by down-sampling the output and keeping the maximum value. To reduce over-fitting by preventing complex co-adaptations on the training data, a Dropout layer with probability 0.5 is added. The resulting scores are fed to a single feed-forward (fully connected) layer with ReLU activation, and the output of that layer finally goes through a sigmoid layer that predicts the output classes. For all models we used the Adam optimiser with 100 epochs and an early-stopping parameter that halts training in the absence of improvement.
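The classic pipeline described above (a comment represented by the average of its word vectors, fed to a linear classifier such as LinearSVC with C=1e1) can be sketched with toy embeddings; the vectors and labels here are invented for illustration and stand in for the trained word2vec model and the annotated corpus:

```python
import numpy as np
from sklearn.svm import LinearSVC

def average_vector(tokens, word_vectors, dim=4):
    """Represent a comment by the mean of its known word vectors."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy "embeddings": hateful-ish tokens point one way, neutral ones another.
wv = {"bad": np.array([1.0, 0, 0, 0]), "good": np.array([0, 1.0, 0, 0])}
comments = [["bad"], ["bad", "bad"], ["good"], ["good", "good"]]
X = np.array([average_vector(c, wv) for c in comments])
y = np.array([1, 1, 0, 0])  # 1 = hateful, 0 = non-hateful

clf = LinearSVC(C=1e1).fit(X, y)  # the paper's LSVC setting
print(clf.predict(X))
```

The deep models replace this fixed averaging with trainable embedding, convolutional/recurrent, pooling and dense layers as described above.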
5 Experimentation and Results
5.1 Experimental results
To show the impact of balanced versus unbalanced corpora, we present the results for hateful/non-hateful detection separately. Table 1 presents the results obtained on Corpus_1: the F1-score on the unbalanced corpus reaches 86%, obtained with the SG model associated with the CNN algorithm. With the word2vec model, the RF and LSVC algorithms give the best results for both SG and CBOW. However, the deep learning classifiers (CNN, Bi-LSTM) associated with the SG model of fastText outperform the other classifiers. In addition, SG models outperform CBOW models for all the classifiers used.
Table 2 presents the results obtained on Corpus_2: the F1-score on the balanced corpus reaches 85%. In the word2vec experiments, LSVC seems the best choice, giving the best F1-scores with both SG (up to 0.80) and CBOW (up to 0.77); LR slightly outperforms LSVC with the SG model (F1-score up to 0.81), but its results decrease with the CBOW model (down to 0.72). As in Table 1, the deep learning classifiers associated with the SG model of fastText outperform the other classifiers, and the SG model again outperforms the CBOW model for all classifiers on this corpus as well.
From Tables 1 and 2, it can be seen that the best F1-score obtained on Corpus_1 (up to 86%) is slightly better than that obtained on Corpus_2. However, only 65% of hateful comments were correctly classified using Corpus_1, whereas 83% were correctly classified using Corpus_2. It can also be observed that the deep learning classifiers cope better with the unbalanced data (correctly classifying up to 65% of hateful comments), whereas the classic classifiers (GNB, LR, etc.) correctly classify only 52%.
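The gap noted above between a high overall score and low hateful-class recall on the unbalanced corpus can be illustrated with hypothetical confusion counts matching Corpus_1's class proportions (792 hateful vs 3,006 non-hateful); these counts are invented for illustration, not taken from the paper's tables:

```python
def recall(tp, fn):
    """Fraction of the positive (hateful) class actually found."""
    return tp / (tp + fn) if tp + fn else 0.0

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts on a 792 / 3,006 split:
tp, fn = 515, 277   # only ~65% of hateful comments detected...
tn, fp = 2900, 106  # ...yet the dominant class is mostly correct,
print(round(recall(tp, fn), 2), round(accuracy(tp, tn, fp, fn), 2))
```

The overall figure looks strong because the dominant non-hateful class is easy, which is exactly why the balanced Corpus_2 gives a fairer view of hateful-class performance.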
5.2 Perspective of improvements
The presented results are promising but could be improved by integrating some pre-processing steps. The first relates to Arabizi transliteration: since Arabic speakers use both scripts, Arabic and Arabizi, handling them together, or classifying Arabizi without a transliteration step, can give wrong results. We previously showed that transliteration considerably improves sentiment analysis results guellil2018arabizi . We previously presented a rule-based transliteration approach guellil2018arabizi ; guellil2018approche , but concluded that a corpus-based approach would certainly improve the results. Hence, we plan to propose a corpus-based transliteration approach and apply it to the annotated corpus so that a single script is used for Arabic. In addition to scripts, Arabic speakers also use other languages, such as French and English, to express their opinions on social media, although in much smaller proportions than Arabic and Arabizi. In the context of this study, we handle all the languages in the same corpus; however, a language-identification step would considerably improve the results. Hence, as a further improvement, we plan to propose an identification approach distinguishing Arabizi, French and English (since they share the same script).
Also, to the best of our knowledge, this is the first study presenting a publicly available corpus (freely available for research purposes after the paper's acceptance) dedicated to detecting Arabic hate speech against women. This resource was manually annotated by three native annotators following a guideline and standards. We are currently working on extending this resource, first targeting 10,000 comments, and on automatic techniques for enlarging the corpus. Our principal aim is to connect our previous studies on sentiment analysis guellil2018sentialg ; guellil2018arabizi ; imane2019set to hate speech detection.
6 Conclusion
Hate speech detection is a research area attracting more and more interest from the research community. Various studies have been proposed, most of them quite recent (between 2016 and 2019), divided between detecting hate speech in general and hate speech targeting a specific community or group. In this context, the principal aim of this paper is to detect hate speech against women in the Arabic social media community. We automatically collected data related to women from YouTube, then randomly selected 5,000 comments and gave them to three annotators to label as hateful or non-hateful. To increase precision, we concentrated on the portion of the corpus on which all the annotators agreed. This allowed us to construct a corpus of 3,798 comments (3,006 non-hateful and 792 hateful). We also constructed a balanced corpus of 1,798 comments randomly picked from the former. To validate the constructed corpus, we used machine learning algorithms such as LSVC, GNB and SGD as well as deep learning ones such as CNN and LSTM. The experimental results showed that the deep learning classifiers (especially CNN and Bi-LSTM) outperform the other classifiers, achieving an F1-score of up to 86%.
To improve this work, we plan to integrate a transliteration system transforming Arabizi into Arabic, to identify the different languages before proceeding to classification, and to automatically enlarge the training corpus.
References
(1)
Abozinadah, E.A., Mbaziira, A.V., Jones, J.: Detection of abusive accounts with
arabic tweets.
International Journal of Knowledge Engineering 1(2),
113–119 (2015)
(2)
Al-Hassan, A., Al-Dossari, H.: Detection of hate speech in social networks: A
survey on multilingual corpus.
Computer Science & Information Technology (CS & IT) 9(2),
83 (2019)
(3)
Al-Kabi, M., Al-Ayyoub, M., Alsmadi, I., Wahsheh, H.: A prototype for a
standard arabic sentiment analysis corpus.
Int. Arab J. Inf. Technol. 13(1A), 163–170 (2016)
(4)
Alakrot, A., Murray, L., Nikolov, N.S.: Dataset construction for the detection
of anti-social behaviour in online communication in arabic.
Procedia Computer Science 142, 174–181 (2018)
(5)
Albadi, N., Kurdi, M., Mishra, S.: Are they our brothers? analysis and
detection of religious hate speech in the arabic twittersphere.
In: 2018 IEEE/ACM International Conference on Advances in Social
Networks Analysis and Mining (ASONAM), pp. 69–76. IEEE (2018)
(6)
Alfina, I., Mulia, R., Fanany, M.I., Ekanata, Y.: Hate speech detection in the
indonesian language: A dataset and preliminary study.
In: 2017 International Conference on Advanced Computer Science and
Information Systems (ICACSIS), pp. 233–238. IEEE (2017)
(7)
Altowayan, A.A., Tao, L.: Word embeddings for arabic sentiment analysis.
In: Big Data (Big Data), 2016 IEEE International Conference on, pp.
3820–3825. IEEE (2016)
(8)
Andrusyak, B., Rimel, M., Kern, R.: Detection of abusive speech for mixed
sociolects of russian and ukrainian languages.
RASLAN 2018 Recent Advances in Slavonic Natural Language Processing
p. 77 (2018)
(9)
Bies, A., Song, Z., Maamouri, M., Grimes, S., Lee, H., Wright, J., Strassel,
S., Habash, N., Eskander, R., Rambow, O.: Transliteration of arabizi into
arabic orthography: Developing a parallel annotated arabizi-arabic script
sms/chat corpus.
In: Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language
Processing (ANLP), pp. 93–103 (2014)
(10)
Boudad, N., Faizi, R., Thami, R.O.H., Chiheb, R.: Sentiment analysis in arabic:
A review of the literature.
Ain Shams Engineering Journal (2017)
(11)
Burnap, P., Williams, M.L.: Hate speech, machine classification and statistical
modelling of information flows on twitter: Interpretation and communication
for policy decision making pp. 1–18 (2014)
(12)
Chetty, N., Alathur, S.: Hate speech review in the context of online social
networks.
Aggression and violent behavior (2018)
(13)
Dadvar, M., Eckert, K.: Cyberbullying detection in social networks using deep
learning based models; a reproducibility study.
arXiv preprint arXiv:1812.08046 (2018)
(14)
Darwish, K.: Arabizi detection and conversion to arabic.
In: Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language
Processing (ANLP), pp. 217–224 (2014)
(15)
Davidson, T., Warmsley, D., Macy, M., Weber, I.: Automated hate speech
detection and the problem of offensive language.
In: Eleventh International AAAI Conference on Web and Social Media
(2017)
(16)
De Smedt, T., De Pauw, G., Van Ostaeyen, P.: Automatic detection of online
jihadist hate speech.
arXiv preprint arXiv:1803.04596 (2018)
(17)
Del Vigna, F., Cimino, A., Dell'Orletta, F., Petrocchi, M., Tesconi, M.:
Hate me, hate me not: Hate speech detection on facebook.
In: Proceedings of the First Italian Conference on Cybersecurity
(ITASEC17) (2017)
(18)
Eisele, A., Chen, Y.: Multiun: A multilingual corpus from united nation
documents.
In: LREC (2010)
(19)
Farghaly, A., Shaalan, K.: Arabic natural language processing: Challenges and
solutions.
ACM Transactions on Asian Language Information Processing (TALIP)
8(4), 14 (2009)
(20)
Gambäck, B., Sikdar, U.K.: Using convolutional neural networks to classify
hate-speech.
In: Proceedings of the first workshop on abusive language online, pp.
85–90 (2017)
(21)
Gorrell, G., Greenwood, M.A., Roberts, I., Maynard, D., Bontcheva, K.: Twits,
twats and twaddle: Trends in online abuse towards uk politicians.
In: Twelfth International AAAI Conference on Web and Social Media
(2018)
(22)
Grave, E., Bojanowski, P., Gupta, P., Joulin, A., Mikolov, T.: Learning word
vectors for 157 languages.
arXiv preprint arXiv:1802.06893 (2018)
(23)
Guellil, I., Adeel, A., Azouaou, F., Benali, F., Hachani, A.e., Hussain, A.:
Arabizi sentiment analysis based on transliteration and automatic corpus
annotation.
In: Proceedings of the 9th Workshop on Computational Approaches to
Subjectivity, Sentiment and Social Media Analysis, pp. 335–341 (2018)
(24)
Guellil, I., Adeel, A., Azouaou, F., Chennoufi, S., Maafi, H., Hamitouche, T.:
Detecting hate speech against politicians in arabic community on social
media.
International Journal of Web Information Systems (2020)
(25)
Guellil, I., Adeel, A., Azouaou, F., Hussain, A.: Sentialg: Automated corpus
annotation for algerian sentiment analysis.
In: 9th International Conference on Brain Inspired Cognitive
Systems(BICS 2018) (2018)
(26)
Guellil, I., Azouaou, F., Abbas, M., Fatiha, S.: Arabizi transliteration of
algerian arabic dialect into modern standard arabic.
In: Social MT 2017: First workshop on Social Media and User Generated
Content Machine Translation (co-located with EAMT 2017) (2017)
(27)
Guellil, I., Azouaou, F., Benali, F., Hachani, A.E., Mendoza, M.: The role of
transliteration in the process of arabizi translation/sentiment analysis.
In: Recent Advances in NLP: The Case of Arabic Language, pp.
101–128. Springer (2020)
(28)
Guellil, I., Azouaou, F., Benali, F., Hachani, a.e., Saadane, H.: Approche
hybride pour la translitération de l’arabizi algérien : une
étude préliminaire.
In: Conference: 25e conférence sur le Traitement Automatique des
Langues Naturelles (TALN), May 2018, Rennes, FranceAt: Rennes, France.
https://www.researchgate.net/publication … (2018)
(29)
Guellil, I., Azouaou, F., Chiclana, F.: Arautosenti: Automatic annotation and
new tendencies for sentiment classification of arabic messages.
Social Network Analysis and Mining 10(1), 1–20 (2020)
(30)
Guellil, I., Azouaou, F., Saâdane, H., Semmar, N.: Une approche fondée
sur les lexiques d’analyse de sentiments du dialecte algérien (2017)
(31)
Guellil, I., Mendoza, M., Azouaou, F.: Arabic dialect sentiment analysis with
zero effort. Case study: Algerian dialect.
Inteligencia Artificial 23(65), 124–135 (2020)
(32)
Guellil, I., Saâdane, H., Azouaou, F., Gueni, B., Nouvel, D.: Arabic
natural language processing: An overview.
Journal of King Saud University-Computer and Information Sciences
(2019)
(33)
Habash, N.Y.: Introduction to arabic natural language processing.
Synthesis Lectures on Human Language Technologies 3(1),
1–187 (2010)
(34)
Haidar, B., Chamoun, M., Serhrouchni, A.: A multilingual system for
cyberbullying detection: Arabic content detection using machine learning.
Advances in Science, Technology and Engineering Systems Journal
2(6), 275–284 (2017)
(35)
Harrat, S., Meftouh, K., Smaïli, K.: Maghrebi arabic dialect processing:
an overview.
In: ICNLSSP 2017-International Conference on Natural Language, Signal
and Speech Processing (2017)
(36)
Imane, G., Kareem, D., Faical, A.: A set of parameters for automatically
annotating a sentiment arabic corpus.
International Journal of Web Information Systems (2019)
(37)
Joulin, A., Grave, E., Bojanowski, P., Mikolov, T.: Bag of tricks for efficient
text classification.
arXiv preprint arXiv:1607.01759 (2016)
(38)
Kallas, P.: Top 15 most popular social networking sites and apps.
Consultado em Setembro 20, 2017 (2017)
(39)
Köffer, S., Riehle, D.M., Höhenberger, S., Becker, J.: Discussing the
value of automatic hate speech detection in online debates.
Multikonferenz Wirtschaftsinformatik (MKWI 2018): Data Driven
X-Turning Data in Value, Leuphana, Germany (2018)
(40)
Kshirsagar, R., Cukuvac, T., McKeown, K., McGregor, S.: Predictive embeddings
for hate speech detection on twitter.
arXiv preprint arXiv:1809.10644 (2018)
(41)
Madisetty, S., Desarkar, M.S.: Aggression detection in social media using deep
Diverse and Styled Image Captioning Using SVD-Based Mixture of Recurrent Experts
Marzieh Heidari
Department of Mathematics and Computer Science
Amirkabir University of Technology (Tehran Polytechnic)
Iran
Mehdi Ghatee (corresponding author)
Department of Mathematics and Computer Science
Amirkabir University of Technology (Tehran Polytechnic)
Iran
ghatee@aut.ac.ir
Ahmad Nickabadi
Department of Computer Engineering
Amirkabir University of Technology (Tehran Polytechnic)
Iran
Arash Pourhasan Nezhad
Department of Mathematics and Computer Science
Amirkabir University of Technology (Tehran Polytechnic)
Iran
Abstract
With great advances in vision and natural language processing, the automatic generation of image captions has become a need. In a recent paper, Mathews, Xie and He [1] proposed a model that generates styled captions by separating semantics from style. Continuing this line of work, we develop a new captioning model consisting of an image encoder that extracts visual features, a mixture of recurrent networks that maps the extracted features to a set of words, and a sentence generator that combines these words into a stylized sentence. The resulting system, entitled Mixture of Recurrent Experts (MoRE), uses a new training algorithm that applies singular value decomposition (SVD) to the weight matrices of the Recurrent Neural Networks (RNNs) to increase the diversity of the captions. Each decomposition step depends on a distinctive factor determined by the number of RNNs in MoRE. Since the sentence generator is trained on a stylized language corpus without paired images, the captioning model requires no such pairing either. Moreover, styled and diverse captions are produced without training on a densely labeled or styled dataset. To validate this captioning model, we use Microsoft COCO, a standard factual image caption dataset. We show that the proposed model generates diverse and stylized image captions without the necessity of extra labeling. The results also show better descriptions in terms of content accuracy.
January 17, 2021
Keywords: Image Captioning $\cdot$ Deep Learning $\cdot$ Singular Value Decomposition $\cdot$ Mixture of Experts $\cdot$ Diverse Captioning
1 Introduction
Automatically generating human-like captions for images, namely image captioning, has emerged as an interdisciplinary research problem at the intersection of computer vision and natural language processing [1, 2, 3, 4, 5, 6, 7]. It has numerous important industrial applications, such as assistive facilities for visually impaired individuals, visual knowledge in chatting robots, and photo sharing on social media. To produce genuinely human-like image captions, an image captioning framework must understand the visual content of the input image and write captions with proper linguistic properties. Nonetheless, most existing image captioning frameworks focus on the vision side, describing the visual content in an objective and neutral manner (factual captions), while the language side, e.g. linguistic style, is regularly disregarded. In fact, linguistic style [8] is an essential factor that reflects human personality [9], influences purchasing decisions [10], and fosters social interactions [11]. Handling different styles in image captioning is also an important problem, addressed for example by [11, 12, 5].
Generating styled, diverse, and accurate captions for an image is an open challenge. To gain diversity in the generated captions, some works require manually created, densely labeled image caption datasets [13, 14, 15], while others use GANs [16], which mostly suffer from poor accuracy. Likewise, to gain style, most works rely on styled datasets [17, 18].
We address the need to gather styled and densely labeled datasets by presenting a novel unified model architecture that generates styled, diverse, and accurate captions without extra labels, trained only on a standard image captioning dataset and a styled corpus.
Central to our approach is reducing the requirement for an immense, densely labeled, and styled dataset for image captioning. We propose a model for generating styled, diverse, and semantically relevant image captions consisting of an image embedder, a Term Generator, and a Sentence Generator. The image embedder is the penultimate layer of a pre-trained CNN that takes an image as input and outputs its visual features. The Term Generator is an MoRE responsible for diversity; each RNN expert generates a specific word sequence. During the training of each RNN in the Term Generator, at the end of each epoch we filter out a part of the network weights using an SVD decomposition, which yields diverse captions without extra labels. SVD has previously been used for network compression [19] and overfitting control [20], but this is the first time SVD is used for diverse captioning. The Sentence Generator is responsible for controlling style; it learns style from a corpus of stylized text without aligned images.
We evaluate our model on the COCO dataset [21]. After evaluating the sentences produced by each Term Generator expert, we extracted the vocabulary of their sentences. The vocabulary sets differ both in size and in content.
Our contribution is developing a new model that can generate styled, diverse, and accurate captions for images without training on a densely labeled dataset.
To discuss some related works: earlier image captioning studies commonly used template-based models [22, 23] or retrieval-based models [24]. Template-based models detect visual concepts in a given image and fill them into well-defined formats to make sentences, so the generations suffer from a lack of diversity. Retrieval-based models find the most reliable sentences among existing ones and cannot produce new descriptions.
On the other hand, end-to-end trainable image captioning models are the result of recent advances in deep learning and the release of large-scale datasets such as COCO [25] and Flickr30k [26]. Most modern image captioning models use the encoder-decoder framework [2, 5, 7, 27, 28], where a convolutional neural network (CNN) encodes the input image into vector-space feature embeddings that are fed into an RNN. The RNN takes the image encoding and the word generated at the current time step as input and produces a complete sentence one word at a time. Maximum likelihood estimation is typically used for training. It has been shown that attention mechanisms [5, 29, 30] and high-level attributes/concepts [31, 32] help image captioning. Recently, reinforcement learning has been introduced into image captioning models to directly optimize task-specific metrics [33, 34]. Some works adopt Generative Adversarial Networks (GANs) [16] to diversify the generated captions or make them more human-like [35], and others adopt weakly-supervised training methods [36] to produce richer captions.
1.1 Diverse Image Captioning
Generating diverse captions for images and videos has been studied in recent years [37, 38, 39, 40, 41, 42]. Techniques such as GAN-based methods [39, 37] and VAE-based methods [38, 41] are used to improve the diversity and accuracy of descriptions. Others [43, 37] studied generating descriptive paragraphs for images. In [44], a method has been proposed to exploit the part-of-speech tags of the words in generated sentences, and in [45] the generated sentences can contain words of different topics. The work of [13] generates descriptions for each semantically informative region of an image, and in [46] a particular guiding object present in the image is chosen to necessarily appear in the generated description. Most of these approaches require additional labels in a dataset, e.g. Visual Genome [47]. In what follows, we propose a new scheme without the necessity of additional labels.
1.2 Stylized Image Captioning
Stylized image captioning aims at generating captions that are successfully stylized while describing the image content accurately. The works proposed for this task can be divided into two categories: models using parallel stylized image-caption data (the supervised mode) [17, 48, 49, 50] and models using a non-parallel stylized corpus (the semi-supervised mode) [18, 1]. SentiCap [17] handles the positive/negative styles and proposes to model word changes with two parallel Long Short-Term Memory (LSTM) networks and word-level supervision. StyleNet [18] handles the humorous/romantic styles by factoring the input weight matrices to contain a style-specific factor matrix. SF-LSTM [48] experiments on the above four caption styles and proposes to learn two groups of matrices that capture the factual and the stylized knowledge, respectively. You et al. [50] proposed two simple methods to inject sentiments into image captions; they control the sentiment by providing different sentiment labels. See and Chat [51] first retrieves the images visually similar to the input image, using k nearest neighbors, from a dataset containing 426K images with 11 million associated comments (more than 25 comments per image on average), and then ranks their comments to find the most relevant comment for the input image. Since the comments of the dataset are styled, the resulting caption is styled too.
2 A New Model with a Mixture of Experts
As stated in Section 1.1, current image captioning methods need extra labels on the training data to generate stylized captions. In this section, we propose a novel image captioning model that applies diversity and style to the generated captions without requiring additional labels. The goal of our proposed model is to take an image, a target style, and a diversity factor as input and generate a sentence in the target style. To obtain better results we use an ensemble of neural networks; different kinds of ensemble models could be used, for example ensembles of neural networks in a tree structure [52, 53] or a mixture of experts [54, 55]. Here we follow a Mixture of Recurrent Experts (MoRE).
The architecture of the proposed model is illustrated in Figure 2. The model comprises three basic components: an image encoder, a Term Generator, and a Sentence Generator. The image encoder is a CNN that extracts visual features from the input image. The Term Generator is a mixture of RNNs that takes the visual features extracted by the CNN and the SVD factors as input and outputs a sequence of semantic terms. The Sentence Generator is an attention-based RNN that takes this sequence of semantic terms and the target style and decodes them into a natural-language sentence, in the specified style, that describes the image. Each RNN of the Term Generator has a specific SVD factor, which determines what portion of the RNN weight matrix is kept during training. At test time, the SVD factors determine which expert is responsible for generating the sequence of semantic terms from the visual features.
Each Term Generator expert uses a different SVD factor to induce diversity in the extracted words and, consequently, in the generated captions. Moreover, we designed a two-stage learning strategy that trains the Term Generator networks and the Sentence Generator network separately: the Term Generators are trained on a dataset of image-caption pairs and the Sentence Generator on a corpus of styled text, such as romantic novels.
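The three components and their composition can be sketched structurally as follows. This is a minimal illustration of the data flow only; the class name, the callables, and their signatures are our assumptions, not the authors' code.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class CaptionPipeline:
    """Hypothetical sketch of the encoder -> Term Generator -> Sentence
    Generator pipeline described in the text."""
    encode_image: Callable[[object], Sequence[float]]              # CNN features
    term_generators: List[Callable[[Sequence[float]], List[str]]]  # MoRE experts
    generate_sentence: Callable[[List[str], str], str]             # style-aware decoder

    def caption(self, image, style: str, expert: int) -> str:
        feats = self.encode_image(image)
        # The diversity factor selects which expert produces the term sequence.
        terms = self.term_generators[expert](feats)
        return self.generate_sentence(terms, style)
```

In this sketch the two-stage training shows up as independence of the parts: the experts and the sentence generator are separate callables that can be trained on different corpora and only composed at caption time.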
2.1 Image Encoder
This module encodes an image $I$ into features using a deep CNN. Previous studies use different types of image features: local visual features for each semantic segment of the image [7] or a static global representation of the image [5]. A visual context vector is obtained either by directly using the static feature or by computing it adaptively with a soft-attention mechanism [7, 56]. In this paper, we use the static features to remain consistent with previous works. The image features are extracted from the second last layer of an Inception-v3 [57] CNN pre-trained on ImageNet [58].
2.2 Term Generator
The Term Generator network is an MoRE that maps an input image, denoted by $I$, to an ordered sequence of semantic terms $x=\{x_{1},x_{2},\dots,x_{M}\}$, $x_{i}\in V^{word}$. Each semantic term is a word with a part-of-speech tag. These words indicate the objects, scene, and activity in the image. This generator should completely capture the visual semantics while being independent of linguistic style, because the Sentence Generator is responsible for applying a style to the caption.
Our MoRE is trained with an SVD-based approximation method inspired by [59].
For a learned weight matrix $W$, the singular value decomposition gives
$$\displaystyle W_{m\times n}=U_{m\times n}\Sigma_{n\times n}V^{T}_{n\times n}$$
(1)
where $\Sigma$ is a diagonal matrix with the singular values on its diagonal in decreasing order. The columns of $U$ and the columns of $V$ are called the left-singular vectors and the right-singular vectors of $W$, respectively. By approximating $W$ by the greatest $l$ components of this decomposition, one can substitute the following for $W$:
$$\displaystyle W_{m\times n}\approx U_{m\times l}\Sigma_{l\times l}V^{T}_{l\times n}$$
(2)
By changing $l$, different variations of the weight matrix can be defined for the Term Generator networks, and each approximation interprets an image differently, so the outputs vary. In our model, for the $i$-th RNN we define a diversity factor $k=\frac{i}{R}$, where $R$ is the number of RNNs in MoRE. For each expert of MoRE, $l=[k\cdot rank(W)]$ gives the portion of the principal components of the matrix $W$ that remains in the learning model; note that $rank(W)$ denotes the rank of $W$. The effect of different diversity factors on the generated sequence of semantic terms is shown in Fig. 1.
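As a concrete illustration of Eqs. (1)-(2), rank-reducing a weight matrix with a diversity factor $k$ might look like the following NumPy sketch. The function name and the `max(1, ...)` guard against a zero-component truncation are our assumptions.

```python
import numpy as np


def truncate_weight(W: np.ndarray, k: float) -> np.ndarray:
    """Keep a fraction k of the principal SVD components of W.

    For the i-th of R experts the paper sets k = i / R; here k is simply
    a number in (0, 1]. Sketch of Eqs. (1)-(2), not the authors' code.
    """
    # Thin SVD: W = U @ diag(s) @ Vt, singular values in decreasing order.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # l = [k * rank(W)] principal components are retained.
    l = max(1, int(k * np.linalg.matrix_rank(W)))
    return U[:, :l] @ np.diag(s[:l]) @ Vt[:l, :]
```

The returned matrix has the same shape as `W` but rank at most `l`, so each expert carries a differently coarsened version of the learned weights.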
The architectures of all experts of MoRE are similar: each is a CNN+RNN inspired by Show and Tell [5]; see the middle part of Fig. 2. The image feature vector passes through a densely connected layer and then through an RNN with Gated Recurrent Unit (GRU) cells [60]. The word list $x$ is shorter than a full sentence, which speeds up training and alleviates the effect of forgetting long sequences. At each time-step $t$, there are two inputs to the GRU cell: the first is the previous hidden state summarizing the image $I$ and the word history $x_{1},\dots,x_{t-1}$; the second is the GloVe embedding vector $E_{x_{t}}$ of the current word. A fully connected layer with softmax takes the output $h_{t}$ and produces a categorical distribution over the next word in the sequence, $x_{t+1}$. Argmax decoding can be used to recover the entire word sequence from the conditional probabilities; see Eq. (1) in [1]. We set $x_{1}$ to the beginning-of-sequence token and terminate when the sequence exceeds a maximum length or the end-of-sequence token is generated.
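The argmax decoding loop just described can be sketched as follows. Here `step` stands in for one GRU step plus the softmax layer; its name and signature are assumptions of this sketch, not the authors' interface.

```python
import numpy as np


def greedy_decode(step, h0, bos: int, eos: int, max_len: int = 20):
    """Greedy (argmax) decoding for the Term Generator.

    `step(prev_token, h) -> (logits, h)` stands in for one recurrent step;
    decoding starts from the beginning-of-sequence token and stops at the
    end-of-sequence token or at max_len, as described in the text.
    """
    tokens, h, prev = [], h0, bos
    for _ in range(max_len):
        logits, h = step(prev, h)
        prev = int(np.argmax(logits))  # most probable next word
        if prev == eos:
            break
        tokens.append(prev)
    return tokens
```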
2.3 Sentence Generator
The Sentence Generator, shown in the upper part of Fig. 2, maps the sequence of semantic terms to a sentence with a specific style. For example, given the word list "girl", "posture", "refrigerator" and "DESCRIPTIVE" as the requested style, a suitable caption is "A girl standing in a kitchen beside a refrigerator." For the same list of words with "STORY" as the expected style, a suitable caption is "I saw the girl standing in the kitchen, and I was staring at the refrigerator". Given the list of words $x$ and a target style $z$, we generate an output caption $y=\{y_{1},y_{2},\dots,y_{t},\dots,y_{L}\}$, where $y_{t}\in V^{out}$ and $V^{out}$ is the output word vocabulary. To do so, we follow the idea of [1] and use an RNN sequence-to-sequence sentence generator network with attention over the input sequence. This is an auto-encoder that maps the input word sequence to a vector space and decodes it into a sentence that describes the image in a suitable style. The encoder for the sequence $x$ consists of a GloVe vector embedding followed by a batch normalization layer [61] and a bidirectional RNN [62] with GRU cells. The bidirectional RNN [1] is implemented as two independent RNNs that run in opposite directions with shared embeddings. For details, we refer to Eq. (4) of [1].
3 Experimental Setup
We conduct experiments on the publicly available image caption dataset Microsoft COCO [25]. COCO is a large image captioning dataset containing 82783, 40504, and 40775 images for training, validation, and test, respectively. Each image is labeled with 5 human-generated descriptions for image captioning. All labels are converted to lower case and tokenized. We use the SemStyle training and testing splits [1] for both factual and stylized captions.
We consider 9 baseline methods and compare them with 5 variants of our proposed captioning model. The considered baselines are as follows:
•
Show and Tell [5] constructed a CNN as an encoder and an RNN as the decoder.
•
Neural Talk [63] used alignments between image regions and caption fragments to learn to generate descriptions of image regions.
•
StyleNet [18] was originally trained on FlickrStyle10K [18]. The implementations of StyleNet and StyleNet-COCO are from [18]; this reimplementation matches the training datasets and consequently makes the approaches comparable. StyleNet generates styled captions, while StyleNet-COCO generates descriptive captions.
•
Neural-storyteller [64] is a model trained on romance text (from the same source as ours).
•
JointEmbedding maps images and sentences to a continuous multi-modal vector space [65] and uses a separate decoder, trained on the romance text, to decode from this space.
•
SemStyle [1] is our reference to develop the model and maps image features to a word sequence and then maps the sequence to a caption.
•
SGC [4] applies the Scene Graph Captioner (SGC)
framework for the image captioning task.
•
Hierarchical Attention [3] uses a hierarchical attention model utilizing both the global CNN features and the local object features for more effective feature representation and reasoning in image captioning.
•
VSV-VRA-POS [2] adapts the language models for word generation to the specific syntactic structure of sentences and the visual skeleton of the image.
The variants of our proposed captioning model are as follows:
1.
Shuffled words is essentially the baseline in which, during the training of the Sentence Generator, the input words are presented out of order. This adds a little noise to the input, so the results overfit less.
2.
Shuffled words + batch normalization is the "Shuffled words" model with a batch normalization layer after the embedding layer of the Sentence Generator. This makes the features, and consequently the captions, more general.
3.
Shuffled words + GloVe + batch normalization additionally uses the frozen weights of the pre-trained GloVe embedding.
4.
The full model applies an embedding-specific dropout to the embedding layers.
5.
Kaldi GRU is the full model with Kaldi Speech Recognition [66] GRUs as the encoder and decoder of the Sentence Generator instead of typical GRUs.
3.1 Evaluation Metrics
We use two types of metrics to evaluate the proposed image captioning model. The first type is automatic relevance metrics: similar to [1], we consider captioning metrics including BLEU [67], METEOR [68], ROUGE_L [69], and CIDEr [70], which are based on n-gram overlap, and SPICE [71], which is based on the f-score over semantic tuples extracted from the COCO reference sentences [21]. The second type is automatic style metrics: we measure how often a generated caption has the correct target style according to a pre-trained style classifier. The Classifier Fraction (CLF) metric [1] is the fraction of generated captions classified as styled by a binary classifier. This classifier is a logistic regression with 1,2-gram occurrence features trained on styled sentences and COCO training captions; its cross-validation precision is 0.992 at a recall of 0.991.
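The 1,2-gram occurrence features used by the CLF classifier can be sketched as below. Whitespace tokenization and lower-casing are our assumptions; the actual preprocessing of the classifier is not specified in the text.

```python
from collections import Counter


def ngram_features(sentence: str, n_values=(1, 2)) -> Counter:
    """Occurrence counts over word 1-grams and 2-grams, the feature space
    the CLF classifier is described as using. Sketch only."""
    toks = sentence.lower().split()
    feats = Counter()
    for n in n_values:
        for i in range(len(toks) - n + 1):
            feats[" ".join(toks[i:i + n])] += 1
    return feats
```

A logistic regression over these counts (styled sentences vs. COCO captions) then yields the styled/unstyled decision whose positive fraction is the CLF score.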
3.2 Training details
In our experiments, the model is optimized with Adam [72]. The learning rate is set to 1e-3. We clip gradients to [-5, 5] and apply dropout to the image and sentence embeddings. The mini-batch size is 64 for both the Term Generator and the Sentence Generator. Both use separate 512-dimensional GRUs and word embedding vectors. The Term Generator has a vocabulary of 10000 words, while the Sentence Generator has two vocabularies, one for the encoder input and another for the decoder; both have 20000 entries to account for a broader scope. The number of intersecting words between the Term Generator and the Sentence Generator is 8266 with both datasets, and 6736 without. Image embeddings come from the second last layer of the Inception-v3 CNN [57] and are 2048-dimensional.
We used frozen GloVe [73] weights for the embedding layers of both the Term Generator and the Sentence Generator of SemStyle's baseline. The model suffered from overfitting, so we adopted the following regularization techniques:
•
Instead of normal drop-out, we used embedding-specific drop-out proposed by Merity et al. [74].
•
We adopt batch normalization [61] for both modules of the model including Term Generator and Sentence Generator.
•
For Term Generator, we used weight decay [75] with a coefficient of 1e-6.
•
In training the Sentence Generator, instead of feeding ordered semantic terms we shuffled them, so the model learns to generate sentences from unordered semantic terms, which improves generalization.
We trained three Term Generator experts. Each expert of MoRE applies SVD to the weight matrices of the Term Generator model and afterward reconstructs the model based on the inherent sparseness of the original matrices. For each expert, we saved $[k\cdot rank(W)]$ principal components of the learned weight matrix $W$ for the SVD approximation; this approach has been used in [59] for the neural network training process. After reconstruction the accuracy decreases, but the final classification results improve. After every epoch, the reduced weights are replaced in the model, and we then fine-tune the reconstructed model using back-propagation to recover accuracy.
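The per-epoch reduce-then-fine-tune procedure can be sketched as follows. `fine_tune_step` stands in for the back-propagation phase and is an assumption of this sketch, as are the function names.

```python
import numpy as np


def svd_reduce(W: np.ndarray, k: float) -> np.ndarray:
    """Keep [k * rank(W)] principal SVD components of W (cf. Eq. (2))."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    l = max(1, int(k * np.linalg.matrix_rank(W)))
    return U[:, :l] @ np.diag(s[:l]) @ Vt[:l, :]


def train_expert(W, k, epochs, fine_tune_step):
    """After every epoch, replace the weights by their rank-reduced
    reconstruction and then fine-tune, as described in the text."""
    for _ in range(epochs):
        W = svd_reduce(W, k)   # reduced weights replace the originals
        W = fine_tune_step(W)  # stands in for back-propagation fine-tuning
    return W
```

Note that with a no-op `fine_tune_step` the rank keeps shrinking across epochs; in actual training, the fine-tuning updates restore full-rank weights before the next reduction.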
3.3 Results
The results of the measurements are presented in Table 1. Table 2 shows the results of the automatic metrics for the caption style learned from romance novels; this comparison is similar to [1]. Our full model also generates descriptive captions. It achieves semantic relevance scores comparable to Show and Tell [5], with a SPICE of 0.166 vs 0.154 and a BLEU-4 of 0.252 vs 0.238; thus, utilizing semantic words is a competitive way to distill image semantics. Indeed, the Term Generator and the Sentence Generator constitute a compelling vision-to-language pipeline. Additionally, our model can create different caption styles: the CLF metric for caption classification is 99.995% when the target style is descriptive. Figure 3 demonstrates some qualitative results for different styles alongside results on the same images generated by SemStyle, the approach most similar to ours.
In addition, our model generates styled captions in 74.1% of cases, based on CLF. The SPICE score is 0.145, which is better than the presented baselines. Results are shown in Table 2.
As one can see in Table 2, three models (SGC [4], VSV-VRA-POS [2], and Hierarchical Attention [3]) obtain better relevance scores than our work. These works are included to show the impact of using unpaired captions and a unified model for multi-style captioning; the trade-off for such a model is a loss in relevance scores. When there is only one objective (generating captions similar to the ground truth), the similarity score is higher than for works that pursue multiple objectives. Since our model tries to improve the captions for all styles, it can outperform single-objective methods in real situations.
3.4 Component Analysis
According to Table 3 and Table 4, the poorest result is obtained when Kaldi GRU is used instead of regular GRU cells. By shuffling the input words of the Sentence Generator during training, we see a small improvement, the result of the reduced overfitting caused by this added noise. Adding a batch normalization layer boosts the results, so that for some metrics such as BLEU3, BLEU4, and CIDEr the best result is achieved by this model. Adding frozen GloVe weights only improves styled captioning, increasing the number of styled captions by 10%, but it slightly decreases the other scores. This drop is the result of overfitting, so in the full model we added an embedding-specific dropout layer to fix this issue. As a result, the relevance scores rise again and, in addition, another 12% is added to the styled captions in the style evaluation.
To evaluate the diversity of sentences generated with different SVD factors, we counted the words of the generated captions. As shown in Table 5, the vocabulary size and the average word count per sentence decrease as the factor decreases. Different factors aim at producing more diverse and novel captions that may not appear in the ground truths; their similarity scores are then generally lower than the full model's, since fewer n-grams match the ground truths. These metrics therefore chiefly measure pattern-matching quality rather than overall quality from a human perspective.
In Figure 4, all models are trained with the same vocabulary, yet the vocabularies extracted from their generated captions are not identical. This indicates diversity in the captions generated by the different models of MoRE.
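The two diversity measures discussed above (vocabulary size of the generated captions and average words per sentence, as in Table 5) can be computed as in this sketch; whitespace tokenization is an assumption.

```python
from typing import List, Tuple


def vocab_stats(captions: List[str]) -> Tuple[int, float]:
    """Vocabulary size and average word count per sentence over a set of
    generated captions. Sketch of the Table 5 measurements."""
    vocab = set()
    total_words = 0
    for caption in captions:
        toks = caption.lower().split()
        vocab.update(toks)
        total_words += len(toks)
    avg_len = total_words / max(1, len(captions))
    return len(vocab), avg_len
```

Comparing these statistics across experts with different SVD factors makes the shrinking-vocabulary effect directly measurable.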
4 Conclusion
We have proposed a multi-style, diverse image captioning model by using the unpaired stylized corpus. This model includes the following components:
•
A CNN as the feature extractor
•
A Mixture of RNNs (MoRE) to embed features into a set of words
•
An RNN that gets the output of MoRE and generates a sentence as the final output
Our model can generate human-like, appropriately stylized, visually grounded, and style-controllable captions. Besides, the captions are made rich and diverse using a mixture of experts. The results on the COCO dataset show that the performance of our proposed captioning model is better than previous works in terms of accuracy, diversity, and styled captions, improving on the previous works in the BLEU, CIDEr, SPICE, ROUGE_L, and CLF metrics. For future work, one can consider different ensemble methods instead of MoRE. Also, to avoid overfitting, different methods can be compared [76] to find the most effective one.
References
[1]
Alexander Mathews, Lexing Xie, and Xuming He.
Semstyle: Learning to generate stylised image captions using
unaligned text.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 8591–8600, 2018.
[2]
Liang Yang and Haifeng Hu.
Visual skeleton and reparative attention for part-of-speech image
captioning system.
Computer Vision and Image Understanding, 189:102819, 2019.
[3]
Shiyang Yan, Yuan Xie, Fangyu Wu, Jeremy S Smith, Wenjin Lu, and Bailing Zhang.
Image captioning via hierarchical attention mechanism and policy
gradient optimization.
Signal Processing, 167:107329, 2020.
[4]
Ning Xu, An-An Liu, Jing Liu, Weizhi Nie, and Yuting Su.
Scene graph captioner: Image captioning based on structural visual
representation.
Journal of Visual Communication and Image Representation,
58:477–485, 2019.
[5]
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan.
Show and tell: A neural image caption generator.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 3156–3164, 2015.
[6]
Jinhui Tang, Xiangbo Shu, Zechao Li, Guo-Jun Qi, and Jingdong Wang.
Generalized deep transfer networks for knowledge propagation in
heterogeneous domains.
ACM Transactions on Multimedia Computing, Communications, and
Applications (TOMM), 12(4s):68
[7]
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan
Salakhudinov, Rich Zemel, and Yoshua Bengio.
Show, attend and tell: Neural image caption generation with visual
attention.
In International conference on machine learning, pages
2048–2057, 2015.
[8]
Allan Bell.
Language style as audience design.
Language in society, 13(2):145–204
[9]
James W. Pennebaker and Laura A. King.
Linguistic styles: Language use as an individual difference.
Journal of personality and social psychology, 77(6):1296–1312, 1999.
[10]
Stephan Ludwig, Ko De Ruyter, Mike Friedman, Elisabeth C. Brüggen, Martin
Wetzels, and Gerard Pfann.
More than words: The influence of affective content and linguistic
style matches in online reviews on conversion rates.
Journal of Marketing, 77(1):87–103
[11]
Cristian Danescu-Niculescu-Mizil, Michael Gamon, and Susan Dumais.
Mark my words!: linguistic style accommodation in social media.
In Proceedings of the 20th international conference on World
wide web, pages 745–754. ACM, 2011.
[12]
Ellie Pavlick and Ani Nenkova.
Inducing lexical style properties for paraphrase and genre
differentiation.
In Proceedings of the 2015 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language
Technologies, pages 218–224, 2015.
[13]
Justin Johnson, Andrej Karpathy, and Li Fei-Fei.
Densecap: Fully convolutional localization networks for dense
captioning.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 4565–4574, 2016.
[14]
Mark Yatskar, Michel Galley, Lucy Vanderwende, and Luke Zettlemoyer.
See no evil, say no evil: Description generation from densely labeled
images.
In Proceedings of the Third Joint Conference on Lexical and
Computational Semantics (* SEM 2014), pages 110–120, 2014.
[15]
Linjie Yang, Kevin Tang, Jianchao Yang, and Li-Jia Li.
Dense captioning with joint inference and visual context.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 2193–2202, 2017.
[16]
Dianqi Li, Qiuyuan Huang, Xiaodong He, Lei Zhang, and Ming-Ting Sun.
Generating diverse and accurate visual captions by comparative
adversarial learning.
arXiv preprint arXiv:1804.00861, 2018.
[17]
Alexander Patrick Mathews, Lexing Xie, and Xuming He.
Senticap: Generating image descriptions with sentiments.
In Thirtieth AAAI conference on artificial intelligence, 2016.
[18]
Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng.
Stylenet: Generating attractive visual captions with styles.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 3137–3146, 2017.
[19]
Koen Goetschalckx, Bert Moons, Patrick Wambacq, and Marian Verhelst.
Efficiently combining svd, pruning, clustering and retraining for
enhanced neural network compression.
In Proceedings of the 2nd International Workshop on Embedded and
Mobile Deep Learning, pages 1–6, 2018.
[20]
Mohammad Mahdi Bejani and Mehdi Ghatee.
Adaptive svd regularization for deep neural networks learning
systems.
Neural Networks, 128:33–46, 2020.
[21]
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr
Dollár, and C. Lawrence Zitnick.
Microsoft coco captions: Data collection and evaluation server.
arXiv preprint arXiv:1504.00325, 2015.
[22]
Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li,
Yejin Choi, Alexander C Berg, and Tamara L Berg.
Babytalk: Understanding and generating simple image descriptions.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
35(12):2891–2903, 2013.
[23]
Desmond Elliott and Arjen de Vries.
Describing images using inferred visual dependency representations.
In Proceedings of the 53rd Annual Meeting of the Association for
Computational Linguistics and the 7th International Joint Conference on
Natural Language Processing (Volume 1: Long Papers), pages 42–52, 2015.
[24]
Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He,
Geoffrey Zweig, and Margaret Mitchell.
Language models for image captioning: The quirks and what works.
arXiv preprint arXiv:1505.01809, 2015.
[25]
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva
Ramanan, Piotr Dollár, and C Lawrence Zitnick.
Microsoft coco: Common objects in context.
In European conference on computer vision, pages 740–755.
Springer, 2014.
[26]
Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia
Hockenmaier, and Svetlana Lazebnik.
Flickr30k entities: Collecting region-to-phrase correspondences for
richer image-to-sentence models.
In Proceedings of the IEEE international conference on computer
vision, pages 2641–2649, 2015.
[27]
Jing Wang, Jianlong Fu, Jinhui Tang, Zechao Li, and Tao Mei.
Show, reward and tell: Automatic generation of narrative paragraph
from photo stream by adversarial training.
In Thirty-Second AAAI Conference on Artificial Intelligence,
2018.
[28]
Zhilin Yang, Ye Yuan, Yuexin Wu, William W Cohen, and Ruslan R Salakhutdinov.
Review networks for caption generation.
In Advances in Neural Information Processing Systems, pages
2361–2369, 2016.
[29]
Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher.
Knowing when to look: Adaptive attention via a visual sentinel for
image captioning.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 375–383, 2017.
[30]
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen
Gould, and Lei Zhang.
Bottom-up and top-down attention for image captioning and visual
question answering.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 6077–6086, 2018.
[31]
Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, and Tao Mei.
Boosting image captioning with attributes.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 4894–4902, 2017.
[32]
Luowei Zhou, Chenliang Xu, Parker Koch, and Jason J. Corso.
Image caption generation with text-conditional semantic attention.
arXiv preprint arXiv:1606.04621, 2, 2016.
[33]
Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava
Goel.
Self-critical sequence training for image captioning.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 7008–7024, 2017.
[34]
Li Zhang, Flood Sung, Feng Liu, Tao Xiang, Shaogang Gong, Yongxin Yang, and
Timothy M. Hospedales.
Actor-critic sequence training for image captioning.
arXiv preprint arXiv:1706.09601, 2017.
[35]
Rakshith Shetty, Marcus Rohrbach, Lisa Anne Hendricks, Mario Fritz, and Bernt
Schiele.
Speaking the same language: Matching machine to human captions by
adversarial training.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 4135–4144, 2017.
[36]
Hai-Tao Zheng, Zhe Wang, Ningning Ma, Jinyuan Chen, Xi Xiao, and Arun Kumar
Sangaiah.
Weakly-supervised image captioning based on rich contextual
information.
Multimedia Tools and Applications, 77(14):18583–18599, 2018.
[37]
Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin.
Towards diverse and natural image descriptions via a conditional gan.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 2970–2979, 2017.
[38]
Unnat Jain, Ziyu Zhang, and Alexander G Schwing.
Creativity: Generating diverse questions using variational
autoencoders.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 6485–6494, 2017.
[39]
Rakshith Shetty, Marcus Rohrbach, Lisa Anne Hendricks, Mario Fritz, and Bernt
Schiele.
Speaking the same language: Matching machine to human captions by
adversarial training.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 4135–4144, 2017.
[40]
Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun,
Stefan Lee, David Crandall, and Dhruv Batra.
Diverse beam search: Decoding diverse solutions from neural sequence
models.
arXiv preprint arXiv:1610.02424, 2016.
[41]
Liwei Wang, Alexander Schwing, and Svetlana Lazebnik.
Diverse and accurate image description using a variational
auto-encoder with an additive gaussian encoding space.
In Advances in Neural Information Processing Systems, pages
5756–5766, 2017.
[42]
Mingxing Zhang, Yang Yang, Hanwang Zhang, Yanli Ji, Heng Tao Shen, and Tat-Seng
Chua.
More is better: Precise and detailed image captioning using online
positive recall and missing concepts mining.
IEEE Transactions on Image Processing, 28(1):32–44, 2018.
[43]
Moitreya Chatterjee and Alexander G Schwing.
Diverse and coherent paragraph generation from images.
In Proceedings of the European Conference on Computer Vision
(ECCV), pages 729–744, 2018.
[44]
Aditya Deshpande, Jyoti Aneja, Liwei Wang, Alexander Schwing, and David A
Forsyth.
Diverse and controllable image captioning with part-of-speech
guidance.
arXiv preprint arXiv:1805.12589, 2(8), 2018.
[45]
Yuzhao Mao, Chang Zhou, Xiaojie Wang, and Ruifan Li.
Show and tell more: Topic-oriented multi-sentence image captioning.
In IJCAI, pages 4258–4264, 2018.
[46]
Yue Zheng, Yali Li, and Shengjin Wang.
Intention oriented image captions with guiding objects.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 8395–8404, 2019.
[47]
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua
Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al.
Visual genome: Connecting language and vision using crowdsourced
dense image annotations.
International Journal of Computer Vision, 123(1):32–73, 2017.
[48]
Tianlang Chen, Zhongping Zhang, Quanzeng You, Chen Fang, Zhaowen Wang, Hailin
Jin, and Jiebo Luo.
“factual”or“emotional”: Stylized image captioning with adaptive
learning and attention.
In Proceedings of the European Conference on Computer Vision
(ECCV), pages 519–535, 2018.
[49]
Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston.
Engaging image captioning via personality.
arXiv preprint arXiv:1810.10665, 2018.
[50]
Quanzeng You, Hailin Jin, and Jiebo Luo.
Image captioning at will: A versatile scheme for effectively
injecting sentiments into image descriptions.
arXiv preprint arXiv:1801.10121, 2018.
[51]
Jingwen Chen, Ting Yao, and Hongyang Chao.
See and chat: automatically generating viewer-level comments on
images.
Multimedia Tools and Applications, 78(3):2689–2702, 2019.
[52]
Shadi Abpeykar and Mehdi Ghatee.
An ensemble of rbf neural networks in decision tree structure with
knowledge transferring to accelerate multi-classification.
Neural Computing and Applications, 31(11):7131–7151, 2019.
[53]
Shadi Abpeykar, Mehdi Ghatee, and Hadi Zare.
Ensemble decision forest of rbf networks via hybrid feature
clustering approach for high-dimensional data classification.
Computational Statistics & Data Analysis, 131:12–36, 2019.
[54]
Elham Abbasi, Mohammad Ebrahim Shiri, and Mehdi Ghatee.
A regularized root–quartic mixture of experts for complex
classification problems.
Knowledge-Based Systems, 110:98–109, 2016.
[55]
Ali Pashaei, Mehdi Ghatee, and Hedieh Sajedi.
Convolution neural network joint with mixture of extreme learning
machines for feature extraction and classification of accident images.
Journal of Real-Time Image Processing, pages 1–16, 2019.
[56]
Xinwei He, Yang Yang, Baoguang Shi, and Xiang Bai.
Vd-san: Visual-densely semantic attention network for image caption
generation.
Neurocomputing, 328:48–55, 2019.
[57]
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew
Wojna.
Rethinking the inception architecture for computer vision.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 2818–2826, 2016.
[58]
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma,
Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al.
Imagenet large scale visual recognition challenge.
International journal of computer vision, 115(3):211–252,
2015.
[59]
Jian Xue, Jinyu Li, and Yifan Gong.
Restructuring of deep neural network acoustic models with singular
value decomposition.
In Interspeech, pages 2365–2369, 2013.
[60]
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau,
Fethi Bougares, Holger Schwenk, and Yoshua Bengio.
Learning phrase representations using rnn encoder-decoder for
statistical machine translation.
arXiv preprint arXiv:1406.1078, 2014.
[61]
Sergey Ioffe and Christian Szegedy.
Batch normalization: Accelerating deep network training by reducing
internal covariate shift.
arXiv preprint arXiv:1502.03167, 2015.
[62]
Mike Schuster and Kuldip K Paliwal.
Bidirectional recurrent neural networks.
IEEE Transactions on Signal Processing, 45(11):2673–2681,
1997.
[63]
Andrej Karpathy and Li Fei-Fei.
Deep visual-semantic alignments for generating image descriptions.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 3128–3137, 2015.
[64]
Jamie Ryan Kiros.
neural-storyteller, a recurrent neural network for generating little
stories about images.
available at<>, GitHub, Inc., retrieved on Nov, 26:4, 2016.
[65]
Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel.
Unifying visual-semantic embeddings with multimodal neural language
models.
arXiv preprint arXiv:1411.2539, 2014.
[66]
Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek,
Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz,
et al.
The kaldi speech recognition toolkit.
In IEEE 2011 workshop on automatic speech recognition and
understanding, number CONF. IEEE Signal Processing Society, 2011.
[67]
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.
Bleu: a method for automatic evaluation of machine translation.
In Proceedings of the 40th annual meeting on association for
computational linguistics, pages 311–318. Association for Computational
Linguistics, 2002.
[68]
Michael Denkowski and Alon Lavie.
Meteor universal: Language specific translation evaluation for any
target language.
In Proceedings of the ninth workshop on statistical machine
translation, pages 376–380, 2014.
[69]
Chin-Yew Lin.
Rouge: A package for automatic evaluation of summaries.
In Text summarization branches out, pages 74–81, 2004.
[70]
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh.
Cider: Consensus-based image description evaluation.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 4566–4575, 2015.
[71]
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould.
Spice: Semantic propositional image caption evaluation.
In European Conference on Computer Vision, pages 382–398.
Springer, 2016.
[72]
Diederik P. Kingma and Jimmy Ba.
Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[73]
Jeffrey Pennington, Richard Socher, and Christopher Manning.
Glove: Global vectors for word representation.
In Proceedings of the 2014 conference on empirical methods in
natural language processing (EMNLP), pages 1532–1543, 2014.
[74]
Stephen Merity, Nitish Shirish Keskar, and Richard Socher.
Regularizing and optimizing lstm language models.
arXiv preprint arXiv:1708.02182, 2017.
[75]
Anders Krogh and John A Hertz.
A simple weight decay can improve generalization.
In Advances in neural information processing systems, pages
950–957, 1992.
[76]
Mohammad Mahdi Bejani and Mehdi Ghatee.
Overfitting control in shallow and deep neural networks: A systematic
review.
Artificial Intelligence Review, 2020. |
Taming GANs with Lookahead
Tatjana Chavdarova* (Idiap, EPFL), Mattéo Pagliardini* (EPFL), Martin Jaggi (EPFL), François Fleuret (Idiap, EPFL)
*Equal contribution; correspondence to {firstname.lastname}@epfl.ch
Abstract
Generative Adversarial Networks are notoriously challenging to train. The underlying minimax optimization is highly susceptible to the variance of the stochastic gradient and to the rotational component of the associated game vector field. We empirically demonstrate the effectiveness of the Lookahead meta-optimization method, originally proposed for standard minimization, for optimizing games. The backtracking step of Lookahead naturally handles the rotational game dynamics, which in turn enables the gradient descent ascent method to converge on challenging toy games often analyzed in the literature. Moreover, it implicitly handles high variance without using large mini-batches, known to be essential for reaching state-of-the-art performance. Experimental results on MNIST, SVHN, and CIFAR-10 demonstrate a clear advantage of combining Lookahead with Adam or extragradient, in terms of performance, memory footprint, and stability. Using 30-fold fewer parameters and 16-fold smaller minibatches, we outperform the reported performance of the class-dependent BigGAN on CIFAR-10 by obtaining an FID of $13.65$ without using the class labels, bringing state-of-the-art GAN training within reach of common computational resources.
1 Introduction
Gradient-based methods are the workhorse of machine learning. These methods optimize the parameters of a model with respect to a single objective $f:\mathcal{X}\rightarrow\mathbb{R}$. However, an increasing interest in multi-objective optimization arises in various domains, such as mathematics, economics, and multi-agent reinforcement learning (Omidshafiei et al., 2017), where several agents each aim to optimize their own cost function $f_{i}:\mathcal{X}_{1}\times\dots\times\mathcal{X_{N}}\rightarrow\mathbb{R}$ simultaneously.
One particularly successful such class of algorithms are the Generative Adversarial Networks (Goodfellow et al., 2014, (GANs)), which consist of two players referred to as a generator and a discriminator. GANs were originally formulated as minimax optimization $f:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}$ (Von Neumann and Morgenstern, 1944), where the generator and the discriminator aim at minimizing and maximizing the same value function, see § 2.
A natural generalization of gradient descent for minimax problems is the gradient descent ascent (GDA) algorithm, which alternates between a gradient descent step for the min-player and a gradient ascent step for the max-player.
In the ideal case, minimax training aims at finding a Nash equilibrium where no player has the incentive of changing its parameters.
Despite the impressive quality of the generated samples that GANs have demonstrated, relative to classical maximum-likelihood-based generative models, these algorithms remain notoriously difficult to train. In particular, poor performance (sometimes manifesting as “mode collapse”), brittle dependency on hyperparameters, or divergence are often reported.
Consequently, obtaining state-of-the-art performance was shown to require large computational resources (Brock et al., 2019), making well-performing models unavailable for common budgets of computational resources.
To train GANs, practitioners originally adopted methods that are known to perform well on standard single-objective minimization. However, an understanding of the fundamental differences in terms of optimization as well as the stationary points of a game is currently missing (Jin et al., 2019).
Moreover, it was recently empirically shown that:
(i) GANs often converge to a locally stable stationary point that is not a differential Nash equilibrium (Berard et al., 2020);
(ii) increased batch size improves GAN performance (Brock et al., 2019), in contrast to minimization (Defazio and Bottou, 2018; Shallue et al., 2018).
A principal reason behind these differences is attributed to the rotations arising due to the adversarial component of the associated vector field of the gradient of the two player’s parameters (Mescheder et al., 2018; Balduzzi et al., 2018), which are atypical for minimization.
More precisely, the Jacobian of the associated vector field (see definition in § 2) can be decomposed into a symmetric and an antisymmetric component (Balduzzi et al., 2018), which behave as a “potential” (Monderer and Shapley, 1996) and a Hamiltonian game, respectively. For our purposes, a “potential” game can be seen as standard minimization, on which gradient descent converges, in contrast to a Hamiltonian game, on which GDA exhibits cyclic behavior along the level sets of the Hamiltonian scalar function $\mathcal{H}$. While gradient descent converges on potential games and gradient descent on $\mathcal{H}$ converges on Hamiltonian games, general games are often a combination of the two, which makes this general case hard to solve.
In the context of minimization, Zhang et al. (2019) recently proposed the “Lookahead” algorithm, which intuitively chooses its update direction by “looking ahead” at a sequence of fast iterates, generated by some inner optimizer, that change with higher variance due to stochastic gradient estimates.
Lookahead was shown to improve stability during training and to reduce the variance of the so-called “slow” weights.
Contributions.
In this paper we investigate extensions of the Lookahead algorithm to minimax problems, and empirically benchmark their combination with currently used optimization methods for GANs.
Our contributions can be summarized as follows:
•
We extend the Lookahead algorithm to a meta-optimizer for minimax, called “lookahead–minimax”, in a way that takes into account the rotational component of the associated vector field, yielding an algorithm that is straightforward to implement.
•
We motivate the use of Lookahead for games by considering the extensively studied toy bilinear example (Goodfellow, 2016) and show that:
(i) the use of lookahead allows for convergence of the otherwise diverging GDA on the classical bilinear game in full-batch setting (see § 3.1.1), as well as
(ii) it yields good performance on challenging stochastic variants of this game, despite the high variance (see § 3.1.2).
•
We empirically investigate the performance of lookahead on GANs on three standard datasets–MNIST, CIFAR-10, and SVHN as well as with standard optimization methods for GANs–GDA and Extragradient (both using Adam, Kingma and Ba, 2015), called LA–AltGAN and LA–ExtraGradient, respectively. We observe consistent performance and stability improvements at a negligible additional cost that does not require additional forward and backward passes, see § 4.
•
Finally, we report a new state-of-the-art result on CIFAR-10: we outperform the class-conditional BigGAN (Brock et al., 2019) on unconditional image generation (known to be harder than using the class labels), obtaining an FID score of $13.65$ with a $30$ times smaller model and $16$ times smaller minibatches, see Table 2.
2 Background
GANs formulation.
Given the data distribution $p_{d}$, the generator is a mapping $G:z\mapsto x$, where $z$ is sampled from a known distribution $z\sim p_{z}$ and ideally $x\sim p_{d}$.
The discriminator $D:x\mapsto D(x)\in[0,1]$ is a binary classifier whose output represents a conditional probability estimate that an $x$ sampled from a balanced mixture of real data from $p_{d}$ and $G$-generated data is actually real.
The optimization of GAN is formulated as a differentiable two-player game where the generator $G$ with parameters ${\bm{\theta}}$, and the discriminator $D$ with parameters ${\bm{\varphi}}$, aim at minimizing their own cost function ${\mathcal{L}}^{{\bm{\theta}}}$ and ${\mathcal{L}}^{{\bm{\varphi}}}$, respectively, as follows:
$${\bm{\theta}}^{\star}\in\operatorname*{arg\,min}_{{\bm{\theta}}\in\Theta}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}},{\bm{\varphi}}^{\star})\qquad\text{and}\qquad{\bm{\varphi}}^{\star}\in\operatorname*{arg\,min}_{{\bm{\varphi}}\in\Phi}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}^{\star},{\bm{\varphi}})\,.$$
(2P-G)
When ${\mathcal{L}}^{{\bm{\theta}}}=-{\mathcal{L}}^{{\bm{\varphi}}}=:{\mathcal{L}}$ this game is called a zero-sum game and (2P-G) is a minimax problem:
$$\min_{{\bm{\theta}}\in\Theta}\max_{{\bm{\varphi}}\in\Phi}\,{\mathcal{L}}({\bm{\theta}},{\bm{\varphi}})$$
(SP)
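To make the minimax dynamics of (SP) concrete, the following minimal sketch (illustrative only; the scalar loss ${\mathcal{L}}(\theta,\varphi)=\theta\varphi$ and the step size are our own choices, not taken from the paper) implements simultaneous gradient descent ascent on a zero-sum toy game. On this bilinear loss the squared distance to the equilibrium grows by exactly $(1+\eta^{2})$ per step, so GDA spirals outward rather than converging:

```python
def gda_step(theta, phi, eta, grad_theta, grad_phi):
    """One simultaneous GDA step on a zero-sum loss L:
    the min-player descends its gradient, the max-player ascends."""
    new_theta = theta - eta * grad_theta(theta, phi)
    new_phi = phi + eta * grad_phi(theta, phi)
    return new_theta, new_phi

# Illustrative zero-sum loss L(theta, phi) = theta * phi, equilibrium (0, 0).
grad_theta = lambda th, ph: ph  # dL/dtheta
grad_phi = lambda th, ph: th    # dL/dphi

theta, phi = 1.0, 1.0
for _ in range(100):
    theta, phi = gda_step(theta, phi, 0.1, grad_theta, grad_phi)

# The squared distance to (0, 0) grows by exactly (1 + eta^2) per step,
# so the iterates spiral away from the equilibrium.
```

This divergence on even the simplest zero-sum game is what motivates the alternative minimax methods discussed below.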
Minimax optimization methods.
As GDA does not converge for some simple convex-concave game, Korpelevich (1976) proposed the extragradient method, where a “prediction” step is performed to obtain an extrapolated point $({\bm{\theta}}_{t+\frac{1}{2}},{\bm{\varphi}}_{t+\frac{1}{2}})$ using GDA, and the gradients at the extrapolated point are then applied to the current iterate $({\bm{\theta}}_{t},{\bm{\varphi}}_{t})$ as follows:
$$\text{Extrapolation:}\left\{\begin{aligned}{\bm{\theta}}_{t+\frac{1}{2}}&={\bm{\theta}}_{t}-\eta\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})\\{\bm{\varphi}}_{t+\frac{1}{2}}&={\bm{\varphi}}_{t}-\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})\end{aligned}\right.\qquad\text{Update:}\left\{\begin{aligned}{\bm{\theta}}_{t+1}&={\bm{\theta}}_{t}-\eta\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}}_{t+\frac{1}{2}},{\bm{\varphi}}_{t+\frac{1}{2}})\\{\bm{\varphi}}_{t+1}&={\bm{\varphi}}_{t}-\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}_{t+\frac{1}{2}},{\bm{\varphi}}_{t+\frac{1}{2}})\end{aligned}\right.,$$
(EG)
where $\eta$ denotes the step size.
In the context of zero-sum games, the extragradient method converges for any convex-concave function ${\mathcal{L}}$ and any closed convex sets $\Theta$ and $\Phi$, (see Harker and Pang, 1990, Thm. 12.1.11).
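A minimal sketch of one (EG) step, assuming plain gradient steps as the base update and a scalar bilinear toy loss ${\mathcal{L}}(\theta,\varphi)=\theta\varphi$ (an illustrative choice of ours, on which plain GDA is known to diverge):

```python
def extragradient_step(theta, phi, eta, grad_theta, grad_phi):
    """One extragradient (EG) step for a zero-sum game: extrapolate with
    plain GDA, then update the *current* iterate using the gradients
    evaluated at the extrapolated point."""
    # Extrapolation (from the current iterate).
    theta_half = theta - eta * grad_theta(theta, phi)
    phi_half = phi + eta * grad_phi(theta, phi)
    # Update: gradients at the extrapolated point, applied to (theta, phi).
    new_theta = theta - eta * grad_theta(theta_half, phi_half)
    new_phi = phi + eta * grad_phi(theta_half, phi_half)
    return new_theta, new_phi

# Illustrative zero-sum loss L(theta, phi) = theta * phi: EG contracts
# towards the equilibrium (0, 0), unlike plain GDA.
grad_theta = lambda th, ph: ph
grad_phi = lambda th, ph: th

theta, phi = 1.0, 1.0
for _ in range(200):
    theta, phi = extragradient_step(theta, phi, 0.3, grad_theta, grad_phi)
```

The "prediction" step is what damps the rotation: the update uses gradients evaluated half a rotation ahead of the current iterate.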
The joint vector field.
Mescheder et al. (2017) and Balduzzi et al. (2018) argue that the vector field obtained by concatenating the gradients of the two players gives more insight into the dynamics than studying the loss surface. The joint vector field (JVF) and its Jacobian (note that in general $v(\cdot)$ is not a gradient vector field and, unlike a Hessian, $v^{\prime}(\cdot)$ is not symmetric) are defined as:
$$v({\bm{\theta}},{\bm{\varphi}})=\begin{pmatrix}\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}},{\bm{\varphi}})\\ \nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}},{\bm{\varphi}})\end{pmatrix}\,,\qquad v^{\prime}({\bm{\theta}},{\bm{\varphi}})=\begin{pmatrix}\nabla_{{\bm{\theta}}}^{2}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}},{\bm{\varphi}}) & \nabla_{{\bm{\varphi}}}\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}},{\bm{\varphi}})\\ \nabla_{{\bm{\theta}}}\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}},{\bm{\varphi}}) & \nabla_{{\bm{\varphi}}}^{2}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}},{\bm{\varphi}})\end{pmatrix}\,,\text{ resp.}$$
(JVF)
(JVF)
Rotational component of the game vector field.
Berard et al. (2020) show empirically that GANs converge to a locally stable stationary point (Verhulst, 1990, LSSP) that is not a differential Nash equilibrium, defined as a point where the joint vector field vanishes and where the Hessians of both players are positive definite.
LSSP is defined as a point $({\bm{\theta}}^{\star},{\bm{\varphi}}^{\star})$ where:
$$v({\bm{\theta}}^{\star},{\bm{\varphi}}^{\star})=0\,,\qquad\text{and}\qquad\mathcal{R}(\lambda)>0\,,\;\forall\lambda\in Sp(v^{\prime}({\bm{\theta}}^{\star},{\bm{\varphi}}^{\star}))\,,$$
(LSSP)
where $Sp(\cdot)$ denotes the spectrum of $v^{\prime}(\cdot)$ and $\mathcal{R}(\cdot)$ the real part.
In summary:
(i) if all the eigenvalues of $v^{\prime}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})$ have positive real part, the point $({\bm{\theta}}_{t},{\bm{\varphi}}_{t})$ is an LSSP, and
(ii) if the eigenvalues of $v^{\prime}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})$ have a nonzero imaginary part, the dynamics of the game exhibit rotations.
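The rotational case (ii) can be checked numerically. For the zero-sum bilinear loss ${\mathcal{L}}={\bm{\theta}}^{\top}{\bm{A}}{\bm{\varphi}}$, the Jacobian of the joint vector field is the antisymmetric block matrix with blocks ${\bm{A}}$ and $-{\bm{A}}^{\top}$ off the diagonal, so its spectrum is purely imaginary and the dynamics are pure rotation (the matrix size and random seed below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))  # arbitrary coupling matrix

# For L = theta^T A phi, the joint vector field is
# v(theta, phi) = (A @ phi, -A.T @ theta); its Jacobian is antisymmetric:
J = np.block([[np.zeros((d, d)), A],
              [-A.T, np.zeros((d, d))]])

eigvals = np.linalg.eigvals(J)
print(np.allclose(eigvals.real, 0.0))  # True: purely imaginary spectrum
```

The eigenvalues come in pairs $\pm i\sigma$, where $\sigma$ ranges over the singular values of ${\bm{A}}$.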
Impact of noise due to the stochastic gradient estimates on games.
Chavdarova et al. (2019) point out that noise impedes game optimization more than it does minimization, and show that there exists a class of zero-sum games for which the stochastic extragradient method diverges. Intuitively, bounded noise of the stochastic gradient hurts convergence because, due to the properties of $v^{\prime}(\cdot)$, the noisy gradient points with higher probability in a direction that drives the iterate away from the equilibrium (see Fig. 1, Chavdarova et al., 2019).
3 Lookahead for minimax objectives
Lookahead for single objective optimization.
In the context of minimization, Zhang et al. (2019) recently proposed the “Lookahead” algorithm, where at every step $t$:
(i) a copy of the current iterate ${\bm{\omega}}_{t}$ is made: ${\bm{\omega}}_{t}^{\mathcal{P}}\leftarrow{\bm{\omega}}_{t}$ (where $\mathcal{P}$ stands for “prediction”),
(ii) ${\bm{\omega}}_{t}^{\mathcal{P}}$ is then updated $k$ times, yielding ${\bm{\omega}}_{t+k}^{\mathcal{P}}$, and finally
(iii) the actual update ${\bm{\omega}}_{t+1}$ is obtained as a point on the line between the current iterate ${\bm{\omega}}_{t}$ and the predicted one ${\bm{\omega}}_{t+k}^{\mathcal{P}}$:
$${\bm{\omega}}_{t+1}\leftarrow{\bm{\omega}}_{t}+\alpha({\bm{\omega}}_{t+k}^{\mathcal{P}}-{\bm{\omega}}_{t})\,,\qquad\alpha\in[0,1]\,.$$
(LA)
Note that Lookahead uses two additional hyperparameters:
(i) $k$, the number of prediction steps, and
(ii) $\alpha$, which controls how large a step is made towards the predicted iterate ${\bm{\omega}}^{\mathcal{P}}$: the larger $\alpha$, the closer the update moves to it, and for $\alpha=1$ (LA) reduces to the inner optimizer (Lookahead has no effect).
Besides introducing extra hyperparameters, Lookahead was shown to make the inner optimizer more resilient to the choice of its hyperparameters, as well as to reduce the variance of the gradient estimates (Zhang et al., 2019).
Using Lookahead, Zhang et al. (2019) achieved faster convergence across different tasks with minimal computational overhead. Recently, by viewing Lookahead as a multi-agent optimization with two agents, Wang et al. (2020) proved under certain assumptions that Lookahead converges to a first-order stationary point.
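Steps (i)–(iii) above can be sketched in a few lines (a simplification with a generic `inner_step` callable standing in for the inner optimizer; the quadratic example objective is our own):

```python
import numpy as np

def lookahead(w0, inner_step, k, alpha, n_rounds):
    """Lookahead meta-optimizer for minimization, steps (i)-(iii):
    snapshot the slow weights, run the inner optimizer k times on a fast
    copy, then interpolate back with rate alpha, as in (LA)."""
    w_slow = np.asarray(w0, dtype=float)
    for _ in range(n_rounds):
        w_fast = w_slow.copy()                        # (i) copy the iterate
        for _ in range(k):                            # (ii) k inner updates
            w_fast = inner_step(w_fast)
        w_slow = w_slow + alpha * (w_fast - w_slow)   # (iii) interpolation
    return w_slow

# Example inner optimizer: gradient descent on f(w) = ||w||^2 / 2.
gd = lambda w: w - 0.1 * w
w = lookahead([1.0, -2.0], gd, k=5, alpha=0.5, n_rounds=50)
```

Because the slow weights only move a fraction $\alpha$ of the fast trajectory, they average out part of the variance of the inner updates.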
Lookahead–minimax.
As games in the general case combine a “potential” (attractive) and a Hamiltonian vector field, it is natural to extend Lookahead to games: besides reducing the variance, taking a point on the line between two points of a cyclic trajectory brings us closer to the solution, as illustrated in Fig. 1.
Alg. 1 summarizes the proposed Lookahead–minimax algorithm.
For fair comparison, we count each update of both players as one step, while also covering the case of a different update ratio $r$ between the two players (lines 4–6).
To mitigate oscillations, we apply (LA) in the joint parameter space $({\bm{\theta}},{\bm{\varphi}})$. More precisely, every $k$ steps we perform a backtracking step (LA) towards a previously stored checkpoint $({\bm{\theta}},{\bm{\varphi}})$ of both players, see lines 10 & 11.
Lookahead vs. Lookahead–minimax.
Note how Alg. 1 differs from applying Lookahead to each player separately. The obvious difference arises for $r\neq 1$, where the backtracking is done after different numbers of updates of ${\bm{\varphi}}$ and ${\bm{\theta}}$.
The key difference, however, is that after applying (LA) to one of the players, we do not use the resulting interpolated point to update the parameters of the other player (a version we refer to as “Alternating–Lookahead”, see § C).
Instead, (LA) is applied to both players at the same time, which we found outperforms the former. Unless otherwise emphasized, we focus on this “joint” version, as described in Alg. 1.
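The joint backtracking can be sketched as follows (a simplification of Alg. 1 assuming update ratio $r=1$, with a generic `joint_step` in place of the inner optimizer; the bilinear toy example at the bottom is an illustrative choice of ours):

```python
import numpy as np

def lookahead_minimax(theta0, phi0, joint_step, k, alpha, n_rounds):
    """Sketch of joint Lookahead-minimax (update ratio r = 1): the
    backtracking step (LA) is applied to BOTH players at once, against
    snapshots of both players taken at the same moment."""
    theta, phi = np.asarray(theta0, float), np.asarray(phi0, float)
    for _ in range(n_rounds):
        theta_snap, phi_snap = theta.copy(), phi.copy()  # joint snapshot
        for _ in range(k):                               # k fast steps
            theta, phi = joint_step(theta, phi)
        # Joint backtracking: both players are interpolated simultaneously,
        # so neither player ever reacts to an interpolated opponent.
        theta = theta_snap + alpha * (theta - theta_snap)
        phi = phi_snap + alpha * (phi - phi_snap)
    return theta, phi

# Fast steps: simultaneous GDA on the bilinear toy loss L = theta * phi,
# on which plain GDA diverges.
eta = 0.3
gda = lambda th, ph: (th - eta * ph, ph + eta * th)
theta, phi = lookahead_minimax(1.0, 0.0, gda, k=10, alpha=0.5, n_rounds=100)
```

On this toy loss the fast GDA iterates rotate around the equilibrium while drifting outward, and the joint interpolation cuts across the cycle, yielding a contraction towards the solution.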
3.1 Motivating example: the bilinear game
We argue that Lookahead–minimax allows for improved stability and performance on minimax problems for two main reasons:
(i) it allows for faster optimization in the presence of the Hamiltonian component of the associated vector field; and
(ii) it reduces the noise by making more conservative steps.
In the following we disentangle the two. In § 3.1.1 we show that Lookahead–minimax converges fast in the full-batch setting, i.e. in the absence of noise; moreover, although GDA is known to diverge on this example, it converges when combined with Lookahead.
In § 3.1.2 we consider the challenging problem of Chavdarova et al. (2019), specifically designed to induce high variance in the gradient estimates, and show that, apart from the Stochastic Variance Reduced Extragradient proposed therein (Chavdarova et al., 2019, SVRE), Lookahead–minimax is the only method among those of (Gidel et al., 2019a, §7.1) that converges on this experiment.
More precisely, we consider the following bilinear problem:
$$\min_{{\bm{\theta}}\in\mathbb{R}^{d}}\max_{{\bm{\varphi}}\in\mathbb{R}^{d}}{\mathcal{L}}({\bm{\theta}},{\bm{\varphi}})=\min_{{\bm{\theta}}\in\mathbb{R}^{d}}\max_{{\bm{\varphi}}\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n}\left({\bm{\theta}}^{\top}{\bm{b}}_{i}+{\bm{\theta}}^{\top}{\bm{A}}_{i}{\bm{\varphi}}+{\bm{c}}_{i}^{\top}{\bm{\varphi}}\right),$$
(1)
with ${\bm{\theta}},{\bm{\varphi}},{\bm{b}},{\bm{c}}\in\mathbb{R}^{d}$ and ${\bm{A}}\in\mathbb{R}^{n\times d\times d}$.
We set $n=d=100$, and draw $[{\bm{A}}_{i}]_{kl}=\delta_{kli}$ and $[{\bm{b}}_{i}]_{k},[{\bm{c}}_{i}]_{k}\sim\mathcal{N}(0,1/d)\,,\;1\leq k,l\leq d$, where $\delta_{kli}=1$ if $k=l=i$, and $0$ otherwise.
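Under the stated sampling, problem (1) can be instantiated in a few lines (a sketch with helper names of our own choosing; note that for this choice of ${\bm{A}}_{i}$ the sample average $\bar{{\bm{A}}}=\frac{1}{n}\sum_{i}{\bm{A}}_{i}$ equals $\frac{1}{n}{\bm{I}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = d = 100

# [A_i]_{kl} = delta_{kli}: A_i is all zeros except a 1 at entry (i, i).
A = np.zeros((n, d, d))
for i in range(n):
    A[i, i, i] = 1.0
# N(0, 1/d) denotes variance 1/d, i.e. standard deviation sqrt(1/d).
b = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, d))
c = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, d))

# Averaging over the n samples; A_bar = I/n for this choice of A_i.
A_bar, b_bar, c_bar = A.mean(axis=0), b.mean(axis=0), c.mean(axis=0)

def full_batch_grads(theta, phi):
    """Full-batch gradients of the objective in (1)."""
    g_theta = b_bar + A_bar @ phi     # gradient w.r.t. theta
    g_phi = A_bar.T @ theta + c_bar   # gradient w.r.t. phi
    return g_theta, g_phi
```

Mini-batch variants simply replace the averages over all $n$ samples with averages over a random subset, which is what induces the gradient noise studied in § 3.1.2.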
3.1.1 The full-batch setting
In the batch setting each parameter update uses the full dataset.
In Fig. 2 we compare:
(i) GDA with learning rates $\eta=10^{-4}$ and $\eta=0.3$ (in blue), which oscillates around the optimum for a small enough learning rate and diverges otherwise;
(ii) Unroll-Y, where the max-player is unrolled $k$ steps before updating the min-player, as in (Metz et al., 2017);
(iii) Unroll-XY, where both players are unrolled $k$ steps with a fixed opponent, and the actual updates are done with an unrolled opponent (see § A);
(iv) LA–GDA with $\alpha=0.5$ and $\alpha=0.4$ (in red and pink, resp.), which combines Alg. 1 with GDA;
(v) ExtraGradient–Eq. EG; as well as
(vi) LA–ExtraGrad, which combines Alg. 1 with ExtraGradient.
See § A for definitions of all the algorithms used, as well as details on the implementation. Note that all algorithms are normalized by the number of passes, where one pass counts a forward and a backward pass.
Interestingly, we observe that Lookahead–Minimax allows GDA to converge on this example, and moreover speeds up the convergence of ExtraGradient.
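Although Alg. 1 is not reproduced in this excerpt, its structure as described above ($k$ fast inner steps, followed by a joint interpolation with coefficient $\alpha$ back toward the snapshot of both players) admits a minimal sketch, assuming the usual sign convention in which ${\bm{\theta}}$ descends and ${\bm{\varphi}}$ ascends:

```python
import numpy as np

def la_gda(theta0, phi0, grad_theta, grad_phi, eta=0.3, alpha=0.5, k=6, n_outer=100):
    """Sketch of Lookahead-minimax wrapped around alternating GDA.

    grad_theta/grad_phi return the gradients of the loss w.r.t. each player;
    theta (the min-player) descends, phi (the max-player) ascends.
    """
    theta, phi = theta0.copy(), phi0.copy()
    for _ in range(n_outer):
        snap_t, snap_p = theta.copy(), phi.copy()   # joint snapshot of both players
        for _ in range(k):                          # k fast (inner) GDA steps
            phi = phi + eta * grad_phi(theta, phi)
            theta = theta - eta * grad_theta(theta, phi)
        theta = snap_t + alpha * (theta - snap_t)   # joint backtracking step:
        phi = snap_p + alpha * (phi - snap_p)       # both players at once
    return theta, phi
```

On the scalar game $\min_{\theta}\max_{\varphi}\theta\varphi$, the inner alternating-GDA iterates only rotate around the optimum $(0,0)$, while the interpolation step pulls the trajectory inward, so the composite map contracts.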
3.1.2 The stochastic setting
In this section, we show that, besides SVRE, Lookahead–minimax also converges on (1). In addition, we test all the methods of (Gidel et al., 2019a, §7.1) using minibatches of sizes $B\in\{1,16,64\}$, sampling without replacement.
In particular, we tested:
(i) the Adam method combined with GDA (shown in blue);
(ii) ExtraGradient–Eq. EG; as well as
(iii) ExtraAdam, proposed by (Gidel et al., 2019a);
(iv) our proposed method LA-GDA (Alg. 1) combined with GDA; as well as
(v) SVRE (Chavdarova et al., 2019, Alg. $1$), for completeness.
Fig. 3 depicts our results.
See § A for details on the implementation and choice of hyperparameters.
We observe that, besides its good performance on games in the batch setting, LA-GDA also copes well with large variance of the gradient estimates, and it converges without restarting.
4 Experiments
In this section, we empirically benchmark Lookahead–minimax (Alg. 1) for training GANs.
4.1 Experimental setup
Datasets.
For empirical comparison we used the following image datasets:
(i) MNIST (Lecun and Cortes, 1998),
(ii) CIFAR-10 (Krizhevsky, 2009, §3), and
(iii) SVHN (Netzer et al., 2011),
using resolution $28\!\times\!28$ for MNIST
and $3\!\times\!32\!\times\!32$ for the remaining datasets.
Metrics.
We used the Inception score (IS, Salimans et al., 2016) and the Fréchet Inception distance (FID, Heusel et al., 2017), the most commonly used performance metrics for image synthesis.
We used their respective original implementations and a sample size of $50000$, see § D for details. We compute FID and IS every $10000$ iterations of each algorithm; see § D.1 for details.
DNN architectures.
For experiments on MNIST, we used the DCGAN architectures (Radford et al., 2016), described in § D.2.1.
For SVHN and CIFAR-10, we used ResNet architectures, replicating the setup of (Miyato et al., 2018; Chavdarova et al., 2019), described in detail in § D.2.2.
Optimization methods.
We conduct experiments using the following optimization methods for GANs:
(i) AltGAN: the standard alternating GAN,
(ii) ExtraGrad: the extragradient method, as well as
(iii) Unrolled–GAN: proposed by Metz et al. (2017).
We combine Lookahead-minimax with (i) and (ii), referred to as LA–AltGAN and LA–ExtraGrad, respectively, or jointly as LA–GAN for brevity.
All methods use the Adam optimizer (Kingma and Ba, 2015).
We also compute Exponential Moving Average (EMA) and uniform averaging of the running iterates, see their definitions in § B.
4.2 Results and Discussion
Comparison with baselines.
Table 1 summarizes our comparison of combining Alg. 1 with AltGAN and ExtraGrad.
On CIFAR-10, we observe that the iterates (column “no avg”) of LA–AltGAN and LA–ExtraGrad obtain notably better performances than those of the corresponding baselines, and using EMA on LA–AltGAN and LA–ExtraGrad further improved the FID and IS scores obtained with LA–AltGAN.
On SVHN, we observe similar relative comparisons between LA–GAN and the baselines, but, as opposed to CIFAR-10, we also see that in some cases uniform averaging reduces performance and that EMA for AltGAN did not provide competitive results, as the iterates diverged relatively early.
Although we obtain our best results with EMA, we see that the iterates of LA–GAN without averaging reach state-of-art results, and that the relative improvement of EMA is reduced compared to the baseline.
Finally, we report our results on MNIST, where the training of all baselines is stable, to investigate whether Lookahead-minimax also yields better final performance. We run each experiment for $100$K iterations, although all methods converge earlier. The best FID scores of the iterates (column “no avg”) are obtained with LA–ExtraGrad, and with EMA for LA–ExtraGrad and Unrolled–GAN. Note, however, that Unrolled–GAN is computationally much more expensive (in the order of the ratio $4:22$, as we used $20$ steps of unrolling, which gave the best results, see § D). This confirms that in the absence of noise and for stable baselines, LA–GAN still yields improvement on games. Fig. 4 shows that after convergence LA–GAN exhibits no rotations.
Benchmark on CIFAR-10 using reported results.
Table 2 summarizes the best FID and IS scores on CIFAR-10 reported recently. Although using class labels, as in conditional GANs, is known to improve GAN performance (Radford et al., 2016), we outperform BigGAN (Brock et al., 2019) on CIFAR-10.
Notably, our model and BigGAN have $5.1$M and $158.3$M parameters in total, respectively, and we use minibatch size of $128$, whereas BigGAN uses $2048$ samples.
Additional memory & computational cost. The additional memory cost of Lookahead-minimax is negligible, as it only requires storing one extra copy per model (${\bm{\theta}}^{\mathcal{P}}\text{ and }{\bm{\varphi}}^{\mathcal{P}}$ in Alg. 1). Note that EMA and uniform averaging have the same extra memory requirement; both are updated at each step, whereas LA–GAN is updated once every $k$ iterations.
On the choice of $\alpha$ and $k$.
In all our experiments we fixed $\alpha=0.5$ and tested a few values of $k$, keeping $k$ fixed throughout training. We observe that all values of $k$ improve upon the baseline, both in terms of stability and performance. We also observed that a smaller value of $k$ makes LA–AltGAN more stable: if $k$ is large, the algorithm behaves similarly to AltGAN and quickly diverges. On the other hand, when combining Lookahead-minimax with ExtraGradient we could use a larger $k$, as ExtraGradient is more stable and usually diverges later than AltGAN.
Stability of convergence.
We observe that LA–GAN consistently improved the stability of its respective baseline, see Fig. 5. Although we used an update ratio of $1:5$ for $G:D$ (known to improve stability), our baselines diverged in all experiments, whereas only a few LA–GAN runs diverged (and later in training relative to the baseline); see additional results in § E.
5 Related work
Parameter averaging.
In the context of convex single-objective optimization, taking an arithmetic average of the parameters, as in Polyak and Juditsky (1992); Ruppert (1988), is well known to yield faster convergence for convex functions and to allow the use of larger constant step sizes in the case of stochastic optimization (Dieuleveut et al., 2017).
Parameter averaging has recently gained more interest in deep learning in general (Garipov et al., 2018), in natural language processing (Merity et al., 2018), and particularly in GANs (Yazıcı et al., 2019) where researchers often report the performance of a running uniform or exponential moving average of the iterates.
Such averaging as a post-processing after training is fundamentally different from immediately applying averages during training.
Lookahead (Zhang et al., 2019), our interest here, is in spirit closer to extrapolation methods (Korpelevich, 1976), which rely on gradients taken not at the current iterate but at an extrapolated point of the current trajectory.
For highly complex optimization landscapes such as those in deep learning, using gradients at perturbations of the current iterate has a desirable smoothing effect that is known to improve training speed and stability in non-convex single-objective optimization (Wen et al., 2018; Haruki et al., 2019).
GANs.
Several proposed methods for GANs are motivated by the “recurrent dynamics”. For example,
(i) Gidel et al. (2019a) and Yadav et al. (2018) use prediction steps to stabilize GANs,
(ii) Metz et al. (2017) update the generator using an “unrolled” version of the discriminator,
(iii) Daskalakis et al. (2018) propose Optimistic Mirror Descent (OMD) for training Wasserstein GANs (Arjovsky et al., 2017),
(iv) Balduzzi et al. (2018) propose the Symplectic Gradient Adjustment (SGA), and
(v) Chavdarova et al. (2019) propose the SVRE method, which combines the extragradient method with the stochastic variance reduced gradient (SVRG, Johnson and Zhang, 2013).
Besides its simplicity, the key benefit of LA–GAN is that it handles well both the rotations of the vector field as well as noise from stochasticity, thus performing well on real–world applications.
6 Conclusion
Motivated by the adversarial component of games and the negative impact of noise on games, we proposed an extension of the Lookahead algorithm to games, called “Lookahead–minimax”. On the bilinear toy example we observe that combining Lookahead–minimax with standard gradient methods converges, and that Lookahead–minimax handles high variance of the gradient estimates well.
Exponential moving averaging of the iterates is known to help obtain improved performances for GANs, yet it does not impact the actual iterates, hence does not stop the algorithm from (early) divergence.
Lookahead–minimax goes beyond such averaging, requires less computation than running averages, and it is straightforward to implement.
It can be applied to any optimization method, and in practice it consistently improves the stability of its respective baseline.
Performance-wise, using Lookahead–minimax we obtained a new state-of-the-art result on CIFAR–10 of $13.65$ FID, outperforming BigGAN, which uses the annotated classes and requires a $30$-times larger model.
As Lookahead–minimax introduces two additional hyperparameters, future directions include developing adaptive schemes for these coefficients throughout training, which could further speed up the convergence of Lookahead–minimax.
Broader Impact
By simplifying the training process and lowering its computational requirements, we hope this work will stimulate the research in new fields of applications for GANs and minimax problems in general. Furthermore, we hope it will bridge the gap between obtaining well-performing GANs using large computational resources and common setups of limited computation.
By improving GAN performance, we inevitably also increase the potential for misuse of this technology: those misuses we already know today, such as fake impersonation through generated video and audio, and those still to come that we need to prevent. As a mitigation strategy, we believe the scientific community should promote better education on the matter, more accountable information streams, and more adequate laws. We are hopeful that newly arising challenges will create new research areas (e.g. the ASVspoof challenge), helping to restore the balance.
Acknowledgments
TC was funded in part by the grant
200021-169112 from the Swiss National Science Foundation, and MP was
funded by the grant 40516.1 from the Swiss Innovation Agency.
TC would like to thank Hugo Berard for insightful discussions on the optimization landscape of GANs and sharing their code,
as well as David Balduzzi for insightful discussions on $n$-player differentiable games.
References
Arjovsky et al. (2017)
M. Arjovsky, S. Chintala, and L. Bottou.
Wasserstein generative adversarial networks.
In ICML, 2017.
Balduzzi et al. (2018)
D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel.
The mechanics of n-player differentiable games.
In ICML, 2018.
Berard et al. (2020)
H. Berard, G. Gidel, A. Almahairi, P. Vincent, and S. Lacoste-Julien.
A closer look at the optimization landscapes of generative
adversarial networks.
In ICLR, 2020.
Brock et al. (2019)
A. Brock, J. Donahue, and K. Simonyan.
Large scale GAN training for high fidelity natural image synthesis.
In ICLR, 2019.
Bruck (1977)
R. E. Bruck.
On the weak convergence of an ergodic iteration for the solution of
variational inequalities for monotone operators in Hilbert space.
Journal of Mathematical Analysis and Applications, 1977.
Chavdarova et al. (2019)
T. Chavdarova, G. Gidel, F. Fleuret, and S. Lacoste-Julien.
Reducing noise in GAN training with variance reduced extragradient.
In NeurIPS, 2019.
Daskalakis et al. (2018)
C. Daskalakis, A. Ilyas, V. Syrgkanis, and H. Zeng.
Training GANs with optimism.
In ICLR, 2018.
Defazio and Bottou (2018)
A. Defazio and L. Bottou.
On the ineffectiveness of variance reduced optimization for deep
learning.
arXiv:1812.04529, 2018.
Dieuleveut et al. (2017)
A. Dieuleveut, A. Durmus, and F. Bach.
Bridging the gap between constant step size stochastic gradient
descent and Markov chains.
arXiv:1707.06386, 2017.
Garipov et al. (2018)
T. Garipov, P. Izmailov, D. Podoprikhin, D. Vetrov, and A. G. Wilson.
Loss surfaces, mode connectivity, and fast ensembling of DNNs.
arXiv:1802.10026, 2018.
Gidel et al. (2019a)
G. Gidel, H. Berard, P. Vincent, and S. Lacoste-Julien.
A variational inequality perspective on generative adversarial nets.
In ICLR, 2019a.
Gidel et al. (2019b)
G. Gidel, R. A. Hemmat, M. Pezeshki, R. L. Priol, G. Huang, S. Lacoste-Julien,
and I. Mitliagkas.
Negative momentum for improved game dynamics.
In AISTATS, 2019b.
Glorot and Bengio (2010)
X. Glorot and Y. Bengio.
Understanding the difficulty of training deep feedforward neural
networks.
In AISTATS, 2010.
Goodfellow (2016)
I. Goodfellow.
NIPS 2016 tutorial: Generative adversarial networks.
arXiv:1701.00160, 2016.
Goodfellow et al. (2014)
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, and Y. Bengio.
Generative adversarial nets.
In NIPS, 2014.
Harker and Pang (1990)
P. T. Harker and J.-S. Pang.
Finite-dimensional variational inequality and nonlinear
complementarity problems: a survey of theory, algorithms and applications.
Mathematical programming, 1990.
Haruki et al. (2019)
K. Haruki, T. Suzuki, Y. Hamakawa, T. Toda, R. Sakai, M. Ozawa, and M. Kimura.
Gradient Noise Convolution (GNC): Smoothing Loss Function for
Distributed Large-Batch SGD.
arXiv:1906.10822, 2019.
He et al. (2015)
K. He, X. Zhang, S. Ren, and J. Sun.
Deep residual learning for image recognition.
arXiv:1512.03385, 2015.
Heusel et al. (2017)
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter.
GANs trained by a two time-scale update rule converge to a local
Nash equilibrium.
In NIPS, 2017.
Ioffe and Szegedy (2015)
S. Ioffe and C. Szegedy.
Batch normalization: Accelerating deep network training by reducing
internal covariate shift.
In ICML, 2015.
Jin et al. (2019)
C. Jin, P. Netrapalli, and M. I. Jordan.
Minmax optimization: Stable limit points of gradient descent ascent
are locally optimal.
arXiv:1902.00618, 2019.
Johnson and Zhang (2013)
R. Johnson and T. Zhang.
Accelerating stochastic gradient descent using predictive variance
reduction.
In NIPS, 2013.
Karras et al. (2018)
T. Karras, T. Aila, S. Laine, and J. Lehtinen.
Progressive growing of GANs for improved quality, stability, and
variation.
In ICLR. OpenReview.net, 2018.
Kingma and Ba (2015)
D. P. Kingma and J. Ba.
Adam: A method for stochastic optimization.
In ICLR, 2015.
Korpelevich (1976)
G. Korpelevich.
The extragradient method for finding saddle points and other
problems.
Matecon, 1976.
Krizhevsky (2009)
A. Krizhevsky.
Learning Multiple Layers of Features from Tiny Images.
Master’s thesis, 2009.
Lecun and Cortes (1998)
Y. Lecun and C. Cortes.
The MNIST database of handwritten digits.
1998.
URL http://yann.lecun.com/exdb/mnist/.
Merity et al. (2018)
S. Merity, N. S. Keskar, and R. Socher.
Regularizing and optimizing LSTM language models.
In ICLR, 2018.
Mescheder et al. (2017)
L. Mescheder, S. Nowozin, and A. Geiger.
The numerics of GANs.
In NIPS, 2017.
Mescheder et al. (2018)
L. Mescheder, A. Geiger, and S. Nowozin.
Which Training Methods for GANs do actually Converge?
In ICML, 2018.
Metz et al. (2017)
L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein.
Unrolled generative adversarial networks.
In ICLR, 2017.
Miyato et al. (2018)
T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida.
Spectral normalization for generative adversarial networks.
In ICLR, 2018.
Monderer and Shapley (1996)
D. Monderer and L. S. Shapley.
Potential games.
Games and economic behavior, 1996.
Netzer et al. (2011)
Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng.
Reading digits in natural images with unsupervised feature learning.
2011.
URL http://ufldl.stanford.edu/housenumbers/.
Omidshafiei et al. (2017)
S. Omidshafiei, J. Pazis, C. Amato, J. P. How, and J. Vian.
Deep decentralized multi-task multi-agent reinforcement learning
under partial observability.
In ICML, 2017.
Polyak and Juditsky (1992)
B. T. Polyak and A. B. Juditsky.
Acceleration of stochastic approximation by averaging.
SIAM Journal on Control and Optimization, 1992.
doi: 10.1137/0330046.
Radford et al. (2016)
A. Radford, L. Metz, and S. Chintala.
Unsupervised representation learning with deep convolutional
generative adversarial networks.
In ICLR, 2016.
Ruppert (1988)
D. Ruppert.
Efficient estimations from a slowly convergent Robbins-Monro process.
Technical report, Cornell University Operations Research and
Industrial Engineering, 1988.
Salimans et al. (2016)
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen.
Improved techniques for training GANs.
In NIPS, 2016.
Shallue et al. (2018)
C. J. Shallue, J. Lee, J. Antognini, J. Sohl-Dickstein, R. Frostig, and G. E.
Dahl.
Measuring the effects of data parallelism on neural network training.
arXiv:1811.03600, 2018.
Song and Ermon (2019)
Y. Song and S. Ermon.
Generative modeling by estimating gradients of the data distribution.
In NeurIPS, pages 11895–11907, 2019.
Szegedy et al. (2015)
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna.
Rethinking the inception architecture for computer vision.
arXiv:1512.00567, 2015.
Verhulst (1990)
F. Verhulst.
Nonlinear Differential Equations and Dynamical Systems.
1990.
ISBN 0387506284.
Von Neumann and Morgenstern (1944)
J. Von Neumann and O. Morgenstern.
Theory of games and economic behavior.
Princeton University Press, 1944.
Wang et al. (2020)
J. Wang, V. Tantia, N. Ballas, and M. Rabbat.
Lookahead converges to stationary points of smooth non-convex
functions.
In ICASSP, 2020.
Wen et al. (2018)
W. Wen, Y. Wang, F. Yan, C. Xu, C. Wu, Y. Chen, and H. Li.
Smoothout: Smoothing out sharp minima to improve generalization in
deep learning.
arXiv:1805.07898, 2018.
Yadav et al. (2018)
A. Yadav, S. Shah, Z. Xu, D. Jacobs, and T. Goldstein.
Stabilizing adversarial nets with prediction methods.
2018.
Yazıcı et al. (2019)
Y. Yazıcı, C.-S. Foo, S. Winkler, K.-H. Yap, G. Piliouras, and
V. Chandrasekhar.
The unusual effectiveness of averaging in GAN training.
In ICLR, 2019.
Zhang et al. (2019)
M. Zhang, J. Lucas, J. Ba, and G. E. Hinton.
Lookahead optimizer: k steps forward, 1 step back.
In NeurIPS. 2019.
Appendix A Experiments on the bilinear example
In this section we list the details of our implementation of the experiments on the bilinear example (1) presented in § 3.1. In particular:
(i) in § A.1 we list the implementation details of the benchmarked algorithms,
(ii) in § A.2 and A.3 we list the hyperparameters used in § 3.1.1 and § 3.1.2, respectively, and finally
(iii) in § A.4 we present visualizations aimed at improving the reader’s intuition of how Lookahead-minimax works on games.
A.1 Implementation details
Gradient Descent Ascent (GDA).
We use an alternating implementation of GDA where the players are updated sequentially, as follows:
$${\bm{\varphi}}_{t+1}={\bm{\varphi}}_{t}+\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t}),\hskip 30.0pt{\bm{\theta}}_{t+1}={\bm{\theta}}_{t}-\eta\nabla_{{\bm{\theta}}}{\mathcal{L}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t+1})$$
(GDA)
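The alternating update can be written as a one-step helper; we use the usual sign convention in which the min-player ${\bm{\theta}}$ descends and the max-player ${\bm{\varphi}}$ ascends on ${\mathcal{L}}$, matching Eq. (EG–ZS) below:

```python
def gda_step(theta, phi, grad_theta, grad_phi, eta):
    """One alternating GDA step: phi (the max-player) ascends first,
    then theta (the min-player) descends using the already-updated phi."""
    phi = phi + eta * grad_phi(theta, phi)
    theta = theta - eta * grad_theta(theta, phi)
    return theta, phi
```

On $\mathcal{L}=\theta\varphi$ with $\eta=0.3$, starting from $(1,1)$, one step yields $\varphi=1.3$ and $\theta=1-0.3\cdot 1.3=0.61$: the iterates rotate around the optimum rather than approaching it.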
ExtraGrad.
Our implementation of extragradient follows (EG), with ${\mathcal{L}}^{{\bm{\theta}}}(\cdot)=-{\mathcal{L}}^{{\bm{\varphi}}}(\cdot)$, thus:
$$\text{Extrapolation:}\left\{\begin{aligned}{\bm{\theta}}_{t+\frac{1}{2}}&={\bm{\theta}}_{t}-\eta\nabla_{\bm{\theta}}{\mathcal{L}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})\\{\bm{\varphi}}_{t+\frac{1}{2}}&={\bm{\varphi}}_{t}+\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})\end{aligned}\right.\qquad\text{Update:}\left\{\begin{aligned}{\bm{\theta}}_{t+1}&={\bm{\theta}}_{t}-\eta\nabla_{\bm{\theta}}{\mathcal{L}}({\bm{\theta}}_{t+\frac{1}{2}},{\bm{\varphi}}_{t+\frac{1}{2}})\\{\bm{\varphi}}_{t+1}&={\bm{\varphi}}_{t}+\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}({\bm{\theta}}_{t+\frac{1}{2}},{\bm{\varphi}}_{t+\frac{1}{2}})\end{aligned}\right..$$
(EG–ZS)
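Eq. (EG–ZS) transcribes directly into code, with the loss gradients passed in as callables:

```python
def extragradient_step(theta, phi, grad_theta, grad_phi, eta):
    """One step of (EG-ZS): extrapolate with a simultaneous GDA half-step,
    then update the ORIGINAL iterate using gradients at the extrapolated point."""
    theta_half = theta - eta * grad_theta(theta, phi)            # extrapolation
    phi_half = phi + eta * grad_phi(theta, phi)
    theta_new = theta - eta * grad_theta(theta_half, phi_half)   # update
    phi_new = phi + eta * grad_phi(theta_half, phi_half)
    return theta_new, phi_new
```

On the scalar game $\mathcal{L}=\theta\varphi$, each step shrinks the distance to the optimum by the factor $\sqrt{(1-\eta^{2})^{2}+\eta^{2}}<1$, so the iterates spiral in.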
Unroll-Y.
Unrolling was introduced by Metz et al. [2017] as a way to mitigate mode collapse in GANs. It consists of finding an optimal max–player ${\bm{\varphi}}^{\star}$ for a fixed min–player ${\bm{\theta}}$, i.e. ${\bm{\varphi}}^{\star}({\bm{\theta}})=\operatorname*{arg\,max}_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}},{\bm{\varphi}})$, through “unrolling” as follows:
$${\bm{\varphi}}_{t}^{0}={\bm{\varphi}}_{t},\qquad{\bm{\varphi}}_{t}^{m+1}({\bm{\theta}})={\bm{\varphi}}_{t}^{m}-\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t}^{m}),\qquad{\bm{\varphi}}_{t}^{\star}({\bm{\theta}}_{t})=\lim_{m\to\infty}{\bm{\varphi}}_{t}^{m}({\bm{\theta}})\,.$$
In practice $m$ is a finite number of unrolling steps, yielding ${\bm{\varphi}}_{t}^{m}$.
The min–player ${\bm{\theta}}_{t}$, e.g. the generator, can be updated using the unrolled ${\bm{\varphi}}_{t}^{m}$, while the update of ${\bm{\varphi}}_{t}$ is unchanged:
$${\bm{\theta}}_{t+1}={\bm{\theta}}_{t}-\eta\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t}^{m}),\qquad{\bm{\varphi}}_{t+1}={\bm{\varphi}}_{t}-\eta\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})$$
(UR–X)
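A sketch of the scheme above; for simplicity it takes only the partial gradient at the unrolled point rather than differentiating through the unrolling, and the helper names are ours:

```python
def unroll_max_player(theta, phi, grad_phi_own, eta, m):
    """Unroll the max-player m steps with theta held fixed; grad_phi_own is
    the gradient of phi's own loss L^phi, which phi descends (per-player
    loss convention of the text)."""
    phi_m = phi
    for _ in range(m):
        phi_m = phi_m - eta * grad_phi_own(theta, phi_m)
    return phi_m

def unrolled_update(theta, phi, grad_theta_own, grad_phi_own, eta, m):
    """(UR-X): theta's update sees the unrolled opponent phi_t^m,
    while phi's own update is the ordinary one-step descent."""
    phi_m = unroll_max_player(theta, phi, grad_phi_own, eta, m)
    theta_new = theta - eta * grad_theta_own(theta, phi_m)
    phi_new = phi - eta * grad_phi_own(theta, phi)
    return theta_new, phi_new
```

For a strongly concave inner problem the unrolled iterate ${\bm{\varphi}}_{t}^{m}$ approaches the best response geometrically in $m$, which is the rationale behind finite unrolling.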
Unroll-XY.
While Metz et al. [2017] only unroll one player (the discriminator in their GAN setup), we extended the concept of unrolling to games and for completeness also considered unrolling both players. For the bilinear experiment we also have that ${\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})=-{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}_{t},{\bm{\varphi}}_{t})$.
Adam.
Adam [Kingma and Ba, 2015] computes an exponentially decaying average of both past gradients $m_{t}$ and squared gradients $v_{t}$, for each parameter of the model as follows:
$$\displaystyle m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}$$
(2)
$$\displaystyle v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})g_{t}^{2}\,,$$
(3)
where the hyperparameters $\beta_{1},\beta_{2}\in[0,1]$, $m_{0}=0$, $v_{0}=0$, and $t$ denotes the iteration $t=1,\dots,T$. $m_{t}$ and $v_{t}$ are respectively the estimates of the first and the second moments of the stochastic gradient. To compensate for the bias toward $0$ due to their initialization to $m_{0}=0$, $v_{0}=0$, Kingma and Ba [2015] propose to use bias-corrected estimates of these first two moments:
$$\displaystyle\hat{m}_{t}=\frac{m_{t}}{1-\beta_{1}^{t}}$$
(4)
$$\displaystyle\hat{v}_{t}=\frac{v_{t}}{1-\beta_{2}^{t}}.$$
(5)
Finally, the Adam update rule for all parameters at $t$-th iteration ${\bm{\omega}}_{t}$ can be described as:
$${\bm{\omega}}_{t+1}={\bm{\omega}}_{t}-\eta\frac{\hat{{\bm{m}}}_{t}}{\sqrt{\hat{{\bm{v}}}_{t}}+\epsilon}.$$
(Adam)
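Eqs. (2)-(5) and the update rule above translate into a stateless step function (a sketch; a real optimizer keeps $m$, $v$, and $t$ as internal per-parameter state):

```python
import numpy as np

def adam_step(omega, g, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step on NumPy arrays, following Eqs. (2)-(5) and (Adam);
    m, v are the running first/second moment estimates, t >= 1."""
    m = beta1 * m + (1 - beta1) * g           # Eq. (2)
    v = beta2 * v + (1 - beta2) * g ** 2      # Eq. (3)
    m_hat = m / (1 - beta1 ** t)              # Eq. (4), bias correction
    v_hat = v / (1 - beta2 ** t)              # Eq. (5)
    omega = omega - eta * m_hat / (np.sqrt(v_hat) + eps)
    return omega, m, v
```

Note that on the very first step ($t=1$) the bias-corrected update is approximately $-\eta\,\mathrm{sign}(g)$, regardless of the gradient magnitude.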
Extra-Adam.
Gidel et al. [2019a] adjust Adam for extragradient (EG) and obtain the empirically motivated ExtraAdam which re-uses the same running averages of (Adam) when computing the extrapolated point ${\bm{\omega}}_{t+\frac{1}{2}}$ as well as when computing the new iterate ${\bm{\omega}}_{t+1}$ [see Alg.4, Gidel et al., 2019a].
We used the provided implementation by the authors.
SVRE.
Chavdarova et al. [2019] propose SVRE as a way to cope with variance in games that may otherwise cause divergence. We used the restarted version of SVRE as applied to the problem of (1), described in [Alg. 3, Chavdarova et al., 2019], which we reproduce in Alg. 2 for completeness, where $d_{{\bm{\theta}}}$ and $d_{{\bm{\varphi}}}$ denote the “variance corrected” gradients:
$${\bm{d}}_{{\bm{\varphi}}}({\bm{\theta}},{\bm{\varphi}},{\bm{\theta}}^{\mathcal{S}},{\bm{\varphi}}^{\mathcal{S}}):={\bm{\mu}}_{{\bm{\varphi}}}+\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}},{\bm{\varphi}},\mathcal{D}[n_{d}],\mathcal{Z}[n_{z}])-\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}^{\mathcal{S}},{\bm{\varphi}}^{\mathcal{S}},\mathcal{D}[n_{d}],\mathcal{Z}[n_{z}])$$
(6)
$${\bm{d}}_{{\bm{\theta}}}({\bm{\theta}},{\bm{\varphi}},{\bm{\theta}}^{\mathcal{S}},{\bm{\varphi}}^{\mathcal{S}}):={\bm{\mu}}_{{\bm{\theta}}}+\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}},{\bm{\varphi}},\mathcal{Z}[n_{z}])-\nabla_{{\bm{\theta}}}{\mathcal{L}}^{{\bm{\theta}}}({\bm{\theta}}^{\mathcal{S}},{\bm{\varphi}}^{\mathcal{S}},\mathcal{Z}[n_{z}])\,,$$
(7)
where ${\bm{\theta}}^{\mathcal{S}}$ and ${\bm{\varphi}}^{\mathcal{S}}$ are the snapshots and ${\bm{\mu}}_{{\bm{\theta}}}$ and ${\bm{\mu}}_{{\bm{\varphi}}}$ their respective gradients. $\mathcal{D}$ and $\mathcal{Z}$ denote the finite data and noise datasets.
With a fixed probability $p$, before the computation of ${\bm{\mu}}_{{\bm{\varphi}}}^{\mathcal{S}}$ and ${\bm{\mu}}_{{\bm{\theta}}}^{\mathcal{S}}$, we decide whether to restart SVRE (by using the averaged iterate $\bar{\bm{\omega}}_{t}$ as the new starting point, Alg. 2, Line $6$) or to compute the batch snapshot at the point ${\bm{\omega}}_{t}$.
For consistency, we used the provided implementation by the authors.
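The variance-corrected gradients of Eqs. (6)-(7) share one pattern, which can be sketched generically (names are ours). Averaging the estimate over all minibatches recovers the full gradient at the current point, which is the SVRG unbiasedness property:

```python
def variance_corrected_grad(grad_fn, omega, omega_snap, mu_snap, minibatch):
    """SVRG-style estimate underlying Eqs. (6)-(7): the full snapshot
    gradient mu_snap, corrected by the minibatch gradient difference
    between the current point and the snapshot."""
    return mu_snap + grad_fn(omega, minibatch) - grad_fn(omega_snap, minibatch)
```

The correction term has zero mean over minibatches, so the estimator is unbiased, and its variance shrinks as the iterate approaches the snapshot.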
A.2 Hyperparameters used for the full-batch setting
Optimal $\alpha$.
In the full-batch bilinear problem, it is possible to derive the optimal $\alpha$ parameter for a small enough $\eta$. Given the optimum ${\bm{\omega}}^{\star}$, the current iterate ${\bm{\omega}}$, and the “previous” iterate ${\bm{\omega}}^{\mathcal{P}}$ from $k$ steps before, let ${\bm{x}}={\bm{\omega}}^{\mathcal{P}}+\alpha({\bm{\omega}}-{\bm{\omega}}^{\mathcal{P}})$ be the next iterate, chosen on the line between ${\bm{\omega}}^{\mathcal{P}}$ and ${\bm{\omega}}$. We aim at finding the ${\bm{x}}$ (in effect, the $\alpha$) closest to ${\bm{\omega}}^{\star}$. For an infinitesimally small learning rate, a GDA iterate revolves around ${\bm{\omega}}^{\star}$, hence $\lVert{\bm{\omega}}-{\bm{\omega}}^{\star}\rVert=\lVert{\bm{\omega}}^{\mathcal{P}}-{\bm{\omega}}^{\star}\rVert=r$. At the point of shortest distance between ${\bm{x}}$ and ${\bm{\omega}}^{\star}$ we have, by Pythagoras:
$$r^{2}=\lVert{\bm{\omega}}^{\mathcal{P}}-{\bm{x}}\rVert^{2}+\lVert{\bm{x}}-{\bm{\omega}}^{\star}\rVert^{2}=\lVert{\bm{\omega}}-{\bm{x}}\rVert^{2}+\lVert{\bm{x}}-{\bm{\omega}}^{\star}\rVert^{2}$$
Hence the optimal ${\bm{x}}$, for any $k$, is obtained for $\lVert{\bm{\omega}}-{\bm{x}}\rVert=\lVert{\bm{\omega}}^{\mathcal{P}}-{\bm{x}}\rVert$, i.e. for $\alpha=0.5$.
For larger learning rates, for which the GDA iterates diverge, we have $\lVert{\bm{\omega}}^{\mathcal{P}}-{\bm{\omega}}^{\star}\rVert=r_{1}<\lVert{\bm{\omega}}-{\bm{\omega}}^{\star}\rVert=r_{2}$. Hence the optimal ${\bm{x}}$ satisfies $\lVert{\bm{\omega}}^{\mathcal{P}}-{\bm{x}}\rVert<\lVert{\bm{\omega}}-{\bm{x}}\rVert$, which is given for $\alpha<0.5$. In Fig. 2 we indeed observe that LA-GDA with $\alpha=0.4$ converges faster than with $\alpha=0.5$.
Hyperparameters.
Unless otherwise specified, the learning rate is fixed to $\eta=0.3$. For both Unroll-Y and Unroll-XY, we use $6$ unrolling steps. When combining Lookahead-minimax with GDA or ExtraGradient, we use $k=6$ and $\alpha=0.5$ unless otherwise emphasized.
A.3 Hyperparameters used for the stochastic setting
The hyperparameters used in the stochastic bilinear experiment of (1) are listed in Table 3. We tuned the hyperparameters of each method independently, for each batch size, trying $\eta$ ranging from $0.005$ to $1$.
When a method diverges for all values of $\eta$, we set $\eta=0.005$ in Fig. 3.
To tune Adam's first-moment estimate parameter $\beta_{1}$, we considered values ranging from $-1$ to $1$, as Gidel et al. (2019a) reported that negative $\beta_{1}$ can help in practice. We used $\alpha\in\{0.3,0.5\}$ and $k\in[5,3000]$.
Fig. 6 depicts the final performance of Lookahead–Minimax, using different values of $k$.
Note that plotting the distance to the optimum at a particular final iteration causes the frequent oscillations of the depicted performances, since the iterate gets closer to the optimum only after the “backtracking” step.
Despite these misleading oscillations, one can notice the trend of how the choice of $k$ affects the final distance to the optimum.
Interestingly, the case $B=16$ in Fig. 6 captures the periodicity of the rotating vector field, which sheds light on future directions in finding methods with an adaptive $k$.
A.4 Illustrations of GAN optimization with lookahead
In Fig. 7 we consider a 2D bilinear game $\min_{x}\max_{y}x\cdot y$, and we illustrate the convergence of Lookahead–Minimax.
Interestingly, Lookahead makes use of the rotations of the game vector field caused by the adversarial component of the game.
Although standard-GDA diverges with all three shown learning rates, Lookahead–Minimax converges.
Moreover, we see that Lookahead–Minimax with the larger learning rate $\eta=0.4$ (and fixed $k$ and $\alpha$) in fact converges faster than with $\eta=0.1$, which indicates that Lookahead–Minimax is also sensitive to the value of $\eta$, besides introducing the additional hyperparameters $k$ and $\alpha$.
Appendix B Parameter averaging
Polyak parameter averaging was shown to give the fastest convergence rates among stochastic gradient algorithms for convex functions, by minimizing the asymptotic variance induced by the algorithm [Polyak and Juditsky, 1992]. This so-called Ruppert–Polyak average is the arithmetic average of the parameters:
$$\tilde{\bm{\theta}}_{RP}=\frac{1}{T}\sum_{t=1}^{T}{\bm{\theta}}^{(t)}\,,\quad T\geq 1\,.$$
(RP–Avg)
In the context of games, weighted averaging was proposed by Bruck [1977] as follows:
$$\tilde{\bm{\theta}}_{\text{WA}}^{(T)}=\frac{\sum_{t=1}^{T}\rho^{(t)}{\bm{\theta}}^{(t)}}{\sum_{t=1}^{T}\rho^{(t)}}\,.$$
(W–Avg)
Eq. (W–Avg) can be computed efficiently online as ${\bm{\theta}}^{(t)}_{\text{WA}}=(1-\gamma^{(t)}){\bm{\theta}}^{(t-1)}_{\text{WA}}+\gamma^{(t)}{\bm{\theta}}^{(t)}$ with $\gamma^{(t)}\in[0,1]$.
With $\gamma=\frac{1}{t}$ we obtain the Uniform Moving Averages (UMA) whose performance is reported in our experiments in § 4 and is computed as follows:
$${\bm{\theta}}_{\text{UMA}}^{(t)}=\left(1-\frac{1}{t}\right){\bm{\theta}}_{\text{UMA}}^{(t-1)}+\frac{1}{t}{\bm{\theta}}^{(t)}\,,\quad t=1,\dots,T\,.$$
(UMA)
Analogously, we compute the Exponential Moving Averages (EMA) in an online fashion using $\gamma=1-\beta<1$, as follows:
$${\bm{\theta}}_{\text{EMA}}^{(t)}=\beta{\bm{\theta}}_{\text{EMA}}^{(t-1)}+(1-\beta){\bm{\theta}}^{(t)}\,,\quad t=1,\dots,T\,.$$
(EMA)
In all our experiments, following related works [Yazıcı et al., 2019, Gidel et al., 2019a, Chavdarova et al., 2019], we fix $\beta=0.9999$.
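Both running averages reduce to one-line online updates (a sketch; in practice they are applied parameter-wise to the generator weights):

```python
def update_uma(avg, theta, t):
    """(UMA): gamma = 1/t, so after T steps avg equals the arithmetic mean."""
    return (1.0 - 1.0 / t) * avg + (1.0 / t) * theta

def update_ema(avg, theta, beta=0.9999):
    """(EMA): exponential moving average; the paper fixes beta = 0.9999."""
    return beta * avg + (1.0 - beta) * theta
```

With constant iterates ${\bm{\theta}}^{(t)}={\bm{\theta}}$, EMA started at $0$ reaches ${\bm{\theta}}(1-\beta^{t})$ after $t$ steps, which is why large $\beta$ values require long training runs before the average becomes representative.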
Appendix C Details on the Lookahead–minimax algorithm and alternatives
For completeness, in this section we consider an alternative implementation of Lookahead-minimax, which naively applies (LA) on each player separately, which we refer to as “alternating–lookahead”.
This variant uses a “backtracked” iterate to update the opponent, rather than performing the backtracking step for both players at the same time.
In other words, because line 9 of Alg. 3 is executed before ${\bm{\theta}}$ is updated in line 12 (and vice versa), Lookahead is prevented from mitigating the rotations typical of games.
On SVHN and CIFAR-10, the joint Lookahead-minimax consistently gave us the best results, as can be seen in Figures 8 and 9. On MNIST, the alternating and joint implementations worked equally well, see Figure 9.
Appendix D Details on the implementation
For our experiments, we used the PyTorch (https://pytorch.org/) deep learning framework, whereas for computing the FID and IS metrics we used the provided implementations in TensorFlow (https://www.tensorflow.org/) for consistency with related works.
D.1 Metrics
We provide more details about the metrics enumerated in § 4.
Both FID and IS use:
(i) the Inception v3 network [Szegedy et al., 2015] that has been trained on the ImageNet dataset consisting of ${\sim}1$ million RGB images of $1000$ classes, $C=1000$.
(ii) a sample of $m$ generated images $x\sim p_{g}$, where usually $m=50000$.
D.1.1 Inception Score
Given an image $x$, IS uses the softmax output of the Inception network $p(y|x)$ which represents the probability that $x$ is of class $c_{i},i\in 1\dots C$, i.e., $p(y|x)\in[0,1]^{C}$.
It then computes the marginal class distribution $p(y)=\int_{x}p(y|x)p_{g}(x)\,dx$.
IS measures the Kullback–Leibler divergence $\mathbb{D}_{KL}$ between the predicted conditional label distribution $p(y|x)$ and the marginal class distribution $p(y)$.
More precisely, it is computed as follows:
$$\displaystyle IS(G)=\exp\big(\mathbb{E}_{x\sim p_{g}}[\mathbb{D}_{KL}(p(y|x)\,\|\,p(y))]\big)=\exp\Big(\frac{1}{m}\sum_{i=1}^{m}\sum_{c=1}^{C}p(y_{c}|x_{i})\log\frac{p(y_{c}|x_{i})}{p(y_{c})}\Big).$$
(8)
It aims at estimating
(i) if the samples look realistic i.e., $p(y|x)$ should have low entropy, and
(ii) if the samples are diverse (from different ImageNet classes) i.e., $p(y)$ should have high entropy.
As these are combined using the Kullback–Leibler divergence, the higher the score, the better the performance. Note that the range of IS scores at convergence varies across datasets, as the Inception network is pretrained on the ImageNet classes. For example, we obtain low IS values on the SVHN dataset because a large fraction of its classes are digits, which typically do not appear in ImageNet. Since MNIST consists of greyscale images, we used a classifier trained on that dataset, with $m=5000$. For the remaining datasets, we used the original TensorFlow implementation of IS (https://github.com/openai/improved-gan/), with $m=50000$.
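To make Eq. (8) concrete, the score can be computed directly from the matrix of per-image softmax outputs. The following is a minimal NumPy sketch (the function name and the small $\varepsilon$ added for numerical stability are ours; the probabilities are assumed to come from a pretrained classifier such as Inception):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from per-image class probabilities.

    probs: array of shape (m, C); probs[i] is the softmax output
    p(y|x_i) of a pretrained classifier for generated image x_i.
    """
    p_y = probs.mean(axis=0)  # marginal class distribution p(y)
    # per-image KL(p(y|x_i) || p(y)), averaged and exponentiated (Eq. 8)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

If every image receives the same class distribution, the KL term vanishes and the score is $1$ (no diversity); confident and diverse predictions push the score toward $C$.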
As the Inception Score considers the classes predicted by the Inception network, it may fail to penalize visual artifacts as long as they do not alter the predicted class distribution. In Fig. 10 we show some images generated by our best model according to IS. These images exhibit clearly unrealistic artifacts, while enough of each image remains for a potential label to be recognised. For this reason we consider the Fréchet Inception Distance a more reliable estimator of image quality; we nonetheless report IS for completeness.
D.1.2 Fréchet Inception Distance
Contrary to IS, FID aims at comparing the synthetic samples $x\sim p_{g}$ with those of the training dataset $x\sim p_{d}$ in a feature space. The samples are embedded using the first several layers of the Inception network. Assuming $p_{g}$ and $p_{d}$ are multivariate normal distributions, it then estimates the means ${\bm{m}}_{g}$ and ${\bm{m}}_{d}$ and covariances $C_{g}$ and $C_{d}$, respectively for $p_{g}$ and $p_{d}$ in that feature space. Finally, FID is computed as:
$$\displaystyle\mathbb{D}_{\text{FID}}(p_{d},p_{g})\approx d^{2}\big(({\bm{m}}_{d},C_{d}),({\bm{m}}_{g},C_{g})\big)=\|{\bm{m}}_{d}-{\bm{m}}_{g}\|_{2}^{2}+\operatorname{Tr}\big(C_{d}+C_{g}-2(C_{d}C_{g})^{\frac{1}{2}}\big),$$
(9)
where $d^{2}$ denotes the Fréchet Distance.
Note that as this metric is a distance, the lower it is, the better the performance.
We used the original implementation of FID (https://github.com/bioinf-jku/TTUR) in TensorFlow, along with the provided statistics of the datasets.
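Given the two Gaussian summaries, Eq. (9) is straightforward to evaluate. Below is a minimal sketch (the function name is ours; it assumes SciPy is available for the matrix square root, as in the reference implementation):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """Squared Frechet distance between two Gaussians (Eq. 9)."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

The distance is $0$ exactly when the two Gaussians coincide, consistent with lower-is-better.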
D.2 Architectures & Hyperparameters
Description of the architectures.
We describe the models we used in the empirical evaluation of Lookahead-minimax by listing the layers they consist of, as adopted in GAN works, e.g. [Miyato et al., 2018].
With “conv.” we denote a convolutional layer and “transposed conv” a transposed convolution layer [Radford et al., 2016].
The models use Batch Normalization [Ioffe and Szegedy, 2015] and Spectral Normalization layers [Miyato et al., 2018].
D.2.1 Architectures for experiments on MNIST
For experiments on the MNIST dataset, we used the DCGAN architectures [Radford et al., 2016], listed in Table 4, and the parameters of the models are initialized using PyTorch default initialization.
For experiments on this dataset, we used the non-saturating GAN loss as proposed by Goodfellow et al. [2014]:
$$\displaystyle{\mathcal{L}}_{D}=-\mathbb{E}_{x\sim p_{d}}\log(D(x))-\mathbb{E}_{z\sim p_{z}}\log(1-D(G(z)))$$
(10)
$$\displaystyle{\mathcal{L}}_{G}=-\mathbb{E}_{z\sim p_{z}}\log(D(G(z))),$$
(11)
where $p_{d}$ and $p_{z}$ denote the data and the latent distributions (the latter to be predefined).
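As a sanity check, the two losses can be written out directly. Below is a minimal NumPy sketch in which the discriminator is represented by its raw logits and both losses are quantities to minimize (the function names are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(real_logits, fake_logits):
    """Discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))]."""
    return (-np.mean(np.log(sigmoid(real_logits)))
            - np.mean(np.log(1.0 - sigmoid(fake_logits))))

def g_loss(fake_logits):
    """Non-saturating generator loss: -E[log D(G(z))]."""
    return -np.mean(np.log(sigmoid(fake_logits)))
```

At logits of $0$ (i.e., $D\equiv 0.5$) the discriminator loss equals $2\log 2$ and the generator loss equals $\log 2$, the usual equilibrium values.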
D.2.2 ResNet architectures for Cifar-10 and SVHN
We replicate the experimental setup described for CIFAR-10 and SVHN in [Miyato et al., 2018, Chavdarova et al., 2019], as listed in Table 6.
This setup uses the hinge version of the adversarial non-saturating loss, see [Miyato et al., 2018].
As a reference, our ResNet architectures for CIFAR-10 have approximately $85$ layers in total for G and D, including the non-linearity and the normalization layers.
D.2.3 Unrolling implementation
In Section A.1 we explained how we implemented unrolling for our full-batch bilinear experiments. Here we describe our implementation for our MNIST and CIFAR-10 experiments.
Unrolling is computationally intensive, which can become a problem for large architectures. The computation of $\nabla_{{\bm{\varphi}}}{\mathcal{L}}^{{\bm{\varphi}}}({\bm{\theta}}^{m}_{t},{\bm{\varphi}}_{t})$ with $m$ unrolling steps requires higher-order derivatives, which come with an $m$-fold memory footprint and a significant slowdown. Due to limited memory, one can instead backpropagate only through the last unrolled step, bypassing the computation of higher-order derivatives; we empirically observe that the gradient contribution of those derivatives is small. In this approximate version, unrolling can be seen as belonging to the same family as extragradient, computing its extrapolated point using more than a single step. We tested both exact and approximate unrolling on MNIST, with the number of unrolling steps ranging from $5$ to $20$. Exact unrolling, which backpropagates through the unrolled discriminator, was implemented using the Higher library (https://github.com/facebookresearch/higher). On CIFAR-10 we only experimented with approximate unrolling over $5$ to $10$ steps: the large memory footprint of the ResNet architectures used for the generator and discriminator makes the exact approach infeasible given our resources.
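Our reading of this approximation can be illustrated on a toy problem. The following sketch (the function name and the toy bilinear game are ours) unrolls the opponent for $m$ steps and then evaluates the gradient at the unrolled point without propagating higher-order derivatives:

```python
import numpy as np

def approx_unrolled_grad(theta, phi, grad_theta, grad_phi, m=5, lr=0.1):
    """Approximate unrolling: take m gradient-descent steps on theta,
    then evaluate the phi-gradient at the unrolled theta, treating it
    as a constant (no higher-order derivatives are propagated)."""
    th = theta.copy()
    for _ in range(m):
        th = th - lr * grad_theta(th, phi)  # unroll the opponent
    return grad_phi(th, phi)  # gradient through the last step only

# Toy bilinear game L(theta, phi) = theta . phi.
grad_theta = lambda th, ph: ph   # dL/dtheta
grad_phi = lambda th, ph: th     # dL/dphi
g = approx_unrolled_grad(np.array([1.0]), np.array([2.0]),
                         grad_theta, grad_phi, m=2, lr=0.1)
```

With $m=1$ this reduces to computing the gradient at an extrapolated point, which is why the approximation sits in the same family as extragradient.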
D.2.4 Hyperparameters used on MNIST
Table 7 lists the hyperparameters that we used for our experiments on the MNIST dataset.
D.2.5 Hyperparameters used on SVHN
Table 8 lists the hyperparameters used for experiments on SVHN.
These values were selected for each algorithm independently after tuning the hyperparameters for the baseline.
D.2.6 Hyperparameters used on CIFAR-10
The reported results on CIFAR-10 were obtained using the hyperparameters listed in Table 9.
These values were selected for each algorithm independently after tuning. For the baseline methods we selected the hyperparameters giving the best performance. Consistent with results reported in related works, we also observed that a larger ratio of discriminator to generator updates improves the stability of the baseline, and we used $r=5$.
We observed that using learning rate decay delays the divergence, but does not improve the best FID scores, hence we did not use it in our reported models.
Appendix E Additional experimental results
In Fig. 5 we compared the stability of LA–AltGAN against its AltGAN baseline on both the CIFAR-10 and SVHN datasets. Analogously, in Fig. 11 we report the comparison between LA–ExtraGrad and ExtraGrad over the iterations. We observe that the experiments on SVHN with ExtraGrad are more stable than those on CIFAR-10. Interestingly, we observe that:
(i) LA–ExtraGrad improves both the stability and the performance of the baseline on CIFAR-10, see Fig. 10(a), and
(ii) when the stability of the baseline is already relatively good, as on the SVHN dataset, LA–ExtraGrad still improves its performance, see Fig. 10(b).
E.1 Samples of LA–GAN Generators
In this section we present random samples from the generators of our LA–GAN experiments trained on CIFAR-10 and SVHN. Figures 12 and 13 show samples from our LA–AltGAN models trained on CIFAR-10, with and without EMA. Similarly, Figures 14 and 15 show samples from our LA–ExtraGrad models trained on CIFAR-10, with and without EMA. Finally, Figures 16 and 17 show samples from our LA–ExtraGrad models trained on the SVHN dataset, with and without EMA, respectively.
Dimension of the boundary in different metrics
Riku Klén
and
Ville Suomala
Department of Mathematics and Statistics
FI-20014 University of Turku
Finland
Department of Mathematical sciences
P.O Box 3000
FI-90014 University of Oulu
Finland
riku.klen@utu.fi
ville.suomala@oulu.fi
Abstract.
We consider metrics on Euclidean domains $\Omega\subset\mathbb{R}^{n}$ that are induced by
continuous densities $\rho\colon\Omega\rightarrow(0,\infty)$ and study
the Hausdorff and packing dimensions of the boundary of $\Omega$
with respect to these metrics.
Key words and phrases:Hausdorff dimension, packing dimension, conformal metric,
density metric
2000 Mathematics Subject Classification: Primary 28A78, Secondary 30C65
Contents
1 Introduction
2 Notation
3 Preliminary lemmas
4 Dimension estimates on general domains
5 Conformal densities
6 Further examples, remarks, and questions
1. Introduction
Let $\Omega\subset\mathbb{R}^{n}$ be a domain. For $x,y\in\Omega$, we
denote by $d(x,y)$ the internal Euclidean distance between $x$ and
$y$ defined as
$$d(x,y)=\inf_{\gamma}\operatorname{\ell}(\gamma),$$
where the infimum is taken over all rectifiable curves in $\Omega$ with
endpoints $x$ and $y$ and $\operatorname{\ell}$ refers to the standard Euclidean
length. It is well known and easy to see that $d$ defines a metric on
$\Omega$ called the internal metric. Furthermore, we may extend
this metric to the internal boundary
$\partial\Omega_{d}=\overline{\Omega}_{d}\setminus{\Omega}$, where
$\overline{\Omega}_{d}$ is the standard metric completion of $\Omega$ with
respect to $d$.
Let $\rho\colon\Omega\rightarrow(0,\infty)$ be a continuous
function. We define the $\rho$-length of a rectifiable curve
$\gamma\subset\Omega$
as
$$\operatorname{\ell}_{\rho}(\gamma)=\int_{\gamma}\rho(z)|dz|$$
where $|dz|$ denotes integration with respect to arclength. The
$\rho$-distance between $x,y\in\Omega$ is then given by
$$d_{\rho}(x,y)=\inf_{\gamma}\operatorname{\ell}_{\rho}(\gamma),$$
where the infimum is again over all curves
joining $x$ to $y$ in $\Omega$.
This defines a metric on $\Omega$ and as with the internal metric, we
may extend it to the $\rho$-boundary of $\Omega$ defined as
$\partial_{\rho}\Omega=\overline{\Omega}_{\rho}\setminus\Omega$, where
$\overline{\Omega}_{\rho}$ is the standard metric completion of
$\Omega$ with respect to $d_{\rho}$. Observe that the internal metric $d$ corresponds to $d_{\rho}$ for the constant function $\rho\equiv 1$.
Thus, given $\rho$ as above (a density in what follows), we
have two complete metric spaces $(\overline{\Omega}_{d},d)$ and
$(\overline{\Omega}_{\rho},d_{\rho})$ which need not be topologically
equivalent. For simplicity, however, we only deal with cases in which
$\partial_{\rho}\Omega$ may be
naturally identified with a metric subspace of $\partial\Omega_{d}$.
In this paper, we will consider $\dim_{\rho}(\partial_{\rho}\Omega)$ and
$\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)$, the Hausdorff and packing dimensions
of $\partial_{\rho}\Omega$ with respect to $d_{\rho}$
(For more comprehensive notation and definitions, we refer to Section
2 below).
Classically, this sort of problem arises in connection with
harmonic measures and the boundary behaviour of conformal maps [8, 9, 13, 5, 7]. In that setting, $\rho=|f^{\prime}|$ for a conformal map $f$ and
$d_{\rho}$ corresponds to the internal metric on the image domain. The
Hausdorff dimension, $\dim_{\rho}(\partial_{\rho}\Omega)$, has been
analysed also for a much larger collection of so called conformal
densities on the unit ball $\mathbb{B}^{n}\subset\mathbb{R}^{n}$. See
[2, 1, 11].
Although we provide some estimates in the setting of conformal densities, our main goal is to study general densities defined on
John domains in $\mathbb{R}^{n}$, and to provide tools to estimate
the values of the dimensions $\dim_{\rho}(\partial_{\rho}\Omega)$ and $\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)$. Because of this, our methods are perhaps more geometric than analytic.
Given $A\subset\overline{\Omega}_{d}$, we denote by $d(x,A)=\inf_{a\in A}d(x,a)$
the internal distance from $x$ to $A$ and, moreover, abbreviate
$d(x)=d(x,\partial\Omega_{d})$. Of course, $d(x)$ is just the Euclidean distance to the boundary of $\Omega$.
Let us consider the following simple example: Suppose that
$\Omega\subsetneq\mathbb{R}^{n}$ has smooth boundary, $-1<\beta<0$, and define a
density $\rho(x)=d(x)^{\beta}$. Then it is well known, and easy to
see that $\partial_{\rho}\Omega$ is a “snowflake”. More precisely,
$d_{\rho}(x,y)\approx d(x,y)^{1+\beta}$ for all
$x,y\in\partial_{\rho}\Omega$. Thus, the effect of $\rho$ on the dimensions
of the boundary is described by a power law
$$\operatorname{Dim}_{d}(\partial\Omega_{d})/\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)=\dim_{d}(\partial\Omega_{d})/\dim_{\rho}(\partial_{\rho}\Omega)=1+\log\rho(x)/\log d(x).$$
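For the record, here is the one-line verification of this formula in the snowflake example (a sketch under the stated estimate $d_{\rho}(x,y)\approx d(x,y)^{1+\beta}$):

```latex
% With \rho(x)=d(x)^{\beta} we have \log\rho(x)/\log d(x)=\beta, so the
% right-hand side equals 1+\beta.  On the other hand, the snowflake
% estimate d_\rho(x,y)\approx d(x,y)^{1+\beta} turns a d-cover of
% \partial\Omega_d by sets of diameter r into a d_\rho-cover by sets of
% diameter \approx r^{1+\beta}, whence
\mathcal{H}^{s}_{\rho}(\partial_{\rho}\Omega)
  \approx \mathcal{H}^{s(1+\beta)}_{d}(\partial\Omega_{d}),
\qquad\text{so}\qquad
\dim_{\rho}(\partial_{\rho}\Omega)
  = \frac{\dim_{d}(\partial\Omega_{d})}{1+\beta},
% and similarly for the packing dimension.
```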
Keeping this example in mind, it is now natural to consider (the upper
and lower) limits of the quantity $\log\rho(y)/\log d(y)$ as $y$
approaches the boundary of $\Omega$. Under sufficient assumptions,
this leads to multifractal type formulas for the dimension of
$\partial_{\rho}\Omega$. For instance, we obtain the
following result.
Theorem 1.1.
Let $\Omega\subset\mathbb{R}^{n}$ be a John domain and $\rho>c>0$ a density. Suppose that
$$i(x)=\lim_{y\in\Omega,\,y\rightarrow x}\frac{\log\rho(y)}{\log d(y)}$$
exists at all points $x\in\partial\Omega_{d}$ and satisfies $i(x)>-1$. Then
$$\dim_{\rho}(\partial_{\rho}\Omega)=\sup_{\beta>-1}(1+\beta)^{-1}\dim_{d}(\{x\in\partial\Omega_{d}\,:\,i(x)\leq\beta\}).$$
An analogous formula holds for the packing dimension.
This theorem is a simple special case of a more general result,
Theorem 4.2, and it can be used to obtain a formula
for the dimensions $\dim_{\rho}(\partial_{\rho}\Omega)$ and
$\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)$ in many situations.
A generic case is the following: $\Omega=\mathbb{B}^{n}$,
$C\subset\partial\mathbb{B}^{n}$ is a Cantor set with $0<\dim_{d}C<\operatorname{Dim}_{d}C<n$ and $\rho(x)=d(x,C)^{\beta}$ for some $\beta>-1$ (Example
4.4).
In Theorem 1.1, there is
an annoying
lack of generality since we have to consider inner limits in
the definition of $i(x)$. The situation is different if we know
that the distance $d_{\rho}(x,y)$ between points
$x,y\in\partial_{\rho}\Omega$ is realised along curves that are
“non-tangential”. If the density satisfies a suitable Harnack inequality
together with a Gehring-Hayman type estimate, then it is enough to
consider limits along some fixed cones. For
conformal densities, for instance, we may replace the quantity
$i(x)$ by a radial version
$k(x)=\lim_{t\uparrow 1}\log\rho(tx)/\log(1-t)$; see Section
5 where we actually consider upper and lower limits as
$t\uparrow 1$.
Section 6 contains several examples and some open
questions. Most notably, in Example 6.3 we construct
a new nontrivial example of a conformal density with multifractal type
boundary behavior.
As our results indicate, a careful inspection of the power exponents and the
size of certain sub and super level sets of these quantities can be
used to study the dimensions $\dim_{\rho}(\partial_{\rho}\Omega)$ and
$\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)$. Although the main idea in most of our
results is the same, it is perhaps not possible to find a general
statement which would fit into all, or even most, of the interesting
situations. Often, a suitable case study and a combination of
different ideas is needed in order to deduce the relevant information
(for instance, see Examples 4.7, 6.2, and
6.3). We strongly
believe that the ideas we have used can be applied also elsewhere,
beyond the results of this paper.
2. Notation
Let $\Omega\subset\mathbb{R}^{n}$ be a domain. For technical reasons, we want to
be able to naturally identify $\partial_{\rho}\Omega$ with a subset of $\partial\Omega_{d}$. To ensure this, we assume throughout this paper that
for all sequences
$(x_{i})$, $x_{i}\in\Omega$, the following two conditions are satisfied:
(A1)
$$\text{If }(x_{i})\text{ converges in }\overline{\Omega}_{\rho},\text{ then it converges in }\overline{\Omega}_{d}.$$
(A2)
$$\text{If }(x_{i})\text{ converges in }\overline{\Omega}_{d},\text{ it has at most one accumulation point in }\partial_{\rho}\Omega.$$
In other words, (A1) means that the identity mapping
$(\Omega,d_{\rho})\to(\Omega,d)$ has a continuous extension $f\colon\overline{\Omega}_{\rho}\to\overline{\Omega}_{d}$ and, furthermore, (A2)
means that this $f$
is injective.
Definition 2.1.
A density is a continuous function $\rho\colon\Omega\rightarrow(0,\infty)$
satisfying (A1) and (A2). For simplicity, we also
require that $\partial_{\rho}\Omega\neq\emptyset$.
Whenever we talk about a curve $\gamma$, we
assume that it is rectifiable, is arc-length parametrized, and that
$\gamma(t)\in\Omega$ for all $0<t<\operatorname{\ell}(\gamma)$ (the endpoints may
or may not belong to $\partial\Omega_{d}$). Note that the internal length of a curve equals the Euclidean length of the curve.
We say that $\Omega\subset\mathbb{R}^{n}$ is an $\alpha$-John
domain for $0<\alpha\leq 1$, if there is $x_{0}\in\Omega$ such that all points $x\in\Omega$ may be
joined to $x_{0}$ by an $\alpha$-cone, i.e. by a curve
$\gamma$ joining $x$ to $x_{0}$
such that $d(\gamma(t))\geq\alpha\,t$ for all $0\leq t\leq\operatorname{\ell}(\gamma)$. If $\alpha$ is not important, we simply talk about
John domains.
Let $\gamma\subset\Omega$ be a curve. We say
that $\gamma$ is an $\alpha$-cigar if
(2.1)
$$d(\gamma(t))\geq\alpha\min\{t,\operatorname{\ell}(\gamma)-t\}\text{ for all }0\leq t\leq\operatorname{\ell}(\gamma).$$
For technical purposes, we define an
$\alpha$-distance between points $x,y\in\Omega$ as
$$d_{\alpha}(x,y)=\inf_{\gamma}\operatorname{\ell}(\gamma)$$
and this time the infimum is taken over all $\alpha$-cigars $\gamma$
joining $x$ and $y$.
It is easy to see that if $\Omega$ is an $\alpha$-John domain, then any two points
$x,y\in\overline{\Omega}_{d}$
may be joined by an $\alpha$-cigar. Thus $d_{\alpha}(x,y)<\infty$ for
all $x,y\in\overline{\Omega}_{d}$.
Note, however, that $d_{\alpha}$ is not necessarily a metric: it may be infinite, and even when it is finite, it may fail to satisfy the triangle inequality.
Let $X=(X,d_{X})$ be a separable metric space. We denote balls
$B_{X}(x,r)=\{y\in X\,:\,d_{X}(y,x)<r\}$ and spheres $S_{X}(x,r)=\{y\in X\,:\,d_{X}(x,y)=r\}$. Given $A\subset X$, we define its $s$-dimensional
Hausdorff and packing measures, $\mathcal{H}^{s}_{X}(A)$ and
$\mathcal{P}^{s}_{X}(A)$, respectively, by the following procedure:
$$\displaystyle\mathcal{H}^{s,\varepsilon}_{X}(A)=\inf\left\{\sum_{i=1}^{\infty}\operatorname{diam}_{X}(A_{i})^{s}\colon A\subset\bigcup_{i\in\mathbb{N}}A_{i}\text{ and }\operatorname{diam}_{X}(A_{i})<\varepsilon\text{ for all }i\right\},$$
$$\displaystyle\mathcal{H}^{s}_{X}(A)=\lim_{\varepsilon\downarrow 0}\mathcal{H}^{s,\varepsilon}_{X}(A),$$
$$\displaystyle P^{s,\varepsilon}_{X}(A)=\sup\left\{\sum_{i=1}^{\infty}r_{i}^{s}\colon\{B_{X}(x_{i},r_{i})\}\text{ is a packing of }A\text{ with }r_{i}\leq\varepsilon\right\},$$
$$\displaystyle P^{s}_{X}(A)=\lim_{\varepsilon\downarrow 0}P^{s,\varepsilon}_{X}(A),$$
$$\displaystyle\mathcal{P}^{s}_{X}(A)=\inf\left\{\sum_{i=1}^{\infty}P^{s}_{X}(A_{i})\colon A\subset\bigcup_{i=1}^{\infty}A_{i}\right\},$$
where $0<\varepsilon,s<\infty$ and a packing of $A$ is a disjoint collection of
balls with centres in $A$.
We define the Hausdorff and packing dimensions of
$A\subset X$, respectively, as
$$\displaystyle\dim_{X}(A)=\sup\{s\geq 0\colon\mathcal{H}^{s}_{X}(A)=\infty\}=\inf\{s\geq 0\colon\mathcal{H}^{s}_{X}(A)=0\},$$
$$\displaystyle\operatorname{Dim}_{X}(A)=\sup\{s\geq 0\colon\mathcal{P}^{s}_{X}(A)=\infty\}=\inf\{s\geq 0\colon\mathcal{P}^{s}_{X}(A)=0\},$$
with the conventions $\sup\emptyset=0$, $\inf\emptyset=\infty$.
When the domain $\Omega\subset\mathbb{R}^{n}$
has been fixed, we use all the
notation introduced above with the subscript $d$ when referring to the internal metric.
Moreover, given a density $\rho\colon\Omega\rightarrow(0,\infty)$,
we use the subscript $\rho$ to refer to the corresponding notions in
terms of the
metric $d_{\rho}$. For example, given $x\in\overline{\Omega}_{d}$,
$y\in\overline{\Omega}_{\rho}$, and $r>0$ we have
$B_{d}(x,r)=\{z\in\overline{\Omega}_{d}\,:\,d(z,x)<r\}$ and
$S_{\rho}(y,r)=\{z\in\overline{\Omega}_{\rho}\,:\,d_{\rho}(z,y)=r\}$.
We also use the notation $B_{\alpha}(x,r)$ for balls in terms of the
“distance” $d_{\alpha}$.
When referring to “round” Euclidean balls we use a subindex $e$, so
$B_{e}(x,r)=\{y\in\mathbb{R}^{n}\,:\,|y-x|<r\}$ where $|\cdot|$ is the usual
Euclidean distance. We also denote $\mathbb{B}^{n}=B_{e}(0,1)\subset\mathbb{R}^{n}$
and $S^{n-1}=S_{e}(0,1)\subset\mathbb{R}^{n}$.
Observe that if
$A\subset\overline{\Omega}_{\rho}$,
both notations $\operatorname{diam}_{d}(A)$ and $\operatorname{diam}_{\rho}(A)$ make sense, since by
(A1) and (A2), if $x,y\in A$, then $d(x,y),d_{\rho}(x,y)<\infty$ are well defined.
To finish this section, we introduce various limits that are used
later to obtain dimension bounds for $\partial_{\rho}\Omega$.
For a domain $\Omega\subset\mathbb{R}^{n}$, a density $\rho$ and
$x\in\partial\Omega_{d}$, we define
(2.2)
$$i^{-}(x)=\liminf_{\underset{y\in\Omega}{y\rightarrow x}}\frac{\log\rho(y)}{\log d(y)},\quad i^{+}(x)=\limsup_{\underset{y\in\Omega}{y\rightarrow x}}\frac{\log\rho(y)}{\log d(y)},$$
where the limits are considered with respect to the internal
metric. Observe that $i^{+}(x)\geq-1$ for all $x\in\partial_{\rho}\Omega$, but $i^{-}(x)$ does not
have to be bounded from below.
For a domain $\Omega\subset\mathbb{R}^{n}$, a density $\rho$ and $\beta>-1$, we define
(2.3)
$$d^{+}(\beta)=\dim_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\},$$
(2.4)
$$D^{+}(\beta)=\operatorname{Dim}_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\},$$
(2.5)
$$d^{-}(\beta)=\dim_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\leq\beta\},$$
(2.6)
$$D^{-}(\beta)=\operatorname{Dim}_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\leq\beta\}.$$
For a density $\rho$ on
$\mathbb{B}^{n}$ and $x\in S^{n-1}$, we set
(2.7)
$$k^{-}(x)=\liminf_{r\uparrow 1}\frac{\log\rho(rx)}{\log(1-r)},\quad k^{+}(x)=\limsup_{r\uparrow 1}\frac{\log\rho(rx)}{\log(1-r)}.$$
Note that for $\Omega=\mathbb{B}^{n}$ we have $i^{-}(x)\leq k^{-}(x)\leq k^{+}(x)\leq i^{+}(x)$ for $x\in S^{n-1}$.
Occasionally, we need to make the following technical assumption for the metric
$d_{\rho}$:
Assumption 2.2.
For each $x\in\partial_{\rho}\Omega$ and each $\varepsilon>0$, there is
$r>0$ such that for all $y\in B_{\rho}(x,r)$ there is a curve $\gamma$
joining $x$ to $y$ in $\Omega$ such that $h(\gamma)\geq d(x,y)^{1+\varepsilon}$ and
$\ell_{\rho}(\gamma)\leq d_{\rho}(x,y)^{1-\varepsilon}$.
Here $h(\gamma)=\sup_{y\in\gamma}d(y)$ is the maximal distance of
$\gamma$ from the boundary (the “height” of $\gamma)$.
This assumption should be understood as a very mild monotonicity condition with respect to $d(x)$. It is used to obtain dimension lower bounds for the part of $\partial_{\rho}\Omega$ where $i^{+}\geq 0$. Close to such points, it is hard to obtain lower estimates for the $\rho$-length of curves that stay very close to $\partial\Omega$.
In fact, if Assumption 2.2 fails, it may happen that $\dim_{\rho}\partial_{\rho}\Omega=\operatorname{Dim}_{\rho}\partial_{\rho}\Omega=0$ even if $\Omega$ is a half-space, $\dim_{d}\partial_{\rho}\Omega>0$, and $i^{+}$ is uniformly bounded. See Example 4.6.
Assumption 2.2 is a natural generalisation of the Gehring-Hayman condition valid for conformal densities, see (5.3).
We summarise our main notation in Table 1.
3. Preliminary lemmas
We start by recalling the following simple lemma giving estimates
on expansion and compression behaviour of Hölder type
maps.
Lemma 3.1.
Suppose that $Z$ and $Y$ are separable metric spaces and let $f\colon Z\rightarrow Y$, $0<\delta<\infty$ and $X\subset Z$.
(1)
If for each $x\in X$ there are
$0<r_{x},C_{x}<\infty$ so that
$f(B_{Z}(x,r))\subset B_{Y}(f(x),C_{x}r^{\delta})$ for all $0<r<r_{x}$, then
(3.1)
$$\delta\dim_{Y}(f(X))\leq\dim_{Z}(X),$$
(3.2)
$$\delta\operatorname{Dim}_{Y}(f(X))\leq\operatorname{Dim}_{Z}(X).$$
(2)
If for each $x\in X$ there are $0<C_{x}<\infty$
and a sequence
$r_{x,i}>0$ such that $\lim_{i\rightarrow\infty}r_{x,i}=0$ and
$f(B_{Z}(x,r_{x,i}))\subset B_{Y}(f(x),C_{x}r_{x,i}^{\delta})$ for all $i$,
then
(3.3)
$$\delta\dim_{Y}(f(X))\leq\operatorname{Dim}_{Z}(X).$$
Proof.
The proof of (3.1) is standard.
We give some details for (3.2) and (3.3).
To prove
(3.2), we first observe that $X=\bigcup_{n\in\mathbb{N}}X_{n}$ where
$$X_{n}=\{x\in X\,:\,f(B_{Z}(x,r))\subset B_{Y}(f(x),nr^{\delta})\text{ for all }0<r<1/n\}.$$
Let $0<\varepsilon,s<\infty$, $A\subset X_{n}$ and suppose that
$B_{Y}(x_{i},r_{i})$, $i\in\mathbb{N}$ is a
packing of $f(A)$ so that $r_{i}<\min\{\varepsilon,n^{1-\delta}\}$ for
each $i$. If $y_{i}\in A\cap f^{-1}\{x_{i}\}$ it follows that
$f(B_{Z}(y_{i},n^{-1/\delta}r_{i}^{1/\delta}))\subset B_{Y}(x_{i},r_{i})$ (note that there can be more than one $y_{i}$ with $f(y_{i})=x_{i}$, choosing any of them will do). Thus,
$B_{Z}(y_{i},n^{-1/\delta}r_{i}^{1/\delta})$ is a packing of $A$. Letting
$\varepsilon\downarrow 0$, this
implies $P^{s}_{Y}(f(A))\leq n^{s}P^{s\delta}_{Z}(A)$ for all $A\subset X_{n}$. As $A\subset X_{n}$ is arbitrary, we also get $\mathcal{P}^{s}_{Y}(f(X_{n}))\leq n^{s}\mathcal{P}^{s\delta}_{Z}(X_{n})$, in particular
$\operatorname{Dim}_{Y}(f(X_{n}))\leq\operatorname{Dim}_{Z}(X_{n})/\delta$. The claim (3.2) now
follows
as $X=\cup_{n\in\mathbb{N}}X_{n}$.
In order to prove (3.3), let
$$X_{n}=\{x\in X\,:\,f(B_{Z}(x,r_{x,i}))\subset B_{Y}(f(x),nr_{x,i}^{\delta})\text{ for some sequence }r_{x,i}\downarrow 0\}.$$
Then $X=\cup_{n\in\mathbb{N}}X_{n}$. Choose $A\subset X_{n}$ and fix
$s,\varepsilon>0$. Applying the standard $5R$-covering theorem
(see e.g. [10, Theorem 2.1]) to the
collection
$$\mathcal{B}=\{B_{Y}(f(x),nr^{\delta})\,:\,x\in A,0<r<\varepsilon,f(B_{Z}(x,r))\subset B_{Y}(f(x),nr^{\delta})\}$$
we find a pairwise
disjoint subcollection $\{B_{Y}(f(x_{i}),nr_{i}^{\delta})\}_{i}$ of
$\mathcal{B}$ so that
$f(A)\subset\cup_{i}B_{Y}(f(x_{i}),5nr_{i}^{\delta})$. As $\{B_{Z}(x_{i},r_{i})\}_{i}$ is a packing of $A$, we get
$\mathcal{H}^{s/\delta,5n\varepsilon^{\delta}}_{Y}(f(A))\leq(5n)^{s/\delta}P^{s,\varepsilon}_{Z}(A)$ and letting $\varepsilon\downarrow 0$,
$\mathcal{H}^{s/\delta}_{Y}(f(A))\leq(5n)^{s/\delta}P^{s}_{Z}(A)$. As $A\subset X_{n}$ is arbitrary, we also get $\mathcal{H}^{s/\delta}_{Y}(f(X_{n}))\leq(5n)^{s/\delta}\mathcal{P}^{s}_{Z}(X_{n})$ and finally
$\dim_{Y}(f(X))\leq\operatorname{Dim}_{Z}(X)/\delta$ since $X=\cup_{n\in\mathbb{N}}X_{n}$.
∎
Below, we give a variant of Lemma 3.1 in terms of the
metrics $d$ and $d_{\rho}$.
Lemma 3.2.
Suppose that $\Omega\subset\mathbb{R}^{n}$ is a domain and
$\rho\colon\Omega\rightarrow(0,\infty)$ is a density. Let $A\subset\partial_{\rho}\Omega$ and $0\leq\delta\leq\infty$.
(1)
If
$$\liminf_{r\downarrow 0}\frac{\log\left(\operatorname{diam}_{\rho}(B_{d}(x,r))\right)}{\log r}\geq\delta$$
for all $x\in A$, then
$\delta\dim_{\rho}(A)\leq\dim_{d}(A)$ and
$\delta\operatorname{Dim}_{\rho}(A)\leq\operatorname{Dim}_{d}(A)$.
(2)
If
$$\liminf_{r\downarrow 0}\frac{\log\left(\operatorname{diam}_{d}(B_{\rho}(x,r))\right)}{\log r}\geq\delta$$
for all $x\in A$, then
$\dim_{\rho}(A)\geq\delta\dim_{d}(A)$ and
$\operatorname{Dim}_{\rho}(A)\geq\delta\operatorname{Dim}_{d}(A)$.
(3)
If
$$\limsup_{r\downarrow 0}\frac{\log\left(\operatorname{diam}_{\rho}(B_{d}(x,r))\right)}{\log r}\geq\delta$$
for all $x\in A$, then
$\delta\dim_{\rho}(A)\leq\operatorname{Dim}_{d}(A)$.
(4)
If
$$\limsup_{r\downarrow 0}\frac{\log\left(\operatorname{diam}_{d}(B_{\rho}(x,r))\right)}{\log r}\geq\delta$$
for all $x\in A$, then
$\operatorname{Dim}_{\rho}(A)\geq\delta\dim_{d}(A)$.
Proof.
All the claims (1)–(4) follow easily from Lemma 3.1 applied to the identity mapping $f\colon(\overline{\Omega}_{\rho},d)\rightarrow(\overline{\Omega}_{\rho},d_{\rho})$, $x\mapsto x$, and its inverse.
To prove
(1), for instance, fix $\lambda<\delta$. Then for all
$x\in A$, there is
$r_{x}>0$ so that
$B_{d}(x,r)\subset B_{\rho}(x,r^{\lambda})$ when
$0<r<r_{x}$. Thus, Lemma 3.1 (1) implies
$\lambda\dim_{\rho}(A)\leq\dim_{d}(A)$ and $\lambda\operatorname{Dim}_{\rho}(A)\leq\operatorname{Dim}_{d}(A)$. Letting
$\lambda\uparrow\delta$ yields (1).
∎
We end the preliminaries with the following lemma.
Lemma 3.3.
For all $0<\alpha\leq 1$ and $n\in\mathbb{N}$, there exist constants
$N=N(\alpha,n)\in\mathbb{N}$ and $c=c(\alpha)<\infty$ so that for all $\alpha$-John domains
$\Omega\subset\mathbb{R}^{n}$ the following holds: For all
$x\in\overline{\Omega}_{d}$ and
$r>0$, there are points $x_{1},\ldots,x_{N}\in\overline{\Omega}_{d}$ so
that
$B_{d}(x,r)\subset\cup_{i=1}^{N}B_{\alpha/2}(x_{i},cr)$.
Proof.
For all $y\in B_{d}(x,r)$, let $\gamma_{y}$ be an $\alpha$-cone that joins
$y$ to $x_{0}$, where $x_{0}\in\Omega$ is a fixed John centre of
$\Omega$. Moreover, we let
$$A_{y}=\{z\in\Omega\,:\,d(z,\gamma_{y}(t))<\tfrac{\alpha}{3}t\text{ for some }0<t<\operatorname{\ell}(\gamma_{y})\}.$$
We may assume that $d(x,x_{0})\geq 2r$ since otherwise $B_{d}(x,r)\subset B_{\alpha}(x_{0},2r)$.
We first claim that if $y,z\in B_{d}(x,r)$ such that $B_{d}(x,2r)\cap A_{y}\cap A_{z}\neq\emptyset$, then
$y$ and $z$ may be joined by an $(\alpha/2)$-cigar
$\gamma$ with $\operatorname{\ell}(\gamma)\leq c(\alpha)r$. For this, we may
assume that $d(x)<2r$ as otherwise the Euclidean line segment joining
$y$ to $z$ serves as $\gamma$.
Assume that $w\in B_{d}(x,2r)\cap A_{y}\cap A_{z}$ and choose
$t_{y},t_{z}>0$ so that $d(w,\gamma_{y}(t_{y}))<\tfrac{\alpha}{3}t_{y}$ and
$d(w,\gamma_{z}(t_{z}))<\tfrac{\alpha}{3}t_{z}$. Let $\gamma$ denote
the curve which consists of $\gamma_{y}|_{0<t\leq t_{y}}$,
$\gamma_{z}|_{0<t<t_{z}}$ and the two (Euclidean) line segments joining $w$ to
$\gamma_{y}(t_{y})$ and $\gamma_{z}(t_{z})$. As
$(t_{y}+\tfrac{\alpha}{3}t_{y})\tfrac{\alpha}{2}\leq\alpha t_{y}-\tfrac{\alpha}{3}t_{y}$ (and similarly for $t_{z}$), it follows that $\gamma$ is an
$\tfrac{\alpha}{2}$-cigar. Now $B_{e}(w,\tfrac{2}{3}\alpha t_{y})\subset\Omega$, $B_{e}(w,\tfrac{2}{3}\alpha t_{z})\subset\Omega$ by the $\alpha$-cone condition. Combining this with
the fact $d(w)\leq|w-x|+d(x)\leq 4r$ implies $t_{y},t_{z}\leq\tfrac{6}{\alpha}r$ and
consequently
$$\operatorname{\ell}(\gamma)\leq\left(1+\frac{\alpha}{3}\right)\left(t_{y}+t_{z}\right)\leq\left(\frac{12}{\alpha}+4\right)r=c(\alpha)r.$$
Let $x_{1},\ldots,x_{N}\in B_{d}(x,r)$ be such that $B_{d}(x,2r)\cap A_{x_{j}}\cap A_{x_{i}}=\emptyset$ whenever $i\neq j$. It suffices to show that $N\leq N(n,\alpha)$. For each $i$, let $y_{i}=\gamma_{x_{i}}(r)$.
Then $B_{d}(y_{i},\alpha r/3)=B_{e}(y_{i},\alpha r/3)\subset A_{x_{i}}\cap B_{d}(x,2r)$ and a volume
comparison yields
$N(r\alpha/3)^{n}\leq 2^{n}r^{n}$ implying the claim for $N(n,\alpha)=(6/\alpha)^{n}$.
∎
Remark 3.4.
A subset of the boundary of a
John domain has the same Hausdorff dimension both in the internal and
the
Euclidean metric. Indeed, it follows as in the above proof that for
any $x$ which is a Euclidean boundary point of $\Omega$, the
set $B_{e}(x,r)\cap\Omega$ may be covered by $N=N(\alpha,n)$
balls of radius $c(\alpha)r$ in the internal metric. A slightly more
detailed argument
implies a similar statement for the packing
dimension.
4. Dimension estimates on general domains
We first derive some straightforward dimension bounds
arising from the local power law behaviour of the density $\rho$
near $\partial\Omega$. For the definition of $i^{-}(x)$ and $i^{+}(x)$
recall (2.2). The relevant assumptions are slightly different
for the upper and lower bounds, and also depend on the sign of
$i^{\pm}$. Roughly speaking, the positive values of $i^{\pm}$ correspond to
expansion behaviour (of $d_{\rho}$ compared to $d$), whereas the
negative values are related to compression of dimensions. If we aim to find
the exact values of $\dim_{\rho}(\partial_{\rho}\Omega)$ and
$\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)$, then we are usually more interested
in the set where $i^{\pm}$ are negative.
Lemma 4.1.
Suppose that $\Omega\subset\mathbb{R}^{n}$, $\rho$ is a density
on $\Omega$,
$\beta>-1$,
$A\subset\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\}$ and
$B\subset\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\geq\beta\}$.
If $\beta<0$ or if Assumption 2.2 holds, then
(1)
$(1+\beta)\dim_{\rho}(A)\geq\dim_{d}(A)$,
(2)
$(1+\beta)\operatorname{Dim}_{\rho}(A)\geq\operatorname{Dim}_{d}(A)$.
If $\Omega$ is a John domain, or if $\beta>0$, we have
(3)
$(1+\beta)\dim_{\rho}(B)\leq\dim_{d}(B)$,
(4)
$(1+\beta)\operatorname{Dim}_{\rho}(B)\leq\operatorname{Dim}_{d}(B)$.
Proof.
Assume first that $\beta<0$ and choose $\beta<s<0$.
Now, for all $x\in A$, there is $q>0$ so that
$\rho(y)>d(y)^{s}$ for all $y\in B_{d}(x,q)$.
Let $r<(q/2)^{1+s}$
and choose $y\in B_{\rho}(x,r)$ such
that $d(x,y)>\operatorname{diam}_{d}(B_{\rho}(x,r))/3$. Also, let $\gamma$ be a curve
joining $x$ to
$y$ such that $\operatorname{\ell}_{\rho}(\gamma)<r$.
Then $\gamma\subset B_{d}(x,q)$ as otherwise there is a curve $\gamma^{\prime}\subset\gamma\cap\overline{B}_{d}(x,q)$ connecting $x$ to $\partial B_{d}(x,q)$, and then
$$\operatorname{\ell}_{\rho}(\gamma)\geq\operatorname{\ell}_{\rho}(\gamma^{\prime})=\int_{\gamma^{\prime}}\rho(z)|dz|\geq\int_{\gamma^{\prime}}d(\gamma(t))^{s}dt\geq\ell(\gamma^{\prime})^{1+s}\geq q^{1+s},$$
which is impossible.
Now
$\rho(z)>d(z)^{s}$ for all $z\in\gamma$ and combining this with the fact
$\operatorname{\ell}(\gamma)\geq d(x,y)$, we obtain
$$r>\operatorname{\ell}_{\rho}(\gamma)=\int_{\gamma}\rho(z)|dz|>\left(\frac{d(x,y)}{2}\right)^{1+s}.$$
This yields
$\operatorname{diam}_{d}(B_{\rho}(x,r))<3d(x,y)<6r^{1/(1+s)}$.
As this holds for all $0<r<(q/2)^{1+s}$, we get
(4.1)
$$\liminf_{r\downarrow 0}\frac{\log\operatorname{diam}_{d}(B_{\rho}(x,r))}{\log r}\geq\frac{1}{1+s}$$
for all $x\in A$.
Assume now that $s>\beta\geq 0$ and that Assumption 2.2
holds. Let $x\in A$, $\varepsilon>0$, $y\in\overline{\Omega}_{\rho}$. If
$d_{\rho}(x,y)$ is small, then Assumption 2.2 gives a curve
$\gamma$ joining $x$ to $y$ with $h(\gamma)\geq d(x,y)^{1+\varepsilon}$
and $\ell_{\rho}(\gamma)\leq d_{\rho}(x,y)^{1-\varepsilon}$. Thus, for $r>0$ small
enough, and all $y\in B_{\rho}(x,r)$, we have
$$\displaystyle d_{\rho}(x,y)^{1-\varepsilon}\geq\operatorname{\ell}_{\rho}(\gamma)=\int_{\gamma}\rho(z)|dz|\geq h(\gamma)(h(\gamma)/2)^{s}\geq 2^{-s}d(x,y)^{(1+s)(1+\varepsilon)}$$
for some curve joining $x$ and $y$.
This shows that under Assumption 2.2,
(4.1) holds true also if $\beta\geq 0$.
The claims (1) and (2) now follow using Lemma
3.2 (2) and letting $s\downarrow\beta$.
In order to prove the claims (3) and (4), in view of Lemma 3.2
(1), it suffices to show that
(4.2)
$$\liminf_{r\downarrow 0}\frac{\log\operatorname{diam}_{\rho}(B_{d}(x,r))}{\log r}\geq 1+\beta$$
for all $x\in B$. Let $x\in B$ and $s<\beta$. Then there is $q>0$ so that
$\rho(y)<d(y)^{s}$ for all $y\in B_{d}(x,q)$. Let $r<q/(2c+1)$, where
$c=c(\alpha)<\infty$ is the constant of Lemma 3.3 and
where $\alpha>0$ is such that $\Omega$ is an $\alpha$-John domain. By Lemma
3.3, we find $x_{1},\ldots,x_{N}\in B_{d}(x,(c+1)r)$,
$N=N(n,\alpha)$, such that
$B_{d}(x,r)\subset\bigcup_{i=1}^{N}B_{\alpha/2}(x_{i},cr)$.
Let $x_{i}\in\{x_{1},\ldots,x_{N}\}$ and $y\in B_{\alpha/2}(x_{i},cr)$. Then there is
an $(\alpha/2)$-cigar
$\gamma$ joining $x_{i}$ to $y$ with $\operatorname{\ell}(\gamma)<cr$. Assume that
$s\leq 0$. Since
$r<q/(2c+1)$, we have
$\rho(\gamma(t))<d(\gamma(t))^{s}\leq(\alpha/2)^{s}\min\{t,\operatorname{\ell}(\gamma)-t\}^{s}$
for all $0<t<\operatorname{\ell}(\gamma)$ and thus
$$d_{\rho}(x_{i},y)\leq\int_{\gamma}\rho(z)|dz|\leq 2(\alpha/2)^{s}\int_{t=0}^{\operatorname{\ell}(\gamma)/2}t^{s}\,dt=c_{1}\operatorname{\ell}(\gamma)^{1+s}<c^{1+s}c_{1}r^{1+s}$$
giving $\operatorname{diam}_{\rho}(B_{\alpha/2}(x_{i},cr))\leq 2c^{1+s}c_{1}r^{1+s}=c_{2}r^{1+s}$, where $c_{2}<\infty$ depends only on
$\alpha$ and $s$.
As
$B_{d}(x,r)$
is connected, we arrive at
(4.3)
$$\operatorname{diam}_{\rho}(B_{d}(x,r))\leq\sum_{i=1}^{N}\operatorname{diam}_{\rho}(B_{\alpha/2}(x_{i},cr))\leq Nc_{2}r^{1+s}.$$
If $s\geq 0$, we arrive at the same estimate by using the trivial
estimate $\rho(z)\leq\ell(\gamma)^{s}$ for all $z\in\gamma$.
Since (4.3) holds for all sufficiently small $r>0$ and $s<\beta$
is arbitrary, we get (4.2).
∎
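The mechanism behind these estimates can be seen in a model case. The following numerical sketch (an illustration we add here, not part of the proof) takes the upper half-plane with $\rho(z)=d(z)^{s}$ for some $-1<s<0$, joins two boundary points at Euclidean distance $r$ by a "tent" path (straight up to height $h$, across, and back down) of $\rho$-length $f(h)=2h^{1+s}/(1+s)+rh^{s}$, and minimises over $h$, exhibiting the scaling $d_{\rho}(x,y)\asymp d(x,y)^{1+s}$ that underlies both (4.1) and (4.2).

```python
# Model computation (an added illustration, not from the paper): in the
# upper half-plane with rho(z) = d(z)**s, -1 < s < 0, join the boundary
# points (0,0) and (r,0) by a "tent" path of adjustable height h.  Its
# rho-length is f(h) = 2*h**(1+s)/(1+s) + r*h**s, and minimising over h
# shows d_rho(x,y) ~ d(x,y)**(1+s).
s = -0.5
for r in (1.0, 0.1, 0.01):
    best = min(2 * h ** (1 + s) / (1 + s) + r * h ** s
               for h in (r * k / 100 for k in range(1, 201)))
    print(r, best / r ** (1 + s))  # ratio is constant in r (= 4 for s = -1/2)
```

The optimal height is comparable to $r$ itself, matching the heuristic that the cheapest route between nearby boundary points detours to depth about $d(x,y)$.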
Next we will use Lemma 4.1 to obtain
multifractal type formulas for estimating the dimension of
$\partial_{\rho}\Omega$.
To recall the definitions of $d^{\pm}(\beta)$ and $D^{\pm}(\beta)$, see (2.3)-(2.6).
Theorem 4.2.
Let $\Omega\subset\mathbb{R}^{n}$ be a John domain, and $\rho$ a density on
$\Omega$ so that Assumption 2.2 is satisfied. Then
(4.4)
$$\displaystyle\dim_{\rho}(\partial_{\rho}\Omega)\geq\sup\limits_{\beta>-1}\frac{d^{+}(\beta)}{1+\beta},$$
(4.5)
$$\displaystyle\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)\geq\sup\limits_{\beta>-1}\frac{D^{+}(\beta)}{1+\beta},$$
(4.6)
$$\displaystyle\dim_{\rho}\left(\partial_{\rho}\Omega\cap\{x\,:\,i^{-}(x)>-1\}\right)$$
$$\displaystyle\leq\sup\limits_{\beta>-1}\frac{d^{-}(\beta)}{1+\beta},$$
(4.7)
$$\displaystyle\operatorname{Dim}_{\rho}\left(\partial_{\rho}\Omega\cap\{x\,:\,i^{-}(x)>-1\}\right)$$
$$\displaystyle\leq\sup\limits_{\beta>-1}\frac{D^{-}(\beta)}{1+\beta}.$$
Proof.
Let us prove (4.4) and (4.6). The other estimates are
obtained similarly
with the help of the corresponding statements of Lemma 4.1.
Let
$$s<\sup\limits_{\beta>-1}\frac{d^{+}(\beta)}{1+\beta}$$
and pick $\beta>-1$ such that
$\dim_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\}>s(1+\beta)$.
Combining this with Lemma
4.1 (1) gives
$$\displaystyle\dim_{\rho}\left(\partial_{\rho}\Omega\right)\geq\dim_{\rho}\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\}\geq\frac{\dim_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\}}{1+\beta}>s$$
proving (4.4).
To prove (4.6), we observe that given an interval
$[a,b]\subset(-1,\infty)$, Lemma 4.1 (3) gives
$$\displaystyle\dim_{\rho}\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\in[a,b]\}$$
$$\displaystyle\leq\dim_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\in[a,b]\}/(1+a)$$
$$\displaystyle\leq\frac{1+b}{1+a}\sup_{\beta>-1}\frac{d^{-}(\beta)}{1+\beta}.$$
For any
$\varepsilon>0$ we may cover the interval $(-1,\infty)$ with intervals
$[a_{i},b_{i}]_{i\in\mathbb{N}}$ so that $1+b_{i}<(1+\varepsilon)(1+a_{i})$ for all
$i$. Consequently,
$$\displaystyle\dim_{\rho}(\partial_{\rho}\Omega\cap\{x\,:\,i^{-}(x)>-1\})$$
$$\displaystyle\leq\sup_{i\in\mathbb{N}}\dim_{\rho}(\partial_{\rho}\Omega\cap\{x\,:\,i^{-}(x)\in[a_{i},b_{i}]\})$$
$$\displaystyle<(1+\varepsilon)\sup_{\beta>-1}\frac{d^{-}(\beta)}{1+\beta}.$$
Now (4.6)
follows as $\varepsilon\downarrow 0$.
∎
Remarks 4.3.
a)
Suppose that $\Omega$ is a John domain, $\rho$ satisfies Assumption
2.2 and $i^{-}(x)>-1$ for
all $x\in\partial_{\rho}\Omega$.
Then Theorem 4.2 gives a formula for calculating
$\dim_{\rho}(\partial_{\rho}\Omega)$ provided that
$\sup_{\beta>-1}d^{+}(\beta)/(1+\beta)$ and $\sup_{\beta>-1}d^{-}(\beta)/(1+\beta)$
coincide. In particular, this is the case if $-1<i^{-}(x)=i^{+}(x)$ for
all $x\in\partial_{\rho}\Omega$.
A similar statement is, of course, true for the packing
dimension.
See also the examples below.
b) In general it is not possible to control
$\dim_{\rho}\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\leq-1\}$ in terms
of $\dim_{d}\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\leq-1\}$.
Let $\Omega=\mathbb{B}^{n}$ and choose a continuous
$f\colon(0,\infty)\to(0,\infty)$ such that $\int_{t=0}^{1}f(t)\,dt<\infty$ and $\log f(t)/\log t\to-1$ as $t\downarrow 0$. Then it is possible to construct a Cantor set $C\subset S^{n-1}$
such that $\dim_{d}(C)=0$ and $\dim_{\rho}(C)=\infty$ for
$\rho(x)=f(\operatorname{dist}(x,C))$. See also
[2, Proposition 7.1], where a similar type of example is considered.
In the following example, all four of the inequalities
(4.4)–(4.7) hold with equalities.
Example 4.4.
Let $\Omega=B_{e}(0,1)\subset\mathbb{R}^{n}$ and let $C\subset S^{n-1}$ be a Cantor
set with $\dim_{d}C=s$ and $\operatorname{Dim}_{d}C=t$. Let $\beta>-1$ and
$\rho(x)=d(x,C)^{\beta}$. Then $\dim_{\rho}(C)=s/(1+\beta)$,
$\operatorname{Dim}_{\rho}(C)=t/(1+\beta)$, and $\dim_{\rho}(\partial\Omega_{d}\setminus C)=\operatorname{Dim}_{\rho}(\partial\Omega_{d}\setminus C)=n-1$. Thus
$\dim_{\rho}(\partial_{\rho}\Omega)=\max\{n-1,s/(1+\beta)\}$ and
$\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)=\max\{n-1,t/(1+\beta)\}$.
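For a concrete instance of this computation, the following numerical sketch (added by us; the middle-thirds choice is an assumption for illustration, not from the paper) takes $C$ to be the usual middle-thirds Cantor set, so $s=\log 2/\log 3$. Level-$k$ construction arcs have length $3^{-k}$, and for $\rho(x)=d(x,C)^{\beta}$ their $\rho$-diameter is comparable to $\int_{0}^{3^{-k}}t^{\beta}\,dt$ (cf. Lemma 6.1), which divides the covering exponent by $1+\beta$:

```python
import math

# Numerical sketch (added; not from the paper).  For the middle-thirds
# Cantor set C there are 2**k level-k arcs of Euclidean length 3**-k.
# With rho(x) = d(x,C)**beta, the rho-diameter of such an arc is
# comparable to ell_k**(1+beta)/(1+beta), so the covering exponent
# log(#arcs) / -log(rho-diameter) tends to s/(1+beta).
beta = 0.5
s = math.log(2) / math.log(3)  # dim_d(C) for the middle-thirds set

for k in (10, 20, 40):
    ell_k = 3.0 ** -k
    diam_rho = ell_k ** (1 + beta) / (1 + beta)
    exponent = math.log(2 ** k) / -math.log(diam_rho)
    print(k, exponent)  # approaches s/(1+beta)

print(s / (1 + beta))
```

The slow drift of the printed exponents towards $s/(1+\beta)$ reflects the lower-order constant $1/(1+\beta)$ inside the logarithm, which is negligible in the limit.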
Below, we construct an example to show that all inequalities in
Theorem 4.2 can be strict.
Example 4.5.
There exist domains $\Omega$ and densities $\rho$ such that all four
of the inequalities (4.4)-(4.7) are strict.
Let $\Omega=\{(x,y)\in\mathbb{R}^{2}\,:\,y>0\}$ be the upper half-plane and
fix $-1<q<s<p<0$.
Define $A_{k}=\{(n2^{-2k},2^{-2k})\,:\,n\in\mathbb{Z}\}$,
$B_{k}=\{(n2^{-2k+1},2^{-2k+1})\,:\,n\in\mathbb{Z}\}$, and
$r_{k}=2^{-100k^{2}}$ for all $k\in\mathbb{N}$. Then choose a continuous density
$\rho\colon\Omega\rightarrow(0,\infty)$ so that $\rho(z)=2^{-2kq}$ if
$z\in A_{k}$, $\rho(z)=2^{-(2k+1)p}$ if $z\in B_{k}$ and $\rho(z)=d(z)^{s}$
if $z\in\Omega\setminus\left(\cup_{k\in\mathbb{N}}\cup_{x\in A_{k}\cup B_{k}}B_{d}(x,r_{k})\right)$. Then $i^{+}(x)\geq p$ and $i^{-}(x)\leq q$ for all
$x\in\partial\Omega_{d}$. Thus
$$\displaystyle\sup_{\beta>-1}d^{+}(\beta)/(1+\beta)=\sup_{\beta>-1}D^{+}(\beta)/(1+\beta)\leq 1/(1+p),$$
$$\displaystyle\sup_{\beta>-1}d^{-}(\beta)/(1+\beta)=\sup_{\beta>-1}D^{-}(\beta)/(1+\beta)\geq 1/(1+q).$$
On the
other hand, it is easy to see that
$\dim_{\rho}(\partial_{\rho}\Omega)=\operatorname{Dim}_{\rho}(\partial_{\rho}\Omega)=1/(1+s)$.
Our next example shows that the claims (1) and (2) of Lemma 4.1 do not necessarily hold without Assumption 2.2.
Example 4.6.
Let $0<\alpha_{n}<1$ be a sequence satisfying $\sum_{n=1}^{\infty}\alpha_{n}<\infty$. We construct a Cantor set $C\subset[0,1]$ with the following procedure: Let $I_{\varnothing}=[0,1]$, $\ell_{0}=1$, $I_{0}=[0,(1-\alpha_{1})/2]$, $I_{1}=[(1+\alpha_{1})/2,1]$ and $\ell_{1}=(1-\alpha_{1})/2$. Suppose $n\in\mathbb{N}$, $\mathtt{i}\in\{0,1\}^{n}$, and that $I_{\mathtt{i}}$ with $\operatorname{diam}(I_{\mathtt{i}})=\ell_{n}$ has been defined. We then define inductively $I_{\mathtt{i}0}$ and $I_{\mathtt{i}1}$ to be the subintervals of $I_{\mathtt{i}}$ with length $\ell_{n+1}=\ell_{n}(1-\alpha_{n})/2$ such that $I_{\mathtt{i}0}$ has the same left endpoint as $I_{\mathtt{i}}$ and $I_{\mathtt{i}1}$ has the same right endpoint as $I_{\mathtt{i}}$. We also denote by $J_{\mathtt{i}}$ the interval between $I_{\mathtt{i}0}$ and $I_{\mathtt{i}1}$. The $(\alpha_{n})$-Cantor set $C=C(\alpha_{n})$ is then defined as
$$C=\bigcap_{n\in\mathbb{N}}\bigcup_{\mathtt{i}\in\{0,1\}^{n}}I_{\mathtt{i}}\,.$$
For each $n\in\mathbb{N}$, we may choose $0<h_{n}<\ell_{n}$ such that
(4.8)
$$2^{n}\ell_{n}^{1/n}h_{n}^{1/n}\leq 1\,.$$
We also require that $h_{n+1}\leq h_{n}$.
Next we define a density $\rho$ on the upper half-plane $H$. For each $n$, and $\mathtt{i}\in\{0,1\}^{n}$, let $T_{\mathtt{i}}$ and $U_{\mathtt{i}}$ be the
isosceles triangles with base $J_{\mathtt{i}}$ and heights $h_{n}$ and $h_{n}/2$
respectively. For $\mathtt{i}=\varnothing$, we define $T_{\varnothing}=\{(x,y)\in H\,:\,x<0\text{ and }y<-2x\}\cup\{(x,y)\in H\,:\,x>1\text{ and }y<2x-2\}$ and $U_{\varnothing}=\{(x,y)\in H\,:\,(x,2y)\in T_{\varnothing}\}$.
We define
$$\rho(z)=\begin{cases}d(z)^{-1},&\text{ if }z\in\cup_{\mathtt{i}}U_{\mathtt{i}},\\
d(z),&\text{ if }z\notin\cup_{\mathtt{i}}T_{\mathtt{i}},\end{cases}$$
where the union is over all $\mathtt{i}\in\{\varnothing\}\cup\bigcup_{n\in\mathbb{N}}\{0,1\}^{n}$.
Moreover, we extend $\rho$ continuously into the strips $T_{\mathtt{i}}\setminus U_{\mathtt{i}}$ such that it is monotone in the $y$-coordinate.
It is now easy to see that $\partial_{\rho}H=C$ and that $i^{+}=1$ on $\partial_{\rho}H$. Since $\sum_{n}\alpha_{n}<\infty$, it follows that $\mathcal{L}(C)>0$ and thus in particular $\dim_{d}(C)=\operatorname{Dim}_{d}(C)=1$. If $n\in\mathbb{N}$ and $\mathtt{i}\in\{0,1\}^{n}$, we can connect any two points of $C\cap I_{\mathtt{i}}$ by two vertical segments of length $h_{n}$ and the horizontal segment between their tops such that, apart from endpoints, these segments lie completely outside $\cup_{\mathtt{i}}T_{\mathtt{i}}$. This implies $\operatorname{diam}_{\rho}(C\cap I_{\mathtt{i}})\leq h_{n}^{2}+\ell_{n}h_{n}\leq 2\ell_{n}h_{n}$ and thus for each $n$, there is a covering of $\partial_{\rho}H$ by $2^{n}$ sets of $\rho$-diameter $2\ell_{n}h_{n}$.
Combining with (4.8) and letting $n\rightarrow\infty$ yields $\dim_{\rho}(\partial_{\rho}H)=\operatorname{Dim}_{\rho}(\partial_{\rho}H)=0$. This shows that the claims (1) and (2) of Lemma 4.1 are not valid without Assumption 2.2.
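The role of condition (4.8) can be made explicit with a short computation (added by us, for illustration): at equality in (4.8) one has $\ell_{n}h_{n}=2^{-n^{2}}$, so the level-$n$ covering sums vanish for every positive exponent.

```python
import math

# Added check (not from the paper): at equality in (4.8) we have
# ell_n * h_n = 2**(-n*n), so the covering of the boundary by 2**n
# sets of rho-diameter 2*ell_n*h_n has covering sum
#     2**n * (2*ell_n*h_n)**t,
# whose logarithm equals log(2) * (n + t - t*n**2) and tends to
# -infinity for every fixed t > 0.  Hence dim_rho = Dim_rho = 0.
for t in (1.0, 0.1, 0.01):
    for n in (10, 100, 1000):
        log_sum = math.log(2) * (n + t - t * n ** 2)
        print(t, n, log_sum)  # eventually negative for each fixed t
```

The smaller the exponent $t$, the larger $n$ must be before the quadratic term dominates, but it always does; this is exactly why every positive exponent gives zero dimension.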
The final example of this section shows that neither the estimates (3)–(4) of
Lemma 4.1 nor (4.6)–(4.7) of Theorem
4.2 need hold if
$\Omega$ is not a John domain.
Example 4.7.
We construct a snowflake type domain
$\Omega\subset\mathbb{R}^{2}$ that does not satisfy (3) nor
(4) of Lemma 4.1.
To begin with, we fix $0<s<1/2$ and let
$0<\alpha_{1}<1/2$.
We
start with an equilateral triangle with sides of length $l_{0}=1$ and
replace the middle $\alpha_{1}$-th portion of each of the sides by two
segments of length $l_{1}=(1-\alpha_{1})/2$.
We continue inductively.
At the step $k$, we have $3\cdot 4^{k}$
segments of length $l_{k}$ and we replace the middle $\alpha_{k}$-th
portion of each of these segments by two line segments of length
$l_{k+1}=l_{k}(1-\alpha_{k+1})/2$, see Figure 1. The numbers
$\alpha_{k}$ are defined so that
(4.9)
$$\alpha_{k+1}=l_{k}^{1-2s}(1-\alpha_{k+1})/2\,.$$
Observe that $\alpha_{k}\downarrow 0$ as $k\rightarrow\infty$.
We denote by $\Omega_{k}$ the domain bounded by the
line segments at step $k$ and define
$\Omega=\cup_{k\in\mathbb{N}}\Omega_{k}$.
We denote by $\Sigma\subset\partial\Omega_{d}$ the part of the boundary
that joins two vertices of the original equilateral triangle and does
not contain the third vertex.
For notational convenience, we consider only points of
$\Sigma$. This does not affect the generality as
$\partial\Omega_{d}\setminus\Sigma$ consists of two translates of
$\Sigma$. For $x\in\Sigma$, we let
$a(x)\in\{1,2,3,4\}^{\mathbb{N}}$ denote its coding or “address” arising from the
enumeration of the segments in each level as in the
Figure 1. Note that this address is unique outside a
countable set of
points.
Next we define
$\rho(z)=d(z)^{-1/2}$ for all $z\in\Omega$ and consider the set
$A=\{x\in\Sigma\,:\,a(x)\in\{2,3\}^{\mathbb{N}}\}$. It is easy to see
that there are numbers $0<D_{1}<D_{2}<\infty$, so that
$\dim_{d}(A)=D_{1}=\operatorname{Dim}_{d}(A)$ and $\dim_{d}(\partial\Omega_{d})=D_{2}=\operatorname{Dim}_{d}(\partial\Omega_{d})$ (actually
$D_{1}=1$ and $D_{2}=2$ but this is not essential). If we show that
(4.10)
$$\dim_{\rho}(A)=\operatorname{Dim}_{\rho}(A)=D_{1}/s,$$
then it follows that the claims (3) and
(4) of Lemma 4.1 do not hold. Observe that
$i^{-}(x)=i^{+}(x)=-1/2$ for all $x\in\partial\Omega_{d}$.
Let $x\in A$ and
$y\in\overline{\Omega}_{d}$
and choose the smallest $k\in\mathbb{N}$ so that
$l_{k}<2d(x,y)$. Let $z$ be as in Figure
2, i.e. $z$ is the “base point” of a cone of $\Omega_{k}$
with “side-length” $l_{k}$ which is closest to $x$. Then
$$\displaystyle d_{\rho}(x,z)\leq c_{0}\sum_{n=k}^{\infty}\alpha_{n}^{-1/2}l_{n}^{1/2}=c_{0}\sum_{n=k}^{\infty}l_{n-1}^{s}\leq c_{1}l_{k}^{s}\leq 2^{s}c_{1}d(x,y)^{s}.$$
Here the first equality follows from (4.9), and the estimate $c_{0}\sum_{n=k}^{\infty}l_{n-1}^{s}\leq c_{1}l_{k}^{s}$ holds because $l_{k}/4<l_{k+1}<l_{k}/2$ for all
$k\in\mathbb{N}$. By a similar argument, it follows that $d_{\rho}(z,y)\leq c_{2}l_{k}^{s}\leq c_{3}d(x,y)^{s}$. Thus $d_{\rho}(x,y)\leq cd(x,y)^{s}$.
On the other hand, it is clear that
$d_{\rho}(x,y)\geq c_{4}l_{k}^{s}\geq c^{\prime}d(x,y)^{s}$, since $a(x)\in\{2,3\}^{\mathbb{N}}$.
Thus, we have $c^{\prime}d(x,y)^{s}\leq d_{\rho}(x,y)\leq cd(x,y)^{s}$ for all $x\in A$ and $y\in\overline{\Omega}_{d}$, where the
constants $0<c^{\prime},c<\infty$ are independent of the points $x$ and $y$; in other words, $B_{\rho}(x,c^{\prime}r^{s})\subset B_{d}(x,r)\subset B_{\rho}(x,cr^{s})$ for all $x\in A$ and $r>0$.
The claim (4.10) now follows from Lemma 3.2.
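The bookkeeping in this example can be verified numerically. The sketch below (added by us; the value $s=1/4$ is one arbitrary admissible choice) iterates the rule (4.9), solved for $\alpha_{k+1}$, and checks the three facts used above: $\alpha_{k}\downarrow 0$, $l_{k}/4<l_{k+1}<l_{k}/2$, and the identity $\alpha_{k+1}^{-1/2}l_{k+1}^{1/2}=l_{k}^{s}$ behind the estimate of $d_{\rho}(x,z)$.

```python
# Added verification (not from the paper) of the recursion in Example 4.7.
# From (4.9), alpha_{k+1} = l_k**(1-2s) * (1 - alpha_{k+1}) / 2, i.e.
#     alpha_{k+1} = u / (2 + u)   with   u = l_k**(1-2s),
# and l_{k+1} = l_k * (1 - alpha_{k+1}) / 2.
s = 0.25          # any 0 < s < 1/2 works; s = 1/4 is an arbitrary choice
l = 1.0
for k in range(40):
    u = l ** (1 - 2 * s)
    alpha = u / (2 + u)                  # (4.9) solved for alpha_{k+1}
    l_next = l * (1 - alpha) / 2
    assert l / 4 < l_next < l / 2        # side lengths roughly halve
    # identity alpha_{k+1}**(-1/2) * l_{k+1}**(1/2) == l_k**s
    assert abs(alpha ** -0.5 * l_next ** 0.5 - l ** s) < 1e-9
    l = l_next
print(alpha)      # alpha_k has decreased towards 0
```

Since $u=l_{k}^{1-2s}$ is decreasing in $k$, so is $\alpha_{k+1}=u/(2+u)$, which makes $\alpha_{k}\downarrow 0$ transparent.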
Remark 4.8.
Suppose that $A\subset\{x\in\partial_{\rho}\Omega\,:\,i^{-}(x)\geq\beta\}$ has the
following accessibility property for some $1\leq\lambda<-1/\beta$: For
each $x\in\partial\Omega_{d}$ there are $0<r,c<\infty$ such that for all $y\in B_{d}(x,r)\cap A$ there exists a curve $\gamma$ joining $x$ and
$y$ so that $d(\gamma(t),\partial\Omega_{d})\geq c\min\{t^{\lambda},(\operatorname{\ell}(\gamma)-t)^{\lambda}\}$ for all $0<t<\operatorname{\ell}(\gamma)$. Then the
proof of Lemma 4.1 with trivial modifications implies
$(1+\lambda\beta)\dim_{\rho}(A)\leq\dim_{d}(A)$ and
$(1+\lambda\beta)\operatorname{Dim}_{\rho}(A)\leq\operatorname{Dim}_{d}(A)$. On the other hand, if
for each $x\in B\subset\{x\in\partial_{\rho}\Omega\,:\,i^{+}(x)\leq\beta\}$ there are
$0<r,c<\infty$ so that for all curves $\gamma$ with $\gamma(0)=x$ we
have $d(\gamma(t),\partial\Omega_{d})<ct^{\lambda}$ for $0<t<r$, then we
get $(1+\lambda\beta)\dim_{\rho}(B)\geq\dim_{d}(B)$,
$(1+\lambda\beta)\operatorname{Dim}_{\rho}(B)\geq\operatorname{Dim}_{d}(B)$. The previous example
shows that these estimates are sharp.
5. Conformal densities
The results in the last section are based on estimates of the
quantities $i^{+}(x)$ and $i^{-}(x)$ which are defined as internal
limits when $\Omega\ni y\rightarrow x\in\partial\Omega_{d}$. This causes a
lack of generality; it is quite possible that $i^{+}(x)=0$ and
$i^{-}(x)=-1$ for all $x\in\partial\Omega_{d}$. (For instance, choose
$\beta=-1$, $\lambda=0$ in the forthcoming Example 6.3.)
However, if we have additional information on the geometry of
$(\overline{\Omega}_{\rho},d_{\rho})$, then it might be enough to consider the
ratios $\log\rho(y)/\log d(y)$ along some fixed curves or cones.
The purpose of this section is to show that this is the case
for so called conformal densities which
arise naturally in connection with conformal and quasiconformal
mappings and their generalisations,
see [2].
A density $\rho$ on $\mathbb{B}^{n}$ is called a conformal density if
there are constants $1\leq c_{0},c_{1}<\infty$ such that for each
$x\in\mathbb{B}^{n}$ and for all $y\in B_{e}(x,d(x)/2)$ we have
(5.1)
$$c_{0}^{-1}\leq\rho(y)/\rho(x)\leq c_{0},$$
and moreover,
(5.2)
$$\mu_{\rho}(B_{\rho}(x,r))\leq c_{1}r^{n}$$
for all $r>0$.
Here $\mu_{\rho}$ is the measure given by
$\mu_{\rho}(E)=\int_{E}\rho^{n}\,d\mathcal{L}^{n}$ for $E\subset\mathbb{B}^{n}$.
In the literature, (5.1) is often called the Harnack inequality, and one refers
to (5.2) as a volume growth condition.
An important corollary of the conditions (5.1)–(5.2) is the following Gehring-Hayman type estimate: There exists $1\leq c<\infty$ such that
(5.3)
$$c^{-1}d_{\rho}(x,y)\leq\int_{t=0}^{d(x,y)}\rho\left((1-t)x\right)\,dt+\int_{t=0}^{d(x,y)}\rho\left((1-t)y\right)\,dt\leq cd_{\rho}(x,y)$$
for all $x,y\in\partial_{\rho}\mathbb{B}^{n}$.
See [2, Theorem 3.1] and also [6].
Motivated by this estimate, we consider variants $k^{-}$ and $k^{+}$
of the quantities $i^{-}$ and $i^{+}$ for a density $\rho$ on
$\mathbb{B}^{n}$ at $x\in S^{n-1}$. Recall that
$k^{-}(x)=\liminf_{r\uparrow 1}\log\rho(rx)/\log(1-r)$, and
$k^{+}(x)=\limsup_{r\uparrow 1}\log\rho(rx)/\log(1-r)$.
Occasionally we also use $k^{-}$ and $k^{+}$ when $\Omega=\mathbb{H}$ is an open half
space and then the limits are considered along straight lines
orthogonal to the boundary of $\mathbb{H}$. The reduction to $k^{\pm}$ is possible since (5.3) is a much stronger condition than the Assumption 2.2 that was used earlier for the same purpose.
In the following result we only assume that
(5.1) and (5.3) hold. Thus, the result applies to a
slightly larger collection of densities than the conformal
densities. See [12], and also Example 6.3
to follow.
Theorem 5.1.
Suppose that $\rho$ is a density on
$\mathbb{B}^{n}$ that satisfies the conditions (5.1) and
(5.3). Let $\beta>-1$,
$$\displaystyle A\subset\{x\in\partial_{\rho}\mathbb{B}^{n}\,:\,k^{+}(x)\leq\beta\},$$
$$\displaystyle B\subset\{x\in\partial_{\rho}\mathbb{B}^{n}\,:\,k^{-}(x)\geq\beta\},$$
$$\displaystyle C\subset\{x\in\partial_{\rho}\mathbb{B}^{n}\,:\,k^{-}(x)\leq\beta\}.$$
Then
(1)
$(1+\beta)\dim_{\rho}(A)\geq\dim_{d}(A)$,
(2)
$(1+\beta)\dim_{\rho}(B)\leq\dim_{d}(B)$,
(3)
$(1+\beta)\operatorname{Dim}_{\rho}(A)\geq\operatorname{Dim}_{d}(A)$,
(4)
$(1+\beta)\operatorname{Dim}_{\rho}(B)\leq\operatorname{Dim}_{d}(B)$,
(5)
$(1+\beta)\operatorname{Dim}_{\rho}(C)\geq\dim_{d}(C)$.
Proof.
The claims (1)–(4) have proofs very similar to the
proofs of the corresponding statements of Lemma
4.1.
We first apply (5.3) to conclude that for each
$x\in A$ and $y\in\mathbb{B}^{n}\setminus B_{d}(x,r)$, we have
$$d_{\rho}(x,y)\geq c^{-1}\int_{t=0}^{r}\rho((1-t)x)\,dt\geq c^{-1}\int_{t=0}^{r}t^{s}\,dt\geq c_{0}r^{1+s}$$
if $s>\beta$ and $r>0$ is small. This implies
$\operatorname{diam}_{d}(B_{\rho}(x,r))\leq c_{1}r^{1/(1+s)}$ and the claims (1) and
(3) now follow by Lemma 3.2 (2).
To prove (2) and (4), let $s<\beta$
and for $n\in\mathbb{N}$, denote
$$B_{n}=\{x\in B\,:\,\rho((1-t)x)<t^{s}\text{ for all }0<t<1/n\}.$$
Using
(5.3), we find $r_{0}>0$ so that
$$d_{\rho}(x,y)\leq c_{2}\int_{t=0}^{d(x,y)}t^{s}\,dt\leq c_{3}d(x,y)^{1+s}$$
whenever $x,y\in B_{n}$ and $d(x,y)<r_{0}$. In other words,
$\operatorname{diam}_{\rho}(B_{d}(x,r)\cap B_{n})\leq c_{4}r^{1+s}$ when
$0<r<r_{0}^{1/(1+s)}$. Now Lemma 3.2 (1)
implies $(1+s)\dim_{\rho}(B_{n})\leq\dim_{d}(B_{n})$ and
$(1+s)\operatorname{Dim}_{\rho}(B_{n})\leq\operatorname{Dim}_{d}(B_{n})$. Note that it is enough to assume
$\liminf_{r\downarrow 0}(\log\left(\operatorname{diam}_{\rho}(B_{d}(x,r)\cap A)\right))/(\log r)\geq\delta$ in
Lemma 3.2 (1) (since
$\dim_{\partial_{\rho}\mathbb{B}^{n}}(A)=\dim_{(A,d_{\rho})}(A)$ and
$\operatorname{Dim}_{\partial_{\rho}\mathbb{B}^{n}}(A)=\operatorname{Dim}_{(A,d_{\rho})}(A)$). The claims
(2) and (4) now follow since $B=\cup_{n\in\mathbb{N}}B_{n}$ and
$s<\beta$ is arbitrary.
It remains to prove (5). Let $x\in C$ and $s>\beta$. Then
there is a sequence $0<r_{i}\downarrow 0$ such that
$\rho((1-r_{i})x)>r_{i}^{s}$ for all $i$. Combined with (5.1), this
gives
$$\int_{t=0}^{r_{i}}\rho((1-t)x)\,dt\geq c_{5}r_{i}^{1+s}$$
and using also (5.3), $\operatorname{diam}_{d}(B_{\rho}(x,c_{6}r_{i}^{1+s}))\leq r_{i}$. Thus
$$\limsup_{r\downarrow 0}\frac{\log\operatorname{diam}_{d}(B_{\rho}(x,r))}{\log r}\geq\frac{1}{1+s}$$
and (5) follows from Lemma 3.2 (4).
∎
Remarks 5.2.
a) Using the claims (1)–(4) of Theorem
5.1 one may derive multifractal type
formulas completely analogous to (4.4)–(4.7).
Using (5), we moreover have that $\operatorname{Dim}_{\rho}(\partial_{\rho}B_{d}(0,1))\geq\sup_{\beta>-1}\frac{e^{-}(\beta)}{1+\beta}$, where
(5.4)
$$e^{-}(\beta)=\dim_{d}(\{x\in\partial_{\rho}B_{d}(0,1)\,:\,k^{-}(x)\leq\beta\}).$$
Example 6.2 shows that this is sharp
in the sense that one cannot replace $\dim_{d}$ by $\operatorname{Dim}_{d}$ in defining
$e^{-}(\beta)$, even if $\rho$ is a conformal density.
b) We formulated the above result for densities defined on
$\mathbb{B}^{n}$. The same proof goes through for any John domain
$\Omega\subset\mathbb{R}^{n}$ if the condition (5.3) is replaced by
$$c^{-1}d_{\rho}(x,y)\leq\int_{t=0}^{d(x,y)}\rho(\gamma_{x}(t))\,dt+\int_{t=0}^{d(x,y)}\rho(\gamma_{y}(t))\,dt\leq cd_{\rho}(x,y),$$
where $\gamma_{x}$ is a fixed $\alpha$-cone with $\gamma_{x}(0)=x$ for
each $x\in\partial_{\rho}\Omega$. Actually, we could even weaken this
in the spirit of (2.2) and assume only that for all
$\varepsilon>0$, we have
$$d_{\rho}(x,y)^{1+\varepsilon}\leq\int_{t=0}^{d(x,y)}\rho(\gamma_{x}(t))\,dt+\int_{t=0}^{d(x,y)}\rho(\gamma_{y}(t))\,dt\leq d_{\rho}(x,y)^{1-\varepsilon}$$
when $d(x,y)$ is small enough.
c) Makarov [9, Theorems 0.5, 0.6] proved results essentially similar to Theorem 5.1 (1)–(2) in case $\beta>0$ and $\rho=|f^{\prime}|$ for $f$ conformal. He also showed [9, Theorem 0.8] that $k^{-}$ cannot be replaced by $k^{+}$ in (2).
d) In [2], Bonk, Koskela, and Rohde proved the following deep
fact. If $\rho$ is a
conformal density on $\mathbb{B}^{n}$, then:
(5.5)
$$\text{There is }E\subset S^{n-1}\text{ with }\dim_{d}E=0\text{ such that }\dim_{\rho}(\partial_{\rho}\mathbb{B}^{n}\setminus E)\leq n.$$
See [2, Theorem 7.2]. As a central
tool, they used an estimate analogous to Theorem
5.1 (2). In fact, combining Theorem 5.1 (2) and [2, Theorem
5.2] gives a simpler proof for (5.5) than the one given in
[2]. However, their result is
quantitatively stronger than (5.5).
e) A generic situation in which Theorem 5.1
is stronger than Theorem 4.2 will be discussed in Example 6.3.
6. Further examples, remarks, and questions
We first give the example mentioned in Remark 5.2 e) showing
that one cannot replace $\dim_{d}$ by $\operatorname{Dim}_{d}$ in defining $e^{-}(\beta)$. We will make use of the following lemma, which we formulate in a more general setting for future reference.
Lemma 6.1.
Let $\Omega\subset\mathbb{R}^{n}$ be a $(2\alpha)$-John domain and $C\subset\partial\Omega_{d}$. Suppose that $\widetilde{\rho}\colon(0,\infty)\rightarrow(0,\infty)$ is nonincreasing and satisfies $\int_{0}^{1}\widetilde{\rho}(t)\,dt<\infty$. Define $\rho(x)=\widetilde{\rho}(d(x,C))$ for $x\in\Omega$. Then for all $x\in C$ and
$0<r<\operatorname{diam}_{d}(\Omega)/2$, we have
(6.1)
$$\displaystyle\operatorname{diam}_{d}\left(B_{\rho}\left(x,\frac{1}{2}\int_{t=0}^{r}\widetilde{\rho}(t)\,dt\right)\right)\leq 2r,$$
(6.2)
$$\displaystyle\operatorname{diam}_{\rho}\left(B_{d}(x,r)\right)\leq c_{1}\int_{t=0}^{c_{2}r}\widetilde{\rho}(t)\,dt$$
for some constants $0<c_{1},c_{2}<\infty$ that depend only on $\alpha$ and
$n$.
Proof.
Let $x\in C$ and $y\in\overline{\Omega}_{d}$. Denote $d=d(x,y)$ and suppose that $\gamma$ is
a curve joining $x$ and $y$. To prove (6.1), it suffices to
show that
(6.3)
$$\operatorname{\ell}_{\rho}(\gamma)\geq\frac{1}{2}\int_{t=0}^{d}\widetilde{\rho}(t)\,dt.$$
Let $h=h(\gamma)=\max_{0\leq t\leq L}d(\gamma(t))$ where $L=\operatorname{\ell}(\gamma)$. Then
$\operatorname{\ell}_{\rho}(\gamma)\geq\tfrac{1}{2}\int_{t=0}^{h}\widetilde{\rho}(t)\,dt+\tfrac{1}{2}d\widetilde{\rho}(h)$. If
$h\geq d$ the estimate (6.3) clearly holds. If $h<d$,
then $d\widetilde{\rho}(h)\geq\int_{t=h}^{d}\widetilde{\rho}(t)dt$ since $\widetilde{\rho}$ is nonincreasing
and consequently
$$\operatorname{\ell}_{\rho}(\gamma)\geq\frac{1}{2}\left(\int_{t=0}^{h}\widetilde{\rho}(t)\,dt+d\widetilde{\rho}(h)\right)\geq\frac{1}{2}\int_{t=0}^{d}\widetilde{\rho}(t)\,dt.$$
This settles the proof of (6.1).
To prove (6.2), let $x\in C$ and $r>0$. We use
Lemma 3.3 to cover
$B_{d}(x,r)$ with sets
$B_{\alpha}(x_{i},cr)$, $i=1,\ldots,N=N(n,\alpha)$. Let $y\in B_{\alpha}(x_{i},cr)$
and pick an $\alpha$-cigar $\gamma$ with $\operatorname{\ell}(\gamma)\leq cr$
joining $y$ to $x_{i}$. Now
(6.4)
$$d_{\rho}(y,x_{i})\leq\int_{\gamma}\widetilde{\rho}(d(z,C))|dz|\leq 2\int_{t=0}^{cr/2}\widetilde{\rho}(\alpha t)\,dt=\frac{2}{\alpha}\int_{t=0}^{\alpha cr/2}\widetilde{\rho}(t)\,dt.$$
As $B_{d}(x,r)$ is (path-)connected and is covered by $N$ sets of the type
$B_{\alpha}(x_{i},cr)$, we arrive at
$\operatorname{diam}_{\rho}(B_{d}(x,r))\leq(4N/\alpha)\int_{t=0}^{\alpha cr/2}\widetilde{\rho}(t)\,dt$,
proving the claim.
∎
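As a quick spot check (ours, not part of the proof) of the monotonicity step behind (6.3): since $\widetilde{\rho}$ is nonincreasing, $d\,\widetilde{\rho}(h)\geq\int_{h}^{d}\widetilde{\rho}(t)\,dt$ for $h<d$, so $\int_{0}^{h}\widetilde{\rho}(t)\,dt+d\,\widetilde{\rho}(h)\geq\int_{0}^{d}\widetilde{\rho}(t)\,dt$. For the power density $\widetilde{\rho}(t)=t^{\beta}$ this can be verified directly:

```python
# Added spot check (not from the paper) of the inequality behind (6.3)
# for the power density rho~(t) = t**beta with -1 < beta < 0:
#     int_0^h t**beta dt + d * h**beta  >=  int_0^d t**beta dt
# for every 0 < h < d, since rho~ is nonincreasing.
beta, d = -0.5, 1.0
target = d ** (1 + beta) / (1 + beta)        # int_0^d t**beta dt
for k in range(1, 100):
    h = d * k / 100
    lower = h ** (1 + beta) / (1 + beta) + d * h ** beta
    assert lower >= target - 1e-12
print("inequality holds on the whole grid")
```

The inequality holds with room to spare; the slack is exactly the term $d\,\widetilde{\rho}(h)-\int_{h}^{d}\widetilde{\rho}$ discarded in the proof.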
Example 6.2.
We show that $\dim_{d}$ cannot be replaced by $\operatorname{Dim}_{d}$ in (5.4)
even if $\rho$ is a conformal density.
We first fix numbers $0<a<b<1/2$,
$-1<\lambda<\eta<0$, and $\xi$ such that
(6.5)
$$a^{1+\lambda}=b^{1+\eta}=\xi,$$
and
(6.6)
$$-\log 2<\log\xi<-\tfrac{1}{2}\log 2.$$
Let us also pick natural numbers $n_{1}<N_{1}<n_{2}<N_{2}<n_{3}<N_{3}<\ldots$. We let
$C\subset S^{1}$ denote a Cantor set constructed as follows (see the construction in Example 4.6). We start
with an arc of length $1$ and remove an arc of length $1-2a$ from
the middle. Next, we remove arcs of length $a(1-2a)$ from
the middle of the two remaining arcs. We iterate this
construction for $n_{1}$ steps. After these $n_{1}$ steps, we have
$2^{n_{1}}$ arcs of length $a^{n_{1}}$. At the step $n_{1}+1$,
we remove arcs of relative length $1-2b$ from the middle of
each of these arcs. We continue the construction with the
parameter $b$ for $N_{1}-n_{1}$ steps. Then we use again the
parameter $a$ for $n_{2}-N_{1}$ steps and so on.
We denote by $E_{k,1},\ldots,E_{k,2^{k}}$ the arcs remaining after
$k$ steps and denote by $\ell_{k}$ the length of these arcs.
What remains at the
end is the Cantor set $C=\cap_{k\in\mathbb{N}}\cup_{i=1}^{2^{k}}E_{k,i}$.
Let $r_{0}=R_{1}=1$, $r_{1}=\ell_{n_{1}}=a^{n_{1}}$,
$R_{2}=\ell_{N_{1}}=a^{n_{1}}b^{N_{1}-n_{1}}$,
$r_{2}=\ell_{n_{2}}=a^{n_{1}+n_{2}-N_{1}}b^{N_{1}-n_{1}}$ and so
on. Thus $r_{i}$ (resp. $R_{i}$) is the length of a construction interval
of $C$ of level $n_{i}$ (resp. $N_{i-1}$). We define
$\rho(x)=\widetilde{\rho}(\operatorname{dist}(x,C))$ for all $x\in\mathbb{B}^{2}$,
where $\widetilde{\rho}$ is the function defined by
$$\widetilde{\rho}(t)=\begin{cases}\left(\frac{R_{1}R_{2}\cdots R_{k}}{r_{0}r_{1}\cdots r_{k-1}}\right)^{\eta-\lambda}t^{\lambda},\quad r_{k}\leq t\leq R_{k},\\
\left(\frac{R_{1}R_{2}\cdots R_{k}}{r_{0}r_{1}\cdots r_{k}}\right)^{\eta-\lambda}t^{\eta},\quad R_{k+1}\leq t\leq r_{k}.\end{cases}$$
Now, if $N_{i}/n_{i},n_{i+1}/N_{i}\rightarrow\infty$ fast enough, it
is easy to see that $\dim_{d}C=-\log 2/\log a$ and $\operatorname{Dim}_{d}C=-\log 2/\log b$, see e.g. [10, p. 77]. Moreover, it then
follows that $k^{-}(x)=\lambda$ if $x\in C$ and $k^{-}(x)=0$
otherwise.
Next, let $h_{k}=\int_{t=0}^{\ell_{k}}\widetilde{\rho}(t)\,dt$. Since
(6.7)
$$\widetilde{\rho}(\ell_{k})\ell_{k}=\xi^{k}$$
for all $k$ (combine (6.5) with the definitions), it follows that
$$\tfrac{1}{2}\xi^{k}=\tfrac{1}{2}\widetilde{\rho}(\ell_{k})\ell_{k}\leq\int_{t=\ell_{k+1}}^{\ell_{k}}\widetilde{\rho}(t)\,dt\leq\widetilde{\rho}(\ell_{k+1})\ell_{k}\leq a^{\lambda}\widetilde{\rho}(\ell_{k})\ell_{k}=a^{\lambda}\xi^{k}.$$
Thus
(6.8)
$$\tfrac{1}{2}\xi^{k}\leq h_{k}=\sum_{m\geq k}\int_{t=\ell_{m+1}}^{\ell_{m}}\widetilde{\rho}(t)\,dt\leq c_{0}\xi^{k}.$$
From Lemma 6.1, it follows that for each
$I=I_{k,i}$ we have
(6.9)
$$c_{1}h_{k}\leq\operatorname{diam}_{\rho}(I)\leq c_{2}h_{k}$$
for some constants $0<c_{1}<c_{2}<\infty$. Let $\mu$ be the natural
probability measure on $C$ that satisfies $\mu(I_{k,i})=2^{-k}$. Then
$$\lim_{k\rightarrow\infty}\frac{\log\mu(I_{k,i})}{\log(\operatorname{diam}_{\rho}(I_{k,i}))}=\frac{-\log 2}{(1+\lambda)\log a},$$
using (6.8) and (6.9). But this implies
$\dim_{\rho}(C)=\operatorname{Dim}_{\rho}(C)=(-\log 2)/((1+\lambda)\log a)$, see e.g. [4, Proposition
10.1] and [3, Corollary 3.20]. Thus,
$$1<\operatorname{Dim}_{\rho}(C)=\operatorname{Dim}_{\rho}(S^{1})=\frac{-\log 2}{(1+\lambda)\log a}<\frac{-\log 2}{(1+\lambda)\log b}=\frac{\operatorname{Dim}_{d}(C)}{1+\lambda}=\sup_{\beta>-1}\frac{\operatorname{Dim}_{d}(\{x\in\partial_{\rho}B_{d}(0,1)\,:\,k^{-}(x)\leq\beta\})}{1+\beta}\,,$$
recall (6.6).
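The bookkeeping in (6.5), (6.6) and the resulting dimension formulas can be sanity-checked numerically. The following sketch uses an illustrative parameter choice of our own ($\lambda=-1/2$, $\eta=-1/4$, $\xi=0.55$), not one singled out in the text:

```python
import math

# Illustrative parameters (our choice): -1 < lam < eta < 0
lam, eta = -0.5, -0.25
xi = 0.55  # (6.6) requires -log 2 < log xi < -(1/2) log 2, i.e. 1/2 < xi < 1/sqrt(2)
assert 0.5 < xi < 1 / math.sqrt(2)

# Solve a^{1+lam} = b^{1+eta} = xi, i.e. (6.5), for the construction ratios a, b
a = xi ** (1 / (1 + lam))
b = xi ** (1 / (1 + eta))
assert 0 < a < b < 0.5  # both are admissible Cantor-construction ratios

dim_d = math.log(2) / -math.log(a)    # dim_d(C)  = -log 2 / log a
Dim_d = math.log(2) / -math.log(b)    # Dim_d(C)  = -log 2 / log b
dim_rho = -math.log(2) / ((1 + lam) * math.log(a))  # dim_rho(C) = Dim_rho(C)

print(dim_d, Dim_d, dim_rho)
# The chain 1 < Dim_rho(C) < Dim_d(C)/(1+lam) displayed in the text:
assert 1 < dim_rho < Dim_d / (1 + lam)
assert 2 * xi ** 2 < 1  # the condition used when summing mu_rho(A_m)
```

For these values one gets $\dim_{d}C\approx 0.58$, $\operatorname{Dim}_{d}C\approx 0.87$ and $\operatorname{Dim}_{\rho}C\approx 1.16$, consistent with the displayed chain of inequalities.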
It remains to prove that $\rho$ is a conformal density. The condition
(5.1) is clearly satisfied so we only have to verify
(5.2). We show this for $x\in C$ and $0<r<1$ (the general case
$x\in\mathbb{B}^{2}$ follows easily from this). Using (5.1) we
may also assume that
$r=h_{k}$ for some $k\in\mathbb{N}$. For each $m\geq k$, we denote
$$A_{m}=\{y\in B_{d}(x,c_{3}\ell_{k})\,:\,\ell_{m}\leq d(y,C)\leq c_{3}\ell_{m}\}.$$
Then $B_{\rho}(x,h_{k})\subset\cup_{m\geq k}A_{m}$, for a suitable constant
$1<c_{3}<\infty$, recall (6.9). Moreover, it follows from
(5.1) and (6.7) that
$c_{4}\xi^{m}/\ell_{m}\leq\rho(y)\leq c_{5}\xi^{m}/\ell_{m}$
for all $y\in A_{m}$,
where $0<c_{4}<c_{5}<\infty$ depend only on $a,b,\lambda$, and
$\eta$. Since $\mathcal{L}^{2}(A_{m})\leq c_{6}2^{m-k}\ell_{m}^{2}$,
we arrive at
$$\mu_{\rho}(A_{m})=\int_{A_{m}}\rho^{2}\,d\mathcal{L}^{2}\leq c_{7}2^{m-k}\xi^{2m}.$$
As $2\xi^{2}<1$ by (6.6), this yields
$$\mu_{\rho}(B_{\rho}(x,h_{k}))\leq\sum_{m\geq k}\mu_{\rho}(A_{m})\leq c_{7}\sum_{m\geq k}2^{m-k}\xi^{2m}\leq c_{8}\xi^{2k}\leq c_{9}h_{k}^{2},$$
where the last estimate follows from (6.8).
Below, we construct a “multifractal type” example and calculate the
Hausdorff dimension of the boundary using Theorem 5.1.
Example 6.3.
We construct a domain and a conformal density that satisfies Gehring-Hayman condition (5.3) and compute the Hausdorff dimension of the boundary.
We define a density $\rho$ on the upper half-plane $H\subset\mathbb{R}^{2}$
(actually we
define $\rho(z)$ only for $z\in[0,1]\times(0,3]$ but the
definition is easily extended to the whole of $H$). Let
$-1<\beta,\lambda<0$, $\beta\neq\lambda$.
We consider the triadic decomposition of $[0,1]$: let $I_{\varnothing}=[0,1]$,
$I_{0}=[0,1/3]$, $I_{1}=[1/3,2/3]$, and $I_{2}=[2/3,1]$.
If $n\in\mathbb{N}$ and $\textbf{i}\in\{0,1,2\}^{n}$, let $I_{\textbf{i}0},I_{\textbf{i}1},I_{\textbf{i}2}$ denote its triadic subintervals
enumerated from left to right. For each such triadic interval
$I=I_{\textbf{i}}$, let $Q_{\textbf{i}}=I\times\left[|I|,3|I|\right]$.
Next we define weights $\rho_{\textbf{i}}$ inductively by the rules
$\rho_{\varnothing}=1$ and
$\rho_{\textbf{i}0}=\rho_{\textbf{i}2}=3^{-\lambda}\rho_{\textbf{i}}$,
$\rho_{\textbf{i}1}=3^{-\beta}\rho_{\textbf{i}}$.
Let $\rho\colon[0,1]\times(0,3]\rightarrow(0,\infty)$ be a density such that
$\rho(x_{\textbf{i}})=\rho_{\textbf{i}}$ if $x_{\textbf{i}}$ is the centre
point of $Q_{\textbf{i}}$. We also require that the condition
(5.1) holds with some $c_{0}<\infty$. This is possible because
of the symmetric definition of $\rho_{\textbf{i}}$:
If $I_{\textbf{i}}$ and $I_{\textbf{j}}$ are neighbouring intervals of the
same length, then
$3^{-|\beta-\lambda|}\leq|\rho_{\textbf{i}}/\rho_{\textbf{j}}|\leq 3^{|\beta-\lambda|}$.
We will next show that the Gehring-Hayman condition (5.3) holds for the
density $\rho$. Let $x,y\in[0,1]$ with $y-x=r>0$.
Let $\gamma_{1}$, $\gamma_{2}$, and $\gamma_{3}$ be the line segments
joining $(x,0)$ to $(x,r)$, $(x,r)$ to $(y,r)$, and
$(y,r)$ to $(y,0)$, respectively. Then a direct calculation using the
definitions gives
$$\int_{\gamma_{1}}\rho(z)\,|dz|\leq c_{1}\int_{t=0}^{r}t^{\min\{\beta,\lambda\}}\frac{\rho(x,r)}{r^{\min\{\beta,\lambda\}}}\,dt\leq c_{2}r\rho(x,r),$$
$$\int_{\gamma_{3}}\rho(z)\,|dz|\leq c_{1}\int_{t=0}^{r}t^{\min\{\beta,\lambda\}}\frac{\rho(y,r)}{r^{\min\{\beta,\lambda\}}}\,dt\leq c_{2}r\rho(y,r).$$
Combining these estimates with (5.1), we obtain
(6.10)
$$c_{3}\operatorname{\ell}_{\rho}(\gamma_{i})\leq\operatorname{\ell}_{\rho}(\gamma_{2})\leq c_{4}\operatorname{\ell}_{\rho}(\gamma_{i})$$
for $i=1,3$. The condition (5.3) is satisfied if we can show
that $\operatorname{\ell}_{\rho}(\gamma)\geq c\operatorname{\ell}_{\rho}(\gamma_{2})$ for any curve joining $x$ and $y$ in $H$. Denote
$h=h(\gamma)=\max_{0<t<\operatorname{\ell}(\gamma)}d(\gamma(t))$. If $h\leq r$, it follows
that $\operatorname{\ell}_{\rho}(\gamma)\geq c\operatorname{\ell}_{\rho}(\gamma_{2})$ since $\rho$ is
essentially decreasing on each vertical line segment. More precisely,
using (5.1) and the definitions of the weights
$\rho_{\textbf{i}}$, we get
(6.11)
$$\rho(a,tb)\geq c_{5}\rho(a,b)$$
if $(a,b)\in[0,1]\times(0,3]$ and $0<t<1$. Now suppose
that $h>r$ and let $z=\gamma(t_{0})$ where
$t_{0}=\min\{t>0\,:\,d(\gamma(t))=r\}$. If $d(z,\gamma_{2})<r$,
it follows easily from (5.1) that $\operatorname{\ell}_{\rho}(\gamma)\geq c\operatorname{\ell}_{\rho}(\gamma_{2})$. If $d(z,\gamma_{2})\geq r$, let $\eta$ be the
line segment joining $z$ to the closest point of $\gamma_{2}$. Then
(6.11) implies $\operatorname{\ell}_{\rho}(\gamma)\geq c_{5}\operatorname{\ell}_{\rho}(\eta)\geq c\operatorname{\ell}_{\rho}(\gamma_{2})$ where the last estimate follows using
(5.1). This settles the proof of (5.3).
We will next compute the Hausdorff dimension of the boundary. Let $0\leq t\leq 1$ and denote $A_{t}=\{x\in[0,1]\,:\,k^{-}(x)=k^{+}(x)=t\beta+(1-t)\lambda\}$.
Then
$$A_{t}=\left\{x=\sum_{i\in\mathbb{N}}x_{i}3^{-i}\,:\,x_{i}\in\{0,1,2\}\text{ and }\lim_{n\rightarrow\infty}\#\{1\leq i\leq n\,:\,x_{i}=1\}/n=t\right\}.$$
Using this expression, we get
(6.12)
$$\dim_{d}(A_{t})=\operatorname{Dim}_{d}(A_{t})=\frac{-t\log t+(t-1)\log((1-t)/2)}{\log 3}.$$
Indeed, if $\mu_{t}$ is the unique Borel probability measure on $[0,1]$
that satisfies $\mu_{t}(I_{\textbf{i}1})=t\mu_{t}(I_{\textbf{i}})$ and
$\mu_{t}(I_{\textbf{i}0})=\mu_{t}(I_{\textbf{i}2})$ for all triadic intervals
$I_{\textbf{i}}$, then we have
$$\lim_{r\downarrow 0}\frac{\log\mu_{t}(B_{d}(x,r))}{\log r}=\frac{-t\log t+(t-1)\log((1-t)/2)}{\log 3}$$
and this implies
(6.12). For instance,
see [4, Proposition 10.4].
Thus, from Theorem
5.1 and (6.12),
we get
(6.13)
$$\dim_{\rho}(A_{t})=\operatorname{Dim}_{\rho}(A_{t})=\frac{-t\log t+(t-1)\log((1-t)/2)}{(1+t\beta+(1-t)\lambda)\log 3}\,.$$
If $f(\beta,\lambda)$ is the maximum of (6.13) over
all $0\leq t\leq 1$, then we conclude that
$$\operatorname{Dim}_{\rho}(\partial_{\rho}H)\geq\dim_{\rho}(\partial_{\rho}H)\geq f(\beta,\lambda).$$
To finish this example, we show that for the Hausdorff dimension,
there is an equality in the above estimate. We give the proof in the
case $\beta<\lambda$, the case $\lambda<\beta$ can be handled with
similar arguments. First, we observe using Theorem
5.1 (2) that
$$\dim_{\rho}(\{k^{-}(x)\geq\beta/3+2\lambda/3\})\leq 1/(1+\beta/3+2\lambda/3)<f(\beta,\lambda),$$
where the strict inequality is obtained by differentiating (6.13) at
$t=1/3$. On the other hand, if $t>1/3$, and $A^{-}_{t}=\{x\in[0,1]\,:\,k^{-}(x)\leq t\beta+(1-t)\lambda\}$, then
$$A^{-}_{t}=\left\{x=\sum_{i\in\mathbb{N}}x_{i}3^{-i}\,:\,\limsup_{n\rightarrow\infty}\#\{1\leq i\leq n\,:\,x_{i}=1\}/n\geq t\right\}$$
and thus $\dim_{d}(A^{-}_{t})\leq(-t\log t+(t-1)\log((1-t)/2))/\log 3$. To
see this, observe that
$$\liminf_{r\downarrow 0}\frac{\log\mu_{t}(B_{d}(x,r))}{\log r}\leq\frac{-t\log t+(t-1)\log((1-t)/2)}{\log 3}$$
for all $x\in A^{-}_{t}$ and
use [4, Proposition 10.1]. Now, using the analogue of
(4.6) for $k^{-}$ implies $\dim_{\rho}(\partial_{\rho}H)\leq f(\beta,\lambda)$, and consequently $\dim_{\rho}(\partial_{\rho}H)=f(\beta,\lambda)$.
Remarks 6.4.
a) One can estimate the numbers $f(\beta,\lambda)$ numerically. For
instance, if $\beta=-1/2$ and $\lambda=-1/3$, then
$f(\beta,\lambda)\approx 1.65$.
b)
Inspecting (6.13), it is easy to
see that
$$\max\{1/(1+\beta/3+2\lambda/3),\log 2/((1+\lambda)\log 3)\}<f(\beta,\lambda)<1/(1+\min\{\beta,\lambda\})$$
for all
choices of $\beta$ and $\lambda$.
c) If above $\beta,\lambda>-1/2$, then it is not hard to see that
$\rho$ satisfies (5.2) and
thus is a conformal density.
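The value quoted in Remark 6.4 a) can be reproduced by maximising (6.13) over $t$ on a grid; a numerical sketch (the grid search is our own, the parameter values are those of the remark):

```python
from math import log

def f(beta, lam, steps=100000):
    """Grid-search the maximum of (6.13) over 0 < t < 1."""
    best = 0.0
    for i in range(1, steps):
        t = i / steps
        num = -t * log(t) + (t - 1) * log((1 - t) / 2)
        den = (1 + t * beta + (1 - t) * lam) * log(3)
        best = max(best, num / den)
    return best

val = f(-1/2, -1/3)
print(round(val, 2))  # 1.65, matching Remark 6.4 a)
```

For $\beta=-1/2$, $\lambda=-1/3$ the maximum is attained near $t\approx 0.41$, strictly above the value $18/11\approx 1.636$ of (6.13) at $t=1/3$, in line with Remark 6.4 b).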
We do not know if also
$\operatorname{Dim}_{\rho}(\partial_{\rho}H)\leq f(\beta,\lambda)$:
Question 6.5.
In Example 6.3, is it true that
$\operatorname{Dim}_{\rho}(\partial_{\rho}H)=f(\beta,\lambda)$?
We cannot use Theorem 5.1 to solve this question
since it can be shown that $\operatorname{Dim}_{d}(\{x\,:\,k^{-}(x)=\min\{\beta,\lambda\}\})=1$.
It is true that $\dim_{\rho}(\partial_{\rho}\mathbb{B}^{n})\geq n-1$ for all
conformal densities $\rho$ defined on $\mathbb{B}^{n}$. This deep
fact was proved in [1]. A straightforward estimate using
Theorem 5.1 and (5.1) only implies that
$\dim_{\rho}(\partial_{\rho}\mathbb{B}^{n})\geq c(n,c_{0})>0$, where $c_{0}$ is the
constant in (5.1). See also [2, Proposition 7.1]. Next we
provide an example of a density $\rho$ on the upper half-plane $H$ such that
$\operatorname{Dim}_{\rho}(\partial_{\rho}H)=0$ and $\dim_{d}(\mathbb{R}\setminus\partial_{\rho}H)=0$.
Example 6.6.
We construct a density with $\operatorname{Dim}_{\rho}(\partial_{\rho}H)=0$ and $\dim_{d}(\mathbb{R}\setminus\partial_{\rho}H)=0$.
Given an interval $I\subset\mathbb{R}$, let $T_{I}$ and $U_{I}$ be the
isosceles triangles with base $I$ and heights $|I|$ and $|I|/2$
respectively. Denote
$S_{I}=T_{I}\setminus U_{I}$.
To begin with, let $I_{1},I_{2},\ldots$ be disjoint intervals so that
$C=\mathbb{R}\setminus\cup I_{i}$ forms a Cantor set (a nowhere dense closed set
without isolated points). Moreover, we assume that
$\sum_{i}\operatorname{diam}_{d}(I_{i})\leq 1$. Let $\rho(x)=\exp(-1/d(x))$ if
$x\in H\setminus\cup_{i}T_{I_{i}}$. We define
$\rho$ on each strip $S_{I_{i}}$ so that
(6.14)
$$\ell_{\rho}(\gamma)\geq 1$$
for
any curve joining $U_{I_{i}}$ to $H\setminus T_{I_{i}}$.
We also require that $\rho$ extends continuously to the lower
boundary $\Gamma_{I_{i}}$ of $S_{I_{i}}$ (excluding the two endpoints of
$I_{i}$) and that
(6.15)
$$\ell_{\rho}(\gamma)=\infty$$
if $\gamma$ is a curve on $S_{I_{i}}$ whose one endpoint is an
endpoint of $I_{i}$. We remark that the condition (6.15)
as well as the condition (6.17) below, are only used to ensure
that the assumption (A2) is satisfied.
Now for
each $x,y\in C$ with $d(x,y)=d>0$, we have
$$d_{\rho}(x,y)\leq 2\int_{t=0}^{d}\exp(-1/t)\,dt+d\exp(-1/d)\leq 3\exp(-1/d).$$
Thus, for each $n\in\mathbb{N}$, there is $\delta>0$ such that
$d_{\rho}(x,y)\leq d(x,y)^{n}$ if $x,y\in C$ and $d(x,y)<\delta$. By Lemma
3.1, this implies $\operatorname{Dim}_{\rho}(C)=0$.
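The displayed estimate $d_{\rho}(x,y)\leq 3\exp(-1/d)$ can be checked numerically for sample values of $d\leq 1$; a crude midpoint-rule sketch (our own, purely illustrative):

```python
from math import exp

def rho_path_length(d, steps=100000):
    """2 * integral_0^d exp(-1/t) dt + d * exp(-1/d), by the midpoint rule.
    This is the bound on d_rho(x,y) used in the text for d(x,y) = d."""
    h = d / steps
    integral = sum(exp(-1 / ((i + 0.5) * h)) * h for i in range(steps))
    return 2 * integral + d * exp(-1 / d)

for d in (0.1, 0.5, 1.0):
    # exp(-1/t) <= exp(-1/d) on (0, d], so the total is at most 3d*exp(-1/d)
    assert rho_path_length(d) <= 3 * exp(-1 / d)
print("bound verified")
```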
We continue the construction inside the triangles $U_{I_{i}}$. We choose
intervals $I_{i,j}\subset I_{i}$ so that
$C_{i}=I_{i}\setminus\cup_{j}I_{i,j}$ is a Cantor set and
(6.16)
$$\sum_{i,j}\operatorname{diam}_{d}(I_{i,j})^{1/2}\leq 1.$$
We define
$\rho(x)=f_{i}(x)\exp(-1/d(x))$ on $U_{I_{i}}\setminus\cup_{j}T_{I_{i,j}}$
where $f_{i}(x)$ is a continuous weight that is bounded if $x$ is
bounded away from the endpoints of $I_{i}$. Close to the
endpoints of $I_{i}$, we make $f_{i}$ so large that
(6.17)
$$\ell_{\rho}(\gamma)=\infty$$
if $\gamma$ is a curve on
$U_{I_{i}}$ whose one endpoint is an endpoint of $I_{i}$.
Also, we define $\rho$ on the strips $S_{I_{i}}$ so that
analogues of (6.14) and (6.15) hold.
As above, we see that $\operatorname{Dim}_{\rho}(C_{i})=0$ for all $i$.
We continue the construction inductively inside the triangles
$U_{I_{i,j}}$. At the step $n$, we obtain Cantor sets $C_{n,i}$
with $\operatorname{Dim}_{\rho}(C_{n,i})=0$. At the end, $\partial_{\rho}H$ will
be the union of all these Cantor sets. Replacing the exponent $1/2$ in
(6.16) by $1/n$ at the step $n$ implies that
$\dim_{d}(\mathbb{R}\setminus\partial_{\rho}H)=0$.
It would be interesting to know if the analogue of
(5.5) for the packing dimension holds.
Question 6.7.
If $\rho$ is a conformal density on $\mathbb{B}^{n}$, does there exist a
set $A\subset S^{n-1}$ with $\operatorname{Dim}_{d}(A)=0$ such that
$\operatorname{Dim}_{\rho}(S^{n-1}\setminus A)\leq n$?
Acknowledgements. The first author was supported by the
Academy of Finland project #120972 and he wishes to thank professor
Pekka Koskela. The second author was supported by the Academy of
Finland project #126976.
References
[1]
Mario Bonk and Pekka Koskela.
Conformal metrics and size of the boundary.
Amer. J. Math., 124(6):1247–1287, 2002.
[2]
Mario Bonk, Pekka Koskela, and Steffen Rohde.
Conformal metrics on the unit ball in Euclidean space.
Proc. London Math. Soc., 77(3):635–664, 1998.
[3]
Colleen D. Cutler.
The density theorem and Hausdorff inequality for packing measure in
general metric spaces.
Illinois J. Math., 39(4):676–694, 1995.
[4]
Kenneth Falconer.
Techniques in fractal geometry.
John Wiley & Sons Ltd., Chichester, 1997.
[5]
John B. Garnett and Donald E. Marshall.
Harmonic measure, volume 2 of New Mathematical
Monographs.
Cambridge University Press, Cambridge, 2005.
[6]
F. W. Gehring and W. K. Hayman.
An inequality in the theory of conformal mapping.
J. Math. Pures Appl. (9), 41:353–361, 1962.
[7]
C. Kenig, D. Preiss, and T. Toro.
Boundary structure and size in terms of interior and exterior
harmonic measures in higher dimensions.
J. Amer. Math. Soc., 22(3):771–796, 2009.
[8]
N. G. Makarov.
On the distortion of boundary sets under conformal mappings.
Proc. London Math. Soc. (3), 51(2):369–384, 1985.
[9]
N. G. Makarov.
Conformal mapping and Hausdorff measures.
Ark. Mat., 25(1):41–89, 1987.
[10]
Pertti Mattila.
Geometry of Sets and Measures in Euclidean Spaces: Fractals and
rectifiability.
Cambridge University Press, Cambridge, 1995.
[11]
Tomi Nieminen.
Conformal metrics and boundary accessibility.
Illinois J. Math., 53(1):25–38, 2009.
[12]
Tomi Nieminen and Timo Tossavainen.
Conformal metrics on the unit ball: the Gehring-Hayman property
and the volume growth.
Conform. Geom. Dyn., 13:225–231, 2009.
[13]
Ch. Pommerenke.
Boundary behaviour of conformal maps, volume 299 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of
Mathematical Sciences].
Springer-Verlag, Berlin, 1992.
Parameterized Resolution with bounded conjunction
Stefan Dantchev and Barnaby Martin
Engineering and Computing Sciences, Durham University, U.K.
Supported by EPSRC grant EP/G020604/1.
Abstract
We provide separations between the parameterized versions of Res$(1)$ (Resolution) and Res$(2)$. Using a different set of parameterized contradictions, we also separate the parameterized versions of Res${}^{*}(1)$ (tree-Resolution) and Res${}^{*}(2)$.
1 Introduction
In a series of papers [8, 3, 4, 5] a program of parameterized proof complexity is initiated and various lower bounds and classifications are extracted. The program generally aims to gain evidence that $\textsc{W}[2]$ is different from FPT (though in the journal version [9] of [8] the former becomes W[SAT], and in the note [14] $\textsc{W}[1]$ is entertained).
Parameterized proof (in fact, refutation) systems aim at refuting parameterized contradictions which are pairs $(\mathcal{F},k)$ in which $\mathcal{F}$ is a propositional CNF with no satisfying assignment of weight $\leq k$. Several parameterized (hereafter often abbreviated as “p-”) proof systems are discussed in [8, 3, 5]. The lower bounds in [8, 3] and [5] amount to proving that the systems p-tree-Resolution, p-Resolution and p-bounded-depth Frege are not fpt-bounded. Indeed, this is witnessed by the Pigeonhole principle, and so holds even when one considers parameterized contradictions $(\mathcal{F},k)$ where $\mathcal{F}$ is itself an actual contradiction. Such parameterized contradictions are termed “strong” in [5], in which the authors suggest these are the only parameterized contradictions that should be considered, as general lower bounds – even in p-bounded-depth Frege – are trivial (see [5]). We sympathise with this outlook, but note that there are alternative parameterized proof systems built from embedding (see [8, 9]) for which no good lower bounds are known even for general parameterized contradictions.
Krajíček introduced the system Res$(j)$ of Resolution-with-bounded-conjunction in [13]. The tree-like variant of this system is normally denoted Res${}^{*}(j)$. Res$(j+1)$ incorporates Res$(j)$ and is ostensibly more powerful. This was demonstrated first for Res$(1)$ and Res$(2)$ in [2], where a quasi-polynomial separation was given. This was improved in [1], until an exponential separation was given in [16], together with like separations for Res$(j)$ and Res$(j+1)$, for $j>1$. Similar separations of Res${}^{*}(j)$ and Res${}^{*}(j+1)$ were given in [11]. We are motivated mainly by the simplified and improved bounds of [7], which use relativisations of the Least number principle, LNP${}_{n}$ and an ordered variant thereof, the Induction principle, IP${}_{n}$. The contradiction LNP${}_{n}$ asserts that a partial $n$-order has no minimal element. In the literature it enjoys a myriad of alternative names: the Graph Ordering Principle GOP, Ordering Principle OP and Minimal Element Principle MEP. Where the order is total it is also known as TLNP and GT. The contradiction IP${}_{n}$ uses the built-in order of $\{1,\ldots,n\}$ and asserts that: $1$ has property $P$, $n$ fails to have property $P$, and any number having property $P$ entails a larger number also having property $P$. Relativisation of these involves asserting that everything holds only on some non-empty subset of the domain (in the case of IP${}_{n}$ we force $1$ and $n$ to be in this relativising subset).
In the world of parameterized proof complexity, we already have lower bounds for $\mathrm{p\mbox{-}Res}(j)$ (as we have for p-bounded-depth Frege), but we are still interested in separating levels $\mathrm{p\mbox{-}Res}(j)$. We are again able to use the relativised least number principle, RLNP${}_{n}$ to separate p-Res$(1)$ and p-Res$(2)$. Specifically, we prove that $(\mathrm{RLNP}_{n},k)$ admits a Res$(2)$ refutation of size polynomial in $n$, but all p-Res$(1)$ refutations of $(\mathrm{RLNP}_{n},k)$ are of size $\geq n^{\sqrt{k/4}}$. Although we use the same principle as [7], the proof given there does not adapt to the parameterized world, and instead we look for inspiration to the proof given in [5] for the Pigeonhole principle. For tree-Resolution, the situation is more complicated. The Relativised induction principle RIP${}_{n}$ of IP${}_{n}$ admits fpt-bounded proofs in Res${}^{*}(1)$, indeed of size $O(k!)$, therefore we are forced to alter this principle. Thus we come up with the Relativised vectorised induction principle RVIP${}_{n}$. We are able to show that $(\mathrm{RVIP}_{n},k)$ admits $O(n^{4})$ refutations in Res${}^{*}(2)$, while every refutation in Res${}^{*}(1)$ is of size $\geq n^{k/16}$. Note that both of our parameterized contradictions are “strong”, in the sense of [5]. We go on to give extended versions of RVIP${}_{n}$ and explain how they separate p-Res${}^{*}(j)$ from p-Res${}^{*}(j+1)$, for $j>1$.
This paper is organised as follows. After the preliminaries, we give our separations of p-Res${}^{*}(j)$ from p-Res${}^{*}(j+1)$ in Section 3 and our separation of p-Res$(1)$ from p-Res$(2)$ in Section 4. We then conclude with some remarks and open questions.
2 Preliminaries
A parameterized language is a language $L\subseteq\Sigma^{*}\times\mathbb{N}$; in an instance $(x,k)\in L$, we refer to $k$ as the parameter. A parameterized language is fixed-parameter tractable (fpt - and in FPT) if membership in $L$ can be decided in time $f(k).|x|^{O(1)}$ for some computable function $f$. If FPT is the parameterized analog of P, then (at least) an infinite chain of classes vie for the honour to be the analog of NP. The so-called W-hierarchy sits thus: $\textsc{FPT}\subseteq\textsc{W}[1]\subseteq\textsc{W}[2]\subseteq\ldots\subseteq\textsc{W[SAT]}$. For more on parameterized complexity and its theory of completeness, we refer the reader to the monographs [10, 12]. Recall that the weight of an assignment to a propositional formula is the number of variables evaluated to true. Of particular importance to us is the parameterized problem Bounded-CNF-Sat whose input is $(\mathcal{F},k)$ where $\mathcal{F}$ is a formula in CNF and whose yes-instances are those for which there is a satisfying assignment of weight $\leq k$. Bounded-CNF-Sat is complete for the class $\textsc{W}[2]$, and its complement (modulo instances that are well-formed formulae) PCon is complete for the class co-$\textsc{W}[2]$. Thus, PCon is the language of parameterized contradictions, $(\mathcal{F},k)$ s.t. $\mathcal{F}$ is a CNF which has no satisfying assignment of weight $\leq k$.
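Membership in PCon can be brute-forced on small instances by checking that no assignment of weight $\leq k$ satisfies $\mathcal{F}$. The toy CNF below (our own illustration) is satisfiable but only with weight $3$, so $(\mathcal{F},2)\in\mathrm{PCon}$ even though $\mathcal{F}$ is not a “strong” parameterized contradiction:

```python
from itertools import combinations

def sat_weight_at_most(clauses, nvars, k):
    """True iff some assignment of weight <= k satisfies the CNF.
    Clauses are sets of nonzero ints: v for x_v, -v for its negation."""
    for w in range(k + 1):
        for true_vars in combinations(range(1, nvars + 1), w):
            tv = set(true_vars)
            if all(any((l > 0 and l in tv) or (l < 0 and -l not in tv)
                       for l in cl) for cl in clauses):
                return True
    return False

# F = {x1} and {x2} and {x3}: satisfiable, but its minimum weight is 3
F = [{1}, {2}, {3}]
print(sat_weight_at_most(F, 3, 2))  # False: (F, 2) is a parameterized contradiction
print(sat_weight_at_most(F, 3, 3))  # True: F itself is not a contradiction
```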
A proof system for a parameterized language $L\subseteq\Sigma^{*}\times\mathbb{N}$ is a poly-time computable function $P:\Sigma^{*}\rightarrow\Sigma^{*}\times\mathbb{N}$ s.t. $\mathrm{range}(P)=L$. $P$ is fpt-bounded if there exists a computable function $f$ so that each $(x,k)\in L$ has a proof of size at most $f(k).|x|^{O(1)}$.
These definitions come from [3, 4, 5] and are slightly different from those in [8, 9] (they are less unwieldy and have essentially the same properties). The program of parameterized proof complexity is an analog of that of Cook-Reckow [6], in which one seeks to prove results of the form $\textsc{W}[2]\neq$co-$\textsc{W}[2]$ by proving that parameterized proof systems are not fpt-bounded. This comes from the observation that there is an fpt-bounded parameterized proof system for a co-$\textsc{W}[2]$-complete $L$ iff $\textsc{W}[2]=$co-$\textsc{W}[2]$.
Resolution is a refutation system for sets of clauses (formulae in CNF) $\Sigma$. It operates on clauses by the resolution rule, in which from $(P\vee x)$ and $(Q\vee\neg x)$ one can derive $(P\vee Q)$ ($P$ and $Q$ are disjunctions of literals), with the goal being to derive the empty clause. The only other permitted rule is weakening – from $P$ one may derive $P\vee l$ for a literal $l$. We may consider a Resolution refutation to be a DAG whose sources are labelled by initial clauses, whose unique sink is labelled by the empty clause, and whose internal nodes are labelled by derived clauses. As we are not interested in polynomial factors, we will consider the size of a Resolution refutation to be the size of this DAG. Further, we will measure the size of the DAG in terms of the number of variables in the clauses to be resolved – we will never consider CNFs with number of clauses superpolynomial in the number of variables.
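The two rules can be sketched on a toy clause encoding (our own, with literals as signed integers), together with a three-step refutation:

```python
def resolve(P, Q, x):
    """Resolution rule: from P containing x and Q containing -x, derive
    (P \ {x}) union (Q \ {-x})."""
    assert x in P and -x in Q
    return (P - {x}) | (Q - {-x})

def weaken(P, l):
    """Weakening rule: from P derive P or l."""
    return P | {l}

# Refute the clause set {x1}, {-x1 v x2}, {-x2}:
c = resolve(frozenset({1}), frozenset({-1, 2}), 1)   # derives {x2}
empty = resolve(c, frozenset({-2}), 2)               # derives the empty clause
print(sorted(c), sorted(empty))  # [2] []
```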
We define the restriction of Resolution, tree-Resolution, in which we insist the DAG be a tree.
The system of parameterized Resolution [8] seeks to refute the parameterized contradictions of PCon. Given $(\mathcal{F},k)$, where $\mathcal{F}$ is a CNF in variables $x_{1},\ldots,x_{n}$, it does this by providing a Resolution refutation of
$$\mathcal{F}\cup\{\neg x_{i_{1}}\vee\ldots\vee\neg x_{i_{k+1}}:1\leq i_{1}<\ldots<i_{k+1}\leq n\}.$$
(1)
Thus, in parameterized Resolution we have built-in access to these additional clauses of the form $\neg x_{i_{1}}\vee\ldots\vee\neg x_{i_{k+1}}$, but we only count those that appear in the refutation.
A $j$-clause is an arbitrary disjunction of conjunctions of size at most $j$. Res$(j)$ is a system to refute a set of $j$-clauses. There are four derivation rules. The $\wedge$-introduction rule allows one to derive from $P\vee\bigwedge_{i\in I_{1}}l_{i}$ and $Q\vee\bigwedge_{i\in I_{2}}l_{i}$, $P\vee Q\vee\bigwedge_{i\in I_{1}\cup I_{2}}l_{i}$, provided $|I_{1}\cup I_{2}|\leq j$ ($P$ and $Q$ are $j$-clauses). The cut (or resolution) rule allows one to derive from $P\vee\bigvee_{i\in I}l_{i}$ and $Q\vee\bigwedge_{i\in I}\neg l_{i}$, $P\vee Q$. Finally, the two weakening rules allow the derivation of $P\vee\bigwedge_{i\in I}l_{i}$ from $P$, provided $|I|\leq j$, and $P\vee\bigwedge_{i\in I_{1}}l_{i}$ from $P\vee\bigwedge_{i\in I_{1}\cup I_{2}}l_{i}$.
If we turn a Res$(j)$ refutation of a given set of $j$-clauses $\Sigma$ upside-down, i.e. reverse the edges of the underlying graph and negate the $j$-clauses on the vertices, we get a special kind of restricted branching $j$-program. The restrictions are
as follows.
Each vertex is labelled by a $j$-CNF which partially represents the
information
that can be obtained along any path from the source to the vertex (this is a record in the parlance of [15]).
Obviously, the (only) source is labelled with the constant $\top$.
There are two kinds of queries, which can be made by a vertex:
1.
Querying a new $j$-disjunction, and branching on the answer: that is, from $\mathcal{C}$ and the question $\bigvee_{i\in I}l_{i}?$ we split on $\mathcal{C}\wedge\bigvee_{i\in I}l_{i}$ and $\mathcal{C}\wedge\bigwedge_{i\in I}\neg l_{i}$.
2.
Querying a known $j$-disjunction, and splitting it according to
the answer: that is, from $\mathcal{C}\wedge\bigvee_{i\in I_{1}\cup I_{2}}l_{i}$ and the question $\bigvee_{i\in I_{1}}l_{i}?$ we split on $\mathcal{C}\wedge\bigvee_{i\in I_{1}}l_{i}$ and $\mathcal{C}\wedge\bigvee_{i\in I_{2}}l_{i}$.
There are two ways of forgetting information. From $\mathcal{C}_{1}\cup\mathcal{C}_{2}$ we can move to $\mathcal{C}_{1}$. And from $\mathcal{C}\wedge\bigvee_{i\in I_{1}}l_{i}$ we can move to $\mathcal{C}\wedge\bigvee_{i\in I_{1}\cup I_{2}}l_{i}$. The point is that forgetting allows us to equate the information obtained along two different branches and thus to merge them into a single new vertex. A sink of the branching $j$-program must be labelled with the negation of a $j$-clause from $\Sigma$. Thus the branching $j$-program is supposed by default to solve the Search problem for $\Sigma$: given an assignment of the variables, find a clause which is falsified under this assignment.
The equivalence between a Res$(j)$ refutation of $\Sigma$ and a branching $j$-program of the kind above is obvious. Naturally, if we allow querying single variables only, we get branching $1$-programs – decision DAGs – that correspond to Resolution. If we do not
allow the forgetting of information, we will not be able to merge distinct
branches, so what we get is a class of decision trees that correspond
precisely to the tree-like version of these refutation systems. These decision DAGs permit the view of Resolution as a game between a Prover and Adversary (originally due to Pudlák [15]). Playing from the unique source, Prover questions variables and Adversary answers either that the variable is true or false (different plays of Adversary produce the DAG). Internal nodes are labelled by conjunctions of facts (records, in Pudlák's parlance) and the sinks hold conjunctions that contradict an initial clause. Prover may also choose to forget information at any point – this is the reason we have a DAG and not a tree. Of course, Prover is destined to win any play of the game – but a good Adversary strategy can force that the size of the decision DAG is large, and many Resolution lower bounds have been expounded this way.
We may consider any refutation system as a parameterized refutation system, by the addition of the clauses given in (1). In particular, parameterized Res$(j)$ – p-Res$(j)$ – will play a part in the sequel.
3 Separating p-Res${}^{*}(j)$ and p-Res${}^{*}(j+1)$
The Induction Principle $\mathrm{IP}_{n}$ (see [7]) is given by the following clauses:
$$\begin{array}[]{cl}P_{1},\neg P_{n}\\
\bigvee_{j>i}S_{i,j}&i\in[n-1]\\
\neg S_{i,j}\vee\neg P_{i}\vee P_{j}&i\in[n-1],j\in[n]\end{array}$$
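That $\mathrm{IP}_{n}$ is indeed a contradiction (the successor chain propagates $P$ from $1$ up to $n$) can be brute-forced for small $n$; a sketch with our own ad hoc variable encoding:

```python
from itertools import product

def ip_clauses(n):
    """Clauses of IP_n. P_i is encoded as ('P', i), S_{i,j} as ('S', i, j);
    a clause is a set of (sign, variable) pairs with sign +1 or -1."""
    cls = [{(+1, ('P', 1))}, {(-1, ('P', n))}]
    for i in range(1, n):
        cls.append({(+1, ('S', i, j)) for j in range(i + 1, n + 1)})
        for j in range(1, n + 1):
            cls.append({(-1, ('S', i, j)), (-1, ('P', i)), (+1, ('P', j))})
    return cls

def unsat(clauses):
    """Exhaustively check that no total assignment satisfies all clauses."""
    vs = sorted({v for cl in clauses for _, v in cl})
    for bits in product([False, True], repeat=len(vs)):
        asg = dict(zip(vs, bits))
        if all(any((s > 0) == asg[v] for s, v in cl) for cl in clauses):
            return False
    return True

print(unsat(ip_clauses(3)))  # True: IP_3 is a contradiction
```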
The Relativised Induction Principle $\mathrm{RIP}_{n}$ (see [7]) is similar, and is given as follows.
$$\begin{array}[]{cl}R_{1},P_{1},R_{n},\neg P_{n}\\
\bigvee_{j>i}S_{i,j}&i\in[n-1]\\
\neg S_{i,j}\vee\neg R_{i}\vee\neg P_{i}\vee R_{j}&i\in[n-1],j\in[n]\\
\neg S_{i,j}\vee\neg R_{i}\vee\neg P_{i}\vee P_{j}&i\in[n-1],j\in[n]\\
\end{array}$$
The important properties of $\mathrm{IP}_{n}$ and $\mathrm{RIP}_{n}$, from the perspective of [7], are as follows. $\mathrm{IP}_{n}$ admits refutation in $\mathrm{Res}^{*}(1)$ in polynomial size, as does $\mathrm{RIP}_{n}$ in $\mathrm{Res}^{*}(2)$. But all refutations of $\mathrm{RIP}_{n}$ in $\mathrm{Res}^{*}(1)$ are of exponential size. In the parameterized world things are not quite so well-behaved. Both $\mathrm{IP}_{n}$ and $\mathrm{RIP}_{n}$ admit refutations of size, say, $\leq 4k!$ in $\mathrm{p\mbox{-}Res}^{*}(1)$; just evaluate variables $S_{i,j}$ from $i:=n-1$ downwards. Clearly this is an fpt-bounded refutation. We are forced to consider something more elaborate, and thus we introduce the Relativised Vectorised Induction Principle $\mathrm{RVIP}_{n}$.
$$\begin{array}[]{cl}R_{1},P_{1,1},R_{n},\neg P_{n,j}&j\in[n]\\
\bigvee_{l>i,m\in[n]}S_{i,j,l,m}&i,j\in[n]\\
\neg S_{i,j,l,m}\vee\neg R_{i}\vee\neg P_{i,j}\vee R_{l}&i\in[n-1],j,l,m\in[n]\\
\neg S_{i,j,l,m}\vee\neg R_{i}\vee\neg P_{i,j}\vee P_{l,m}&i\in[n-1],j,l,m\in[n]\\
\end{array}$$
3.1 Lower bound: A strategy for Adversary over $\mathrm{RVIP}_{n}$
We will give a strategy for Adversary in the game representation of a $\mathrm{Res}^{*}(1)$ refutation. For convenience, we will assume that Prover never questions the same variable twice.
Information conceded by Adversary of the form $R_{i},\neg R_{i},P_{i,j}$ and $S_{i,j,l,m}$ makes the element $i$ busy ($\neg P_{i,j}$ and $\neg S_{i,j,l,m}$ do not).
The source is the largest element $i$ for which there is a $j$ such that Adversary has conceded $R_{i}\wedge P_{i,j}$. Initially, the source is $1$. Adversary always answers $R_{1},P_{1,1},$ $R_{n},\neg P_{n,j}$ (for $j\in[n]$), according to the axioms.
If $i$ is below the source and Adversary is asked $R_{i}$, $P_{i,j}$ or $S_{i,j,l,m}$, then he answers $\bot$.
If $i$ is above the source and Adversary is asked $R_{i}$ or $P_{i,j}$, then he gives Prover a free choice unless: 1.) $R_{i}$ is asked when some $P_{i,j}$ was previously answered $\top$ (in this case $R_{i}$ should be answered $\bot$); or 2.) some $P_{i,j}$ is asked when $R_{i}$ was previously answered $\top$ (in this case $P_{i,j}$ should be answered $\bot$). When Adversary is asked $S_{i,j,l,m}$, then again he offers Prover a free choice. If Prover chooses $\top$ then Adversary sets $P_{i,j}$ to $\bot$.
Suppose $i$ is the source. Then Adversary answers $P_{i,j}$ and $S_{i,j,l,m}$ as $\bot$, unless $R_{i}\wedge P_{i,j}$ witnesses the source. If $R_{i}\wedge P_{i,j}$ witnesses the source, then, if $l$ is not the next non-busy element above $i$, answer $S_{i,j,l,m}$ as $\bot$. If $l$ is the next non-busy element above $i$, then give $S_{i,j,l,m}$ a free choice, unless $\neg P_{l,m}$ is already conceded by Adversary, in which case answer $\bot$.
Using this strategy, Adversary cannot be caught lying until either he has conceded that $k$ variables are true, or he has given Prover at least $n-k$ free choices.
Let $T(p,q)$ be some monotone decreasing function that bounds the size of the game tree from the point at which Prover has answered $p$ free choices $\top$ and $q$ free choices $\bot$. We can see that $T(p,q)\geq T(p+a,q)+T(p,q+a)+1$ and $T(k,n-k)\geq 0$. The following solution to this recurrence can be found in [9].
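For the case $a=1$ with boundary conditions $T(k,q)=T(p,n-k)=0$, the least solution of the recurrence is a binomial coefficient; a memoized sketch (our own illustration of the recurrence, not the computation quoted from [9]):

```python
from functools import lru_cache
from math import comb

def game_tree_bound(n, k):
    """Least T satisfying T(p,q) = T(p+1,q) + T(p,q+1) + 1, with
    T(p,q) = 0 once p >= k or q >= n-k (free choices answered top/bottom)."""
    @lru_cache(maxsize=None)
    def T(p, q):
        if p >= k or q >= n - k:
            return 0
        return T(p + 1, q) + T(p, q + 1) + 1
    return T(0, 0)

# Closed form: T(0,0) = C(n, k) - 1, counting lattice paths of free choices.
print(game_tree_bound(10, 3), comb(10, 3) - 1)  # 119 119
```

Since $\binom{n}{k}\geq(n/k)^{k}$, this is consistent with lower bounds of the shape $n^{\Theta(k)}$ once enough free choices are forced.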
Corollary 1.
Every $\mathrm{p\mbox{-}Res}^{*}(1)$ refutation of $\mathrm{RVIP}_{n}$ is of size $\geq n^{k/16}$.
We may increase the number of relativising predicates to define $\mathrm{RVIP}^{r}_{n}$ (note $\mathrm{RVIP}^{1}_{n}=\mathrm{RVIP}_{n}$).
$$\begin{array}[]{cl}R^{1}_{1},\ldots,R^{r}_{1},P_{1,1},R^{1}_{n},\ldots,R^{r}_{n},\neg P_{n,j}&j\in[n]\\
\bigvee_{l>i,m\in[n]}S_{i,j,l,m}&i,j\in[n]\\
\neg S_{i,j,l,m}\vee\neg R^{1}_{i}\vee\ldots\vee\neg R^{r}_{i}\vee\neg P_{i,j}\vee R^{1}_{l}&i\in[n-1],j,l,m\in[n]\\
\vdots\\
\neg S_{i,j,l,m}\vee\neg R^{1}_{i}\vee\ldots\vee\neg R^{r}_{i}\vee\neg P_{i,j}\vee R^{r}_{l}&i\in[n-1],j,l,m\in[n]\\
\neg S_{i,j,l,m}\vee\neg R^{1}_{i}\vee\ldots\vee\neg R^{r}_{i}\vee\neg P_{i,j}\vee P_{l,m}&i\in[n-1],j,l,m\in[n]\\
\end{array}$$
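For concreteness, the clause set above can be generated mechanically. The sketch below (encoding and function name ours) emits the clauses of $\mathrm{RVIP}^{r}_{n}$ as lists of signed literals; we take the successor axiom over $i\in[n-1]$, since the disjunction $\bigvee_{l>i,m\in[n]}S_{i,j,l,m}$ would be empty at $i=n$.

```python
from itertools import product

def rvip_clauses(n, r=1):
    """Clauses of RVIP^r_n as lists of signed literals.

    A literal is a tuple (sign, name, indices), sign True = positive.
    Variable names follow the text; the encoding itself is ours.
    """
    R = lambda a, i: (True, f"R{a}", (i,))
    P = lambda i, j: (True, "P", (i, j))
    S = lambda i, j, l, m: (True, "S", (i, j, l, m))
    neg = lambda lit: (not lit[0], lit[1], lit[2])

    cls = []
    # Unit axioms: R^a_1, P_{1,1}, R^a_n and the negative units ~P_{n,j}.
    for a in range(1, r + 1):
        cls += [[R(a, 1)], [R(a, n)]]
    cls.append([P(1, 1)])
    for j in range(1, n + 1):
        cls.append([neg(P(n, j))])
    # Successor existence: \/_{l>i, m in [n]} S_{i,j,l,m}.
    for i in range(1, n):
        for j in range(1, n + 1):
            cls.append([S(i, j, l, m)
                        for l in range(i + 1, n + 1)
                        for m in range(1, n + 1)])
    # Transfer axioms: r clauses ending in R^a_l plus one ending in P_{l,m}.
    for i in range(1, n):
        for j, l, m in product(range(1, n + 1), repeat=3):
            base = [neg(S(i, j, l, m))] + \
                   [neg(R(a, i)) for a in range(1, r + 1)] + [neg(P(i, j))]
            for a in range(1, r + 1):
                cls.append(base + [R(a, l)])
            cls.append(base + [P(l, m)])
    return cls
```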
We show how to adapt the previous argument in order to demonstrate the following.
Corollary 2.
Every $\mathrm{p\mbox{-}Res}^{*}(j)$ refutation of $\mathrm{RVIP}^{j}_{n}$ is of size $\geq n^{k/16}$.
We use essentially the same Adversary strategy in a branching $j$-program. We answer questions $l_{1}\vee\ldots\vee l_{j}$ as either forced or free exactly according to the disjunction of how we would have answered the corresponding $l_{i}$s, $i\in[j]$, before (if one $l_{i}$ is free, then the disjunction is also free).
The key point is that once some positive disjunction involving some subset of $R^{1}_{i},\ldots,R^{r}_{i}$ or $P_{i,j}$ (never all of these together, of course), is questioned then, on a positive answer to this, the remaining unquestioned variables of this form should be set to $\bot$.
3.2 Upper bound: a $\mathrm{Res}^{*}(j+1)$ refutation of $\mathrm{RVIP}^{j}_{n}$
Look at the simpler, but very similar, refutation of $\mathrm{RIP}_{n}$ in $\mathrm{Res}^{*}(2)$, of size $O(n^{2})$, as appears in Figure 1.
Proposition 1.
There is a refutation of $\mathrm{RVIP}^{j}_{n}$ in $\mathrm{Res}^{*}(j+1)$, of size $O(n^{j+3})$.
Proof.
We give the branching program for $j:=1$ – the generalisation is clear.
$$\xymatrix{\neg R_{n}\vee\neg P_{n,n}?\ar[d]_{\top}\ar[r]^{\bot}&\#&\\
\vdots\ar[d]_{\top}\\
\neg R_{n}\vee\neg P_{n,1}?\ar[d]_{\top}\ar[r]^{\bot}&\#&\\
\neg R_{n-1}\vee\neg P_{n-1,n}?\ar[d]_{\top}\ar[r]^{\bot}&S_{n-1,n}?\ar[d]_{\top}\ar[r]^{\bot}&\#\\
\vdots\ar[d]_{\top}&\#&\\
\neg R_{n-1}\vee\neg P_{n-1,1}?\ar[d]_{\top}\ar[r]^{\bot}&S_{n-1,n}?\ar[d]_{\top}\ar[r]^{\bot}&\#\\
\vdots\ar[d]_{\top}&\#\\
\neg R_{1}\vee\neg P_{1,n}?\ar[d]_{\top}\ar[r]^{\bot}&S_{1,n}?\ar[d]_{\top}\ar[r]^{\bot}&\cdots\ar[r]^{\bot}&S_{1,2}?\ar[d]_{\top}\ar[r]^{\bot}&\#\\
\vdots\ar[d]_{\top}&\#&&\#\\
\neg R_{1}\vee\neg P_{1,1}?\ar[d]_{\top}\ar[r]^{\bot}&S_{1,n}?\ar[d]_{\top}\ar[r]^{\bot}&\cdots\ar[r]^{\bot}&S_{1,2}?\ar[d]_{\top}\ar[r]^{\bot}&\#\\
\#&\#&&\#\\
}$$
∎
4 Separating p-Res$(1)$ and p-Res$(2)$
The Relativized Least Number Principle $\mathrm{RLNP}_{n}$ is given by the following clauses:
$$\begin{array}[]{cl}\neg R_{i}\vee\neg L_{i,i}&i\in[n]\\
\neg R_{i}\vee\neg R_{j}\vee\neg R_{k}\vee\neg L_{i,j}\vee\neg L_{j,k}\vee L_{i,k}&i,j,k\in[n]\\
\bigvee_{i\in[n]}S_{i,j}&j\in[n]\\
\neg S_{i,j}\vee\neg R_{j}\vee R_{i}&i,j\in[n]\\
\neg S_{i,j}\vee\neg R_{j}\vee\neg L_{i,j}&i,j\in[n]\\
R_{n}\end{array}$$
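As with $\mathrm{RVIP}_{n}$, the clause families above can be enumerated directly. The sketch below (encoding and function name ours) produces the clauses of $\mathrm{RLNP}_{n}$ as lists of signed literals.

```python
from itertools import product

def rlnp_clauses(n):
    """Clauses of RLNP_n as lists of signed literals (encoding ours)."""
    neg = lambda lit: (not lit[0], lit[1], lit[2])
    R = lambda i: (True, 'R', (i,))
    L = lambda i, j: (True, 'L', (i, j))
    S = lambda i, j: (True, 'S', (i, j))

    # ~R_i v ~L_{i,i}: the order is irreflexive on R-elements.
    cls = [[neg(R(i)), neg(L(i, i))] for i in range(1, n + 1)]
    # Transitivity among R-elements.
    for i, j, k in product(range(1, n + 1), repeat=3):
        cls.append([neg(R(i)), neg(R(j)), neg(R(k)),
                    neg(L(i, j)), neg(L(j, k)), L(i, k)])
    # Every j has some S-witness i.
    for j in range(1, n + 1):
        cls.append([S(i, j) for i in range(1, n + 1)])
    # S-witnesses are R-elements below j.
    for i, j in product(range(1, n + 1), repeat=2):
        cls.append([neg(S(i, j)), neg(R(j)), R(i)])
        cls.append([neg(S(i, j)), neg(R(j)), neg(L(i, j))])
    cls.append([R(n)])
    return cls
```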
The salient properties of $\mathrm{RLNP}_{n}$ are that it is polynomial to refute in $\mathrm{Res}(2)$, but exponential in $\mathrm{Res}(1)$ (see [7]). Polynomiality clearly transfers to fpt-boundedness in $\mathrm{p\mbox{-}Res}(2)$, so we address the lower bound for $\mathrm{p\mbox{-}Res}(1)$.
4.1 Lower bound: A strategy for Adversary over $\mathrm{RLNP}_{n}$
We will give a strategy for Adversary in the game representation of a $\mathrm{p\mbox{-}Res}(1)$ refutation. The argument used in [7] does not adapt to the parameterized case, so we instead use a technique developed for the Pigeonhole principle by Razborov in [5].
Recall that a parameterized clause is of the form $\neg v_{1}\vee\ldots\vee\neg v_{k+1}$ (where each $v_{i}$ is some $R$, $L$ or $S$ variable). The $i,j$ appearing in $R_{i}$, $L_{i,j}$ and $S_{i,j}$ are termed co-ordinates. We define the following random restrictions. Set $R_{n}:=\top$. Randomly choose $i_{0}\in[n-1]$ and set $R_{i_{0}}:=\top$ and $L_{i_{0},n}=S_{i_{0},n}:=\top$. Randomly choose $n-\sqrt{n}$ elements from $[n-1]\setminus\{i_{0}\}$, and call this set $\mathcal{C}$. Set $R_{i}:=\bot$ for $i\in\mathcal{C}$. Pick a random bijection $\pi$ on $\mathcal{C}$ and set $L_{i,j}$ and $S_{i,j}$, for $i,j\in\mathcal{C}$, according to whether $\pi(j)=i$. Set $L_{i,j}=L_{j,i}:=\bot$, if $j\in\mathcal{C}$ and $i\in[n]\setminus(\mathcal{C}\cup\{i_{0}\})$.
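The restriction just described can be sampled directly. The sketch below (variable encoding ours; we use `round(n ** 0.5)` as a stand-in for $\sqrt{n}$) returns a partial assignment as a dictionary; variables left unset by the restriction are simply absent.

```python
import random

def random_restriction(n):
    """Sample the random restriction described in the text.

    Keys are tuples like ('R', i), ('L', i, j), ('S', i, j); the
    encoding is ours, not the paper's.
    """
    rho = {('R', n): True}
    i0 = random.randrange(1, n)                  # i0 drawn from [n-1]
    rho[('R', i0)] = True
    rho[('L', i0, n)] = rho[('S', i0, n)] = True
    rest = [i for i in range(1, n) if i != i0]
    root = round(n ** 0.5)                       # stand-in for sqrt(n)
    C = random.sample(rest, n - root)            # n - sqrt(n) elements
    for i in C:
        rho[('R', i)] = False
    pi = C[:]
    random.shuffle(pi)                           # random bijection on C
    for i, j in zip(pi, C):                      # pi(j) = i
        for a in C:
            rho[('L', a, j)] = (a == i)
            rho[('S', a, j)] = (a == i)
    for j in C:                                  # kill L between C and outside
        for i in range(1, n + 1):
            if i not in C and i != i0:
                rho[('L', i, j)] = rho[('L', j, i)] = False
    return rho
```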
What is the probability that a parameterized clause is not evaluated to true by the random assignment? We allow that each of $\neg R_{n}$, $\neg R_{i_{0}}$, $\neg L_{i_{0},n}$ and $\neg S_{i_{0},n}$ appears in the clause – leaving $k+1-4=k-3$ literals, within which there must appear $\sqrt{(k-3)/4}$ distinct co-ordinates. The probability that some $\neg R_{i}$ is not true is $\leq\frac{\sqrt{n}}{n-\sqrt{n}}\leq\frac{2}{\sqrt{n}}$. The probability that some $\neg L_{i,j}$ is not true, where one of the co-ordinates $i,j$ is possibly mentioned before, is $\leq\frac{1}{\sqrt{n}}\cdot\frac{1}{n-\sqrt{n}}\cdot\frac{n-\sqrt{n}}{n}\leq\frac{2}{\sqrt{n}}$. Likewise with $\neg S_{i,j}$. Thus the probability that a parameterized clause is not evaluated to true by the random assignment is $\leq\left(\frac{2}{\sqrt{n}}\right)^{\sqrt{(k-3)/4}}\leq(n/4)^{-\sqrt{k-3}}\leq n^{-\sqrt{k/4}}$.
Now we are ready to complete the proof. Suppose fewer than $n^{\sqrt{k/4}}$ parameterized clauses appear in a $\mathrm{p\mbox{-}Res}(1)$ refutation of $\mathrm{RLNP}_{n}$; then there is a random restriction as per the previous paragraph that evaluates all of these clauses to true. What remains is a $\mathrm{Res}(1)$ refutation of $\mathrm{RLNP}_{\sqrt{n}}$, which must itself be of size larger than $n^{\sqrt{k/4}}$, for $n$ sufficiently large (see [7]). Thus we have proved:
Theorem 1.
Every $\mathrm{p\mbox{-}Res}(1)$ refutation of $\mathrm{RLNP}_{n}$ is of size $\geq n^{\sqrt{k/4}}$.
5 Concluding remarks
It is most natural when looking for separators of $\mathrm{p\mbox{-}Res}^{*}(1)$ and $\mathrm{p\mbox{-}Res}^{*}(2)$ to look for CNFs, like the $\mathrm{RVIP}_{n}$ that we have given. $\mathrm{p\mbox{-}Res}^{*}(2)$ is naturally able to process $2$-clauses, and we may consider $\mathrm{p\mbox{-}Res}^{*}(1)$ acting on $2$-clauses by thinking of it as using any of the clauses obtained from those $2$-clauses by distributivity. In this manner, we offer the following principle as being fpt-bounded for $\mathrm{p\mbox{-}Res}^{*}(2)$ but not fpt-bounded for $\mathrm{p\mbox{-}Res}^{*}(1)$. Consider the two axioms $\forall x\,(\exists y(\neg S(x,y)\wedge T(x,y))\vee P(x))$ and $\forall x,y\,(T(x,y)\rightarrow S(x,y))$. This generates the following system $\Sigma_{PST}$ of $2$-clauses.
$$\begin{array}[]{cl}P_{i}\vee\bigvee_{j\in[n]}(\neg S_{i,j}\wedge T_{i,j})&i\in[n]\\
\neg T_{i,j}\vee S_{i,j}&i,j\in[n]\end{array}$$
Note that the expansion of $\Sigma_{PST}$ to CNF makes it exponentially larger. It is not hard to see that $\Sigma_{PST}$ has refutations in $\mathrm{p\mbox{-}Res}^{*}(2)$ of size $O(kn)$, while any refutation in $\mathrm{p\mbox{-}Res}^{*}(1)$ will be of size $\geq n^{k/2}$.
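The exponential blow-up of the CNF expansion can be seen concretely: distributing $\bigvee_{j\in[n]}(\neg S_{i,j}\wedge T_{i,j})$ over the disjunction yields one clause per choice of a literal from each conjunct. The sketch below (literal encoding ours) enumerates that expansion for a single $i$.

```python
from itertools import product

def cnf_expansion(n, i=1):
    """CNF of the 2-clause P_i v OR_{j in [n]} (~S_{i,j} & T_{i,j})
    by distributivity: pick one literal from each of the n conjuncts,
    giving 2^n clauses of n + 1 literals each (encoding ours)."""
    conjuncts = [((False, 'S', i, j), (True, 'T', i, j))
                 for j in range(1, n + 1)]
    return [[(True, 'P', i)] + list(pick) for pick in product(*conjuncts)]
```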
All of our upper bounds, i.e. for both $\mathrm{RVIP}_{n}$ and $\mathrm{RLNP}_{n}$, are in fact polynomial, and do not depend on $k$. That is, they are rather more than fpt-bounded. If we want examples that depend also on $k$ then we may enforce this easily enough, as follows. For a set of clauses $\Sigma$, build a set of clauses $\Sigma^{\prime}_{k}$ with new propositional variables $A$ and $B_{1},B^{\prime}_{1},\ldots,B_{k+1},B^{\prime}_{k+1}$. From each clause $\mathcal{C}\in\Sigma$, generate the clause $A\vee\mathcal{C}$ in $\Sigma^{\prime}_{k}$. Finally, augment $\Sigma^{\prime}_{k}$ with the following clauses: $\neg A\vee B_{1}\vee B^{\prime}_{1}$, …, $\neg A\vee B_{k+1}\vee B^{\prime}_{k+1}$. If $\Sigma$ admits a refutation of size $\Theta(n^{c})$ in $\mathrm{p\mbox{-}Res}^{*}(j)$ then $(\Sigma^{\prime}_{k},k)$ admits a refutation of size $\Theta(n^{c}+2^{k+1})$. The parameterized contradictions so obtained are no longer “strong”, but we could enforce even this by augmenting instead a Pigeonhole principle from $k+1$ to $k$.
It is hard to prove p-Res$(1)$ lower bounds for parameterized
$k$-clique on a random graph [4], but we now introduce a contradiction that looks similar but for which lower bounds should be easier.
It is a variant of the Pigeonhole principle which could give us another very natural separation of $\mathrm{p\mbox{-}Res}(1)$ from $\mathrm{p\mbox{-}Res}(2)$. Define the contradiction PHP${}_{k+1,n,k}$, on variables $p_{i,j}$ ($i\in[k+1]$ and $j\in[n]$) and $q_{i,j}$ ($i\in[n]$ and $j\in[k]$), and with clauses:
$$\begin{array}[]{ll}\neg p_{i,j}\vee\neg p_{l,j}&i\neq l\in[k+1];j\in[n]\\
\neg q_{i,j}\vee\neg q_{l,j}&i\neq l\in[n];j\in[k]\\
\bigvee_{\lambda\in[n]}p_{i,\lambda}&i\in[k]\\
\neg p_{i,j}\vee\bigvee_{\lambda\in[k]}q_{j,\lambda}&j\in[n]\\
\end{array}$$
We conjecture that this principle, which admits an fpt-bounded refutation in $\mathrm{p\mbox{-}Res}(2)$, does not admit one in $\mathrm{p\mbox{-}Res}(1)$.
Finally, we leave open the technical question as to whether suitably defined, further-relativised versions of RLNP${}_{n}$ can separate $\mathrm{p\mbox{-}Res}(j)$ from $\mathrm{p\mbox{-}Res}(j+1)$. We conjecture that they can.
References
[1]
A. Atserias and M. Bonet.
On the automatizability of resolution and related propositional proof
systems.
In 16th Annual Conference of the European Association for
Computer Science Logic, 2002.
[2]
Albert Atserias, Maria Luisa Bonet, and Juan Luis Esteban.
Lower bounds for the weak pigeonhole principle and random formulas
beyond resolution.
Inf. Comput., 176(2):136–152, 2002.
[3]
Olaf Beyersdorff, Nicola Galesi, and Massimo Lauria.
Hardness of parameterized resolution.
Technical report, ECCC, 2010.
[4]
Olaf Beyersdorff, Nicola Galesi, and Massimo Lauria.
Parameterized complexity of DPLL search procedures.
In Theory and Applications of Satisfiability Testing - SAT 2011
- 14th International Conference, SAT 2011, pages 5–18, 2011.
[5]
Olaf Beyersdorff, Nicola Galesi, Massimo Lauria, and Alexander A. Razborov.
Parameterized bounded-depth Frege is not optimal.
In Automata, Languages and Programming - 38th International
Colloquium, ICALP (1) 2011., pages 630–641, 2011.
[6]
S. Cook and R. Reckhow.
The relative efficiency of propositional proof systems.
Journal of Symbolic Logic, 44(1):36–50, March 1979.
[7]
S. Dantchev.
Relativisation provides natural separations for resolution-based
proof systems.
In Computer Science - Theory and Applications, First
International Computer Science Symposium in Russia, CSR 2006, St. Petersburg,
Russia, June 8-12, 2006, Proceedings, volume 3967 of Lecture Notes in
Computer Science, pages 147–158. Springer, 2006.
[8]
Stefan Dantchev, Barnaby Martin, and Stefan Szeider.
Parameterized proof complexity.
In 48th IEEE Symp. on Foundations of Computer Science, pages
150–160, 2007.
[9]
Stefan Dantchev, Barnaby Martin, and Stefan Szeider.
Parameterized proof complexity.
Computational Complexity, 20, 2011.
[10]
Rodney G. Downey and Michael R. Fellows.
Parameterized Complexity.
Monographs in Computer Science. Springer Verlag, 1999.
[11]
J.L. Esteban, N. Galesi, and J. Messner.
On the complexity of resolution with bounded conjunctions.
In Proceedings of the 29th International Colloquium on Automata,
Languages and Programming, 2002.
[12]
Jörg Flum and Martin Grohe.
Parameterized Complexity Theory, volume XIV of Texts in
Theoretical Computer Science. An EATCS Series.
Springer Verlag, 2006.
[13]
J. Krajíček.
On the weak pigeonhole principle.
Fundamenta Mathematicae, 170:123–140, 2001.
[14]
Barnaby Martin.
Parameterized proof complexity and W[1].
CoRR: arxiv.org/abs/1203.5323, 2012.
Submitted to Information Processing Letters.
[15]
P. Pudlák.
Proofs as games.
American Mathematical Monthly, pages 541–550, June-July 2000.
[16]
N. Segerlind, S. Buss, and R. Impagliazzo.
A switching lemma for small restrictions and lower bounds for $k$-DNF
resolution.
In Proceedings of the 43rd annual symposium on Foundations Of
Computer Science. IEEE, November 2002. |
TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise
Amirmasoud Ghiassi, Taraneh Younesian, Lydia Y. Chen
TU Delft, The Netherlands
{s.ghiassi,t.younesian}@tudelft.nl, lydiaychen@ieee.org
Robert Birke
ABB Future Labs, Switzerland
robert.birke@ch.abb.com
Abstract
Robustness to label noise is a critical property for weakly-supervised classifiers trained on massive datasets. The related work on resilient deep networks tends to focus on a limited set of synthetic noise patterns, with disparate views on their impacts, e.g., robustness against symmetric vs. asymmetric noise patterns.
In this paper, we first derive an analytical bound for any given noise pattern. Based on the insights, we design TrustNet, which first adversarially learns the pattern of noise corruption, be it symmetric or asymmetric, from a small set of trusted data. Then, TrustNet is trained via a robust loss function, which weights the given labels against the inferred labels from the learned noise pattern.
The weight is adjusted based on model uncertainty across training epochs.
We evaluate TrustNet on synthetic label noise for CIFAR-10 and CIFAR-100, and real-world data with label noise, i.e., Clothing1M. We compare against state-of-the-art methods demonstrating the strong robustness of TrustNet under a diverse set of noise patterns.
Keywords: Noisy label patterns · Resilient deep networks · Adversarial learning · Robust loss function
1 Introduction
Dirty data is a long-standing challenge for machine learning models. The recent surge of self-generated data significantly aggravates the dirty data problem [2, 27]. It is shown that data sets collected from the wild can contain corrupted labels as high as 40% [26]. Even widely-adopted curated data sets, e.g., CIFAR-10, have incorrectly labeled images [3]. The high learning capacity of deep neural networks can memorize the pattern of correct data and, unfortunately, dirty data as well [1]. As a result, when training on data with non-negligible dirty labels [29], the learning accuracy of deep neural networks can significantly drop.
While the prior art deems it imperative to derive robust neural networks that are resilient to label noise, there is a disparity in which noise patterns to consider and evaluate. The majority of robust deep networks against dirty labels focus on synthetic label noise, which can be symmetric or asymmetric. The former case [3] assumes noisy labels can be corrupted into any other class with equal probability, whereas the latter case [24] assumes only a particular set of classes is swapped, e.g., truck images are often mislabeled as the automobile class in CIFAR-10. Patterns of noisy labels observed from real-life data sets, e.g., Clothing1M [26], exhibit not only high percentages of label noise but also more complicated patterns mixing symmetric and asymmetric noise. Moreover, there is a disagreement among related work on which noise patterns are more detrimental to regular networks and difficult to defend against [16, 22].
Noise patterns are commonly captured in transition matrices [3], which describe the probability of how a true label is corrupted into another fake and observable label. A large body of prior art estimates such a labels transition matrix without knowing the true labels and incorporates such information into the learning process [18], particularly the computation of the loss function. Accurate estimation of the transition matrix can improve the robustness of neural networks, but it is extremely complicated when lacking the information on true labels and encountering sophisticated noise patterns [10].
In contrast, adversarial learning [21, 15] advocates to train classification networks jointly with adversarial examples, i.e., corrupted labels. As such, the transition matrix can be conveniently learned with a sufficient number of adversarial examples and their ground truth. This is what we term trusted data that contain not only given labels but also true labels validated by experts.
Hendrycks et al. [10] show that the resilience of deep networks can be greatly improved by leveraging such trusted data. The challenge is how to learn adversarial networks using a minimum set of trusted data that is difficult to obtain.
In this paper, we first develop a thorough understanding of the noise patterns, ranging from symmetric, asymmetric, and a mix of them. We extend the analysis from [3] and derive the analytical bound for classification accuracy for any given noise pattern. Our theoretical analysis compares real-world noise patterns against synthetic, symmetric, and simple asymmetric, noise.
Our findings on a diverse set of noise patterns lead us to focus on challenging cases where existing robust networks [18, 25] may fall short of defending against.
The second contribution of this paper is a novel noise resilient deep network, namely TrustNet, which leverages a small holdout of trusted data to estimate the noise transition matrix efficiently. Different from conventional adversarial learning, TrustNet only tries to estimate the noise transition matrix, instead of learning the overall representation of adversarial data, and hence requires only a small set of adversarial examples. Specifically, we first estimate the noise transition matrix through training ExpertNet [4] on a small set of trusted data, i.e., 10% of the training data. Such a trained ExpertNet can take images and their given labels as inputs and provide estimated labels – additional label information. The core training step of TrustNet is to dynamically weight the loss function between the given labels and the labels inferred by ExpertNet. The specific weights are dynamically adjusted every epoch, based on the model confidence. We evaluate TrustNet on CIFAR-10 and CIFAR-100, whose labels are corrupted by synthetically generated noise transition patterns. TrustNet is able to achieve higher accuracy than SCL [24], D2L [25], Bootstrap [19] and Forward [18] in the most challenging scenarios. We also demonstrate the effectiveness of TrustNet on a noisy real-world data set, i.e., Clothing1M, where it also achieves higher accuracy.
2 Related work
The problem of noisy labeled data has been addressed in several recent studies. We first summarize the impact of noise patterns, followed by the defense strategies that specifically leverage noise patterns.
2.1 Impact of Noise Patterns
Understanding the effect of label noise on the performance of the learning models is crucial to make them robust.
The impact of label noise in deep neural networks is first characterized [3] by the theoretical testing accuracy over a limited set of noise patterns. [22] suggests an undirected graphical model for modeling label noise in deep neural networks and indicates the symmetric noise to be more challenging than asymmetric. Having multiple untrusted data sources is studied by [12] by considering label noise as one of the attributes of mistrust and assigning weights to different sources based on their reliability. However, it remains unclear how various kinds of noise patterns impact learning.
2.2 Resilient Networks Against (A)Symmetric Noise
Symmetric Noise
The following studies tackle the problem of symmetric label noise, meaning that corrupted labels can be any of the remaining classes with equal probability. One approach is to train the network based on noise resilient loss functions. D2L [16] monitors the changes in Local Intrinsic Dimension (LID) and incorporates LID into their loss function for the symmetric label noise. [10] introduces a loss correction technique and estimates a label corruption matrix for symmetric and asymmetric noise.
Leveraging two different neural networks is another method to overcome label noise. Co-teaching [8] trains two neural networks while crossing the samples with the smallest loss between the networks for both symmetric and asymmetric noise patterns.
Co-teaching$+$ [28] focuses on updating by disagreement between the two networks on small-loss samples. [11] combats uniform label flipping via a curriculum provided by the MentorNet for the StudentNet. However, these works do not explicitly model the noise pattern in their resilient models.
Asymmetric Noise
Another stream of related work considers both symmetric and asymmetric noise. One key idea is to differentiate clean and noisy samples by exploring their dissimilarity.
[9, 14] introduce class prototypes for each class and compare the samples with the prototypes to detect noisy and clean samples.
Decoupling [17] uses two neural networks and updates the networks when a disagreement happens between the networks.
Estimation of the noise transition matrix is another line of research to overcome label noise. Masking [7] uses human cognition to estimate noise and build a noise transition matrix. Forward [18] corrects the deep neural network loss function while benefiting from a noise transition matrix. However, these studies do not consider the information in the noisy labels to estimate the matrix.
Building a robust loss function against label noise has been studied in the following works, although the dynamics of the learning model seem to be neglected. [30] provides a generalization of categorical cross entropy loss function for deep neural networks. The study [24], namely SCL, uses symmetric cross entropy as the loss function. Bootstrapping [19] combines perceptual consistency with the prediction objective by using a reconstruction loss. The research in [5, 20] changes the architecture of the neural network by adding a linear layer on top.
In this work, we study both symmetric and various kinds of asymmetric label noise. We use the information in the noisy labels to estimate the noise transition matrix in an adversarial learning manner. Furthermore, we benefit from a dynamic update in our proposed loss function to tackle the label noise problem.
3 Understanding DNNs trained with noisy labels
In this section, we present theoretical bounds on the test accuracy of deep neural networks assumed to have high learning capacity. Test accuracy is a common metric defined as the probability that the predicted label is equal to the given label. We extend prior art results [3] by deriving bounds for generic label noise distributions. We apply our formulation on three exemplary study cases and verify the theoretical bounds against experimental data. Finally, we compare bounds for different noise patterns providing insights on their difficulty for regular networks.
3.1 Preliminaries
Consider the classification problem having dataset $\mathcal{D}=\{(\bm{x}_{1},{y}_{1}),(\bm{x}_{2},{y}_{2}),\ldots,(\bm{x}_{N},{y}_{N})\}$ where $\bm{x}_{k}$ denotes the $k^{th}$ observed sample, and ${y}_{k}\in C:=\{0,...,c-1\}$ the corresponding given class label over $c$ classes affected by label noise. Let $\mathcal{F}(\cdot,\bm{\theta})$ denote a neural network parameterized by $\bm{\theta}$, and $y^{\mathcal{F}}$ denote the predicted label of $\bm{x}$ given by the network $y^{\mathcal{F}}=\mathcal{F}(\bm{x},\bm{\theta})$. The label corruption process is characterised by a transition matrix $T_{ij}=P(y=j|\hat{y}=i)$ where $\hat{y}$ is the true label. Synthetic noise patterns are expressed as a label corruption probability $\varepsilon$ plus a noise label distribution. For example, symmetric noise is defined by $\varepsilon$ describing the corruption probability, i.e. $T_{ii}=1-\varepsilon,\forall i\in C$, plus a uniform label distribution across the other labels, i.e. $T_{ij}=\frac{\varepsilon}{c-1},\forall i\neq j\in C$.
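The symmetric transition matrix just defined can be built in a few lines. The sketch below (helper name ours) constructs $T$ for $c$ classes and corruption probability $\varepsilon$.

```python
import numpy as np

def symmetric_T(c, eps):
    """Transition matrix T_ij = P(y = j | y_hat = i) for symmetric noise:
    the true label survives w.p. 1 - eps, otherwise the observed label
    is uniform over the c - 1 remaining classes (helper name ours)."""
    T = np.full((c, c), eps / (c - 1))
    np.fill_diagonal(T, 1.0 - eps)
    return T
```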
3.2 New Test Accuracy Bounds
To extend the previous bounds, we first consider the case where all classes are affected by the same noise ratio. We then further extend to the case where only a subset of classes is affected by noise.
All class noise: All classes are affected by the same noise ratio $\varepsilon$, i.e., only a $1-\varepsilon$ fraction of the given labels are the true labels.
Lemma 1
For noise with fixed noise ratio $\varepsilon$ for all classes and any given label distribution $P(y=j),i\neq j$, the test accuracy is
$$\begin{split}P({y}^{\mathcal{F}}=y)=(1-\varepsilon)^{2}+\varepsilon^{2}\sum_{j\neq i}^{C}{P^{2}(y=j)}\end{split}$$
(1)
Proof
We have that $T_{ii}=1-\varepsilon,\forall i\in C$ since all classes are affected by the same noise ratio. Moreover, the probability of selecting noisy class labels is scaled by the noise ratio $T_{ij}=\varepsilon\;P(y=j),j\neq i\in C$. Now:
$$\begin{split}P(y^{\mathcal{F}}=y)&=\sum_{i}^{C}{P(\hat{y}=i)P(y^{\mathcal{F}}=y|\hat{y}=i)}\\
&=\sum_{i}^{C}{P(\hat{y}=i)}\sum_{j}^{C}{T^{2}_{ij}}\\
&=\sum_{i}^{C}{P(\hat{y}=i)}[T^{2}_{ii}+\sum_{j\neq i}^{C}{T^{2}_{ij}}]\\
&=\sum_{i}^{C}{P(\hat{y}=i)}[(1-\varepsilon)^{2}+\varepsilon^{2}\sum_{j\neq i}^{C}{P^{2}(y=j)}].\end{split}$$
(2)
Since $\sum_{i}^{C}{P(\hat{y}=i)}=1$, we obtain Eq. 1. $\squareforqed$
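Eq. 1 can be sanity-checked numerically against the intermediate expression $\sum_{i}P(\hat{y}=i)\sum_{j}T^{2}_{ij}$ of the proof. The sketch below (function names ours) evaluates both sides for a given noise ratio and noisy-label distribution.

```python
import numpy as np

def lemma1_bound(eps, p):
    """Eq. (1): (1 - eps)^2 + eps^2 * sum_j P(y = j)^2, where p is the
    noisy-label distribution over the wrong classes (names ours)."""
    return (1 - eps) ** 2 + eps ** 2 * float((np.asarray(p) ** 2).sum())

def accuracy_from_T(T):
    """The proof's intermediate form sum_i P(y_hat = i) sum_j T_ij^2,
    assuming uniform true labels."""
    return float(np.mean((T ** 2).sum(axis=1)))
```

For symmetric noise with $c=10$ and $\varepsilon=0.4$, both evaluate to $0.36+0.16/9\approx 0.378$.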
Partial class noise: In this pattern only a subset $S$ of class labels is affected by a noise ratio, whereas the set $U=C\;\backslash\;S$ is unaffected by any label noise.
Lemma 2
For partial class noise with equal class label probability, where $S$ is the set affected by noise with ratio $\varepsilon$ and $U$ is the set of unaffected labels, the test accuracy is
$$\begin{split}P(y^{\mathcal{F}}=y)=\frac{|U|}{|C|}+\frac{|S|}{|C|}[{(1-\varepsilon)}^{2}+\varepsilon^{2}\sum_{j\neq i}^{S}{P^{2}(y=j)}]\end{split}$$
(3)
Proof
We have that for affected labels in $S$ the same noise transition definitions hold, i.e. $T_{ii}=1-\varepsilon,\forall i\in S$ and $T_{ij}=\varepsilon\;P(y=j),j\neq i\in S$. For unaffected labels we have that $\varepsilon=0$ hence $T_{ii}=1,\forall i\in U$ and $T_{ij}=0,j\neq i\in U$.
Moreover, $P(\hat{y}=i)=\frac{1}{|C|}$ assuming all class labels are equally probable. Now:
$$\begin{split}P(y^{\mathcal{F}}=y)=&\sum_{i}^{C}{P(\hat{y}=i)P(y^{\mathcal{F}}=y|\hat{y}=i)}\\
=&\sum_{i}^{U}{P(\hat{y}=i)P(y^{\mathcal{F}}=y|\hat{y}=i)}+\sum_{i^{\prime}}^{S}{P(\hat{y}=i^{\prime})P(y^{\mathcal{F}}=y|\hat{y}=i^{\prime})}\\
=&\sum_{i}^{U}{P(\hat{y}=i)}\sum_{j}^{U}{T^{2}_{ij}}+\sum_{i^{\prime}}^{S}{P(\hat{y}=i^{\prime})}\sum_{j^{\prime}}^{S}{T^{2}_{i^{\prime}j^{\prime}}}\\
=&\sum_{i}^{U}{P(\hat{y}=i)}[T^{2}_{ii}+\sum_{j\neq i}^{U}{T^{2}_{ij}}]+\sum_{i^{\prime}}^{S}{P(\hat{y}=i^{\prime})}[T^{2}_{i^{\prime}i^{\prime}}+\sum_{j^{\prime}\neq i^{\prime}}^{S}{T^{2}_{i^{\prime}j^{\prime}}}]\\
=&\frac{1}{|C|}\sum_{i}^{U}[T^{2}_{ii}+\sum_{j\neq i}^{U}{T^{2}_{ij}}]+\frac{1}{|C|}\sum_{i^{\prime}}^{S}[T^{2}_{i^{\prime}i^{\prime}}+\sum_{j^{\prime}\neq i^{\prime}}^{S}{T^{2}_{i^{\prime}j^{\prime}}}]\\
=&\frac{1}{|C|}\sum_{i}^{U}1+\frac{1}{|C|}\sum_{i^{\prime}}^{S}[(1-\varepsilon)^{2}+\varepsilon^{2}\sum_{j^{\prime}\neq i^{\prime}}^{S}{P^{2}(y=j^{\prime})}]\\
=&\frac{|U|}{|C|}+\frac{|S|}{|C|}[(1-\varepsilon)^{2}+\varepsilon^{2}\sum_{j^{\prime}\neq i^{\prime}}^{S}{P^{2}(y=j^{\prime})}]\\
\end{split}$$
(4)
$\squareforqed$
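Eq. 3 can likewise be verified numerically. The sketch below (helper name ours) evaluates the bound given $|C|$, $|S|$, $\varepsilon$ and the squared-mass term $\sum_{j\neq i\in S}P^{2}(y=j)$, and can be checked against a row-wise computation on an explicit transition matrix.

```python
def lemma2_bound(c, s, eps, p2):
    """Eq. (3) with |C| = c, |S| = s and p2 = sum_{j != i in S} P^2(y=j)
    (helper name ours)."""
    return (c - s) / c + (s / c) * ((1 - eps) ** 2 + eps ** 2 * p2)
```

For example, with $c=10$, half the classes swapped pairwise ($p2=1$) and $\varepsilon=0.3$, the bound is $0.5+0.5\,(0.49+0.09)=0.79$.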
3.3 Validation of Theoretical Bounds
We validate our new bounds on three study cases: we apply the theoretical bounds to three different noise patterns for CIFAR-10 under different noise ratios and compare the results against empirical accuracy results.
As first new noise pattern, we consider noisy class labels following a truncated normal distribution $\mathcal{N}^{T}(\mu,\sigma,a,b)$. This noise pattern is motivated by the targeted adversarial attacks [6]. We scale $\mathcal{N}^{T}(\mu,\sigma,a,b)$ by the number of classes and center it around a target class $\tilde{c}$ by setting $\mu=\tilde{c}$ and use $\sigma$ to control how spread out the noise is. $a$ and $b$ simply define the class label boundaries, i.e. $a=0$ and $b=c-1$. To compute the bound, we estimate the empirical distribution at the different classes and apply Eq. 1. The second noise pattern extends our previous case. This distribution, referred in short as bimodal hereon, combines two truncated normal distributions. It has two peaks in $\mu_{1}$ and $\mu_{2}$ with two different shapes controlled by $\sigma_{1}$ and $\sigma_{2}$. The peaks are centered on two different target classes $\mu_{1}=\tilde{c_{1}}$ and $\mu_{2}=\tilde{c_{2}}$.
The third noise pattern considers partial targeted noise where only a subset of classes, $[2,3,4,5,9]$ in our example, are affected by targeted noise, i.e. swapped with a specific other class. Here we rely on Eq. 3 to compute the bound. This noise pattern has been studied in [24].
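The truncated-normal pattern above can be sketched as a row of the transition matrix. The snippet below is our reading of the construction (not the authors' code): the true label is kept with probability $1-\varepsilon$ and the noise mass is spread over the other classes proportionally to a discretised normal centred on the target class $\tilde{c}$.

```python
import numpy as np

def truncnorm_row(c, true_class, target, sigma, eps):
    """One row of a truncated-normal noise transition matrix
    (our illustrative reading of the construction in the text)."""
    j = np.arange(c)
    w = np.exp(-0.5 * ((j - target) / sigma) ** 2)
    w[true_class] = 0.0            # no noise mass on the true class
    row = eps * w / w.sum()        # spread eps over the other classes
    row[true_class] = 1.0 - eps
    return row
```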
Fig. 7 summarizes the results. Top row shows the noise transition matrices for the three study noise patterns under noise ratio $\varepsilon=0.5$. Bottom row compares the theoretical bounds against the empirical results obtained by corrupting the CIFAR-10 dataset with different noise ratios from clean to fully corrupted data: $0\leq\varepsilon\leq 1$. The highest deviation between theoretical (lines) and empirical (points) results is shown for truncated normal noise around the deepest dip in the test accuracy, i.e., $\varepsilon=0.7$. Here the theoretical accuracy result is 8.67% points worse than the measured result. For the other two, the deviation is at most 4.06% and 2.97% (without considering $\varepsilon=0.0$) for bimodal and partial targeted noise, respectively. Overall, the theoretical and empirical results match well across the whole range of noise ratios.
3.4 Impact of Different Noise Patterns
We conclude by using our theoretical bounds to compare the impact on test accuracy of different noise patterns. First, we consider different parameters for truncated normal and bimodal noises and finish with comparing all noise patterns from here, in [3] and the real-world noise pattern from [26].
Fig. 11 shows all results. We start with truncated normal noise with a fixed target class and different $\sigma$. Higher values of $\sigma$ result in a wider spread of label noise across adjacent classes. Fig. 8a shows the results. Under lower noise ratios, e.g., $\varepsilon<0.5$, the impact of varying $\sigma$ is negligible, as shown by the overlapping curves. After that, we see that the most challenging cases are those with high values of $\sigma$, due to the wider spread of corrupted labels deviating from their true classes.
Similarly to the previous analysis, for bimodal noise we fix the target classes, i.e., $\mu_{1}$ and $\mu_{2}$, while varying the variances around the two peaks, i.e., $\sigma_{1}$ and $\sigma_{2}$. Overall the results are similar to truncated normal noise, but the sensitivity to $\sigma$ is lower (see Fig. 9b), even though on average the test accuracy under truncated normal noise is higher than under bimodal noise. For instance, in case of $\varepsilon=1.0$ the difference between $\sigma=0.5$ and $\sigma=1$ is 16.26% for truncated normal, but only 11.11% for bimodal. Hence, bimodal tends to be more challenging, since the curves for different $\sigma$ are all more condensed around low accuracy values compared to truncated normal noise.
To conclude, we compare all synthetic symmetric and asymmetric noise patterns considered against the real-world noise pattern observed on the Clothing1M dataset [26] (see Fig. 10c). The measured noise ratio of this dataset is $\varepsilon=0.41$. To create the test accuracy bound, we scale the noise pattern to different $\varepsilon$ by redistributing the noise, such as to maintain all relative ratios between noise transition matrix elements per class. This imposes a lower limit on the noise ratio of $\varepsilon=0.36$ to be able to keep all elements within the range $[0,1]$. As intuition can suggest, partial targeted noise has the least impact since it only affects a fraction of classes. More interestingly, we see that the decrease in accuracy for all asymmetric noise patterns is not monotonic. When noise ratios are high, another class becomes dominant, and thus it is easier to counter the noise pattern.
On the contrary, all curves tend to overlap at smaller noise ratios, i.e., noise patterns play a weaker role than at higher noise ratios. Finally, the real-world noise pattern almost overlaps with bimodal noise. This might be because errors in Clothing1M often occur between two classes that share visual patterns [26].
4 Methodology
In this section, we present our proposed robust learning framework, TrustNet, featuring a lightweight estimation of noise patterns and a noise-robust loss function.
4.1 TrustNet Architecture
Consider extending the classification problem from Section 3.1 with a set of trusted data, $\mathcal{T}=\{(\bm{x}_{1},y_{1},\hat{y}_{1}),(\bm{x}_{2},y_{2},\hat{y}_{2}),\dots,(\bm{x}_{N},y_{N},\hat{y}_{N})\}$.
$\mathcal{T}$ is validated by experts and provides for each sample $\bm{x}$ both the given label $y$ and the true label $\hat{y}$. Hence, our classification problem comprises two types of datasets: $\mathcal{T}$ and $\mathcal{D}$, where $\mathcal{D}$ has only the given class label $y$. The given class labels $y$ in both datasets are affected by the same noise pattern and noise ratio. Further, we assume that $\mathcal{T}$ is small compared to $\mathcal{D}$, i.e., $|\mathcal{T}|\ll|\mathcal{D}|$, due to the cost of experts’ advice.
Corresponding to the two datasets, TrustNet consists of two training routines highlighted by the top and bottom halves of Fig. 12. First (top half), TrustNet leverages the trusted dataset to learn the underlying noise transition matrix via ExpertNet [4]. ExpertNet is an adversarial network jointly learning from the given and true labels. Unlike fully fledged adversarial learning, TrustNet only uses ExpertNet to learn the noise transition matrix instead of the representation of corrupted images. Second (bottom half), the trained ExpertNet is used to derive a dataset $\mathcal{D^{\prime}}$ from $\mathcal{D}$ by enriching it with estimated class labels $\tilde{y}$ inferred by ExpertNet (blue path). Hence $\mathcal{D^{\prime}}=\{(\bm{x}_{1},y_{1},\tilde{y}_{1}),(\bm{x}_{2},y_{2},\tilde{y}_{2}),\dots,(\bm{x}_{N},y_{N},\tilde{y}_{N})\}$. Then, we train a deep neural network, $\mathcal{F}(\cdot,\bm{\theta})$, on $\mathcal{D^{\prime}}$ using the proposed robust loss function from Section 4.3.
We note that the trusted data is used only to train ExpertNet, not $\mathcal{F}(\cdot,\bm{\theta})$.
4.2 Estimating Noise Transition Matrix
ExpertNet is an adversarial network that consists of two neural networks: Amateur and Expert. Amateur aims to classify images guided by the feedback from Expert. Expert acts as a supervisor who corrects the predictions of Amateur based on the ground truth.
Essentially, Expert learns how to transform predicted labels to true labels, i.e., a reverse noise transition matrix.
During training, Amateur first provides for a sample $\bm{x}_{k}$ a prediction of the class probabilities $\bm{y}_{k}^{\mathcal{A}}$ to Expert. Expert uses $\bm{y}_{k}^{\mathcal{A}}$ concatenated with the given class label $y_{k}$ to learn to predict the ground truth class label $\hat{y}_{k}$. In turn, the predicted label from Expert, $y_{k}^{\mathcal{E}}$, is provided as feedback to train Amateur. In summary, training alternately minimizes the following two loss functions for Amateur, described by $\mathcal{F}^{\mathcal{A}}(\cdot,\bm{\theta}^{\mathcal{A}})$, and Expert, described by $\mathcal{F}^{\mathcal{E}}(\cdot,\bm{\theta}^{\mathcal{E}})$:
$$\min_{\bm{\theta}^{\mathcal{A}}}\mathcal{L}(\mathcal{F}^{\mathcal{A}}(\bm{x}_{k},\bm{\theta}^{\mathcal{A}}),y_{k}^{\mathcal{E}})\qquad\min_{\bm{\theta}^{\mathcal{E}}}\mathcal{L}(\mathcal{F}^{\mathcal{E}}(\langle\bm{y}_{k}^{\mathcal{A}},y_{k}\rangle,\bm{\theta}^{\mathcal{E}}),\hat{y}_{k})$$
where $\langle\cdot,\cdot\rangle$ denotes vector concatenation; [4] provides the technical details.
The trained ExpertNet can estimate the true label from an image $\bm{x_{k}}$:
$$\tilde{y}_{k}=\mathcal{F}^{\mathcal{E}}(\langle\mathcal{F}^{\mathcal{A}}(\bm{x}_{k},\bm{\theta}^{\mathcal{A}}),y_{k}\rangle,\bm{\theta}^{\mathcal{E}}).$$
(5)
Specifically, we use the trained ExpertNet to enrich and transform $\mathcal{D}$ into $\mathcal{D^{\prime}}$ by incorporating for each image $\bm{x}_{k}$ the inferred class label $\tilde{y}_{k}$. Subsequently, we use $\mathcal{D^{\prime}}$ to train $\mathcal{F}(\cdot,\bm{\theta})$ via the noise-robust loss function from Section 4.3.
4.3 Noise Robust Loss Function
The given labels are corrupted by noise. Directly training on the given labels results in highly degraded performance as the neural network is not able to easily discern between clean and corrupted labels. To make the learning more robust to noise, TrustNet proposes to modify the loss function to leverage both given labels $y$ and inferred labels $\tilde{y}$ from ExpertNet to train $\mathcal{F}(\cdot,\bm{\theta})$.
The predicted label of $\mathcal{F}(\cdot,\bm{\theta})$ is compared, e.g., via cross-entropy loss, against both the given label and inferred label. The challenge is how to combine these two loss values. Ideally, for samples for which ExpertNet and $\mathcal{F}(\cdot,\bm{\theta})$ are highly accurate, the inferred label can be trusted more. On the contrary, for samples for which ExpertNet and $\mathcal{F}(\cdot,\bm{\theta})$ have low accuracy, the given labels can be trusted more.
Specifically, TrustNet uses a weighted average between the loss of the predicted label from $\mathcal{F}(\bm{x}_{k},\bm{\theta})$ against both the given label $y_{k}$ and the ExpertNet’s inferred label $\tilde{y}_{k}$ with per sample weights $\alpha_{k}$ and $(1-\alpha_{k})$ for all samples $\bm{x}_{k}$ in $\mathcal{D^{\prime}}$.
Moreover, TrustNet dynamically adjusts $\alpha_{k}$ after each epoch based on the observed learning performance of $\mathcal{F}(\bm{x}_{k},\bm{\theta})$.
In detail, we use cross-entropy $\mathcal{H}$ as the standard loss measure to train our deep neural network $\mathcal{F}(\bm{x}_{k},\bm{\theta})$:
$$\mathcal{H}(\mathcal{F}(\bm{x}_{k},\bm{\theta}),y_{k})=-\sum_{i=0}^{c-1}\mathbbm{1}(y_{k},i)\log\mathcal{F}_{i}(\bm{x}_{k},\bm{\theta})$$
(6)
where $\mathbbm{1}(y_{k},i)$ is an indicator function equal to $1$ if $y_{k}=i$ and $0$ otherwise, and $\mathcal{F}_{i}(\bm{x}_{k},\bm{\theta})$ denotes the predicted probability of class $i$.
For each data point $\bm{x}_{k}$ in $\mathcal{D^{\prime}}$, we assign weights of $\alpha_{k}$ and $(1-\alpha_{k})$ to the cross-entropy of the given $y_{k}$ and inferred $\tilde{y}_{k}$ labels, respectively. We let ${\alpha_{k}}\in[0,1]$. Hence, we write the robust loss function $\mathcal{L}_{robust}$ as follows:
$$\mathcal{L}_{robust}(\mathcal{F}(\bm{x}_{k},\bm{\theta}),y_{k},\tilde{y}_{k})=\alpha_{k}\,\mathcal{H}(\mathcal{F}(\bm{x}_{k},\bm{\theta}),y_{k})+(1-\alpha_{k})\,\mathcal{H}(\mathcal{F}(\bm{x}_{k},\bm{\theta}),\tilde{y}_{k}).$$
(7)
When $\alpha_{k}$ is low, more weight falls on the cross-entropy of the inferred labels, and vice versa. In the following, we explain how to dynamically set $\alpha_{k}$ per epoch.
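Eqs. (6) and (7) can be condensed into a few lines. The sketch below is purely illustrative and assumes the network output is already a probability vector (e.g., after a softmax); the function names are ours.

```python
import math

def cross_entropy(probs, label):
    # Eq. (6): negative log-probability assigned to the labelled class
    return -math.log(probs[label])

def robust_loss(probs, given_label, inferred_label, alpha):
    # Eq. (7): weighted combination of the losses against the given
    # label y_k (weight alpha_k) and the inferred label (weight 1 - alpha_k)
    return (alpha * cross_entropy(probs, given_label)
            + (1 - alpha) * cross_entropy(probs, inferred_label))
```

With $\alpha_{k}=1$ the loss reduces to ordinary cross-entropy on the given labels; with $\alpha_{k}=0$ training relies purely on ExpertNet’s inferred labels.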
4.3.1 Dynamic $\alpha_{k}$
Here we adjust $\alpha_{k}$ based on the uncertainty of TrustNet and ExpertNet. When the learning capacities of ExpertNet and TrustNet are higher (lower values of the loss function), we have more confidence in the inferred labels and put more weight on the second term of Eq. 7, i.e., smaller $\alpha_{k}$ values. As a rule of thumb, $\alpha_{k}$ values are high at the beginning, since TrustNet experiences higher losses at the start of training. Then $\alpha_{k}$ values gradually decrease with the growing capacity of TrustNet.
Let $\alpha_{k,e}$ be the weight of the $k^{th}$ image at epoch $e$. We initialize $\alpha_{k,0}$ based on the entropy value $S$ from inferred class probabilities $\tilde{\bm{y}}_{k}$ of ExpertNet:
$$S(\tilde{\bm{y}}_{k})=-\sum_{i=0}^{c-1}\tilde{y}^{i}_{k}\,\log\tilde{y}^{i}_{k}$$
where $c$ is the number of classes and $\tilde{y}^{i}_{k}$ is the $i^{th}$ class probability of $\tilde{\bm{y}}_{k}$. We use ExpertNet since we do not yet have any predictions from TrustNet’s own neural network.
For subsequent epochs, $e>0$, we switch to TrustNet as source of entropy values. We gradually adjust $\alpha_{k,e}$ based on the relative difference between current and previous epoch values:
$$\alpha_{k,e}=\alpha_{k,e-1}\left(1+\frac{S(\bm{y}_{k}^{\mathcal{F}}(e))-S(\bm{y}_{k}^{\mathcal{F}}(e-1))}{S(\bm{y}_{k}^{\mathcal{F}}(e-1))}\right)\qquad\forall e>0,$$
(8)
where $\bm{y}_{k}^{\mathcal{F}}(e)$ are the class probabilities predicted by $\mathcal{F}(\cdot,\bm{\theta})$ for the $k^{th}$ image at epoch $e$. When the entropy values decrease, we gain more confidence in TrustNet and the weights $(1-\alpha_{k})$ on the inferred labels increase.
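The entropy measure and the per-epoch update of Eq. (8) can be sketched as follows. The clamp of $\alpha_{k,e}$ to $[0,1]$ is our assumption: the paper requires $\alpha_{k}\in[0,1]$, but Eq. (8) alone does not enforce it.

```python
import math

def entropy(probs):
    # S(y_k): Shannon entropy of a predicted class-probability vector
    return -sum(p * math.log(p) for p in probs if p > 0)

def update_alpha(alpha_prev, s_curr, s_prev):
    # Eq. (8): scale the previous weight by the relative entropy change.
    # Clamping to [0, 1] is an assumption not stated in Eq. (8).
    alpha = alpha_prev * (1.0 + (s_curr - s_prev) / s_prev)
    return min(1.0, max(0.0, alpha))
```

For instance, if the prediction entropy halves between two epochs, $\alpha_{k}$ halves as well, shifting weight toward the inferred labels.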
We summarize the training procedure of TrustNet in Algorithm 1. Training ExpertNet consists of training two neural networks, Expert, $\mathcal{F}^{\mathcal{E}}(\cdot,\bm{\theta}^{\mathcal{E}})$, and Amateur, $\mathcal{F}^{\mathcal{A}}(\cdot,\bm{\theta}^{\mathcal{A}})$, using the trusted data $\mathcal{T}$ for $E_{ExpertNet}$ epochs (lines 1-4).
Then we need to compute the inferred labels for all data points in $\mathcal{D}$ to produce $\mathcal{D^{\prime}}$ (line 5).
Finally, we train TrustNet for $E_{TrustNet}$ epochs (lines 6-14). $\alpha_{k}$ is initialized via the entropy of the inferred labels (line 9) and then updated via the entropy of the predicted labels (line 11). The robust loss function is computed accordingly (line 13).
5 Evaluation
In this section, we empirically compare TrustNet against the state of the art under both synthetic and real-world noise. We aim to show the effectiveness of TrustNet via test accuracy on diverse and challenging noise patterns.
5.1 Experimental setup
We consider three datasets: CIFAR-10 [13], CIFAR-100 [13] and Clothing1M [26].
CIFAR-10 and CIFAR-100 both have 60K images of $32\times 32$ pixels organized in 10 and 100 classes, respectively. These two datasets have no or minimal label noise. We split the datasets into 50K training and 10K testing sets and inject into the training set the label noises from Section 3. We assume that 10% of the training set forms the trusted data with access to the clean labels used as ground truth. We use this trusted set to learn the noise transition matrix via ExpertNet. In turn, ExpertNet infers the estimated labels for the remaining training data. The whole training set is then used to train TrustNet.
Clothing1M contains 1 million images scraped from the Internet, which we resize and crop to $224\times 224$ pixels. Images are classified into $14$ class labels. These labels are affected by real-world noise stemming from the automatic labelling. Out of the 1 million images, a subset of trusted expert-validated images contains the ground truth labels. This subset consists of 47K and 10K images for training and testing, respectively. As for CIFAR-10 and CIFAR-100, we use the trusted set to train ExpertNet and infer the estimated labels for the rest of the dataset to train TrustNet.
Note that for all three datasets, only the training sets are subject to label noise, not the testing sets.
The architecture of Expert consists of a 4-layer feed-forward neural network with Leaky ReLU activation functions in the hidden layers and sigmoid in the last layer. This Expert architecture is used across all datasets. TrustNet and Amateur use the same architecture, which depends on the dataset. For CIFAR-10, TrustNet and Amateur consist of an 8-layer CNN with 6 convolutional layers followed by 2 fully connected layers with ReLU activation functions, as in [23]. For CIFAR-100, both rely on the ResNet44 architecture. Finally, Clothing1M uses a ResNet101 pretrained on ImageNet. TrustNet (ExpertNet) is trained for 120 (150) and 200 (180) epochs for CIFAR-10 and CIFAR-100, respectively, using the SGD optimizer with batch size 128, momentum $0.9$, weight decay $10^{-4}$, and learning rate $0.01$. Finally, Clothing1M uses 50 (35) epochs and batch size 32, momentum $0.9$, weight decay $5\times 10^{-3}$, and learning rate $2\times 10^{-3}$ divided by 10 every $5$ epochs.
Our target evaluation metric is the accuracy achieved on the clean testing set, i.e. not affected by noise. We compare TrustNet against four noise resilient networks from the state of the art: SCL [24], D2L [25], Forward [18], and Bootstrap [19]. All training uses Keras v2.2.4 and Tensorflow v1.13.
5.2 Synthetic Noise Patterns
For CIFAR-10 and CIFAR-100, we inject synthetic noise. We focus on asymmetric noise patterns following a truncated normal and bimodal distribution, and symmetric noise, as discussed in Section 3. We inject noises with average rates $\varepsilon=0.4$, $0.5$ and $0.6$.
For truncated normal noise, the target classes and standard deviations are class 1 with $\sigma=0.5$ or $\sigma=5$ for CIFAR-10, and class 10 with $\sigma=1$ or $\sigma=10$ for CIFAR-100.
For bimodal we use $\mu_{1}=2$, $\sigma_{1}=1$ plus $\mu_{2}=7$, $\sigma_{2}=3$ and $\mu_{1}=20$, $\sigma_{1}=10$ plus $\mu_{2}=70$, $\sigma_{2}=5$ for CIFAR-10 and CIFAR-100, respectively.
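Section 3’s exact construction of the noise transition matrices is not reproduced in this chunk, but one plausible sketch of a truncated-normal noise row is the following: off-diagonal mass is spread in proportion to a normal density centred on the target class $\mu$ and rescaled so that each row flips with probability $\varepsilon$. All function and parameter names here are ours, for illustration only.

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of a normal distribution evaluated at a class index
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def noise_row(true_class, mu, sigma, eps, num_classes):
    # One row of a noise transition matrix: keep the true label with
    # probability 1 - eps, and spread eps over the other classes with
    # weights from a normal density centred on the target class mu.
    weights = [0.0 if j == true_class else normal_pdf(j, mu, sigma)
               for j in range(num_classes)]
    z = sum(weights)
    row = [eps * w / z for w in weights]
    row[true_class] = 1.0 - eps
    return row
```

Each row sums to one by construction, and for small $\sigma$ the flipped mass concentrates on the target class, matching the “fixed target class” setting described above.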
5.2.1 CIFAR-10
We summarize the results of CIFAR-10 in Table 1. We report the average and standard deviation across three runs. Overall the results are stable across different runs as seen from the low values of standard deviation. For readability reasons, we skip the results for 50% noise in the table. These results follow the trend between 40% and 60% noise.
TrustNet achieves the highest accuracy for bimodal noise, which is one of the most difficult noise patterns based on Section 3. Here the accuracy of TrustNet is consistently the best, outperforming the second best method by 2.4%, 21.1%, and 27.2% at 40%, 50%, and 60% noise ratios, respectively. At the same time, TrustNet is the second best method for symmetric and truncated normal asymmetric noise. Here the best method is often SCL, which also leverages a modified loss function to enhance the per-class accuracy using symmetric cross-entropy. This design targets symmetric noise directly, where SCL outperforms TrustNet. Considering the asymmetric truncated normal noise, the difference is smaller and decreases with increasing noise ratio. At 60% noise, SCL is only marginally better by, on average, 2.9%. Finally, test accuracy variations are not noticeable with increasing $\sigma$ values. All other baselines perform worse.
5.2.2 CIFAR-100
Table 2 summarizes the CIFAR-100 results over three runs. CIFAR-100 is more challenging than CIFAR-10 because it increases tenfold the number of classes while keeping the same amount of training data. This is clearly reflected in the accuracy results across all methods, but TrustNet overall seems to be more resilient. Here, TrustNet achieves the highest accuracy for both asymmetric noise patterns under all considered noise ratios. On average, the accuracy of TrustNet is higher than SCL, the second best solution, by 2%. The improvement is higher for higher noise ratios and lower variation, i.e., $\sigma=1$. SCL outperforms TrustNet on symmetric noise of low and middle intensity, i.e., $\varepsilon=[0.4,0.5]$, but the difference diminishes with increasing noise, and at 60% TrustNet performs better. Different from CIFAR-10, test accuracy variations become noticeable for truncated normal noise with increasing $\sigma$ values producing a positive effect across most baselines. All other baselines perform worse.
5.3 Real-world Noisy Data: Clothing1M
We use the noise pattern observed in real world data from the Clothing1M dataset to demonstrate the effectiveness and importance of estimating the noise transition matrix in TrustNet. Table 3 summarizes the results on the testing accuracy for TrustNet and the four baselines. The measured average noise ratio across all classes is 41%. Here, TrustNet achieves the highest accuracy, followed by SCL and Forward. Forward is another approach trying to estimate the noise transition matrix. The better accuracy of TrustNet is attributed to the additional label estimation from ExpertNet learned via the trusted data and dynamically weighting the loss functions from given and inferred labels. The promising results here confirm that the novel learning algorithm of TrustNet can tackle challenging label noise patterns appearing in real-world datasets.
6 Discussion
In this section, we discuss testing accuracy on clean and noisy samples.
The bounds derived in Section 3 consider testing on labels affected by the same noise as the training data. This is because the ground truth of labels is usually unknown, or not even available, in typical learning scenarios. However, the accuracy measured on noisy testing data provides no information about how effectively resilient networks defend the training process against noisy data.
Hence, related work on noisy label learning tests on clean samples, which shows different trends, as hinted in the evaluation section. Fig. 15 compares the two approaches across different noise patterns empirically. In general, in the case of clean test labels, the testing accuracy decreases almost linearly with increasing noise ratios. As for noisy labels, testing accuracy shows a clear quadratic trend, first decreasing before increasing again. Specifically, the lowest accuracy occurs at noise ratios of 0.6 and 0.8 in the case of the truncated normal noise example with $\mu=1$ and $\sigma=0.5$ (Fig. 13a) and the bimodal noise example with $\mu_{1}=2,\sigma_{1}=0.5,\mu_{2}=7,\sigma_{2}=5$ (Fig. 14b), respectively. The reason is that examples of a specific class with erroneous labels become more numerous than examples with the true class, e.g., more truck images are labelled as automobile than actual automobile images. Such an effect is missing when testing on clean labels.
7 Conclusion
Motivated by the disparity of label noise patterns studied in the prior art, we first derive an analytical understanding of synthetic and real-world noise, i.e., how testing accuracy degrades with noise ratios and patterns. The challenging noise patterns identified here lead to the proposed learning framework, TrustNet, which features lightweight adversarial learning and a label-noise-resilient loss function. TrustNet first adversarially learns a noise transition matrix via a small set of trusted data and ExpertNet. Combining the estimated labels inferred from ExpertNet, TrustNet computes a robust loss function from both given and inferred labels via dynamic weights according to the learning confidence, i.e., the entropy. We evaluate TrustNet on CIFAR-10, CIFAR-100, and Clothing1M using a diverse set of synthetic and real-world noise patterns. The higher testing accuracy against state-of-the-art resilient networks shows that TrustNet can effectively learn the noise transition and enhance the robustness of the loss function against noisy labels.
References
[1]
Arpit, D., Jastrzębski, S., Ballas, N., Krueger, D., Bengio, E., Kanwal,
M.S., Maharaj, T., Fischer, A., Courville, A., Bengio, Y., et al.: A closer
look at memorization in deep networks. In: ICML. pp. 233–242 (2017)
[2]
Blum, A., Kalai, A., Wasserman, H.: Noise-tolerant learning, the parity
problem, and the statistical query model. Journal of the ACM 50(4),
506–519 (2003)
[3]
Chen, P., Liao, B., Chen, G., Zhang, S.: Understanding and utilizing deep
neural networks trained with noisy labels. In: ICML. pp. 1062–1070 (2019)
[4]
Ghiassi, A., Birke, R., Han, R., Chen, L.Y.: Expertnet: Adversarial learning
and recovery against noisy labels. arXiv preprint arXiv:2007.05305 (2020)
[5]
Goldberger, J., Ben-Reuven, E.: Training deep neural-networks using a noise
adaptation layer. In: ICLR (2017)
[6]
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing
adversarial examples. In: ICLR (2015)
[7]
Han, B., Yao, J., Niu, G., Zhou, M., Tsang, I.W., Zhang, Y., Sugiyama, M.:
Masking: A new perspective of noisy supervision. In: NIPS. pp. 5841–5851
(2018)
[8]
Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I., Sugiyama, M.:
Co-teaching: Robust training of deep neural networks with extremely noisy
labels. In: NIPS. pp. 8527–8537 (2018)
[9]
Han, J., Luo, P., Wang, X.: Deep self-learning from noisy labels. In: ICCV.
pp. 5137–5146 (2019)
[10]
Hendrycks, D., Mazeika, M., Wilson, D., Gimpel, K.: Using trusted data to train
deep networks on labels corrupted by severe noise. In: NIPS. pp.
10456–10465 (2018)
[11]
Jiang, L., Zhou, Z., Leung, T., Li, L., Fei-Fei, L.: Mentornet: Learning
data-driven curriculum for very deep neural networks on corrupted labels. In:
ICML. pp. 2309–2318 (2018)
[12]
Konstantinov, N., Lampert, C.: Robust learning from untrusted sources. In:
ICML. vol. 97, pp. 3488–3498 (2019)
[13]
Krizhevsky, A., Nair, V., Hinton, G.: Cifar-10/100 (Canadian Institute for
Advanced Research) (2009), http://www.cs.toronto.edu/~kriz/cifar.html
[14]
Lee, K., He, X., Zhang, L., Yang, L.: Cleannet: Transfer learning for scalable
image classifier training with label noise. In: CVPR. pp. 5447–5456 (2018)
[15]
Liu, G., Khalil, I., Khreishah, A.: Zk-gandef: A GAN based zero knowledge
adversarial training defense for neural networks. In: IEEE/IFIP DSN. pp.
64–75 (2019)
[16]
Ma, X., Wang, Y., Houle, M.E., Zhou, S., Erfani, S.M., Xia, S., Wijewickrema,
S.N.R., Bailey, J.: Dimensionality-driven learning with noisy labels. In:
ICML. pp. 3361–3370 (2018)
[17]
Malach, E., Shalev-Shwartz, S.: Decoupling "when to update" from "how to
update". In: NIPS. pp. 960–970 (2017)
[18]
Patrini, G., Rozza, A., Krishna Menon, A., Nock, R., Qu, L.: Making deep neural
networks robust to label noise: A loss correction approach. In: IEEE CVPR.
pp. 1944–1952 (2017)
[19]
Reed, S.E., Lee, H., Anguelov, D., Szegedy, C., Erhan, D., Rabinovich, A.:
Training deep neural networks on noisy labels with bootstrapping. In: ICLR,
Workshop Track Proceedings (2015)
[20]
Sukhbaatar, S., Bruna, J., Paluri, M., Bourdev, L., Fergus, R.: Training
convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080
(2014)
[21]
Thekumparampil, K.K., Khetan, A., Lin, Z., Oh, S.: Robustness of conditional
gans to noisy labels. In: NIPS. pp. 10271–10282 (2018)
[22]
Vahdat, A.: Toward robustness against label noise in training deep
discriminative neural networks. In: NIPS. pp. 5596–5605 (2017)
[23]
Wang, Y., Liu, W., Ma, X., Bailey, J., Zha, H., Song, L., Xia, S.T.: Iterative
learning with open-set noisy labels. In: IEEE CVPR. pp. 8688–8696 (2018)
[24]
Wang, Y., Ma, X., Chen, Z., Luo, Y., Yi, J., Bailey, J.: Symmetric cross
entropy for robust learning with noisy labels. In: IEEE ICCV. pp. 322–330
(2019)
[25]
Wang, Y., Ma, X., Houle, M.E., Xia, S.T., Bailey, J.: Dimensionality-driven
learning with noisy labels. ICML pp. 3361–3370 (2018)
[26]
Xiao, T., Xia, T., Yang, Y., Huang, C., Wang, X.: Learning from massive noisy
labeled data for image classification. In: IEEE CVPR. pp. 2691–2699 (2015)
[27]
Yan, Y., Rosales, R., Fung, G., Subramanian, R., Dy, J.: Learning from multiple
annotators with varying expertise. Machine learning 95(3),
291–327 (2014)
[28]
Yu, X., Han, B., Yao, J., Niu, G., Tsang, I.W., Sugiyama, M.: How does
disagreement help generalization against label corruption? In: ICML. pp.
7164–7173 (2019)
[29]
Zhang, C., Bengio, S., Hardt, M., Recht, B., Vinyals, O.: Understanding deep
learning requires rethinking generalization. In: ICLR (2017)
[30]
Zhang, Z., Sabuncu, M.R.: Generalized cross entropy loss for training deep
neural networks with noisy labels. In: NIPS. pp. 8792–8802 (2018) |
Robust gene regulation:
Deterministic dynamics from asynchronous networks with delay
Konstantin Klemm
Stefan Bornholdt
Interdisciplinary Center for Bioinformatics, University of Leipzig
Kreuzstr. 7b, D-04103 Leipzig, Germany
(December 3, 2020)
Abstract
We compare asynchronous vs. synchronous update of discrete dynamical networks
and find that a simple time delay in the nodes may induce a reproducible
deterministic dynamics even in the case of asynchronous update in random order.
In particular we observe that the dynamics under synchronous parallel update
can be reproduced accurately under random asynchronous serial update for a
large class of networks. This mechanism points at a possible general principle
of how computation in gene regulation networks can be kept in a
quasi-deterministic “clockwork mode” in spite of the absence of a central clock.
A delay similar to the one occurring in gene regulation causes synchronization in the model.
Stability under asynchronous dynamics disfavors topologies containing loops,
comparing well with the observed strong suppression of loops in biological regulatory
networks.
PACS numbers: 87.16.Yc, 05.45.Xt, 89.75.Hc, 05.65.+b
Erwin Schrödinger in his lecture “What is life?” held in 1943 Schroedinger
was one of the first to notice that the information processing performed in the
living cell has to be extremely robust and therefore requires a quasi-deterministic
dynamics (which he called “clockwork mode”). The discovery of a “digital”
storage medium for the genetic information, the double-stranded DNA,
confirmed one important part of this picture.
Today, new experimental techniques make it possible to observe the dynamics of
regulatory genes in great detail, which motivates us to reconsider the other,
dynamical part of Schrödinger’s picture of a “clockwork mode”.
While the dynamical elements of gene regulation often are known in great detail,
the complex dynamical patterns of the vast network of interacting regulatory genes,
while highly reproducible between identical cells and organisms under similar
conditions, are largely not understood. Most remarkably, these virtually
deterministic activation patterns are often generated by asynchronous genetic
switches without any central clock. In this Letter we address this astonishing fact
with a toy model of gene regulation and study the conditions under which
deterministic dynamics can occur in asynchronous circuits.
Let us start from the observed dynamics of small circuits of regulatory genes,
then derive a discrete dynamical model gene, followed by a study of
networks of such genetic switches, with a focus on comparing their
asynchronous and synchronous dynamics.
Recently, several small gene regulation circuits have been described in terms of
a detailed picture of their dynamics Elowitz ; Hes1 ; Baltimore ; p53 ; Smolen .
A particularly simple motif is the single, self-regulating gene Rosenfeld ; Hes1
that allows for a detailed modeling of its dynamics.
A set of two differential equations, for the temporal evolution
of the concentrations of messenger RNA and protein, respectively,
and an explicit transmission time delay provide a quantitative
model for the observed dynamics in this minimal circuit Jensen03 .
The equations of this model take the basic form
$$\frac{{\rm d}c}{{\rm d}t}=\alpha[f(s(t-\vartheta))-c(t)]$$
(1)
$$\frac{{\rm d}b}{{\rm d}t}=\beta[c(t)-b(t)]$$
(2)
for the dynamics of the concentrations $c$ of mRNA and $b$ of protein,
with some non-linear transmission function $f(s)$ of an input signal $s$,
a time delay $\vartheta$, and the time constants $\alpha$ and $\beta$.
In order to define a minimal discrete gene model let us keep the basic
features (delay, low pass filter characteristics), omit the second filter,
and write the difference equation for one gene $i$ as
$$\Delta c_{i}=\alpha[f(s_{i}(t-\vartheta))-c_{i}(t)]\Delta t~{}.$$
(3)
The non-linear function $f$ is typically a steep sigmoid. We approximate it
as a step function $\Theta$ with $\Theta(s)=0$ for $s<0$ and $\Theta(s)=1$
otherwise. Rescaling time with $\epsilon=\alpha\Delta t$ and
$\tau=\vartheta/\Delta t$ this reads
$$\Delta c_{i}=\epsilon[\Theta(s_{i}(t-\tau))-c_{i}(t)]~{}.$$
(4)
For simplicity let us update $c_{i}$ by equidistant steps according to
$$\Delta c_{i}=\left\{\begin{array}[]{rl}+\epsilon,&{\rm if}\;s_{i}(t-\tau)\geq 0\;{\rm and}\;c_{i}\leq 1-\epsilon\\ -\epsilon,&{\rm if}\;s_{i}(t-\tau)<0\;{\rm and}\;c_{i}\geq\epsilon\\ 0,&{\rm otherwise}\end{array}\right.$$
(5)
The coupling between nodes is defined by
$$s_{i}(t)=\sum_{j}w_{ij}x_{j}(t)-a_{i}~{},$$
(6)
with discrete output states $x_{j}(t)$ of the nodes defined as
$$x_{j}(t)=\Theta(c_{j}(t)-1/2)~{}.$$
(7)
The influence of node $j$ on node $i$ can be activating ($w_{ij}=1$),
inhibitory ($w_{ij}=-1$), or absent ($w_{ij}=0$).
A constant bias $a_{i}$ is assigned to each node.
In the following let us consider a network model of such nodes. Consider $N$ nodes
with concentration variables $c_{i}$, state variables $x_{i}$, biases $a_{i}$ and a coupling
matrix $(w_{ij})$. Given initial values $x_{i}(0)=c_{i}(0)\in\{0,1\}$ the time-discrete
dynamics is obtained by iterating the following update steps:
(1) Choose a node $i$ at random.
(2) Calculate $s_{i}$ according to Eq. (6).
(3) Update $c_{i}$ according to Eq. (5).
For $\tau=0$ and $\epsilon=1$ random asynchronous update is recovered.
For $\tau>0$ there is an explicit transmission delay from the output of node $j$
to the input of node $i$. To be definite, at $t=0$ we assume that nodes have not
flipped during the previous $\tau$ time steps.
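The update rules of Eqs. (5)-(7) and the random serial scheme above can be condensed into a small simulation sketch (a toy illustration of the model, not the authors’ code; the function name and history representation are ours):

```python
import random

def simulate(w, a, x0, tau, eps, steps, seed=0):
    # Random asynchronous update of Eqs. (5)-(7): each micro time step
    # picks one node, evaluates its input sum from the outputs delayed
    # by tau steps, and moves its concentration c_i by +/- eps.
    rng = random.Random(seed)
    n = len(x0)
    c = [float(v) for v in x0]
    # per-node output history for the delayed inputs x_j(t - tau);
    # initially no node has flipped during the previous tau steps
    hist = [[x0[j]] * (tau + 1) for j in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)                                     # step (1)
        s = sum(w[i][j] * hist[j][0] for j in range(n)) - a[i]   # Eq. (6)
        if s >= 0 and c[i] <= 1 - eps:                           # Eq. (5)
            c[i] += eps
        elif s < 0 and c[i] >= eps:
            c[i] -= eps
        for j in range(n):                   # advance histories, Eq. (7)
            hist[j].pop(0)
            hist[j].append(1 if c[j] >= 0.5 else 0)
    return [1 if ci >= 0.5 else 0 for ci in c]
```

For example, a single self-activating node ($w_{00}=+1$, $a_{0}=0$) started in the off state switches on and stays on, independently of the update order.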
Let us first explore the dynamics of a simple but non-trivial interaction network
with $N=3$ sites and
non-vanishing couplings $w_{01}=w_{21}=-1$ and $w_{10}=w_{12}=+1$,
see Fig. 1. Note that under asynchronous
update the sequence of states reached by the dynamics is not unique.
The system may branch off to different configurations depending on
node update ordering. This is illustrated in Fig. 2(a):
Without delay ($\tau=0$) and filter ($\epsilon=1$) the dynamics is irregular,
i.e. non-periodic.
With filter only ($\tau=0$, $\epsilon=0.01$, Fig. 2(b)),
the dynamics is periodic at times, but also intervals of fast irregular flipping occur.
Finally, in the presence of delay ($\tau=100$, $\epsilon=1$, Fig. 2(c))
we obtain perfectly ordered dynamics with synchronization of flips. Nodes 0 and 2
change states practically at the same (macro) time, followed by a longer pause until
node 1 changes state, etc. With increasing delay time $\tau$ the dynamics under
asynchronous update approaches the dynamics under synchronous update
(cf. Fig. 1) when viewed on a coarse-grained (macro) time scale.
Let us further quantify the difference between synchronous and asynchronous
dynamics. First, a definition of equivalence between the two dynamical modes
has to be given. Let us start from the time series ${\bf x}(t)$ of configurations
${\bf x}=(x_{0},\dots,x_{N-1})$ produced by the asynchronous (random serial)
update of the model and the respective time series ${\bf y}(u)$ produced by
synchronous (parallel) update, using identical initial condition ${\bf y}(0)={\bf x}(0)$.
These time series live on different time scales, which we call the micro time scale
of single site updates in the asynchronous case, and the macro time scale where
each time step is an entire sweep of the system.
Assume that at time $t_{u}$ the asynchronous system is in state ${\bf x}(t_{u})={\bf y}(u)$.
In order to follow the synchronous update it has to subsequently reach the state
${\bf y}(u+1)$ on a shortest path in phase space. Formally, let us require that there is
a micro time $t_{u+1}>t_{u}$ such that ${\bf x}(t_{u+1})={\bf y}(u+1)$ and each node flips
at most once in the time interval $[t_{u},t_{u+1}]$. Once this is violated we say
that an error has occurred at the particular macro time step $u$.
This error allows us to define a numerical measure of discrepancy between
asynchronous and synchronous dynamics. Starting from identical initial conditions,
the system is iterated in synchronous and asynchronous modes
(here for $u_{\rm total}=10^{7}$ macro time steps). Whenever the resulting time
series are no longer equivalent, an error counter is incremented and the system
reset to initial condition. The total error $E$ of the run is the number of errors
divided by $u_{\rm total}$.
For the network in Fig. 1 and the initial condition $x_{i}=c_{i}=0$
for $i=0,1,2$ the error $E$ is exponentially suppressed with delay time $\tau$
(Fig. 3).
The asynchronous dynamics with delay follows the attractor during a time
span that increases exponentially with the given delay time. Note that there is only
one possibility for the asynchronous dynamics to leave the attractor: When the
system is in configuration $(1,1,0)$ or $(0,1,1)$, node $2$ may change state
such that the system goes to configuration $(1,0,0)$ or $(0,0,1)$ respectively,
whereas the correct next configuration on the attractor is $(0,1,0)$.
Consider the case $\epsilon=1$ where $c_{i}=x_{i}$ for all $i$.
Let us assume that the system is in configuration $(1,1,1)$ and at time
$t_{0}$ node 0 changes state, thereby generating configuration $(0,1,1)$.
This decreases the input sum $s_{1}$ below zero, such that for $\tau=0$
node $1$ would change state immediately in its next update. With explicit
transmission delay $\tau>0$, however, node 1 still “sees” the input sum $s_{1}=0$
generated by the configuration $(1,1,1)$ until time step $t_{0}+\tau$.
If node $2$ is chosen for update in this time window $t_{0}+1,\dots,t_{0}+\tau$,
it changes state immediately and the updates are performed in the correct order.
The opposite case, that node 2 does not receive an update in any of the
$\tau$ time steps, happens with probability $(2/3)^{\tau}$,
yielding the correct error decay of the simulation (Fig. 3).
Next we demonstrate that there are cases where also low-pass filtering,
$\epsilon\ll 1$, is needed for the asynchronous dynamics to follow the
deterministic attractor. Consider a network of
$N=5$ nodes with bias values $a_{0}=a_{4}=0$ and $a_{1}=a_{2}=a_{3}=1$.
The only non-zero couplings are
$w_{10}=w_{21}=w_{31}=w_{42}=+1$ and $w_{01}=w_{43}=-1$.
Nodes 0 and 1 form an oscillator, i.e. $(x_{0},x_{1})$ iterate the sequence
$(0,0)$, $(1,0)$, $(1,1)$, $(0,1)$. Nodes $2$ and $3$ simply “copy” the
state of node $1$ such that under synchronous update always
$x_{3}(t)=x_{2}(t)=x_{1}(t-1)$.
Consequently, under synchronous update the input sum of node $4$ never
changes because the positive contribution from node 2 and the negative
contribution from node 3 cancel out. Under asynchronous update, however,
the input sum of node 4 may fluctuate because nodes 2 and 3 do not flip
precisely at the same time. The effect of the low-pass filter $\epsilon\ll 1$
is to suppress the spreading of such fluctuations on the micro time scale.
The influence of the filter is seen in Fig. 4.
When $\tau$ is kept constant, the error drops algebraically with decreasing
$\epsilon$. An exponential decay $E\sim\exp(-\alpha/\epsilon)$ is obtained
when $\tau\propto 1/\epsilon$ (the filter can take full effect only in the
presence of sufficient delay).
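These synchronous properties of the 5-node example can be checked directly. The sketch below assumes the threshold rule $x_i(t+1)=1$ if $\sum_j w_{ij}x_j(t)\ge a_i$, else $0$ (an assumption that reproduces the oscillator sequence quoted above), and verifies that the input sum of node 4 never changes under synchronous update:

```python
# Synchronous update of the 5-node example network, assuming the
# threshold rule x_i(t+1) = 1 if sum_j w_ij x_j(t) >= a_i else 0.
N = 5
a = [0, 1, 1, 1, 0]
w = {(1, 0): +1, (2, 1): +1, (3, 1): +1, (4, 2): +1, (0, 1): -1, (4, 3): -1}

def step(x):
    s = [sum(w.get((i, j), 0) * x[j] for j in range(N)) for i in range(N)]
    return [1 if s[i] >= a[i] else 0 for i in range(N)], s

x = [0, 0, 0, 0, 0]
history = [x]
for _ in range(12):
    x, s = step(x)
    history.append(x)
    assert s[4] == 0                       # contributions of nodes 2, 3 cancel
    assert x[2] == x[3] == history[-2][1]  # nodes 2, 3 copy node 1 with lag 1
```

Under asynchronous update the cancellation in $s_{4}$ is no longer exact at micro time, which is precisely what the low-pass filter suppresses.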
Let us finally consider an example of a larger network with $N=16$ nodes
and $L=48$ non-vanishing couplings (chosen randomly from the off-diagonal
elements in the matrix $(w_{ij})$ and assigned values $+1$ or $-1$ with
probability $1/2$ each; biases are chosen as $a_{i}=\sum_{j}w_{ij}/2$).
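The construction of such a random network can be sketched as follows (illustrative only: the seed, and hence the couplings, differ from the particular network behind Fig. 5):

```python
import random

random.seed(1)                          # arbitrary seed (assumption)
N, L = 16, 48
# choose L off-diagonal positions for the non-vanishing couplings
positions = [(i, j) for i in range(N) for j in range(N) if i != j]
chosen = random.sample(positions, L)
w = [[0] * N for _ in range(N)]
for (i, j) in chosen:
    w[i][j] = random.choice([+1, -1])   # +1 or -1 with probability 1/2 each
a = [sum(w[i]) / 2 for i in range(N)]   # biases a_i = sum_j w_ij / 2
```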
Simulation runs under pure asynchronous update ($\tau=0$, $\epsilon=1$)
typically yield dynamics as in Fig. 5(a).
The time series ${\bf x}(t)$ is non-periodic
and non-reproducible, i.e. under a different order of updates a different
series is obtained. For the same initial condition, periodic dynamics is
observed in the presence of sufficient transmission delay and filtering,
Fig. 5(b). In this case, the system follows precisely the
attractor of period 28 found under synchronous update.
As seen in Fig. 5(c), the error decays exponentially as a
function of the delay time $\tau$.
Let us now turn to the dangers of asynchronous update: There is
a fraction of attractors observed under synchronous update that
cannot be realized under asynchronous update. Synchronization
cannot be sustained if the dynamics is separable. In the
trivial case, separability means that the set of nodes can be
divided into two subsets that do not interact with each other.
Then there is no signal to synchronize one set of nodes with
the other and they will go out of phase. In general, synchronization
is impossible if the set of flips itself is separable.
Consider, as the simplest example, a network of $N=2$ nodes
with the couplings $w_{01}=w_{10}=+1$, biases $a_{0}=a_{1}=1$ and the initial condition $(y_{0}(0),y_{1}(0))=(0,1)$. Under
synchronous update, the state alternates between vector $(0,1)$
and $(1,0)$. Under asynchronous update with delay time $\tau$,
the transition of one node $i$ from $x_{i}=0$ to $x_{i}=1$ causes
the other node $j$ to switch from $x_{j}=0$ to $x_{j}=1$
approximately $\tau$ time steps later. The “on”-transitions
only trigger subsequent “on”-transitions and, analogously,
the “off”-transitions only trigger subsequent “off”-transitions.
The dynamics can be divided into two distinct sets of events that
do not influence each other. Consequently, synchronization
between flips cannot be sustained, as illustrated
in Fig. 6.
When the phase difference reaches the value $\tau$, on- and
off-transitions annihilate. Then the system leaves the
attractor and reaches one of the fixed points with $x_{0}=x_{1}$.
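Assuming a threshold rule $x_i(t+1)=1$ if $\sum_j w_{ij}x_j(t)\ge a_i$ (a sketch, not necessarily the authors' exact update), the synchronous period-2 attractor and the absorbing fixed points of this two-node network can be checked in a few lines:

```python
# Hypothetical threshold rule: x_i(t+1) = 1 if sum_j w_ij x_j(t) >= a_i else 0.
w = {(0, 1): +1, (1, 0): +1}
a = [1, 1]

def step(x):
    return tuple(1 if sum(w.get((i, j), 0) * x[j] for j in range(2)) >= a[i] else 0
                 for i in range(2))

assert step((0, 1)) == (1, 0) and step((1, 0)) == (0, 1)  # period-2 attractor
assert step((0, 0)) == (0, 0) and step((1, 1)) == (1, 1)  # fixed points with x_0 = x_1
```

Once on- and off-transitions annihilate under asynchronous update, the system is captured by one of the two fixed points and the oscillation is lost for good.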
These observations have important implications for robust
topological motifs in asynchronous networks. First of all,
the above example of a small excitatory loop can be quickly
generalized to any larger loop with excitatory interactions,
as well as to loops with an even number of inhibitory couplings,
where in principle similar dynamics could occur. Higher order
structures that fail to synchronize include competing modules,
e.g. two oscillators (loops with odd number of inhibitory links)
that interact with a common target.
In conclusion we find that asynchronously updated
networks of autonomous dynamical nodes are able to
exhibit a reproducible and quasi-deterministic dynamics
under broad conditions if the nodes have transmission
delay and low pass filtering as, e.g., observed in regulatory genes.
Timing requirements put constraints on the topology of
the networks (e.g. suppression of certain loop motifs).
With respect to biological gene regulation networks
where indeed strong suppression of loop structures is observed
[9, 10], one may thus speculate about a
new constraint on topological motifs of gene regulation:
The requirement for deterministic dynamics from
asynchronous dynamical networks.
Acknowledgements
S.B. thanks D. Chklovskii, M.H. Jensen, S. Maslov, and K. Sneppen for
discussions and comments, and the Aspen Center for Physics for
hospitality where part of this work has been done.
References
(1)
E. Schrödinger, What is Life? The Physical Aspect of the Living Cell,
Cambridge: University Press (1948).
(2)
H. Hirata et al., Science 298, 840 (2002).
(3)
M.B. Elowitz and S. Leibler, Nature 403, 335 (2000).
(4)
A. Hoffmann, A. Levchenko, M.L. Scott, and D. Baltimore,
Science 298, 1241-1245 (2002).
(5)
G. Tiana, M.H. Jensen, and K. Sneppen,
Eur. Phys. J. B 29, 135-140 (2002).
(6)
P. Smolen, D. A. Baxter, J. H. Byrne,
Bull. Math. Biol. 62, 247 (2000).
(7)
N. Rosenfeld, M. B. Elowitz, U. Alon,
J. Mol. Biol. 323, 785 (2002).
(8)
M.H. Jensen, K. Sneppen, G. Tiana, FEBS Letters 541, 176-177 (2003).
(9)
S.S. Shen-Orr, R. Milo, S. Mangan, and U. Alon, Nature Genetics 31, 64-68 (2002).
(10)
R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon,
Science 298, 824-827 (2002). |
A NOTE ON THE QUANTUM–MECHANICAL RICCI FLOW
José M. Isidro${}^{a,b,1}$, J.L.G. Santander${}^{a,2}$ and P. Fernández de Córdoba${}^{a,3}$
${}^{a}$Grupo de Modelización Interdisciplinar, Instituto de Matemática Pura y Aplicada,
Universidad Politécnica de Valencia, Valencia 46022, Spain
${}^{b}$Max–Planck–Institut für Gravitationsphysik, Albert–Einstein–Institut,
D–14476 Golm, Germany
${}^{1}$joissan@mat.upv.es, ${}^{2}$jlgonzalez@mat.upv.es,
${}^{3}$pfernandez@mat.upv.es
Abstract We obtain Schroedinger quantum mechanics from Perelman’s functional and from the Ricci flow equations of a conformally flat Riemannian metric on a closed 2–dimensional configuration space. We explore links with the recently discussed emergent quantum mechanics.
1 Introduction
The Ricci flow has provided many far–reaching insights into long–standing problems in topology and geometry (for an introduction to the Ricci flow and its applications see, e.g., ref. [22]). Perhaps more surprising is the fact that it also has interesting applications in physics, one of its first occurrences being the low–energy effective action of string theory. Recent works [4, 6] have shed light on applications of the Ricci flow to foundational issues in quantum mechanics (see also refs. [2, 3, 7, 8, 9, 10]). In this paper we will establish a 1–to–1 relation between conformally flat metrics on configuration space, and quantum mechanics on the same space (see refs. [1, 5] for the role of conformal symmetry in quantisation). This will be used to prove that Schroedinger quantum mechanics in two space dimensions arises from Perelman’s functional on a compact Riemann surface. Ricci–flow techniques will also be useful to analyse yet another application beyond the 2–dimensional case. Namely, certain issues in quantum mechanics and quantum gravity that go by the name of emergent quantum mechanics [12, 13, 14, 15, 16, 20, 21], to be made precise in section 5.
2 A conformally flat space
Let us consider a 2–dimensional, closed Riemannian manifold $\mathbb{M}$. In isothermal coordinates $x,y$ the metric reads
$$g_{ij}={\rm e}^{-f}\delta_{ij},$$
(1)
where $f=f(x,y)$ is a function, hereafter referred to as the conformal factor. The volume element on $\mathbb{M}$ equals (our conventions are $g=|\det g_{ij}|$ and $R_{im}=g^{-1/2}\partial_{n}\left(\Gamma_{im}^{n}g^{1/2}\right)-\partial_{i}\partial_{m}\left(\ln g^{1/2}\right)-\Gamma_{is}^{r}\Gamma_{mr}^{s}$ for the Ricci tensor, $\Gamma_{ij}^{m}=g^{mh}\left(\partial_{i}g_{jh}+\partial_{j}g_{hi}-\partial_{h}g_{ij}\right)/2$ being the Christoffel symbols)
$$\sqrt{g}\,{\rm d}x{\rm d}y={\rm e}^{-f}{\rm d}x{\rm d}y.$$
(2)
Given an arbitrary function $\varphi(x,y)$ on $\mathbb{M}$, we have the following expressions for the Laplacian $\nabla^{2}\varphi$ and the squared gradient $\left(\nabla\varphi\right)^{2}$:
$$\nabla^{2}\varphi:=\frac{1}{\sqrt{g}}\partial_{m}\left(\sqrt{g}g^{mn}\partial_{n}\varphi\right)={\rm e}^{f}\left(\partial_{x}^{2}\varphi+\partial_{y}^{2}\varphi\right)=:{\rm e}^{f}D^{2}\varphi,$$
(3)
$$\left(\nabla\varphi\right)^{2}:=g^{mn}\partial_{m}\varphi\partial_{n}\varphi={\rm e}^{f}\left[\left(\partial_{x}\varphi\right)^{2}+\left(\partial_{y}\varphi\right)^{2}\right]=:{\rm e}^{f}\left(D\varphi\right)^{2},$$
(4)
where $D^{2}\varphi$ and $\left(D\varphi\right)^{2}$ stand for the flat–space values of the Laplacian and the squared gradient, respectively. The Ricci tensor reads
$$R_{ij}=\frac{1}{2}D^{2}f\,\delta_{ij}=\frac{1}{2}{\rm e}^{-f}\nabla^{2}f\,\delta_{ij}.$$
(5)
From here we obtain the Ricci scalar
$$R={\rm e}^{f}D^{2}f=\nabla^{2}f.$$
(6)
A manifold $\mathbb{M}$ such as that considered here is in fact a compact Riemann surface without boundary.
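Eqns. (5) and (6) can be verified symbolically. The following sketch (assuming SymPy is available, and using the sample conformal factor $f=x^{2}+y^{2}$) computes the Ricci scalar of the metric (1) from the Christoffel symbols and checks $R=\nabla^{2}f={\rm e}^{f}D^{2}f$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2                       # sample conformal factor (an assumption)
coords = [x, y]
g = sp.diag(sp.exp(-f), sp.exp(-f))   # metric (1): g_ij = e^{-f} delta_ij
ginv = g.inv()

# Christoffel symbols: Gamma^m_ij = g^{mh}(d_i g_jh + d_j g_hi - d_h g_ij)/2
def Gamma(m, i, j):
    return sum(ginv[m, h] * (sp.diff(g[j, h], coords[i])
                             + sp.diff(g[h, i], coords[j])
                             - sp.diff(g[i, j], coords[h])) / 2
               for h in range(2))

# Ricci tensor: R_ij = d_m Gamma^m_ij - d_j Gamma^m_im
#                      + Gamma^m_mn Gamma^n_ij - Gamma^m_jn Gamma^n_im
def Ricci(i, j):
    expr = sp.S(0)
    for m in range(2):
        expr += sp.diff(Gamma(m, i, j), coords[m]) - sp.diff(Gamma(m, i, m), coords[j])
        for n in range(2):
            expr += Gamma(m, m, n) * Gamma(n, i, j) - Gamma(m, j, n) * Gamma(n, i, m)
    return sp.simplify(expr)

R = sp.simplify(sum(ginv[i, j] * Ricci(i, j) for i in range(2) for j in range(2)))
D2f = sp.diff(f, x, 2) + sp.diff(f, y, 2)
assert sp.simplify(R - sp.exp(f) * D2f) == 0   # eq. (6): R = e^f D^2 f
```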
3 Perelman’s functional
For an introduction to the Ricci flow and its applications, a good reference is [22]. Perelman’s functional ${\cal F}[\varphi,g_{ij}]$ on the manifold $\mathbb{M}$ is defined as
$${\cal F}[\varphi,g_{ij}]:=\int_{\mathbb{M}}{\rm d}x{\rm d}y\sqrt{g}\,{\rm e}^{-\varphi}\left[\left(\nabla\varphi\right)^{2}+R(g_{ij})\right],$$
(7)
where $g_{ij}$ is a metric on $\mathbb{M}$ and $\varphi$ a real function on $\mathbb{M}$. We will take the above equation as our starting point. Physically it may be regarded as providing an action functional, on configuration space $\mathbb{M}$, for the two independent fields $g_{ij}$ and $\varphi$. Some aspects of the functional ${\cal F}[\varphi,g_{ij}]$ are worth mentioning. Setting $\varphi=0$ identically we recover the Einstein–Hilbert functional for gravity on $\mathbb{M}$. Admittedly, Einstein–Hilbert gravity is trivial in $n=2$ dimensions, being a boundary term. However, the generalisation of 2–dimensional gravity provided by the functional ${\cal F}[\varphi,g_{ij}]$ when $\varphi\neq 0$ is interesting. Indeed, Perelman’s functional arises in string theory as the low–energy effective action of the bosonic string [11].
We first compute the Euler–Lagrange extremals corresponding to the fields $g_{ij}$ and $\varphi$. Next we set the equations of motion so obtained equal to the first–order time derivatives of $g_{ij}$ and $\varphi$, respectively. This results in the two evolution equations
$$\frac{\partial g_{ij}}{\partial t}=-2\left(R_{ij}+\nabla_{i}\nabla_{j}\varphi\right),\quad\frac{\partial\varphi}{\partial t}=-\nabla^{2}\varphi-R.$$
(8)
We stress that the right–hand sides of (8), once equated to zero, are the Euler–Lagrange equations of motion corresponding to (7), and that the time derivatives on the left–hand sides have been put in by hand. In fact time is not a coordinate on configuration space $\mathbb{M}$, but an external parameter. The two equations (8) are referred to as the gradient flow of ${\cal F}$. Via a time–dependent diffeomorphism, one can show that the set (8) is equivalent to
$$\frac{\partial g_{ij}}{\partial t}=-2R_{ij},\quad\frac{\partial\varphi}{\partial t}=-\nabla^{2}\varphi+\left(\nabla\varphi\right)^{2}-R.$$
(9)
We will use (9) rather than (8).
As already remarked, one advantage of having a 2–dimensional configuration space is that all metrics on it are conformal, so we can substitute (1) throughout. By (2) and (6), we can express ${\cal F}[\varphi,g_{ij}]$ as
$${\cal F}[\varphi,f]:={\cal F}[\varphi,g_{ij}(f)]=\int_{\mathbb{M}}{\rm d}x{\rm d}y\,{\rm e}^{-\varphi-f}\left[\left(\nabla\varphi\right)^{2}+\nabla^{2}f\right].$$
(10)
In order to understand the physical meaning of the flow eqns. (9), let us analyse them in more detail. Using (1) and (5) we see that the first flow equation,
$$\frac{\partial g_{ij}}{\partial t}=-2R_{ij},$$
(11)
is equivalent to
$$\frac{\partial f}{\partial t}=\nabla^{2}f.$$
(12)
This is the usual heat equation, with the important difference that the Laplacian operator $\nabla^{2}$ is given by (3): indeed $\mathbb{M}$ is not flat but only conformally flat. So conformal metrics on the (curved) manifold $\mathbb{M}$ evolve in time according to the heat equation with respect to the corresponding (curved) Laplacian. The second flow equation in (9) will be the subject of our attention in what follows.
So far, the conformal factor $f$ and the scalar $\varphi$ have been considered as independent fields. Setting now $\varphi=f$ in (10) we obtain
$${\cal F}[f]:={\cal F}[\varphi=f,f]=\int_{\mathbb{M}}{\rm d}x{\rm d}y\,{\rm e}^{-2f}\left[\left(\nabla f\right)^{2}+\nabla^{2}f\right].$$
(13)
After setting $\varphi=f$ we appear to have a contradiction, since we have two different flow equations in (9) for just one field $f$.
That there is in fact no contradiction can be seen as follows. In (9) we have two independent flow equations for the two independent fields $f$ and $\varphi$. Equating the latter two fields implies that the two flow equations must reduce to just one. This can be achieved by substituting one of the two flow eqns. (9) into the remaining one. By (6) and (12) we have $R=\partial f/\partial t$, which substituted into the second flow equation of (9) leads to
$$\frac{\partial f}{\partial t}=\frac{1}{2}\left(\nabla f\right)^{2}-\frac{1}{2}\nabla^{2}f.$$
(14)
We will later on find it useful to distinguish notationally between the time–independent conformal factor $f$, as it stands in the functional (13), and the time–dependent conformal factor as it stands in the flow equation (14). We therefore rewrite (14) as
$$\frac{\partial\tilde{f}}{\partial t}=\frac{1}{2}\left(\nabla\tilde{f}\right)^{2}-\frac{1}{2}\nabla^{2}\tilde{f},$$
(15)
where a tilde on top of a field indicates that it is a time–dependent quantity.
4 The dictionary
In what follows we will regard the manifold $\mathbb{M}$ as the configuration space of a mechanical system, to be identified presently. We will establish a 1–to–1 correspondence between conformally flat metrics on configuration space $\mathbb{M}$, and (classical or quantum) mechanical systems on $\mathbb{M}$. Let us consider classical mechanics in the first place. We recall that, for a point particle of mass $m$ subject to a time–independent potential $U$, the Hamilton–Jacobi equation for the time–dependent action $\tilde{S}$ reads
$$\frac{\partial\tilde{S}}{\partial t}+\frac{1}{2m}\left(\nabla\tilde{S}\right)^{2}+U=0.$$
(16)
It is well known that, separating the time variable as per
$$\tilde{S}=S-Et,$$
(17)
with $S$ the so–called reduced action, one obtains
$$\frac{1}{2m}\left(\nabla S\right)^{2}+U=E.$$
(18)
Eqn. (17) suggests separating variables in (15) as per
$$\tilde{f}=f+Et,$$
(19)
where the sign of the time variable is reversed with respect to (17). (This time reversal is imposed on us by the time–flow eqn. (15), with respect to which time is reversed in the mechanical model; it is just a rewording of part of section 6.4 of ref. [22], where a corresponding heat flow is run backwards in time.) Substituting (19) into (15) leads to
$$\frac{1}{2}\left(\nabla f\right)^{2}-\frac{1}{2}\nabla^{2}f=E.$$
(20)
Comparing (20) with (18) we conclude that, picking a value of the mass $m=1$, the following identifications can be made:
$$S=f,\qquad U=-\frac{1}{2}\nabla^{2}f=-\frac{1}{2}R.$$
(21)
So the potential $U$ is proportional to the scalar Ricci curvature of the configuration space $\mathbb{M}$, while the reduced action $S$ equals the conformal factor $f$. This concludes the first half of our dictionary: to construct a classical mechanics starting from a conformal metric on $\mathbb{M}$.
Conversely, if we are given a classical mechanics as determined by an arbitrary potential function $U$ on $\mathbb{M}$, and we are required to construct a conformal metric on $\mathbb{M}$, then the solution is given by the function $f$ satisfying the Poisson equation $-2U=\nabla^{2}f$, where the Laplacian is computed with respect to the unknown function $f$.
Although we have so far considered the classical mechanics associated with a given conformal factor, one can immediately construct the corresponding quantum mechanics, by means of the Schroedinger equation for the potential $U$. We can therefore restate our result as follows: we have established a 1–to–1 relation between conformally flat metrics on configuration space, and quantum–mechanical systems on that same space.
5 Discussion
Let us summarise our results. We have considered a conformally flat Riemannian metric on a closed 2–dimensional manifold $\mathbb{M}$, and regarded the latter as the configuration space of a classical mechanical system. We have formulated a dictionary between such conformal metrics, on the one hand, and quantum mechanics on the same space, on the other. This dictionary has a nice geometrical interpretation: the reduced mechanical action $S$ equals the conformal factor $f$, and the potential function $U$ is (proportional to) the Ricci curvature of $\mathbb{M}$. It is interesting to observe that the Ricci scalar as a potential function has also arisen in [17, 18, 19].
The previous correspondence can be exploited further: we will exchange a conformally flat metric for a wavefunction satisfying the Schroedinger equation for the potential $U$. We first observe that the Schroedinger equation itself can be obtained as the extremal of the action functional
$${\cal S}[\psi,\psi^{*}]:=\int_{\mathbb{M}}{\rm d}x{\rm d}y\sqrt{g}\left({\rm i}\psi^{*}\frac{\partial\psi}{\partial t}-\frac{1}{2m}\nabla\psi^{*}\nabla\psi-U\psi^{*}\psi\right).$$
(22)
Pick $m=1$ as before, and substitute
$$\psi={\rm e}^{{\rm i}\tilde{f}}$$
(23)
into (22) to obtain $-\partial_{t}\tilde{f}-\frac{1}{2}(\nabla\tilde{f})^{2}+\frac{1}{2}\nabla^{2}\tilde{f}$ within the integrand. As before, let us consider the stationary case, where $\partial_{t}\tilde{f}=0$ and $\tilde{f}$ becomes $f$. Then (22) turns into
$${\cal S}[f]:={\cal S}\left[\psi={\rm e}^{{\rm i}f}\right]=\frac{1}{2}\int_{\mathbb{M}}{\rm d}x{\rm d}y\,{\rm e}^{-f}\left[-(\nabla f)^{2}+\nabla^{2}f\right].$$
(24)
Comparing the functionals (13) and (24) we arrive at the following interesting relation:
$${\cal F}\left[f/2\right]={\cal S}[f]+\frac{3}{2}{\cal K}[f],$$
(25)
where
$${\cal K}[f]:=\frac{1}{2}\int_{\mathbb{M}}{\rm d}x{\rm d}y\,{\rm e}^{-f}\left(\nabla f\right)^{2}$$
(26)
is the kinetic energy functional on $\mathbb{M}$. In ${\cal F}\left[f/2\right]$ above, the Laplacian $\nabla^{2}f$ and the squared gradient $(\nabla f)^{2}$ are computed with respect to the conformal factor $f$, even though the functional ${\cal F}$ is evaluated on $f/2$. Thus, on a compact Riemann surface without boundary, the Schroedinger functional ${\cal S}[f]$ turns out to be a close cousin of the Perelman functional ${\cal F}[f/2]$. Altogether we have proved that Schroedinger quantum mechanics on a 2–dimensional, compact configuration space arises from Perelman’s functional.
Recent works [2, 4, 5, 6, 7, 8, 9] have shed light on applications of conformal symmetry and the Ricci flow to foundational issues in quantum mechanics. In this paper we have established a 1–to–1 correspondence between conformally flat metrics on configuration space, and quantum mechanics on that same space. This correspondence has been used to prove that Schroedinger quantum mechanics in two space dimensions arises from Perelman’s functional on a compact Riemann surface.
We have worked in the 2–dimensional case for simplicity. Now Perelman’s functional arises in string theory as the low–energy effective action of the bosonic string [11]. In view of these facts it is very tempting to try and interpret quantum mechanics itself, in any number of dimensions, possibly also noncompact, as some kind of effective, low–energy approximation to some more fundamental theory. Related ideas have been put forward in the literature [12, 13, 20, 21], where standard quantum mechanics has been argued to emerge from an underlying deterministic theory. Basically, in emergent quantum mechanics, one starts from a deterministic model and, via some dissipative mechanism that implements information loss, one ends up with a probabilistic theory. Several mechanisms implementing information loss have been proposed in the literature. Thus, in ref. [20], dissipation is effected by an attractor on phase space, which produces a lock–in of classical trajectories around some fixed point; instead, in ref. [13] dissipation arises as a coarse–graining of classical information via a probability distribution function on phase space. A somewhat different dissipative mechanism, based on the Ricci flow equation (11), has been put forward in ref. [16].
Some features of emergent quantum mechanics are present in our picture. Most notable among them is the presence of dissipation, or information loss: as remarked above, this underlies the passage from a classical description to a quantum description. Indeed, in our setup, the classical description is provided by the conformal factor $f$ of the metric $g_{ij}$ on configuration space, while the quantum description is given by the wavefunction $\psi={\rm e}^{{\rm i}f}$. The latter contains less information than the former, as there exist different conformal factors $f$ giving rise to just one quantum wavefunction $\psi$. This situation is analogous to that described in [20, 21], in which quantum states arise as equivalence classes of classical states: different classical states may fall into one and the same quantum equivalence class. Beyond the trivial case of any two conformal factors $f_{1}$ and $f_{2}$ differing by $2\pi$ times an integer, there is the more interesting case of $f_{1}$ and $f_{2}$ satisfying $\nabla^{2}_{1}f_{1}=\nabla^{2}_{2}f_{2}$, where the subindices $1,2$ refer to the fact that the corresponding Laplacians are computed with respect to the conformal factors $f_{1}$ and $f_{2}$, respectively. If $\mathbb{M}$ is such that the Laplace–like equation $\nabla^{2}_{1}f_{1}-\nabla^{2}_{2}f_{2}=0$ admits nontrivial solutions, then any two such $f_{1}$ and $f_{2}$ (different classical states) fall into the same quantum state, as both $f_{1}$ and $f_{2}$ give rise to the same potential function $-2U=\nabla^{2}_{1}f_{1}=\nabla^{2}_{2}f_{2}$.
Another feature of emergent quantum mechanics that is present in our picture is the following. In refs. [12, 20, 21] it has been established that to every quantum system there corresponds at least one deterministic system which, upon prequantisation, gives back the original quantum system. In our setup this existence theorem is realised alternatively as follows. Let a quantum system possessing the potential function $V$ be given on the configuration space $\mathbb{M}$, the latter satisfying the same requirements as above. Let us consider the Poisson equation on $\mathbb{M}$, $\nabla^{2}_{V}f_{V}=-2V$, where $f_{V}$ is some unknown conformal factor, to be determined as the solution to this Poisson equation, and $\nabla^{2}_{V}$ is the corresponding Laplacian. We claim that the deterministic system, the prequantisation of which gives back the original quantum system with the potential function $V$, is described by the following data: configuration space $\mathbb{M}$, with classical states being conformal factors $f_{V}$ and mechanics described by the action functional (13). The lock–in mechanism (in the terminology of refs. [20, 21]) is the choice of one particular conformal factor, with respect to which the Laplacian is computed, out of all possible solutions to the Poisson equation on $\mathbb{M}$, $\nabla^{2}_{V}f_{V}=-2V$. The problem thus becomes topological–geometrical in nature, as the lock–in mechanism has been translated into a problem concerning the geometry and topology of configuration space $\mathbb{M}$, namely, whether or not the Poisson equation possesses solutions on $\mathbb{M}$, and how many.
Acknowledgements J.M.I. is pleased to thank Max–Planck–Institut für Gravitationsphysik, Albert–Einstein–Institut (Potsdam, Germany) for hospitality extended over a long period of time.—Irrtum verlässt uns nie, doch ziehet ein höher Bedürfnis immer den strebenden Geist leise zur Wahrheit hinan. Goethe.
References
[1]
L. Anderson and J. Wheeler, Quantum mechanics as a measurement theory on biconformal space, Int. J. Geom. Meth. Mod. Phys. 3 (2006) 315, arXiv:hep-th/0406159.
[2]
G. Bertoldi, A. Faraggi and M. Matone, Equivalence principle, higher dimensional Mobius group and the hidden antisymmetric tensor of quantum mechanics,
Class. Quant. Grav. 17 (2000) 3965, arXiv:hep-th/9909201.
[3]
R. Carroll, Fluctuations, gravity, and the quantum potential, arXiv:gr-qc/0501045.
[4]
R. Carroll, Some remarks on Ricci flow and the quantum potential, arXiv:math-ph/0703065.
[5]
R. Carroll, Remarks on Weyl geometry and quantum mechanics, arXiv:0705.3921 [gr-qc].
[6]
R. Carroll, Ricci flow and quantum theory, arXiv:0710.4351 [math-ph].
[7]
R. Carroll, Remarks on the Friedman equations, arXiv:0712.3251 [math-ph].
[8]
R. Carroll, More on the quantum potential, arXiv:0807.1320 [math-ph].
[9]
R. Carroll, Remarks on Fisher information, arXiv:0807.4158 [math-ph].
[10]
R. Carroll, Fluctuations, Information, Gravity, and the Quantum Potential, Springer, Berlin (2006).
[11]
See, e.g., Quantum Fields and Strings: a Course for Mathematicians, vol. 2, P. Deligne et al., eds., American Mathematical Society, Providence (1999).
[12]
H.-T. Elze, Note on the existence theorem in “Emergent quantum mechanics and emergent symmetries”, arXiv:0710.2765 [quant-ph].
[13]
H.-T. Elze, The attractor and the quantum states, arXiv:0806.3408 [quant-ph].
[14]
H.-T. Elze, Gauge symmetry of the third kind and quantum mechanics as an infrared phenomenon, in Sense of Beauty in Physics: Proceedings, M. D’Elia et al. eds., Pisa University Press, Pisa (2006), arXiv:quant-ph/0604142.
[15]
H.-T. Elze, The gauge symmetry of the third kind and quantum mechanics as an infrared limit, Int. J. Quant. Inf. 5 (2007) 215, arXiv:hep-th/0605154.
[16]
J.M. Isidro, J.L.G. Santander and P. Fernández de Córdoba, Ricci flow, quantum mechanics and gravity,
arXiv:0808.2351 [hep-th].
[17]
B. Koch, Geometrical interpretation of the quantum Klein–Gordon equation, arXiv:0801.4635 [quant-ph].
[18]
B. Koch, Relativistic Bohmian mechanics from scalar gravity, arXiv:0810.2786 [hep-th].
[19]
B. Koch, A geometrical dual to relativistic Bohmian mechanics - the multi particle case, arXiv:0901.4106 [gr-qc].
[20]
G. ’t Hooft, The mathematical basis for deterministic quantum mechanics, arXiv:quant-ph/0604008.
[21]
G. ’t Hooft, Emergent quantum mechanics and emergent symmetries, arXiv:0707.4568 [hep-th].
[22]
P. Topping, Lectures on the Ricci Flow, London Mathematical Society Lecture Notes Series 325, Cambridge University Press (2006). |
Quantization of Acoustic Model Parameters in Automatic Speech Recognition Framework
Abstract
Robust automatic speech recognition (ASR) systems exploit state-of-the-art deep neural network (DNN) based acoustic models (AMs) trained with the Lattice-Free Maximum Mutual Information (LF-MMI) criterion and n-gram language models. These systems are quite large and require significant parameter reduction to operate on embedded devices.
The impact of parameter quantization on the overall word recognition performance is studied in this paper. The following three approaches are presented: (i) an AM trained in the Kaldi framework with the conventional factorized TDNN (TDNN-F) architecture; (ii) the TDNN built in Kaldi loaded into the PyTorch toolkit using a C++ wrapper, with the weights and activation parameters quantized and inference performed in PyTorch; (iii) post-quantization training for fine-tuning. Results obtained on the standard Librispeech setup provide an interesting overview of recognition accuracy w.r.t. the applied quantization scheme.
Amrutha Prasad${}^{1,2}$, Petr Motlicek${}^{1}$, Srikanth Madikeri${}^{1}$
${}^{1}$Idiap Research Institute, Martigny Switzerland
${}^{2}$Unidistance, Distance Learning University, Switzerland
{aprasad, petr.motlicek, srikanth.madikeri}@idiap.ch
Index Terms: speech recognition, parameter reduction, quantization.
1 Introduction
Deep Neural Networks (DNNs) help learn multiple levels of representation of data in order to model complex relationships among them. Conventional acoustic models (AMs) used in an ASR framework are trained with neural network architectures such as Convolutional Neural Networks (CNNs) [1], Recurrent Neural Networks (RNNs) [2], or Time-Delay Neural Networks (TDNNs) [3] with the Kaldi [4] toolkit.
Such models often have millions of parameters, making them impractical to use with
embedded devices such as the Raspberry Pi.
To embed ASR system on such devices, the footprint of the ASR system needs to be significantly reduced.
One simple solution is to train a model with fewer parameters.
However, reducing the model size usually decreases the performance of the system.
Previous research has shown several possible alternative approaches. Quantizing the model parameters
from floating point values to integers is one popular approach.
In [5], quantization methods are studied for CNN architectures in image classification and other computer vision problems. Results show that quantizing such models reduces the model size significantly without any impact on the performance.
Another approach is to use teacher-student training to first train a larger model that is optimized for performance and use its output to train a smaller model.
Alternately, in models such as [11] parameter reduction is integrated as a part of training.
In this paper, we study the effect of quantizing the parameters of an AM used in ASR with a focus on deploying it on embedded devices with low computational resources (especially, memory). We present the impact on the performance of the ASR system when the AM is quantized from float32 to int8 or int16. The results of the quantization process are then compared to other techniques used in parameter reduction for automatic speech recognition models.
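A minimal, framework-independent sketch of the float32 $\to$ int8 step (symmetric per-tensor quantization in NumPy; the experiments themselves rely on PyTorch's quantization utilities, and the layer shape below is purely illustrative):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~ scale * q, q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# A stand-in weight matrix of a TDNN layer (hypothetical shape).
rng = np.random.default_rng(0)
w = rng.normal(size=(512, 40)).astype(np.float32)

q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
assert err <= scale / 2 + 1e-6        # rounding error bounded by half a step
# Storage drops from 4 bytes to 1 byte per parameter (plus one scale).
assert q.nbytes == w.nbytes // 4
```

The int16 variant is analogous, with the range $[-32767, 32767]$ and a correspondingly smaller quantization step.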
We believe that the results obtained from our study have not been presented in the literature, and can be of interest for researchers experimenting with interfacing the Kaldi and Pytorch [12] tools for ASR tasks.
The rest of the paper is organized as follows. Section 2 describes briefly the current techniques used in parameter reduction of a model. This is followed by an overview of the quantization techniques and their application to AM training with Kaldi toolkit (Section 3). Section 4 presents the experiments and the results. Finally, the conclusion is provided in Section 5.
2 Related work
Speech recognition can be considered as a sequence-to-sequence mapping problem where a sequence of sounds is converted to a sequence of meaningful linguistic units (e.g. phones, syllables, words, etc.). In order to better distinguish different classes of sounds, it is useful to train with positive and negative examples. Hence, sequential discriminative criteria such as maximum mutual information (MMI) and state-level Maximum Bayes Risk (sMBR) can be applied. The former is now commonly known as lattice-free MMI (LF-MMI/Chain model) [6]. This method could also be used without any Cross-Entropy (CE) initialization leading to lesser computation cost.
State-level sequence-discriminative training of DNNs starts from a set of alignments and lattices that are generated by decoding the training data with a Language Model (LM). For each training condition, the alignments and lattices are generated using the corresponding DNN trained with CE [6]. The cross-entropy trained models are also used as the starting point for the sequence-discriminative training. Whereas sMBR training uses a word-level language model, LF-MMI training uses a phone-level LM. This simplification enables LF-MMI training to use GPU clusters, and the resulting models are considered state-of-the-art AMs for ASR systems. Hence our experiments consider AMs with the TDNN architecture.
Parameter reduction is a process that removes parts of a neural network while avoiding the loss of information the network requires for its decision process. It can be applied to already trained neural networks or integrated into training. Several different approaches can be considered:
•
Teacher-student approach to reduce the number of layers in the student neural network [7].
•
Reduce the size of the layers used in training the neural network through matrix factorization [8].
•
Reduce the hidden layer dimension (e.g. from 1024 to 512 in each layer of the neural network).
•
Reduce the number of hidden layers used in the network.
•
Quantization of model parameters (e.g. from 32 bit floating precision to 16 bit floating precision) [5][9].
Singular Value Decomposition (SVD) is one of the most popular methods. Applied to a trained model, it factorizes a learned weight matrix as a product of two much smaller factors; the smaller singular values are discarded, and the network parameters are then fine-tuned to obtain a parameter-reduced model [10].
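To make the SVD-based restructuring concrete, the following NumPy sketch factorizes a hypothetical 625×625 layer weight (matching the hidden dimension used later in this paper) and keeps a rank-64 truncation; the rank, shapes and random weights are illustrative assumptions, not values from [10]:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((625, 625))  # hypothetical trained layer weight

r = 64  # number of singular values retained (illustrative choice)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
A = U[:, :r] * s[:r]   # 625 x r factor (singular values folded in)
B = Vt[:r, :]          # r x 625 factor
M_approx = A @ B       # rank-r approximation of M; fine-tuned in practice

orig_params = M.size               # 625 * 625 = 390625
reduced_params = A.size + B.size   # 2 * 625 * 64 = 80000
```

In practice the two factors replace the original layer and the whole network is fine-tuned to recover accuracy.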
Another approach to enforce parameter reduction while training a neural network AM is by applying low-rank factorized layers [11]. In semi-orthogonal factorization, the parameter matrix $M$ is factorized as a product of two matrices $A$ and $B$, where $B$ is a semi-orthogonal matrix and $A$ has a smaller “interior” (i.e. rank) than that of $M$. This technique enables training a smaller network from scratch instead of using a pre-trained network for parameter reduction. The LF-MMI training also provides a stable training procedure for semi-orthogonalized matrices.
While semi-orthogonal matrices have been studied with TDNN-F (a variant of TDNN with residual connections), it has not been compared with other model reduction techniques. In our experiments, we present the comparison with respect to varying the number of layers.
3 Quantization
A popular technique to reduce the size of a model is quantization. This approach is widely applied in computer vision problems and is supported by many deep learning frameworks such as Pytorch [12] and TensorFlow [13]. However, applying quantization to an AM trained with the LF-MMI criterion using the Kaldi toolkit is not straightforward. The following subsections explain the standard quantization process for DNNs and how it is applied to the AMs.
3.1 Overview of quantization process
Quantization is the process of mapping a set of real-valued inputs to a set of discrete-valued outputs. Commonly used quantization types are 16 bit and 8 bit.
Quantizing model parameters typically involves decreasing the number of bits used to represent
the parameters.
Prior to this process the model may have been trained with IEEE float32 or float64.
The model size is reduced by a factor of 2 (with 16 bit quantization) or by a factor of 4 (with 8 bit quantization) if the original model uses the float32 representation.
In addition to the quantization types, there are different quantization modes, namely symmetric and asymmetric quantization. A real-valued variable $x$ in the range $(x_{min},x_{max})$ is quantized to a range $(qmin,qmax)$. In symmetric quantization, the range $(qmin,qmax)$ corresponds to $(\frac{-N_{levels}}{2},\frac{N_{levels}}{2}-1)$; in asymmetric quantization, it is $(0,N_{levels}-1)$. In these intervals, $N_{levels}=2^{16}=65536$ for 16 bit quantization and $N_{levels}=2^{8}=256$ for 8 bit quantization.
A real value $r$, can be expressed as an integer $q$ given a scale $S$ and zero-point $Z$ [5]:
$$r=S*(q-Z).$$
(1)
In the above equation, the scale $S$ specifies the step size used to map floating-point values to integers, and the integer zero-point $Z$ represents the floating-point zero [5].
Given the minimum and maximum of a vector $x$ and the range of the quantization scheme, the scale and zero-point are computed as follows [9]:
$$S=\frac{x_{max}-x_{min}}{qmax-qmin}$$
(2)
$$Z=qmin-\frac{x_{min}}{S}.$$
(3)
As mentioned in [5], for 8 bit integer quantization the value $-128$ is never used, and hence we set $qmin=-127$ and $qmax=127$.
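Equations (1)–(3) can be sketched in a few lines of NumPy; this is an illustrative re-implementation, not the code used in our experiments, and the clipping range follows the symmetric $[-127,127]$ convention above:

```python
import numpy as np

def quantize(x, num_bits=8, symmetric=True):
    """Map a float array to integers via eqs. (2) and (3)."""
    if symmetric:
        qmin, qmax = -(2**(num_bits - 1) - 1), 2**(num_bits - 1) - 1  # e.g. [-127, 127]
        x_max = np.abs(x).max()
        x_min = -x_max
    else:
        qmin, qmax = 0, 2**num_bits - 1
        x_min, x_max = x.min(), x.max()
    S = (x_max - x_min) / (qmax - qmin)          # eq. (2): step size
    Z = int(round(qmin - x_min / S))             # eq. (3): zero-point
    q = np.clip(np.round(x / S) + Z, qmin, qmax).astype(np.int32)
    return q, S, Z

def dequantize(q, S, Z):
    return S * (q - Z)                            # eq. (1)

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0])  # toy "weights"
q, S, Z = quantize(w)
w_hat = dequantize(q, S, Z)                  # round-trip error bounded by S/2
```

Note that in the symmetric mode the zero-point is always zero, so the integer kernel only needs the scale.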
3.2 Quantization application
We implement the quantization algorithms in Pytorch as it provides better support than Kaldi for int8, uint8 and int16 types.
The aim of our work is to port models trained in Kaldi to be functional in embedded systems.
There already exist tools such as Pykaldi [14] that help users load Kaldi acoustic models for inference in Pytorch. However, they do not provide access to the model parameters by default. To support this work, we implemented a C++ wrapper that exposes the model parameters and input MFCC features as Pytorch tensors. The wrapper also allows us to write the models and ark (archive) files back to Kaldi format.
Once the model is loaded as a tensor, there exist several options: we can quantize only the weights of the models, or quantize both weights and activations.
3.2.1 Quantization of weights only
Weight-only quantization is an approach in which only the weights of the neural network model are quantized. This approach is useful when only the model size needs to be reduced and the inference is carried out in floating-point precision. In our experiments, the weights are quantized in Pytorch and the inference is carried out in Kaldi.
3.2.2 Quantization of weights and activations
In order to reduce the model from 32 bit precision to 8 bit precision, both the weights and the activations must be quantized. Activations are quantized with the use of a calibration set to estimate their dynamic range. Our network architecture consists of a TDNN layer followed by ReLU and batchnorm layers. In our experiments, we quantize only the weights and input activations of the TDNN layer, as depicted in Figure 1 (i.e., integer arithmetic is applied only to the 1D convolution). Floating-point operations are used in the ReLU and batchnorm layers in order to simplify the implementation, as the main focus of this paper is to study the impact of quantization on the AM weights and activations. The conventional word-recognition lattices are then generated by a Kaldi decoder (i.e. decoding is performed in Kaldi) using the Pytorch-generated likelihoods.
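The integer-arithmetic path can be illustrated with a simplified linear layer standing in for the TDNN 1D convolution (a NumPy sketch with symmetric per-tensor scales; the shapes and random data are hypothetical, and the real pipeline operates on Kaldi/Pytorch tensors):

```python
import numpy as np

def quant_sym(x, qmax=127):
    """Symmetric per-tensor quantization: x ~= S * q, with q in [-qmax, qmax]."""
    S = np.abs(x).max() / qmax
    q = np.clip(np.round(x / S), -qmax, qmax).astype(np.int32)
    return q, S

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 40))  # hypothetical layer weight
x = rng.standard_normal(40)        # input activation (range from a calibration set)

qW, Sw = quant_sym(W)
qx, Sx = quant_sym(x)

y_int = qW @ qx            # pure integer accumulation (int32)
y_hat = (Sw * Sx) * y_int  # single rescale back to floating point
y_ref = W @ x              # float reference
```

The matrix product runs entirely in integers; only the single rescale by $S_{w}S_{x}$ touches floating point, which is what makes 8 bit inference cheap on embedded hardware.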
3.2.3 Post quantization fine-tuning
Quantization reduces the precision of the model, which means that noise is added when the weights are quantized. In order to reduce the level of this noise, a fine-tuning step is carried out. In this experiment, the quantized weights are first de-quantized and saved. This model is then loaded back into Kaldi and further trained for 2 epochs with a low learning rate. The process of quantizing and fine-tuning is carried out in three iterations, with the assumption that the final model, when quantized, converges to the baseline TDNN model.
4 Experiments
All acoustic models used in our parameter-reduction experiments are TDNN-based and trained with the Kaldi toolkit (i.e. the nnet3 model architecture). The AMs are trained with the LF-MMI framework, considered to produce state-of-the-art performance for hybrid ASR systems.
In the paper, we not only consider conventional triphone systems but also a monophone based system.
In the former case, the output layer consists of senones obtained from clustering of context-dependent phones.
In the latter case, the output layer consists of only monophone outputs, which can be considered as yet another approach to reduce the computational complexity of ASR systems.
The triphone-based AM uses position-dependent phones, which produces a total of 346 phones including the silence and noise phones. The monophone-based AM uses position-independent phones, which comprises 41 phones. The triphone-based AM produces 5984 output states, while the monophone-based AM produces 41.
The AMs are trained using conventional high-resolution MFCC features with speed-perturbed data. We did not include i-vectors. The TDNN and TDNN-F models use 7 layers with a hidden layer dimension of 625.
In this study, we also train a TDNN-F model, increasing the number of layers until it matches the number of parameters of the baseline TDNN (7M). Table 3 shows that by using twice as many layers as the TDNN, the TDNN-F retains the same number of parameters with improved performance. The results presented in this table are rescored with a large LM trained on Librispeech.
The AMs are trained with 960h of Librispeech [15] data. The LMs are also trained on Librispeech, which is available for download from the web. Librispeech is a corpus of approximately 1000 hours of 16 kHz read English speech derived from the LibriVox project. The LibriVox project has produced approximately 8000 public domain audio books, the majority of which are in English. Most of the recordings are based on texts from Project Gutenberg, also in the public domain.
The quantization is performed in Pytorch. Quantization experiments are carried out for 16 bit and 8 bit integers in symmetric mode. As discussed in Section 3, the model and the features from Kaldi are loaded as Pytorch tensors with the help of the C++ wrapper.
The word recognition performance for all experiments is performed on Librispeech test-clean evaluation set. The quantization experiments use a small LM while the comparison of varying the layers of TDNN-F AM uses a large LM.
4.1 Parameter reduction experiments
We compare floating-point vs. integer arithmetic inference for TDNN model with different quantization types (16-bit and 8-bit integer) and different quantization schemes, as discussed in Section 3.2. We also compare the quantization technique with the low-rank matrix factorization technique used during the training of the model.
Table 1 shows that weight-only quantization reduces the model size by 50% without a significant impact on the performance of the monophone-based AM. Quantizing both weights and activations reduces the model size but increases the WER compared to weight-only quantization. Table 2 shows that quantizing both weights and activations outperforms weight-only quantization in the triphone system. In both the monophone and triphone systems, post-quantization fine-tuning does not show any impact. The TDNN-F model reduces the model size by 40% with a loss in recognition performance of 2.7% (absolute) compared to the baseline TDNN. However, compared to the 8-bit and 16-bit quantized models, the loss in WER of TDNN-F is negligible (10.7% WER for the quantized model vs 10.8% WER for TDNN-F).
4.2 Quantization error
The norm of the difference between the weights and their de-quantized version is the quantization error. Table 4 shows this error for the monophone- and triphone-based AMs for int8 and int16 quantization. The higher variation of the error in the triphone system is due to its larger number of outputs.
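The trend in Table 4 can be reproduced in miniature: the sketch below compares the error norm for int8 and int16 symmetric quantization of a random weight vector (illustrative only; it does not use the actual AM weights):

```python
import numpy as np

def quant_error(w, num_bits):
    """L2 norm between weights and their de-quantized version."""
    qmax = 2**(num_bits - 1) - 1
    S = np.abs(w).max() / qmax   # symmetric per-tensor scale
    q = np.round(w / S)
    return np.linalg.norm(w - S * q)

rng = np.random.default_rng(2)
w = rng.standard_normal(100000)  # stand-in for a layer's weights
e8 = quant_error(w, 8)    # int8: coarse steps, larger error
e16 = quant_error(w, 16)  # int16: ~256x finer steps, much smaller error
```

Since the rounding error scales with the step size $S$, the int8 error norm is roughly $2^{8}$ times the int16 one.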
5 Conclusions
We presented a study of the effect of quantizing the acoustic model parameters in ASR. The experimental results reveal that parameter quantization can reduce the model size significantly while preserving a reasonable word recognition performance. TDNN-F models provide better performance than TDNN models when a higher number of layers is used. Quantization of the acoustic models can be further explored by fusing the TDNN, ReLU and batchnorm layers. Since fine-tuning did not bring any significant improvements in our experiments, our future work will consider an implementation of quantization-aware training.
The quantization experiments are conducted in Pytorch, while the acoustic models are developed using the popular Kaldi toolkit. The implemented C++ wrappers, which allow interfacing with the parameters of Kaldi-based DNN acoustic models in Pytorch, will be offered to other researchers through a Github project.
6 Acknowledgements
This work was supported by the CTI Project “SHAPED: Speech Hybrid Analytics Platform for consumer and Enterprise Devices”. We wish to acknowledge Arash Salarian for providing us with valuable insights and suggestions regarding quantization.
The work was also partially supported by the ATCO2 project, funded by the European Union under CleanSky EC-H2020 framework.
References
[1]
LeCun, Yann, and Yoshua Bengio.
“Convolutional networks for images, speech, and time series.”
The handbook of brain theory and neural networks 3361.10 (1995): 1995.
[2]
Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton.
“Speech recognition with deep recurrent neural networks.”
IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.
[3]
Peddinti, Vijayaditya, Daniel Povey, and Sanjeev Khudanpur.
“A time delay neural network architecture for efficient modeling of long temporal contexts.”
Sixteenth Annual Conference of the International Speech Communication Association. 2015.
[4]
Daniel Povey, Arnab Ghoshal et al.
“The Kaldi Speech Recognition Toolkit”
IEEE 2011 workshop on automatic speech recognition and understanding. No. CONF. IEEE Signal Processing Society, 2011.
[5]
Benoit Jacob, Skirmantas Kligys et al.
“Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference”
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[6]
Veselý, Karel, et al.
“Sequence-discriminative training of deep neural networks.”
Interspeech. Vol. 2013. 2013.
[7]
Wong, Jeremy HM, and Mark John Gales,
“Sequence student-teacher training of deep neural networks.”
2016.
[8]
Francis Keith, William Hartmann, Man-hung Siu, Jeff Ma, Owen Kimball
“Optimising Multilingual Knowledge Transfer For Time-Delay Neural Networks with Low Rank Factorization,”
2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
[9]
Raghuraman Krishnamoorthi
“Quantizing deep convolutional networks for efficient inference: A whitepaper”
arXiv preprint arXiv:1806.08342 (2018).
[10]
Jian Xue, Jinyu Li, and Yifan Gong
“Restructuring of Deep Neural Network Acoustic Models with Singular Value Decomposition”
Interspeech. 2013.
[11]
Dan Povey et al.
“Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks”
Interspeech. 2018
[12]
Adam Paszke, Sam Gross et al.
“PyTorch: An Imperative Style, High-Performance Deep Learning Library”
Advances in Neural Information Processing Systems. 2019.
[13]
Martin Abadi, Paul Barham et al.
“TensorFlow: A system for large-scale machine learning”
12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016.
[14]
Dogan Can, Victor R. Martinez, Pavlos Papadopoulos, Shrikanth S. Narayanan
“PYKALDI: A Python Wrapper for KALDI”
2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
[15]
Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur
“Librispeech: An ASR Corpus Based On Public Domain Audio Books”
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. |
Department of Mechanical Engineering, University of Melbourne, Victoria 3010, Australia
Abstract
We conducted direct numerical simulations (DNSs) of turbulent flow over three-dimensional sinusoidal roughness in a channel. A passive scalar is present in the flow with Prandtl number $Pr=0.7$, to study heat transfer by forced convection over this rough surface.
The minimal-span channel is used to circumvent the high cost of simulating high Reynolds number flows, which enables a range of rough surfaces to be efficiently simulated.
The near-wall temperature profile in the minimal-span channel agrees well with that of the conventional full-span channel, indicating it can be readily used for heat-transfer studies at a much reduced cost compared to conventional DNS.
As the roughness Reynolds number, $k^{+}$, is increased, the Hama roughness function, $\Delta U^{+}$, increases in the transitionally rough regime before tending towards the fully rough asymptote of $\kappa_{m}^{-1}\log(k^{+})+C$, where $C$ is a constant that depends on the particular roughness geometry and $\kappa_{m}\approx 0.4$ is the von Kármán constant. In this fully rough regime, the skin-friction coefficient is constant with bulk Reynolds number, $Re_{b}$. Meanwhile, the temperature difference between smooth- and rough-wall flows, $\Delta\Theta^{+}$, appears to tend towards a constant value, $\Delta\Theta^{+}_{FR}$. This corresponds to the Stanton number (the temperature analogue of the skin-friction coefficient) monotonically decreasing with $Re_{b}$ in the fully rough regime.
Using shifted logarithmic velocity and temperature profiles, the heat transfer law as described by the Stanton number in the fully rough regime can be derived once both the equivalent sand-grain roughness
$k_{s}/k$
and the temperature difference $\Delta\Theta^{+}_{FR}$ are known.
In meteorology, this corresponds to the ratio of momentum and heat transfer roughness lengths, $z_{0m}/z_{0h}$, being linearly proportional to the inner-normalised momentum roughness length, $z_{0m}^{+}$, where the constant of proportionality is related to $\Delta\Theta_{FR}^{+}$.
While Reynolds analogy, or similarity between momentum and heat transfer, breaks down for the bulk skin-friction and heat-transfer coefficients,
similar distribution patterns
between the heat flux and viscous component of the wall shear stress are observed.
Instantaneous visualisations of the temperature field show a thin thermal diffusive sublayer following the roughness geometry in the fully rough regime, resembling the viscous sublayer of a contorted smooth wall.
Roughness effects in turbulent forced convection
M. MacDonald (email address for correspondence: michael.macdonald@unimelb.edu.au), N. Hutchins and D. Chung
1 Introduction
Turbulent flow over rough surfaces is a problem inherent to many engineering and geophysical systems. Over the past few decades, especially with the advent of direct numerical simulation (DNS), we have seen substantial advances in the understanding of how roughness alters the overlying turbulent flow. In particular, the effect of various geometrical roughness parameters like height, wavelength and skewness on the drag force exerted on the wall has been quantified to a reasonably high level of accuracy (Jiménez, 2004; Flack & Schultz, 2014; Chan et al., 2015). However at present this advancement has not been mirrored in our depth of understanding of the effect of surface roughness on heat transfer. This is partly due to the separate communities researching roughness and heat transfer, as well as experimental difficulties in obtaining high fidelity temperature fields.
In this paper, we extend the minimal-span channel, recently shown to be capable of predicting the near-wall turbulent flow over roughness (Chung et al., 2015; MacDonald et al., 2017), to heat transfer. This significantly reduces the cost of the numerical simulations relative to conventional DNS while still retaining the same level of accuracy.
Roughness generally increases the drag force exerted on the wall when compared to a smooth wall, which is often quantified by the (Hama) roughness function, $\Delta U^{+}$ (Hama, 1954). This quantity reflects the retardation of the mean streamwise flow over a rough wall compared to a smooth wall. In the logarithmic region of the flow, the rough-wall velocity profile has the form
$$U^{+}=\frac{1}{\kappa_{m}}\log\left(z^{+}\right)+A_{m}-\Delta U^{+},$$
(1)
where $\kappa_{m}\approx 0.4$ is the von Kármán constant, $A_{m}\approx 5.0$ is the smooth-wall offset, and $z$ is the wall-normal position. The superscript $+$ indicates quantities non-dimensionalised on kinematic viscosity $\nu$ and friction velocity $U_{\tau}\equiv\sqrt{\tau_{w}/\rho}$, where $\tau_{w}$ is the wall-shear stress and $\rho$ is the fluid density.
The roughness function is related to the skin-friction coefficients, $C_{f}$, of smooth-wall (subscript $s$) and rough-wall (subscript $r$) flows at matched friction Reynolds numbers as (e.g. Schultz & Flack, 2009)
$$\Delta U^{+}=U_{bs}^{+}-U_{br}^{+}=\sqrt{\frac{2}{C_{fs}}}-\sqrt{\frac{2}{C_{fr}}},$$
(2)
where we are using the bulk velocity $U_{b}^{+}=(1/h)\int_{0}^{h}U^{+}\mathrm{d}z$, $h$ being the channel half-height, to define the skin-friction coefficient, $C_{f}\equiv\tau_{w}/(\tfrac{1}{2}\rho U_{b}^{2})=2/U_{b}^{+2}$.
In the fully rough regime, in which the rough-wall skin-friction coefficient no longer depends on the bulk Reynolds number, $Re_{b}\equiv 2hU_{b}/\nu$, the roughness function scales as $\Delta U^{+}=\kappa_{m}^{-1}\log(k^{+})+C$, where $k$ is the roughness height and $C$ depends on the rough surface in question.
If the offset $C$ is known, then extrapolations to arbitrary roughness Reynolds numbers, $k^{+}$, can be easily performed. Alternatively, the equivalent sand-grain roughness $k_{s}$ can be reported, which relates a given roughness length scale to the sand grain roughness size of Nikuradse (1933) as $k_{s}/k\equiv\exp[\kappa_{m}(3.5+C)]$. Here, the constant 3.5 comes from the difference between the smooth-wall log-law offset ($A_{m}\approx 5$) and Nikuradse’s rough-wall constant ($C_{N}\approx 8.5$).
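These two relations are straightforward to evaluate numerically. The Python sketch below (illustrative only, assuming $\kappa_{m}=0.4$ as in the text) implements the fully rough roughness function and the equivalent sand-grain ratio, and checks that a surface with $C=-3.5$ recovers $k_{s}=k$, consistent with the constant 3.5 quoted above:

```python
import numpy as np

kappa_m = 0.4  # von Karman constant (as in the text)

def delta_u_fully_rough(k_plus, C):
    """Fully rough Hama roughness function: dU+ = (1/kappa_m) log(k+) + C."""
    return np.log(k_plus) / kappa_m + C

def ks_over_k(C):
    """Equivalent sand-grain roughness ratio: k_s/k = exp[kappa_m (3.5 + C)]."""
    return np.exp(kappa_m * (3.5 + C))

# Nikuradse's sand grain itself has C = -3.5, so the ratio must be unity.
ratio = ks_over_k(-3.5)

# In the fully rough regime, doubling k+ raises dU+ by (1/kappa_m) log 2.
du1 = delta_u_fully_rough(50.0, 0.0)
du2 = delta_u_fully_rough(100.0, 0.0)
```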
In the present DNS with three-dimensional sinusoidal roughness, we define the roughness height $k$ to be the sinusoidal semi-amplitude.
The temperature profile for forced convection smooth-wall flow also exhibits a region with logarithmic dependence on distance from the wall (Kader, 1981; Kawamura et al., 1999; Pirozzoli et al., 2016). Roughness is generally a more efficient mechanism for heat transfer through the wall, which would result in a downwards shift of this logarithmic temperature profile (e.g. Yaglom, 1979; Cebeci & Bradshaw, 1984). Therefore, in the same manner as velocity, a temperature difference $\Delta\Theta^{+}$ can be defined as the difference in temperature between the smooth- and rough-wall flows in the outer layer of the flow. The rough-wall temperature profile would then follow
$$\frac{\Theta-\Theta_{w}}{\Theta_{\tau}}=\frac{1}{\kappa_{h}}\log(z^{+})+A_{h}(Pr)-\Delta\Theta^{+},$$
(3)
where the offset $A_{h}$ depends on the molecular Prandtl number, $Pr\equiv\nu/\alpha$, with $\alpha$ being the thermal diffusivity. The slope coefficient $\kappa_{h}\approx\kappa_{m}/Pr_{t}\approx 0.46$ is slightly larger than that of the velocity profile in (1), owing to the turbulent Prandtl number, $Pr_{t}$, typically having a value less than unity, in the range of 0.85 to 0.9 (Yaglom, 1979; Pirozzoli et al., 2016). The temperature is relative to the wall temperature, $\Theta_{w}$, and is non-dimensionalised on the friction temperature,
$\Theta_{\tau}\equiv(q_{w}/\rho c_{p})/U_{\tau}$, where $q_{w}$ is the temporally and spatially averaged wall heat flux and $c_{p}$ is the specific heat at constant pressure. The validity of (3), particularly in the use of $\Delta\Theta^{+}$, has not received the same amount of attention as the roughness function, $\Delta U^{+}$. Miyake et al. (2001) and Leonardi et al. (2007) performed DNSs of two-dimensional spanwise-aligned square bars arranged on the bottom wall of a channel with a smooth top wall. Both studies prescribed different temperatures on the bottom and top walls and the resulting inner-normalised temperature profiles exhibited a downwards shift due to the roughness. Leonardi et al. (2007) observed slightly different logarithmic slopes for varying aspect ratios of the bars, although this may be due to the asymmetric bottom and top walls. Chan et al. (2015) discusses how this asymmetry can impact the perception of outer-layer similarity of the flow and may explain the different slopes for the temperature profiles. Moreover, Pirozzoli et al. (2016) noted that channels with different prescribed temperatures between the bottom and top walls exhibit large wakes in the outer layer of the temperature profiles. This would hinder the identification of the logarithmic region, particularly with the relatively low friction Reynolds numbers that varied between the roughness cases used in Leonardi et al. (2007). In the present study, we will verify (3) using a scalar body-forcing approach with matched friction Reynolds numbers to address these uncertainties.
The heat transfer through the wall is often quantified by the Stanton number, $C_{h}=1/(U_{b}^{+}\Theta_{m}^{+})$, which is the heat transfer analogue to the skin-friction coefficient. Here, we are using the mixed-mean (or cup-mixing) temperature, $\Theta_{m}=\int_{0}^{h}U(\Theta-\Theta_{w})\,\mathrm{d}z/\int_{0}^{h}U\,\mathrm{d}z$ (e.g. Owen & Thomson, 1963; Bird et al., 2002; Pirozzoli et al., 2016).
This is the mean temperature that would result if the channel outlet discharged into a container and was well mixed.
It is different to the arithmetic mean temperature, $\Theta_{a}=(1/h)\int_{0}^{h}(\Theta-\Theta_{w})\,\mathrm{d}z$; if logarithmic profiles are assumed for $U$ and $\Theta$, as in (1) and (3), this difference is
$$\Theta_{m}^{+}-\Theta_{a}^{+}=\frac{1}{\kappa_{m}\kappa_{h}U_{b}^{+}}.$$
(4)
Using this result and the definition of $C_{h}=1/(U_{b}^{+}\Theta_{m}^{+})$, we see that the temperature difference, $\Delta\Theta^{+}$, between smooth- and rough-wall flows at matched friction Reynolds numbers is related to the skin-friction and heat-transfer coefficients as
$$\begin{aligned}\Delta\Theta^{+}&=\Theta_{as}^{+}-\Theta_{ar}^{+}\\ &=\left(\Theta_{ms}^{+}-\frac{1}{\kappa_{m}\kappa_{h}U_{bs}^{+}}\right)-\left(\Theta_{mr}^{+}-\frac{1}{\kappa_{m}\kappa_{h}U_{br}^{+}}\right)\\ &=\left(\frac{1}{U_{bs}^{+}C_{hs}}-\frac{1}{\kappa_{m}\kappa_{h}U_{bs}^{+}}\right)-\left(\frac{1}{U_{br}^{+}C_{hr}}-\frac{1}{\kappa_{m}\kappa_{h}U_{br}^{+}}\right)\\ &=\sqrt{\frac{C_{fs}}{2}}\left(\frac{1}{C_{hs}}-\frac{1}{\kappa_{m}\kappa_{h}}\right)-\sqrt{\frac{C_{fr}}{2}}\left(\frac{1}{C_{hr}}-\frac{1}{\kappa_{m}\kappa_{h}}\right).\end{aligned}$$
(5)
Note that if the heat-transfer coefficient, $C_{h}$, was defined using the arithmetic mean temperature instead, then the temperature difference in (5) would not have the $1/{(\kappa_{m}\kappa_{h})}$ terms.
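Relation (4), on which the derivation above rests, can be checked numerically by integrating assumed logarithmic profiles over the channel half-height. In the Python sketch below the offsets $A_{m}=5.0$, $A_{h}=3.0$ and the half-height $h^{+}=10^{3}$ are illustrative choices, and wake contributions are neglected:

```python
import numpy as np

kappa_m, kappa_h = 0.4, 0.46
A_m, A_h = 5.0, 3.0      # A_h depends on Pr; the value here is illustrative
h = 1000.0               # half-height in viscous units (illustrative)

def trap(y, z):
    """Plain trapezoidal rule (kept explicit for portability)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))

# Log-spaced grid resolves the near-wall logarithmic singularity.
z = np.geomspace(1e-6, h, 400001)
U = np.log(z) / kappa_m + A_m   # eq. (1) with dU+ = 0
T = np.log(z) / kappa_h + A_h   # eq. (3) with dTheta+ = 0

Ub = trap(U, z) / h
Ta = trap(T, z) / h                  # arithmetic mean temperature
Tm = trap(U * T, z) / trap(U, z)     # mixed-mean (cup-mixing) temperature

lhs = Tm - Ta
rhs = 1.0 / (kappa_m * kappa_h * Ub)  # right-hand side of eq. (4)
```

The two sides agree to well within the quadrature error, confirming that the difference between the two mean temperatures depends only on $\kappa_{m}$, $\kappa_{h}$ and $U_{b}^{+}$.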
In external flows, such as zero-pressure-gradient boundary layers, the velocity and temperature scales to use are that of the freestream. This corresponds to the centreline velocity of internal flows, which are related to the bulk quantities through $U_{h}\equiv U(z=h)=U_{b}+1/\kappa_{m}$ and $\Theta_{h}\equiv\Theta(z=h)=\Theta_{a}+1/\kappa_{h}$, where we have assumed logarithmic profiles across the entire channel, with no wake component.
The fully rough regime of roughness is associated with the pressure (or form) drag becoming dominant over the viscous drag (Schultz & Flack, 2009; Busse et al., 2017). The pressure drag component is independent of molecular viscosity, which is why in this regime the skin-friction coefficient is independent of the bulk Reynolds number. Heat transfer through the wall, however, always remains dependent on the molecular transport properties and there is no heat-transfer analogue to the pressure drag (Owen & Thomson, 1963; Cebeci & Bradshaw, 1984). Roughness increases the rate of heat transfer in the transitionally rough regime, however this increase cannot be sustained indefinitely as it is limited by the conductive heat flux near the surface.
In the fully rough regime this corresponds to the heat-transfer coefficient (Stanton number) monotonically decreasing with bulk Reynolds number, in a similar manner to the smooth wall but with an offset (Owen & Thomson, 1963; Dipprey & Sabersky, 1963). However, this behaviour in terms of the temperature profile, in particular the behaviour of $\Delta\Theta^{+}$, has yet to be explained.
The present forced convection simulations neglect buoyancy effects,
as the forces produced by the external source driving the flow
(such as a pump producing a driving pressure gradient)
are typically much larger than any buoyancy forces.
This is a common assumption made in engineering applications and smooth-wall DNS studies (e.g. Kim & Moin, 1989; Kasagi et al., 1992; Kawamura et al., 1998; Tiselj et al., 2001; Pirozzoli et al., 2016).
Many geophysical flows, meanwhile, are driven by buoyancy, referred to as natural or free convection. However,
at very high Rayleigh numbers, sufficiently strong winds can develop that drive
turbulent boundary layers near the wall that are characterised by local buoyancy forces that are much smaller than the shear forces.
Studies of buoyancy-driven flows (e.g. Kraichnan, 1962; Grossmann & Lohse, 2011; Ng et al., 2017)
suggest that within this near-wall shear-dominated region the velocity and temperature profiles tend towards logarithmic functions of distance from the wall, similar to the
present forced convection flow.
This idea is encapsulated in Monin–Obukhov similarity theory, wherein buoyancy forces can be neglected when $|z/L|$ is small, where $L$ is the Obukhov length.
The present forced convection technique could therefore be viewed as a tool to study the near-wall flow of high Rayleigh number natural convection systems, even though the global system is driven by buoyancy.
Presently, we will describe the numerical procedure (§2) and validate the minimal channel for forced convection heat transfer (§3). The roughness Reynolds number will then be increased from the transitionally rough regime towards the fully rough regime (§4). This will enable us to investigate $\Delta\Theta^{+}$ and examine the resulting changes to the temperature field in the fully rough regime.
2 Numerical procedure
The numerical method used here is the second-order (kinetic) energy-conserving finite volume code named CDP, designed for unstructured grids, with flow variables collocated at the cell centroids. Further details are available in Ham & Iaccarino (2004) and Mahesh et al. (2004). This is the same numerical method as has been used in our previous studies (Chan et al., 2015; MacDonald et al., 2016), although here we also solve the transport equation for a passive scalar representing temperature. It is assumed that there are no buoyancy effects as in forced convection, and we neglect temperature variations in viscosity. The conservation equations that are being solved are:
$$\displaystyle\nabla\cdot\mathbf{u}=0,$$
(6)
$$\displaystyle\frac{\partial\mathbf{u}}{\partial t}+\nabla\cdot(\mathbf{u}\mathbf{u})=-\frac{1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{u}+\mathbf{F},$$
(7)
$$\displaystyle\frac{\partial\theta}{\partial t}+\nabla\cdot(\mathbf{u}\theta)=\alpha\nabla^{2}\theta+G,$$
(8)
where $\mathbf{u}=(u,v,w)$ is the fluid velocity,
$x$, $y$ and $z$ are the streamwise, spanwise and wall-normal (vertical) coordinates, respectively,
$t$ is time
and
$\mathbf{F}=(F_{x},0,0)$ is the driving pressure gradient term. In most turbulent channel flow studies, including our previous works, the pressure $p_{T}$ is decomposed into two components: the driving (mean) pressure, $P(x)$, and the fluctuating (periodic) pressure, $p$. The driving pressure $P$ is an input into the simulation through a spatially uniform body force $F_{x}=-(1/\rho)\mathrm{d}P/\mathrm{d}x>0$, which varies at each time step such that the bulk velocity is constant at all times. The pressure that is solved for in (7) is the periodic component, $p$.
An analogous decomposition is applied to the temperature, in which the temperature $T_{T}$ is decomposed into the mean, $T(x)$, and periodic component, $\theta$, i.e. $T_{T}=T(x)+\theta$. The mean can be applied as a body force to the transport equation, $G=-u\,\mathrm{d}T/\mathrm{d}x$, where $u$ is the instantaneous, spatially dependent streamwise velocity, and the periodic component $\theta$ is solved for by the code. Physically, this amounts to a hot fluid (for $\mathrm{d}T/\mathrm{d}x<0$) being cooled by the wall as it passes through the domain. For a statistically steady, fully developed flow in which the time-averaged bulk temperature does not change with time, the heat added through this body forcing will be equal to the heat lost through the walls.
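This steady balance between the volumetric forcing and the wall heat flux can be illustrated with a minimal one-dimensional sketch (a toy problem, not the present solver): pure diffusion between two isothermal walls with a uniform volumetric source standing in for $G$. At steady state the wall flux must equal the integrated source. All parameter values below are illustrative.

```python
import numpy as np

# 1-D steady diffusion between isothermal walls (theta = 0 at z = 0 and z = 2h)
# with a uniform volumetric source G, mimicking the scalar body forcing:
#   alpha * d^2(theta)/dz^2 + G = 0.
# At steady state the heat added by G must leave through the two walls.
h, alpha, G = 1.0, 0.1, 2.0   # illustrative values
n = 401
z = np.linspace(0.0, 2.0 * h, n)
dz = z[1] - z[0]

# Assemble and solve the tridiagonal system alpha * theta'' = -G.
A = np.zeros((n, n))
b = np.full(n, -G * dz**2 / alpha)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0     # isothermal walls: theta = 0
b[0] = b[-1] = 0.0
theta = np.linalg.solve(A, b)

# Total wall flux (one-sided differences) vs. integrated source.
flux_walls = alpha * (theta[1] - theta[0]) / dz + alpha * (theta[-2] - theta[-1]) / dz
source_total = G * 2.0 * h
print(flux_walls, source_total)   # agree to discretisation error
```

For this constant-source case the exact solution is the parabola $\theta=(G/2\alpha)z(2h-z)$, so each wall carries a flux $Gh$ and the two together balance the total source $G\cdot 2h$.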
Here, the average heat flux at the wall is defined as $q_{w}/(\rho c_{p})=\alpha\langle\overline{\partial\theta/\partial n}\rangle_{w}$, where $n$ is the local wall-normal direction, the overline denotes temporal averaging and angle brackets denote spatial averaging across all cell wall faces, $\langle\cdot\rangle_{w}\equiv(1/A_{p})\int_{wall}(\cdot)\,\mathrm{d}S$, such that $\langle 1\rangle_{w}=A_{w}/A_{p}$, with $A_{w}$ being the wetted surface area and $A_{p}$ the plan area.
An isothermal boundary condition, $\theta_{w}=0$, is used at the walls; the true wall temperature is then $T_{w}=T+\theta_{w}=T_{0}+(\mathrm{d}T/\mathrm{d}x)x$, where $T_{0}$ is the reference temperature. This is a non-conjugate heat transfer problem; in other words, the solid wall has an infinitely large thermal conductivity. The present internal heating technique is similar to that employed in various isothermal smooth-wall heat transfer studies (Kim & Moin, 1989; Kasagi et al., 1992; Kawamura et al., 1998; Tiselj et al., 2001; Pirozzoli et al., 2016). The form that the scalar forcing term takes varies somewhat between these studies, which slightly alters the flow in the channel centre (Pirozzoli et al., 2016), but this should not affect the near-wall roughness effects that we are interested in.
The Prandtl number is set to that of air at room temperature, $Pr=0.7$. The bulk velocity for each case is set by trial and error such that the friction Reynolds number, $Re_{\tau}=U_{\tau}h/\nu$, is approximately equal to its target and matched between smooth- and rough-wall flows. Here, $h$ is the channel half height, defined for the rough-wall flow to be the distance between the channel centre and the roughness mean height (Chan et al., 2015), corresponding to the hydraulic half height. Three-dimensional sinusoidal roughness is applied to both the bottom and top walls at $z=0+z_{w}$ and $z=2h+z_{w}$ (figure 1), with
$$z_{w}=k\cos\left(\frac{2\pi x}{\lambda_{x}}\right)\cos\left(\frac{2\pi y}{\lambda_{y}}\right),$$
(9)
where $k$ is the sinusoidal semi-amplitude and
the sinusoidal wavelength $\lambda_{x}=\lambda_{y}=\lambda\approx 7.07k$ matches that in our previous study (Chung et al., 2015).
We use a body-fitted (terrain-following) grid to resolve the no-slip wall, as depicted in figure 1(b) and discussed in Chan et al. (2015).
The sinusoidal roughness has a wetted surface area approximately 17.8% greater than the smooth wall.
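The quoted wetted-area increase can be checked directly from (9): the wetted-to-plan area ratio is the period average of $\sqrt{1+|\nabla z_{w}|^{2}}$. A minimal numerical sketch, assuming $\lambda=7.07k$:

```python
import numpy as np

# Wetted-to-plan area ratio of the sinusoidal surface (9),
# z_w = k cos(2*pi*x/lam) cos(2*pi*y/lam), computed as the
# period average of sqrt(1 + |grad z_w|^2).  lam ~ 7.07k as in the text.
k = 1.0
lam = 7.07 * k
n = 512
x = np.linspace(0.0, lam, n, endpoint=False)
X, Y = np.meshgrid(x, x)

a = 2.0 * np.pi / lam
dzdx = -k * a * np.sin(a * X) * np.cos(a * Y)
dzdy = -k * a * np.cos(a * X) * np.sin(a * Y)
ratio = np.mean(np.sqrt(1.0 + dzdx**2 + dzdy**2))
print(ratio)   # ~1.178, i.e. ~17.8% more wetted area than a smooth wall
```

The grid average suffices here because the integrand is periodic; the result reproduces the stated 17.8% increase.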
Details of the simulations conducted are given in table 1. The minimal-span channel for rough wall flows is used (Chung et al., 2015; MacDonald et al., 2017), in which the spanwise domain width is very narrow and only the near-wall flow is captured up to a critical height $z_{c}^{+}\approx 0.4L_{y}^{+}$, where $L_{y}^{+}$ is the channel span. The recommendation in Chung et al. (2015) is used to determine the spanwise domain width, namely $L_{y}\gtrsim\max(100\nu/U_{\tau},k/0.4,\lambda)$, where $\lambda$ is the spanwise roughness length scale.
The streamwise length should satisfy $L_{x}\gtrsim\max(3L_{y},1000\nu/U_{\tau},\lambda_{r,x})$, as discussed in MacDonald et al. (2017).
Simulations are run for between 120 and 600 large-eddy turnover times $z_{c}/U_{\tau}$ (depending on $L_{y}^{+}$ and $Re_{\tau}$) to ensure that the uncertainty in $\Delta U^{+}$ is less than $0.1U_{\tau}$, following the guidelines in MacDonald et al. (2017).
Smooth-wall channel simulations with matched channel domain sizes have also been conducted, to ensure that the differences between the smooth- and rough-wall flows are due to the roughness alone and not the channel span. In set $A$, we simulate spans of $L_{y}^{+}=155=\lambda^{+}$, $L_{y}^{+}=310=2\lambda^{+}$ and a full-span channel with $L_{y}=\pi h$ at $Re_{\tau}=395$ to assess the impact of this spanwise width on the heat transfer.
In set $B$, we simulate two different friction Reynolds numbers of $Re_{\tau}=395$ and $Re_{\tau}=590$ but with matched roughness viscous dimensions (same $k^{+}$ and $\lambda^{+}$) and channel viscous dimensions, to examine the effect of relatively low $Re_{\tau}$ on the flow.
Finally, in set $C$, we then increase the
roughness Reynolds number, $k^{+}$, towards the fully rough regime. In this set, all cases have $k=h/18$ except for the first case where $k=h/36$ (to ensure that $Re_{\tau}\gtrsim 395$).
The expected full-span bulk velocity, $U_{bf}^{+}=\int_{0}^{h}U_{f}^{+}\mathrm{d}z/h$, is given in table 1, where the expected full-span velocity profile $U_{f}$ is defined such that the simulation data from the minimal channel is used for $z<z_{c}$, while the composite velocity profile of Nagib & Chauhan (2008) for full-span channel flow is used for $z>z_{c}$. The log-law offset constant is set such that $U_{f}$ is continuous at $z=z_{c}$.
A hyperbolic tangent grid stretching is used in the wall-normal (vertical) direction, resulting in a fairly large grid spacing at the channel centreline. However, the grid spacings below $z_{c}$ are such that $\Delta z^{+}$ only increases beyond conventional DNS spacings above the vertical critical height, $z_{c}$. As the region of the flow above $z_{c}$ is already altered due to the nature of the minimal channel, these spacings should have negligible impact on the near-wall flow of interest.
A uniform grid spacing is used in the streamwise and spanwise directions. Horizontal (wall-parallel) averaging is performed using the intrinsic spatial average for $z<k$, in which quantities represent averages over only the fluid regions. This can be related to the superficial spatial average (averaging over both fluid and solid regions) by multiplying the intrinsic average by the ratio of fluid to total volume, $\sigma(z)$ (see e.g. Finnigan, 2000; Nikora et al., 2001; Breugem et al., 2006).
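The two averages differ only by the fluid-volume fraction $\sigma(z)$: the superficial average equals $\sigma(z)$ times the intrinsic average. A minimal sketch, assuming the sinusoidal surface (9) with $\lambda\approx 7.07k$, evaluating $\sigma(z)$ numerically as the plan-area fraction occupied by fluid:

```python
import numpy as np

# Fluid-volume fraction sigma(z) below the roughness crest for surface (9):
# the fraction of the plan area where z_w(x, y) < z.  The superficial
# average of a quantity is sigma(z) times its intrinsic average.
k, lam, n = 1.0, 7.07, 512
x = np.linspace(0.0, lam, n, endpoint=False)
X, Y = np.meshgrid(x, x)
zw = k * np.cos(2 * np.pi * X / lam) * np.cos(2 * np.pi * Y / lam)

def sigma(z):
    """Plan-area fraction occupied by fluid at height z."""
    return np.mean(zw < z)

print(sigma(-k), sigma(0.0), sigma(k))  # ~0 at the troughs, ~0.5 at the
                                        # mean height (by symmetry), ~1 at the crest
```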
3 Heat transfer in the minimal channel
3.1 Effect of channel width, $L_{y}^{+}$ (set $A$)
First, we will consider the effect of the channel width on mean velocity and temperature profiles, for both smooth- and rough-wall flows.
This is done at a matched friction Reynolds number of $Re_{\tau}\approx 395$ (set $A$, table 1).
The rough-wall minimal channel has been previously validated for momentum transfer, where it has been shown to be capable of reproducing the roughness function as well as the near-wall high-order statistics of a conventional full-span channel (Chung et al., 2015; MacDonald et al., 2016, 2017).
In the present forced convection flow, temperature is a passive scalar and is simply advected by the velocity, suggesting that it will respond in the same manner as velocity to the minimal-span channel. However, Pirozzoli et al. (2016) noted some differences between the passive scalar and velocity fields, especially in the outer (core) region of the flow. We will therefore compare mean velocity and temperature profiles for minimal and full-span channels.
The mean streamwise velocity profile is shown in figure 2(a), where the dotted lines indicate the unphysical region above the wall-normal critical height, $z_{c}=0.4L_{y}$, of the minimal channels. As expected, this scaling agrees well with the data, where we see that above the critical height (denoted by the vertical mark) the mean velocity increases relative to the full-span channel. Below this point we have what can be described as ‘healthy’ turbulence (Flores & Jiménez, 2010), as it is the same as in conventional (full-span) channel flow. Widening the channel by increasing $L_{y}^{+}$ extends the region of healthy turbulence further from the wall, as larger turbulent structures can fit inside the widened domain.
The region above $z_{c}$ is unphysical due to the narrow constraints of the channel and is not relevant to the near-wall flow.
The inset of this figure shows the difference in smooth- and rough-wall velocity profiles as a function of $z^{+}$. This difference reaches a constant above $z^{+}\approx 40$, which is the offset $\Delta U^{+}$ of the rough-wall flow relative to the smooth-wall flow. Since the difference is constant with $z^{+}$, the rough-wall velocity profile has the same shape as the smooth-wall profile and the outer layers are similar. This is the case for both minimal-span and full-span channels and demonstrates that the minimal channel can accurately estimate the roughness function, as already discussed in our previous work (Chung et al., 2015; MacDonald et al., 2017).
Figure 2(b) shows the temperature profile in the same format as the velocity profile of figure 2(a). The critical height scaling obtained from the velocity profiles, $z_{c}=0.4L_{y}$, is used here without alteration. There is excellent near-wall agreement between the minimal and conventional channels and it appears that the mean temperature does not increase as readily as the velocity when we are outside the healthy turbulence region (above $z_{c}$, dotted lines). As with the velocity profiles, both the smooth- and rough-wall temperature profiles appear to scale with logarithmic distance from the wall above $z^{+}\gtrsim 50$.
The inset shows the difference in smooth- and rough-wall temperature profiles, which for the conventional full-span channel flow (black line) is tending towards a constant value above $z^{+}\approx 60$. As with the velocity profile, this indicates that the smooth- and rough-wall temperature profiles are similar in the outer layer of the flow.
This supports the rough-wall logarithmic temperature profile (3), and that we only need to estimate the temperature equivalent of the roughness function, $\Delta\Theta^{+}$, to describe the temperature profile.
The difference in temperature profiles for the minimal-span channels (grey lines in inset of figure 2b) shows more variation above $z_{c}^{+}$ than the corresponding velocity difference. The narrowest span channel with $z_{c}^{+}\approx 62$ (light grey line) shows the temperature difference tending towards a value of approximately 2.2 at the channel centreline, much lower than the conventional channel centreline value (black line) of 3.0. This difference in the minimal-span channel is possibly due to a lack of statistical convergence in the outer-layer region of the channel and would require a much longer run time to converge. However, the region above $z_{c}^{+}$ is inherently unphysical due to the minimal span and resulting lack of large-scale structures, which means obtaining statistical convergence in this region is neither necessary nor relevant to the near-wall healthy turbulence (MacDonald et al., 2017). The minimal channel simulations are therefore run to ensure converged statistics in the near-wall flow, up to the critical height, $z_{c}^{+}$. As such, if we evaluate the temperature difference at the critical height $z_{c}^{+}$ to obtain $\Delta\Theta^{+}$, we observe good agreement between all three channel widths, with $\Delta\Theta^{+}\approx$ 3.0, 3.1 and 2.9 for $L_{y}^{+}=155$, 310 and the full-span case, respectively.
This result indicates that the minimal channel can be used for studying heat transfer, for both smooth- and rough-wall flows.
3.2 Effect of friction Reynolds number, $Re_{\tau}$ (set $B$)
Having validated that the minimal channel can accurately predict both the roughness function, $\Delta U^{+}$, and the temperature difference, $\Delta\Theta^{+}$, we now assess the influence of the friction Reynolds number, $Re_{\tau}$. In Chan et al. (2015), it was shown that turbulent pipe flow simulations at $Re_{\tau}\approx 180$ lead to an overestimation of the roughness function compared to $Re_{\tau}\gtrsim 360$. This is primarily due to an upward shift of the logarithmic region in the smooth-wall case, caused by the pressure gradient effect inherent to low $Re_{\tau}$ turbulent flows. Here, we repeat this validation but also investigate the effect on heat transfer for two cases of $Re_{\tau}=395$ and $Re_{\tau}=590$ (set $B$, table 1). The roughness size ($k^{+}$ and $\lambda^{+}$) and channel dimensions ($L_{x}^{+}$ and $L_{y}^{+}$) are matched in viscous units. Figure 3 shows the mean velocity and temperature profiles for these two friction Reynolds numbers. As already demonstrated in Chan et al. (2015), the mean velocity profiles (figure 3a) and velocity difference (inset) show that the Reynolds number effect is negligible for $Re_{\tau}\gtrsim 395$. The roughness function is marginally overestimated for $Re_{\tau}\approx 395$ by about $0.08U_{\tau}$, although this is similar to the level of uncertainty we would expect from the minimal-span channel (MacDonald et al., 2017).
The temperature profiles (figure 3b) and difference (inset) show a similar effect, with there being only a minor difference between $Re_{\tau}\approx 395$ and $Re_{\tau}\approx 590$. Much of this difference is in the outer-layer region of the smooth-wall flow at $Re_{\tau}\approx 395$ (solid grey line). However, the temperature difference (inset) is similar below the critical height $z_{c}^{+}$, with the $Re_{\tau}\approx 395$ case underestimating $\Delta\Theta^{+}$ by approximately $0.1\Theta_{\tau}$. While this result does not give a lower bound on the friction Reynolds number for which Reynolds number effects become negligible in heat transfer, we will assume that, as with velocity, $Re_{\tau}\gtrsim 395$ is sufficient.
4 Increasing roughness Reynolds number (set $C$)
4.1 Mean profiles
We now increase the roughness Reynolds number, $k^{+}$, towards fully rough conditions (set $C$, table 1). Figure 4(a) shows the mean velocity profiles for increasing friction Reynolds numbers, where the roughness height is fixed at $k=h/18$
(except for the smallest roughness case where $k=h/36$, to ensure $Re_{\tau}\gtrsim 395$, see §3.2).
We see that increasing $Re_{\tau}$ (and hence $L_{y}^{+}=\lambda^{+}\approx 7.1k^{+}$) leads to the capturing of a larger region of the logarithmic layer for the smooth-wall flow (solid lines). However, the roughness increasingly reduces the near-wall velocity magnitude with Reynolds number (dashed lines), indicating that the roughness function is increasing (inset of figure 4a). The rough-wall velocity profiles are plotted in figure 4(c) as a function of $z/k$. In this scaling, we see that the cases with larger $k^{+}$ (darker grey lines) are collapsing onto the fully rough asymptote of $U^{+}\approx\kappa_{m}^{-1}\log(z/k)+D$ (solid red line), where $\kappa_{m}\approx 0.4$ and $D=A_{m}-C\approx 5.1$.
In a conventional flow we would also see the centreline velocity of the rough-wall flow tending towards a constant as the skin-friction coefficient becomes constant in the fully rough regime. This is not observed here as the channel width, $L_{y}=\lambda$, increases with Reynolds number which affects the critical height, $z_{c}^{+}$, and hence centreline velocity. However, the velocity at the roughness crest (inset of figure 4c) is approximately constant, $U_{k}^{+}\equiv U^{+}(z=k)\approx 4.7$, indicating that the drag coefficient defined on this velocity (e.g., Macdonald et al., 1998; Coceal & Belcher, 2004) is also approximately constant with Reynolds number, $C_{dk}\equiv\tau_{w}/((1/2)\rho U_{k}^{2})=2/{U_{k}^{+2}}\approx 0.09$.
Figure 4(b) shows the temperature profile for the smooth- and rough-wall flows. The smooth-wall DNS data of Pirozzoli et al. (2016) at $Re_{\tau}\approx 2000$ is also shown by the dash-dotted blue line. We see excellent agreement with the present smooth-wall data below the critical height $z_{c}^{+}$ (vertical marks), further supporting the view made in §3.1 that the minimal channel can be used to study the near-wall flow of forced convection heat transfer.
Pirozzoli et al. (2016) determined that the smooth-wall temperature profile in the logarithmic region follows $\Theta^{+}=\kappa_{h}^{-1}\log(z^{+})+A_{h}$, with constants $\kappa_{h}\approx 0.46$ and $A_{h}(Pr=0.7)\approx 3.2$. As the roughness Reynolds number increases, the rough-wall temperature profiles (dashed lines) begin to collapse and follow a logarithmic trend with $z^{+}$, the same as with the smooth-wall (solid lines) but offset. This agrees with the rough-wall logarithmic temperature profile introduced through (3).
The temperature difference (inset of figure 4b) begins to collapse as well for large $k^{+}$, indicating that $\Delta\Theta^{+}$ is reaching a constant of approximately 4.4 that is independent of $k^{+}$.
While the temperature profile is collapsing in the logarithmic region for fully rough flow (figure 4b), the temperature below the roughness crests continues to increase with $k^{+}$, shown in figure 4(d).
This is consistent with the logarithmic temperature profile (3), which becomes independent of $k^{+}$ in the fully rough regime, beginning almost immediately above the crest. This would then force the crest temperature to be $\Theta_{k}^{+}\equiv\Theta^{+}(z=k)\approx\kappa_{h}^{-1}\log(k^{+})+A_{h}-\Delta\Theta^{+}$, with $A_{h}-\Delta\Theta^{+}\approx 3.2-4.4\approx-1.2$. Indeed, the inset of this figure shows that the temperature at the crest, $\Theta_{k}^{+}$, increases logarithmically with $k^{+}$, with good agreement with the logarithmic temperature profile (3) evaluated at $z=k$ (dashed red line).
Figure 5 shows the roughness function, $\Delta U^{+}$, and the temperature difference, $\Delta\Theta^{+}$, for the present roughness simulations.
This is shown with the roughness functions from Chung et al. (2015) and Chan et al. (2015) for the same sinusoidal roughness geometry, as well as for the sand grain data of Nikuradse (1933).
The sinusoidal roughness data are plotted as a function of the equivalent sand-grain roughness, $k_{s}/k\approx 4.1$, where this factor has been taken from Chung et al. (2015).
This scaling ensures the collapse of the present $\Delta U^{+}$ values with those of Nikuradse’s sand grain roughness in the fully rough regime (here, $k_{s}^{+}\gtrsim 150$),
where the roughness function scales as
$\kappa_{m}^{-1}\log(k_{s}^{+})+A_{m}-C_{N}\approx\kappa_{m}^{-1}\log(k_{s}^{+})-3.5$.
Within the transitionally rough regime ($k_{s}^{+}\lesssim 150$), the different roughness geometries have a unique behaviour and the roughness function is not guaranteed to collapse with that of Nikuradse (Jiménez, 2004).
Despite having matched roughness geometries, there is a slight difference between the pipe (black diamonds) and present minimal channel (blue circles) data for the largest $k_{s}^{+}$. This is likely
due to the fundamental differences between the two domain geometries, as well as the difference in blockage ratios ($k/h=1/6.75$ from Chan et al. (2015) versus $k/h=1/18$ here).
From the temperature difference in figure 5 it appears that, for small $k_{s}^{+}$, there may be a ‘thermodynamically smooth’ regime for heat transfer in which $\Delta\Theta^{+}\approx 0$. This would be analogous to the hydrodynamically smooth regime for momentum transfer, in which $\Delta U^{+}\approx 0$ for $k_{s}^{+}\lesssim 4$ and the drag produced by the rough wall matches that of the smooth wall (Raupach et al., 1991; Jiménez, 2004). A visual extrapolation of the present data to small $k_{s}^{+}$ suggests that this thermodynamically smooth regime would remain at larger $k_{s}^{+}$ values than the hydrodynamically smooth regime.
This implies that for small $k_{s}^{+}$ the effects of roughness are first felt in momentum transfer before affecting heat transfer,
presumably due to the present molecular Prandtl number being less than unity ($Pr=0.7$).
In this case, the thermal diffusive sublayer is slightly thicker than the viscous sublayer and would require a larger $k_{s}^{+}$ to overcome.
In the fully rough regime, the temperature difference appears to be tending towards a constant value of $\Delta\Theta^{+}\approx 4.4$. This implies that further increases to the roughness Reynolds number result in the heat-transfer coefficient (Stanton number) decreasing, consistent with the experiments of Dipprey & Sabersky (1963). Presumably this constant $\Delta\Theta_{FR}^{+}$ would be affected by the roughness geometry, and would depend on the solidity (a measure of the roughness density) among other roughness parameters.
Note that when viewed in isolation, it may appear that $\Delta\Theta^{+}$ could have reached a maximum around $k_{s}^{+}\approx 250$ and could go on to decrease at larger Reynolds numbers, as opposed to remaining constant. However, the flow appears to have reached its asymptotic fully rough state, where $\Delta U^{+}$ scales as $(1/\kappa_{m})\log(k_{s}^{+})$ and $U_{k}^{+}$ is constant (inset of figure 4c). It is therefore difficult to conceive how the flow could undergo an additional change at even higher Reynolds numbers, well within the fully rough regime, that would lead to $\Delta\Theta^{+}$ decreasing. This decrease would imply that the temperature profile would eventually return to that of the smooth-wall flow while the smooth- and rough-wall velocity profiles continue to deviate. Moreover, the crest temperature, $\Theta_{k}^{+}$ (inset of figure 4d), increases in a log-linear manner, which, for increasing $k^{+}$, is only consistent if $\Delta\Theta^{+}$ is approaching a constant value. These observations therefore all suggest that $\Delta\Theta^{+}$ remains constant in the fully rough regime.
An important quantity in heat transfer models is the turbulent Prandtl number, defined as the ratio of momentum and heat transfer eddy diffusivities,
$$Pr_{t}=\frac{\nu_{t}}{\alpha_{t}}=\frac{\langle\overline{u^{\prime}w^{\prime}}\rangle}{\langle\overline{w^{\prime}\theta^{\prime}}\rangle}\frac{\mathrm{d}\Theta/\mathrm{d}z}{\mathrm{d}U/\mathrm{d}z},$$
(10)
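In a constant-flux log layer, where both $-\langle\overline{u^{\prime}w^{\prime}}\rangle$ and $-\langle\overline{w^{\prime}\theta^{\prime}}\rangle$ are approximately unity in wall units and the mean profiles are logarithmic, definition (10) reduces to $Pr_{t}\approx\kappa_{m}/\kappa_{h}$. A minimal sketch, assuming the illustrative slope constants $\kappa_{m}=0.4$ and $\kappa_{h}=0.46$:

```python
import numpy as np

# Log-layer estimate of the turbulent Prandtl number from definition (10).
# Assuming constant-flux layers (<u'w'> ~ <w'theta'> ~ -1 in wall units)
# and logarithmic mean profiles, Pr_t reduces to kappa_m / kappa_h.
kappa_m, kappa_h = 0.4, 0.46   # illustrative slope constants

zp = np.linspace(50.0, 300.0, 500)   # nominal log-layer range in wall units
dUdz = 1.0 / (kappa_m * zp)          # d/dz of (1/kappa_m) log(z+) + A_m
dTdz = 1.0 / (kappa_h * zp)          # d/dz of (1/kappa_h) log(z+) + A_h
uw, wt = -1.0, -1.0                  # constant-flux approximation

Pr_t = (uw / wt) * (dTdz / dUdz)     # definition (10)
print(Pr_t.mean())   # = kappa_m / kappa_h ~ 0.87, close to the quoted 0.85
```

This simple ratio of the slope constants is consistent with the near-constant log-layer values reported by Pirozzoli et al. (2016).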
where $\langle\overline{u^{\prime}w^{\prime}}\rangle$ is the Reynolds shear stress and $\langle\overline{w^{\prime}\theta^{\prime}}\rangle$ is the turbulent heat flux.
Under sufficiently high Reynolds and Peclet numbers, dimensional arguments predict that $Pr_{t}$ should be constant in the logarithmic layer (Cebeci & Bradshaw, 1984).
Indeed, Pirozzoli et al. (2016) showed that the smooth-wall turbulent Prandtl number is close to unity at the wall, before reducing to become approximately constant in the logarithmic layer, with $Pr_{t}\approx 0.85$ over $z^{+}\gtrsim 100$ and $z/h\lesssim 0.5$. Studies of temporally developing boundary layers (Kozul et al., 2016) and statistically stationary homogeneous shear flows (which can be treated as a model for the logarithmic layer, see Sekimoto et al. 2016; Chung & Matheou 2012) also suggest $Pr_{t}\approx 1.0$.
In the wake region, $Pr_{t}$ reduces further towards values of 0.5, in line with free-shear flows, or the wake behind a bluff body (Cebeci & Bradshaw, 1984).
In figure 6(a), the turbulent Prandtl number is shown for increasing channel widths at fixed $Re_{\tau}=395$ (set $A$, table 1), where the data are only shown for $z<z_{c}$.
The full-span channel (black line) shows $Pr_{t}$ tending towards values of 0.85–0.9 within the logarithmic layer (around $z^{+}\approx 60$), although the relatively low Reynolds number and short logarithmic layer do not enable a constant-$Pr_{t}$ region to be obtained. Above the logarithmic layer, in the wake, the magnitude of the full-span $Pr_{t}$ continues to decrease, as discussed above.
The narrowest channel with $L_{y}^{+}\approx 153$ (light grey solid line) has a turbulent Prandtl number that is slightly less than the wider span cases, even at the wall. If we consider the turbulent momentum and heat fluxes as integrals of their respective one-dimensional spanwise energy spectra,
$\langle\overline{u^{\prime}w^{\prime}}\rangle=\int_{0}^{\infty}E_{uw}\,\mathrm{d}k_{y}$ and $\langle\overline{w^{\prime}\theta^{\prime}}\rangle=\int_{0}^{\infty}E_{w\theta}\,\mathrm{d}k_{y}$, we see that some length scales (namely $k_{y}<2\pi/L_{y}$) will be missing due to the use of the minimal channel, even below $z_{c}$. However, MacDonald et al. (2017) showed that most of the dynamically relevant scales are still captured in these energy spectra, and the difference in $Pr_{t}$ between the different channel spans close to the wall in figure 6(a) is fairly small. Moreover, $L_{y}^{+}$ increases with $Re_{\tau}$ so this effect is less significant for our larger Reynolds number cases.
More noticeable is the reduction in $Pr_{t}$ at higher $z^{+}$ values (while still below $z_{c}^{+}$) for the minimal channels, relative to the full-span channel. This is likely due to the wake region, which now starts at $z_{c}^{+}$, encroaching into the immature logarithmic layer, leading to the early reduction in $Pr_{t}$. Note that $Pr_{t}$ is a highly sensitive measure of the relative slopes of the mean velocity and temperature profiles, and requires very high Reynolds and Peclet numbers to yield meaningful results. The differences in these mean profiles, $\Delta U^{+}$ and $\Delta\Theta^{+}$, meanwhile, are much less sensitive to the slopes and do not suffer as severely from this wake-encroachment issue. They are also the primary source of uncertainty in estimating the skin-friction and heat-transfer coefficients and, as shown above, are accurately measured with the minimal channel.
Figure 6(b) shows the turbulent Prandtl number for increasing friction Reynolds numbers. We see that the rough-wall turbulent Prandtl number (dashed lines) is initially unity at the roughness crest (within the roughness sublayer) and therefore larger than that of the smooth wall at matched $z^{+}$.
For small $k^{+}$ values, the difference in the smooth- and rough-wall turbulent Prandtl numbers is minor and a reasonable collapse is observed.
For the two largest $k^{+}$ values which are nominally fully rough (dark grey dashed lines), the rough-wall $Pr_{t}$ at the crest is noticeably larger than the smooth-wall value at matched $z^{+}$.
It then rapidly reduces with wall-normal distance until it is slightly less than that of the smooth wall at $z_{c}^{+}$.
From Townsend’s outer-layer similarity hypothesis (Townsend, 1976), we would expect the smooth- and rough-wall turbulent Prandtl numbers to eventually collapse in the logarithmic layer (with constant $Pr_{t}$), assuming sufficiently large $Re_{\tau}$ and wall-normal extent of the logarithmic layer.
4.2 Skin-friction and heat-transfer coefficients
The skin-friction and heat-transfer coefficients, $C_{f}=2/{U_{bf}^{+2}}$ and $C_{h}=1/(U_{bf}^{+}\Theta_{mf}^{+})$, are given in figure 7(a) and (b) for the present smooth-wall and rough-wall data with $k/h=1/18$ (symbols). Here, the expected full-span velocity ($U_{f}$) and temperature ($\Theta_{f}$) profiles are computed by fitting the composite profile of Nagib & Chauhan (2008) for full-span channel flow to the minimal channel data for $z>z_{c}$. We use slope coefficients of $\kappa_{m}=0.4$ and $\kappa_{h}=0.46$ (Pirozzoli et al., 2016) and the same empirical wake function, $\Psi$, of Nagib & Chauhan (2008) for both velocity and temperature profiles. However, the wake parameter, $\Pi$, is set to 0.08 for velocity and 0.03 for temperature, where these constants come from fitting the outer-layer composite profile to the data of Bernardini et al. (2014) and Pirozzoli et al. (2016) for velocity and temperature, respectively. The offsets, $A_{m}$ and $A_{h}$, are set such that the profile is continuous at $z_{c}$.
Empirical correlations are also given in figure 7(a,b). The power law of Dean (1978) for the smooth-wall skin-friction coefficient,
$C_{fs}=0.073Re_{b}^{-1/4}$, is provided (dash-dotted line in figure 7a), as well as the Prandtl–von Kármán logarithmic skin-friction law (solid line). The latter comes from integrating the logarithmic smooth-wall mean velocity profile (as in (1) with $\Delta U^{+}=0$) across the entire channel to obtain an implicit equation,
$$U_{bs}^{+}=\frac{1}{\kappa_{m}}\log\left(\frac{\frac{1}{2}Re_{b}}{U_{bs}^{+}}\right)-\frac{1}{\kappa_{m}}+A_{m}.$$
(11)
This can be solved for $C_{fs}$ in terms of $Re_{b}$ using the product logarithm (or Lambert’s $\mathcal{W}$-function), resulting in
$$U_{bs}^{+}=\sqrt{\frac{2}{C_{fs}}}=\frac{1}{\kappa_{m}}\mathcal{W}\left(\frac{1}{2}Re_{b}\kappa_{m}e^{(A_{m}\kappa_{m}-1)}\right).$$
(12)
While both the power-series and log-law skin-friction coefficient models agree well with the present smooth-wall data at moderate Reynolds number (figure 7a), recent studies of smooth-wall channel flow have shown that the log-law equation agrees better with high Reynolds number data than the smooth-wall power-series correlations (Schultz & Flack, 2013; Bernardini et al., 2014).
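The equivalence of (11) and (12) is easy to verify numerically. A minimal sketch, assuming $\kappa_{m}=0.4$ and an illustrative intercept $A_{m}=5.0$, using a small Newton iteration in place of a library Lambert-$\mathcal{W}$ routine:

```python
import math

# Solve the Prandtl-von Karman law (11) for the smooth-wall bulk velocity
# via the closed form (12), using a Newton iteration for the Lambert
# W-function, then verify the result satisfies the implicit equation (11).
# kappa_m = 0.4 as in the text; A_m = 5.0 is an illustrative intercept.
kappa_m, A_m = 0.4, 5.0

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) for x > 0, via Newton on w*exp(w) = x."""
    w = math.log(1.0 + x)           # reasonable initial guess for x > 0
    for _ in range(100):
        ew = math.exp(w)
        dw = (w * ew - x) / (ew * (w + 1.0))
        w -= dw
        if abs(dw) < tol:
            break
    return w

Re_b = 2.0e4                                          # illustrative bulk Reynolds number
arg = 0.5 * Re_b * kappa_m * math.exp(A_m * kappa_m - 1.0)
U_bs = lambert_w(arg) / kappa_m                       # closed form (12)
rhs = (math.log(0.5 * Re_b / U_bs) - 1.0) / kappa_m + A_m   # right side of (11)
C_fs = 2.0 / U_bs**2
print(U_bs, rhs, C_fs)   # U_bs and rhs agree; C_fs is the friction coefficient
```

At this $Re_{b}$ the resulting $C_{fs}$ is close to Dean's power law $0.073Re_{b}^{-1/4}$, consistent with the agreement of the two correlations at moderate Reynolds numbers noted above.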
Similarly, the smooth-wall power-series heat-transfer coefficient of Kays et al. (2005) is given in figure 7(b), $C_{hs}=0.021Re_{b}^{-0.2}Pr^{-0.5}$. As with the velocity profile, the smooth-wall logarithmic temperature profile (as in (3) with $\Delta\Theta^{+}=0$) can be integrated across the entire channel to obtain a similar expression to (11), with
$$\Theta_{as}^{+}=\frac{1}{\kappa_{h}}\log\left(\frac{\frac{1}{2}Re_{b}}{U_{bs}^{+}}\right)-\frac{1}{\kappa_{h}}+A_{h},$$
(13)
where $\kappa_{h}\approx 0.46$ and $A_{h}\approx 3.2$ for the present $Pr=0.7$ flow (figure 4b).
Hence, the smooth-wall Stanton number $C_{hs}=1/(U_{bs}^{+}\Theta_{ms}^{+})$ can be estimated using (12) and (13),
where (4) is used to get the mixed-mean temperature, $\Theta_{m}^{+}$, in terms of the arithmetic mean temperature, $\Theta_{a}^{+}$.
At moderate Reynolds numbers, both the power-series correlation of Kays et al. (2005) and the log-law formulas show good agreement with the present data for the smooth-wall heat-transfer coefficient (squares in figure 7b). At higher Reynolds numbers, however, we might expect the log-law equations to perform better, as they do for the skin-friction coefficient.
The present rough-wall skin-friction coefficient (circles in figure 7a) is seen to initially increase in value in the transitionally rough regime. However, for sufficiently large bulk Reynolds number it is tending towards a constant value of $C_{f_{FR}}\approx 0.021$ (dashed line). This indicates that the flow is approaching the asymptotic fully rough state. The rough-wall heat-transfer coefficient (circles in figure 7b), meanwhile, increases to a maximum in the transitionally rough regime, before monotonically reducing in the fully rough regime.
The dotted line here shows the heat transfer model of Dipprey & Sabersky (1963, eq. 28), in which the fully rough (subscript $FR$) heat-transfer coefficient for a pipe was given as
$$C_{h_{FR}}=\frac{\frac{C_{f_{FR}}}{2}}{1+\sqrt{\frac{C_{f_{FR}}}{2}}\left[k_{f}\left(Re_{b}\sqrt{\frac{C_{f_{FR}}}{2}}\frac{k_{s}}{2h}\right)^{0.2}Pr^{0.44}-8.48\right]},$$
(14)
where $k_{s}/(2h)=\exp\left[\kappa_{m}(3.0-\sqrt{2/C_{f_{FR}}})\right]$ is the blockage in terms of the equivalent sand-grain roughness.
The constant $k_{f}$ is roughness dependent; Dipprey & Sabersky (1963) suggested $k_{f}=5.19$ for granular roughness. Here, we use $k_{f}=5.6$, obtained from a least squares fit to the fully rough data ($k^{+}\gtrsim 66$). While (14) was developed for pipe flow, we see that it correctly predicts the trend of $C_{h}$ reducing with Reynolds number for the present channel flow cases.
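As a check on the numbers, (14) is straightforward to evaluate. The following Python sketch (ours, not from the paper) uses the fully rough values quoted in the text, $C_{f_{FR}}\approx 0.021$ and $k_{f}=5.6$, together with the blockage expression given below (14):

```python
import math

KM = 0.4          # kappa_m
CF_FR = 0.021     # fully rough skin-friction coefficient (dashed line, figure 7a)
KF = 5.6          # roughness-dependent constant fitted to the fully rough data

def Ch_dipprey_sabersky(Re_b, Pr=0.7, cf=CF_FR, kf=KF):
    """Fully rough heat-transfer coefficient, eq. (14) of the text
    (Dipprey & Sabersky 1963, eq. 28), with the sand-grain blockage
    k_s/(2h) = exp[kappa_m (3.0 - sqrt(2/cf))]."""
    ks_2h = math.exp(KM * (3.0 - math.sqrt(2.0 / cf)))
    rough_Re = Re_b * math.sqrt(cf / 2.0) * ks_2h   # roughness Reynolds number
    bracket = kf * rough_Re**0.2 * Pr**0.44 - 8.48
    return (cf / 2.0) / (1.0 + math.sqrt(cf / 2.0) * bracket)
```

The model reproduces the trend in figure 7(b): $C_{h_{FR}}$ decreases monotonically with $Re_{b}$ because the bracketed term grows like $Re_{b}^{0.2}$.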
Alternatively, we can use the assumed logarithmic velocity and temperature profiles to obtain an expression for the fully rough heat transfer coefficient (Stanton number). Starting from the definition of the Stanton number and (4) we get,
$$C_{h_{FR}}=\frac{1}{U_{b_{FR}}^{+}\Theta_{m_{FR}}^{+}}=\frac{\kappa_{m}\kappa_{h}}{1+\kappa_{m}\kappa_{h}U_{b_{FR}}^{+}\Theta_{a_{FR}}^{+}},$$
(15)
where $U_{b_{FR}}^{+}$ is the fully rough bulk velocity, which can be obtained by integrating the logarithmic velocity profile (1) with $\Delta U^{+}=\kappa_{m}^{-1}\log(k^{+})+C$, to yield a constant,
$$U_{b_{FR}}^{+}=A_{m}-C-\frac{1}{\kappa_{m}}\left(1+\log\left(\frac{k}{h}\right)\right)$$
(16)
$$U_{b_{FR}}^{+}=C_{N}-\frac{1}{\kappa_{m}}\left(1+\log\left(\frac{k_{s}}{h}\right)\right),$$
(17)
and $\Theta_{a_{FR}}^{+}$ can be obtained by integrating the logarithmic rough-wall temperature profile, as in (3) with constant temperature difference $\Delta\Theta_{FR}^{+}$, to yield
$$\Theta_{a_{FR}}^{+}=\frac{1}{\kappa_{h}}\log\left(\frac{\frac{1}{2}Re_{b}}{U_{b_{FR}}^{+}}\right)-\frac{1}{\kappa_{h}}+A_{h}-\Delta\Theta_{FR}^{+}.$$
(18)
Here, the log-law constants, $A_{m}\approx 5.0$, $\kappa_{m}\approx 0.4$, $\kappa_{h}\approx 0.46$ and $A_{h}(Pr=0.7)\approx 3.2$, are all known, and $C_{N}=8.5$ is Nikuradse’s constant. This means that we only need the roughness function offset, $C$, from $\Delta U^{+}=\kappa_{m}^{-1}\log(k^{+})+C$ in (16), or alternatively the equivalent sand-grain roughness, $k_{s}$, in the form given by (17), as well as the constant $\Delta\Theta_{FR}^{+}$, to predict the Stanton number at any Reynolds number in the fully rough regime. These dynamical parameters ($k_{s}/k$ and $\Delta\Theta_{FR}^{+}$) are geometry dependent and must be determined empirically from the flow. Here, they can be obtained using the minimal channel technique; for the present sinusoidal roughness, the values are $k_{s}/k\approx 4.1$ and $\Delta\Theta_{FR}^{+}\approx 4.4$ (figure 5).
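A minimal implementation of this predictor, (15)–(18), is sketched below in Python (an illustration, not the authors' code). It takes $k_{s}/h$ and $\Delta\Theta_{FR}^{+}$ as inputs; the value $k_{s}/h=0.1$ used in the comment is a hypothetical blockage chosen for illustration only, while the log-law constants are those quoted in the text.

```python
import math

KM, KH = 0.4, 0.46      # kappa_m, kappa_h
AH, CN = 3.2, 8.5       # A_h (Pr = 0.7) and Nikuradse's constant

def Ub_FR(ks_over_h):
    """Fully rough bulk velocity, eq. (17): a constant independent of Re_b."""
    return CN - (1.0 / KM) * (1.0 + math.log(ks_over_h))

def Theta_a_FR(Re_b, ks_over_h, dTheta_FR):
    """Fully rough arithmetic-mean temperature, eq. (18)."""
    return (1.0 / KH) * math.log(0.5 * Re_b / Ub_FR(ks_over_h)) \
           - 1.0 / KH + AH - dTheta_FR

def Ch_FR(Re_b, ks_over_h, dTheta_FR=4.4):
    """Fully rough Stanton number, eq. (15); dTheta_FR = 4.4 is the value
    for the present sinusoidal roughness (figure 5)."""
    U = Ub_FR(ks_over_h)
    Th = Theta_a_FR(Re_b, ks_over_h, dTheta_FR)
    return KM * KH / (1.0 + KM * KH * U * Th)

# With a hypothetical blockage k_s/h = 0.1, Ch_FR decreases with Re_b
# because Theta_a grows logarithmically while U_b stays fixed.
```

This makes the structure of the prediction explicit: once $k_{s}$ and $\Delta\Theta_{FR}^{+}$ are known from the minimal channel, the only remaining input is the bulk Reynolds number.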
The empirical heat-transfer model of Dipprey & Sabersky (1963) in (14) as well as the integrated log-law equations in (17)–(18) are shown in figure 7(b), along with the present data (circles). Both models show good agreement with the present data at moderate Reynolds number, with the heat-transfer coefficient reducing with Reynolds number. However, the advantage of the log-law equations is that the only modelling assumption made is that the temperature profile follows a logarithmic profile across the entire channel. This is phenomenologically consistent with our understanding of forced convection wall turbulence and does not require any ad hoc terms like $k_{f}$ in (14). The integrated log-law equations can also be extended to include the effect of the wake through additive constants $L_{m}=\int_{0}^{h}(2\Pi_{m}/\kappa_{m})\Psi\,\mathrm{d}z$ and $L_{h}=\int_{0}^{h}(2\Pi_{h}/\kappa_{h})\Psi\,\mathrm{d}z$. These are small for channel flows and so do not significantly alter the results here, but are larger for pipes and boundary layers.
Figure 7(c) shows the ratio of the heat-transfer and skin-friction coefficients, $2C_{h}/C_{f}$, for the present data as well as the log-law equations.
We see that the smooth-wall ratio is constant with Reynolds number, in support of the Reynolds analogy in which momentum transfer is proportional to heat transfer. In contrast, the rough-wall ratio reduces with Reynolds number, indicating that the Reynolds analogy for rough-wall flow is breaking down for these bulk measures of momentum and heat transfer. Note that this is only in regard to the bulk quantities; the analogy may still hold within the flow when looking at quantities such as the turbulent diffusivity for momentum and heat at a given wall-normal location.
In an engineering sense, this ratio of coefficients can be regarded as the heat-transfer rate per unit pumping power (Dipprey & Sabersky, 1963), or that the roughness is advantageous for heat transfer applications when $(C_{hr}/C_{fr})>(C_{hs}/C_{fs})$.
Evidently, the present roughness and flow conditions are not as efficient as the smooth-wall flow. However, Dipprey & Sabersky (1963) noted that increasing the Prandtl number above $Pr\gtrsim 3$ may enable advantageous heat transfer for the rough-wall flow to be obtained. This favourable condition typically only occurs when the flow is in the transitionally rough regime and the rough-wall heat-transfer coefficient has reached a maximum.
Note that there are also alternative measures for the performance of heat transfer systems depending on the engineering design constraints present (Webb, 1981).
Finally, in meteorology, the logarithmic velocity profile (1) is often given as $U^{+}=(1/\kappa_{m})\log(z/z_{0m})$, where $z_{0m}$ is the roughness length for momentum transfer. This is related to the equivalent sand-grain roughness by a constant, $k_{s}=\exp[\kappa_{m}C_{N}]z_{0m}\approx 30z_{0m}$ (Jiménez, 2004). In the fully rough regime, $k_{s}/k$ and $z_{0m}/k$ are constants that only depend on the roughness geometry. In a similar manner, the logarithmic temperature profile is defined as $\Theta^{+}=(1/\kappa_{h})\log(z/z_{0h})$ where $z_{0h}$ is the roughness length for heat transfer (e.g. Owen & Thomson, 1963; Chamberlain, 1966; Wood & Mason, 1991). Relating this to (3) in the fully rough regime, we obtain
$$z_{0h}^{+}=\exp[-\kappa_{h}(A_{h}-\Delta\Theta_{FR}^{+})],$$
(19)
where for the present roughness and flow conditions, $z_{0h}^{+}\approx 1.7$. Importantly, we see that this inner-normalised roughness length for heat transfer (or any passive scalar) is constant and does not depend on the roughness height, $k$, like its momentum counterpart.
This is the same form as suggested by Garratt & Hicks (1973), although there the authors used a slightly different roughness-independent constant, $z_{0h}^{+}\approx 1/(\kappa Pr)\approx 3.5$, where
the heat transfer slope constant was assumed to be equal to the momentum constant with $\kappa=0.41$. In any case, this constant value is larger than the smooth-wall constant, which would be $z_{0hs}^{+}=\exp(-\kappa_{h}A_{h})\approx 0.23$.
Garratt & Hicks (1973) presented data from a collection of experimental studies which for moderate Reynolds numbers provided strong support for the rough-wall constant $z_{0h}^{+}$ form above.
However, at higher Reynolds numbers of $Re_{\tau}\gtrsim 10^{4}$, some of the experimental data for the ratio $z_{0m}/z_{0h}$ increased faster than predicted by a constant $z_{0h}^{+}$ (i.e. $z_{0m}/z_{0h}\approx\kappa Pr\,z_{0m}^{+}$). In the present formulation using (19), this ratio takes the form $z_{0m}/z_{0h}\approx\exp[\kappa_{h}(A_{h}-\Delta\Theta_{FR}^{+})]z_{0m}^{+}=Bz_{0m}^{+}$, where the constant $B\approx 0.58$ for the present sinusoidal roughness. This is shown in figure 7(d) and agrees well with the data for the present moderate Reynolds numbers.
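The numerical values quoted above follow directly from the stated constants; a short Python check (ours, not the authors') reproduces them:

```python
import math

KM, KH = 0.4, 0.46    # kappa_m, kappa_h
AM, AH = 5.0, 3.2     # log-law intercepts (A_h for Pr = 0.7)
CN = 8.5              # Nikuradse's constant
DTHETA_FR = 4.4       # fully rough temperature difference (figure 5)

# Sand-grain roughness vs momentum roughness length: k_s = exp(km * C_N) z_0m
ks_over_z0m = math.exp(KM * CN)               # ~ 30 (Jimenez 2004)

# Inner-normalised heat-transfer roughness lengths, eq. (19)
z0h_rough = math.exp(-KH * (AH - DTHETA_FR))  # ~ 1.7  (fully rough)
z0h_smooth = math.exp(-KH * AH)               # ~ 0.23 (smooth wall)

# Proportionality constant in z_0m / z_0h ~ B z_0m+
B = math.exp(KH * (AH - DTHETA_FR))           # ~ 0.58
```

Note that `z0h_rough` depends only on $\kappa_{h}$, $A_{h}$ and $\Delta\Theta_{FR}^{+}$, not on the roughness height $k$, in contrast with its momentum counterpart.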
A more rapid increase in the ratio for very large Reynolds numbers would correspond to $\Delta\Theta_{FR}^{+}$ decreasing.
More complex models for $z_{0m}/z_{0h}$ involving non-linear expressions of $z_{0m}^{+}$ have been suggested to account for this increase (e.g. Owen & Thomson, 1963; Brutsaert, 1975; Andreas, 1987).
However, there is large scatter in the experimental data for high Reynolds number flows, with some data even suggesting $z_{0m}/z_{0h}$ eventually reduces (Garratt & Hicks, 1973), making it difficult to assess the true behaviour of $z_{0m}/z_{0h}$ (or $\Delta\Theta_{FR}^{+}$). For completeness, while an equivalent sand-grain roughness for heat transfer, $k_{sh}$, does not appear to be used in the literature, it would take the form $k_{sh}=\exp[\kappa_{h}C_{Nh}]z_{0h}$, where $C_{Nh}$ would be the heat-transfer constant for Nikuradse’s sand grain roughness which is currently unknown.
4.3 Wall fluxes
The ratio of pressure drag to total drag (pressure plus viscous drag) is shown in figure 8(a). Equivalently, the total stress can be decomposed into pressure and viscous stress contributions, $\tau_{w}=\tau_{p}+\tau_{v}$, so that this figure also represents the inner-normalised pressure stress, $\tau_{p}^{+}$. Also shown in this figure is the pipe-flow data of Chan et al. (2015) for the same sinusoidal roughness geometry, with good agreement observed between the two data sets. From the roughness function values shown in figure 5, it appears that the two roughness cases where $k_{s}^{+}\gtrsim 275$ are tending towards the fully rough asymptote of $\kappa_{m}^{-1}\log(k_{s}^{+})+A_{m}-8.5$, where $\Delta\Theta^{+}$ is approximately constant. In figure 8, this then corresponds to the pressure drag being larger than approximately 75% of the total drag force in the fully rough regime.
Comparing figures 5 and 8(a) shows that while the temperature difference $\Delta\Theta^{+}$ has become approximately constant for $k_{s}^{+}\gtrsim 275$,
the pressure drag continues to increase in a log-linear manner with $k_{s}^{+}$.
There is no distinct change in this trend between the transitionally and fully rough regimes and the viscous drag is still somewhat significant at 25% of the total drag force.
Figure 8(b) shows the momentum and heat fluxes non-dimensionalised with the crest velocity and temperature, $U_{k}$ and $\Theta_{k}$. The momentum flux has been decomposed into its pressure and viscous drag contributions (dashed and dotted lines, respectively), where the sum of these two contributions gives the drag coefficient, $C_{dk}=2/U_{k}^{+2}\approx 0.09$, which was seen to be approximately constant in the inset of figure 4(c).
Note that the reduction in the pressure drag component for $k_{s}^{+}\approx 90$ is due to the slight increase in $U_{k}^{+}$ for this $k^{+}$; the ratio of pressure to total drag force (figure 8a) increases with $k_{s}^{+}$ as expected.
Alongside the momentum fluxes shown in figure 8(b) is the wall heat flux, $q_{w}/(\rho c_{p}U_{k}\Theta_{k})=1/(U_{k}^{+}\Theta_{k}^{+})$, which follows a similar trend to the viscous contribution. This suggests that in rough-wall forced convection, the Reynolds analogy can only be used to relate the heat flux to the viscous drag contribution of the momentum flux. The analogy does not hold for the overall momentum flux due to the pressure-drag contribution, and explains why the ratio $2C_{h}/C_{f}$ (figure 7c) reduces with Reynolds number for the rough-wall case.
The average viscous stress and heat flux on the rough wall are shown in figure 9 for transitionally rough flow ($k^{+}\approx 33$, a,b) and nominally fully rough flow ($k^{+}\approx 93$, c,d). These have been temporally averaged from data output approximately every $35\nu/U_{\tau}^{2}$, as well as averaged over each repeating roughness element in the domain.
The viscous stress (a,c) is strongest at the roughness crests as there is high-speed fluid passing over the crests, resulting in substantial shear. This region of strong viscous stress extends down towards the saddles between neighbouring crests. A region of negative stress forms on the lee side of the roughness due to recirculation behind the roughness crests.
The negative stress is stronger for the transitionally rough flow; however, the area covered by this negative stress (the dashed contour line) is approximately constant, at 40% of the roughness plan area, for both the transitionally (figure 9a) and nominally fully rough (c) flows.
The heat flux (figure 9b,d) appears similar to the viscous stress contours, with strong heat flux localised to the roughness crests and along the saddles. This occurs because these regions are exposed to faster flow and higher fluid temperature, leading to enhanced viscous stress and heat flux.
Figure 10 shows instantaneous snapshots of the streamwise velocity and fluid temperature in the streamwise–vertical plane for increasing roughness sizes. A white contour line for $u^{+}=4$ and $\theta^{+}-\theta_{w}^{+}=4$ has been selected to provide an indication of the viscous and thermal diffusive sublayers.
The smooth-wall flow (figure 10a) produces thin sublayers close to the wall, for both the velocity and temperature fields.
The roughness in the transitionally rough regime (figure 10b,c) produces much thicker sublayers. Here, the fluid temperature in most of the region below the roughness crests is near its isothermal wall value, suggesting a reduction in the local temperature gradients at the wall and hence a reduced heat transfer rate in these regions. However, the large temperature gradients localised to the roughness crests (see also figure 9b) lead to an increase in the overall heat transfer (Stanton number) for these transitional rough-wall cases relative to the smooth wall.
The black contour for the streamwise velocity fields corresponds to zero velocity, giving an indication of the region of recirculation residing behind the roughness elements.
As the roughness height increases further and becomes nominally fully rough (figure 10d,e) we see that much of the fluid below the roughness crests remains near zero, with the selected contour line of $u^{+}=4$ residing mostly above the roughness crests, indicating an increasing dominance of pressure drag.
In contrast, the thermal diffusive sublayer is seen to be descending into the roughness canopy for these nominally fully rough cases. It appears as a thin sublayer that follows the roughness geometry and resembles that of the smooth wall if the wall was contorted.
The fluid temperature below the roughness crests is therefore much larger than what might have been expected from inspection of the velocity fields.
These figures, especially the contrast in viscous and thermal diffusive sublayers in figure 10(d) and (e), emphasise the dissimilarity of heat and momentum transfer and therefore the breakdown of Reynolds analogy when roughness is present.
Moreover, the visual similarity of the fully rough thermal diffusive sublayer to that of a contorted smooth wall offers some explanation for why the rough-wall Stanton number (figure 7b) has a similar trend to that of the smooth wall.
While the present roughness with $\lambda/k\approx 7.1$ has a 17.8% increase in wetted surface area compared to the smooth wall, this alone cannot explain the 230% increase in Stanton number that the rough wall produced.
The substantial changes to the overlying flow dynamics produced by the roughness are therefore likely to account for much of this increase.
The present qualitative description related to the viscous and thermal diffusive sublayers is based on the single roughness geometry studied in this work. It will be interesting to see whether this disparity in sublayers is also observed when the
roughness is varied, for example with densely packed (short wavelength) roughness or with sharp-edged cuboid roughness. While not studied here, these geometries can be readily investigated using the minimal-span channel technique.
5 Conclusions
Rough-wall turbulent heat-transfer studies have a large parameter space, where the roughness length scales (height, wavelength and skewness, to name a few) can be varied independently of the flow properties (Reynolds number and Prandtl number). Moreover, the expense of conventional numerical simulations has made exploring this parameter space challenging.
We have demonstrated that the minimal channel can be used to study the near-wall region of forced convection turbulent flows over three-dimensional sinusoidal roughness. This enables high fidelity data of the near-wall turbulent flow to be simulated with the same level of accuracy as conventional DNS, promising an efficient means to simulate multiple roughness geometries and flow conditions.
In particular, we have shown that the minimal channel technique can place the fully rough regime of roughness within reach, where the roughness function, $\Delta U^{+}$, tends towards an asymptote of $\Delta U_{FR}^{+}=(1/\kappa_{m})\log(k^{+})+C$. The temperature difference $\Delta\Theta^{+}$ (3), meanwhile, was observed to be tending towards a constant value, which for the present sinusoidal roughness was found to be $\Delta\Theta_{FR}^{+}\approx 4.4$.
With this constant value obtained from the minimal channel, along with the equivalent sand-grain roughness $k_{s}$, the Stanton number can then be estimated for any roughness Reynolds numbers through (15). This assumes a logarithmic function for the rough-wall temperature and velocity profiles across the entire channel and resulted in good agreement with the data from our minimal channel simulations. The Stanton number was maximum in the transitionally rough regime, before starting to reduce monotonically in the fully rough regime as $\Delta\Theta^{+}$ attains a constant value.
The ratio between the momentum and heat transfer roughness lengths, $z_{0m}/z_{0h}$, depends on the Reynolds number (figure 7d). As with the smooth wall, the inner-normalised heat transfer roughness length, $z_{0h}^{+}$, becomes constant in the fully rough regime, so that the ratio $z_{0m}/z_{0h}$ is linearly proportional to $z_{0m}^{+}$. The constant of proportionality is related to the roughness-dependent constant $\Delta\Theta_{FR}^{+}$ through (19).
Analysis of instantaneous temperature fields revealed that in the fully rough regime there is a thin thermal diffusive sublayer that follows the roughness geometry, resembling a contorted smooth wall. This is in contrast to the velocity fields, which showed much of the fluid below the roughness crests being close to zero. The differences between these two fields are due to the pressure drag acting on the rough wall, which directly influences the momentum transfer but not the heat transfer. This causes the Reynolds analogy, or similarity between momentum and heat transfer, to break down so that the factor $2C_{h}/C_{f}$ is no longer constant (figure 7c). However, when looking beyond the bulk coefficients, a similar distribution pattern between the viscous stress and heat flux at the wall was observed. The reduction of the viscous stress, and hence heat flux, in the fully rough regime explains why the Stanton number reduces and $\Delta\Theta^{+}$ attains a constant value.
Acknowledgements
The authors would like to gratefully acknowledge the financial support of the Australian Research Council through a Discovery Project (DP170102595). This research was supported by computational resources provided from Melbourne Bioinformatics at the University of Melbourne, Australia, and also from the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
References
Andreas (1987)
Andreas, E. L. 1987 A theory for the scalar roughness and the scalar
transfer coefficients over snow and sea ice. Boundary-Layer Meteorol.
38, 159–184.
Bernardini et al. (2014)
Bernardini, M., Pirozzoli, S. & Orlandi, P. 2014 Velocity statistics in
turbulent channel flow up to ${R}e_{\tau}=4000$. J. Fluid Mech. 742, 171–191.
Bird et al. (2002)
Bird, R. B., Stewart, W. E. & Lightfoot, E. N. 2002 Transport
Phenomena, 2nd edn. Wiley.
Breugem et al. (2006)
Breugem, W. P., Boersma, B. J. & Uittenbogaard, R. E. 2006 The influence
of wall permeability on turbulent channel flow. J. Fluid Mech. 562, 35–72.
Brutsaert (1975)
Brutsaert, W. 1975 The roughness length for water vapor sensible heat,
and other scalars. J. Atmos. Sci. 32, 2028–2031.
Busse et al. (2017)
Busse, A., Thakkar, M. & Sandham, N. D. 2017 Reynolds-number dependence
of the near-wall flow over irregular rough surfaces. J. Fluid Mech.
810, 196–224.
Cebeci & Bradshaw (1984)
Cebeci, T. & Bradshaw, P. 1984 Physical and computational aspects
of convective heat transfer. Springer-Verlag.
Chamberlain (1966)
Chamberlain, A. C. 1966 Transport of gases to and from grass and
grass-like surfaces. Proc. Royal Soc. A 290, 236–265.
Chan et al. (2015)
Chan, L., MacDonald, M., Chung, D., Hutchins, N. & Ooi, A. 2015 A
systematic investigation of roughness height and wavelength in turbulent pipe
flow in the transitionally rough regime. J. Fluid Mech. 771,
743–777.
Chung et al. (2015)
Chung, D., Chan, L., MacDonald, M., Hutchins, N. & Ooi, A. 2015 A fast
direct numerical simulation method for characterising hydraulic roughness.
J. Fluid Mech. 773, 418–431.
Chung & Matheou (2012)
Chung, D. & Matheou, G. 2012 Direct numerical simulation of stationary
homogeneous stratified sheared turbulence. J. Fluid Mech. 696,
434–467.
Coceal & Belcher (2004)
Coceal, O. & Belcher, S. E. 2004 A canopy model of mean winds through
urban areas. Q. J. Royal Meteorol. Soc. 130, 1349–1372.
Dean (1978)
Dean, R. B. 1978 Reynolds number dependence of skin friction and other
bulk flow variables in two-dimensional rectangular duct flow. J. Fluids
Engng 100, 215–223.
Dipprey & Sabersky (1963)
Dipprey, D. F. & Sabersky, R. H. 1963 Heat and momentum transfer in
smooth and rough tubes at various Prandtl numbers. Int. J. Heat Mass
Transfer 6, 329–353.
Finnigan (2000)
Finnigan, J. 2000 Turbulence in plant canopies. Annu. Rev. Fluid
Mech. 32, 519–571.
Flack & Schultz (2014)
Flack, K. A. & Schultz, M. P. 2014 Roughness effects on wall-bounded
turbulent flows. Phys. Fluids 26, 101305.
Flores & Jiménez (2010)
Flores, O. & Jiménez, J. 2010 Hierarchy of minimal flow units in the
logarithmic layer. Phys. Fluids 22, 071704.
Garratt & Hicks (1973)
Garratt, J. R. & Hicks, B. B. 1973 Momentum, heat and water vapour
transfer to and from natural and artificial surfaces. Q. J. Royal
Meteorol. Soc. 99, 680–687.
Grossmann & Lohse (2011)
Grossmann, S. & Lohse, D. 2011 Multiple scaling in the ultimate regime
of thermal convection. Phys. Fluids 23, 045108.
Ham & Iaccarino (2004)
Ham, F. & Iaccarino, G. 2004 Energy conservation in collocated
discretization schemes on unstructured meshes. In Annual Research Briefs
2004, pp. 3–14. Center for Turbulence Research, Stanford University/NASA
Ames.
Hama (1954)
Hama, F. R. 1954 Boundary-layer characteristics for smooth and rough
surfaces. Trans. Soc. Naval Arch. Mar. Engrs 62, 333–358.
Jiménez (2004)
Jiménez, J. 2004 Turbulent flows over rough walls. Annu. Rev.
Fluid Mech. 36, 173–196.
Kader (1981)
Kader, B. A. 1981 Temperature and concentration profiles in fully
turbulent boundary layers. Int. J. Heat Mass Transfer 24,
1541–1544.
Kasagi et al. (1992)
Kasagi, N., Tomita, Y. & Kuroda, A. 1992 Direct numerical simulation of
passive scalar field in a turbulent channel flow. Trans. ASME J. Heat
Transfer 114, 598–606.
Kawamura et al. (1999)
Kawamura, H., Abe, H. & Matsuo, Y. 1999 DNS of turbulent heat transfer
in channel flow with respect to Reynolds and Prandtl number effects. Int. J. Heat Fluid Flow 20, 196–207.
Kawamura et al. (1998)
Kawamura, H., Ohsaka, K., Abe, H. & Yamamoto, K. 1998 DNS of turbulent
heat transfer in channel flow with low to medium-high Prandtl number fluid.
Int. J. Heat Fluid Flow 19, 482–491.
Kays et al. (2005)
Kays, W. M., Crawford, M. E. & Weigand, B. 2005 Convective Heat and
Mass Transfer, 4th edn. McGraw-Hill.
Kim & Moin (1989)
Kim, J. & Moin, P. 1989 Transport of passive scalars in a turbulent
channel flow. In Turbulent Shear Flows 6, pp. 85–96. Springer.
Kozul et al. (2016)
Kozul, M., Chung, D. & Monty, J. P. 2016 Direct numerical simulation of
the incompressible temporally developing turbulent boundary layer. J.
Fluid Mech. 796, 437–472.
Kraichnan (1962)
Kraichnan, R. H. 1962 Turbulent thermal convection at arbitrary Prandtl
number. Phys. Fluids 5, 1374–1389.
Leonardi et al. (2007)
Leonardi, S., Orlandi, P. & Antonia, R. A. 2007 Heat transfer in a
turbulent channel flow with roughness. In Proc. 5th Int. Symp. on
Turbulence and Shear Flow, pp. 785–790.
MacDonald et al. (2016)
MacDonald, M., Chan, L., Chung, D., Hutchins, N. & Ooi, A. 2016
Turbulent flow over transitionally rough surfaces with varying roughness
density. J. Fluid Mech. 804, 130–161.
MacDonald et al. (2017)
MacDonald, M., Chung, D., Hutchins, N., Chan, L., Ooi, A. &
García-Mayoral, R. 2017 The minimal-span channel for rough-wall
turbulent flows. J. Fluid Mech. 816, 5–42.
Macdonald et al. (1998)
Macdonald, R. W., Griffiths, R. F. & Hall, D. J. 1998 An improved method
for the estimation of surface roughness of obstacle arrays. Atmos.
Environ. 32, 1857–1864.
Mahesh et al. (2004)
Mahesh, K., Constantinescu, G. & Moin, P. 2004 A numerical method for
large-eddy simulation in complex geometries. J. Comput. Phys. 197, 215–240.
Miyake et al. (2001)
Miyake, Y., Tsujimoto, K. & Nakaji, M. 2001 Direct numerical simulation
of rough-wall heat transfer in a turbulent channel flow. Int. J. Heat
Fluid Flow 22, 237–244.
Nagib & Chauhan (2008)
Nagib, H. M. & Chauhan, K. A. 2008 Variations of von Kármán
coefficient in canonical flows. Phys. Fluids 20, 101518.
Ng et al. (2017)
Ng, C. S., Ooi, A., Lohse, D. & Chung, D. 2017 Changes in the
boundary-layer structure at the edge of the ultimate regime in vertical
natural convection. J. Fluid Mech. 825, 550–572.
Nikora et al. (2001)
Nikora, V., Goring, D., McEwan, I. & Griffiths, G. 2001 Spatially
averaged open-channel flow over rough bed. J. Hydraul. Engng 127, 123–133.
Nikuradse (1933)
Nikuradse, J. 1933 Laws of flow in rough pipes. English translation
published 1950, NACA Tech. Mem. 1292.
Owen & Thomson (1963)
Owen, P. R. & Thomson, W. R. 1963 Heat transfer across rough surfaces.
J. Fluid Mech. 15, 321–334.
Pirozzoli et al. (2016)
Pirozzoli, S., Bernardini, M. & Orlandi, P. 2016 Passive scalars in
turbulent channel flow at high Reynolds number. J. Fluid Mech. 788, 614–639.
Raupach et al. (1991)
Raupach, M. R., Antonia, R. A. & Rajagopalan, S. 1991 Rough-wall
turbulent boundary layers. Appl. Mech. Rev. 44, 1–25.
Schultz & Flack (2009)
Schultz, M. P. & Flack, K. A. 2009 Turbulent boundary layers on a
systematically varied rough wall. Phys. Fluids 21, 015104.
Schultz & Flack (2013)
Schultz, M. P. & Flack, K. A. 2013 Reynolds-number scaling of turbulent
channel flow. Phys. Fluids 25, 025104.
Sekimoto et al. (2016)
Sekimoto, A., Dong, S. & Jiménez, J. 2016 Direct numerical
simulation of statistically stationary and homogeneous shear turbulence and
its relation to other shear flows. Phys. Fluids 28, 035101.
Tiselj et al. (2001)
Tiselj, I., Bergant, R., Mavko, B., Bajsić, I. & Hetsroni, G. 2001
DNS of turbulent heat transfer in channel flow with heat conduction in the
solid wall. Trans. ASME J. Heat Transfer 123, 849–857.
Townsend (1976)
Townsend, A. A. 1976 The Structure of Turbulent Shear
Flow, 2nd edn. Cambridge University Press.
Webb (1981)
Webb, R. L. 1981 Performance evaluation criteria for use of enhanced heat
transfer surfaces in heat exchanger design. Int. J. Heat Fluid Flow
24, 715–726.
Wood & Mason (1991)
Wood, N. & Mason, P. 1991 The influence of static stability on the
effective roughness lengths for momentum and heat transfer. Q. J. Royal
Meteorol. Soc. 117, 1025–1056.
Yaglom (1979)
Yaglom, A. M. 1979 Similarity laws for constant-pressure and
pressure-gradient turbulent wall flows. Annu. Rev. Fluid Mech. 11, 505–540. |
The Effect of a Non-Thermal tail
on the Sunyaev-Zel’dovich Effect in Clusters of Galaxies
P. Blasi${}^{*}$, A. V. Olinto${}^{{\dagger}}$, and A. Stebbins${}^{*}$
${}^{*}$ Fermi National Accelerator Laboratory, Batavia, IL 60510-0500
${}^{{\dagger}}$Department of Astronomy & Astrophysics, & Enrico Fermi
Institute, University of Chicago, Chicago, IL 60637
Abstract
We study the spectral distortions of the cosmic microwave background radiation
induced by the Sunyaev-Zel’dovich (SZ) effect in clusters of galaxies when the
target electrons have a modified Maxwell-Boltzmann distribution with a
high-energy non-thermal tail. Bremsstrahlung radiation from this type of
electron distribution may explain the supra-thermal X-ray emission observed in
some clusters such as the Coma cluster and A2199 and serve as an alternative to
the classical but problematic inverse Compton scattering interpretation. We
show that the SZ effect can be used as a powerful tool to probe the electron
distribution in clusters of galaxies and discriminate among these different
interpretations of the X-ray excess. The existence of a non-thermal tail can
have important consequences for cluster based estimators of cosmological
parameters.
Subject Headings: Galaxies: clusters: general - Cosmology: theory
1 Introduction
Clusters of galaxies are powerful laboratories for
measuring cosmological parameters and for testing cosmological models of
the formation of structure in the Universe. These
associations of large numbers of galaxies are confined by a much greater
mass of dark matter, which also confines a somewhat smaller mass in very
hot gas. The galaxies and the gas are in rough virial equilibrium with the
dark matter potential well. While initially clusters were investigated
through the observed dynamics of the galaxies they contain, in recent
decades much information has been gathered from studies of the gas,
primarily via X-ray observations of bremsstrahlung emission but also
through the Sunyaev-Zel’dovich (SZ) effect
(Zeldovich & Sunyaev 1969). Interpreting these
observations requires a detailed understanding of the thermodynamic state
of the gas. With increasingly more sensitive measurements, the gas dynamics
should become clearer which would allow for a better understanding of the
structure and dynamics of clusters as well as their effectiveness as tests
of cosmological models.
Both the X-ray emission and the SZ effect are sensitive to the energy
distribution of the electrons. It is usually assumed that in the intracluster
gas the electron energy distribution is described by a thermal
(transrelativistic Maxwell-Boltzmann) distribution function. The typical
equilibration time for the bulk of this hot and rarefied electron gas is of
order $\sim 10^{5}$ years and is mainly determined by electron-electron Coulomb
scattering (electron-proton collisions are much less efficient). This time
rapidly increases when the electron energy becomes appreciably larger than the
thermal average, so that thermalization takes longer for higher energy
electrons. In the absence of processes other than Coulomb scatterings, the
electron distribution rapidly converges to a Maxwell-Boltzmann distribution.
However, the fact that the intracluster gas may be (and actually is often
observed to be) magnetized, can change this simple scenario: for instance,
cluster mergers can modify the electron distribution either by producing shocks
that diffusively accelerate part of the thermal gas, or by inducing the
propagation of MHD waves that stochastically accelerate part of the electrons
and heat most of the gas (Blasi 1999a).
Although the bulk of the electron
distribution is likely to maintain its thermal energy distribution, higher
energy electrons, more weakly coupled to the thermal bath, may acquire a
significantly non-thermal spectrum (Blasi 1999a).
Until recently, X-ray observations could only probe energies below
$\sim 10$ keV, where the observed radiation is consistent with
bremsstrahlung emission from the intracluster plasma with a thermal electron
distribution with temperatures in the $1-20$ keV range. The recent
detection of a hard X-ray component in excess of the thermal spectrum of the
Coma cluster (Fusco-Femiano et al. 1999) may be the
first indication that the particle
distribution in (some) clusters of galaxies contains a significant
non-thermal component. Observations of Abell 2199 (Kaastra et al. 1999)
show a
similar excess, while no excess has been detected in Abell 2319
(Molendi et al. 1999); thus, the source of this effect
may not be universal.
As argued above, the presence of magnetic fields in the intracluster gas allows
for acceleration processes that can modify the details of the heating
processes, so that the electron energy distribution may differ from a
Maxwell-Boltzmann. In this case, the bremsstrahlung emission from a modified
Maxwell-Boltzmann electron gas can account for the observed X-ray spectra, up
to the highest energies accessible to current X-ray observations
(Ensslin, Lieu & Biermann 1999; Blasi 1999a).
This model works as an alternative to the more
traditional interpretation based on the inverse Compton scattering (ICS)
emission from a population of shock accelerated ultra-relativistic electrons
(Volk & Atoyan 1999). The ICS model has many
difficulties such as the requirement that
the cosmic ray energy density be comparable to the thermal energy in the gas
(Ensslin et al. 1999; Blasi & Colafrancesco 1999). This large cosmic ray energy
density might be hard to reconcile with the nature of cosmic ray sources in
clusters (Berezinsky, Blasi & Ptuskin 1997) and with
gamma ray observations (Blasi 1999).
Moreover, the combination of X-ray and radio observations within
the ICS model strongly indicates a very low magnetic field,
$B\sim 0.1\mu G$, much lower than the values derived from Faraday rotation
measurements (Kim et al. 1990; Feretti et al. 1995),
which by themselves represent only lower limits to the field.
The best way to resolve the question of whether the observed hard X-rays are
due to ICS or are the first evidence for a modified thermal electron
distribution in clusters is to probe directly such a distribution.
We propose that this probe can be achieved by detailed observations of the
SZ effect, which is the change in brightness temperature of the cosmic
microwave background (CMB) photons when they traverse a hot electron gas
such as the gas in clusters. We discuss the SZ effect in detail in the next
section, where the main results are also discussed. Additional implications
of the scenario proposed here are presented in section 3.
2 The SZ effect as a probe of non-thermal processes
In this section, we calculate the SZ effect
for a modified electron distribution, including a high energy tail. We
follow the procedure outlined by Birkinshaw (1999).
Photons of the CMB propagating in a gas of electrons are Compton scattered and
their energy spectrum is modified. As long as the
center-of-mass energy of the collision is less than $m_{\rm e}c^{2}$, the
scattering is accurately described by the Thomson differential cross-section.
For CMB photons at low redshift this only requires that the electron
energy in the cosmic rest-frame be less than $\sim 1$ TeV. For scattering
of a photon with initial frequency $\nu_{\rm i}$, off an isotropic
distribution of electrons each with speed $v$, the probability distribution
of the scattered photon having frequency $\nu_{\rm i}(1+\Delta)$ is
(Stebbins 1997)
$$P(\Delta,\beta)\,d\Delta={\overline{F}(\Delta,\beta\,{\rm sgn}(\Delta))\over(1+\Delta)^{3}}\,d\Delta\ ,\qquad\Delta\in\left[-{2\beta\over 1+\beta},{2\beta\over 1-\beta}\right]$$
(1)
where $\beta={v\over c}$ and
$$\overline{F}(\Delta,b)=\Biggl{|}{3(1-b^{2})^{2}(3-b^{2})(2+\Delta)\over 16b^{6}}\,\ln{(1-b)(1+\Delta)\over 1+b}+{3(1-b^{2})(2b-(1-b)\Delta)\over 32b^{6}(1+\Delta)}\times\left(4(3-3b^{2}+b^{4})+2(6+b-6b^{2}-b^{3}+2b^{4})\Delta+(1-b^{2})(1+b)\Delta^{2}\right)\Biggr{|}\ .$$
(2)
If instead of a fixed speed, we consider the scattering off electrons
with a distribution of speeds,
${\rm p}(\beta)\,d\beta$, the distribution of $\Delta$ after one scattering
is
$$P_{1}(\Delta)=\int_{|\Delta|/(2+\Delta)}^{1}d\beta\,{\rm p}(\beta)\,P(\Delta,\beta)\ .$$
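To make these expressions concrete, the kernel of eqs. (1)-(2) and the single-scattering integral can be evaluated numerically. The sketch below is an illustration, not the authors' code (the function names are ours); it transcribes the formulas directly and checks that, for a fixed speed $\beta=0.5$, $P(\Delta,\beta)$ integrates to unity over its support:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal quadrature (avoids NumPy version differences)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def Fbar(D, b):
    """Scattering kernel of eq. (2) (Stebbins 1997)."""
    t1 = (3*(1 - b**2)**2*(3 - b**2)*(2 + D)/(16*b**6)
          * np.log((1 - b)*(1 + D)/(1 + b)))
    t2 = (3*(1 - b**2)*(2*b - (1 - b)*D)/(32*b**6*(1 + D))
          * (4*(3 - 3*b**2 + b**4)
             + 2*(6 + b - 6*b**2 - b**3 + 2*b**4)*D
             + (1 - b**2)*(1 + b)*D**2))
    return np.abs(t1 + t2)

def P(D, beta):
    """Eq. (1): density of the fractional frequency shift Delta at fixed speed."""
    return Fbar(D, beta*np.sign(D))/(1 + D)**3

def P1(D, beta_grid, p_beta):
    """Shift density after one scattering off a sampled speed distribution p(beta)."""
    mask = beta_grid > abs(D)/(2 + D)      # minimum speed able to produce shift D
    return trap(p_beta[mask]*P(D, beta_grid[mask]), beta_grid[mask])

# check: at fixed beta the kernel is a normalized probability density in Delta
beta = 0.5
neg = np.linspace(-2*beta/(1 + beta) + 1e-6, -1e-6, 20000)
pos = np.linspace(1e-6, 2*beta/(1 - beta) - 1e-6, 20000)
norm = trap(P(neg, beta), neg) + trap(P(pos, beta), pos)
print(norm)   # should come out close to 1
```

Given any tabulated ${\rm p}(\beta)$, `P1` then evaluates the post-scattering shift distribution by the same quadrature.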
This expression can be easily applied to determine the change in the
spectrum of the CMB as seen through the hot gas in a cluster of
galaxies. Since clusters have a small optical depth to Compton scattering
($\sim 10^{-2}$), the fraction of photons which are scattered is given by the
optical depth,
$\tau_{\rm e}=\sigma_{\rm T}N_{\rm e}$, where $N_{\rm e}$ is the projected
surface density of free electrons.
The change in brightness of the CMB at frequency $\nu$
due to the SZ effect is then given by
$$\Delta I(\nu)=\frac{2h\,\nu^{3}}{c^{2}}\,\tau_{\rm e}\int_{-1}^{+\infty}d\Delta\,P_{1}(\Delta)\,\left[{(1+\Delta)^{3}\over e^{(1+\Delta)\,x}-1}-{1\over e^{x}-1}\right],$$
(4)
where $x\equiv{h\nu\over k_{\rm B}T_{\rm CMB}}$, $T_{\rm CMB}$ is the CMB
temperature at the present epoch, and $k_{\rm B}$ is
Boltzmann’s constant. It
is conventional in CMB studies to use the change in the thermodynamic
brightness temperature rather than the change in brightness, the former
being given by
$$\frac{\Delta T}{T_{\rm CMB}}={(e^{x}-1)^{2}\over x^{4}\,e^{x}}\,{\Delta I\over I_{0}}$$
(5)
where $I_{0}\equiv{2(k_{B}T_{\rm CMB})^{3}\over(hc)^{2}}$.
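Equation (5) simply converts a brightness change into an equivalent thermodynamic temperature change. A quick self-consistency check (ours, not from the paper): if the brightness change comes from raising a blackbody's temperature by a small fraction $\epsilon$, eq. (5) must return $\Delta T/T_{\rm CMB}=\epsilon$ at every frequency.

```python
import numpy as np

def bb_intensity(x, eps=0.0):
    """Blackbody intensity in units of I0 at temperature T_CMB*(1+eps).

    With x = h*nu/(k_B*T_CMB), raising T rescales the exponent to x/(1+eps)."""
    return x**3/np.expm1(x/(1.0 + eps))

def dT_over_T(x, dI_over_I0):
    """Eq. (5): brightness change -> thermodynamic temperature change."""
    return np.expm1(x)**2/(x**4*np.exp(x))*dI_over_I0

eps = 1e-5
x = np.linspace(0.5, 10.0, 20)
dI = bb_intensity(x, eps) - bb_intensity(x)
recovered = dT_over_T(x, dI)
print(recovered/eps)   # -> approximately 1 at every frequency
```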
For very non-relativistic electrons, $P_{1}(\Delta)$ is narrowly peaked and can
be accurately estimated via a first-order Fokker-Planck approximation. This gives
the classical formula (Zeldovich & Sunyaev 1969)
$\frac{\Delta T}{T_{\rm CMB}}=y\,\left(x{e^{x}+1\over e^{x}-1}-4\right),$
where $y={1\over 3}\tau_{\rm e}\,\langle\beta^{2}\rangle$. In this limit the
shape of the spectral distortion yields no useful information; only the
amplitude, $y$, is interesting, and it depends only on the second moment of ${\rm p}(\beta)$. Fortunately, the gas in rich clusters is hot enough for relativistic
corrections to become important, leading to deviations from this classical
formula at the $\sim$10% level
(Birkinshaw 1999, Rephaeli 1995, Stebbins 1997, Challinor & Lasenby 1998,
Itoh, Kohyama & Nozawa 1998).
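The classical spectral shape $g(x)=x\,(e^{x}+1)/(e^{x}-1)-4$ can be explored directly. The illustrative sketch below locates its null by bisection, recovering the familiar zero-crossing of the thermal SZ distortion at $x_{0}\simeq 3.83$ (about 217 GHz) and the Rayleigh-Jeans decrement $g\to-2$ at low frequency:

```python
import math

def g(x):
    """Classical (non-relativistic) SZ spectral shape: dT/T_CMB = y * g(x)."""
    return x*(math.exp(x) + 1.0)/(math.exp(x) - 1.0) - 4.0

def bisect(f, a, b, tol=1e-10):
    """Locate a sign change of f on [a, b] by bisection."""
    fa = f(a)
    while b - a > tol:
        m = 0.5*(a + b)
        if fa*f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5*(a + b)

x0 = bisect(g, 1.0, 10.0)
print(x0)        # -> about 3.830, the null of the thermal distortion
print(g(0.01))   # -> about -2, the Rayleigh-Jeans decrement
```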
Through these relativistic
corrections, changes in the electron energy distribution can be measured by the
modified shape of the SZ spectrum, hence the shape of the SZ effect can be used
to differentiate between thermal and non-thermal models. Even without spectral
information, non-thermality can be inferred by the comparison of the X-ray
flux and temperature with the amplitude of $\Delta T_{\rm SZ}$, however
this requires a detailed model of the density structure of the cluster
since the SZ effect and bremsstrahlung emission scale differently with
density.
The SZ effect is usually computed assuming a thermal ${\rm p}(\beta)$, but
here we include the effect of a non-thermal tail. We adopt the model for the
distribution function used by Ennslin et al. (1998)
which fits both the non-thermal
hard X-ray data and the thermal soft X-ray data. In particular, a thermal
distribution for momenta smaller than $p^{*}$
($\equiv m_{\rm e}c\,\beta^{*}\gamma^{*}$)
is matched to a power law distribution in momentum above $p^{*}$, and cutoff at
momentum $p_{\rm max}$ ($\equiv m_{\rm e}c\,\beta_{\rm max}\gamma_{\rm max}$)
i.e.
$${\rm p}(\beta)={C\gamma^{5}\beta^{2}\over\Theta\,K_{2}({1\over\Theta})}\times\left\{\begin{array}{ll}\exp\left(-{\gamma\over\Theta}\right)&\quad\beta\in[0,\beta^{*}]\\[4pt]\exp\left(-{\gamma^{*}\over\Theta}\right)\left({\beta^{*}\gamma^{*}\over\beta\gamma}\right)^{\alpha+2}&\quad\beta\in[\beta^{*},\beta_{\rm max}]\\[4pt]0&\quad\beta\in[\beta_{\rm max},1)\end{array}\right.\ .$$
(6)
Here $\gamma={1\over\sqrt{1-\beta^{2}}}$,
$\gamma^{*}={1\over\sqrt{1-{\beta^{*}}^{2}}}$, $\Theta={kT\over m_{e}c^{2}}$ gives the
temperature of the low energy thermal distribution, and $C$ ($\approx 1$)
normalizes the function to unit total probability. For instance, in the model
proposed by Blasi (1999a), a cutoff at
$\beta_{\rm max}\gamma_{\rm max}\sim 1000$ arises naturally and ensures that the electrons in the tail do not affect
the synchrotron radio emission.
For $\gamma_{*}\gg 1$ one finds $C=0.982$, indicating that only 1.8% of the
electrons are in the non-thermal tail, however the electron kinetic energy is
increased by 73% and the electron pressure by 48%, so the hydrodynamical
properties of the gas can be greatly influenced by the non-thermal component.
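Equation (6) can be normalized by direct integration. The sketch below is our implementation, not the authors'; $K_{2}$ is evaluated from its standard integral representation to avoid extra dependencies, and the parameters are the canonical ones quoted in the text ($kT=8.21$ keV, $\beta^{*}\gamma^{*}=0.5$, $\alpha=2.5$, $\beta_{\rm max}\gamma_{\rm max}=1000$):

```python
import numpy as np

THETA = 8.21/511.0                     # kT/(m_e c^2) for kT = 8.21 keV
PG_STAR, PG_MAX, ALPHA = 0.5, 1000.0, 2.5
GAMMA_STAR = np.hypot(1.0, PG_STAR)    # gamma at the matching momentum
BETA_STAR = PG_STAR/GAMMA_STAR
BETA_MAX = PG_MAX/np.hypot(1.0, PG_MAX)

def trap(y, x):
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def K2(z):
    """Modified Bessel K_2 via K_nu(z) = int_0^inf e^{-z cosh t} cosh(nu t) dt."""
    t = np.linspace(0.0, 3.0, 30000)
    return trap(np.exp(-z*np.cosh(t))*np.cosh(2.0*t), t)

def p_unnorm(beta):
    """Eq. (6) with C = 1: thermal core matched to a power-law tail."""
    gamma = 1.0/np.sqrt(1.0 - beta**2)
    pref = gamma**5*beta**2/(THETA*K2(1.0/THETA))
    core = np.exp(-gamma/THETA)
    tail = np.exp(-GAMMA_STAR/THETA)*(PG_STAR/(beta*gamma))**(ALPHA + 2)
    return pref*np.where(beta <= BETA_STAR, core, tail)

b = np.linspace(1e-6, BETA_MAX - 1e-9, 400000)
C = 1.0/trap(p_unnorm(b), b)           # normalization constant of eq. (6)
tail_frac = C*trap(p_unnorm(b[b > BETA_STAR]), b[b > BETA_STAR])
print(C, tail_frac)   # C slightly below 1; a few-percent tail (text quotes 0.982 and 1.8%)
```

The exact figures depend on the quadrature, but the qualitative picture of the text (a near-unity $C$ with a percent-level tail fraction) emerges directly.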
The bremsstrahlung emissivity is given by
$$q_{brem}(k_{\gamma})=n_{gas}\int dp\;n_{e}(p)\,v(p)\,\sigma_{B}(p,k_{\gamma}),$$
where $n_{gas}$ is the gas density in the cluster, $v(p)$ is the velocity
of an electron with momentum $p$ and $k_{\gamma}$ is the photon momentum. The
bremsstrahlung cross section, $\sigma_{B}$, is taken from Haug (1997).
We assumed for simplicity that the cluster has
constant density and temperature, but our results can be easily
generalized to the more realistic spatially varying case.
As shown by Ensslin et al. (1999), there is a
wide region in the $p^{*}$-$\alpha$
parameter space that matches the observations. We choose the values
$\beta^{*}\gamma^{*}=0.5$ and $\alpha=2.5$ that provide a good fit to the overall
X-ray data, as shown in Fig. 1, where the thermal component has a temperature
$T=8.21$ keV. The data points are from BeppoSAX
(Fusco-Femiano 1999) observations,
while the thick curve is the result of our calculations for a suitable choice
of the emission volume.
The basic question that we want to answer is whether the non-thermal tail
in the electron distribution produces distortions in the CMB radiation that
can be distinguished from the thermal SZ effect. To answer this question,
we calculate the SZ spectrum using eq. (4),
plotting the results in Fig. 2, for a thermal model and two non-thermal
models, each based on Coma. There is an appreciable difference between the
curves, as large as $\sim 60\%$ at high frequencies ($x>5$). At low
frequencies ($x<1.7$), the region currently probed by most SZ observations, the
relative difference is at the level of $\sim 10-20\%$.
To establish the existence of a non-thermal contribution to the SZ effect, say
in Coma, one should measure $\Delta T$ at three or more frequencies. While
$T_{\rm e}$ is well constrained by X-ray measurements, $\tau_{\rm e}$ is not,
and in addition the SZ distortion is contaminated by a frequency independent
constant, $\Delta T_{\rm CMBR}+\Delta T_{\rm kSZ}$, i.e. the sum of the
background primordial CMBR anisotropy, and the kinematic SZ effect caused by a
line-of-sight velocity, $v_{\rm c}$, in the CMBR frame. Two measurements are
required to determine these unknowns before one is able to detect
non-thermality. In fig. 3 we estimate the difference in $\Delta T$ for a
thermal and non-thermal spectrum after allowing for these unknowns. The
residual spectral differences remain at both low and high frequencies, and might
be accessible to ground-based observations. From space, a non-thermal signature should
be detectable by the Planck Surveyor, but not by MAP, mainly due to sensitivity
and beam dilution rather than frequency coverage.
Of particular interest observationally is the frequency of the zero SZ spectral
distortion, $x_{0}$, defined by $\Delta I(x_{0})=\Delta T(x_{0})=0$. Measuring the
difference in the CMB flux on and off the cluster near the zero allows
the measurement of small deviations from the classical behaviour with
only moderate requirements on the calibration of the detector, and is very
sensitive to $v_{\rm c}$. For a thermal plasma
(Birkinshaw 1999),
$x_{0}=3.830\,\left(1+1.13\,\Theta+1.14\,{\beta_{\rm c}\over\Theta}\right)+{\cal O}(\Theta^{2},\beta_{\rm c}^{2})$,
where $\Theta={kT_{e}\over m_{\rm e}c^{2}}$, $\beta_{\rm c}={v_{\rm c}\over c}$.
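The leading-order dependence of $x_{0}$ on temperature and peculiar velocity can be read off this expression directly; the sketch below (illustrative, assuming $m_{\rm e}c^{2}=511$ keV) evaluates it for the two temperatures discussed in the text:

```python
M_E_C2 = 511.0        # electron rest energy in keV
C_KM_S = 2.998e5      # speed of light in km/s

def x0_thermal(kT_keV, v_c_km_s=0.0):
    """Zero-crossing of the thermal SZ distortion, to first order in
    Theta = kT/(m_e c^2) and beta_c = v_c/c (Birkinshaw 1999)."""
    theta = kT_keV/M_E_C2
    beta_c = v_c_km_s/C_KM_S
    return 3.830*(1.0 + 1.13*theta + 1.14*beta_c/theta)

print(x0_thermal(8.21))    # canonical temperature, v_c = 0
print(x0_thermal(18.62))   # -> about 3.988, matching the non-thermal shift
```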
This equation is no longer valid for a non-thermal electron distribution.
For our canonical parameters, no cutoff, and $v_{\rm c}=0$, we find that $x_{0}$
is shifted to 3.988, the same as would be obtained for a thermal distribution
with an unreasonably large temperature of 18.62 keV, and $v_{\rm c}=0$, or
with the “correct” temperature (8.21 keV) and $v_{\rm c}=111\,$km/sec.
Even with our non-thermal tail, it is the velocity which mostly determines the
value of $x_{0}$, although the non-thermal electrons can bias the $v_{\rm c}$
determinations by $\sim+100$ km/sec (i.e. away from the observer).
3 Other Implications
In this section we mention other important consequences of the existence of a
non-thermal electron distribution. As noted above, the non-thermal component
might correspond to only a few percent in additional electrons which do
not contribute significantly to the nearly thermal 1-10 keV X-ray
emission, while at the same time the electron pressure may be increased
by nearly a factor of two (we have no evidence of whether there is a similar
increase in the ion pressure). Many cluster mass estimates that are
based on X-ray observations use the hydrostatic relation $M_{\rm c}\propto\nabla p/\rho$; if the pressure has been significantly
under-estimated due to non-thermal electrons, the cluster mass would also
be underestimated. Cluster masses play an important role in normalizing
the amplitude of inhomogeneities in cosmological models, and the
non-thermal electron populations may lead to an underestimate in this
cluster normalization. The baryon fraction in clusters has also been
used as an indicator of the universal baryon-to-mass ratio, $\Omega_{\rm b}/\Omega_{\rm m}$. If a cluster mass is underestimated due to
non-thermal electrons then the cluster baryon fraction will be
overestimated. Note that the Coma cluster, which does have a non-thermal
X-ray excess, has played a particularly important role in cluster
$\Omega_{\rm b}/\Omega_{\rm m}$ estimates
(White et al. 1993) although optical mass
estimates are also used here.
These cosmological consequences would be true even if the
excess pressure was provided by a population of relativistic cosmic rays,
as discussed by Ensslin et al. (1997).
Other implications are specific to the non-thermal tail scenario:
using a combination of X-ray and SZ measurements, clusters have been used to
estimate Hubble’s constant, $H_{0}\propto I_{\rm X}/(\Delta T_{\rm SZ})^{2}$
(Birkinshaw 1999).
We have shown that a non-thermal electron distribution
generally increases $\Delta T_{\rm SZ}$ for fixed $\tau_{\rm e}$ and $\Theta$,
and therefore one should use a larger proportionality constant when
non-thermal electrons are present. Therefore cluster estimates of $H_{0}$
without taking into account a non-thermal electron distribution would
under-estimate $H_{0}$.
If our model of the non-thermal tail held universally then naive estimates of
$M_{\rm c}$, $\Omega_{\rm b}/\Omega_{\rm m}$, and $H_{0}$ should be respectively
adjusted upward, downward, and upward by tens of percent. However, estimates of
cosmological parameters using clusters generally make use of measurements of an
ensemble of clusters. Supra-thermal X-ray emission does appear in two of three
clusters, but the statistics are not good enough for an accurate prediction of
how frequently a non-thermal electron distribution might be present in a sample
of clusters. Therefore the overall bias introduced in parameter estimates
is necessarily uncertain. In any individual cluster the bias in a parameter
estimator will depend on the spatial distribution of the non-thermal electrons,
which is also uncertain and not well-constrained by present hard X-ray
measurements. The important point is that the magnitude of cosmological
parameter mis-estimation might be quite large.
Confirmation or refutation of the hypothesis that the X-ray excess is due to a
non-thermal tail will have important consequences not only for the
understanding of cluster structure but for cosmology as well. We argue that SZ
measurements are the best way to test this hypothesis, and that this is within
the capabilities of present technology.
4 Acknowledgements
We are grateful to M. Bernardi for a useful discussion. The work of P.B. and
A.S. was funded by the DOE and the NASA grant NAG 5-7092 at Fermilab. The work
of A.V.O. was supported in part by the DOE through grant DE-FG0291 ER40606, and
by the NSF through grant AST-94-20759.
References
Berezinsky, V.S., Blasi, P., & Ptuskin, V.S. 1997, ApJ, 487, 529
Birkinshaw, M. 1999, Phys. Rep., 310, 97
Blasi, P. 1999, ApJ, 524, 603
Blasi, P. 1999a, ApJ Lett., in press (preprint astro-ph/0001344)
Blasi, P., & Colafrancesco, A. 1999, Astropart. Phys., 12, 169
Challinor, A., & Lasenby, A. 1998, ApJ, 499, 1
Ensslin, T. A., Biermann, P. L., Kronberg, P. P., & Wu, X.-P. 1997, ApJ, 477, 560
Ensslin, T. A., Lieu, R., & Biermann, P. L. 1999, A&A, 344, 409
Feretti, L., Dallacasa, D., Giovannini, G., & Taglianai, A. 1995, A&A, 302, 680
Fusco-Femiano, R. 1999, private communication
Fusco-Femiano, R., Dal Fiume, D., Feretti, L., Giovannini, G., Grandi, P., Matt, G., Molendi, S., & Santangelo, A. 1999, ApJ, 513, L21
Haug, E. 1997, A&A, 326, 417
Herbing, T., Lawrence, C. R., Readhead, A. C. S., & Gulkis, S. 1995, ApJ, 449, L5
Itoh, N., Kohyama, Y., & Nozawa, S. 1998, ApJ, 502, 71
Kaastra, J.S., Lieu, R., Mittaz, J.P.D., Bleeker, J.A.M., Mewe, R., Colafrancesco, S., & Lockman, F.J. 1999, preprint astro-ph/9905209
Kim, K.-T., Kronberg, P. P., Dewdney, P. E., & Landecker, T. L. 1990, ApJ, 355, 29
Molendi, S., Grandi, S., Fusco-Femiano, R., Colafrancesco, S., Fiore, F., Nesci, R., & Tamurelli, F. 1999, ApJ, 525, L73
Rephaeli, Y. 1995, ApJ, 445, 33
Stebbins, A. 1997, in The Cosmic Microwave Background, eds. C.H. Lineweaver, J.G. Bartlett, A. Blanchard, M. Signore, & J. Silk (Dordrecht: Kluwer), pp. 241-270
Volk, H.J., & Atoyan, A.M. 1998, preprint astro-ph/9812458
White, S. D. M., Navarro, J. F., Evrard, A. E., & Frenk, C. S. 1993, Nature, 366, 429
Zel'dovich, Y. B., & Sunyaev, R.A. 1969, Ap. & Sp. Sci., 4, 301
A systematic investigation of classical causal inference strategies under mis-specification due to network interference
Vishesh Karwa
Edoardo M. Airoldi
Abstract
We systematically investigate issues due to mis-specification that arise in estimating causal effects when (treatment) interference is informed by a network available pre-intervention, i.e., in situations where the outcome of a unit may depend on the treatment assigned to other units. We develop theory for several forms of interference through the concept of "exposure neighborhood", and develop the corresponding semi-parametric representation for potential outcomes as a function of the exposure neighborhood. Using this representation, we extend the definition of two popular classes of causal estimands considered in the literature, marginal and average causal effects, to the case of network interference. We then turn to characterizing the bias and variance one incurs when combining classical randomization strategies (namely, Bernoulli, Completely Randomized, and Cluster Randomized designs) and estimators (namely, difference-in-means and Horvitz-Thompson) used to estimate the average treatment effect and the total treatment effect, under mis-specification due to interference. We illustrate how difference-in-means estimators can have arbitrarily large bias when estimating average causal effects, depending on the form and strength of interference, which is unknown at the design stage. Horvitz-Thompson estimators are unbiased when the correct weights are specified. Here, we derive the Horvitz-Thompson weights for unbiased estimation of different estimands, and illustrate how they depend on the design, the form of interference (which is unknown at the design stage), and the estimand. More importantly, we show that Horvitz-Thompson estimators are inadmissible for a large class of randomization strategies in the presence of interference. We develop new model-assisted and model-dependent strategies to improve Horvitz-Thompson estimators, and we develop new randomization strategies for estimating the average treatment effect and total treatment effect.
Keywords: Causal inference; potential outcomes; average treatment effect; total treatment effect; interference; network interference; statistical network analysis.
00footnotetext: Vishesh Karwa is an Assistant Professor in the Department of Statistical Science, Fox Business School at Temple University. Edoardo M. Airoldi is the Millard E. Gladfelter Professor of Statistics & Data Science and the Director of the Data Science Center at the Fox Business School, Temple University.
This work was partially supported
by the National Science Foundation under grants
CAREER IIS-1149662 and IIS-1409177,
by the Office of Naval Research under grants
YIP N00014-14-1-0485 and N00014-17-1-2131, to Harvard University,
and
by a Shutzer Fellowship and a Sloan Research Fellowship to EMA.
Contents
1 Introduction
1.1 Summary of Contributions and Organization
2 Overview of the main results: Modes of failures and solutions
3 Revisiting the Potential Outcomes framework under arbitrary treatment interference
3.1 Potential Outcomes under arbitrary interference
3.2 Modeling Potential Outcomes under Network Interference
3.2.1 Interference Neighborhood
3.2.2 Exposure Models
3.2.3 Structural Models
3.2.4 Some models of Potential Outcomes under interference
3.3 Defining Causal Effects under Network Interference
3.4 Designs, Estimators and Strategies
3.5 Existence of estimators
3.6 Commonly used designs and estimators
4 Analytical insights for Difference-in-Means Estimators
4.1 Sources of bias in estimating the direct and the total effect
4.2 Characterization of bias under various models of Potential Outcomes
4.2.1 Symmetric Exposure Model
4.2.2 Additive Symmetric Exposure model
4.2.3 Symmetric Additive Linear exposure
4.2.4 Binary Exposure Model
5 Linear Unbiased Estimators
5.1 Horvitz-Thompson Estimator
5.2 Inadmissibility of the Horvitz-Thompson estimator
5.3 Improving the Horvitz-Thompson Estimator
5.3.1 Generalized Linear Estimators
5.3.2 Model dependent Unbiased Estimation
5.3.3 Model assisted Estimation
6 New Designs for estimating ATE
6.1 Re-randomization for estimating $\beta_{1}$ and $\beta_{2}$
6.2 The Independent Set Design for estimating Direct Effect
6.3 Cluster Randomized Design for estimating Total Treatment Effect
7 Discussion
A Numerical results
A.1 Bias of the naive Estimator
A.2 Estimation of Direct and Total Effects
B Variance of Estimators
B.1 Sources of Variation of the Horvitz-Thompson Estimator
B.2 Sources of Variation in estimating the direct effect using difference of means estimator
B.2.1 Symmetric Additive linear model
B.2.2 Binary Additive exposure model
C Unbiasedness of difference-in-means estimators for estimating marginal estimands
D Bias of the difference in means estimator for $TTE$
E Bias of the naive estimator for the direct effect under Cluster Randomized Design
F Proofs
F.1 Proof of Proposition 3.1
F.2 Proof of Proposition 3.2
F.3 Proof of Proposition 3.3
F.4 Proof of Proposition 3.4
F.5 Proof of Proposition 3.5
F.6 Proof of Theorem 3.1
F.7 Proof of Proposition C.1
F.8 Proof of Proposition 4.1
F.9 Proof of Proposition 4.2
F.10 Proof of Theorem 4.1
F.11 Proof of Proposition 4.3
F.12 Proof of Proposition 4.4
F.13 Proof of Proposition E.1
F.14 Proof of Propositions 4.5 and 4.6
F.15 Proof of Theorem 5.1
F.16 Proof of Theorem 5.2
F.17 Proof of Theorem 5.3
F.18 Proof of Theorem 5.5
F.19 Proof of Proposition B.1
1 Introduction
The estimation of causal effects is a fundamental goal of many scientific studies. The framework of Potential Outcomes (Neyman 1923; Rubin 1974; Holland 1986) is a popular approach to formalize the problem of estimating causal effects of a treatment on an outcome, from a finite population of $n$ units. For instance, one can use the potential outcomes framework to formally define causal effects of interest called estimands or inferential targets, construct estimators that have desirable properties, such as, unbiasedness with respect to the randomization distribution, and formulate the assumptions under which the estimands and the estimators are well defined and causal conclusions hold.
An important assumption made in the classical potential outcomes framework is the no treatment-interference (or simply no interference) assumption, which can be stated as follows: the outcome of any unit depends only on its own treatment. (There can be other forms of interference, e.g., the outcome of a unit may depend on the outcomes of other units; we are concerned only with treatment interference.) In particular, the outcome of a unit does not depend on the treatment assigned to (or selected by) other units in the finite population of $n$ units. This assumption is implied by the so-called Stable Unit Treatment Value Assumption, or SUTVA, as formulated in Rubin (1980); see also Rubin (1986).
It is clear and well known (see, e.g., Section 3 in Rubin (1990)) that the classical framework of potential outcomes needs to be extended when estimating causal effects under interference.
When extending the classical potential outcomes framework and relaxing the assumption of no treatment-interference, the key natural question that arises is the following: What should be the form of interference? It is straightforward to specify what we mean by no treatment-interference, but the existence of treatment interference is not a concrete modeling assumption - there are many different ways to specify the exact form of interference and one needs to choose from various models of interference.
Once a model for interference is fixed, the next steps are to define causal estimands and develop designs and corresponding estimators. The classical versions of average treatment effects are no longer well defined when there is interference between units. This is due to the fact that the space of potential outcomes for each unit changes with the form of interference. In particular, the number of potential outcomes for each unit becomes a function of the form of interference. For example, consider a binary treatment $T$. Under the no treatment-interference assumption, each unit $i$ has two potential outcomes $Y_{i}(0)$ and $Y_{i}(1)$. The average causal effect is defined as the average of differences between these two potential outcomes. However, when there is arbitrary treatment-interference, the number of potential outcomes for each unit can be as large as $2^{n}$ and $Y_{i}(0)$ and $Y_{i}(1)$ are not well defined. There are many non-equivalent ways to define an estimand under interference and the choice depends on the scientific question that one is interested in answering. Once a choice has been made regarding the nature of interference and an estimand has been proposed, the next step is to develop (idealized) experimental designs along with corresponding estimators with good properties, such as unbiasedness with respect to the design, that allow us to estimate causal estimands.
Related work
Relaxing the assumption of no treatment-interference has been the subject of many works, see Halloran and Hudgens (2016) for a recent review. A classical line of work proceeds by limiting the interference to non-overlapping groups and assuming that there is no interference between groups. This setting is often referred to as partial interference (e.g., see Sobel 2006; Hudgens and Halloran 2008; Tchetgen and VanderWeele 2012; Liu and Hudgens 2014; Kang and Imbens 2016; Liu et al. 2016; Rigdon and Hudgens 2015; Basse and Feller 2017; Forastiere et al. 2016; Loh et al. 2018). Various types of estimands have been defined under partial interference. For instance, Sobel (2006) defined estimands that naturally arise in housing mobility studies and noted that under partial interference the classical estimators may be biased. Hudgens and Halloran (2008) considered potential outcomes marginalized over a randomization distribution, and use these marginal potential outcomes to define estimands. They considered two-stage designs and developed unbiased estimators for these marginal estimands. A different line of work has focused on designing experiments that eliminate or reduce partial interference, so that estimation can be carried out by ignoring interference (e.g., see David and Kempton 1996). In the modern setting, the assumption of partial interference has been relaxed by several authors to allow for arbitrary interference, or interference encoded by a network, see Bowers et al. (2012); Manski (2013); Goldsmith-Pinkham and Imbens (2013); Toulis and Kao (2013); Ugander et al. (2013); Aronow and Samii (2013); Basse and Airoldi (2015); Forastiere et al. (2016); Halloran and Hudgens (2016); Choi (2017); Athey et al. (2017). Manski (2013) considered the problem of whether causal effects are identifiable in presence of arbitrary interference. Aronow and Samii (2013) proposed Horvitz-Thompson estimators for estimating causal effects when there is arbitrary interference. 
Ugander et al. (2013) and Eckles et al. (2014) consider a cluster randomization design to reduce bias in estimating a specific estimand (i.e., total treatment effect). More recently, Sävje et al. (2017) study the large sample properties of estimating treatment effects, when the interference structure is unknown. They show (somewhat surprisingly) that in a large sample setup, the Horvitz-Thompson and Hajek estimators can be used to consistently estimate the expected average treatment effect, even if the structure of interference is incorrect. Jagadeesan et al. (2017) have proposed new designs for estimating the direct effect under interference. Ogburn et al. (2014) approach the problem of interference by using causal diagrams, and they present various causal Bayesian networks under different types of interference. Ogburn et al. (2017) use causal diagrams to develop GLM type estimators for contagion. Finally, Li et al. (2018) study peer effects using randomization based inference.
In this paper, we initiate a systematic investigation of issues that arise in definition and unbiased estimation of causal effects under arbitrary interference and develop possible solutions. Some of the key goals of our work are (a) to develop models of interference, (b) organize and place different estimands and estimators that have appeared in the literature under a common framework, (c) to clarify the issues present in existing definitions and estimators of causal effects and (d) study designs and unbiased estimation strategies under interference.
1.1 Summary of Contributions and Organization
Section 2 provides an overview of the key results of the paper. Here we present an informal summary of contributions.
Models for Potential Outcomes under Interference: We begin by revisiting the framework of potential outcomes under arbitrary interference in Section 3. Using the concept of exposure neighborhood, in Section 3.2, we develop non-parametric models of potential outcomes to formalize the nature and form of interference. The exposure neighborhood allows one to explicitly model the form of interference, whereas the structural models formulate assumptions on the structure of potential outcomes under the assumed form of interference.
Choice of estimands: Unlike the classical no-interference setting, there are several non-equivalent ways of defining causal estimands under interference. In Section 3.3, we consider two different (overlapping) classes of estimands for formally defining causal effects - marginal causal effects (in the spirit of Hudgens and Halloran (2008)) and average causal effects. Marginal causal effects are defined as contrasts between expected values of potential outcomes under a fixed randomization scheme, also called a policy, whereas the average causal effects are defined as contrasts between averages of fixed potential outcomes. These classes include several estimands that have appeared in the literature as special cases.
Bias due to interference in difference-in-means estimators: In Section 4, we address some folklore about estimation strategies. In many cases, it is common to use a design along with classical difference-in-means-like estimators to estimate an average causal effect, even when there is interference, with the hope that there might be little or no bias. In some settings, however, the definition of causal effect that is being estimated (the estimand) is not well specified. Our analysis makes it clear that certain classic versions of causal effects are not well defined when there is interference. In the cases where the estimand is well defined, we show that difference-in-means estimators can be biased for many types of estimands. We characterize the nature and sources of bias in estimating a large class of estimands. Our results also illustrate settings where simple estimators can yield little or no bias. For instance, when estimating the so-called marginal causal effects, the difference-in-means estimators are unbiased. In general, the unbiasedness of the difference-in-means estimators depends on the nature and structure of interference, which we characterize in Section 4.2.
Linear Unbiased Estimation: We then consider the problem of unbiased estimation of causal effects with commonly used estimation strategies, in Section 5. We consider the Bernoulli, Completely Randomized and Cluster Randomized Designs and focus on the problem of unbiased estimation.
A popular estimation strategy is to use Horvitz-Thompson (HT) like estimators, which is the subject of Section 5. For instance, Aronow and Samii (2013) proposed using HT estimators for particular estimands (i.e., contrasts between potential outcomes corresponding to two different treatment assignment vectors).
We consider the class of all linear weighted unbiased estimators and show that HT estimators can be used to obtain unbiased estimates for any estimand and design, as long as the correct weights are used and some regularity conditions on the design hold. However, we note that the weights depend on the design, the structure of interference (as specified by an interference model) and the estimand. We explicitly derive the weights of HT estimators for commonly used designs and estimands.
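To make the inverse-probability weighting concrete, here is a minimal sketch of an exposure-based Horvitz-Thompson contrast in the spirit described above. The function names and the brute-force enumeration of a small uniform design are illustrative assumptions, not the paper's implementation; with the exposure taken to be a unit's own treatment, the construction reduces to the classical HT estimator.

```python
def exposure_prob(i, cond, assignments, exposure_of):
    """Probability that unit i realizes exposure condition `cond`,
    computed by enumerating a small design with equiprobable assignments."""
    hits = sum(1 for z in assignments if exposure_of(i, z) == cond)
    return hits / len(assignments)

def horvitz_thompson(y_obs, z_obs, assignments, exposure_of, cond1, cond0):
    """HT contrast between two exposure conditions: each observed unit is
    weighted by the inverse probability of realizing its condition."""
    n = len(y_obs)
    total = 0.0
    for i in range(n):
        e = exposure_of(i, z_obs)
        if e == cond1:
            total += y_obs[i] / exposure_prob(i, cond1, assignments, exposure_of)
        elif e == cond0:
            total -= y_obs[i] / exposure_prob(i, cond0, assignments, exposure_of)
    return total / n
```

Averaging this estimator over all assignments of the uniform design recovers the target contrast exactly, which is the unbiasedness property discussed in Section 5.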
We also show that the correct weights that endow HT estimators with good properties need not be unique. The question of optimality (e.g., minimum variance, unbiased) of HT weights is difficult, and has been recently addressed, in part (Sussman and Airoldi 2017).
We prove that Horvitz-Thompson estimators are inadmissible for estimating a large class of estimands under a large class of designs. The HT estimator is one of many estimators in the class of linear weighted unbiased estimators. Using ideas from the survey sampling literature, we consider two strategies to improve upon the HT estimator. The non-parametric linear representation of potential outcomes we develop lends itself naturally to improved estimators, in either a model-dependent or a model-assisted framework (à la Basse and Airoldi 2015). Finally, in Section 6, we explore new designs to estimate two commonly used estimands: the average treatment effect and the total treatment effect, defined in Section 2. A key observation is that the optimal design for estimation may depend on the estimand.
2 Overview of the main results: Modes of failures and solutions
Consider a finite population of $n$ units indexed by $\{1,\ldots,n\}$. Let
$\textbf{z}=(z_{1},\ldots,z_{n})$ denote a vector of binary treatment assignments where each $z_{i}\in\{0,1\}$. Let $(Y_{i}(\textbf{z}))_{i=1}^{n}$ denote the vector of potential outcomes when the finite population of $n$ units gets assigned the treatment vector z. For each unit $i$, $Y_{i}(\textbf{z})$ is a function of z. The no treatment-interference assumption ensures that for each $i=1,\ldots,n$, we can write the potential outcomes function as,
$$\displaystyle Y_{i}(\textbf{z})=Y_{i}(z_{i}).$$
(1)
Thus, under the no treatment-interference assumption the total number of potential outcomes for each unit $i$ is $2$. However, when there is interference, we can write
$$\displaystyle Y_{i}(\textbf{z})=Y_{i}(z_{i},\textbf{z}_{-i}),$$
(2)
where $\textbf{z}_{-i}$ is the vector of treatment assignments of all units except $i$.
Explosion of Potential Outcomes
When there is arbitrary interference, the number of potential outcomes for each unit may explode, rendering causal inference impossible without modeling potential outcomes. In general, the total number of potential outcomes for each unit $i$ can be as high as $2^{n}$.
Proposition 2.1.
Without any further assumptions on the function $Y_{i}(\textbf{z})$, causal inference is impossible.
Proposition 2.1 is simple, but has far-reaching consequences. The key consequence is that under arbitrary treatment interference, one must model the potential outcomes, even under randomization inference. Indeed, the no-interference assumption is also a modeling assumption. Thus, the question becomes which model to use. We develop models for potential outcomes by specifying three components: an interference neighborhood, an exposure function and structural assumptions. Modeling potential outcomes allows one to reduce the number of potential outcomes per unit to a more manageable size. The total number of potential outcomes per unit is directly related to the interference neighborhood and an exposure model.
Table 1 gives three examples of exposure models and the number of potential outcomes per unit. We refer the reader to Section 3.2 for precise definitions of these exposure models.
Under the simplest exposure model, called binary exposure, each unit has $4$ potential outcomes - twice as many as in the case of no interference. For symmetric exposure, on the other hand, the number of potential outcomes for each unit grows linearly with the size of the exposure neighborhood. Finally, for a general exposure model, the number of potential outcomes for each unit grows exponentially with the size of the exposure neighborhood.
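The per-unit counts just described can be sketched as follows. The closed-form expressions are our reading of the text (binary: $4$; symmetric: $2(n_i+1)$, one level per possible count of treated neighbors; general: $2\cdot 2^{n_i}$) and should be checked against Table 1, which is not reproduced here.

```python
def n_potential_outcomes(model, n_i):
    """Number of potential outcomes per unit with |N_i| = n_i neighbors:
    2 own-treatment levels times the number of exposure levels."""
    if model == "no_interference":
        return 2               # exposure is irrelevant
    if model == "binary":
        return 2 * 2           # exposed vs. not exposed
    if model == "symmetric":
        return 2 * (n_i + 1)   # exposure = number of treated neighbors
    if model == "general":
        return 2 * 2 ** n_i    # every neighborhood treatment pattern distinct
    raise ValueError(f"unknown exposure model: {model}")
```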
Non-parametric Decomposition of Potential Outcomes
Under arbitrary interference, we develop a non-parametric linear decomposition of the potential outcomes:
Proposition 2.2.
Let $\textbf{z}_{-i}$ denote the vector of treatment assignments assigned to all but unit $i$. There exist functions $A_{i}(\cdot),B_{i}(\cdot),C_{i}(\cdot)$ and $f$ such that, with $e_{i}=f(\textbf{z}_{-i})$, every potential outcome function for unit $i$ can be decomposed as
$$Y_{i}(\textbf{z})=A_{i}(z_{i})+B_{i}(e_{i})+z_{i}C_{i}(e_{i}).$$
Proposition 2.2 states that the potential outcome function for every unit $i$ can be decomposed linearly into three components: a component that depends on unit $i$'s treatment, a component that depends on the treatment of all other units, and an interaction term.
At first glance, this representation appears redundant, as it is over-parametrized. But the decomposition offers three benefits: Firstly, the number of parameters, and hence the number of potential outcomes, can now be modeled by specifying these functions. Indeed, the explicit construction of the functions $A_{i},B_{i},C_{i}$ and $f$ in Proposition 2.2 requires modeling assumptions on the potential outcomes, which is the subject of Section 3.1.
Secondly, the decomposition makes it clear that classical causal effects are ill-defined when there is interference because they ignore two components of the potential outcomes and use only the first component, the direct effect. This decomposition allows us to define different types of causal estimands that focus on direct effects, interference effects or the interaction between the two. Finally, the decomposition also allows us to gain deeper insights into the nature and sources of biases for various classical estimators to estimate causal effects. We will discuss these issues next.
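One explicit (non-unique) construction satisfying Proposition 2.2 anchors each component at the zero-exposure level; this is a sketch of our own, not necessarily the construction given in Section 3.1.

```python
def decompose(Y):
    """Given a potential outcome function Y(z, e), with own treatment
    z in {0,1} and exposure level e, return functions (A, B, C) with
    Y(z, e) == A(z) + B(e) + z * C(e) for all z, e.
    This particular choice anchors everything at e = 0."""
    A = lambda z: Y(z, 0)                                  # own-treatment part
    B = lambda e: Y(0, e) - Y(0, 0)                        # pure interference part
    C = lambda e: (Y(1, e) - Y(1, 0)) - (Y(0, e) - Y(0, 0))  # interaction part
    return A, B, C
```

Plugging in $z=0$ gives $Y(0,0)+Y(0,e)-Y(0,0)=Y(0,e)$, and $z=1$ gives $Y(1,e)$, so the identity holds exactly for any $Y$.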
Different types of Average Treatment effects
We show that in the presence of interference, there are many non-equivalent ways to define a treatment effect. In this summary, we will focus on the two most popular treatment effects that fall under the class of average treatment effects. We also consider a different class called marginal treatment effects, see Section 3.3. The two average treatment effects that we consider are the direct effect
$$\displaystyle\beta_{1}=\frac{1}{n}\sum_{i}\left(Y_{i}(1,\textbf{0})-Y_{i}(0,\textbf{0})\right),$$
(3)
and the total effect
$$\displaystyle\beta_{2}=\frac{1}{n}\sum_{i}\left(Y_{i}(1,\textbf{1})-Y_{i}(0,\textbf{0})\right).$$
(4)
Proposition 2.3.
Under the no-interference assumption, $\beta_{1}=\beta_{2}$. Under interference, $\beta_{1}\neq\beta_{2}$ in general.
In fact, one can show that under the no-interference assumption, the marginal and the average causal effects are equivalent. This is no longer true in the presence of interference, so one needs to be careful in defining what one is interested in estimating. An important point to note is that the causal estimands should not depend on the randomization design and must be defined independently of the actual design that was implemented.
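Proposition 2.3 can be illustrated numerically under an assumed linear model $Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta z_{i}+\gamma e_{i}$, where $e_{i}$ counts treated neighbors on an interference graph; this model is our illustrative assumption. Under $\textbf{z}=\textbf{0}$ and the one-unit deviations in equation 3, every $e_{i}=0$, so $\beta_{1}=\beta$; under $\textbf{z}=\textbf{1}$, $e_{i}$ equals unit $i$'s degree, so $\beta_{2}=\beta+\gamma\,\bar{d}$ with $\bar{d}$ the average degree.

```python
def direct_and_total_effects(beta, gamma, degrees):
    """beta1 (eq. 3) and beta2 (eq. 4) under the linear model
    Y_i(z_i, e_i) = alpha_i + beta*z_i + gamma*e_i, where e_i is the
    number of treated neighbors of i (degrees[i] under all-treated)."""
    n = len(degrees)
    beta1 = beta                              # e_i = 0 on both arms of eq. 3
    beta2 = beta + gamma * sum(degrees) / n   # interference term enters eq. 4
    return beta1, beta2
```

With $\gamma=0$ (no interference) the two effects coincide, recovering the first half of Proposition 2.3.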
In Section 3.4, we introduce the concept of an estimation strategy - a combination of a design and an estimator for estimating a particular estimand - and in Section 3.5 we study conditions under which unbiased estimators exist.
Commonly used designs and estimators are biased
An intuitive approach to estimating average causal effects under interference is to use a difference-in-means estimator, with the hope that a mild form of interference may not affect the bias of the estimator. We formalize this intuition and study the nature and sources of bias in various difference-in-means estimators under interference. Unfortunately, the situation is more complex: the nature of the bias depends on the estimand, the exact form of the difference-in-means estimator, the design, and the model for interference. This is the subject of Section 4. For estimating the marginal effects, the difference-in-means estimators are unbiased under certain mild conditions on the design. For estimating the total and the direct effects defined in equations 4 and 3, the situation is more nuanced.
In general, there are two sources of bias in estimating the direct and the total effects, see Proposition 4.1. The first source of bias is due to the so-called nuisance potential outcomes: potential outcomes that do not appear in the definition of the estimand and are irrelevant for estimating certain classes of estimands, especially the average causal effects, yet contribute bias when estimating them, as shown in Section 4 and Proposition 4.1.
The second source of bias is the use of incorrect weights in the estimator; the correct weights depend on the design and the nature of interference. For some difference-in-means estimators and designs, the first source of bias can be completely eliminated, see Proposition 4.2 for an example. For many commonly used designs such as the Completely Randomized Design, Bernoulli Design and the Cluster Randomized Design, assuming a mild form of interference, the second source of bias can be made very small. However, we must point out that the reduction in bias depends on the model of interference, which is not known in general. An incorrect assumption on the interference model may itself lead to bias; we do not investigate this source of bias.
Proposition 2.4.
Assume that the interference is specified by a graph $G_{n}$ on $n$ units, i.e., the treatment of unit $i$ affects the outcome of unit $j$ iff there is an edge between nodes $i$ and $j$ in $G_{n}$. Further, assume that the potential outcomes follow a linear model:
$Y_{i}=\alpha_{i}+\beta_{i}z_{i}+\gamma e_{i}$, where $e_{i}$ denotes the number of treated neighbors of unit $i$ in graph $G_{n}$. Let $m$ be the total number of edges in $G_{n}$.
Under a completely randomized design and a Bernoulli trial (defined in Section 3.4), consider the naive difference-in-means estimator:
$$\hat{\beta}_{naive}=\frac{\sum_{i}Y_{i}^{obs}Z_{i}}{\sum_{i}Z_{i}}-\frac{\sum_{i}Y_{i}^{obs}(1-Z_{i})}{\sum_{i}(1-Z_{i})}.$$
The bias of the difference-in-means estimator for estimating the direct effect $\beta_{1}$ given in equation 3 is
$$\mathbb{E}[\hat{\beta}_{naive}]-\beta_{1}=-\gamma\frac{m}{{n\choose 2}}.$$
Proposition 2.4 is an example of the type of characterization of the nature and source of bias developed in Section 4 for various models of interference. This result shows that even under a simple linear model of potential outcomes, the difference-in-means estimator is biased for estimating the direct effect. The bias depends on the unknown interference parameter $\gamma$ and the density of the interference graph given by $m/{n\choose 2}$. The bias is in the opposite direction of the interference effect: if there is positive interference, the estimated direct effect is smaller than the true direct effect and vice versa. Also, if the interference effect is small and the interference graph is sparse, then the bias is very small. However, as we can see, even in such a simple model, the nature of the bias depends on unknown parameters such as the density of the interference graph $G_{n}$ and $\gamma$. For more general models, the qualitative results are similar, and the reader is referred to Section 4 for more details.
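The bias formula of Proposition 2.4 can be checked exactly on a small graph by enumerating every assignment of a completely randomized design; the sketch below is ours, and sets $\alpha_{i}=0$ and a common $\beta_{i}=\beta$, which does not change the bias under this design.

```python
from itertools import combinations

def exact_bias_crd(edges, n, n_t, beta, gamma):
    """Exact E[beta_hat_naive] - beta1 under a completely randomized
    design with n_t treated units, for Y_i = beta*z_i + gamma*e_i,
    where e_i is the number of treated neighbors of i."""
    nbrs = {i: set() for i in range(n)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    total, count = 0.0, 0
    for treated in combinations(range(n), n_t):
        t = set(treated)
        y = [beta * (i in t) + gamma * len(nbrs[i] & t) for i in range(n)]
        y_treat = sum(y[i] for i in t) / n_t
        y_ctrl = sum(y[i] for i in range(n) if i not in t) / (n - n_t)
        total += y_treat - y_ctrl
        count += 1
    return total / count - beta   # beta1 = beta under this linear model
```

For $n=6$, $n_t=3$ and a perfect matching ($m=3$ edges), the exact bias agrees with $-\gamma\, m/{n\choose 2}=-\gamma\cdot 3/15$.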
Linear unbiased estimators and inadmissibility of the Horvitz-Thompson estimator
Section 5 is devoted to the theory of linear unbiased estimation. For any design, weighted unbiased linear estimators can be constructed using techniques from sampling theory. We study two classes of weighted linear unbiased estimators. We show that under some regularity conditions, there are infinitely many weighted linear unbiased estimators, see Theorem 5.1. Moreover, when the weights are allowed to depend only on the treatment and exposure status of a unit, the Horvitz-Thompson estimator is the only unbiased estimator, see Theorem 5.2. The weights used in the HT estimator depend on the interference model, the design and the estimand. In Theorem 5.4, we derive the formula for the weights used in HT estimators under the Bernoulli, CRD and Cluster Randomized designs, for different interference models, when estimating the direct effect. A point to note is that unbiased estimators of the direct effect do not exist when using cluster randomized designs. This illustrates the fact that an estimation strategy that is considered optimal for one type of estimand may not be optimal for a different estimand; in fact, it can be far from optimal. The optimality criterion can be as simple as existence or unbiasedness. (It remains an open question to find estimation strategies that are simultaneously optimal for a large class of estimands.)
Although the HT estimator is unbiased, its performance can be very poor in practice because of high variance. An estimator is inadmissible if there exists a uniformly better estimator in terms of the mean squared error. In Theorem 5.5, we show that for a large class of designs that satisfy some natural regularity conditions, the HT estimator is inadmissible. We discuss various improvements to the HT estimator, inspired by the survey sampling literature, that aim to reduce the variance at the cost of a mild bias.
New Designs
We consider new designs for estimating causal effects when there is treatment interference. There are two key considerations when thinking about new designs. The first is that the optimality of a design may depend on the estimand: a design that is optimal for estimating the direct effect may be far from optimal for estimating the total effect. The second is that the optimal design may need to depend on the interference graph and the exposure model. Classical designs such as CRD and Bernoulli designs are oblivious to the interference graph and the exposure model, and can assign treatments that reveal nuisance potential outcomes when estimating the direct and the total effects.
To this end, we discuss two designs, one old and one new for estimating the direct and the total effect under the symmetric exposure model, when the interference graph is known. For estimating the direct effect, we develop a new design inspired by the concept of an independent set in graph theory. The independent set design attempts to maximize the number of units that reveal the relevant potential outcomes required for estimating the direct treatment effect. For estimating the total effect, we consider the cluster randomized design discussed in Ugander et al. (2013).
Optimality of Estimation Strategies.
We evaluate several estimation strategies for estimating the total and the direct effect using simulation studies. The key lessons of the simulation studies can be summarized as follows: The bias of the difference-in-means estimator in estimating the direct effect depends on the unknown interference effects. Estimation strategies that are unbiased for one estimand may be severely biased for a different estimand. For example, we find that the independent set design, along with any estimator, is approximately unbiased for the direct effect and has superior performance in terms of mean squared error when compared with other designs. On the other hand, the cluster randomized design, along with any estimator, is approximately unbiased for estimating the total effect. Moreover, the Horvitz-Thompson estimator has the worst performance in terms of mean squared error - even the biased naive difference-in-means estimator beats it.
3 Revisiting the Potential Outcomes framework under arbitrary treatment interference
In this section, we revisit the definition of potential outcomes when there is arbitrary treatment interference. We develop a framework for specifying models for potential outcomes under interference. Such models are necessary when there is treatment-interference. We consider two classes of causal effects and study the conditions under which unbiased estimators exist for estimating causal effects.
3.1 Potential Outcomes under arbitrary interference
Consider a finite population of $n$ units indexed by the set $\{1,\ldots,n\}$ and a binary treatment $z_{i}\in\{0,1\}$ for each unit $i$. Let $\textbf{z}=\left(z_{1},\ldots,z_{n}\right)$ denote the vector of treatment assignments. Let $\Omega$ be the set of relevant treatment assignments and let $|\Omega|=m$. In general, $\Omega=\{0,1\}^{n}$ and $m=2^{n}$.
Under arbitrary treatment interference, let $Y_{i}(\textbf{z})$ be the fixed potential outcome of unit $i$ under the treatment assignment vector z. The potential outcome of a unit $i$ can also be considered as a function from the set of possible treatment assignments $\Omega$ to $\mathbb{R}$. For example, in the case of a binary outcome, $Y_{i}:\Omega\rightarrow\{0,1\}$. Under this notation, the potential outcome of unit $i$ depends on the treatment assignment of all units under the study. Thus, there are a total of $n\times m$ potential outcomes, which can be assembled in the form of an $n\times m$ table, as shown in Table 2. The rows in Table 2 correspond to the units and the columns correspond to the treatment assignments; the $(i,j)^{th}$ entry corresponds to the potential outcome of unit $i$ under the treatment represented by column $j$. This table is referred to as the Table of Science and denoted by $\mathbb{T}$.
Remark 1.
We have made an implicit assumption of no hidden versions of a treatment which appears as the second part of the SUTVA, see section 3.5 for more details.
Causal effects are defined as functions of the entries of the Table of Science. In particular, causal effects can be defined as contrasts between functions of potential outcomes under two distinct treatment assignments. For example, let $\textbf{z}_{0}$ and $\textbf{z}_{1}$ be two distinct treatment allocations in $\Omega$, i.e., $\textbf{z}_{0}\neq\textbf{z}_{1}$; then an example of a causal effect is
$$\frac{1}{n}\left(\sum_{i}Y_{i}(\textbf{z}_{1})-\sum_{i}Y_{i}(\textbf{z}_{0})\right).$$
In Section 3.3 we consider two different classes of estimands or causal effects.
The fundamental problem of Causal Inference is that the table of science is unknown and only one column of Table 2 can be observed.
More specifically, let $\textbf{Z}=(Z_{1},\ldots,Z_{n})$ denote the random vector of treatment assignments and $p(\textbf{Z}=\textbf{z})$ be a probability distribution defined over the set of all possible treatment assignments $\Omega$. $p(\bf{Z})$ is called the treatment assignment mechanism or a design. In many cases, we can also restrict ourselves to $\Omega_{p}=\{\textbf{z}:p(\textbf{Z}=\textbf{z})>0\}$, the support of the treatment assignment mechanism. Under a random treatment assignment $p(\textbf{Z})$, without any further assumptions, only one random entry of each row of Table 2 can be observed, i.e., for each unit $i$, only one of its potential outcomes can be observed. For example, if the realized treatment Z corresponds to column $j$, then only column $j$ is observed. Since causal effects are defined as contrasts between two different treatment assignments, they cannot be estimated if only one column is observed.
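The table of science and the fundamental problem can be sketched in a few lines; the function names below are illustrative, and the construction is only feasible for toy values of $n$, since the table has $2^{n}$ columns.

```python
from itertools import product

def table_of_science(Y_fns, n):
    """Assemble the n x 2**n table: rows are units, columns are the
    2**n treatment vectors; entry (i, j) is Y_i evaluated at column j."""
    cols = list(product([0, 1], repeat=n))
    table = [[Y_fns[i](z) for z in cols] for i in range(n)]
    return cols, table

def observed_column(table, cols, z_realized):
    """The fundamental problem: a realized assignment reveals exactly
    one column of the table, hiding all other potential outcomes."""
    j = cols.index(tuple(z_realized))
    return [row[j] for row in table]
```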
Proposition 3.1.
Causal effects are unidentifiable without any assumptions on the potential outcome functions $Y_{i}(\bf{z})$.
Proof.
Since there are no further assumptions on the potential outcomes, only one column of the table of science is observable due to the fundamental problem of causal inference. As causal effects are defined as contrasts between two distinct treatment assignments, they are unidentifiable when only one column of Table 2 is observed.
∎
Proposition 3.1 is a simple observation but has profound consequences. It implies that causal inference is impossible without further assumptions on the potential outcomes. Hence we are forced to make modeling choices to make progress. Indeed, the bulk of causal inference over the past 40 years has been centered around the no-treatment-interference assumption, which is embedded in the SUTVA assumption. One can consider the no-interference assumption as a very specific model on the potential outcomes. Under SUTVA, $Y_{i}(\textbf{z})=Y_{i}(z_{i})$ and Table 2 reduces to an $n\times 2$ table. Thus, the question is not “why a model”, but rather “which model”. We discuss a series of modeling assumptions on potential outcomes that allow tractable causal inference.
3.2 Modeling Potential Outcomes under Network Interference
In this section, we describe models for potential outcomes when there is arbitrary interference due to treatment. As we saw in the previous section, when there is arbitrary interference, modeling potential outcomes becomes necessary; without it, causal inference is impossible. Our framework makes these modeling choices easy to specify and transparent to present. This framework unifies existing models for potential outcomes under treatment interference - many existing models can be instantiated as special cases of our framework. We also develop a linear decomposition of potential outcomes that is useful for interpreting causal effects and studying estimators.
Models of potential outcomes are specified by three components: an interference neighborhood, an exposure model and a structural model. These components are hierarchical in nature: each component builds upon the previous one, and they need to be defined in this order. For example, to define an exposure model, we need to define the interference neighborhood, and so on. We give an informal description of these components before moving to the formal definitions.
The interference neighborhood, denoted by $N_{i}$, defines the set of units whose treatment assignment can potentially influence unit $i$'s outcomes. Any unit outside the interference neighborhood cannot influence $i$'s potential outcomes. For example, in educational studies, $N_{i}$ can be the school that unit $i$ belongs to. Any unit outside unit $i$'s school does not affect the outcome of unit $i$. In this example, interference neighborhoods can be partitioned into non-overlapping sets. In more general settings, e.g., in the context of social networks or vaccination studies, the interference neighborhoods of units may overlap with each other and can be more complex. Next, the exposure model defines what it means for a unit to be exposed and defines the set of relevant exposure conditions of a unit $i$. For example, a unit $i$ may be said to be exposed to the treatment if all the units in its interference neighborhood are treated, or if a fraction of them are treated, and so on. The exposure level of a unit need not be a binary variable; it can be a continuous quantity. For example, it could be the case that there is a gradual increase in exposure, i.e., as more and more units in $i$'s interference neighborhood get treated, $i$ gets “more” exposed. Finally, a structural model defines or imposes structural constraints on the different potential outcomes of each unit $i$. One can think of structural models as a way to specify null hypotheses of interest on individual level potential outcomes. For example, a linear model specifies that the potential outcomes are linearly related to the treatment and the exposure conditions. We will discuss these three components in more detail and give their formal definitions along with several examples.
3.2.1 Interference Neighborhood
The interference neighborhood or neighborhood of a unit $i$ (not to be confused with the neighborhood of a node in a graph) is denoted by $N_{i}\subset[n]\backslash\{i\}$ and is defined as the set of units whose treatment status may affect the outcome of unit $i$. Let $\textbf{z}_{N_{i}}$ denote the vector z sub-setted by the indices in $N_{i}$. Let z and $\textbf{z}^{\prime}$ be two distinct treatment assignment vectors. Then given a choice of the interference neighborhood for each unit $i$, we make the following assumption:
$$\displaystyle Y_{i}(\textbf{z})=Y_{i}(\textbf{z}^{\prime})\text{ iff }\textbf{z}_{N_{i}}=\textbf{z}^{\prime}_{N_{i}}$$
(5)
This allows us to write down the potential outcome of each unit $i$ in the following manner:
$$\displaystyle Y_{i}(\textbf{z})=Y_{i}(z_{i},\textbf{z}_{N_{i}}),$$
(6)
where $z_{i}$ denotes the treatment assigned to unit $i$ and $\textbf{z}_{N_{i}}$ denotes the vector of treatment assigned to units in the interference neighborhood of unit $i$.
Remark 2.
Note that the interference neighborhood of each unit can be different and hence $\textbf{z}_{N_{i}}$ can be of different length. Moreover, a unit $i$ may be in unit $j$'s interference neighborhood while $j$ is not in $i$'s neighborhood. Finally, the interference neighborhoods of two units may overlap, be disjoint, or coincide.
We will now consider two simple, but extreme examples of interference neighborhoods.
1.
No treatment interference: $N_{i}=\emptyset$ for each unit $i$
2.
Complete interference: $N_{i}=[n]\backslash\{i\}$
The simplest example is the setting of no treatment interference, which amounts to saying that the outcome of unit $i$ does not depend on the treatment of any other unit. At the other extreme is complete interference, where the treatment of every unit can affect the outcome of unit $i$.
The first example reduces to the classical SUTVA setting, and in the second example, no causal inference is possible unless we make additional assumptions (specified by an exposure model and/or a structural model, defined below). The most interesting cases arise when the interference neighborhoods lie between no interference and complete interference. To model these intermediate cases, it turns out to be convenient to define interference neighborhoods using a graph.
Graph Induced Interference Neighborhoods
A convenient way to specify the interference neighborhood of a unit $i$ is by the means of an interference graph. Let $G$ be a fixed, known graph on $n$ nodes with $V$ as its vertex set and $E$ as its edge set. The introduction of an interference graph allows us to introduce additional structure into the nature of interference.
Note that in general, $G$ can be asymmetric and even weighted. For simplicity of notation, we will assume for the rest of the paper that $G$ is symmetric and unweighted, i.e., if $g_{ij}$ denotes the edge from unit $i$ to unit $j$, we will assume $g_{ij}=g_{ji}$. All these ideas extend to an asymmetric weighted graph with additional notation.
We now consider a few examples of graph induced interference neighborhood:
1.
1-hop interference: $N_{i}=\{j\in V:g_{ij}=1\}$
2.
2-hop interference: $N_{i}=\{j\in V:\exists k\mbox{ such that }g_{ik}=1\mbox{ and }g_{kj}=1\}$
3.
$k$-hop interference: $N_{i}=\{j\in V:\exists\mbox{ a path of length at most $k$ connecting $i$ and $j$ in }G\}$
Remark 3.
The interference graph is an abstract representation of the interference that may exist in a real world setting. In general, $G$ may not be observable, may be random, or may not even be well defined. How does one choose $G$? This is an important question and beyond the scope of this paper, but we offer a few remarks. In many cases, $G$ may be clear from the study. For example, consider the setting of partial interference. In this setting, the $n$ units can be partitioned into $m$ disjoint groups $K_{1},\ldots,K_{m}$. Interference may happen within the groups but not between the groups; see, e.g., Sobel (2006) or Hudgens and Halloran (2008). The interference graph in this case consists of a collection of $m$ disjoint cliques. In many other settings, one may observe a social graph which can serve as a good approximation for $G$ (e.g., Facebook). It may also be the case that we observe a network but posit that the interference happens only along stronger social ties, e.g., frequently contacted friends, as opposed to all friends in a social network. In such cases, the interference graph $G$ may be an induced subgraph of the social graph. One may also consider $G$ as random and posit a distribution over $G$. This leads to additional complexities that are beyond the scope of this paper.
For the remainder of the paper, we will assume that the interference neighborhood $N_{i}$ for each unit $i$ is defined through a fixed graph $G$ on $n$ units.
3.2.2 Exposure Models
After defining the interference neighborhood, there are two modeling choices remaining for specifying potential outcomes and for making causal inference tractable (i.e., to ensure that the table of science shown in Table 2 is not too wide): the so-called exposure model and the structural model. The exposure model specifies how the treatment status of the units in $N_{i}$ affects the outcome of unit $i$. It defines the relevant levels of exposure and how the treatment levels of the interference neighborhood get mapped to these levels.
Formally, the exposure model is specified by an exposure function $f$ that maps $\textbf{z}_{N_{i}}$ to a range $\mathcal{E}_{i}$. The range of $f$ specifies the relevant exposure levels and the mapping $f$ specifies how the treatment patterns of $\textbf{z}_{N_{i}}$ map to different exposure levels. To this end, let us assume that the potential outcome function $Y_{i}(z_{i},\textbf{z}_{N_{i}})$ depends on $\textbf{z}_{N_{i}}$ through a function
$$f:\{\textbf{z}_{N_{i}}\}\rightarrow\mathcal{E}_{i}.$$
Let $n_{i}=|N_{i}|$ be the number of units in the interference neighborhood of $i$. The domain of $f$ is the set of all possible treatment assignments of the neighborhood of unit $i$; it is finite, with at most $2^{n_{i}}$ elements. Hence the range of $f$ is also finite, since $f$ maps each treatment assignment $\textbf{z}_{N_{i}}$ to exactly one exposure level. Let $K_{i}=|\mathcal{E}_{i}|$ denote the size of the range. Thus, for every unit $i$, there are $K_{i}$ different levels of exposure. Without loss of generality, we can write the range of $f$ as $\mathcal{E}_{i}=\{0,1,\ldots,K_{i}-1\}$.
Given an exposure function $f$, let $e_{i}=f(\textbf{z}_{N_{i}})\in\mathcal{E}_{i}$.
Thus we can write the potential outcome function for each unit $i$ as
$$Y_{i}(\textbf{z})=Y_{i}(z_{i},\textbf{z}_{N_{i}})=Y_{i}(z_{i},f(\textbf{z}_{N_{i}}))=Y_{i}(z_{i},e_{i})$$
(7)
and $e_{i}$ takes values in $\{0,1,\ldots,K_{i}-1\}$.
To specify an exposure model, one must specify the function $f$ and the levels of exposure $\{0,1,\ldots,K_{i}-1\}$. The total number of exposure levels $K_{i}$ depends on the choice of $f$ and $\textbf{z}_{N_{i}}$. When $f$ is a one-to-one mapping, there are a total of $2^{n_{i}}$ levels of exposure for each unit $i$. Clearly, a many-to-one $f$ reduces the number of exposure levels and hence the total number of possible potential outcomes.
In the most general case, one can set $N_{i}=\{1,\ldots,n\}\setminus\{i\}$ and $f$ to be a one-to-one function. In this case, $K_{i}=2^{n-1}$ and there is no reduction in the number of potential outcomes. At the other extreme, when $N_{i}=\emptyset$, we are in the setting of no interference. Intermediate cases are more interesting and can be defined by a network interference graph $G$. To ensure identifiability we need $\max_{i}K_{i}<2^{n-1}$.
Remark 4.
Note that $e_{i}$ is a way of indexing the $K_{i}$ different types of possible exposure patterns or levels, and the symbols $0,1,\ldots,K_{i}-1$ denote these exposure patterns as defined by the function $f$. The interpretation of the symbols is a choice made in the definition of $f$. For example, statements such as “a unit is exposed if $10\%$ of its neighbors are treated” can be modeled by mapping $\{0,\ldots,K_{i}-1\}$ to the appropriate fractions.
We will define two special symbols for two commonly used values of the exposure patterns: $e_{i}=0$ is called no exposure and $e_{i}=1$ is called full exposure. The exposure function $f$ specifies what it means for a unit to be fully exposed or not exposed. We give two examples:
1.
$e_{i}=0$, when all elements of $\textbf{z}_{N_{i}}$ are 0 and $e_{i}=1$ when all elements are 1.
2.
$e_{i}=0$ when all elements of $\textbf{z}_{N_{i}}$ are $0$ and $e_{i}=1$ when at least one element is $1$.
Remark 5.
A possible exposure function is one that maps $\textbf{z}_{N_{i}}$ to the number of units in $N_{i}$ that are treated. One subtle issue with such an exposure function is that the number of exposure levels depends on the degrees in the graph $G$. If $G$ is not a regular graph, i.e. the nodes have different degrees, then the exposure levels of the units differ, which may not be desirable depending on the application. These are subtle issues that need to be resolved and are beyond the scope of our paper.
Let us consider a few examples of exposure functions:
1.
Symmetric Exposure: $f(\textbf{z}_{N_{i}})$ is symmetric in the indices of $\textbf{z}_{N_{i}}$
2.
Linear and Additive Exposure: $f(\textbf{z}_{N_{i}})=\sum_{j\in N_{i}}h_{j}(z_{j})$
3.
Linear Exposure: $f(\textbf{z}_{N_{i}})=\sum_{j\in N_{i}}z_{j}$.
It is also possible to define more complex exposure functions. For example, consider a setting where the interference neighborhood is specified through a graph $G$: the interference neighborhood of unit $i$ is the set of units in $G$ connected to $i$ by a path of length at most $2$, i.e. friends and friends of friends of unit $i$. The exposure function $f$ can then be a parametric function that allows the potential outcomes to depend on $\textbf{z}_{N_{i}}$ through a weighted combination of the number of treated friends in $G$ and the number of treated friends of treated friends.
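The exposure functions listed above can be sketched as plain functions of the neighborhood treatment vector. This is a minimal illustration; the function names and the $10\%$-threshold variant are our own choices, not notation from the paper:

```python
import numpy as np

def f_count(z_N):
    """Symmetric, linear and additive exposure: number of treated neighbors."""
    return int(np.sum(z_N))

def f_binary(z_N):
    """Binary exposure: 1 if at least one neighbor is treated, else 0."""
    return int(np.any(z_N))

def f_fraction(z_N):
    """Symmetric but many-to-one: 1 if at least 10% of neighbors are treated."""
    return int(np.mean(z_N) >= 0.10) if len(z_N) > 0 else 0

z_N = np.array([0, 1, 0, 0, 1])  # treatments of a unit's 5 neighbors
f_count(z_N)     # 2
f_binary(z_N)    # 1
f_fraction(z_N)  # 1, since 2/5 of the neighbors are treated
```

Each function maps the $2^{n_{i}}$ possible neighborhood assignments to a much smaller set of exposure levels, which is exactly the reduction discussed above.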
Remark 6.
Note that we have made an assumption that the exposure function $f$ is independent of the unit $i$, i.e. we do not allow $f$ to depend on $i$. For example, we do not allow exposure functions where unit $i$’s exposure depends on the number of treated friends and unit $j$’s exposure depends on both the number of treated friends and the number of treated friends of friends. However, the range of $f$ may depend on $i$.
3.2.3 Structural Models
Parametrization of Potential Outcomes under Neighborhood Interference
Before defining a structural model, it is convenient to introduce a parametrization, or a linear decomposition, of potential outcomes into direct and indirect effects. This parametrized form of potential outcomes allows one to define and focus on various treatment effects of interest. We present one such parametrization. When $N_{i}$ is the interference neighborhood and $e_{i}=f(\textbf{z}_{N_{i}})$ is the exposure model, every unit $i$ has $2K_{i}$ potential outcomes, which can be parametrized by $2K_{i}$ parameters, as given in Proposition 3.2.
Proposition 3.2.
For each unit $i$, let $e_{i}=f(\textbf{z}_{N_{i}})$ where $N_{i}$ is the interference neighborhood. The potential outcomes can be parametrized as
$$\displaystyle Y_{i}(\textbf{z})=A_{i}(z_{i})+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
(8)
where $e_{i}=f(\textbf{z}_{N_{i}})\in\{0,\ldots,K_{i}-1\}$, and $B_{i}(0)=C_{i}(0)=0$.
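The parametrization can be inverted directly, which sketches why Proposition 3.2 holds: setting $e_{i}=0$ in equation 8 and using $B_{i}(0)=C_{i}(0)=0$ gives

$$A_{i}(z_{i})=Y_{i}(z_{i},0),\qquad B_{i}(e_{i})=Y_{i}(0,e_{i})-Y_{i}(0,0),\qquad C_{i}(e_{i})=Y_{i}(1,e_{i})-Y_{i}(1,0)-B_{i}(e_{i}),$$

so the $2$ values of $A_{i}$, the $K_{i}-1$ free values of $B_{i}$ and the $K_{i}-1$ free values of $C_{i}$ ($2K_{i}$ parameters in total) are in one-to-one correspondence with the $2K_{i}$ potential outcomes.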
Remark 7.
Equation 8 resembles a linear model for potential outcomes. However, it is not a linear model in the usual sense of linear regression: unlike regression, which models a conditional expectation, there are no random variables here. Moreover, the parameters are not shared across units; they depend on $i$.
This parametrization has a nice interpretation: the $A_{i}$ parameters represent the direct treatment effects, i.e. the part of the potential outcome that depends only on unit $i$’s treatment. The $B_{i}$ and $C_{i}$ parameters represent the indirect or interference effects, i.e. the part of the potential outcome that depends on the exposure level. In particular, the $B_{i}$ parameters represent the additive interference effect and the $C_{i}$ parameters represent the interaction between the additive interference and the direct treatment effects; see also Section 3.3. Given this linear parametrization, we are now ready to specify structural models.
Structural Model
Up to this point, we have made no assumptions on how the potential outcomes relate to each other; we have only focused on reducing their number. However, in some cases, we may also make additional modeling assumptions on how one potential outcome relates to another. Sometimes these assumptions serve as null hypotheses for treatment effects. These are called structural modeling assumptions, as they impose a structure on the different potential outcomes. Given the parametrization in equation 8, structural assumptions can be regarded as restrictions on the parameters of the potential outcomes. Without any structural assumptions, the parameters are functionally independent of each other; structural assumptions make them functionally dependent. Examples include linear models, additive models and so on. Some examples of structural assumptions are stated below:
1.
Additivity: $C_{i}(e_{i})=0$ for all $e_{i}\in\{1,\ldots,K_{i}-1\}$.
2.
Constant effects: $A_{i}(z_{i})=A(z_{i})$, $B_{i}(e_{i})=B(e_{i})$ and $C_{i}(e_{i})=C(e_{i})$.
3.
Linear Effects: $B_{i}(e_{i})=b_{i}e_{i}$
4.
Constant Additive Effects $A_{i}(z_{i})=A(z_{i})$ and $C_{i}(e_{i})=0$.
5.
Sharp Null: $A_{i}(1)-A_{i}(0)=\beta$ for all $i$.
Remark 8.
The interference neighborhood reduces the number of potential outcomes for each unit $i$ from $2^{n}$ to $2^{n_{i}}$. An exposure model of the Potential Outcomes further reduces the number of potential outcomes for each unit $i$ from $2^{n_{i}}$ to a tractable number $K_{i}$. On the other hand, a structural model of the Potential Outcomes specifies a relationship between the $K_{i}$ different potential outcomes by imposing constraints on the parameters.
3.2.4 Some models of Potential Outcomes under interference
Different choices of the interference neighborhood, the exposure function and the structural model lead to different models for the potential outcomes. In this section, we present specific choices that give rise to some models used in the paper. These examples illustrate how one can use the framework to reduce the number of potential outcomes and model them. We start with the parametrized model of potential outcomes based on the interference neighborhood $N_{i}$ and the exposure function $e_{i}=f(\textbf{z}_{N_{i}})$:
$$Y_{i}(z_{i},e_{i})=A_{i}(z_{i})+B_{i}(e_{i})+z_{i}C_{i}(e_{i})=\alpha_{i}+\beta_{i}z_{i}+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
(9)
where $A_{i}(0)=\alpha_{i}$ and $A_{i}(1)=\alpha_{i}+\beta_{i}$.
Symmetric Exposure Models
Model 1: Symmetric Exposure
Let $N_{i}$ denote the neighborhood of a unit $i$ as specified by the interference graph $G$, i.e.
$$N_{i}=\{j:g_{ij}=1\}$$
Next, let $f(\textbf{z}_{N_{i}})=\sum_{j\in N_{i}}z_{j}$, then we get the following model:
$$Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
(10)
where $e_{i}=\sum_{j}g_{ij}z_{j}\in\{0,1,\ldots,d_{i}\}$ and $d_{i}$ is the degree of unit $i$ in the interference graph $G$. Under this model, each unit has $2(d_{i}+1)$ potential outcomes.
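Under Model 1, the exposure $e_{i}=\sum_{j}g_{ij}z_{j}$ and the count $2(d_{i}+1)$ of potential outcomes per unit can be computed directly from the adjacency matrix. A small sketch; the graph and the assignment are toy choices of ours:

```python
import numpy as np

# Adjacency matrix of a 4-node interference graph (symmetric, 0/1).
G = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
z = np.array([1, 0, 1, 0])  # one treatment assignment

e = G @ z                   # e_i = sum_j g_ij z_j, the number of treated neighbors
d = G.sum(axis=1)           # d_i, the degree of unit i
n_outcomes = 2 * (d + 1)    # 2*(d_i + 1) potential outcomes per unit
```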
Starting with equation 9, we can make additional structural assumptions to get simpler models for the potential outcomes. We give two examples below.
Model 2: Symmetric Linear exposure
In Model 1, let $B_{i}(e_{i})=\gamma_{i}e_{i}$ and $C_{i}(e_{i})=\theta_{i}e_{i}$:
$$Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}e_{i}+\theta_{i}z_{i}e_{i}=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}\left(\sum_{j}g_{ij}z_{j}\right)+\theta_{i}z_{i}\left(\sum_{j}g_{ij}z_{j}\right)$$
(11)
Model 3: Symmetric Additive Linear exposure
In Model 2, let $\theta_{i}=0$:
$$Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}e_{i}=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}\left(\sum_{j}g_{ij}z_{j}\right)$$
(12)
Binary Exposure Models
The binary exposure model is the simplest exposure model that weakens the no-interference assumption. In these models, the range of the exposure function is always $\{0,1\}$, where $0$ is interpreted as not exposed and $1$ as exposed. The definition of $f$ specifies which treatment levels get mapped to $0$ or $1$ and is chosen based on the application. In the binary exposure model, each unit has $4$ potential outcomes, in contrast to the $2$ potential outcomes per unit under SUTVA.
We present below a simple but natural choice of such a binary exposure function, where a unit is said to be exposed if at least one of its neighbors is treated.
Model 4: Binary exposure
Let $N_{i}$ be
$$N_{i}=\{j:g_{ij}=1\}.$$
Let $f(\textbf{z}_{N_{i}})=0$ if all elements of $\textbf{z}_{N_{i}}$ are $0$, and $f(\textbf{z}_{N_{i}})=1$ if at least one element of $\textbf{z}_{N_{i}}$ is $1$. This gives us the so-called two-by-two potential outcomes model or the binary exposure model:
$$Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}e_{i}+\theta_{i}z_{i}e_{i}$$
(13)
where $e_{i}=I\left(\sum_{j}{g_{ij}z_{j}}>0\right)\in\{0,1\}$ and $z_{i}\in\{0,1\}$.
As in the previous case, one can impose structural assumptions on the binary exposure model to generate simpler models.
Model 5: Additive Binary exposure
Let $\theta_{i}=0$ in Model 4; then we get the additive two-by-two model of potential outcomes:
$$\displaystyle Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}e_{i}$$
(14)
3.3 Defining Causal Effects under Network Interference
Given an interference neighborhood $G$ and a corresponding exposure function $f(\textbf{z}_{N_{i}})$, causal effects or estimands are defined as contrasts between potential outcomes under distinct treatment and exposure assignments. Under interference, there are many non-equivalent ways of defining causal effects. The definition of the estimand depends on the question one is interested in answering. It is important to note that the estimands are defined only using the table of science, and they do not depend on the actual treatment assignment mechanism used to estimate them.
For a given exposure function $f(\cdot)$, each unit $i$ has $2K_{i}$ distinct potential outcomes denoted by $Y_{i}(z_{i},e_{i})$, where $z_{i}\in\{0,1\}$ and $e_{i}\in\{0,\ldots,K_{i}-1\}$. These potential outcomes can be assembled in the form of an $n\times m$ Table of Science as before. The number of columns of the Table of Science in Table 2 reduces from $2^{n}$ to $2\cdot(\sum_{i=1}^{n}K_{i})$, where $K_{i}$ is the number of exposure levels for unit $i$. The columns of the Table of Science now correspond to the relevant treatment and exposure conditions $(z_{i},e_{i})$ as specified by $f$. For instance, under the binary exposure model, the Table of Science has $4$ columns and $n$ rows; this is the simplest Table of Science that relaxes the no-interference assumption. Causal effects are functions of at least two columns of $\mathbb{T}$.
Before we define causal effects, we need some additional notation. A fixed treatment assignment vector z gets mapped to a treatment and exposure combination $(z_{i},e_{i})$ for each unit $i$. Moreover, different treatment assignment vectors can get mapped to the same treatment and exposure combination. Formally, let $z_{0}$ and $e_{0}$ denote a generic treatment and exposure condition, and let
$$\Omega_{i}(z_{0},e_{0})=\{\textbf{z}\in\Omega:z_{i}=z_{0},f(\textbf{z}_{N_{i}})=e_{0}\}$$
be the set of all treatment assignment vectors that give rise to treatment $z_{0}$ and exposure $e_{0}$ for unit $i$.
We will consider two classes of estimands, marginal and average causal effects. The marginal effects are defined as contrasts between two randomized treatment policies; the average effects are contrasts between two types of potential outcomes. In some cases, both definitions lead to the same estimand, but this is not true in general.
Marginal Effects
Let us first consider the marginal effects, which are defined as contrasts between two randomized treatment assignment mechanisms. We will refer to such treatment assignments as policies to distinguish them from the actual treatment assignment mechanism used in the experiment. For instance, a policy can be to treat a randomly chosen $10\%$ of the units in the population, or $5\%$ of the units, and so on. Let $\phi$ and $\psi$ be two policies, i.e. $\phi(\textbf{Z})$ and $\psi(\textbf{Z})$ are two distributions over Z. Similar to Hudgens and Halloran (2008), we define conditional and marginal potential outcomes of a unit $i$ as expectations of potential outcomes under a treatment policy. Let $E_{i}$ denote the random exposure condition of unit $i$.
Definition 1 (Conditional and Marginal Potential Outcomes).
$$\bar{Y}_{i}(z_{i};\phi)=\mathbb{E}_{\phi}\left[Y_{i}(Z_{i},E_{i})|Z_{i}=z_{i}\right]=\sum_{e_{i}}Y_{i}(z_{i},e_{i})\phi(E_{i}=e_{i}|Z_{i}=z_{i})$$
$$\bar{Y}_{i}(\phi)=\mathbb{E}_{\phi}\left[Y_{i}(Z_{i},E_{i})\right]=\sum_{e_{i},z_{i}}Y_{i}(z_{i},e_{i})\phi(Z_{i}=z_{i},E_{i}=e_{i})$$
Here $\phi(Z_{i}=z,E_{i}=e)=\sum_{\textbf{z}\in\Omega_{i}(z,e)}\phi(\textbf{Z}=\textbf{z})$ and $\Omega_{i}(z,e)=\{\textbf{z}\in\Omega:z_{i}=z,e_{i}=e\}$. Given the conditional and marginal potential outcomes, various causal effects can be defined as follows:
$$\theta(\phi)=\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_{i}(1;\phi)-\bar{Y}_{i}(0;\phi)$$
$$\theta(\phi;\psi)=\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_{i}(\phi)-\bar{Y}_{i}(\psi)$$
$$\theta(\phi;\psi,z)=\frac{1}{n}\sum_{i=1}^{n}\bar{Y}_{i}(z_{i}=z;\phi)-\bar{Y}_{i}(z_{i}=z;\psi)$$
One can define total, direct and indirect causal effects using these definitions, and consider various decompositions among them.
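As a concrete sketch, consider the binary exposure model (Model 4) with constant parameters under an independent Bernoulli($p$) policy $\phi$. Then $E_{i}$ is independent of $Z_{i}$ and $\phi(E_{i}=1|Z_{i}=z_{i})=1-(1-p)^{d_{i}}$, where $d_{i}$ is the degree of unit $i$, so $\theta(\phi)$ of Definition 1 has a closed form. The parameter values and graph below are our illustrative choices:

```python
import numpy as np

def theta_phi(degrees, alpha, beta, gamma, theta, p):
    """theta(phi) under Y(z, e) = alpha + beta*z + gamma*e + theta*z*e
    and an independent Bernoulli(p) policy."""
    d = np.asarray(degrees, dtype=float)
    p_exposed = 1.0 - (1.0 - p) ** d          # phi(E_i = 1), free of Z_i
    ybar1 = alpha + beta + (gamma + theta) * p_exposed  # Ybar_i(1; phi)
    ybar0 = alpha + gamma * p_exposed                   # Ybar_i(0; phi)
    return np.mean(ybar1 - ybar0)

# A 4-cycle: every unit has degree 2, so p_exposed = 1 - 0.5**2 = 0.75.
est = theta_phi(degrees=[2, 2, 2, 2], alpha=1, beta=2, gamma=0.5, theta=1, p=0.5)
# beta + theta * 0.75 = 2.75
```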
Average Causal Effects
An alternate way to define causal effects is to consider contrasts between two fixed types of potential outcomes. Let us consider a generic causal estimand $\beta$ defined as a contrast between two different treatment and exposure combinations: $\tau_{1}=(z_{1},e_{1})$ and $\tau_{0}=(z_{0},e_{0})$:
$$\beta=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}(z_{1},e_{1})-Y_{i}(z_{0},e_{0})\right)$$
(15)
The most popular average causal effects are the average treatment effects and the average interference effects. We will consider two types of average treatment effects, the direct treatment effect (DTE) and the total treatment effect (TTE), defined below. Recall that the exposure levels $e_{i}=0$ and $e_{i}=1$ are special values representing the situations in which a unit is not exposed or fully exposed, respectively. Direct effects are defined as contrasts between the conditions $\tau_{1}=(1,0)$ and $\tau_{0}=(0,0)$, i.e. when a unit is treated and not exposed vs. when a unit is neither treated nor exposed.
Definition 2 (Direct Treatment Effect).
$$DTE=\beta_{DE}=\frac{\sum_{i}Y_{i}(1,0)}{n}-\frac{\sum_{i}Y_{i}(0,0)}{n}$$
Similarly, one can consider the total treatment effect, which is a contrast between $\tau_{1}=(1,1)$ and $\tau_{0}=(0,0)$, i.e. when a unit is both treated and exposed vs. when a unit is neither treated nor exposed.
Definition 3 (Total Treatment Effect).
$$TTE=\beta_{TE}=\frac{\sum_{i}Y_{i}(1,1)}{n}-\frac{\sum_{i}Y_{i}(0,0)}{n}$$
Similarly, one can also define average estimands that measure interference effects:
Definition 4 (Average Interference Effects).
$$\gamma_{1}=\frac{\sum_{i}Y_{i}(0,1)}{n}-\frac{\sum_{i}Y_{i}(0,0)}{n}$$
$$\gamma_{2}=\frac{\sum_{i}Y_{i}(1,1)}{n}-\frac{\sum_{i}Y_{i}(1,0)}{n}$$
Relation between the estimands
As we saw, there are two ways to define causal effects: the marginal effects, defined as a contrast between expectations of the potential outcomes under two different randomization policies, and the average effects, defined as a contrast between averages of potential outcomes. These two definitions are non-equivalent in general.
However, under SUTVA, the marginal effects and the average effects reduce to the classical ATE, as the following proposition shows:
Proposition 3.3.
Assume that $Y_{i}(\textbf{z})=Y_{i}(z_{i})$. Then we have
$\beta_{DE}=\beta_{TE}=\theta(\phi)=\beta$ where
$\beta=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}(1)-Y_{i}(0)\right).$
Remark 9.
Under the no interference assumption, the direct and the total effect reduce to the “usual” classical version of ATE. Moreover, the average interference effects are $0$ under no-interference. But under interference, the direct and the total effects are different, and the average interference effects are not $0$. For example, consider the following linear model of the potential outcomes:
$$Y_{i}(\textbf{z})=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}\left(\sum_{j}g_{ij}z_{j}\right)+\delta_{i}z_{i}\left(\sum_{j}g_{ij}z_{j}\right)$$
Let $\beta=(1/n)\sum_{i}\beta_{i}$.
Under this model, $DTE=\beta$ and $TTE=\beta+\frac{1}{n}\sum_{i=1}^{n}{(\gamma_{i}+\delta_{i})d_{i}}$, where $d_{i}=\sum_{j}g_{ij}$ is the degree of unit $i$ in $G$ (under full exposure, all of unit $i$'s neighbors are treated). However, if $\gamma_{i}=0$ and $\delta_{i}=0$, then $DTE=TTE$. Hence we have the following proposition:
Proposition 3.4.
Consider the linear symmetric exposure model of potential outcomes given above, and let $\beta_{i}=\beta$, $\gamma_{i}=\gamma$ and $\delta_{i}=\delta$ for all $i$, and $\bar{d}=\frac{1}{n}\sum_{i}d_{i}$. We have
$DTE=\beta$, $TTE=\beta+(\gamma+\delta)\bar{d}$, $\gamma_{1}=\gamma\bar{d}$, and $\gamma_{2}=(\delta+\gamma)\bar{d}$.
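Proposition 3.4 can be checked numerically by evaluating the model's potential outcomes directly. The graph and parameter values below are illustrative choices of ours, and we use the fact that full exposure under the counting exposure function means $e_{i}=d_{i}$:

```python
import numpy as np

def Y(z, e, alpha, beta, gamma, delta):
    """Constant-parameter linear model: Y_i(z, e) = alpha + beta z + gamma e + delta z e."""
    return alpha + beta * z + gamma * e + delta * z * e

# Degrees of a path graph on 4 nodes; dbar = 1.5.
d = np.array([1, 2, 2, 1], dtype=float)
alpha, beta, gamma, delta = 1.0, 2.0, 0.5, 0.25

DTE = np.mean(Y(1, 0, alpha, beta, gamma, delta) - Y(0, 0, alpha, beta, gamma, delta))
TTE = np.mean(Y(1, d, alpha, beta, gamma, delta) - Y(0, 0, alpha, beta, gamma, delta))
g1  = np.mean(Y(0, d, alpha, beta, gamma, delta) - Y(0, 0, alpha, beta, gamma, delta))
g2  = np.mean(Y(1, d, alpha, beta, gamma, delta) - Y(1, 0, alpha, beta, gamma, delta))
# DTE = beta = 2;  TTE = beta + (gamma + delta) * dbar = 3.125
# g1 = gamma * dbar = 0.75;  g2 = (gamma + delta) * dbar = 1.125
```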
Note that one can obtain $TTE$ as a special case of $\theta(\phi,\psi)$, but this is not the case for $DTE$.
Proposition 3.5.
Consider the following two degenerate policies $\phi_{0}$ and $\phi_{1}$
$$\phi_{1}(\textbf{Z}=\textbf{z})=\begin{cases}1&\text{ if }\textbf{z}=\textbf{1}\\ 0&\text{ otherwise}\end{cases}$$
$$\phi_{0}(\textbf{Z}=\textbf{z})=\begin{cases}1&\text{ if }\textbf{z}=\textbf{0}\\ 0&\text{ otherwise}\end{cases}$$
Then $\theta(\phi_{1},\phi_{0})=TTE$.
Obtaining $DTE$ as a special case of $\theta$ is not possible, because there is no single policy (degenerate or non-degenerate) that allows us to estimate $\sum_{i}Y_{i}(1,0)$. Instead, we need $n$ different policies $\psi_{i}(\textbf{Z})$, where each $\psi_{i}$ is supported on $\Omega_{i}(1,0)$, so that $\mathbb{E}_{\psi_{i}}\left[Y_{i}(z_{i},e_{i})\right]=Y_{i}(1,0)$. Then
$$DTE=\frac{1}{n}\sum_{i}\mathbb{E}_{\psi_{i}}\left[Y_{i}(z_{i},e_{i})\right]-\mathbb{E}_{\phi_{0}}\left[Y_{i}(z_{i},e_{i})\right]$$
where $\phi_{0}$ is the all-control policy defined above.
A similar analysis can be done for the interference effects $\gamma_{1}$ and $\gamma_{2}$.
Definition 5 (Irrelevant or nuisance potential outcomes).
Let a causal estimand be a function of $\{Y_{i}(z^{j},e^{j})\}_{j=1}^{J}$. We call such potential outcomes relevant. Potential outcomes that are not relevant are nuisance or irrelevant potential outcomes.
For instance, consider the average causal estimands defined as contrasts between fixed potential outcomes, i.e. the direct treatment effect and the total treatment effect. If we make no structural assumptions, then each causal effect is a function of only two columns in the Table of Science, and the other columns are irrelevant. For example, potential outcomes of the form $Y_{i}(1,e_{i})$, $e_{i}\neq 0$, are not relevant for estimating $DTE$, since $DTE$ is a function of only $Y_{i}(1,0)$ and $Y_{i}(0,0)$. In this sense, if the goal is only to estimate $DTE$, then designs that generate other potential outcomes are wasteful.
Similarly, consider the marginal causal effects defined as contrasts between expected potential outcomes under different policies. All the potential outcomes that have positive probability under the policy are relevant; potential outcomes that have $0$ probability under the policy are nuisance. If the support of the policy is all of $\Omega$, the causal effect is a function of the entire Table of Science and all potential outcomes are relevant.
As we will see in Section 4, average causal effects like the DTE and TTE are more difficult to estimate, and the nuisance potential outcomes are a source of bias when using difference-in-means estimators; we need new designs along with Horvitz-Thompson estimators for unbiased estimation of average estimands. On the other hand, it is simpler to estimate marginal estimands as long as the actual randomization follows the policies of interest; one can then use difference-in-means estimators.
3.4 Designs, Estimators and Strategies
We will now consider the problem of estimation of causal effects and the existence of estimators. Our focus will be on randomization based inference where the potential outcomes are considered to be fixed and the only source of randomness is due to the random assignment of treatment vector z. Under this setting, a random assignment Z is sampled according to $p(\textbf{Z})$ and the units are assigned to the treatment Z. The outcome observed for each unit $i$ is denoted by $Y_{i}^{obs}$.
Consider an interference and exposure model of potential outcomes specified by a graph $G$ and an exposure function $f$. Let $\theta$ be a generic causal estimand defined as a function of $\mathbb{T}(G,f)$.
Definition 6 (Design).
A design $p(\textbf{Z}|G,f)$ is a probability distribution supported over $\Omega$, the set of all possible treatment assignments for $n$ units.
In general, the design may depend on the interference graph $G$ and the exposure function $f$. We will suppress the dependence on $G$ and $f$ for simplicity. Designs that do not depend on the interference graph $G$ are called network-oblivious designs. Such designs are preferred in the case when $G$ is unknown, however, these designs may not be optimal.
Given a realization z of a design $p(\textbf{Z})$, let $Y^{obs}(\mathbb{T},\textbf{z})$ be the set of observed potential outcomes, where $\mathbb{T}$ is the Table of Science.
Definition 7 (Estimator).
An estimator $\hat{\theta}$ for $\theta$ is a function of the observed potential outcomes $Y^{obs}(\mathbb{T},\textbf{z})$.
We are now ready to define an estimation strategy. An estimation strategy for estimating a causal effect is a combination of a design and an estimator to be used with that design.
Definition 8 (Estimation Strategy).
An estimation strategy or simply, a strategy for estimating $\theta$ is a pair $(p(\textbf{Z}),\hat{\theta})$.
Estimation strategies are evaluated based on their properties such as unbiasedness and variance.
An estimation strategy is said to be unbiased for estimating $\theta$ if
$$\mathbb{E}_{p(\textbf{Z})}[\hat{\theta}]=\theta.$$
An estimator is said to be unbiased for $\theta$ if it is unbiased for any design $p(\textbf{Z})$.
For a given $\theta$, one goal is to construct strategies that are so-called uniformly minimum variance unbiased (UMVU); see, e.g., Särndal et al. (2003). It is well known that such strategies do not exist, even under the no-interference assumption. We will focus only on the unbiasedness properties of an estimation strategy.
3.5 Existence of estimators
We will now consider the assumptions needed to ensure that causal estimates are identified. We first start with the classic SUTVA assumption and present its counterpart under treatment interference. When there is no interference, the Stable Unit Treatment Value Assumption (SUTVA) has two parts:
1.
No Interference: The potential outcome of unit $i$ depends only on the treatment of unit $i$.
2.
Consistency or Stability: There are no hidden versions of the treatment.
The first part of SUTVA says that the potential outcome of a unit $i$ depends only on its treatment assignment. The second part says that there are no hidden versions of the treatment. Put differently, the stability assumption states that there is only one column corresponding to each treatment in the Table of Science. Under interference, we consider these two parts of SUTVA separately: the first part is relaxed to neighborhood interference by considering models of potential outcomes, and the second part is modified to the “no hidden versions of treatment and exposure” assumption.
Neighborhood Interference
The neighborhood interference assumption states that the potential outcome of unit $i$ depends on its treatment and the treatment status of its interference neighborhood $N_{i}$ as specified by the exposure model, i.e.
$$Y_{i}(\textbf{z})=Y_{i}(z_{i},\textbf{z}_{N_{i}})=Y_{i}(z_{i},e_{i})$$
where $e_{i}=f(\textbf{z}_{N_{i}})$ and $f$ is the exposure model.
Consistency or Stability
The consistency assumption states that the observed outcome of a unit $i$ is exactly equal to the unit’s potential outcome under the assigned treatment and exposure combination; that is, there are no hidden versions of the treatment and exposure combination:
$$Y_{i}^{obs}=\sum_{z,e}{Y_{i}(z,e)\,I(Z_{i}=z,E_{i}=e)}$$
(16)
Unconfoundedness
The unconfoundedness assumption states that the treatment assignment mechanism $p(\textbf{Z})$ does not depend on the potential outcomes $Y_{i}(z_{i},e_{i})$.
Positivity
To state the positivity assumption, we need some preliminary definitions.
Recall that $e_{i}=f(\textbf{z}_{N_{i}})$. Let $z_{0}$ and $e_{0}$ denote a generic treatment and exposure condition. Recall that
$$\Omega_{i}(z_{0},e_{0})=\{\textbf{z}\in\Omega:z_{i}=z_{0},f(\textbf{z}_{N_{i}})=e_{0}\}$$
is the set of all treatment vectors that give rise to $z_{0}$ and $e_{0}$ for unit $i$.
Definition 9 (Propensity Scores).
The propensity score of unit $i$ for the treatment and exposure pair $(z_{0},e_{0})$, denoted by $\pi_{i}(z_{0},e_{0})$, is defined as follows:
$$\pi_{i}(z_{0},e_{0})=\underset{\textbf{z}\in\Omega_{i}(z_{0},e_{0})}{\sum}\mathbb{P}\left(\textbf{Z}=\textbf{z}\right)$$
(17)
Let the causal estimand be a function of $\{Y_{i}(z^{j},e^{j})\}_{j=1}^{J}$; as in Definition 5, we call such potential outcomes relevant and the rest nuisance. Then we need
$$0<\pi_{i}(z=z^{j},e=e^{j})<1\quad\forall\,i,j$$
(18)
The positivity assumption implies that there is a positive probability of observing the relevant potential outcomes for each unit.
Example 1.
For example, $DTE$ is a function of $Y_{i}(0,0)$ and $Y_{i}(1,0)$. Hence the positivity condition requires that
$$0<\pi_{i}(z=1,e=0),\;\pi_{i}(z=0,e=0)<1\quad\forall\,i$$
(19)
If the positivity condition is not satisfied, the relevant potential outcomes are not observable and causal inference is not possible. In particular,
Theorem 3.1.
Let $p(\textbf{Z})$ be any design and let $\{Y_{i}(z^{j},e^{j})\}_{j=1}^{J}$ be the set of relevant potential outcomes for any estimand $\theta$. Without any structural assumptions on the Potential Outcomes, unbiased estimators of $\theta$ under a design $p(\textbf{Z})$ exist iff
$$0<\pi_{i}(z=z^{j},e=e^{j})<1\quad\forall\,i=1,\ldots,n\text{ and }j=1,\ldots,J$$
Example 2.
The positivity assumption depends not only on the design, but also on the interference graph and the exposure model. Consider an interference graph $G$ with a linear exposure model. If there is a unit $j$ with degree $n-1$, then either $\pi_{j}(0,0)=0$ or $\pi_{i}(1,0)=0$ for all $i$. This is because the only way $Y_{j}(0,0)$ can be observed is if all units are assigned to control, i.e. $\textbf{Z}=\textbf{0}$; under this assignment, however, $Y_{i}(1,0)$ is unobservable for any $i$. That is, when the degree is $n-1$, it is impossible to observe both $Y_{i}(1,0)$ and $Y_{j}(0,0)$. A solution is to allow for biased estimators or to exclude nodes of degree $n-1$ from the definition of the causal effect. Note that in practice, such networks may be rare.
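For small $n$, the propensity scores of equation 17 can be computed exactly by enumerating $\Omega$, which makes positivity easy to check. A sketch under an independent Bernoulli($p$) design and the binary exposure function; the graph is our illustrative choice:

```python
import itertools

def propensity(adj, i, z0, e0, p):
    """pi_i(z0, e0) under independent Bernoulli(p) assignment,
    with binary exposure e_i = 1 iff some neighbor of i is treated."""
    n = len(adj)
    total = 0.0
    for z in itertools.product([0, 1], repeat=n):
        e_i = int(any(z[j] for j in range(n) if adj[i][j]))
        if z[i] == z0 and e_i == e0:
            k = sum(z)
            total += p ** k * (1 - p) ** (n - k)
    return total

# 3-cycle: every unit has 2 neighbors.
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
pi_00 = propensity(adj, 0, 0, 0, 0.5)  # (1-p)^(1 + d_0) = 0.5**3 = 0.125
```

Since the four pairs $(z_{0},e_{0})$ partition $\Omega$ for each unit, the four propensity scores of any unit sum to $1$.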
3.6 Commonly used designs and estimators
In this section we will consider some classical designs and estimators that are used for estimating causal effects.
We start by considering three classic designs for estimating causal effects; these designs are commonly used when there is no treatment interference. Since they do not depend on the interference graph $G$ or the exposure function, they are network-oblivious. We will examine their applicability for estimating causal effects when there is interference.
Completely Randomized Design
In a Completely Randomized Design (CRD), the treatment is assigned by fixing the numbers of treated and control units to $n_{t}$ and $n_{c}$ with $n_{t}+n_{c}=n$, and choosing $n_{t}$ units uniformly at random without replacement to assign to treatment; the remaining units are assigned to control. The probability distribution $p(\textbf{Z})$ is uniform over assignments with exactly $n_{t}$ treated units:
$$p(\textbf{Z}=\textbf{z})=\begin{cases}\frac{1}{\binom{n}{n_{t}}}&\text{ if }|\textbf{z}|=n_{t}\\ 0&\text{ otherwise}\end{cases}$$
Note that the entries of Z are correlated.
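Sampling from a CRD amounts to choosing $n_{t}$ indices without replacement; a minimal sketch:

```python
import numpy as np

def crd(n, n_t, rng):
    """Completely randomized design: exactly n_t treated units, chosen
    uniformly at random without replacement."""
    z = np.zeros(n, dtype=int)
    z[rng.choice(n, size=n_t, replace=False)] = 1
    return z

rng = np.random.default_rng(0)
z = crd(n=10, n_t=4, rng=rng)  # z.sum() == 4 on every draw
```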
Bernoulli Randomization
In a Bernoulli trial, each unit is assigned to treatment independently with probability $p$. The total number of treated and control units are random. The probability distribution of a Bernoulli trial is:
$$p(\textbf{Z}=\textbf{z})=p^{|\textbf{z}|}(1-p)^{n-|\textbf{z}|}$$
In the Bernoulli trial, there is a positive probability that all units are assigned to treatment or all to control, violating the positivity assumption. A simple way to avoid this is to consider a restricted Bernoulli trial, under which the numbers of treated and control units are each at least $1$. The probability distribution of a restricted Bernoulli trial is:
$$p(\textbf{Z}=\textbf{z})=\begin{cases}\frac{p^{|\textbf{z}|}(1-p)^{n-|\textbf{z}|}}{1-p^{n}-(1-p)^{n}}&\text{ if }0<|\textbf{z}|<n\\ 0&\text{ otherwise}\end{cases}$$
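A restricted Bernoulli trial can be sampled by rejection, i.e. redrawing until both groups are non-empty, which reproduces the renormalized distribution above; a minimal sketch:

```python
import numpy as np

def restricted_bernoulli(n, p, rng):
    """Bernoulli(p) assignment conditioned on 0 < |z| < n, via rejection."""
    while True:
        z = rng.binomial(1, p, size=n)
        if 0 < z.sum() < n:
            return z

rng = np.random.default_rng(1)
z = restricted_bernoulli(n=8, p=0.3, rng=rng)  # never all-treated or all-control
```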
Cluster Randomization
In a cluster randomized design, units are grouped into clusters and these clusters are randomly assigned to the treatment or control condition, i.e. the randomization happens at the cluster level. The treatment status of a unit is equal to the treatment assigned to its cluster.
Formally, let the $n$ units be partitioned into $K$ clusters indexed by $k=1,\ldots,K$. Let $n_{1},\ldots,n_{K}$ be the number of units in each cluster, where $n=\sum_{k=1}^{K}n_{k}$. Note that the $n_{k}$’s are fixed. Let $z_{k}$ denote the treatment assignment of cluster $k$ and let $c_{i}$ denote the cluster that unit $i$ belongs to. Thus we have $z_{i}=z_{c_{i}}$. The random assignment of clusters to treatment or control is done by a completely randomized design. Let $K_{t}$ and $K_{c}$ denote the number of treated and control clusters respectively. Let $n_{t}$ be the total number of treated nodes and $n_{c}$ be the total number of control nodes. Note that $n_{t}=\sum_{k}n_{k}z_{k}$ and $n_{c}=\sum_{k}n_{k}(1-z_{k})$, and hence $n_{t}$ and $n_{c}$ are random.
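A minimal Python sketch of cluster randomization (the cluster membership vector below is hypothetical): clusters are assigned by a CRD at the cluster level, and each unit inherits its cluster's status.

```python
import random

def cluster_crd(cluster_of, K, K_t, rng=random):
    """Cluster randomized design: assign K_t of K clusters to treatment by a
    CRD at the cluster level; each unit inherits its cluster's assignment."""
    treated = set(rng.sample(range(K), K_t))
    z_cluster = [1 if k in treated else 0 for k in range(K)]
    return [z_cluster[c] for c in cluster_of]

# hypothetical example: 6 units in 3 clusters of sizes 2, 3, 1
cluster_of = [0, 0, 1, 1, 1, 2]
z = cluster_crd(cluster_of, K=3, K_t=1)
```

Note that $n_{t}=\sum_{i}z_{i}$ equals the size of whichever cluster was treated, illustrating that $n_{t}$ is random.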
Remark 10.
One can consider a Bernoulli assignment of clusters to treatment and control. The Bernoulli assignment is not preferred as one has no control over the number of clusters assigned to treatment or control.
Next, we will discuss two classes of estimators.
Difference-in-Means Estimators
The difference-in-means estimators are the simplest estimators: each is the difference of means between two groups of observed potential outcomes. We consider the following three difference-in-means estimators, the first of which is the classic difference-in-means:
$$\displaystyle\hat{\beta}_{naive}=\frac{\sum_{i}Y_{i}^{obs}Z_{i}}{\sum_{i}{Z_{i}}}-\frac{\sum_{i}Y_{i}^{obs}(1-Z_{i})}{\sum_{i}{(1-Z_{i})}}$$
(20a)
$$\displaystyle\hat{\beta}_{1}=\frac{\sum_{i}Y_{i}^{obs}I(Z_{i}=1,E_{i}=0)}{\sum_{i}I(Z_{i}=1,E_{i}=0)}-\frac{\sum_{i}Y_{i}^{obs}I(Z_{i}=0,E_{i}=0)}{\sum_{i}I(Z_{i}=0,E_{i}=0)}$$
(20b)
$$\displaystyle\hat{\beta}_{2}=\frac{\sum_{i}Y_{i}^{obs}I(Z_{i}=1,E_{i}=1)}{\sum_{i}I(Z_{i}=1,E_{i}=1)}-\frac{\sum_{i}Y_{i}^{obs}I(Z_{i}=0,E_{i}=0)}{\sum_{i}I(Z_{i}=0,E_{i}=0)}$$
(20c)
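The three estimators of equation 20 can be sketched in Python as follows (the small data in the usage note are hypothetical, and each conditioning cell must be nonempty):

```python
def diff_in_means(Y, Z, E):
    """The three difference-in-means estimators of equation 20:
    naive, beta_1 (treated vs. control, both with zero exposure), and
    beta_2 (exposed treated vs. unexposed control)."""
    def mean_where(flags):
        vals = [y for y, f in zip(Y, flags) if f]
        return sum(vals) / len(vals)
    naive = (mean_where([z == 1 for z in Z])
             - mean_where([z == 0 for z in Z]))
    b1 = (mean_where([(z, e) == (1, 0) for z, e in zip(Z, E)])
          - mean_where([(z, e) == (0, 0) for z, e in zip(Z, E)]))
    b2 = (mean_where([(z, e) == (1, 1) for z, e in zip(Z, E)])
          - mean_where([(z, e) == (0, 0) for z, e in zip(Z, E)]))
    return naive, b1, b2
```

For example, with `Y=[3, 1, 2, 0]`, `Z=[1, 1, 0, 0]`, `E=[0, 1, 0, 1]`, the three estimates are `(1.0, 1.0, -1.0)`.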
Linear Estimators
One can also consider a larger class of estimators that are linear combination of the observed potential outcomes:
$$\displaystyle\hat{\theta}=\sum_{i=1}^{n}w_{i}(\textbf{z})Y_{i}^{obs}$$
This class includes the difference-in-means estimators.
We will study the difference-in-means estimators in Section 4 and the linear weighted estimators in Section 5.
4 Analytical insights for Difference-in-Means Estimators
In this section we study various estimation strategies that use a combination of difference-in-means estimators and classical designs for estimating causal effects when there is interference. We will focus on the nature and source of bias, if any, for estimating average and marginal causal effects.
The nature and source of bias depends on the estimand, the estimation strategy (i.e. the design and the estimator) and the model for potential outcomes.
In Section C we show that the difference-in-means estimator is unbiased for estimating marginal effects. This section is devoted to understanding the nature of the bias when using difference-in-means estimators to estimate the direct treatment effect (DTE). We will consider different models of potential outcomes when estimating the direct effect using the difference-in-means estimators. The role of these models is to gain some analytical insight into the nature of the bias and its dependence on the modeling assumptions.
4.1 Sources of bias in estimating the direct and the total effect
Without making any structural assumptions on the potential outcomes, (i.e. without any assumptions on how one potential outcome is related to another) the difference-in-means estimators given in equation 20 have two different sources of bias for estimating the total effect and the direct effect:
1.
The first source of bias is due to unequal weights given to the potential outcomes, or equivalently, unequal probabilities of inclusion in the sample used for estimating the mean potential outcomes. For some designs, this source of bias can be eliminated.
2.
The second source of bias is due to the inclusion of irrelevant potential outcomes, i.e. potential outcomes other than those used in the definition of the corresponding causal effect. For example, when estimating the DTE, these are all potential outcomes other than $Y_{i}(0,0)$ and $Y_{i}(1,0)$.
Proposition 4.1 characterizes the bias of the naive estimator in estimating the direct treatment effect. For the bias of estimating the $TTE$ using the naive estimator, see Proposition D.1 in the Appendix.
Proposition 4.1.
Consider the parametrized potential outcomes given in equation 9:
$$\displaystyle Y_{i}(z_{i},e_{i})=A_{i}(z_{i})+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
The direct effect $DTE$ is given by
$$DTE=\frac{1}{n}\sum_{i=1}^{n}(A_{i}(1)-A_{i}(0)).$$
For any design $p(\textbf{Z})$, the bias of the naive estimator $\hat{\beta}_{naive}$ (equation 20a) for estimating $DTE$ is:
$$\displaystyle b_{1}=E[\hat{\beta}_{naive}]-DTE$$
$$\displaystyle=\sum_{i}\left(A_{i}(1)\left(\alpha_{i}(1)-\frac{1}{n}\right)-A_{i}(0)\left(\alpha_{i}(0)-\frac{1}{n}\right)\right)$$
$$\displaystyle+\sum_{i}\sum_{e_{i}\neq 0}B_{i}(e_{i})\left(\alpha_{i}(1,e_{i})-\alpha_{i}(0,e_{i})\right)$$
$$\displaystyle+\sum_{i}\sum_{e_{i}\neq 0}C_{i}(e_{i})\alpha_{i}(1,e_{i})$$
where
$$\displaystyle\alpha_{i}(z_{i},e_{i})=E\left[\frac{I(Z_{i}=z_{i},E_{i}=e_{i})}{\sum_{j}{I(Z_{j}=z_{i})}}\right]\text{ and }\alpha_{i}(z_{i})=\sum_{e_{i}}{\alpha_{i}(z_{i},e_{i})}=E\left[\frac{I(Z_{i}=z_{i})}{\sum_{j}I(Z_{j}=z_{i})}\right].$$
Next, Proposition 4.2 shows that the difference-in-means estimators $\hat{\beta}_{1}$ and $\hat{\beta}_{2}$ are also biased, but the source of bias is milder compared to the naive estimator.
Proposition 4.2.
$$\displaystyle E[\hat{\beta}_{1}]=\sum_{i=1}^{n}\left(A_{i}(1)\beta_{i}(1,0)-A_{i}(0)\beta_{i}(0,0)\right)$$
$$\displaystyle E[\hat{\beta}_{2}]=\sum_{i=1}^{n}\left(A_{i}(1)\beta_{i}(1,1)-A_{i}(0)\beta_{i}(0,0)\right)+\sum_{i=1}^{n}B_{i}(1)\beta_{i}(1,1)+\sum_{i=1}^{n}C_{i}(1)\beta_{i}(1,1)$$
where
$$\displaystyle\beta_{i}(z_{i},e_{i})=E\left[\frac{I(Z_{i}=z_{i},E_{i}=e_{i})}{\sum_{j}{I(Z_{j}=z_{i},E_{j}=e_{i})}}\right].$$
Propositions 4.1, 4.2 and D.1 suggest that for any design $p(\textbf{Z})$, the biases in estimating the direct and the total treatment effects using difference-in-means estimators are controlled by the weighted exposure probabilities $\alpha_{i}(z_{i},e_{i})$ under that design. For instance, as Proposition 4.1 shows, without making any structural assumptions on the potential outcomes $Y_{i}(z_{i},e_{i})$, the bias of $\hat{\beta}_{naive}$ in estimating $DTE$ is $0$ only if $\alpha_{i}(1)=\alpha_{i}(0)=\frac{1}{n}$ and $\alpha_{i}(1,e_{i})=\alpha_{i}(0,e_{i})=0\;\forall e_{i}\neq 0$. The first condition removes the first source of bias by placing equal weights on the relevant potential outcomes, and the second condition removes the second source of bias by placing $0$ weights on irrelevant potential outcomes. Similarly, as seen from Proposition 4.2, the estimators $\hat{\beta}_{1}$ and $\hat{\beta}_{2}$ remove the second source of bias by eliminating irrelevant potential outcomes, but the first source of bias remains.
4.2 Characterization of bias under various models of Potential Outcomes
The bias of the difference-in-means estimators depends on the weights $\alpha_{i}(z_{i},e_{i})$. These weights depend on the design and the exposure model. To gain additional insight into the nature of the bias, we will make several modeling assumptions on the potential outcomes. These assumptions allow us to compute the exposure weights analytically for commonly used designs. We will focus on the bias of estimating the direct effect using the naive estimator $\hat{\beta}_{naive}$. We also ask the related question: does the bias of the difference-in-means estimator disappear if we make structural assumptions on the potential outcomes?
4.2.1 Symmetric Exposure Model
We begin by considering the Symmetric exposure model given in equation 10 and computing the exposure weights for CRD and Bernoulli designs.
Theorem 4.1 (Exposure Weights for Symmetric Exposure).
Consider the symmetric exposure model of potential outcomes given in equation 10. Under a CRD and a Bernoulli design, we have $\alpha_{i}(1)=\alpha_{i}(0)=\frac{1}{n}$. On the other hand, under a cluster randomized design, $\alpha_{i}(1)$ and $\alpha_{i}(0)$ in general differ from each other and from $\frac{1}{n}$.
For a CRD design,
$$\displaystyle\alpha_{i}(1,e_{i})=\frac{1}{n}\frac{\binom{n_{t}-1}{e_{i}}\binom{n_{c}}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\text{ if }n_{t}\geq e_{i}+1\text{ and }n_{c}\geq d_{i}-e_{i},\text{ and }0\text{ otherwise}$$
$$\displaystyle\alpha_{i}(0,e_{i})=\frac{1}{n}\frac{\binom{n_{t}}{e_{i}}\binom{n_{c}-1}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\text{ if }n_{t}\geq e_{i}\text{ and }n_{c}\geq d_{i}-e_{i}+1,\text{ and }0\text{ otherwise}$$
For a Bernoulli design, let $K$ be a restricted binomial random variable with support on $\{1,\ldots,n-1\}$ and $\mathbb{P}\left(K=k\right)=\frac{\binom{n}{k}p^{k}(1-p)^{n-k}}{1-(1-p)^{n}-p^{n}}$. Then,
$$\displaystyle\alpha_{i}(1,e_{i})=\frac{1}{n}\mathbb{E}_{K}\left[\frac{\binom{K-1}{e_{i}}\binom{n-K}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\right]$$
$$\displaystyle\alpha_{i}(0,e_{i})=\frac{1}{n}\mathbb{E}_{K}\left[\frac{\binom{K}{e_{i}}\binom{n-K-1}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\right]$$
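The claim $\alpha_{i}(1)=\alpha_{i}(0)=\frac{1}{n}$ under a CRD can be checked by brute force on a small hypothetical graph. The Python sketch below enumerates all $\binom{n}{n_{t}}$ assignments on a path graph with $n=4$, $n_{t}=2$, using exact rational arithmetic:

```python
import math
from fractions import Fraction
from itertools import combinations

def alpha_weights(adj, n_t):
    """Exact alpha_i(z, e) = E[ I(Z_i=z, E_i=e) / sum_j I(Z_j=z) ] under a CRD,
    computed by enumerating all C(n, n_t) equally likely assignments."""
    n = len(adj)
    total = math.comb(n, n_t)
    alpha = {}
    for treated in combinations(range(n), n_t):
        z = [1 if i in treated else 0 for i in range(n)]
        for i in range(n):
            e = sum(z[j] for j in adj[i])       # symmetric exposure: # treated neighbours
            denom = n_t if z[i] == 1 else n - n_t
            key = (i, z[i], e)
            alpha[key] = alpha.get(key, Fraction(0)) + Fraction(1, denom * total)
    return alpha

# hypothetical path graph 0-1-2-3 with n = 4, n_t = 2
adj = [[1], [0, 2], [1, 3], [2]]
a = alpha_weights(adj, n_t=2)
```

Summing $\alpha_{i}(z,e)$ over $e$ recovers $\alpha_{i}(1)=\alpha_{i}(0)=\frac{1}{4}$ for every unit, and the individual weights match the hypergeometric formula above.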
Theorem 4.1 shows that the first source of bias is eliminated under the CRD and the Bernoulli designs. Does the second source of bias disappear under these designs? Without any further assumptions, the answer is no. However, under additional assumptions, the second source of bias can go to $0$ asymptotically, or even be made exactly $0$. Examining the second source of bias requires computing the weighted exposure probabilities $\alpha_{i}(z_{i},e_{i})$ under the CRD, Bernoulli and cluster randomized designs, which depend on the exposure model. Computing $\alpha_{i}(z_{i},e_{i})$ under the Bernoulli and cluster randomized designs is further complicated by the fact that, unlike in the CRD, the denominator is a random variable that is correlated with the numerator. Moreover, due to the overlapping neighborhoods, the correlation depends in a complicated manner on the graph $G$. Similar issues prevent us from obtaining explicit formulas for $\beta_{i}(z_{i},e_{i})$. However, progress can be made by computing the bias directly under some structural assumptions.
4.2.2 Additive Symmetric Exposure model
Let us consider the additive symmetric model given by equation 21 below which is obtained by making the structural assumption $C_{i}(e_{i})=0$ in equation 10.
$$\displaystyle Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+B_{i}(e_{i})$$
(21)
where $e_{i}\in\{0,1,\ldots,d_{i}\}$.
Corollary 1.
Let $Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+B_{i}(e_{i})$, $e_{i}\in\{0,1,\ldots,d_{i}\}$. The bias in estimating $DTE$ using the difference-in-means estimator $\hat{\beta}_{naive}$ under the CRD and Bernoulli designs is
$$\sum_{i}\sum_{e_{i}\neq 0}B_{i}(e_{i})\left[\alpha_{i}(1,e_{i})-\alpha_{i}(0,e_{i})\right]$$
4.2.3 Symmetric Additive Linear exposure
Consider the symmetric additive linear model of potential outcomes specified by equation 3.2.4, and further assume constant interference effects, i.e. $\gamma_{i}=\gamma$. This gives the following linear model of potential outcomes:
$$\displaystyle Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+\gamma\left(\sum_{j}g_{ij}z_{j}\right)$$
Proposition 4.3.
Under a completely randomized design, we have,
$$E[\hat{\beta}]-DTE=-\gamma\frac{2m}{n(n-1)}.$$
where $m=\frac{1}{2}\sum_{i}{d_{i}}$ is the number of edges in the interference graph.
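Proposition 4.3 can be verified by exact enumeration over all CRD assignments. The Python sketch below uses a hypothetical path graph with $m=3$ edges and arbitrary $\alpha_{i},\beta_{i}$, and reproduces the bias $-\gamma\frac{2m}{n(n-1)}$:

```python
from fractions import Fraction
from itertools import combinations

def naive_bias_crd(adj, alpha, beta, gamma, n_t):
    """Exact E[beta_naive] - DTE under a CRD for the linear model
    Y_i = alpha_i + beta_i z_i + gamma * (number of treated neighbours),
    computed by enumerating every assignment."""
    n = len(adj)
    assigns = list(combinations(range(n), n_t))
    expected = Fraction(0)
    for treated in assigns:
        z = [1 if i in treated else 0 for i in range(n)]
        y = [alpha[i] + beta[i] * z[i] + gamma * sum(z[j] for j in adj[i])
             for i in range(n)]
        y_t = Fraction(sum(yi for yi, zi in zip(y, z) if zi), n_t)
        y_c = Fraction(sum(yi for yi, zi in zip(y, z) if not zi), n - n_t)
        expected += Fraction(1, len(assigns)) * (y_t - y_c)
    return expected - Fraction(sum(beta), n)   # subtract DTE = mean of beta_i

# hypothetical path graph on 4 units: m = 3 edges
adj = [[1], [0, 2], [1, 3], [2]]
bias = naive_bias_crd(adj, alpha=[1, 2, 0, 1], beta=[2, 1, 3, 0], gamma=3, n_t=2)
```

The computed bias equals $-\gamma\cdot 2m/(n(n-1))=-3/2$ here, independently of the $\alpha_{i}$ and $\beta_{i}$ chosen.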
Bernoulli Randomization
Proposition 4.4.
Under a restricted Bernoulli trial, we have,
$$E[\hat{\beta}]-DTE=-\gamma\frac{2m}{n(n-1)}$$
Remark
Propositions 4.3 and 4.4 show that even under the structural assumptions of additivity, linearity and constant interference effect, there is always a bias due to the interference. The bias is independent of $n_{c}$ and $n_{t}$ in the CRD and of $p$ in the Bernoulli design. The bias is in the opposite direction of the interference effect, i.e. a positive interference effect leads to a smaller estimate of the direct effect than the true value. The bias scales as $O\left(\frac{m}{n^{2}}\right)$; hence for sparse, large networks the bias goes to $0$ asymptotically. Is it possible for the bias to be exactly $0$? The answer is yes, as explained in the next section.
4.2.4 Binary Exposure Model
In this section we consider the binary exposure model given in equation 13 and study the bias of $\hat{\beta}_{naive}$ for estimating the DTE.
Completely Randomized Design
Proposition 4.5.
Under model 13, for a CRD we have
$$\displaystyle\mathbb{E}\left[\hat{\beta}\right]-DTE=-\frac{1}{n}\sum_{i}\gamma_{i}\frac{\binom{n_{c}-1}{d_{i}-1}}{\binom{n-1}{d_{i}}}+\frac{1}{n}\sum_{i}\theta_{i}\left(1-\frac{\binom{n_{c}}{d_{i}}}{\binom{n-1}{d_{i}}}\right)$$
Bernoulli Randomization
Proposition 4.6.
For a Bernoulli trial, we have
$$\displaystyle\mathbb{E}\left[\hat{\beta}\right]-DTE=-\sum_{i}\left(\frac{d_{i}\gamma_{i}(1-p)^{d_{i}}}{n(n-d_{i})}\right)+\sum_{i}\theta_{i}\left[\frac{1}{n}-\frac{(1-p)^{d_{i}}}{n}\right]$$
Under the structural assumption of additivity we have $\theta_{i}=0$. In this case, by Proposition 4.5 it follows that the bias of the difference-in-means estimator can be $0$ when $n_{c}<\min_{i}d_{i}$. Thus, we have the following result:
Proposition 4.7.
Consider the binary additive exposure model 14. Under the completely randomized design, if $n_{c}<\min_{i}d_{i}$, then the bias of the difference-in-means estimator $\hat{\beta}_{naive}$ in estimating $DTE$ is $0$.
5 Linear Unbiased Estimators
For any design $p(\textbf{Z}=\textbf{z})$, we can construct unbiased estimators of causal effects by using standard techniques from the survey sampling literature. Let us consider a generic average causal estimand $\theta$ defined as a contrast between two different treatment and exposure combinations $\tau_{1}=(z_{1},e_{1})$ and $\tau_{0}=(z_{0},e_{0})$:
$$\displaystyle\theta=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}(z_{1},e_{1})-Y_{i}(z_{0},e_{0})\right)$$
(22)
For example, $\theta=\beta_{DTE}$ if $(z_{1},e_{1})=(1,0)$ and $(z_{0},e_{0})=(0,0)$, and so on. Following Godambe (1955), let us consider the most general class of linear weighted estimators for estimating $\theta$, i.e
$$\displaystyle\hat{\theta}=\sum_{i}w_{i}(\textbf{z})Y_{i}^{obs}$$
(23)
Here $w_{i}(\textbf{z})$ is the weight assigned to unit $i$. Note that the weight assigned to unit $i$ depends on the treatment assigned to all the units in the finite population, i.e. it depends on z. The set of weights $w_{i}(\textbf{z})$ that lead to unbiased estimators of $\theta$ can be characterized as the solution set of a system of equations that depends on the design, the interference graph and the exposure model.
Theorem 5.1.
Consider an exposure model $e_{i}=f(\textbf{z}_{N_{i}})$ where $N_{i}$ is specified by an interference graph $G$. Assume that there are no structural assumptions on the potential outcomes. Let $\Omega_{i}(z_{1},e_{1})=\{\textbf{z}:z_{i}=z_{1},e_{i}=e_{1}\}$. Similarly, let $\Omega_{i}(z_{0},e_{0})=\{\textbf{z}:z_{i}=z_{0},e_{i}=e_{0}\}$.
The estimator $\hat{\theta}$ in equation 23 is unbiased for $\theta=\frac{1}{n}\sum_{i}\left(Y_{i}(z_{1},e_{1})-Y_{i}(z_{0},e_{0})\right)$ if and only if
$0<\pi_{i}(z_{1},e_{1})<1$ and $0<\pi_{i}(z_{0},e_{0})<1$ and $w_{i}(\textbf{z})$ satisfy the following system of equations:
$$\displaystyle\underset{\textbf{z}\in\Omega_{i}(z_{1},e_{1})}{\sum}w_{i}(\textbf{z})p(\textbf{z})=\frac{1}{n},\quad\forall i=1,\ldots,n$$
$$\displaystyle\underset{\textbf{z}\in\Omega_{i}(z_{0},e_{0})}{\sum}w_{i}(\textbf{z})p(\textbf{z})=-\frac{1}{n},\quad\forall i=1,\ldots,n$$
$$\displaystyle\underset{\textbf{z}\in\Omega_{i}(z,e)}{\sum}w_{i}(\textbf{z})p(\textbf{z})=0,\quad\forall(z,e)\neq(z_{0},e_{0}),\;(z,e)\neq(z_{1},e_{1}),\;i=1,\ldots,n$$
Recall that $\Omega_{i}(z,e)$ is the set of treatment allocations that reveal the potential outcome $Y_{i}(z,e)$ for unit $i$. Note that these sets depend on $i$ and are different for each unit. In general, there can be infinitely many solutions to the system of equations in Theorem 5.1, depending on the interference graph, the exposure model, and the design. Hence there can be infinitely many unbiased estimators of $\theta$. For each unit $i$, let us compare the number of equations $p_{i}$ with the number of unknowns $m$. For each $i$, there are $m=|\Omega|$ unknown weights $w_{i}(\textbf{z}),\textbf{z}\in\Omega$, which depend on the support of the design. On the other hand, there are $p_{i}=2\cdot K_{i}$ linearly independent equations in Theorem 5.1, which depend on the exposure model. Recall that $K_{i}$ is the number of levels of exposure for unit $i$. Hence linear weighted unbiased estimators do not exist if, for each $i$, $m<p_{i}=2K_{i}$. We have the following result:
Proposition 5.1.
Let $m=|\Omega|$ be the number of allocations and $K_{i}$ be the number of levels of the exposure model for each unit $i$. If for each $i$, $m>2K_{i}$ and $0<\pi_{i}(z_{1},e_{1}),\pi_{i}(z_{0},e_{0})<1$, there are infinitely many linear unbiased estimators of $\theta$.
Table 3 gives the values of $m$ and $p_{i}$ for some exposure models and designs. For instance, under a restricted Bernoulli design, there are $m=2^{n}-2$ unknown weights, since $\textbf{z}=\textbf{0}$ and $\textbf{z}=\textbf{1}$ are not allowed, as they violate the positivity assumption. On the other hand, for a symmetric exposure model, there are $p_{i}=2d_{i}$ equations, where $d_{i}$ is the number of units in $N_{i}$.
5.1 Horvitz-Thompson Estimator
If the weight of a unit $i$ is allowed to depend on z only through $z_{i}$ and $e_{i}$, then we get a smaller class of linear estimators of the following form:
$$\displaystyle\hat{\theta}_{2}=\sum_{i}{w_{i}(z_{i},e_{i})Y_{i}^{obs}}$$
(24)
The restriction on the weights is a form of sufficiency: instead of the weight depending on the entire vector z, it depends only on $(z_{i},e_{i})$. Since the potential outcomes are reduced from $Y_{i}(\textbf{z})$ to $Y_{i}(z_{i},e_{i})$, it is natural to consider such a reduction of the weights from $w_{i}(\textbf{z})$ to $w_{i}(z_{i},e_{i})$.
Theorem 5.2 shows that under no structural assumptions on the potential outcomes, the only unbiased estimator of type $\hat{\theta}_{2}$ is the Horvitz-Thompson estimator.
Theorem 5.2.
Consider the estimators of type $\hat{\theta}_{2}$ given by equation 24. Without any structural assumptions on the potential outcomes, the only unbiased estimator of $\theta$ in this class is the Horvitz-Thompson estimator $\hat{\theta}_{HT}$ where
$$\displaystyle w_{i}(z_{i},e_{i})=\begin{cases}\frac{1}{n\pi_{i}(z_{1},e_{1})}&\text{ if }(z_{i},e_{i})=(z_{1},e_{1})\\
-\frac{1}{n\pi_{i}(z_{0},e_{0})}&\text{ if }(z_{i},e_{i})=(z_{0},e_{0})\\
0&\text{ otherwise }\end{cases}$$
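These weights can be sketched in Python as follows, assuming a user-supplied propensity function `pi(i, z, e)` (the constant scores in the usage note are hypothetical):

```python
def horvitz_thompson(Y, Z, E, pi, tau1, tau0):
    """Horvitz-Thompson contrast: weight 1/(n pi_i(tau1)) for units observed
    at tau1 = (z1, e1), -1/(n pi_i(tau0)) for units at tau0 = (z0, e0),
    and weight 0 for every other exposure. `pi` maps (i, z, e) to P(Z_i=z, E_i=e)."""
    n = len(Y)
    est = 0.0
    for i, (y, z, e) in enumerate(zip(Y, Z, E)):
        if (z, e) == tau1:
            est += y / (n * pi(i, *tau1))
        elif (z, e) == tau0:
            est -= y / (n * pi(i, *tau0))
    return est
```

For example, with constant hypothetical scores $\pi_{i}\equiv 0.25$, `Y=[4, 2, 8, 6]`, `Z=[1, 0, 1, 0]`, `E=[0, 0, 1, 0]`, the contrast of $(1,0)$ against $(0,0)$ is $4-8=-4$; the unit observed at $(1,1)$ receives weight $0$.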
The HT estimator eliminates both sources of bias mentioned in Section 4 by choosing the correct weights. In particular, the HT estimator assigns a weight of $0$ to nuisance potential outcomes, and a positive weight to relevant potential outcomes. The positive weight is inversely proportional to the probability of observing that potential outcome under the design $p(\textbf{Z})$. The HT estimator depends on the propensity scores $\pi_{i}(z_{i},e_{i})$. As mentioned before, these probabilities depend on the design and the exposure model. We compute analytical formulas for these probabilities under the CRD and the Bernoulli designs for different exposure models.
Theorem 5.3 (Propensity Scores for Symmetric Exposure).
Consider the symmetric exposure function, $e_{i}=f(\textbf{Z}_{N_{i}})=|\textbf{Z}_{N_{i}}|$, $e_{i}\in\{0,1,\ldots,d_{i}\}$.
For a CRD Design,
$$\displaystyle\mathbb{P}\left(Z_{i}=1,E_{i}=e_{i}\right)=\frac{n_{t}}{n}\frac{\binom{n_{t}-1}{e_{i}}\binom{n_{c}}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\text{ if }n_{t}\geq e_{i}+1\text{ and }n_{c}\geq d_{i}-e_{i},\text{ and }0\text{ otherwise}$$
$$\displaystyle\mathbb{P}\left(Z_{i}=0,E_{i}=e_{i}\right)=\frac{n_{c}}{n}\frac{\binom{n_{t}}{e_{i}}\binom{n_{c}-1}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\text{ if }n_{t}\geq e_{i}\text{ and }n_{c}\geq d_{i}-e_{i}+1,\text{ and }0\text{ otherwise}$$
For a Bernoulli Design,
$$\displaystyle\mathbb{P}\left(Z_{i}=1,E_{i}=e_{i}\right)=\binom{d_{i}}{e_{i}}p^{e_{i}+1}(1-p)^{d_{i}-e_{i}}$$
$$\displaystyle\mathbb{P}\left(Z_{i}=0,E_{i}=e_{i}\right)=\binom{d_{i}}{e_{i}}p^{e_{i}}(1-p)^{d_{i}-e_{i}+1}$$
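As a sanity check on the Bernoulli-design formulas: each score factors into a binomial term for the $d_{i}$ neighbors times the unit's own assignment probability, so the scores sum to $1$ over all $(z,e)$. A Python sketch (hypothetical $d$ and $p$):

```python
from math import comb

def pi_symmetric_bernoulli(z, e, d, p):
    """P(Z_i = z, E_i = e) under a Bernoulli(p) design with symmetric
    exposure, for a unit of degree d (Theorem 5.3)."""
    neighbours = comb(d, e) * p**e * (1 - p)**(d - e)   # e of d neighbours treated
    return neighbours * (p if z == 1 else 1 - p)        # times own assignment
```

Summing over $z\in\{0,1\}$ and $e\in\{0,\ldots,d\}$ gives $p\cdot 1+(1-p)\cdot 1=1$, since the neighbor term is a binomial pmf.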
Theorem 5.4 (Propensity Scores for Binary Exposure).
Consider the binary exposure function, $e_{i}=f(\textbf{Z}_{N_{i}})=I(|\textbf{Z}_{N_{i}}|\geq 1)$, $e_{i}\in\{0,1\}$,
i.e. a unit is exposed if at least one of its neighbors is treated.
For a CRD,
$$\displaystyle\mathbb{P}\left(Z_{i}=1,E_{i}=1\right)=\begin{cases}0&\text{if }d_{i}=0\\
\frac{n_{t}}{n}\left[1-\frac{\binom{n_{c}}{d_{i}}}{\binom{n-1}{d_{i}}}\right]&\text{if }0<d_{i}\leq n_{c}\\
\frac{n_{t}}{n}&\text{if }d_{i}>n_{c}\end{cases}$$
$$\displaystyle\mathbb{P}\left(Z_{i}=1,E_{i}=0\right)=\begin{cases}\frac{n_{t}}{n}&\text{if }d_{i}=0\\
\frac{n_{t}}{n}\frac{\binom{n_{c}}{d_{i}}}{\binom{n-1}{d_{i}}}&\text{if }0<d_{i}\leq n_{c}\\
0&\text{if }d_{i}>n_{c}\end{cases}$$
$$\displaystyle\mathbb{P}\left(Z_{i}=0,E_{i}=1\right)=\begin{cases}0&\text{if }d_{i}=0\\
\frac{n_{c}}{n}\left[1-\frac{\binom{n_{c}-1}{d_{i}}}{\binom{n-1}{d_{i}}}\right]&\text{if }0<d_{i}\leq n_{c}-1\\
\frac{n_{c}}{n}&\text{if }d_{i}>n_{c}-1\end{cases}$$
$$\displaystyle\mathbb{P}\left(Z_{i}=0,E_{i}=0\right)=\begin{cases}\frac{n_{c}}{n}&\text{if }d_{i}=0\\
\frac{n_{c}}{n}\frac{\binom{n_{c}-1}{d_{i}}}{\binom{n-1}{d_{i}}}&\text{if }0<d_{i}\leq n_{c}-1\\
0&\text{if }d_{i}>n_{c}-1\end{cases}$$
Similarly, for a Bernoulli trial with probability of success $p$, we have
$$\displaystyle\mathbb{P}\left(Z_{i}=1,E_{i}=1\right)=p(1-(1-p)^{d_{i}})$$
$$\displaystyle\mathbb{P}\left(Z_{i}=1,E_{i}=0\right)=p(1-p)^{d_{i}}$$
$$\displaystyle\mathbb{P}\left(Z_{i}=0,E_{i}=1\right)=(1-p)(1-(1-p)^{d_{i}})$$
$$\displaystyle\mathbb{P}\left(Z_{i}=0,E_{i}=0\right)=(1-p)^{d_{i}+1}$$
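The Bernoulli-design formulas above can be confirmed by enumerating the $2^{d_{i}}$ neighbor assignments directly; a Python sketch with hypothetical $d=4$ and $p=0.3$:

```python
from itertools import product
from math import prod

def pi_binary_bernoulli(z, e, d, p):
    """P(Z_i = z, E_i = e) for binary exposure (e = 1 iff at least one
    treated neighbour) under a Bernoulli(p) design (Theorem 5.4)."""
    q = (1 - p)**d                      # probability that no neighbour is treated
    return (p if z == 1 else 1 - p) * (q if e == 0 else 1 - q)

# brute-force check against enumeration of the d neighbour assignments
d, p = 4, 0.3
for z in (0, 1):
    for e in (0, 1):
        brute = sum((p if z else 1 - p)
                    * prod(p if t else 1 - p for t in nb)
                    for nb in product((0, 1), repeat=d)
                    if int(any(nb)) == e)
        assert abs(brute - pi_binary_bernoulli(z, e, d, p)) < 1e-12
```

The four probabilities also sum to $1$, since $p(1-q)+pq+(1-p)(1-q)+(1-p)q=p+(1-p)=1$.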
Under a cluster randomized design, let $u_{i}$ be the number of distinct clusters containing unit $i$ and its neighbors. Assume $n_{k}>0,\forall k=1,\ldots,K$.
$$\displaystyle\mathbb{P}\left(z_{i}=1,e_{i}=1\right)=\frac{K_{t}}{K}$$
$$\displaystyle\mathbb{P}\left(z_{i}=1,e_{i}=0\right)=0$$
$$\displaystyle\mathbb{P}\left(z_{i}=0,e_{i}=1\right)=\begin{cases}0&\text{ if }u_{i}=1\\
\frac{K_{c}}{K}\left[1-\prod_{j=1}^{u_{i}-1}\frac{K_{c}-j}{K-j}\right]&\text{ if }u_{i}>1\end{cases}$$
$$\displaystyle\mathbb{P}\left(z_{i}=0,e_{i}=0\right)=\begin{cases}\frac{K_{c}}{K}&\text{ if }u_{i}=1\\
\frac{K_{c}}{K}\prod_{j=1}^{u_{i}-1}\frac{K_{c}-j}{K-j}&\text{ if }u_{i}>1\end{cases}=\prod_{j=1}^{u_{i}}\frac{K_{c}-j+1}{K-j+1}$$
Remark 11.
The weights of the HT estimator depend on the exposure model and the interference graph $G$. In cases where the interference graph $G$ is not known, the HT estimator may not be usable.
Remark 12.
The weights of the HT estimator depend only on the exposure model and the design. They do not depend on the structural model. For example, in the linear model of Potential Outcomes, the HT weights do not depend on the linearity of the model, but only on the exposure neighborhood.
Remark 13.
For a cluster randomized design, we cannot estimate $\beta_{DE}$ using HT estimators because some propensity scores are $0$.
5.2 Inadmissibility of the Horvitz-Thompson estimator
It is clear that the class of linear weighted unbiased estimators given in equation 23 is quite large. There is no single best choice within this class, because uniformly minimum variance unbiased estimators of $\theta$ do not exist. This can be shown by adapting a classical argument of Godambe (1955), who showed that uniformly minimum variance unbiased estimators of finite population totals do not exist. On the other hand, if we restrict ourselves to the smaller class of linear unbiased estimators $\hat{\theta}_{2}$, obtained by requiring the weights to depend only on the treatment $z_{i}$ and the exposure $e_{i}$ of unit $i$, the HT estimator is the only unbiased estimator and hence is trivially the minimum variance unbiased estimator in that class.
It is natural to ask if the HT estimator satisfies some optimality properties in a larger class of estimators. We study the admissibility of the HT estimator with respect to the mean squared error, in the class of all estimators for estimating a causal parameter $\theta$ under interference. The mean squared error of an estimator is defined as
$$MSE(\hat{\theta})=\mathbb{E}_{p(\textbf{Z})}[(\hat{\theta}-\theta)^{2}]$$
Definition 10.
An estimator $\hat{\theta}_{1}$ is inadmissible with respect to mean squared error if there exists an estimator $\hat{\theta}_{2}$ such that
$MSE(\hat{\theta}_{2})\leq MSE(\hat{\theta}_{1})$ for all $\theta$, with strict inequality for at least one $\theta$.
For finite population inference, the admissibility of the HT estimator for estimating a finite population total in the class of all unbiased estimators is well known, see Godambe and Joshi (1965).
We show that the Horvitz-Thompson estimator is inadmissible under the class of all estimators with respect to the mean squared error for a special class of designs called the non-constant designs. A non-constant design is a design where the number of units allocated to the treatment and exposure combinations of interest are random.
Definition 11 (Non-Constant Designs).
Consider a generic estimand $\theta$ given in equation 8 that is a contrast between treatment and exposure combinations $\tau_{0}=(z_{0},e_{0})$ and $\tau_{1}=(z_{1},e_{1})$. Let $X_{0}=\sum_{i=1}^{n}I(Z_{i}=z_{0},E_{i}=e_{0})$ and $X_{1}=\sum_{i=1}^{n}I(Z_{i}=z_{1},E_{i}=e_{1})$.
A design $\mathbb{P}$ is a non-constant design for an estimand $\theta$ if $X_{0}$ and $X_{1}$ are random.
Theorem 5.5 (Inadmissibility of HT).
Let $\mathbb{P}$ be any non-constant design as given in Definition 11. Consider the class of all estimators of $\theta$ with respect to the design $\mathbb{P}$. The Horvitz-Thompson estimator is inadmissible with respect to the mean squared error in this class.
It can be verified that, under interference, most commonly used designs such as the Bernoulli design, the CRD, and cluster randomized designs are non-constant. This is because these designs control the treatment condition, but the exposure is only indirectly assigned, and hence the number of units under $\tau_{0}$ and $\tau_{1}$ is random. Consequently, the HT estimator is inadmissible for estimating average causal effects under interference for these designs.
5.3 Improving the Horvitz-Thompson Estimator
The HT estimator is the only unbiased estimator in the class of linear estimators whose weights depend only on $(z_{i},e_{i})$. However, the Horvitz-Thompson estimator is inadmissible with respect to the mean squared error in a larger class of estimators. In fact, its mean squared error can be quite large; see Section A. There are three general directions in which the HT estimator can be improved.
1.
Generalized Linear Estimators: Allow the weights to depend on the sample and/or auxiliary information.
2.
Model dependent Unbiased Estimation: Make structural assumptions on the Potential outcomes and seek unbiased estimators under the model assumptions.
3.
Model assisted Estimation: Make structural assumptions on the Potential Outcomes and seek model assisted HT estimators that are mildly biased.
5.3.1 Generalized Linear Estimators
The Horvitz-Thompson estimator can be improved by taking into account auxiliary information that depends on the labels of the sample. Such estimators fall into the class $\hat{\theta}_{1}$ that we have considered before and are called generalized linear estimators in the survey sampling literature; see for instance Basu (2011). We present the so-called generalized difference estimator, which can improve on the HT estimator while remaining unbiased. This estimator takes into account auxiliary information on the potential outcomes for each unit and in some cases has smaller variance than the HT estimator.
Following Basu (2011), let $a_{1},\ldots,a_{n}$ and $b_{1},\ldots,b_{n}$ be auxiliary information available for each unit $i$. Then the following difference estimator is an unbiased estimator of $\theta$:
$$\displaystyle\hat{\theta}_{D}=\frac{1}{N}\left(\sum_{i}(Y_{i}(z_{1},e_{1})-a_{i})\frac{I_{i}(z_{1},e_{1})}{\pi_{i}(z_{1},e_{1})}-\sum_{i}(Y_{i}(z_{0},e_{0})-b_{i})\frac{I_{i}(z_{0},e_{0})}{\pi_{i}(z_{0},e_{0})}+\sum_{i}\left(a_{i}-b_{i}\right)\right)$$
(25)
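A minimal Python sketch of $\hat{\theta}_{D}$ (the propensity function and the guesses $a_{i},b_{i}$ are hypothetical inputs); with $a_{i}=b_{i}=0$ it reduces to the HT estimator:

```python
def difference_estimator(Y, Z, E, pi, a, b, tau1, tau0):
    """Generalized difference estimator (equation 25): HT-type terms on the
    shifted outcomes Y_i - a_i and Y_i - b_i, plus the known sum of a_i - b_i.
    Unbiased for theta for any fixed choice of the guesses a_i, b_i."""
    n = len(Y)
    est = sum(ai - bi for ai, bi in zip(a, b)) / n
    for i, (y, z, e) in enumerate(zip(Y, Z, E)):
        if (z, e) == tau1:
            est += (y - a[i]) / (n * pi(i, *tau1))
        elif (z, e) == tau0:
            est -= (y - b[i]) / (n * pi(i, *tau0))
    return est
```

The closer the guesses are to the true potential outcomes, the smaller the random HT-type terms, which is the source of the variance reduction.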
Here, $a_{i}$ and $b_{i}$ can be thought of as a priori information about the potential outcomes $Y_{i}(z_{1},e_{1})$ and $Y_{i}(z_{0},e_{0})$. This estimator is a special case of the following generalized difference estimator. To define it, let
$$\displaystyle\hat{\bar{Y}}(z_{1},e_{1})=\frac{1}{N}\left(\sum_{i}\frac{Y_{i}(z_{1},e_{1})I(Z_{i}=z_{1},E_{i}=e_{1})}{\pi_{i}(z_{1},e_{1})}+\lambda_{1}\left(\sum_{i}\frac{a_{i}I(Z_{i}=z_{1},E_{i}=e_{1})}{\pi_{i}(z_{1},e_{1})}-\sum_{i}a_{i}\right)\right)$$
$$\displaystyle\hat{\bar{Y}}(z_{0},e_{0})=\frac{1}{N}\left(\sum_{i}\frac{Y_{i}(z_{0},e_{0})I(Z_{i}=z_{0},E_{i}=e_{0})}{\pi_{i}(z_{0},e_{0})}+\lambda_{2}\left(\sum_{i}\frac{b_{i}I(Z_{i}=z_{0},E_{i}=e_{0})}{\pi_{i}(z_{0},e_{0})}-\sum_{i}b_{i}\right)\right)$$
where $\lambda_{1}$ and $\lambda_{2}$ are fixed numbers.
Then we have,
$$\hat{\theta}_{GD}=\hat{\bar{Y}}(z_{1},e_{1})-\hat{\bar{Y}}(z_{0},e_{0})$$
If we set $\lambda_{1}=\lambda_{2}=-1$, then $\hat{\theta}_{GD}=\hat{\theta}_{D}$.
5.3.2 Model dependent Unbiased Estimation
In model dependent unbiased estimation, one assumes a structural model for the potential outcomes and constructs unbiased estimators with respect to the model. For instance, consider the estimation of the direct treatment effect. Let us assume the additive model of potential outcomes with $C_{i}(e_{i})=0\;\forall i$, as given in equation 21. Then any linear weighted estimator of the form
$$\hat{\theta}=\sum_{i}w_{i}(z_{i},e_{i})Y_{i}^{obs}$$
is an unbiased estimator of the direct treatment effect, where the weights are given by the following system:
For each unit $i$, consider the linear system,
$$\displaystyle w_{i}(1,0)\pi_{i}(1,0)+\ldots+w_{i}(1,d_{i})\pi_{i}(1,d_{i})=\frac{1}{n}$$
$$\displaystyle w_{i}(0,e_{i})\pi_{i}(0,e_{i})+w_{i}(1,e_{i})\pi_{i}(1,e_{i})=0,\quad e_{i}\in\{0,\ldots,d_{i}\}$$
(26)
The unbiasedness of the estimator depends on the structural assumptions on the potential outcomes, which are not known in general. Instead of relying on modeling assumptions, one may use the model only to assist in designing estimators.
5.3.3 Model assisted Estimation
The model assisted approach is very popular in the survey sampling literature; see e.g. Särndal et al. (2003). Assume that for each unit $i$ there exists auxiliary information $X_{i}$, and the goal is to estimate the finite population total $\sum_{i}Y_{i}$. In model assisted estimation, a working model relating $Y_{i}$ and $X_{i}$ is assumed. This may introduce a mild bias in the estimate as a trade-off for a reduction in variance. Such model assisted estimators are called Generalized Regression (GREG) estimators or calibration estimators in the survey literature.
The model assisted approach fits naturally in causal inference with interference. As in survey sampling, model assisted estimators can be constructed by assuming a “working” model of potential outcomes. There are several natural models of potential outcomes that one can consider. Moreover, a natural auxiliary variable associated with each unit is the exposure level $e_{i}$. When considering causal estimands with interference, the model assisted approach offers a subtle advantage: it allows one to include units with nuisance potential outcomes in the estimator. The model relates the nuisance potential outcomes to the relevant potential outcomes, thus allowing us to use both in the estimator. For instance, consider estimation of the direct treatment effect using the HT estimator. Without any underlying model, the only units that appear in the estimator are those whose observed exposure is $(z_{i}=0,e_{i}=0)$ or $(z_{i}=1,e_{i}=0)$; any unit with a different exposure does not appear. The model assisted approach, however, allows us to include information from units whose observed outcome is a nuisance potential outcome, thereby increasing the effective sample size.
For example, consider the symmetric linear model for the potential outcomes:
$$\displaystyle Y_{i}(z_{i},e_{i})=\alpha+\beta z_{i}+\gamma e_{i}+\delta z_{i}e_{i}$$
(27)
where $e_{i}\in\{0,1,\ldots,d_{i}\}$ denotes the number of treated units in the interference neighborhood of unit $i$. Let $\hat{\alpha}$, $\hat{\beta}$, $\hat{\gamma}$, and $\hat{\delta}$ be the weighted least squares estimates of $\alpha,\beta,\gamma,$ and $\delta$, where the weight for unit $i$ is $w_{i}=\frac{1}{\pi_{i}(Z_{i},E_{i})}$. For any exposure $(z,e)$, let $\hat{Y}_{i}(z,e)$ be the potential outcome predicted by the least squares fit, and let $\hat{\epsilon}_{i}(z,e)=Y_{i}(z,e)-\hat{Y}_{i}(z,e)$ be the corresponding residual.
The GREG estimator is defined as
$$\displaystyle\hat{\beta}_{greg}=\hat{\bar{Y}}_{g}(z_{1},e_{1})-\hat{\bar{Y}}_{g}(z_{0},e_{0})$$
(28)
where,
$$\displaystyle\hat{\bar{Y}}_{g}(z_{1},e_{1})=\frac{1}{N}\sum_{i=1}^{N}\hat{Y}_{i}(z_{1},e_{1})+\frac{1}{N}\sum_{i}\frac{\hat{\epsilon}_{i}(z_{1},e_{1})\,I(Z_{i}=z_{1},E_{i}=e_{1})}{\pi_{i}(z_{1},e_{1})}$$
$$\displaystyle\hat{\bar{Y}}_{g}(z_{0},e_{0})=\frac{1}{N}\sum_{i=1}^{N}\hat{Y}_{i}(z_{0},e_{0})+\frac{1}{N}\sum_{i}\frac{\hat{\epsilon}_{i}(z_{0},e_{0})\,I(Z_{i}=z_{0},E_{i}=e_{0})}{\pi_{i}(z_{0},e_{0})}$$
6 New Designs for estimating ATE
The CRD and Bernoulli designs are oblivious to the interference structure. There are at least two issues with using such designs for causal inference with interference. The first is that the experimenter has no control over which potential outcomes are revealed. The second is that the observed potential outcomes have unequal probabilities of being revealed, which can lead to increased variance and bias in estimation.
In fact, a careful analysis reveals that these issues are two sides of the same coin: the exposure condition $e_{i}$ is only indirectly assigned, so the experimenter has only indirect control over the exposure probability $\pi_{i}(z_{i},e_{i})$. Hence in some cases relevant potential outcomes are never observed, since $\pi_{i}(z,e)$ can be $0$; in other cases, this probability is non-uniform.
For a concrete example, consider the symmetric exposure model. Each unit has $2(d_{i}+1)$ potential outcomes, depending on the treatment status of unit $i$ and the number of treated units in its interference neighborhood. Here $d_{i}$ is the number of units in $i$'s interference neighborhood. Suppose we are interested in estimating the direct treatment effect. In this setting, the only relevant potential outcomes for each unit are $Y_{i}(1,0)$ and $Y_{i}(0,0)$; any unit that has at least one treated neighbor is thus not included in the estimator. A naive design on a dense interference graph can lead to situations where all units reveal only nuisance potential outcomes, and can thus be wasteful. We must therefore consider new designs for estimating causal effects under interference. One more subtle issue is that a design that is optimal for estimating one type of estimand may be far from optimal for a different estimand. For example, the cluster randomized design cannot be used to estimate the direct treatment effect, as seen from the results of Theorem 5.4. However, as we will see in Appendix A, simulations suggest that the cluster randomized design (along with any estimator) has the least mean squared error for estimating the total treatment effect.
6.1 Re-randomization for estimating $\beta_{1}$ and $\beta_{2}$
A simple way to avoid bad designs in which nuisance potential outcomes are revealed is to re-randomize until a desired number of units fall under the treatment and exposure assignments that reveal relevant potential outcomes. For instance, consider estimating the DTE using a CRD design. Let $\textbf{z}$ be a realized treatment vector and let $n(1,0)$ be the number of units $i$ that reveal $Y_{i}(1,0)$; define $n(0,0)$ similarly. In general, $n(1,0)+n(0,0)<n$, and in fact both these numbers can be $0$ for dense graphs. The re-randomization strategy is to perform rejection sampling until $n(1,0)$ and $n(0,0)$ both exceed a given threshold. Estimation is then done using the HT estimator.
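The rejection-sampling scheme can be sketched as follows. This is an illustrative sketch under the binary exposure model; the function name `rerandomized_crd` and the 0/1 adjacency-matrix interface are our own assumptions.

```python
import numpy as np

def rerandomized_crd(adj, n_t, threshold, rng, max_tries=10000):
    """Rejection-sample CRD assignments until at least `threshold` units
    reveal each of the relevant exposures (z, e) = (1, 0) and (0, 0).

    adj : symmetric 0/1 adjacency matrix of the interference graph
    n_t : number of treated units in each CRD draw
    """
    n = adj.shape[0]
    for _ in range(max_tries):
        z = np.zeros(n, dtype=int)
        z[rng.choice(n, size=n_t, replace=False)] = 1
        treated_nbrs = adj @ z            # treated neighbors per unit
        n10 = int(np.sum((z == 1) & (treated_nbrs == 0)))
        n00 = int(np.sum((z == 0) & (treated_nbrs == 0)))
        if n10 >= threshold and n00 >= threshold:
            return z
    raise RuntimeError("no acceptable assignment found")
```

On dense graphs the acceptance probability can be tiny, which is exactly the slowness noted below.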
Clearly, the re-randomization approach can be very slow. An alternate strategy is to consider new designs where we maximize the number of units that reveal relevant potential outcomes. We discuss such a design to estimate the Direct Treatment Effect and the Total Treatment Effect next.
6.2 The Independent Set Design for estimating Direct Effect
Consider the problem of estimating the direct treatment effect under the symmetric exposure model when an interference graph $G$ is known. In estimating the DTE, the only relevant potential outcomes are those in which a unit's neighborhood in $G$ is untreated. We can construct such a design using the concept of an independent set. An independent set $\mathcal{I}$ is a set of vertices in the graph such that no two vertices in $\mathcal{I}$ share an edge. A maximal independent set is an independent set that is not a proper subset of any other independent set. Independent sets have been well studied in the graph theory literature, and constructing a maximum independent set is NP-hard. Fortunately, for the independent set design it is sufficient to construct a random independent set of size $k$. In fact, a design based on maximal independent sets may violate the positivity condition needed for unbiased estimation.
The independent set design iteratively selects nodes to be included in the independent set; these are also called ego nodes. The nodes not in the independent set are called alters. At each step, a random node is selected for inclusion in $\mathcal{I}$; the node and its neighbors are then deleted. This process is repeated until no nodes remain. Let $k$ be the number of units in the independent set. Randomization is performed by randomly assigning $k_{t}$ of these nodes to treatment.
1.
Let $G$ be the interference graph on $n$ nodes. Set $G_{0}=G$.
2.
Let the independent set $\mathcal{I}=\emptyset$.
(a)
At step $t=1,\ldots$, choose a unit $i$ randomly from $G_{t-1}$
(b)
Insert $i$ in the independent set $\mathcal{I}$.
(c)
Let $G_{t}$ be the graph obtained by deleting $i$ and its neighbors from $G_{t-1}$.
(d)
If $G_{t}$ is empty, stop.
3.
Choose $k_{t}$ units in $\mathcal{I}$ at random and assign them to treatment; the remaining units in $\mathcal{I}$ are assigned to control.
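The steps above can be sketched in Python. This is a minimal version, assuming the interference graph is given as a 0/1 adjacency matrix; the function name `independent_set_design` is our own.

```python
import numpy as np

def independent_set_design(adj, k_t, rng):
    """Greedy random independent-set design: repeatedly pick a random
    remaining node as an ego, delete it and its neighbors, then assign
    k_t randomly chosen egos to treatment (the rest of the egos to
    control). Requires k_t <= number of egos found.

    Returns (egos, z) where z is the full treatment vector.
    """
    n = adj.shape[0]
    alive = np.ones(n, dtype=bool)
    egos = []
    while alive.any():
        i = rng.choice(np.flatnonzero(alive))   # random remaining node
        egos.append(i)
        alive[i] = False
        alive[adj[i].astype(bool)] = False      # delete i's neighbors
    z = np.zeros(n, dtype=int)
    z[rng.choice(egos, size=k_t, replace=False)] = 1
    return egos, z
```

Because treated units are always egos and egos are pairwise non-adjacent, every ego ends up in the $(z_{i}=1,e_{i}=0)$ or $(z_{i}=0,e_{i}=0)$ condition, as claimed below.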
The independent set design ensures that every ego is assigned to either the $(z_{i}=1,e_{i}=0)$ or the $(z_{i}=0,e_{i}=0)$ condition. Only the units in the ego set are used to estimate the causal effect; the alter units act as buffers that prevent interference. Hence, it is beneficial to maximize the number of units in the ego set.
Note that in the greedy algorithm every unit has a positive probability of being in $\mathcal{I}$. This may not be the case in other variants of the independent set algorithm, e.g., one that starts with the unit of smallest degree. In that case, the causal estimate is not unbiased over all $n$ units, but it can be unbiased for those units that have a positive probability of being included in the independent set $\mathcal{I}$. One way to address this is to choose a random unit with probability $p$ and the unit with the smallest degree with probability $1-p$.
It is important to note that we still need to account for the unequal probabilities of revealing the potential outcomes, and hence we need the Horvitz-Thompson estimator or one of its variants to obtain an unbiased or approximately unbiased estimate. This requires knowledge of the propensity scores. Unlike the CRD and Bernoulli designs, the random independent set design admits no simple expressions for the propensity scores. However, they can be computed by Monte Carlo simulation.
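Such a Monte Carlo propensity-score computation can be sketched for any design we can simulate. The sampler interface `draw(rng)` and the binary exposure indicator below are our own illustrative assumptions.

```python
import numpy as np

def mc_propensity(draw, adj, n_sims, rng):
    """Monte Carlo estimate of the propensity scores pi_i(z=1, e=0) and
    pi_i(z=0, e=0) under an arbitrary design, given as a sampler
    `draw(rng) -> z` that returns a 0/1 treatment vector.
    """
    n = adj.shape[0]
    counts = np.zeros((n, 2))
    for _ in range(n_sims):
        z = draw(rng)
        e = (adj @ z > 0).astype(int)     # binary exposure indicator
        counts[:, 0] += (z == 1) & (e == 0)
        counts[:, 1] += (z == 0) & (e == 0)
    return counts / n_sims                # columns: (1,0) and (0,0)
```

The same sampler can then be reused, with the estimated scores, inside the Horvitz-Thompson estimator.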
6.3 Cluster Randomized Design for estimating Total Treatment Effect
A drawback of the independent set design is that it cannot be used to estimate the total treatment effect. In fact, any design that reveals the relevant potential outcomes for estimating the total treatment effect will reveal nuisance potential outcomes for estimating the direct treatment effect and vice versa.
Here we consider a cluster-based design for estimating the $TTE$, introduced by Ugander et al. (2013). Assume that the interference neighborhood depends only on the immediate neighbors as defined by the interference graph $G$. Consider partitioning the graph $G$ into clusters $1,\ldots,K$. In the cluster randomized design, we select $n_{k}$ clusters and label them with treatment; the remaining clusters are labeled with control. The nodes in each cluster are assigned the treatment status indicated by their cluster's label. This design attempts to increase the number of relevant potential outcomes $Y_{i}(1,1)$ and $Y_{i}(0,0)$ for estimating the $TTE$. Unbiased estimation is done using the Horvitz-Thompson estimator.
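Given a clustering, the assignment step of the cluster randomized design is straightforward to sketch. The function name and interface are our own assumptions, and constructing a good clustering of $G$ is a separate problem not addressed here.

```python
import numpy as np

def cluster_randomized(clusters, n_treat_clusters, rng):
    """Graph cluster randomization: treat every unit in
    n_treat_clusters randomly chosen clusters, control the rest.

    clusters : array mapping each unit to its cluster label 0..K-1
    """
    K = clusters.max() + 1
    treated = rng.choice(K, size=n_treat_clusters, replace=False)
    return np.isin(clusters, treated).astype(int)
```

Since whole clusters move together, a unit whose neighbors all lie in its own cluster is guaranteed to land in the $(1,1)$ or $(0,0)$ condition.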
7 Discussion
We systematically investigated the problem of estimating causal effects in the presence of treatment interference. Under arbitrary treatment interference, the number of potential outcomes explodes, rendering causal inference impossible. A starting point for resolving this issue is to posit models of potential outcomes that reduce the total number of potential outcomes. These models are specified using an interference graph $G$ and an exposure model. Using the exposure model, the potential outcomes for each unit can be decomposed into direct effects, interference effects, and the interaction between the two. Relying on this nonparametric linear decomposition of potential outcomes, we proposed two classes of causal estimands: marginal effects and average effects. These classes contain many of the popular estimands considered in the literature, such as the direct treatment effect and the total treatment effect.
Focusing on the direct treatment effect, we showed that the classical designs and difference-in-means estimators can be biased. The nature and magnitude of the bias depend on the interference graph and the exposure model, both of which may be unknown. The bias remains even after making strong linearity assumptions on the potential outcomes, although it is mild when the potential outcomes are linear and additive and the interference graph is sparse. The Horvitz-Thompson estimator, on the other hand, is always unbiased, as long as the correct propensity scores are used. In practice, however, the Horvitz-Thompson estimator performs quite poorly in terms of mean squared error due to its high variance. Moreover, the weights it uses depend on the interference graph and the exposure model, which are not known in general.
A central open issue is to design estimation strategies when the interference graph and the exposure model are not known. One possibility is to consider estimators and designs that are robust to the interference graph and the exposure model; another is to learn them from the data. An important related question that deserves further investigation is testing the assumed form of interference.
References
Aronow and Samii [2013]
Peter M Aronow and Cyrus Samii.
Estimating average causal effects under interference between units.
arXiv preprint arXiv:1305.6156, 2013.
Athey et al. [2017]
Susan Athey, Dean Eckles, and Guido W Imbens.
Exact p-values for network interference.
Journal of the American Statistical Association, pages 1–11,
2017.
Basse and Feller [2017]
Guillaume Basse and Avi Feller.
Analyzing two-stage experiments in the presence of interference.
Journal of the American Statistical Association, (just-accepted), 2017.
Basse and Airoldi [2015]
Guillaume W Basse and Edoardo M Airoldi.
Optimal design of experiments in the presence of network-correlated
outcomes.
arXiv preprint arXiv:1507.00803, 2015.
Basu [2011]
Debabrata Basu.
An essay on the logical foundations of survey sampling, part one.
In Selected Works of Debabrata Basu, pages 167–206. Springer,
2011.
Bowers et al. [2012]
Jake Bowers, Mark Fredrickson, and Costas Panagopoulos.
Reasoning about interference between units.
arXiv preprint arXiv:1208.0366, 2012.
Choi [2017]
David Choi.
Estimation of monotone treatment effects in network experiments.
Journal of the American Statistical Association, 112(519):1147–1155, 2017.
David and Kempton [1996]
Olivier David and Rob A Kempton.
Designs for interference.
Biometrics, pages 597–606, 1996.
Eckles et al. [2014]
Dean Eckles, Brian Karrer, and Johan Ugander.
Design and analysis of experiments in networks: Reducing bias from
interference.
arXiv preprint arXiv:1404.7530, 2014.
Forastiere et al. [2016]
Laura Forastiere, Edoardo M Airoldi, and Fabrizia Mealli.
Identification and estimation of treatment and interference effects
in observational studies on networks.
arXiv preprint arXiv:1609.06245, 2016.
Godambe [1955]
VP Godambe.
A unified theory of sampling from finite populations.
Journal of the Royal Statistical Society. Series B
(Methodological), pages 269–278, 1955.
Godambe and Joshi [1965]
VP Godambe and VM Joshi.
Admissibility and bayes estimation in sampling finite populations. i.
The Annals of Mathematical Statistics, 36(6):1707–1722, 1965.
Goldsmith-Pinkham and Imbens [2013]
Paul Goldsmith-Pinkham and Guido W Imbens.
Social networks and the identification of peer effects.
Journal of Business & Economic Statistics, 31(3):253–264, 2013.
Halloran and Hudgens [2016]
M Elizabeth Halloran and Michael G Hudgens.
Dependent happenings: a recent methodological review.
Current epidemiology reports, 3(4):297–305, 2016.
Holland [1986]
Paul W Holland.
Statistics and causal inference.
Journal of the American Statistical Association, 81(396):945–960, 1986.
Hudgens and Halloran [2008]
Michael G Hudgens and M Elizabeth Halloran.
Toward causal inference with interference.
Journal of the American Statistical Association, (482):832–842, 2008.
Jagadeesan et al. [2017]
Ravi Jagadeesan, Natesh Pillai, and Alexander Volfovsky.
Designs for estimating the treatment effect in networks with
interference.
arXiv preprint arXiv:1705.08524, 2017.
Kang and Imbens [2016]
Hyunseung Kang and Guido Imbens.
Peer encouragement designs in causal inference with partial
interference and identification of local average network effects.
arXiv preprint arXiv:1609.04464, 2016.
Li et al. [2018]
Xinran Li, Peng Ding, Qian Lin, Dawei Yang, and Jun S Liu.
Randomization inference for peer effects.
Journal of the American Statistical Association, (just-accepted):1–36, 2018.
Liu and Hudgens [2014]
Lan Liu and Michael G Hudgens.
Large sample randomization inference of causal effects in the
presence of interference.
Journal of the American Statistical Association, 109(505):288–301, 2014.
Liu et al. [2016]
Lan Liu, Michael G Hudgens, and Sylvia Becker-Dreps.
On inverse probability-weighted estimators in the presence of
interference.
Biometrika, 103(4):829–842, 2016.
Loh et al. [2018]
Wen Wei Loh, Michael G Hudgens, John D Clemens, Mohammad Ali, and Michael E
Emch.
Randomization inference with general interference and censoring.
arXiv preprint arXiv:1803.02302, 2018.
Manski [2013]
Charles F Manski.
Identification of treatment response with social interactions.
The Econometrics Journal, 16(1):S1–S23,
2013.
Neyman [1923]
Jerzy Neyman.
On the application of probability theory to agricultural experiments.
essay on principles. section 9.
Statistical Science, 5(4):465–472, 1923.
Trans. Dorota M. Dabrowska and Terence P. Speed.
Ogburn et al. [2014]
Elizabeth L Ogburn, Tyler J VanderWeele, et al.
Causal diagrams for interference.
Statistical Science, 29(4):559–578, 2014.
Ogburn et al. [2017]
Elizabeth L Ogburn, Tyler J VanderWeele, et al.
Vaccines, contagion, and social networks.
The Annals of Applied Statistics, 11(2):919–948, 2017.
Rigdon and Hudgens [2015]
Joseph Rigdon and Michael G Hudgens.
Exact confidence intervals in the presence of interference.
Statistics & Probability Letters, 105:130–135, 2015.
Rubin [1974]
Donald B Rubin.
Estimating causal effects of treatments in randomized and
nonrandomized studies.
Journal of Educational Psychology, 66(5):688, 1974.
Rubin [1980]
Donald B Rubin.
Randomization analysis of experimental data: The fisher randomization
test comment.
Journal of the American Statistical Association, 75(371):591–593, 1980.
Rubin [1986]
Donald B Rubin.
Comment: Which ifs have causal answers.
Journal of the American Statistical Association, 81(396):961–962, 1986.
Rubin [1990]
Donald B Rubin.
[on the application of probability theory to agricultural
experiments. essay on principles. section 9.] comment: Neyman (1923) and
causal inference in experiments and observational studies.
Statistical Science, 5(4):472–480, 1990.
Särndal et al. [2003]
Carl-Erik Särndal, Bengt Swensson, and Jan Wretman.
Model assisted survey sampling.
Springer Science & Business Media, 2003.
Sävje et al. [2017]
Fredrik Sävje, Peter M Aronow, and Michael G Hudgens.
Average treatment effects in the presence of unknown interference.
arXiv preprint arXiv:1711.06399, 2017.
Sobel [2006]
Michael E Sobel.
What do randomized studies of housing mobility demonstrate? causal
inference in the face of interference.
Journal of the American Statistical Association, 101(476):1398–1407, 2006.
Sussman and Airoldi [2017]
Daniel L Sussman and Edoardo M Airoldi.
Elements of estimation theory for causal effects in the presence of
network interference.
arXiv preprint arXiv:1702.03578, 2017.
Tchetgen and VanderWeele [2012]
Eric J Tchetgen Tchetgen and Tyler J VanderWeele.
On causal inference in the presence of interference.
Statistical methods in medical research, 21(1):55–75, 2012.
Toulis and Kao [2013]
Panos Toulis and Edward Kao.
Estimation of causal peer influence effects.
Proceedings of the 30th International Conference on Machine Learning,
2013.
Ugander et al. [2013]
Johan Ugander, Brian Karrer, Lars Backstrom, and Jon Kleinberg.
Graph cluster randomization: Network exposure to multiple universes.
In Proceedings of the 19th ACM SIGKDD international conference
on Knowledge discovery and data mining, pages 329–337. ACM, 2013.
Appendix A Numerical results
In this section, we carry out several simulation studies to illustrate the theoretical claims. In the first set of experiments, we study the bias of the difference-in-means estimators for a simple model. In the second set of experiments, we evaluate various estimation strategies for estimating DTE and TTE.
A.1 Bias of the naive Estimator
In this section, we illustrate the bias of the difference-in-means estimator for estimating the direct effect as a function of the interference. We consider the completely randomized design. The potential outcomes are modeled using the additive binary exposure model (14). We use an Erdős-Rényi model to generate the interference graph on $n=100$ nodes, with probability $p=0.05$ of an edge between any two nodes. The bias is computed using the results in Proposition 4.5.
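A Monte Carlo version of this bias computation can be sketched as follows: it generates an Erdős-Rényi interference graph and estimates the bias of the naive difference-in-means estimator by repeated randomization, rather than via the exact expression of Proposition 4.5. The function name and the homogeneous-effects parameterization are our own illustrative assumptions.

```python
import numpy as np

def naive_bias_mc(n, p, n_t, gamma, n_sims, rng):
    """Monte Carlo estimate of the bias of the naive difference-in-means
    estimator of the direct effect under an additive binary exposure
    model Y_i(z, e) = alpha_i + beta*z + gamma*e, a CRD with n_t treated
    units, and an Erdos-Renyi interference graph G(n, p)."""
    alpha = rng.normal(1.0, 0.1, size=n)
    beta = 1.0                                  # true direct effect
    adj = np.triu(rng.random((n, n)) < p, 1)
    adj = (adj | adj.T).astype(int)             # symmetric 0/1 adjacency
    est = np.empty(n_sims)
    for s in range(n_sims):
        z = np.zeros(n, dtype=int)
        z[rng.choice(n, size=n_t, replace=False)] = 1
        e = (adj @ z > 0).astype(int)           # binary exposure
        Y = alpha + beta * z + gamma * e        # observed outcomes
        est[s] = Y[z == 1].mean() - Y[z == 0].mean()
    return est.mean() - beta
```

Setting `gamma = 0` removes the interference effect, and the estimated bias is then zero up to Monte Carlo error, consistent with the discussion below.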
Figure 1 shows the bias of the difference-in-means estimator of the $DTE$ for a completely randomized design. The results show that the magnitude of the bias increases with the interference effect; the bias is negative when the interference is positive, and vice versa. The bias also decreases as the number of treated units increases. In fact, Proposition 4.5 reveals that the bias is exactly $0$ whenever $n_{c}<\min_{i}d_{i}$: in that case the propensity scores reduce to the weights of a completely randomized design (see Theorem 5.4), and the bias of the naive estimator vanishes. Note that this holds only in the additive model, i.e., when there is no interaction between the interference effect and the treatment of unit $i$ ($C_{i}(e)=0$).
A.2 Estimation of Direct and Total Effects
In this section, we perform a simulation study for estimating the direct effect $ATE_{1}$ and the total effect $ATE_{2}$. Six factors play an important role in the simulation study: the exposure model, the network model, the potential outcomes model, the estimand, the design, and the estimator. The possible settings of these factors are listed below:
1.
Exposure model - Binary Exposure, Symmetric Exposure
2.
Network model - Erdos Renyi, Barabasi Albert, Small World Networks
3.
Potential Outcomes model - Linear, Correlated.
4.
Estimand - Direct Effect and Total Effect
5.
Design - CRD, Bernoulli, Independent Set Design, Cluster Randomized
6.
Estimator - Naive, Difference of Means, Horvitz-Thompson, Ratio, GREG
Including every possible combination of the factors in the simulation would lead to a very large number of experiments. Moreover, not every combination of the factors is feasible. For example, the Independent Set Design can be used only to estimate the direct effect; similarly, the cluster randomized design is a good design only for the total effect. To reduce the total number of experiments, we make some design choices.
We focus only on the binary exposure model. We choose $n=200$ and simulate the interference graph from three different models: an Erdős-Rényi model with $p=0.01$, a Barabási-Albert model with minimum degree $2$ and attractiveness parameter $\rho=0.1$, and a small-world network with neighborhood size $1$. These parameter choices lead to three different kinds of degree distributions: Erdős-Rényi graphs are low-degree graphs (e.g., $d_{\min}=0,d_{\max}=8,d_{med}=2$), Barabási-Albert graphs show power-law behavior ($d_{\min}=2,d_{\max}=16,d_{med}=2$), whereas the small-world network produces almost regular graphs ($d_{\min}=1,d_{\max}=3,d_{med}=2$).
For the binary exposure model, the potential outcome of each unit $i$ can be parameterized by four parameters:
$$Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+\gamma_{i}e_{i}+\delta_{i}e_{i}z_{i}$$
We consider two different models for the potential outcomes. In the uncorrelated model, the parameters $\alpha_{i}$, $\beta_{i}$, $\gamma_{i}$, and $\delta_{i}$ are generated from independent distributions:
$$\displaystyle\alpha_{i}\sim N(\mu=1,\sigma=0.1),\quad\beta_{i}\sim Unif(0,1),\quad\gamma_{i}\sim Unif(0,1),\quad\delta_{i}\sim N(2,0.1)$$
In the correlated model, the parameters are generated by specifying a conditional distribution in terms of two covariates $x$ and $y$ (which can be interpreted as age and gender, respectively). More specifically, the correlated potential outcomes model is given by
$$\displaystyle x_{i}\sim\text{Log Normal}(\log\mu=3,\log\sigma=0.5)$$
$$\displaystyle y_{i}\sim\text{Bernoulli}(p=0.4)$$
$$\displaystyle\alpha_{i}=1+15\log(x_{i})-0.5y_{i}+\text{Normal}(\mu=0,\sigma=1+y_{i}|\log(x_{i})|)$$
$$\displaystyle\beta_{i}=-2-0.8x_{i}+0.8y_{i}+\text{Normal}(\mu=0,\sigma=2)$$
$$\displaystyle\gamma_{i}=3+4\log(x_{i})+\text{Normal}(\mu=0,\sigma=0.1|\alpha_{i}|)$$
$$\displaystyle\delta_{i}=2\log(x_{i})+\text{Gamma}(2,2)$$
We considered four designs, and for each design five estimators: the naive estimator, which takes the difference between treated and untreated units; the DofM estimator, which compares the averages of the relevant units; the HT estimator; the ratio estimator; and the GREG estimator. For the GREG estimator, we fit a linear regression on the treatment status $z_{i}$ and the exposure status $e_{i}$ of each unit, along with the covariates, if any.
We present results only for the CRD, the cluster randomized design, and the independent set design. The results for the Bernoulli design are very similar to those for the CRD. In particular, if $p$ is the probability of assigning a unit to treatment, then by letting $n_{t}=np$ the CRD can approximate the Bernoulli design when $n$ is large: one can show that the exposure probabilities of the two designs are then very close, since the Binomial distribution concentrates around its mean.
Figures 2 and 3 show the bias, variance, and mean squared error for estimating the direct effect and the total effect, respectively. The x-axis shows the number of treated units for the CRD, along with the independent set design for the direct effect (Figure 2) and the cluster randomized design for the total effect (Figure 3). The findings of the simulation study can be summarized as follows:
1.
Designs that are optimal (minimize the mean squared error) for estimating the total effect are sub-optimal (in fact far from optimal) for estimating the direct effect. In all the experiments, the independent set design was optimal for the direct effect, whereas the cluster randomized design was optimal for the total effect. Similarly, for the CRD, a smaller $n_{t}$ is better when estimating the direct effect, whereas a larger $n_{t}$ is better when estimating the total effect. These results are in line with intuition: the direct effect is a function of $Y_{i}(1,0)$ whereas the total effect is a function of $Y_{i}(1,1)$, so the exposure conditions relevant for estimating the direct effect are nuisance conditions for estimating the total effect, and vice versa. In the CRD, when $n_{t}$ is small, more units are allocated to the condition $(Z_{i}=1,E_{i}=0)$ than to $(Z_{i}=1,E_{i}=1)$; when $n_{t}$ is large, the opposite holds.
2.
The HT estimator is unbiased in theory but can be very unstable in practice, as demonstrated by the large Monte Carlo standard errors of the bias estimates. The HT estimator has the largest variance regardless of the design; hence we do not recommend it.
3.
The difference-of-means estimators - naive and DofM - are biased in theory, but surprisingly they can perform quite well in practice. The theory developed in Section 4 offers an explanation for this behavior. Recall that the naive estimator ignores the network interference and compares the average outcomes of treated and control units, whereas the DofM estimator compares the average outcomes of the relevant units. The naive estimator has two sources of bias: irrelevant potential outcomes and incorrect weights. Its bias is small whenever the design reduces the number of irrelevant potential outcomes (e.g., the independent set design, or a CRD with a small $n_{t}$). The difference-of-means estimator that uses the relevant potential outcomes has only one source of bias, incorrect weights, and for large $n$ and sparse graphs this source of bias also goes to $0$. Moreover, the difference-of-means estimator has smaller variance; even though it is biased, it beats the HT estimator in terms of mean squared error.
4.
The Ratio and GREG estimators behave similarly. They are both biased in theory, but in practice the bias is very small. Moreover, their variance is much smaller than the HT estimator. In some cases, the GREG estimator has higher variance than the ratio estimator. This happens when the exposure probabilities are very small which can cause the predictions of the regression model to be unstable.
5.
The exposure probabilities, and hence the bias and variance of the estimators, depend on the degrees of the network. The simulations suggest that the choice of the design parameters (e.g., the type of clustering in the cluster randomized design) depends on the degrees of the graph. Indeed, as the theoretical results show, the existence of unbiased estimators is tied to the minimum degree, and the variance is a function of the exposure probabilities and hence of the degrees.
Appendix B Variance of Estimators
The problem of estimating the variance of estimators of causal effects is central to inference. However, given the number of estimands, estimators, and designs, it is non-trivial to identify good variance estimators. In this section, we illustrate the complexity of variance estimation under interference by deriving formulas for the population variance of some simple estimators under different models. In particular, we consider the variance of the difference-in-means estimator under the linear model and the binary exposure model, and the variance of the Horvitz-Thompson estimator.
B.1 Sources of Variation of the Horvitz-Thompson Estimator
Let $\pi_{ij}((z_{1},e_{1}),(z_{0},e_{0}))$ denote the joint exposure probability of unit $i$ being in condition $(z_{1},e_{1})$ and unit $j$ being in condition $(z_{0},e_{0})$. The variance of the HT estimator can be calculated by using the standard formula in the survey sampling literature.
Theorem B.1 (Variance of Horvitz-Thompson).
The variance of the Horvitz-Thompson Estimator is given by
$$\displaystyle n^{2}\,Var_{HT}=\sum_{i}\frac{\pi_{i}(z_{1},e_{1})\left(1-\pi_{i}(z_{1},e_{1})\right)}{\pi^{2}_{i}(z_{1},e_{1})}Y_{i}^{2}(z_{1},e_{1})+\underset{i\neq j}{\sum\sum}\frac{\pi_{ij}(z_{1},e_{1})-\pi_{i}(z_{1},e_{1})\pi_{j}(z_{1},e_{1})}{\pi_{i}(z_{1},e_{1})\pi_{j}(z_{1},e_{1})}\,Y_{i}(z_{1},e_{1})Y_{j}(z_{1},e_{1})$$
$$\displaystyle+\sum_{i}\frac{\pi_{i}(z_{0},e_{0})\left(1-\pi_{i}(z_{0},e_{0})\right)}{\pi^{2}_{i}(z_{0},e_{0})}Y_{i}^{2}(z_{0},e_{0})+\underset{i\neq j}{\sum\sum}\frac{\pi_{ij}(z_{0},e_{0})-\pi_{i}(z_{0},e_{0})\pi_{j}(z_{0},e_{0})}{\pi_{i}(z_{0},e_{0})\pi_{j}(z_{0},e_{0})}\,Y_{i}(z_{0},e_{0})Y_{j}(z_{0},e_{0})$$
$$\displaystyle-2\left(\underset{i\neq j}{\sum\sum}Y_{i}(z_{1},e_{1})Y_{j}(z_{0},e_{0})\,\frac{\pi_{ij}((z_{1},e_{1}),(z_{0},e_{0}))-\pi_{i}(z_{1},e_{1})\pi_{j}(z_{0},e_{0})}{\pi_{i}(z_{1},e_{1})\pi_{j}(z_{0},e_{0})}-\sum_{i}Y_{i}(z_{1},e_{1})Y_{i}(z_{0},e_{0})\right)$$
B.2 Sources of Variation in estimating the direct effect using difference of means estimator
Here the calculations quickly get out of hand, so we focus on the CRD design and a linear additive model under the symmetric and binary exposure conditions.
B.2.1 Symmetric Additive linear model
Proposition B.1 (Variance of naive estimator under linear model).
Let $d_{i}$ be the degree of node $i$, and $m$ be the number of edges in the network.
$$\displaystyle Var(\hat{\beta}_{naive})=\sigma^{2}\left(\frac{1}{n_{t}}+\frac{1}{n_{c}}\right)+\gamma^{2}\left(c_{1}m+c_{2}m^{2}+c_{3}\sum_{i}d_{i}^{2}+c_{4}\sum_{i\neq j}d_{i}d_{j}\right)$$
where
$$\displaystyle c_{1}=\frac{4n}{(n-1)(n-2)(n-3)}\left(1-\frac{1}{n_{t}}\right)\left(1-\frac{1}{n_{c}}\right)$$
$$\displaystyle c_{2}=\frac{8(n_{t}-1)(6n_{t}-3n+3n^{2}-5nn_{t})}{n(n-1)^{2}(n-2)(n-3)n_{t}n_{c}}$$
$$\displaystyle c_{3}=\frac{4n(n_{t}-1)(n_{t}-2)}{n_{t}n_{c}(n-1)(n-2)(n-3)}+\frac{n_{t}}{n_{c}n^{2}}-\frac{4(n_{t}-1)}{n_{c}(n-1)(n-2)}$$
$$\displaystyle c_{4}=-\frac{n_{t}}{n_{c}n^{2}(n-1)}$$
If $n_{c}=n_{t}=\frac{n}{2}$, then
$$Var(\hat{\tau})=\frac{4\sigma^{2}}{n}+\gamma^{2}\,O\left(\frac{\sum_{i}d_{i}^{2}}{n^{2}}+\frac{m^{2}}{n^{2}}+\frac{m}{n^{2}}-\frac{\sum_{ij}d_{i}d_{j}}{n^{3}}\right)$$
B.2.2 Binary Additive exposure model
Proposition B.2 (Variance of naive estimator under two by two model).
Let $\rho_{i}=P(z_{i}=1,e_{i}=1)$, $\pi_{i}=P(e_{i}=1)$, $\rho_{ij}=P(z_{i}=1,e_{i}=1,z_{j}=1,e_{j}=1)$, and $\pi_{ij}=P(e_{i}=1,e_{j}=1)$. Under equation (14), assuming $\beta_{i}=\beta$, we have, for a CRD,
$$\displaystyle Var\left[\hat{\tau}\right]=\frac{n^{2}}{n_{t}^{2}n_{c}^{2}}\left[\sum_{i}\alpha_{i}^{2}\frac{n_{t}n_{c}}{n}+\sum_{i\neq j}\alpha_{i}\alpha_{j}\frac{n_{t}}{n^{2}}\right]$$
$$\displaystyle+\frac{n^{2}}{n_{t}^{2}n_{c}^{2}}\left[\sum_{i}\gamma_{i}^{2}\rho_{i}(1-\rho_{i})+\sum_{i\neq j}\gamma_{i}\gamma_{j}(\rho_{ij}-\rho_{i}\rho_{j})\right]$$
$$\displaystyle+\frac{1}{n^{2}}\left[\sum_{i}\gamma_{i}^{2}\pi_{i}(1-\pi_{i})+\sum_{i\neq j}\gamma_{i}\gamma_{j}(\pi_{ij}-\pi_{i}\pi_{j})\right]$$
$$\displaystyle+\frac{n^{2}}{n_{t}^{2}n_{c}^{2}}\left[\sum_{i}\alpha_{i}\gamma_{i}\rho_{i}\frac{n_{c}}{n}+\sum_{i\neq j}\alpha_{i}\gamma_{j}\frac{n_{t}^{2}}{n^{2}}\left(\mathbb{E}\{e_{j}|z_{j}=z_{i}=1\}-\mathbb{E}\{e_{j}|z_{j}=1\}\right)\right]$$
$$\displaystyle-\frac{1}{n_{t}n_{c}}\left[\sum_{i}\gamma_{i}^{2}\rho_{i}(1-\pi_{i})+\sum_{i\neq j}\gamma_{i}\gamma_{j}\left(\mathbb{E}\{e_{j}e_{i}z_{i}\}-\rho_{i}\pi_{j}\right)\right]$$
$$\displaystyle-\frac{1}{n_{t}n_{c}}\left[\sum_{i}\alpha_{i}\gamma_{i}\left(\rho_{i}-\frac{n_{t}}{n}\pi_{i}\right)+\sum_{i\neq j}\alpha_{i}\gamma_{j}\left(\mathbb{E}\{z_{i}e_{j}\}-\frac{n_{t}}{n}\pi_{j}\right)\right]$$
Appendix C Unbiasedness of difference-in-means estimators for estimating marginal estimands
When the treatment assignment strategy $p(\textbf{Z})$ is equal to the policy $\psi$ that defines the estimand, the difference in means estimator is unbiased for estimating $\theta(\psi)$.
Proposition C.1.
Let $\psi$ be a restricted Bernoulli or a CRD policy and let the treatment assignment mechanism $p(\textbf{Z})$ be equal to the policy $\psi$.
Then $E[\hat{\beta}_{naive}]=\theta(\psi)$.
Appendix D Bias of the difference in means estimator for $TTE$
Proposition D.1.
Consider the parametrized Potential outcomes given in equation 9:
$$\displaystyle Y_{i}(z_{i},e_{i})=A_{i}(z_{i})+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
For any design $p(\textbf{Z}=\textbf{z})$, the bias of the difference in means estimator $\hat{\beta}$ (equation 20a) for estimating $\beta_{2}$ is:
$$\displaystyle b_{2}$$
$$\displaystyle=E[\hat{\beta}]-\beta_{2}$$
$$\displaystyle=\sum_{i}\left(A_{i}(1)\left(\alpha_{i}(1)-\frac{1}{n}\right)-A_{%
i}(0)\left(\alpha_{i}(0)+\frac{1}{n}\right)\right)+\sum_{i}B_{i}(1)\left(%
\alpha_{i}(1,1)-\alpha_{i}(1,0)-\frac{1}{n}\right)$$
$$\displaystyle+\sum_{i}C_{i}(1)\left(\alpha_{i}(1,1)-\frac{1}{n}\right)$$
$$\displaystyle+\sum_{i}\sum_{e_{i}\neq\{0,1\}}B_{i}(e_{i})\left(\alpha_{i}(1,e_%
{i})-\alpha_{i}(0,e_{i})\right)$$
$$\displaystyle+\sum_{i}\sum_{e_{i}\neq\{0,1\}}C_{i}(e_{i})\alpha_{i}(1,e_{i})$$
where,
$$\displaystyle\alpha_{i}(z_{i},e_{i})=E\left[\frac{I(Z_{i}=z_{i},E_{i}=e_{i})}{\sum_{i}{I(Z_{i}=z_{i})}}\right]\text{ and }\alpha_{i}(z_{i})=E\left[\frac{I(Z_{i}=z_{i})}{\sum_{i}I(Z_{i}=z_{i})}\right].$$
Appendix E Bias of the naive estimator for the direct effect under Cluster Randomized Design
Even without interference, the difference-of-means estimator is biased under cluster randomization. A simple reason is that the number of nodes assigned to treatment and control is random rather than fixed.
Consider the following simple linear model of potential outcomes:
$$\displaystyle Y_{i}=\alpha_{i}+\beta_{i}z_{i}+\gamma\left(\sum_{j}{g_{ij}z_{j}}\right)$$
(29)
For estimating the bias in the clustered randomized trial, let $c_{k}$ be the covariance between $Z_{k}$, the treatment status of cluster $k$ and $\frac{Z_{k}}{n_{t}}$. Similarly, let $d_{k}$ be the covariance between $1-Z_{k}$ and $\frac{1-Z_{k}}{n_{c}}$.
Proposition E.1.
Consider the simple linear model of the potential outcomes model specified by equation 29. Under a cluster randomized design, we have,
$$\displaystyle E[\hat{\beta}_{naive}]-\beta_{DE}$$
$$\displaystyle=-\gamma-\frac{K}{K_{t}}\sum_{k}\bar{\beta}_{k}n_{k}^{2}c_{k}+\frac{K}{K_{t}}\sum_{k}\bar{\alpha}_{k}n_{k}\left(d_{k}-c_{k}\right)$$
where $\bar{\beta}_{k}$ is the average of $\beta_{i}$ for all nodes in cluster $k$. Similarly, $\bar{\alpha}_{k}$ is the average of $\alpha_{i}$ in cluster $k$.
Appendix F Proofs
F.1 Proof of Proposition 3.1
Proof.
Since there are no further assumptions on the potential outcomes, only one entry of the table of science is observable for each unit, due to the fundamental problem of causal inference. As causal effects are defined as contrasts between two distinct treatment assignments, they are unidentifiable when only one entry of Table 2 is observed.
∎
F.2 Proof of Proposition 3.2
Proof.
$$\displaystyle Y_{i}(\textbf{z})$$
$$\displaystyle=Y_{i}(z_{i},\textbf{z}_{N_{i}})$$
$$\displaystyle=Y_{i}(z_{i},f(\textbf{z}_{N_{i}}))$$
$$\displaystyle=Y_{i}(z_{i},e_{i})$$
$$\displaystyle=A_{i}(z_{i})+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
Let us show that this is a linear map with full rank. For each unit $i$, there are a total of $2K_{i}$ distinct potential outcomes. In the linear parametrization, there are also a total of $2+K_{i}-1+K_{i}-1=2K_{i}$ parameters. These are: $\{A_{i}(0),A_{i}(1)\},\{B_{i}(1),\ldots,B_{i}(K_{i}-1)\},\{C_{i}(1),\ldots,C_{i}(K_{i}-1)\}$. For each $i$, the inverse map is given by:
$$\displaystyle A_{i}(z_{i})$$
$$\displaystyle=Y_{i}(z_{i},0)$$
$$\displaystyle B_{i}(e_{i})$$
$$\displaystyle=Y_{i}(0,e_{i})-Y_{i}(0,0)$$
$$\displaystyle C_{i}(e_{i})$$
$$\displaystyle=Y_{i}(1,e_{i})-Y_{i}(1,0)-Y_{i}(0,e_{i})+Y_{i}(0,0)$$
Hence the map has full rank.
Also note that $B_{i}(0)=Y_{i}(0,0)-Y_{i}(0,0)=0$ and
$C_{i}(0)=Y_{i}(1,0)-Y_{i}(1,0)-Y_{i}(0,0)+Y_{i}(0,0)=0$.
∎
F.3 Proof of Proposition 3.3
Proof.
Note that $\bar{Y_{i}}(z_{i};\phi)=\sum_{e_{i}}Y_{i}(z_{i},e_{i})\phi(E_{i}=e_{i}|Z_{i}=z_{i})=Y_{i}(z_{i})$.
Hence $\theta(\phi)=\frac{1}{n}\sum_{i}\left(Y_{i}(1)-Y_{i}(0)\right)$.
The other results also follow from the definition.
∎
F.4 Proof of Proposition 3.4
From Proposition 3.2, we have the following representation of the potential outcomes:
$$Y_{i}(z_{i},e_{i})=\alpha_{i}+\beta_{i}z_{i}+B_{i}(e_{i})+z_{i}C_{i}(e_{i})$$
where $B_{i}(0)=0$ and $C_{i}(0)=0$.
By the definition of the direct treatment effect, we have
$$\displaystyle\beta_{DE}$$
$$\displaystyle=\frac{1}{n}\left(\sum_{i=1}^{n}Y_{i}(1,0)-Y_{i}(0,0)\right)$$
$$\displaystyle=\frac{1}{n}\left(\sum_{i=1}^{n}\alpha_{i}+\beta_{i}-\alpha_{i}\right)$$
$$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\beta_{i}=\beta$$
Similarly, for the total treatment effect we have,
$$\displaystyle\beta_{TE}$$
$$\displaystyle=\frac{1}{n}\left(\sum_{i=1}^{n}Y_{i}(1,1)-Y_{i}(0,0)\right)$$
$$\displaystyle=\frac{1}{n}\left(\sum_{i=1}^{n}\alpha_{i}+\beta_{i}+B_{i}(1)+C_{i}(1)-\alpha_{i}\right)$$
The results follow by substituting $B_{i}(1)=\gamma$ and $C_{i}(1)=0$.
F.5 Proof of Proposition 3.5
The result follows from the definition of the policies.
F.6 Proof of Theorem 3.1
Proof.
Let $\hat{\theta}$ be an unbiased estimator of $\theta$ under the design $p(\textbf{Z})$.
Assume to the contrary that there exists a unit $i$ and a relevant potential outcome $j$ such that $\pi_{i}(z_{j},e_{j})$ is $0$. This implies that unit $i$’s potential outcome $Y_{i}(z_{j},e_{j})$ is never observed under design $p(\textbf{Z})$. Since $\hat{\theta}$ is a function of the observed potential outcomes, and there are no structural assumptions on the potential outcomes, the expectation of $\hat{\theta}$ is free from $Y_{i}(z_{j},e_{j})$. However, since $Y_{i}(z_{j},e_{j})$ is a relevant potential outcome, it appears in the definition of $\theta$. Hence $\hat{\theta}$ cannot be unbiased.
Similarly, assume that there exists a unit $i$ and a relevant potential outcome $j$ such that $\pi_{i}(z_{j},e_{j})=1$. This implies that under design $p(\textbf{Z})$ we only observe $Y_{i}(z_{j},e_{j})$ for unit $i$. Since causal effects are defined as contrasts between two distinct potential outcomes, there exists a potential outcome $Y_{i}(z_{j^{\prime}},e_{j^{\prime}})$ that appears in the definition of $\theta$ but $\pi_{i}(z_{j^{\prime}},e_{j^{\prime}})=0$, which brings us back to the first case.
∎
F.7 Proof of Proposition C.1
Proof.
Note that $\frac{Y_{i}^{obs}Z_{i}}{\sum_{i}{Z_{i}}}$ is an unbiased estimator of $\frac{1}{N}\bar{Y}_{i}(1;\psi)$:
$$\displaystyle\mathbb{E}\left[\frac{Y_{i}^{obs}Z_{i}}{\sum_{i}Z_{i}}\right]$$
$$\displaystyle=\mathbb{E}\left[\frac{Y_{i}(Z_{i},E_{i})Z_{i}}{\sum_{i}Z_{i}}\right]$$
$$\displaystyle=\mathbb{E}_{K}\left[\frac{1}{K}\mathbb{E}_{\psi}\left[Y_{i}(Z_{i},E_{i})Z_{i}|\sum_{i}Z_{i}=K\right]\right]$$
$$\displaystyle=\mathbb{E}_{K}\left[\frac{1}{K}\mathbb{E}_{\psi}\left[\sum_{z_{i},e_{i}}Y_{i}(z_{i},e_{i})Z_{i}I(Z_{i}=1,E_{i}=e_{i})\right]\right]$$
$$\displaystyle=\mathbb{E}_{K}\left[\frac{1}{K}\frac{K}{N}\sum_{e_{i}}Y_{i}(1,e_{i})\mathbb{P}\left(E_{i}=e_{i}|Z_{i}=1\right)\right]$$
$$\displaystyle=\frac{1}{N}\sum_{e_{i}}Y_{i}(1,e_{i})\psi(E_{i}=e_{i}|Z_{i}=1)$$
$$\displaystyle=\frac{1}{N}\bar{Y}_{i}(1;\psi)$$
Similarly, one can show that $\frac{Y_{i}^{obs}(1-Z_{i})}{\sum_{i}(1-Z_{i})}$ is an unbiased estimator of $\frac{1}{N}\bar{Y_{i}}(0;\psi)$ which completes the proof.
∎
F.8 Proof of Proposition 4.1
Proof.
$$\displaystyle\hat{\beta}$$
$$\displaystyle=\frac{\sum_{i}Y_{i}^{obs}Z_{i}}{\sum_{i}{Z_{i}}}-\frac{\sum_{i}Y_{i}^{obs}(1-Z_{i})}{\sum_{i}{(1-Z_{i})}}$$
$$\displaystyle=\frac{\sum_{i}Y_{i}(1,E_{i})I(Z_{i}=1)}{\sum_{i}{I(Z_{i}=1)}}-\frac{\sum_{i}Y_{i}(0,E_{i})I(Z_{i}=0)}{\sum_{i}{I(Z_{i}=0)}}$$
$$\displaystyle=\frac{\sum_{i}\sum_{e_{i}}Y_{i}(1,e_{i})I(Z_{i}=1,E_{i}=e_{i})}{\sum_{i}{I(Z_{i}=1)}}-\frac{\sum_{i}\sum_{e_{i}}Y_{i}(0,e_{i})I(Z_{i}=0,E_{i}=e_{i})}{\sum_{i}{I(Z_{i}=0)}}$$
$$\displaystyle=\sum_{i}\sum_{e_{i}}\left[Y_{i}(1,e_{i})\alpha_{i}(1,e_{i})-Y_{i}(0,e_{i})\alpha_{i}(0,e_{i})\right]$$
$$\displaystyle=\sum_{i}\left[Y_{i}(1,0)\alpha_{i}(1,0)-Y_{i}(0,0)\alpha_{i}(0,0)\right]+\sum_{i}\sum_{e_{i}\neq 0}\left[Y_{i}(1,e_{i})\alpha_{i}(1,e_{i})-Y_{i}(0,e_{i})\alpha_{i}(0,e_{i})\right]$$
where,
$$\alpha_{i}(z_{i},e_{i})=\frac{I(Z_{i}=z_{i},E_{i}=e_{i})}{\sum_{i}{I(Z_{i}=z_{i})}}$$
Now, the bias is
$$\displaystyle b$$
$$\displaystyle=E[\hat{\beta}]-\beta_{DE}$$
$$\displaystyle=\sum_{i}\left(Y_{i}(1,0)E\left[\alpha_{i}(1,0)-\frac{1}{n}\right]-Y_{i}(0,0)E\left[\alpha_{i}(0,0)+\frac{1}{n}\right]\right)$$
$$\displaystyle+\sum_{i}\sum_{e_{i}\neq 0}\left(Y_{i}(1,e_{i})E[\alpha_{i}(1,e_{i})]-Y_{i}(0,e_{i})E[\alpha_{i}(0,e_{i})]\right)$$
∎
F.9 Proof of Proposition 4.2
Proof.
Note that from the non-parametric decomposition of the Potential outcomes given in Proposition 3.2, we have,
$$\displaystyle Y_{i}^{obs}I(Z_{i}=1,E_{i}=0)$$
$$\displaystyle=A_{i}(1)I(Z_{i}=1,E_{i}=0)$$
$$\displaystyle Y_{i}^{obs}I(Z_{i}=0,E_{i}=0)$$
$$\displaystyle=A_{i}(0)I(Z_{i}=0,E_{i}=0)\mbox{ and }$$
$$\displaystyle Y_{i}^{obs}I(Z_{i}=1,E_{i}=1)$$
$$\displaystyle=(A_{i}(1)+B_{i}(1)+C_{i}(1))I(Z_{i}=1,E_{i}=1)$$
Substituting these in the definition of $\beta_{1}$ and $\beta_{2}$ gives the result.
∎
F.10 Proof of Theorem 4.1
The theorem follows from the results of Theorem 5.4.
F.11 Proof of Proposition 4.3
Proof.
Note that
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}z_{i}}}{n_{t}}=\frac{\sum_{i}\alpha_{i}z_{i}}{n_{t}}+\frac{1}{n_{t}}\beta\sum_{i}{z_{i}^{2}}+\frac{1}{n_{t}}\gamma\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}$$
Similarly,
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}(1-z_{i})}}{n_{c}}=\frac{\sum_{i}\alpha_{i}(1-z_{i})}{n_{c}}+\frac{1}{n_{c}}\beta\sum_{i}{z_{i}(1-z_{i})}+\frac{1}{n_{c}}\gamma\sum_{i}\sum_{j}g_{ij}z_{j}(1-z_{i})$$
Using the facts $z_{i}^{2}=z_{i}$, $\sum_{i}{z_{i}^{2}}=n_{t}$, and $z_{i}(1-z_{i})=0$, we get,
$$\displaystyle\hat{\beta}_{naive}=\beta_{DE}+\gamma\left(\frac{\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}}{n_{t}}-\frac{\sum_{i}\sum_{j}g_{ij}z_{j}(1-z_{i})}{n_{c}}\right)+\frac{\sum_{i}\alpha_{i}z_{i}}{n_{t}}-\frac{\sum_{i}\alpha_{i}(1-z_{i})}{n_{c}}$$
Note the following expectations - $E[z_{i}z_{j}]=P(z_{i}=1,z_{j}=1)=\frac{n_{t}}{n}\frac{n_{t}-1}{n-1}$ and $E[(1-z_{i})z_{j}]=P(z_{i}=0,z_{j}=1)=\frac{n_{t}}{n}\frac{n_{c}}{n-1}$ and that $E[z_{i}]=\frac{n_{t}}{n}$. Using these facts and taking expectations, we get,
$$\displaystyle E[\hat{\beta}_{naive}]=\beta_{DE}-\gamma\frac{2m}{n(n-1)}$$
∎
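The closed-form expectation of Proposition 4.3 is easy to verify by simulation. The sketch below (all parameter values and the random graph are illustrative choices, not from the paper) draws CRD assignments on a symmetric graph and compares the Monte Carlo mean of $\hat{\beta}_{naive}$ with $\beta_{DE}-\gamma\frac{2m}{n(n-1)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_t = 60, 30
beta, gamma = 2.0, 1.5
alpha = rng.normal(size=n)             # fixed unit-level intercepts

# random symmetric adjacency matrix with no self-loops
G = (rng.random((n, n)) < 0.1).astype(float)
G = np.triu(G, 1)
G = G + G.T
m = G.sum() / 2                        # number of edges

reps = 40000
est = np.empty(reps)
for r in range(reps):
    z = np.zeros(n)
    z[rng.choice(n, n_t, replace=False)] = 1      # CRD assignment
    y = alpha + beta * z + gamma * (G @ z)        # linear interference model
    est[r] = y[z == 1].mean() - y[z == 0].mean()

print(est.mean())                                 # Monte Carlo E[beta_naive]
print(beta - gamma * 2 * m / (n * (n - 1)))       # Proposition 4.3
```

The two printed values agree up to Monte Carlo error, confirming the downward bias of order $\gamma\frac{2m}{n(n-1)}$.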
F.12 Proof of Proposition 4.4
Proof.
Note that
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}z_{i}}}{\sum_{i}z_{i}}=\alpha+\beta\frac{\sum_{i}{z_{i}^{2}}}{\sum_{i}z_{i}}+\gamma\sum_{i}\sum_{j}g_{ij}\frac{z_{j}z_{i}}{\sum_{i}{z_{i}}}+\sum_{i}\frac{\epsilon_{i}z_{i}}{\sum_{i}z_{i}}$$
Similarly,
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}(1-z_{i})}}{\sum_{i}(1-z_{i})}=\alpha+\beta\sum_{i}{\frac{z_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}}+\gamma\sum_{i}\sum_{j}g_{ij}\frac{z_{j}(1-z_{i})}{\sum_{i}(1-z_{i})}+\sum_{i}\frac{\epsilon_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}$$
Using the facts that $z_{i}^{2}=z_{i}$, and $z_{i}(1-z_{i})=0$, we get,
$$\displaystyle\hat{\beta}_{naive}=\beta+\gamma\left(\sum_{i}\sum_{j}g_{ij}\left(\frac{z_{j}z_{i}}{\sum_{i}{z_{i}}}-\frac{z_{j}(1-z_{i})}{\sum_{i}{(1-z_{i})}}\right)\right)+\sum_{i}\left(\frac{\epsilon_{i}z_{i}}{\sum_{i}z_{i}}-\frac{\epsilon_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}\right)$$
Note the following expectations: $E[\epsilon_{i}z_{i}]=E[\epsilon_{i}]E[z_{i}]=0$ and, similarly, $E[\epsilon_{i}(1-z_{i})]=0$, as $\epsilon_{i}\perp\!\!\!\perp z_{i}$. Also note that $g_{ij}=0$ when $i=j$, hence we need to only focus on $i\neq j$ for the calculation of the remaining expectations.
Now let $X=\sum_{i}{z_{i}}$ and consider,
$$\displaystyle E\left[\frac{z_{j}z_{i}}{\sum_{i}{z_{i}}}\right]=E\left[\frac{1}{X}E\left[z_{i}z_{j}\mid X\right]\right]=E\left[\frac{1}{X}\frac{X}{n}\frac{X-1}{n-1}\right]=E\left[\frac{X-1}{n(n-1)}\right]=\frac{\frac{np-np^{n}}{1-(1-p)^{n}-p^{n}}-1}{n(n-1)}$$
In the last two equalities, we have used the fact that $X$ is a restricted binomial distribution with probability of success $p$, and $X\in\{1,\ldots,n-1\}$. Similarly, let $Y=\sum(1-z_{i})$ be a restricted binomial with probability of success $1-p$.
$$\displaystyle E\left[\frac{z_{j}(1-z_{i})}{\sum_{i}{(1-z_{i})}}\right]$$
$$\displaystyle=E\left[\frac{1}{Y}E\left[z_{j}(1-z_{i})\mid Y\right]\right]=E\left[\frac{1}{Y}\frac{n-Y}{n}\frac{Y}{n-1}\right]$$
$$\displaystyle=E\left[\frac{n-Y}{n(n-1)}\right]=\frac{n-\frac{n(1-p)-n(1-p)^{n}}{1-(1-p)^{n}-p^{n}}}{n(n-1)}$$
Finally, let $c=\frac{1}{1-(1-p)^{n}-p^{n}}$ and note that,
$$\displaystyle E[X-1]-E[n-Y]=c(np-np^{n})-1-n+c(n(1-p)-n(1-p)^{n})=-1$$
The result follows by plugging these expectations in the expression of $E[\hat{\beta}_{naive}]$ using the fact that $\sum_{i}\sum_{j}g_{ij}=2m$.
∎
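The restricted-binomial moments used above can be checked directly from the pmf. A minimal sketch (with illustrative $n$ and $p$) compares the exact $E[X]$ over the truncated support $\{1,\ldots,n-1\}$ with the closed form $c(np-np^{n})$ used in the proof:

```python
import numpy as np
from math import comb

n, p = 12, 0.3
c = 1 / (1 - (1 - p)**n - p**n)   # normalizing constant of the restricted binomial

# pmf of X = sum_i z_i restricted to {1, ..., n-1}
ks = np.arange(1, n)
pmf = c * np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in ks])

EX = (pmf * ks).sum()
print(EX)                      # exact E[X] under the restricted binomial
print(c * (n * p - n * p**n))  # closed form used in the proof
```

The two values agree to machine precision, since dropping $k=0$ removes nothing from $\sum_{k}k\,\mathbb{P}(X=k)=np$ and dropping $k=n$ removes $np^{n}$.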
F.13 Proof of Proposition E.1
Proof.
Note that, as before, we have
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}z_{i}}}{\sum_{i}z_{i}}=\frac{\sum_{i}\alpha_{i}z_{i}}{\sum_{i}z_{i}}+\frac{\sum_{i}{\beta_{i}z_{i}^{2}}}{\sum_{i}z_{i}}+\gamma\frac{\sum_{i}\sum_{j}z_{j}z_{i}}{\sum_{i}z_{i}}$$
Similarly,
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}(1-z_{i})}}{\sum_{i}(1-z_{i})}=\frac{\sum_{i}\alpha_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}+\frac{\sum_{i}{\beta_{i}z_{i}(1-z_{i})}}{\sum_{i}(1-z_{i})}+\gamma\frac{\sum_{i}\sum_{j}z_{j}(1-z_{i})}{\sum_{i}(1-z_{i})}$$
Using the facts that $z_{i}^{2}=z_{i}$, and $z_{i}(1-z_{i})=0$, we get,
$$\displaystyle\hat{\tau}=\frac{\sum_{i}\beta_{i}z_{i}}{\sum_{i}z_{i}}+\gamma\underset{i\neq j}{\sum\sum}\left(\frac{z_{j}z_{i}}{\sum_{i}{z_{i}}}-\frac{z_{j}(1-z_{i})}{\sum_{i}{(1-z_{i})}}\right)+\sum_{i}\left(\frac{\alpha_{i}z_{i}}{\sum_{i}z_{i}}-\frac{\alpha_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}\right)$$
(30)
However, now we have the case that $\sum_{i}z_{i}=\sum_{k=1}^{K}n_{k}z_{k}$ and $\sum_{i}{1-z_{i}}=\sum_{k=1}^{K}{n_{k}(1-z_{k})}$.
Let $\bar{\beta}_{k}$ be the average of $\beta_{i}$ for all nodes in cluster $k$. Similarly, we define $\bar{\alpha}_{k}$. Thus, we get,
$$\displaystyle\hat{\tau}=\frac{\sum_{k}\bar{\beta}_{k}n_{k}z_{k}}{\sum_{k}n_{k}z_{k}}+\gamma\underset{i\neq j}{\sum\sum}\left(\frac{z_{j}z_{i}}{\sum_{i}{z_{i}}}-\frac{z_{j}(1-z_{i})}{\sum_{i}{(1-z_{i})}}\right)+\sum_{k}\left(\frac{\bar{\alpha}_{k}n_{k}z_{k}}{\sum_{k}n_{k}z_{k}}-\frac{\bar{\alpha}_{k}n_{k}(1-z_{k})}{\sum_{k}n_{k}(1-z_{k})}\right)$$
(31)
Note the following:
$$\displaystyle\underset{i\neq j}{\sum\sum}\left(\frac{z_{i}z_{j}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}\right)=-1$$
This is because,
$$\displaystyle\frac{\sum_{i}\sum_{j}z_{i}z_{j}}{\sum_{i}z_{i}}=\frac{\sum_{j}z_{j}\sum_{i}z_{i}}{\sum_{i}z_{i}}=\sum_{i}z_{i}$$
and,
$$\displaystyle\frac{\sum_{i}\sum_{j}(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}=\frac{\sum_{j}z_{j}\sum_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}=\sum_{i}z_{i}.$$
Hence, we have,
$$\displaystyle 0$$
$$\displaystyle=\sum_{i}\sum_{j}\left(\frac{z_{i}z_{j}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}\right)$$
$$\displaystyle=\underset{i\neq j}{\sum\sum}\left(\frac{z_{i}z_{j}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}\right)+\underset{i=j}{\sum\sum}\left(\frac{z_{i}z_{j}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}\right)$$
$$\displaystyle=\underset{i\neq j}{\sum\sum}\left(\frac{z_{i}z_{j}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}\right)+\sum_{i}\left(\frac{z_{i}^{2}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{i}}{\sum_{i}(1-z_{i})}\right)$$
$$\displaystyle=\underset{i\neq j}{\sum\sum}\left(\frac{z_{i}z_{j}}{\sum_{i}z_{i}}-\frac{(1-z_{i})z_{j}}{\sum_{i}(1-z_{i})}\right)+1$$
Thus, to evaluate the bias, we need to compute the expectations of the following terms:
$$\displaystyle E\left[\frac{n_{k}z_{k}}{\sum_{k}n_{k}z_{k}}\right]\mbox{ and }E\left[\frac{n_{k}(1-z_{k})}{\sum_{k}n_{k}(1-z_{k})}\right]$$
We will use the identity (which follows directly from the definition of covariance)
$$E\left[\frac{U}{V}\right]=\frac{1}{E[V]}\left[E[U]-Cov\left(\frac{U}{V},V\right)\right]$$
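The identity is exact: expanding $Cov(U/V,V)=E[U]-E[U/V]\,E[V]$ and rearranging gives the stated form. A quick numerical sketch (arbitrary positive random variables, chosen purely for illustration) confirms it using population (biased) sample moments:

```python
import numpy as np

rng = np.random.default_rng(2)
# any positive-valued pair (U, V) will do; shifted Poissons keep V away from 0
U = rng.poisson(3, 200000) + 1.0
V = rng.poisson(5, 200000) + 1.0

lhs = (U / V).mean()
# bias=True gives the population covariance mean(xy) - mean(x)mean(y),
# for which the identity holds exactly
rhs = (U.mean() - np.cov(U / V, V, bias=True)[0, 1]) / V.mean()
print(lhs, rhs)   # the two sides agree up to floating-point rounding
```

Note the identity holds for the sample moments themselves, not just in expectation, which is why no Monte Carlo tolerance is needed here.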
Thus, we get,
$$\displaystyle E\left[\frac{n_{k}z_{k}}{\sum_{k}{n_{k}z_{k}}}\right]$$
$$\displaystyle=\frac{1}{\sum_{k}E\left[n_{k}z_{k}\right]}\left[E[n_{k}z_{k}]-Cov\left(\frac{n_{k}z_{k}}{\sum_{k}n_{k}z_{k}},n_{k}z_{k}\right)\right]$$
$$\displaystyle=\frac{K}{nK_{t}}\left[\frac{n_{k}K_{t}}{K}-n_{k}^{2}Cov\left(\frac{z_{k}}{\sum_{k}n_{k}z_{k}},z_{k}\right)\right]$$
$$\displaystyle=\frac{n_{k}}{n}-\frac{n_{k}^{2}K}{K_{t}}Cov\left(\frac{z_{k}}{\sum_{k}n_{k}z_{k}},z_{k}\right)$$
$$\displaystyle=\frac{n_{k}}{n}-\frac{n_{k}^{2}K}{K_{t}}c_{k}$$
Similarly, one can show that
$$\displaystyle E\left[\frac{n_{k}(1-z_{k})}{\sum_{k}{n_{k}(1-z_{k})}}\right]$$
$$\displaystyle=\frac{n_{k}}{n}-\frac{n_{k}^{2}K}{K_{c}}Cov\left(\frac{1-z_{k}}{\sum_{k}n_{k}(1-z_{k})},1-z_{k}\right)$$
$$\displaystyle=\frac{n_{k}}{n}-\frac{n_{k}^{2}K}{K_{c}}d_{k}$$
Thus, we get
$$\displaystyle E[\hat{\tau}]$$
$$\displaystyle=\sum_{k}\bar{\beta}_{k}\left(\frac{n_{k}}{n}-\frac{n_{k}^{2}Kc_{k}}{K_{t}}\right)-\gamma+\sum_{k}\bar{\alpha}_{k}\left(\frac{n_{k}K(d_{k}-c_{k})}{K_{t}}\right)$$
$$\displaystyle=\bar{\beta}-\gamma-\frac{K}{K_{t}}\sum_{k}\bar{\beta}_{k}n_{k}^{2}c_{k}+\frac{K}{K_{t}}\sum_{k}\bar{\alpha}_{k}n_{k}\left(d_{k}-c_{k}\right)$$
∎
F.14 Proof of Propositions 4.5 and 4.6
Proof.
From the definition of $\hat{\beta}_{naive}$ and using the facts $z_{i}^{2}=z_{i}$ and $z_{i}(1-z_{i})=0$, we have,
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}z_{i}}}{\sum_{i}z_{i}}=\frac{\sum_{i}\alpha_{i}z_{i}}{\sum_{i}z_{i}}+\frac{\sum_{i}{\beta_{i}z_{i}}}{\sum_{i}z_{i}}+\frac{\sum_{i}(\gamma_{i}+\theta_{i})e_{i}z_{i}}{\sum_{i}z_{i}}$$
and,
$$\displaystyle\frac{\sum_{i}{y_{i}^{obs}(1-z_{i})}}{\sum_{i}(1-z_{i})}=\frac{\sum_{i}\alpha_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}+\frac{\sum_{i}\gamma_{i}e_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}$$
Under both the CRD and Bernoulli trial, we get (e.g. using the expectations of ratios of sums computed above),
$$\displaystyle E[\hat{\beta}]=\bar{\beta}+\sum_{i}\left((\gamma_{i}+\theta_{i})E\left[\frac{e_{i}z_{i}}{\sum_{i}z_{i}}\right]-\gamma_{i}E\left[\frac{e_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}\right]\right)$$
Under CRD, $\sum_{i}z_{i}=n_{t}$ and $\sum_{i}(1-z_{i})=n_{c}$. Moreover, $E[z_{i}]=\frac{n_{t}}{n}$ and $E[1-z_{i}]=\frac{n_{c}}{n}$. Let $P(z_{i}=1,e_{i}=1)=\pi_{i}(1,1)$ and $P(z_{i}=0,e_{i}=1)=\pi_{i}(0,1)$, then we have,
$$\displaystyle E[\hat{\beta}]$$
$$\displaystyle=\beta_{1}+\sum_{i}\gamma_{i}\left(\frac{\pi_{i}(1,1)}{n_{t}}-\frac{\pi_{i}(0,1)}{n_{c}}\right)+\sum_{i}\theta_{i}\frac{\pi_{i}(1,1)}{n_{t}}$$
$$\displaystyle=\beta_{1}+\frac{1}{n}\sum_{i}\gamma_{i}\left(\frac{\binom{n_{c}-1}{d_{i}}-\binom{n_{c}}{d_{i}}}{\binom{n-1}{d_{i}}}\right)+\sum_{i}\theta_{i}\frac{\pi_{i}(1,1)}{n_{t}}$$
$$\displaystyle=\beta_{1}-\frac{1}{n}\sum_{i}\gamma_{i}\left(\frac{\binom{n_{c}-1}{d_{i}-1}}{\binom{n-1}{d_{i}}}\right)+\frac{1}{n}\sum_{i}\theta_{i}\left(1-\frac{\binom{n_{c}}{d_{i}}}{\binom{n-1}{d_{i}}}\right)$$
Now let us compute the bias for a Bernoulli trial. We need to compute the following expectations:
$$\displaystyle E\left[\frac{e_{i}z_{i}}{\sum_{i}z_{i}}\right]\mbox{ and }E\left[\frac{e_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}\right]$$
$$\displaystyle E\left[\frac{e_{i}z_{i}}{\sum_{i}z_{i}}\right]$$
$$\displaystyle=E\left[E\left[\frac{e_{i}z_{i}}{\sum_{i}z_{i}}\bigg{|}\sum_{i}{z_{i}}=k\right]\right]=E\left[\frac{1}{\sum_{i}z_{i}}P\left(e_{i}=1,z_{i}=1\bigg{|}\sum_{i}z_{i}=k\right)\right]$$
$$\displaystyle=E_{k}\left[\frac{1}{k}\frac{k}{n}\left[1-\frac{n_{c}(n_{c}-1)\ldots(n_{c}-d_{i}+1)}{(n-1)(n-2)\ldots(n-d_{i})}\right]\right],\mbox{ where }n_{c}=n-k$$
$$\displaystyle=\frac{1}{n}-E_{k}\left[\frac{n_{c}^{(d_{i})}}{n^{(d_{i})}(n-d_{i})}\right]=\frac{1}{n}-E_{k}\left[\frac{(n-k)^{(d_{i})}}{n^{(d_{i})}(n-d_{i})}\right]$$
$$\displaystyle=\frac{1}{n}-\frac{n^{(d_{i})}(1-p)^{d_{i}}}{n^{(d_{i})}(n-d_{i})}=\frac{1}{n}-\frac{(1-p)^{d_{i}}}{n-d_{i}}$$
The last equation follows from the easily shown fact that if $X\sim Bin(n,p)$, then $E\left[X^{(r)}\right]=n^{(r)}p^{r}$ (the $r$-th factorial moment), and that $n-k\sim Bin(n,1-p)$.
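The factorial-moment fact can be verified exactly from the binomial pmf. A minimal sketch (with illustrative $n$, $p$, $r$) compares $E[X^{(r)}]$, computed by direct summation, with $n^{(r)}p^{r}$:

```python
from math import comb, prod

def falling(x, r):
    """Falling factorial x^{(r)} = x (x-1) ... (x-r+1)."""
    return prod(x - j for j in range(r))

n, p, r = 10, 0.4, 3
# E[X^{(r)}] by direct summation over the Bin(n, p) pmf
moment = sum(falling(k, r) * comb(n, k) * p**k * (1 - p)**(n - k)
             for k in range(n + 1))
print(moment)                 # factorial moment from the pmf
print(falling(n, r) * p**r)   # closed form n^{(r)} p^r
```

Both prints give $10\cdot 9\cdot 8\cdot 0.4^{3}=46.08$ here, matching the identity exactly.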
Using a similar argument, one can show that
$$E\left[\frac{e_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}\right]=\frac{1}{n}-\frac{(1-p)^{d_{i}}}{n}$$
Thus, we get,
$$\displaystyle E[\hat{\tau}]$$
$$\displaystyle=\bar{\beta}+\sum_{i}\left((\gamma_{i}+\theta_{i})E\left[\frac{e_{i}z_{i}}{\sum_{i}z_{i}}\right]-\gamma_{i}E\left[\frac{e_{i}(1-z_{i})}{\sum_{i}(1-z_{i})}\right]\right)$$
$$\displaystyle=\bar{\beta}+\sum_{i}\left((\gamma_{i}+\theta_{i})\left[\frac{1}{n}-\frac{(1-p)^{d_{i}}}{n-d_{i}}\right]-\gamma_{i}\left[\frac{1}{n}-\frac{(1-p)^{d_{i}}}{n}\right]\right)$$
$$\displaystyle=\bar{\beta}-\sum_{i}\left(\frac{d_{i}\gamma_{i}(1-p)^{d_{i}}}{n(n-d_{i})}\right)+\sum_{i}\theta_{i}\left[\frac{1}{n}-\frac{(1-p)^{d_{i}}}{n-d_{i}}\right]$$
∎
F.15 Proof of Theorem 5.1
Proof.
Note that by the consistency assumption, we have,
$$\hat{\theta}_{1}=\sum_{i=1}^{n}Y_{i}^{obs}w_{i}(\textbf{z})=\sum_{i=1}^{n}\sum_{z,e}Y_{i}(z,e)I(z_{i}=z,e_{i}=e)w_{i}(\textbf{z})$$
$$\displaystyle\mathbb{E}[\hat{\theta}_{1}]=\sum_{i=1}^{n}\underset{z,e}{\sum}Y_{i}(z,e)\left(\sum_{\textbf{z}\in\Omega}I(z_{i}=z,e_{i}=e)w_{i}(\textbf{z})p(\textbf{z})\right)$$
$$\displaystyle=\sum_{i=1}^{n}\sum_{\textbf{z}\in\Omega}w_{i}(\textbf{z})Y_{i}(z_{1},e_{1})I(z_{i}=z_{1},e_{i}=e_{1})p(\textbf{z})$$
$$\displaystyle+\sum_{i=1}^{n}\sum_{\textbf{z}\in\Omega}w_{i}(\textbf{z})Y_{i}(z_{0},e_{0})I(z_{i}=z_{0},e_{i}=e_{0})p(\textbf{z})$$
$$\displaystyle+\sum_{i=1}^{n}\underset{(z,e)\neq(z_{1},e_{1}),(z_{0},e_{0})}{\sum}\sum_{\textbf{z}\in\Omega}w_{i}(\textbf{z})Y_{i}(z,e)I(z_{i}=z,e_{i}=e)p(\textbf{z})$$
$$\displaystyle=\sum_{i=1}^{n}\sum_{\textbf{z}\in\Omega_{i}(z_{1},e_{1})}w_{i}(\textbf{z})Y_{i}(z_{1},e_{1})p(\textbf{z})+\sum_{i=1}^{n}\sum_{\textbf{z}\in\Omega_{i}(z_{0},e_{0})}w_{i}(\textbf{z})Y_{i}(z_{0},e_{0})p(\textbf{z})$$
$$\displaystyle+\sum_{i=1}^{n}\underset{(z,e)\neq(z_{1},e_{1}),(z_{0},e_{0})}{\sum}\sum_{\textbf{z}\in\Omega_{i}(z,e)}w_{i}(\textbf{z})Y_{i}(z,e)p(\textbf{z})$$
$$\displaystyle=\sum_{i=1}^{n}Y_{i}(z_{1},e_{1})\left(\sum_{\textbf{z}\in\Omega_{i}(z_{1},e_{1})}w_{i}(\textbf{z})p(\textbf{z})\right)+\sum_{i=1}^{n}Y_{i}(z_{0},e_{0})\left(\sum_{\textbf{z}\in\Omega_{i}(z_{0},e_{0})}w_{i}(\textbf{z})p(\textbf{z})\right)$$
$$\displaystyle+\sum_{i=1}^{n}\underset{(z,e)\neq(z_{1},e_{1}),(z_{0},e_{0})}{\sum}Y_{i}(z,e)\left(\sum_{\textbf{z}\in\Omega_{i}(z,e)}w_{i}(\textbf{z})p(\textbf{z})\right)$$
$$\displaystyle=\sum_{i=1}^{n}Y_{i}(z_{1},e_{1})\left(\frac{1}{n}\right)-\sum_{i=1}^{n}Y_{i}(z_{0},e_{0})\left(\frac{1}{n}\right)$$
where the last line is required for unbiasedness. Since this is an identity in $Y_{i}(z,e)$, we have
$$\displaystyle\forall i=1,\ldots,n,\quad\sum_{\textbf{z}\in\Omega_{i}(z_{1},e_{1})}w_{i}(\textbf{z})p(\textbf{z})$$
$$\displaystyle=\frac{1}{n}$$
$$\displaystyle\forall i=1,\ldots,n,\quad\sum_{\textbf{z}\in\Omega_{i}(z_{0},e_{0})}w_{i}(\textbf{z})p(\textbf{z})$$
$$\displaystyle=-\frac{1}{n}$$
$$\displaystyle\forall i,\forall(z,e)\neq(z_{1},e_{1}),(z_{0},e_{0}),\quad\sum_{\textbf{z}\in\Omega_{i}(z,e)}w_{i}(\textbf{z})p(\textbf{z})$$
$$\displaystyle=0$$
Let us now show that $0<\pi_{i}(z_{1},e_{1})<1$ is necessary for unbiasedness. Suppose there exists a $j$ such that $\pi_{j}(z_{1},e_{1})=\sum_{\textbf{z}\in\Omega_{j}(z_{1},e_{1})}p(\textbf{z})=0$, then $p(\textbf{z})=0$ $\forall$ $\textbf{z}\in\Omega_{j}(z_{1},e_{1})$. This means that $E[\hat{\theta}]$ is free of $Y_{j}(z_{1},e_{1})$ irrespective of $w_{j}(\textbf{z})$, see line 4 of the previous equation, and hence cannot be equal to $\sum_{i=1}^{n}Y_{i}(z_{1},e_{1})$. Similarly, suppose there exists a $j$ such that $\pi_{j}(z_{1},e_{1})=\sum_{\textbf{z}\in\Omega_{j}(z_{1},e_{1})}p(\textbf{z})=1$. This implies that $\Omega_{j}(z_{1},e_{1})=\Omega$. Since for fixed $j$, the sets $\Omega_{j}(z,e)$ are disjoint, we have $p(\textbf{z})=0$ for any $\textbf{z}\in\Omega_{j}(z_{0},e_{0})$. Hence by the previous argument, $E[\hat{\theta}_{1}]$ will be free of $Y_{j}(z_{0},e_{0})$ and therefore cannot be unbiased. A similar argument will show the necessity of $0<\pi_{i}(z_{0},e_{0})$.
∎
F.16 Proof of Theorem 5.2
Proof.
Note that $\hat{\theta}_{2}$, given by equation 24, is contained in the class of estimators $\hat{\theta}$ given by equation 23, since $w_{i}(\textbf{z})=w_{i}(z,e)$. Using the results from Theorem 5.1, $\hat{\theta}_{2}$ is unbiased iff for each $i=1,\ldots,n$,
$$\displaystyle\underset{\textbf{z}\in\tau_{i}(z_{1},e_{1})}{\sum}w_{i}(\textbf{z})p(\textbf{z})$$
$$\displaystyle=\frac{1}{n}$$
$$\displaystyle\implies\underset{\textbf{z}\in\tau_{i}(z_{1},e_{1})}{\sum}w_{i}(z,e)p(\textbf{z})$$
$$\displaystyle=\frac{1}{n}$$
$$\displaystyle\implies w_{i}(z_{1},e_{1})\underset{\textbf{z}\in\tau_{i}(z_{1},e_{1})}{\sum}p(\textbf{z})$$
$$\displaystyle=\frac{1}{n}$$
$$\displaystyle\implies w_{i}(z_{1},e_{1})\pi_{i}(z_{1},e_{1})$$
$$\displaystyle=\frac{1}{n}$$
$$\displaystyle\implies w_{i}(z_{1},e_{1})$$
$$\displaystyle=\frac{1}{n\pi_{i}(z_{1},e_{1})}$$
A similar argument shows that $w_{i}(z_{0},e_{0})=\frac{1}{n\pi_{i}(z_{0},e_{0})}$ and $w_{i}(z,e)=0$ for all $(z,e)\neq(z_{1},e_{1}),(z_{0},e_{0})$.
∎
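The weights of Theorem 5.2 translate directly into code. The sketch below implements the resulting Horvitz-Thompson estimator and, in a toy setting where $z_{i}$ and $e_{i}$ are drawn independently (so the cell probabilities $\pi_{i}$ are known in closed form; all numbers are illustrative, not from the paper), checks its unbiasedness by Monte Carlo:

```python
import numpy as np

def horvitz_thompson(y_obs, z, e, pi1, pi0):
    """HT estimator with weight 1/(n*pi_i(1,1)) on the (1,1) cell,
    weight 1/(n*pi_i(0,0)) on the (0,0) cell, and weight 0 elsewhere."""
    n = len(y_obs)
    in1 = (z == 1) & (e == 1)
    in0 = (z == 0) & (e == 0)
    return (y_obs[in1] / pi1[in1]).sum() / n - (y_obs[in0] / pi0[in0]).sum() / n

rng = np.random.default_rng(3)
n, pz, pe = 50, 0.5, 0.4
Y11 = rng.normal(2, 1, n)              # fixed potential outcomes Y_i(1,1)
Y00 = rng.normal(0, 1, n)              # fixed potential outcomes Y_i(0,0)
theta = (Y11 - Y00).mean()             # estimand
pi1 = np.full(n, pz * pe)              # P(z_i=1, e_i=1) under independence
pi0 = np.full(n, (1 - pz) * (1 - pe))  # P(z_i=0, e_i=0)

ests = []
for _ in range(20000):
    z = (rng.random(n) < pz).astype(int)
    e = (rng.random(n) < pe).astype(int)
    y = np.where((z == 1) & (e == 1), Y11,
                 np.where((z == 0) & (e == 0), Y00, 0.0))
    ests.append(horvitz_thompson(y, z, e, pi1, pi0))

print(np.mean(ests), theta)   # Monte Carlo mean matches the estimand
```

The outcomes in the irrelevant cells are set to zero only for convenience; they never enter the estimator because their weight is zero.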
F.17 Proof of Theorem 5.3
Proof.
For a CRD design, we have,
$$\displaystyle\alpha_{i}(1,e_{i})$$
$$\displaystyle=\mathbb{E}\left[\frac{I(Z_{i}=1,E_{i}=e_{i})}{\sum_{i}{Z_{i}}}\right]$$
$$\displaystyle=\frac{1}{n_{t}}\mathbb{P}\left(Z_{i}=1,E_{i}=e_{i}\right)$$
$$\displaystyle=\frac{1}{n_{t}}\frac{n_{t}}{n}\frac{\binom{n_{t}-1}{e_{i}}\binom{n_{c}}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\text{ if }n_{t}\geq e_{i}+1\text{ and }n_{c}\geq d_{i}-e_{i},\text{ and }0\text{ otherwise}$$
$$\displaystyle=\frac{1}{n}\frac{\binom{n_{t}-1}{e_{i}}\binom{n_{c}}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\text{ if }n_{t}\geq e_{i}+1\text{ and }n_{c}\geq d_{i}-e_{i},\text{ and }0\text{ otherwise}$$
For a Bernoulli trial, we have,
$$\displaystyle\alpha_{i}(1,e_{i})$$
$$\displaystyle=\mathbb{E}\left[\frac{I(Z_{i}=1,E_{i}=e_{i})}{\sum_{i}Z_{i}}\right]$$
$$\displaystyle=\mathbb{E}_{K}\left[\mathbb{E}\left[\frac{I(Z_{i}=1,E_{i}=e_{i})}{\sum_{i}Z_{i}}\bigg{|}\sum_{i}{Z_{i}}=K\right]\right]$$
$$\displaystyle=\mathbb{E}_{K}\left[\frac{1}{\sum_{i}Z_{i}}\mathbb{P}\left(Z_{i}=1,E_{i}=e_{i}\bigg{|}\sum_{i}Z_{i}=K\right)\right]$$
$$\displaystyle=\mathbb{E}_{K}\left[\frac{1}{K}\frac{K}{n}\frac{\binom{K-1}{e_{i}}\binom{n-K}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\right]$$
$$\displaystyle=\frac{1}{n}\mathbb{E}_{K}\left[\frac{\binom{K-1}{e_{i}}\binom{n-K}{d_{i}-e_{i}}}{\binom{n-1}{d_{i}}}\right]$$
where $K=\sum_{i}Z_{i}$ is a restricted binomial random variable with support on $\{1,\ldots,n-1\}$ and $\mathbb{P}\left(K=k\right)=\frac{\binom{n}{k}p^{k}(1-p)^{n-k}}{1-(1-p)^{n}-p^{n}}$.
A similar proof holds for the other cases.
∎
F.18 Proof of Theorem 5.5
To prove Theorem 5.5, we first need an intermediate lemma, proved below. This lemma states that given any unbiased estimator of $\theta$ whose minimum variance is strictly greater than $0$, one can always construct a new estimator with lower mean squared error than the unbiased estimator. Lemma F.1 follows from a result of Godambe and Joshi [1965].
Lemma F.1.
Let $\mathbb{P}$ be any design and let $\hat{\theta}$ be an unbiased estimator of a generic causal effect $\theta$ under the design $\mathbb{P}$. Suppose $\min_{\mathbb{T}}Var_{\mathbb{P}}(\hat{\theta})>0$ where $\mathbb{T}$ is the table of science. Then there exists an estimator $\hat{\theta}_{1}$ such that $MSE[\hat{\theta}_{1}]<MSE[\hat{\theta}]$ for all $\theta$.
Proof.
Let $0<k\leq 1$ be a constant to be specified later and let $\hat{\theta}_{1}=(1-k)\hat{\theta}$.
Then we have
$$\displaystyle MSE(\hat{\theta}_{1})$$
$$\displaystyle=\mathbb{E}\left((1-k)\hat{\theta}-\theta\right)^{2}$$
$$\displaystyle=MSE(\hat{\theta})+k^{2}\left(Var(\hat{\theta})+\theta^{2}\right)-2kVar(\hat{\theta})$$
(32)
Note that the MSE is a function of the design $\mathbb{P}$ and the unknown but fixed potential outcomes $\{Y_{i}(z_{i},e_{i})\}_{i=1}^{n}$ given by the entries of Table of science $\mathbb{T}$. In fact, since $\hat{\theta}$ is an unbiased estimator, one can show that it is a function of only the relevant potential outcomes, i.e. $\{Y_{i}(z_{1},e_{1})\}_{i=1}^{n}$ and $\{Y_{i}(z_{0},e_{0})\}_{i=1}^{n}$. Now if $k>0$ and
$$k\left(Var(\hat{\theta})+\theta^{2}\right)<2Var(\hat{\theta}),$$
for all $\theta$, then $MSE(\hat{\theta}_{1})<MSE(\hat{\theta})$.
We need to show that such a $k$ exists. To see that this is true, let
$$\displaystyle k_{0}=\underset{\mathbb{T}}{\min}\frac{2Var(\hat{\theta})}{Var(\hat{\theta})+\theta^{2}}$$
Since $\min_{\mathbb{T}}Var(\hat{\theta})>0$, we have $k_{0}>0$. Let $k=\min(k_{0},1)$. Hence we have $0<k\leq 1$.
When $k_{0}<1$, $k=k_{0}$ and by definition of $k_{0}$ , $MSE(\hat{\theta}_{1})<MSE(\hat{\theta})$.
When $k_{0}\geq 1$, $k=1$, and $\hat{\theta}_{1}=0$. But $k_{0}\geq 1$ implies that $2Var(\hat{\theta})\geq Var(\hat{\theta})+\theta^{2}$, i.e. $Var(\hat{\theta})\geq\theta^{2}$. Using this fact and substituting $k=1$ in equation 32, one can see that $MSE(\hat{\theta}_{1})<MSE(\hat{\theta})$. Note that in such a case, the variance of $\hat{\theta}$ is so large that a constant estimator is able to beat it. This happens when $Var(\hat{\theta})>\theta^{2}$, making estimation practically impossible.
∎
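The shrinkage construction in Lemma F.1 can be illustrated numerically. In the sketch below (illustrative $\theta$ and variance, with the unbiased $\hat{\theta}$ simulated as a normal draw), any $0<k<\min(k_{0},1)$ yields a strictly smaller MSE:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, var = 1.0, 4.0                    # true effect and Var(theta_hat)
k0 = 2 * var / (var + theta**2)          # the threshold from the lemma
k = min(k0, 1.0) / 2                     # any 0 < k < min(k0, 1) works

draws = rng.normal(theta, np.sqrt(var), 200000)  # an unbiased theta_hat
mse_unbiased = ((draws - theta)**2).mean()
mse_shrunk = (((1 - k) * draws - theta)**2).mean()
print(mse_unbiased, mse_shrunk)          # shrinkage strictly lowers MSE
```

Here $k=0.5$, so the shrunk estimator's MSE is $(1-k)^{2}Var+k^{2}\theta^{2}=1.25$, well below the unbiased estimator's $4$; of course, the best $k$ depends on the unknown $\theta$, which is exactly why the lemma is about inadmissibility rather than a practical recipe.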
Proof of Theorem 5.5.
To show that the Horvitz-Thompson estimator is inadmissible, from Lemma F.1, it suffices to show that the variance of the HT estimator can never be zero for a non-constant design $\mathbb{P}$.
Let $X_{i}=I(Z_{i}=z_{1},E_{i}=e_{1})$ and $Y_{i}=I(Z_{i}=z_{0},E_{i}=e_{0})$ and $p_{i}=\mathbb{E}(X_{i})$ and $q_{i}=\mathbb{E}(Y_{i})$. Let us assume to the contrary that the variance of $\hat{\theta}_{HT}$ is $0$ for some $\theta$ under a non-constant design $\mathbb{P}$.
The variance of $\hat{\theta}_{HT}$ is $0$ iff
$$\displaystyle\hat{\theta}_{HT}$$
$$\displaystyle=\mathbb{E}\left(\hat{\theta}_{HT}\right)=\theta\mbox{ a.s. }\mathbb{P}$$
$$\displaystyle\iff\sum_{i=1}^{n}\left(Y_{i}(z_{1},e_{1})\frac{X_{i}}{p_{i}}-Y_{i}(z_{0},e_{0})\frac{Y_{i}}{q_{i}}\right)$$
$$\displaystyle=\sum_{i=1}^{n}\left(Y_{i}(z_{1},e_{1})-Y_{i}(z_{0},e_{0})\right)\mbox{ a.s. }\mathbb{P}$$
$$\displaystyle\iff\sum_{i=1}^{n}\frac{Y_{i}(z_{1},e_{1})}{p_{i}}\left(X_{i}-p_{i}\right)$$
$$\displaystyle=\sum_{i=1}^{n}\frac{Y_{i}(z_{0},e_{0})}{q_{i}}\left(Y_{i}-q_{i}\right)\mbox{ a.s. }\mathbb{P}$$
(33)
Since the potential outcomes are fixed, they cannot be functions of random variables. Hence equation 33 holds only if either
1.
$Y_{i}(z_{1},e_{1})=c_{1}p_{i}$, $Y_{i}(z_{0},e_{0})=c_{2}q_{i}$ for all $i$ and $\sum_{i}(X_{i}-Y_{i})=c\sum_{i}(p_{i}-q_{i})$ almost surely for some constants $c_{1},c_{2}$ and $c$ (or)
2.
$Y_{i}(z_{1},e_{1})$ and $Y_{i}(z_{0},e_{0})$ are all $0$.
Ignoring the trivial solutions of equation 33, the variance of the Horvitz-Thompson estimator is $0$ only if $X_{0}-r_{1}Y_{0}=r_{2}$ almost surely for some constants $r_{1}$ and $r_{2}$, where $X_{0}=\sum_{i}X_{i}$ and $Y_{0}=\sum_{i}Y_{i}$. Now since $X_{0}+Y_{0}\leq n$, this implies $Cov(X_{0},Y_{0})\leq 0$. Hence
$$0=Var(X_{0}-Y_{0})=Var(X_{0})+Var(Y_{0})-2Cov(X_{0},Y_{0}),$$
and since $-2Cov(X_{0},Y_{0})\geq 0$, this is true if and only if $Var(X_{0})=0$ and $Var(Y_{0})=0$. This implies that $X_{0}$ and $Y_{0}$ are constant, which contradicts the assumption that $\mathbb{P}$ is a non-constant design.
∎
F.19 Proof of Proposition B.1
Proof.
Recall from the previous lemma that
$$\hat{\beta}_{naive}=\beta+\gamma\left(\frac{\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}}{n_{t}}-\frac{\sum_{i}\sum_{j}g_{ij}z_{j}(1-z_{i})}{n_{c}}\right)+\frac{1}{n_{t}}\sum_{i}\epsilon_{i}z_{i}-\frac{1}{n_{c}}\sum_{i}\epsilon_{i}(1-z_{i})$$
$$=\beta+\gamma\left(\frac{n}{n_{c}n_{t}}\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}-\frac{1}{n_{c}}\sum_{i}\sum_{j}g_{ij}z_{i}\right)+\sum_{i}\epsilon_{i}\frac{nz_{i}-n_{t}}{n_{t}n_{c}}$$
Let $t_{1}=\gamma\left(\frac{n}{n_{c}n_{t}}\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}-\frac{1}{n_{c}}\sum_{i}\sum_{j}g_{ij}z_{i}\right)$ and $t_{2}=\sum_{i}\epsilon_{i}\frac{nz_{i}-n_{t}}{n_{t}n_{c}}$.
We will compute the variance of each of these terms separately. Note that the covariance between the two terms is $0$, since $\epsilon_{i}\perp z_{i}$ and $E[\epsilon_{i}]=0$, as seen below:
$$Cov\left(\frac{n}{n_{t}n_{c}}z_{i}z_{j}-\frac{z_{i}}{n_{c}},\epsilon_{i}(nz_{i}-n_{t})\right)=E\left[\left(\frac{n}{n_{t}n_{c}}z_{i}z_{j}-\frac{z_{i}}{n_{c}}\right)\epsilon_{i}(nz_{i}-n_{t})\right]-E\left[\frac{n}{n_{t}n_{c}}z_{i}z_{j}-\frac{z_{i}}{n_{c}}\right]E[\epsilon_{i}(nz_{i}-n_{t})]=0$$
The variance of $t_{2}$ is
$$Var(t_{2})=Var\left(\sum_{i}\epsilon_{i}\frac{nz_{i}-n_{t}}{n_{t}n_{c}}\right)=\sum_{i}\sum_{j}Cov\left(\epsilon_{i}\frac{nz_{i}-n_{t}}{n_{t}n_{c}},\epsilon_{j}\frac{nz_{j}-n_{t}}{n_{t}n_{c}}\right)$$
$$=nE\left[\epsilon_{i}^{2}\left(\frac{nz_{i}-n_{t}}{n_{t}n_{c}}\right)^{2}\right]\qquad(\mbox{since the covariance is }0\mbox{ when }i\neq j)$$
$$=\frac{n\sigma^{2}}{n_{t}^{2}n_{c}^{2}}\left[(n-n_{t})^{2}\frac{n_{t}}{n}+n_{t}^{2}\frac{n_{c}}{n}\right]=\sigma^{2}\left(\frac{1}{n_{t}}+\frac{1}{n_{c}}\right)$$
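As a quick sanity check of this closed form, one can simulate complete randomization and compare the empirical variance of $t_{2}$ with $\sigma^{2}(1/n_{t}+1/n_{c})$. This is a sketch of ours (the function name is hypothetical), assuming i.i.d. Gaussian errors $\epsilon_{i}$ with variance $\sigma^{2}$:

```python
import random

def var_t2_mc(n, n_t, sigma=1.0, reps=50_000, seed=0):
    """Monte Carlo variance of t2 = sum_i eps_i*(n*z_i - n_t)/(n_t*n_c)
    under complete randomization of n_t treated units among n."""
    rng = random.Random(seed)
    n_c = n - n_t
    total = total_sq = 0.0
    for _ in range(reps):
        treated = set(rng.sample(range(n), n_t))
        t2 = sum(
            rng.gauss(0.0, sigma) * (n * (i in treated) - n_t) / (n_t * n_c)
            for i in range(n)
        )
        total += t2
        total_sq += t2 * t2
    return total_sq / reps - (total / reps) ** 2

n, n_t, sigma = 10, 4, 1.0
mc = var_t2_mc(n, n_t, sigma)
theory = sigma**2 * (1.0 / n_t + 1.0 / (n - n_t))
print(mc, theory)  # both close to 5/12
```

The empirical and theoretical values agree up to Monte Carlo error, which supports the algebra above.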
To calculate the variance of $t_{1}$, note that
$$\frac{1}{\gamma^{2}}Var(t_{1})=\frac{n^{2}}{n_{t}^{2}n_{c}^{2}}Var\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}\right)+\frac{1}{n_{c}^{2}}Var\left(\sum_{i}\sum_{j}g_{ij}z_{i}\right)-2\frac{n}{n_{c}^{2}n_{t}}Cov\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i},\sum_{i}\sum_{j}g_{ij}z_{i}\right)$$
Each of these variances is calculated in the propositions below. Using these propositions, we get
$$\frac{1}{\gamma^{2}}Var(t_{1})=\frac{4n}{n_{t}n_{c}}\frac{(n_{t}-1)}{(n-1)(n-2)(n-3)}\left(m(n_{c}-1)+2m^{2}\frac{(3n+3n_{t}-2nn_{t}-3)}{n(n-1)}+(n_{t}-2)\sum_{i}d_{i}^{2}\right)$$
$$+\frac{1}{n_{c}}\frac{n_{t}}{n^{2}}\left(\sum_{i}{d_{i}^{2}}-\frac{\sum_{i\neq j}{d_{i}d_{j}}}{n-1}\right)-2\frac{1}{n_{c}}\frac{2(n_{t}-1)}{(n-1)(n-2)}\left(\sum_{i}d_{i}^{2}-\frac{4m^{2}}{n}\right)$$
∎
Proposition F.1.
$$\frac{1}{4}Var\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}\right)=\frac{n_{t}n_{c}(n_{t}-1)}{n(n-1)(n-2)(n-3)}\left(m(n_{c}-1)+2m^{2}\frac{(3n+3n_{t}-2nn_{t}-3)}{n(n-1)}+(n_{t}-2)\sum_{i}d_{i}^{2}\right)$$
Proof.
$$Var\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}\right)=\sum_{i}\sum_{j}\sum_{k}\sum_{l}g_{ij}g_{kl}Cov(z_{i}z_{j},z_{k}z_{l})$$
We will consider several cases. To keep notation simple, let $(x)_{n}=(x)(x-1)(x-2)\ldots(x-(n-1))$ be the falling factorial.
Case 1
$k=i,l=j$ or $k=j,l=i$
$$2\sum_{ij}g_{ij}^{2}Var(z_{i}z_{j})=2\left(\frac{(n_{t})_{2}}{(n)_{2}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}\right)\sum_{ij}g_{ij}=4m\left(\frac{(n_{t})_{2}}{(n)_{2}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}\right),$$
using $Var(z_{i}z_{j})=E[z_{i}z_{j}]-E[z_{i}z_{j}]^{2}$, $g_{ij}^{2}=g_{ij}$ and $\sum_{ij}g_{ij}=2m$.
Case 2
$k\neq i,j$, $l\neq i,j$, $k\neq l$
$$Cov(z_{i}z_{j},z_{k}z_{l})=E[z_{i}z_{j}z_{k}z_{l}]-E[z_{i}z_{j}]E[z_{k}z_{l}]=\frac{(n_{t})_{4}}{(n)_{4}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}\quad\forall\,k\neq i,j,\ l\neq i,j,\ k\neq l$$
$$\sum_{i}\sum_{j}\sum_{k\neq i,j}\sum_{l\neq i,j,\,l\neq k}g_{ij}g_{kl}=\sum_{i}\sum_{j}g_{ij}(2m-2d_{i}-2d_{j}+2g_{ij})=4(m^{2}+m-\sum_{i}{d_{i}^{2}})$$
Case 3
$k=i,l\neq j$; $k=j,l\neq i$; $l=i,k\neq j$; and $l=j,k\neq i$
Let $\mathcal{K}_{ij}$ be the index set of tuples $(k,l)$ satisfying the conditions mentioned above, for a fixed $(i,j)$. Then for any $(k,l)\in\mathcal{K}_{ij}$ (say $k=i$, $l\neq j$), we have
$$Cov(z_{i}z_{j},z_{i}z_{l})=E[z_{i}z_{j}z_{l}]-E[z_{i}z_{j}]E[z_{i}z_{l}]=\frac{(n_{t})_{3}}{(n)_{3}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}$$
$$\sum_{i}\sum_{j}\sum_{(k,l)\in\mathcal{K}_{ij}}g_{ij}g_{kl}=\sum_{i}\sum_{j}g_{ij}(2d_{i}+2d_{j}-4g_{ij})=4\left(\sum_{i}{d_{i}^{2}}-2m\right)$$
Combining these three cases, we get
$$\frac{1}{4}Var\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}\right)=m\left(\frac{(n_{t})_{2}}{(n)_{2}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}\right)+\left(m^{2}+m-\sum_{i}{d_{i}^{2}}\right)\left(\frac{(n_{t})_{4}}{(n)_{4}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}\right)+\left(\sum_{i}{d_{i}^{2}}-2m\right)\left(\frac{(n_{t})_{3}}{(n)_{3}}-\left(\frac{(n_{t})_{2}}{(n)_{2}}\right)^{2}\right)$$
Collecting each term and simplifying, we get
$$\frac{1}{4}Var\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i}\right)=\frac{n_{t}n_{c}(n_{t}-1)(n_{c}-1)}{n(n-1)(n-2)(n-3)}m+\frac{2n_{t}n_{c}(n_{t}-1)}{n^{2}(n-1)^{2}(n-2)(n-3)}(3n+3n_{t}-2nn_{t}-3)m^{2}+\frac{n_{t}n_{c}(n_{t}-1)(n_{t}-2)}{n(n-1)(n-2)(n-3)}\sum_{i}{d_{i}^{2}}$$
$$=\frac{n_{t}n_{c}(n_{t}-1)}{n(n-1)(n-2)(n-3)}\left(m(n_{c}-1)+2m^{2}\frac{(3n+3n_{t}-2nn_{t}-3)}{n(n-1)}+(n_{t}-2)\sum_{i}d_{i}^{2}\right)$$
∎
Proposition F.2.
$$Var\left(\sum_{i}\sum_{j}g_{ij}z_{j}\right)=\frac{n_{t}n_{c}}{n^{2}}\left(\sum_{i}{d_{i}^{2}}-\frac{\sum_{i\neq j}{d_{i}d_{j}}}{n-1}\right)$$
Proof.
$$Var\left(\sum_{i}\sum_{j}g_{ij}z_{i}\right)=Var\left(\sum_{i}d_{i}z_{i}\right)=Var(z_{1})\sum_{i}{d_{i}^{2}}+Cov(z_{1},z_{2})\sum_{i\neq j}d_{i}d_{j}$$
$$=\left(\frac{n_{t}}{n}-\left(\frac{n_{t}}{n}\right)^{2}\right)\sum_{i}{d_{i}^{2}}+\left(\frac{(n_{t})_{2}}{(n)_{2}}-\left(\frac{n_{t}}{n}\right)^{2}\right)\sum_{i\neq j}d_{i}d_{j}$$
∎
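For small $n$, Proposition F.2 can be verified exactly by enumerating all $\binom{n}{n_{t}}$ complete-randomization assignments. The sketch below (ours; the path-graph degree sequence is a hypothetical example) does this brute-force check:

```python
from itertools import combinations

def var_degree_sum_exact(d, n_t):
    """Enumerate all complete-randomization assignments of n_t treated
    units and return the exact Var(sum_i d_i z_i)."""
    n = len(d)
    vals = [sum(d[i] for i in S) for S in combinations(range(n), n_t)]
    mu = sum(vals) / len(vals)
    return sum((v - mu) ** 2 for v in vals) / len(vals)

# degree sequence of a path graph on 5 vertices (illustrative choice)
d, n_t = [1, 2, 2, 2, 1], 2
n, n_c = len(d), len(d) - n_t
lhs = var_degree_sum_exact(d, n_t)
sum_d2 = sum(x * x for x in d)
sum_didj = sum(d) ** 2 - sum_d2  # sum over i != j of d_i * d_j
rhs = n_t * n_c / n**2 * (sum_d2 - sum_didj / (n - 1))
print(lhs, rhs)  # both equal 0.36
```

The exact enumeration and the closed form coincide, as the proposition predicts.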
Proposition F.3.
$$Cov\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i},\sum_{i}\sum_{j}g_{ij}z_{j}\right)=\frac{2n_{t}n_{c}(n_{t}-1)}{n(n-1)(n-2)}\left(\sum_{i}d_{i}^{2}-\frac{4m^{2}}{n}\right)$$
Proof.
$$Cov\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i},\sum_{i}\sum_{j}g_{ij}z_{j}\right)=\sum_{i}\sum_{j}\sum_{k}\sum_{l}g_{ij}g_{kl}Cov(z_{i}z_{j},z_{k})=\sum_{i}\sum_{j}\sum_{k}g_{ij}d_{k}Cov(z_{i}z_{j},z_{k})$$
Consider two cases:
Case 1
$k=i$ and $k=j$
$$\sum_{i}\sum_{j}g_{ij}\left(d_{i}Cov(z_{i}z_{j},z_{i})+d_{j}Cov(z_{i}z_{j},z_{j})\right)=\left(\frac{(n_{t})_{2}}{(n)_{2}}-\frac{(n_{t})_{2}}{(n)_{2}}\frac{n_{t}}{n}\right)2\sum_{i}d_{i}^{2}$$
Case 2
$k\neq i,j$
$$\sum_{i}\sum_{j}\sum_{k\neq i,j}g_{ij}d_{k}Cov(z_{i}z_{j},z_{k})=\left(\frac{(n_{t})_{3}}{(n)_{3}}-\frac{(n_{t})_{2}}{(n)_{2}}\frac{n_{t}}{n}\right)\sum_{i}\sum_{j}g_{ij}\sum_{k\neq i,j}d_{k}$$
$$=\left(\frac{(n_{t})_{3}}{(n)_{3}}-\frac{(n_{t})_{2}}{(n)_{2}}\frac{n_{t}}{n}\right)\sum_{i}\sum_{j}g_{ij}(2m-d_{i}-d_{j})=\left(\frac{(n_{t})_{3}}{(n)_{3}}-\frac{(n_{t})_{2}}{(n)_{2}}\frac{n_{t}}{n}\right)(4m^{2}-2\sum_{i}d_{i}^{2})$$
Adding these two terms and simplifying gives the desired result:
$$Cov\left(\sum_{i}\sum_{j}g_{ij}z_{j}z_{i},\sum_{i}\sum_{j}g_{ij}z_{j}\right)=\left(\frac{(n_{t})_{3}}{(n)_{3}}-\frac{(n_{t})_{2}}{(n)_{2}}\frac{n_{t}}{n}\right)4m^{2}+\left(\frac{(n_{t})_{2}}{(n)_{2}}-\frac{(n_{t})_{3}}{(n)_{3}}\right)2\sum_{i}{d_{i}^{2}}$$
$$=\frac{-2n_{t}(n_{t}-1)n_{c}}{n^{2}(n-1)(n-2)}4m^{2}+2\frac{n_{t}n_{c}(n_{t}-1)}{n(n-1)(n-2)}\sum_{i}{d_{i}^{2}}=\frac{2n_{t}n_{c}(n_{t}-1)}{n(n-1)(n-2)}\left(\sum_{i}d_{i}^{2}-\frac{4m^{2}}{n}\right)$$
∎
Some open problems concerning the convergence of positive series
Constantin P. Niculescu
University of Craiova, Department of Mathematics, Craiova RO-200585, Romania
cniculescu47@yahoo.com
and
Gabriel T. Prǎjiturǎ
Department of Mathematics, The College at Brockport, State University of New
York, 350 New Campus Drive, Brockport, New York 14420-2931, USA
gprajitu@brockport.edu
(Date: Jan 19, 2012)
Abstract.
We discuss some old results due to Abel and Olivier concerning the convergence
of positive series and prove a set of necessary conditions involving
convergence in density.
Key words and phrases:positive series, set of zero density, convergence in density
2000 Mathematics Subject Classification: Primary 37A45, 40A30; Secondary 40E05
1. Introduction
Understanding the nature of a series is usually a difficult task. The
following two striking examples can be found in Hardy’s book, Orders of
infinity: the series
$$\sum_{n\geq 3}\frac{1}{n\ln n\left(\ln\ln n\right)^{2}}$$
converges to 38.43…, but does so so slowly that one needs to sum its first
$10^{3.14\times 10^{86}}$ terms to get the first 2 exact decimals of the sum.
At the same time, the series
$$\sum_{n\geq 3}\frac{1}{n\ln n\left(\ln\ln n\right)}$$
is divergent, but its partial sums exceed 10 only after $10^{10^{100}}$ terms.
See [17], pp. 60-61. On page 48 of the same book, Hardy mentions an
interesting result (attributed to De Morgan and Bertrand) about the
convergence of the series of the form
($MB_{k}$)
$$\sum_{n\geq 1}\frac{1}{n^{s}}\text{ and }\sum_{n\geq n_{k}}\frac{1}{n\left(\ln n\right)\left(\ln\ln n\right)\cdots(\underset{k\text{ times}}{\underbrace{\ln\ln\cdots\ln n}})^{s}}~{},$$
where $k$ is an arbitrarily fixed natural number, $s$ is a real number and
$n_{k}$ is a number large enough to ensure that $\underset{k\text{ times}}{\underbrace{\ln\ln\cdots\ln n}}$ is positive. Precisely, such a series is
convergent if $s>1$ and divergent otherwise. This is an easy consequence of
Cauchy’s condensation test (see Knopp [21], p. 122). Another short
argument is provided by Hardy [18] in his Course of Pure
Mathematics, on p. 376.
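The extreme slowness quoted above is easy to glimpse numerically: the partial sums of $\sum 1/(n\ln n)$ grow like $\ln\ln n$, so going from $10^{3}$ to $10^{6}$ terms adds only about $0.7$ to the sum. A small illustrative computation of our own:

```python
import math

# Partial sums of sum 1/(n ln n) grow like ln(ln n): even 10^6 terms
# only reach about 3.5, matching the "exceeds 10 only after 10^(10^100)
# terms" behaviour quoted from Hardy.
def partial_sum(N):
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

s3, s6 = partial_sum(10**3), partial_sum(10**6)
print(s3, s6, s6 - s3)  # the gap is close to ln(ln 1e6) - ln(ln 1e3) ~ 0.69
```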
The above discussion makes the following problem natural.
Problem 1.
What decides if a positive series is convergent or divergent?
Is there any universal convergence test? Is there any pattern in convergence?
This is an old problem that has received a great deal of attention over the years.
Important progress was made during the 19th Century by people like A.-L.
Cauchy, N. H. Abel, C. F. Gauss, A. Pringsheim and Du Bois-Reymond. In the
last fifty years the interest shifted toward combinatorial aspects of
convergence/divergence, although papers containing new tests of convergence
continue to be published. See for example [2] and [23]. This
paper’s purpose is to discuss the relationship between the convergence of a
positive series and the convergence properties of the summand sequence.
2. Some history
We start by recalling an episode from the beginning of Analysis that marked
the moment when the series of type ($MB_{k}$) came to the attention of
mathematicians. M. Goar [14] has written the story in more detail.
In 1827, L. Olivier [26] published a paper claiming that the
harmonic series represents a kind of “boundary” case with which other
potentially convergent series of positive terms could be compared.
Specifically, he asserted that a positive series $\sum a_{n}$ whose terms are
monotone decreasing is convergent if and only if $na_{n}\rightarrow 0.$ One
year later, Abel [1] disproved this convergence test by considering
the case of the (divergent) positive series $\sum_{n\geq 2}\frac{1}{n\ln n}$.
In the same Note, Abel noticed two other important facts concerning the
convergence of positive series:
Lemma 1.
There is no positive function $\varphi$ such that a positive
series $\sum a_{n}$ whose terms are monotone decreasing is convergent if and
only if $\varphi(n)a_{n}\rightarrow 0.$ In other words, there is no “boundary”
positive series.
Lemma 2.
If $\sum a_{n}$ is a divergent positive series, then the
series ${\displaystyle\sum}\left(\frac{a_{n}}{\sum_{k=1}^{n}a_{k}}\right)$ is also divergent. As a
consequence, for each divergent positive series there is always another one
which diverges slower.
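Lemma 2 can be watched in action numerically: starting from the divergent harmonic series $a_{n}=1/n$, the transformed terms $a_{n}/S_{n}$ behave like $1/(n\ln n)$, so the new partial sums grow like $\ln\ln n$ instead of $\ln n$. A small sketch of ours:

```python
# Lemma 2 in action: from the divergent harmonic series a_n = 1/n, the
# series with terms a_n / S_n (S_n the partial sums) still diverges,
# but far more slowly (roughly like ln(ln n) instead of ln n).
N = 10**6
S = T = 0.0          # S: partial sum of a_n; T: partial sum of a_n / S_n
for n in range(1, N + 1):
    a = 1.0 / n
    S += a
    T += a / S
print(S, T)  # S ~ ln N + gamma ~ 14.39; T is much smaller
```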
A fact which was probably known to Abel (although it is not made explicit in
his Note) is that the whole scale of divergent series
(A)
$$\sum_{n\geq n_{k}}\frac{1}{n\left(\ln n\right)\left(\ln\ln n\right)\cdots(\underset{k\text{ times}}{\underbrace{\ln\ln\cdots\ln n}})}\text{\quad for }k=1,2,3,...$$
comes from the harmonic series $\sum\frac{1}{n},$ by successive application of
Lemma 2 and the following result on the generalized Euler’s constant.
Lemma 3.
(C. Maclaurin and A.-L. Cauchy). If $f$ is positive and strictly
decreasing on $[0,\infty),$ there is a constant $\gamma_{f}\in(0,f(1)]$ and a
sequence $(E_{f}(n))_{n}$ with $0<E_{f}(n)<f(n),$ such that
(MC)
$$\sum_{k\,=\,1}^{n}\,f(k)=\int_{1}^{n}\,f(x)\,dx+\gamma_{f}+E_{f}(n)$$
for all $n.$
See [4], Theorem 1, for details.
If $f(n)\rightarrow 0$ as $n\rightarrow\infty,$ then $($MC$)$ implies
$$\sum_{k\,=\,1}^{n}\,f(k)-\int_{1}^{n}\,f(x)\,dx\rightarrow\gamma_{f}.$$
$\gamma_{f}$ is called the generalized Euler’s constant, the
original corresponding to $f(x)=1/x.$
Coming back to Olivier’s test of convergence, we have to mention that the
necessity part survived the scrutiny of Abel and became known as Olivier’s Theorem:
Theorem 1.
If $\sum a_{n}$ is a convergent positive series and $(a_{n})_{n}$
is monotone decreasing, then $na_{n}\rightarrow 0.$
Remark 1.
If $\sum a_{n}$ is a convergent positive series and $(na_{n})_{n}$ is monotone
decreasing, then $\left(n\ln n\right)a_{n}\rightarrow 0$. In fact,
according to the well known estimate of harmonic numbers,
$$\sum_{1}^{n}\frac{1}{k}=\log n+\gamma+\frac{1}{2n}-\frac{1}{12n^{2}}+\frac{\varepsilon_{n}}{120n^{4}}\text{,}$$
where $\varepsilon_{n}\in(0,1),$ we get
$$\sum_{k=\lfloor\sqrt{n}\rfloor}^{n}a_{k}=\sum_{k=\lfloor\sqrt{n}\rfloor}^{n}\left(ka_{k}\right)\frac{1}{k}\geq na_{n}\sum_{k=\lfloor\sqrt{n}\rfloor}^{n}\frac{1}{k}\geq\frac{1}{2}\left(n\ln n\right)a_{n}-\frac{1}{2(\lfloor\sqrt{n}\rfloor-1)}$$
for all $n\geq 2.$ Here $\lfloor x\rfloor$ denotes the largest integer that
does not exceed $x.$
Simple examples show that the monotonicity condition is vital for Olivier’s
Theorem. See the case of the series $\sum a_{n},$ where $a_{n}=\frac{\log n}{n}$ if $n$ is a square, and $a_{n}=\frac{1}{n^{2}}$ otherwise.
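This counterexample is easy to check numerically: the partial sums stay bounded, while $na_{n}=\log n$ along the squares, so $na_{n}$ does not tend to $0$. A short sketch of ours:

```python
import math

# Counterexample to Olivier's test without monotonicity:
# a_n = (log n)/n when n is a perfect square, a_n = 1/n^2 otherwise.
# The series converges, yet n*a_n = log n along the squares.
def a(n):
    r = math.isqrt(n)
    return math.log(n) / n if r * r == n else 1.0 / n**2

N = 10**5
total = sum(a(n) for n in range(2, N + 1))
print(total)                           # bounded: the series converges
print(100 * a(100), 10**4 * a(10**4))  # = log(100), log(10^4): no decay
```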
The next result provides an extension of Olivier's Theorem to the context
of complex numbers.
Theorem 2.
Suppose that $(a_{n})_{n}$ is a nonincreasing sequence of positive numbers
converging to 0 and $(z_{n})_{n}$ is a sequence of complex numbers such that
the series $\sum a_{n}z_{n}$ is convergent. Then
$$\lim_{n\rightarrow\infty}\left(\sum_{k=1}^{n}z_{k}\right)a_{n}=0.$$
Proof.
Let $\varepsilon>0.$ Since the series $\sum a_{n}z_{n}$ is convergent, one may choose a natural
number $m>0$ such that
$$\left|\sum_{k=m+1}^{n}a_{k}z_{k}\right|<\frac{\varepsilon}{4}$$
for every $n>m.$ We will estimate $a_{n}\left(z_{m+1}+\cdots+z_{n}\right)$
by using Abel’s identity. In fact, letting
$$S_{n}=a_{m+1}z_{m+1}+\cdots+a_{n}z_{n},$$
we get
$$\displaystyle\left|a_{n}\left(z_{m+1}+\cdots+z_{n}\right)\right|=a_{n}\left|\frac{1}{a_{m+1}}a_{m+1}z_{m+1}+\cdots+\frac{1}{a_{n}}a_{n}z_{n}\right|\\
\displaystyle=a_{n}\left|\frac{1}{a_{m+1}}S_{m+1}+\frac{1}{a_{m+2}}\left(S_{m+2}-S_{m+1}\right)+\cdots+\frac{1}{a_{n}}\left(S_{n}-S_{n-1}\right)\right|\\
\displaystyle=a_{n}\left|\left(\frac{1}{a_{m+1}}-\frac{1}{a_{m+2}}\right)S_{m+1}+\cdots+\left(\frac{1}{a_{n-1}}-\frac{1}{a_{n}}\right)S_{n-1}+\frac{1}{a_{n}}S_{n}\right|\\
\displaystyle\leq\frac{\varepsilon a_{n}}{4}\left(\left(\frac{1}{a_{m+2}}-\frac{1}{a_{m+1}}\right)+\cdots+\left(\frac{1}{a_{n}}-\frac{1}{a_{n-1}}\right)+\frac{1}{a_{n}}\right)\\
\displaystyle=\frac{\varepsilon a_{n}}{4}\left(\frac{2}{a_{n}}-\frac{1}{a_{m+1}}\right)<\frac{\varepsilon}{2}.$$
Since $\lim_{n\rightarrow\infty}a_{n}=0,$ one may choose an index
$N(\varepsilon)>m$ such that
$$\left|a_{n}\left(z_{1}+\cdots+z_{m}\right)\right|<\frac{\varepsilon}{2}$$
for every $n>N(\varepsilon)$ and thus
$$\left|a_{n}\left(z_{1}+\cdots+z_{n}\right)\right|\leq\left|a_{n}\left(z_{1}+%
\cdots+z_{m}\right)\right|+\left|a_{n}\left(z_{m+1}+\cdots+z_{n}\right)\right|<\varepsilon$$
for every $n>N(\varepsilon).$
∎
In 2003, T. Šalát and V. Toma [28] made the important remark that
the monotonicity condition in Theorem 1 can be dropped
if the convergence of $(na_{n})_{n}$ is weakened:
Theorem 3.
If $\sum a_{n}$ is a convergent positive series, then
$na_{n}\rightarrow 0$ in density.
In order to explain the terminology, recall that a subset $A$ of $\mathbb{N}$
has zero density if
$$d(A)=\lim_{n\rightarrow\infty}\frac{\left|A\cap\{1,\ldots,n\}\right|}{n}=0,$$
positive lower density if
$$\underline{d}(A)=\liminf_{n\rightarrow\infty}\frac{\left|A\cap\{1,\ldots,n\}\right|}{n}>0,$$
and positive upper density if
$$\bar{d}(A)=\limsup_{n\rightarrow\infty}\frac{\left|A\cap\{1,\ldots,n\}\right|}{n}>0.$$
Here $\left|\cdot\right|$ stands for cardinality.
We say that a sequence $(x_{n})_{n}$ of real numbers converges in
density to a number $x$ (denoted by $(d)$-$\lim_{n\rightarrow\infty}x_{n}=x$)
if for every $\varepsilon>0$ the set $A(\varepsilon)=\left\{n:\left|x_{n}-x\right|\geq\varepsilon\right\}$ has zero density. Notice that
$(d)$-$\lim_{n\rightarrow\infty}x_{n}=x$ if and only if there is a subset $J$ of
$\mathbb{N}$ of zero density such that
$$\lim_{\begin{subarray}{c}n\rightarrow\infty\\
n\notin J\end{subarray}}x_{n}=x.$$
This notion can be traced back to B. O. Koopman and J. von Neumann
([22], pp. 258-259), who proved the integral counterpart of the
following result:
Theorem 4.
For every sequence of nonnegative numbers,
$$\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}a_{k}=0\Rightarrow(d)\text{-}\lim_{n\rightarrow\infty}a_{n}=0.$$
The converse works under additional hypotheses, for example, for bounded sequences.
Proof.
Assuming $\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}a_{k}=0,$ we
associate to each $\varepsilon>0$ the set $A_{\varepsilon}=\left\{n\in\mathbb{N}:a_{n}\geq\varepsilon\right\}.$ Since
$$\frac{\left|\{1,...,n\}\cap A_{\varepsilon}\right|}{n}\leq\frac{1}{n}\sum_{k=1}^{n}\frac{a_{k}}{\varepsilon}=\frac{1}{\varepsilon n}\sum_{k=1}^{n}a_{k}\rightarrow 0\text{ as }n\rightarrow\infty,$$
we infer that each of the sets $A_{\varepsilon}$ has zero density. Therefore
$(d)$-$\lim_{n\rightarrow\infty}a_{n}=0.$
Suppose now that $(a_{n})_{n}$ is bounded and $(d)$-$\lim_{n\rightarrow\infty}a_{n}=0.$ Then for every $\varepsilon>0$ there is a set $J$ of zero density
outside which $a_{n}<\varepsilon.$ Since
$$\frac{1}{n}\sum_{k=1}^{n}a_{k}=\frac{1}{n}\sum_{k\in\{1,...,n\}\cap J}a_{k}+\frac{1}{n}\sum_{k\in\{1,...,n\}\backslash J}a_{k}\leq\frac{\left|\{1,...,n\}\cap J\right|}{n}\cdot\sup_{k\in\mathbb{N}}a_{k}+\varepsilon$$
and $\lim_{n\rightarrow\infty}\frac{\left|\{1,...,n\}\cap J\right|}{n}=0$, we conclude that $\limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}a_{k}\leq\varepsilon.$ As $\varepsilon>0$ was arbitrary, $\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}a_{k}=0.$
∎
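A numerical illustration of Theorem 4, with a toy sequence of our own choosing: take $a_{n}=1$ on the perfect squares and $a_{n}=1/n$ elsewhere. The Cesàro means tend to $0$, and the exceptional set $\{n:a_{n}\geq\varepsilon\}$ has vanishing density, although $a_{n}$ itself has no ordinary limit:

```python
import math

# Koopman-von Neumann in action: a_n = 1 on the perfect squares and
# a_n = 1/n elsewhere. The Cesaro means tend to 0, so a_n -> 0 in
# density, although a_n does not converge in the ordinary sense.
def a(n):
    r = math.isqrt(n)
    return 1.0 if r * r == n else 1.0 / n

N = 10**6
cesaro = sum(a(n) for n in range(1, N + 1)) / N
exceptional = sum(1 for n in range(1, N + 1) if a(n) >= 0.5) / N
print(cesaro, exceptional)  # both of order 10^-3 at N = 10^6
```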
Remark 2.
Theorem 4 is related to the Tauberian theory, whose aim is to
provide converses to the well known fact that for any sequence of complex
numbers,
$$\lim_{n\rightarrow\infty}z_{n}=z\Rightarrow\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}z_{k}=z.$$
Recall here the famous Hardy-Littlewood Tauberian theorem: If $\left|z_{n}-z_{n-1}\right|=O\left(1/n\right)$ and
$$\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}z_{k}=z,$$
then $\lim_{n\rightarrow\infty}z_{n}=z.$ See [19], Theorem 28.
The aforementioned result of Šalát and Toma is actually an easy
consequence of Theorem 4. Indeed, if $\sum a_{n}$ is a convergent
positive series, then its partial sums $S_{n}=\sum_{k=1}^{n}a_{k}$ constitute
a convergent sequence with limit $S$. By Cesàro’s Theorem,
$$\lim_{n\rightarrow\infty}\frac{S_{1}+\cdots+S_{n-1}}{n}=S,$$
whence
$$\lim_{n\rightarrow\infty}\frac{a_{1}+2a_{2}+\cdots+na_{n}}{n}=\lim_{n\rightarrow\infty}\left(S_{n}-\frac{S_{1}+\cdots+S_{n-1}}{n}\right)=0.$$
According to Theorem 4, this fact is equivalent to the convergence
in density of $(na_{n})_{n}$ to 0.
In turn, the result of Šalát and Toma implies Olivier’s Theorem.
Indeed, if the sequence $(a_{n})$ is decreasing, then
$$\frac{a_{1}+2a_{2}+\dots+na_{n}}{n}\geq\frac{(1+2+\dots+n)a_{n}}{n}=\frac{(n+1)a_{n}}{2}$$
which implies that if
$$\lim_{n\rightarrow\infty}\frac{a_{1}+2a_{2}+\cdots+na_{n}}{n}=0$$
then $\lim_{n}na_{n}=0.$
If $\sum a_{n}$ is a convergent positive series, then so is $\sum a_{\varphi(n)},$ whenever $\varphi:\mathbb{N\rightarrow N}$ is a bijective
map. This implies that $na_{\varphi(n)}\rightarrow 0$ in density (a conclusion
that doesn’t work for usual convergence).
The monograph of H. Furstenberg [13] outlines the importance of
convergence in density in ergodic theory. In connection to series summation,
the concept of convergence in density was rediscovered (under the name of
statistical convergence) by Steinhaus [28] and Fast [12] (who
mentioned also the first edition of Zygmund’s monograph [31], published in
Warsaw in 1935). Apparently unaware of the Koopman-von Neumann result,
Šalát and Toma referred to these authors for the roots of convergence
in density.
At present there is a large literature about this concept and its many
applications. We only mention here the recent papers by M. Burgin and O. Duman
[7] and P. Therán [30].
3. An extension of the Šalát-Toma Theorem
In this section we will turn our attention toward a generalization of the
result of Šalát and Toma mentioned above. This generalization involves
the concepts of convergence in density and convergence in lower density. A
sequence $(x_{n})_{n}$ of real numbers converges in lower density to a
number $x$ (abbreviated, $(\underline{d})$-$\lim_{n\rightarrow\infty}x_{n}=x)$
if for every $\varepsilon>0$ the set $A(\varepsilon)=\left\{n:\left|x_{n}-x\right|\geq\varepsilon\right\}$ has zero lower density.
Theorem 5.
Assume that $\sum a_{n}$ is a convergent positive series and
$(b_{n})_{n}$ is a nondecreasing sequence of positive numbers such that
$\sum_{n=1}^{\infty}\frac{1}{b_{n}}=\infty.$ Then
$$(\underline{d})\text{-}\lim_{n\rightarrow\infty}a_{n}b_{n}=0,$$
and this conclusion can be improved to
$$(d)\text{-}\lim_{n\rightarrow\infty}a_{n}b_{n}=0,$$
provided that $\inf_{n}\frac{n}{b_{n}}>0$.
An immediate consequence is the following result about the speed of
convergence to 0 of the general term of a convergent series of positive numbers.
Corollary 1.
If $\sum a_{n}$ is a convergent series of positive numbers, then for each
$k\in\mathbb{N},$
($\underline{D}_{k}$)
$$(\underline{d})\text{-}\lim_{n\rightarrow\infty}\left[n\left(\ln n\right)\left(\ln\ln n\right)\cdots(\underset{k\text{ times}}{\underbrace{\ln\ln\cdots\ln n}})a_{n}\right]=0.$$
The proof of Theorem 5 is based on two technical lemmas:
Lemma 4.
Suppose that $(c_{n})_{n}$ is a nonincreasing sequence of
positive numbers such that $\sum_{n=1}^{\infty}c_{n}=\infty$ and $S$ is a set
of positive integers with positive lower density. Then the series $\sum_{n\in S}c_{n}$ is also divergent.
Proof.
By our hypothesis there are positive integers $p$ and $N$ such that
$$\frac{\left|S\cap\{1,...,n\}\right|}{n}>\frac{1}{p}$$
whenever $n\geq N$. Then $\left|S\cap\{1,...,kp\}\right|>k$ for
every $k\geq N/p.$ Writing $S=\{n_{1}<n_{2}<n_{3}<\cdots\},$ this means $n_{k}\leq kp$ for every $k\geq N/p,$ so that (up to finitely many initial terms, which do not affect divergence)
$$\sum_{n\in S}c_{n}=\sum_{k=1}^{\infty}c_{n_{k}}\geq\sum_{k=1}^{\infty}c_{kp}=\frac{1}{p}\sum_{k=1}^{\infty}pc_{kp}\geq\frac{1}{p}\left[\left(c_{p}+\cdots+c_{2p-1}\right)+\left(c_{2p}+\cdots+c_{3p-1}\right)+\cdots\right]=\frac{1}{p}\sum_{k=p}^{\infty}c_{k}=\infty.$$
∎
Our second lemma shows that a subseries $\sum_{n\in S}\frac{1}{n}$ of the
harmonic series is divergent whenever $S$ is a set of positive integers with
positive upper density.
Lemma 5.
If $S$ is an infinite set of positive integers and
$(a_{n})_{n\in S}$ is a nonincreasing positive sequence such that $\sum_{n\in S}a_{n}<\infty$ and $\inf\left\{na_{n}:n\in S\right\}=\alpha>0,$ then $S$
has zero density.
Proof.
According to our hypotheses, the elements of $S$ can be counted as
$k_{1}<k_{2}<k_{3}<\cdots.$ Since
$$0<\frac{n}{k_{n}}=\frac{na_{k_{n}}}{k_{n}a_{k_{n}}}\leq\frac{1}{\alpha}na_{k_{n}},$$
and $(a_{k_{n}})_{n}$ is nonincreasing with $\sum_{n}a_{k_{n}}<\infty,$ we infer from Theorem 1 that $\lim_{n\rightarrow\infty}na_{k_{n}}=0,$ hence $\lim_{n\rightarrow\infty}\frac{n}{k_{n}}=0.$
Then, for $k_{p}\leq n<k_{p+1},$
$$\frac{\left|S\cap\{1,...,n\}\right|}{n}=\frac{p}{n}\leq\frac{p}{k_{p}}\rightarrow 0,$$
whence
$$d(S)=\lim_{n\rightarrow\infty}\frac{\left|S\cap\{1,...,n\}\right|}{n}=0.$$
∎
Proof of Theorem 5. For $\varepsilon>0$ arbitrarily
fixed we denote
$$S_{\varepsilon}=\left\{n:a_{n}b_{n}\geq\varepsilon\right\}.$$
Then
$$\infty>\sum\nolimits_{n\in S_{\varepsilon}}a_{n}\geq\varepsilon\sum\nolimits_{n\in S_{\varepsilon}}\frac{1}{b_{n}},$$
whence by Lemma 4 (applied to the nonincreasing sequence $c_{n}=1/b_{n}$) it follows that $S_{\varepsilon}$ has zero lower
density. Therefore $(\underline{d})$-$\lim_{n\rightarrow\infty}a_{n}b_{n}=0$.
When $\inf_{n}\frac{n}{b_{n}}=\alpha>0,$ then
$$\infty>\sum\nolimits_{n\in S_{\varepsilon}}\frac{1}{b_{n}}\geq\alpha\sum\nolimits_{n\in S_{\varepsilon}}\frac{1}{n},$$
so by Lemma 5 we infer that $S_{\varepsilon}$ has zero
density. In this case, $(d)$-$\lim_{n\rightarrow\infty}a_{n}b_{n}=0$. $\square$
4. Convergence associated to higher order densities
The convergence in lower density is very weak. A better way to formulate
higher order Šalát-Toma type criteria is to consider the convergence
in harmonic density. We will illustrate this idea by proving a non-monotonic
version of Remark 1.
The harmonic density $d_{h}$ is defined by the formula
$$d_{h}(A)=\lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1}^{n}\frac{\chi_{A}(k)}{k},$$
and the limit in harmonic density, $(d_{h})$-$\lim_{n\rightarrow\infty}a_{n}=\ell,$ means that each of the sets $\left\{n:\left|a_{n}-\ell\right|\geq\varepsilon\right\},$ $\varepsilon>0,$ has zero harmonic density. Since
$$d(A)=0\text{ implies }d_{h}(A)=0,$$
(see [16], Lemma 1, p. 241), it follows that the existence of limit in
density assures the existence of limit in harmonic density.
The harmonic density has a nice application to Benford’s law, which states
that in lists of numbers from many real-life sources of data the leading digit
is distributed in a specific, non-uniform way. See [8] for more details.
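The value $\log_{10}2\approx 0.30103$ predicted by Benford's law can be approached numerically: the sketch below (the helper name is ours) computes the weighted averages $\frac{1}{\ln n}\sum_{k\leq n,\,k\in A}\frac{1}{k}$ for the set $A$ of integers with leading digit $1$:

```python
import math

# Harmonic density of the Benford set A = {k : k starts with digit 1}:
# (1/ln n) * sum_{k<=n, k in A} 1/k approaches log10(2) ~ 0.30103,
# while the ordinary density of A does not exist.
def leads_with_one(k):
    while k >= 10:
        k //= 10
    return k == 1

N = 10**6
h = sum(1.0 / k for k in range(1, N + 1) if leads_with_one(k))
print(h / math.log(N), math.log10(2))
```

The convergence is slow (the error is still a few percent at $n=10^{6}$), reflecting the logarithmic weight in the definition.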
Theorem 6.
If $\sum a_{n}$ is a convergent positive series, then
$$(d_{h})\text{-}\lim_{n\rightarrow\infty}\left(n\ln n\right)a_{n}=0.$$
Proof.
We start by noticing the following analogue of Lemma 5: If
$(b_{n})_{n}$ is a positive sequence such that $(nb_{n})_{n}$ is decreasing
and
$$\inf\left(n\ln n\right)b_{n}=\alpha>0,$$
then every subset $S$ of $\mathbb{N}$ for which $\sum\nolimits_{n\in S}b_{n}<\infty$ has zero harmonic density.
To prove this assertion, it suffices to consider the case where $S$ is
infinite and to show that
($H$)
$$\lim_{n\rightarrow\infty}\left(\sum\nolimits_{k\in S\cap\left\{1,...,n\right\}}\frac{1}{k}\right)nb_{n}=0.$$
The details are very similar to those used in Lemma 5, and
thus they are omitted.
Having ($H$) at hand, the proof of Theorem 6 can be
completed by considering for each $\varepsilon>0$ the set
$$S_{\varepsilon}=\left\{n:\left(n\ln n\right)a_{n}\geq\varepsilon\right\}.$$
Since
$$\varepsilon\sum\nolimits_{n\in S_{\varepsilon}}\frac{1}{n\ln n}\leq\sum\nolimits_{n\in S_{\varepsilon}}a_{n}<\infty,$$
by the aforementioned analogue of Lemma 5 applied to
$b_{n}=1/\left(n\ln n\right)$ we infer that $S_{\varepsilon}$ has zero
harmonic density. Consequently $(d_{h})$-$\lim_{n\rightarrow\infty}\left(n\ln n\right)a_{n}=0,$ and the proof is done.
∎
An integral version of the previous theorem can be found in [24] and
[25].
One might think that the fulfilment of a sequence of conditions like
$(\underline{D}_{k})$ for all $k\in\mathbb{N}$ (or something similar, using other
series) is strong enough to force the convergence of a positive series $\sum a_{n}.$ That this is not the case was shown by Paul du Bois-Reymond [6]
(see also [21], Ch. IX, Section 41), who proved that for every sequence
of divergent positive series, each diverging essentially slower than the
previous one, it is possible to construct a series diverging slower than all
of them.
Under these circumstances the following problem seems of utmost interest:
Problem 2.
Find an algorithm to determine whether a positive series is convergent or not.
5. The relevance of the harmonic series
Surprisingly, the study of the nature of positive series is very close to that
of subseries of the harmonic series $\sum\frac{1}{n}.$
Lemma 6.
If $(a_{n})_{n}$ is an unbounded sequence of real numbers
belonging to $[1,\infty)$, then the series $\sum\frac{1}{a_{n}}$ and
$\sum\frac{1}{\lfloor a_{n}\rfloor}$ have the same nature.
Proof.
The convergence of the series $\sum\frac{1}{a_{n}}$ follows from the
convergence of the series $\sum\frac{1}{\lfloor a_{n}\rfloor},$ by the
comparison test. In fact, $a_{n}\geq\lfloor a_{n}\rfloor,$ whence $\frac{1}{\lfloor a_{n}\rfloor}\geq\frac{1}{a_{n}}.$
Conversely, if the series $\sum\frac{1}{a_{n}}$ is convergent, then so are the
series $\sum\frac{1}{\lfloor a_{n}\rfloor+1}$ and $\sum\frac{1}{\lfloor a_{n}\rfloor\left(\lfloor a_{n}\rfloor+1\right)}.$ This is a consequence
of the inequalities
$$\frac{1}{\lfloor a_{n}\rfloor\left(\lfloor a_{n}\rfloor+1\right)}\leq\frac{1}{\lfloor a_{n}\rfloor+1}\leq\frac{1}{a_{n}},$$
and the comparison test. Since
$$\frac{1}{\lfloor a_{n}\rfloor}=\frac{1}{\lfloor a_{n}\rfloor+1}+\frac{1}{\lfloor a_{n}\rfloor\left(\lfloor a_{n}\rfloor+1\right)},$$
we conclude that the series $\sum\frac{1}{\lfloor a_{n}\rfloor}$ is convergent too.
∎
By combining Lemma 5 and Lemma 6 we infer
the following result:
Corollary 2.
If $(a_{n})_{n}$ is a sequence of positive numbers whose
integer parts form a set of positive upper density, then the series $\sum\frac{1}{a_{n}}$ is divergent.
The converse of Corollary 2 is not true. A counterexample is
provided by the series $\sum_{p=\text{prime}}\frac{1}{p},$ of inverses of
prime numbers, which is divergent (see [3] or [10] for a
short argument). According to an old result due to Chebyshev, if
$\pi(n)=\left|\left\{p\leq n:p\text{ prime}\right\}\right|,$
then
$$\frac{7}{8}<\frac{\pi(n)}{n/\ln n}<\frac{9}{8}$$
for all sufficiently large $n,$ and thus the set of prime numbers has zero density.
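A quick numerical check of this (the sieve helper below is a standard sieve of Eratosthenes of our own): at $n=10^{6}$ the ratio $\pi(n)/(n/\ln n)$ is about $1.08$, inside the quoted bounds, and $\pi(n)/n$ is already small:

```python
import math

# Chebyshev-type check: pi(n) * ln(n) / n stays between 7/8 and 9/8
# at n = 10^6, consistent with the primes having zero natural density.
def prime_count(n):
    is_p = bytearray([1]) * (n + 1)
    is_p[0] = is_p[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if is_p[p]:
            is_p[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(is_p)

N = 10**6
pi = prime_count(N)
ratio = pi * math.log(N) / N
print(pi, ratio)  # pi(10^6) = 78498
```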
The following estimates of the $k$th prime number,
$$k\left(\ln k+\ln\ln k-1\right)\leq p_{k}\leq k\left(\ln k+\ln\ln k\right)\text{\quad for }k\geq 6,$$
which are made available by a recent paper of P. Dusart [9], show that
the speed of divergence of the series $\sum_{p=\text{prime}}\frac{1}{p}$ is
comparable with that of $\sum\frac{1}{k\left(\ln k+\ln\ln k\right)}.$
Lemma 6 suggests that the nature of positive series
$\sum\frac{1}{a_{n}}$ could be related to some combinatorial properties of the
sequence $(\lfloor a_{n}\rfloor)_{n}$ (of natural numbers).
Problem 3.
Given an increasing function $\varphi:\mathbb{N\rightarrow}(0,\infty)$ with
$\lim_{n\rightarrow\infty}\varphi(n)=\infty,$ we define the upper density of
weight $\varphi$ by the formula
$$\bar{d}_{\varphi}(A)=\underset{n\rightarrow\infty}{\lim\sup}\frac{\left|A\cap[%
1,n]\right|}{\varphi(n)}.$$
Does every subset $A\subset\mathbb{N}$ with $\bar{d}_{n/\ln n}(A)>0$ generate
a divergent subseries $\sum_{n\in A}\frac{1}{n}$ of the harmonic series?
What about the case of other weights
$$n/[\left(\ln n\right)\left(\ln\ln n\right)\cdots(\underset{k\text{
times}}{\underbrace{\ln\ln\cdots\ln n}})]?$$
This problem seems important in connection with the following long-standing
conjecture due to P. Erdös:
Conjecture 1.
$($P. Erdös$)$. If the sum of reciprocals of a set $A$ of integers
diverges, then that set contains arbitrarily long arithmetic progressions.
This conjecture is still open even if one only seeks a single progression of
length three. However, in the special case where the set $A$ has positive
upper density, a positive answer was provided by E. Szemerédi [29]
in 1975. More recently, B. Green and T. Tao [15] proved Erdös’ Conjecture
in the case where $A$ is the set of prime numbers, or a relatively dense
subset thereof.
Theorem 7.
Assuming the truth of Erdös’ conjecture, any unbounded sequence
$(a_{n})_{n}$ of positive numbers whose sum of reciprocals $\sum_{n}\frac{1}{a_{n}}$ is divergent must contain arbitrarily long $\varepsilon$-progressions, for any $\varepsilon>0$.
By an $\varepsilon$-progression of length $n$ we mean any string
$c_{1},...,c_{n}$ such that
$$\left|c_{k}-a-kr\right|<\varepsilon$$
for suitable $a,r\in\mathbb{R}$ and all $k=1,...,n.$
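The definition is easy to operationalize; the following small Python helper is ours and purely illustrative (the function name and the test strings are not from the text). Note that it checks the condition for one given pair $(a,r)$, while the definition allows any choice of $a,r\in\mathbb{R}$.

```python
def is_eps_progression(c, a, r, eps):
    """Check whether c_1, ..., c_n satisfies |c_k - a - k*r| < eps
    for the given reals a, r (indices are 1-based, as in the text)."""
    return all(abs(c_k - a - k * r) < eps for k, c_k in enumerate(c, start=1))

# An exact arithmetic progression is an eps-progression for every eps > 0:
assert is_eps_progression([101, 102, 103, 104, 105], a=100, r=1, eps=1e-9)

# A slightly perturbed progression still qualifies for a modest eps:
assert is_eps_progression([3.05, 5.02, 6.96, 9.01], a=1.0, r=2.0, eps=0.1)

# A non-arithmetic string fails for this particular choice of a and r:
assert not is_eps_progression([1, 2, 4], a=0, r=1, eps=0.5)
```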
The converse of Theorem 7 is not true. A counterexample is provided by
the convergent series $\sum_{n=1}^{\infty}\left(\frac{1}{10^{n}+1}+\cdots+\frac{1}{10^{n}+n}\right).$
It seems to us that what is relevant in the matter of convergence is not only
the existence of some progressions but their number. We believe not only
that the divergent subseries of the harmonic series have progressions of
arbitrary length, but also that they have a huge number of such progressions,
with arbitrarily large common differences. Notice that the counterexample above
contains only progressions of common difference 1 (or subprogressions of
them). Hardy and Littlewood’s famous paper [20] advanced the hypothesis
that the number of progressions of length $k$ is asymptotically of the form
$C_{k}n^{2}/\ln^{k}n$, for some constant $C_{k}$.
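As a crude numerical illustration of the abundance of progressions among the primes (plain Python; the cutoffs are arbitrary and no attempt is made to estimate the constant $C_{3}$):

```python
def primes_upto(n):
    """All primes <= n via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

def count_3aps(n):
    """Number of triples p < q < s of primes <= n with q - p = s - q."""
    ps = primes_upto(n)
    pset = set(ps)
    count = 0
    for j, q in enumerate(ps):        # q is the middle term of the triple
        for p in ps[:j]:
            if 2 * q - p <= n and 2 * q - p in pset:
                count += 1
    return count

c1, c2 = count_3aps(1000), count_3aps(3000)
assert c1 > 0 and c2 > 2 * c1   # rapid growth, consistent with n^2 / ln^3 n
```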
References
[1]
N. H. Abel, Note sur le memoire de Mr. L. Olivier No. 4 du
second tome de ce journal, ayant pour titre ’Remarques sur les series infinies
et leur convergence’, Journal für die reine und angewandte
Mathematik, 3 (1828), 79-81. Available from the Göttinger
Digitalisierungszentrum at http://gdz.sub.uni-goettingen.de/no_cache/dms/load/toc/?IDDOC=238618
[2]
S. A. Ali, The $m$th Ratio Test: New Convergence Tests for
Series, The American Mathematical Monthly 115 (2008), No. 6, 514-524.
[3]
T.M. Apostol, What Is the Most Surprising Result in
Mathematics? Part II, Math Horizons, 4 (1997), No. 3, 26-31.
[4]
T. M. Apostol, An Elementary View of Euler’s Summation
Formula, The American Mathematical Monthly, 106 (1999), No. 5, 409-418.
[5]
J. Marshall Ash, Neither a Worst Convergent Series nor a Best
Divergent Series Exists, The College Mathematics Journal 28
(1997), No. 4, 296-297.
[6]
P. Du Bois-Reymond, Eine neue Theorie der Convergenz und
Divergenz von Reihen mit positiven Gliedern, Journal für die reine
und angewandte Mathematik 76 (1873), 61-91.
[7]
M. Burgin and O. Duman, Statistical Convergence and Convergence
in Statistics. Preprint, 2006, available at arxiv.org/pdf/math/0612179.
[8]
P. Diaconis, G.H. Hardy and probability???, Bull. London
Math. Soc. 34 (2002), 385-402.
[9]
P. Dusart, The $k$th prime is greater than $k\left(\ln k+\ln\ln k-1\right)$ for $k\geq 2$, Mathematics of Computation
68 (1999), 411–415.
[10]
Ch. V. Eynden, Proofs that $\sum 1/p$ diverges, Amer.
Math. Monthly 87 (1980), 394-397.
[11]
P. Erdös and P. Turán, On some sequences of integers,
J. London Math. Soc. 11 (1936), 261–264.
[12]
H. Fast, Sur la convergence statistique, Colloq. Math.
2 (1951) 241-244.
[13]
H. Furstenberg, Recurrence in Ergodic Theory and
Combinatorial Number Theory, Princeton University Press, Princeton, New
Jersey, 1981.
[14]
M. Goar, Olivier and Abel on Series Convergence: An Episode
from Early 19th Century Analysis, Mathematics Magazine, 72
(1999), No. 5, 347-355.
[15]
B. Green and T. Tao, The primes contain arbitrarily
long arithmetic progressions, Annals of Math.,
167 (2008), No. 2, 481-547.
[16]
H. Halberstam and K. F. Roth, Sequences, second Edition,
Springer-Verlag, New York, 1983.
[17]
G. H. Hardy, Orders of infinity, Cambridge Univ. Press, 1910.
[18]
G. H. Hardy, A Course of Pure Mathematics, 3rd Ed.,
Cambridge Univ. Press, 1921.
[19]
G.H. Hardy and J.E. Littlewood, Contributions to the
arithmetic theory of series, Proc. London Math. Soc. (2) 11
(1912-1913), 411–478; Collected Papers of G.H. Hardy, Vol. 6. Oxford:
Clarendon Press, 1974, pp. 428–495.
[20]
G.H. Hardy and J.E. Littlewood, Some problems of
“partitio numerorum”; III: On the expression
of a number as a sum of primes, Acta Math. 44 (1923), 1 - 70.
[21]
K. Knopp, Theory and Application of Infinite Series,
Blackie and Son Ltd., London, UK, Reprinted, 1954.
[22]
B. O. Koopman and J. von Neumann, Dynamical systems of
continuous spectra, Proc. Natl. Acad. Sci. U.S.A. 18
(1932), 255-263.
[23]
E. Liflyand, S. Tikhonov and M. Zeltser, Extending tests for
convergence of number series, J. Math. Anal. Appl. 377 (2011), No. 1, 194-206.
[24]
C. P. Niculescu and F. Popovici, A note on the behavior of
integrable functions at infinity, J. Math. Anal. Appl. 381
(2011), 742-747.
[25]
C. P. Niculescu and F. Popovici, The asymptotic behavior of
integrable functions, submitted.
[26]
L. Olivier, Remarques sur les series infinies et leur
convergence, Journal für die reine und angewandte Mathematik,
2 (1827), 31-44.
[27]
T. Šalát and V. Toma, A classical Olivier’s theorem and
statistical convergence, Annales Math. Blaise Pascal 10
(2003), 305-313.
[28]
H. Steinhaus, Sur la convergence ordinaire et la convergence
asymptotique, Colloq. Math. 2 (1951) 73-74.
[29]
E. Szemerédi, On sets of integers containing no $k$ elements
in arithmetic progression, Acta Arith. 27 (1975), 299-345.
[30]
P. Terán, A reduction principle for obtaining Tauberian
theorems for statistical convergence in metric spaces, Bull. Belg. Math.
Soc. 12 (2005), 295–299.
[31]
A. Zygmund, Trigonometric Series, Cambridge Univ. Press,
Cambridge, 1979. |
The geometry of the Sasaki metric on the sphere bundle of Euclidean Atiyah vector bundles
Mohamed Boucetta
Université Cadi-Ayyad
Faculté des sciences et techniques
BP 549 Marrakech Maroc
e-mail: m.boucetta@uca.ac.ma
Hasna Essoufi
Université Cadi-Ayyad
Faculté des sciences et techniques
BP 549 Marrakech Maroc
e-mail: essoufi.hasna@gmail.com
Abstract
Let $(M,\langle\;,\;\rangle_{TM})$ be a Riemannian manifold. It is well-known that the Sasaki metric on $TM$ is very rigid but it has nice properties when restricted to $T^{(r)}M=\{u\in TM,|u|=r\}$. In this paper, we consider a general situation where we replace $TM$ by a vector bundle $E\longrightarrow M$ endowed with a Euclidean product $\langle\;,\;\rangle_{E}$ and a connection $\nabla^{E}$ which preserves $\langle\;,\;\rangle_{E}$. We define the Sasaki metric on $E$ and we consider its restriction $h$ to $E^{(r)}=\{a\in E,\langle a,a\rangle_{E}=r^{2}\}$. We study the Riemannian geometry of $(E^{(r)},h)$ generalizing many results first obtained on $T^{(r)}M$ and establishing new ones.
We apply the results obtained in this general setting to the class of Euclidean Atiyah vector bundles introduced by the authors in [boucettaessoufi]. Finally, we prove that any unimodular three dimensional Lie group $G$ carries a left invariant Riemannian metric such that $(T^{(1)}G,h)$ has a positive scalar curvature.
Journal: Journal of Differential Geometry and its Applications
Keywords: Sasaki metric, sphere bundles, Atiyah Lie algebroids
1 Introduction
Throughout this paper, a Euclidean vector bundle is a vector bundle $\pi_{E}:E\longrightarrow M$ endowed with $\langle\;,\;\rangle_{E}\in\Gamma(E^{*}\otimes E^{*})$ which is symmetric and positive definite in restriction to each fiber.
Let $(M,\langle\;,\;\rangle_{TM})$ be a Riemannian manifold of dimension $n$, $\pi_{E}:E\longrightarrow M$ a vector bundle of rank $m$ endowed with a Euclidean product $\langle\;,\;\rangle_{E}$ and a linear connection $\nabla^{E}$ which preserves $\langle\;,\;\rangle_{E}$. Denote by $K:TE\longrightarrow E$ the connection map of $\nabla^{E}$ locally given by
$$K\left(\sum_{i=1}^{n}b_{i}\partial_{x_{i}}+\sum_{j=1}^{m}Z_{j}\partial_{\mu_{j%
}}\right)=\sum_{l=1}^{m}\left(Z_{l}+\sum_{i=1}^{n}\sum_{j=1}^{m}b_{i}\mu_{j}%
\Gamma_{ij}^{l}\right)s_{l},$$
where $(x_{1},\ldots,x_{n})$ is a system of local coordinates, $(s_{1},\ldots,s_{m})$ is a basis of local sections of $E$, $(x_{i},\mu_{j})$ the associated system of coordinates on $E$ and $\nabla_{\partial_{x_{i}}}^{E}s_{j}=\sum_{l=1}^{m}\Gamma_{ij}^{l}s_{l}$. Then
$$TE=\ker d\pi_{E}\oplus\ker K.$$
The Sasaki metric $g_{s}$ on $E$ is the Riemannian metric given by
$$g_{s}(A,B)=\langle d\pi_{E}(A),d\pi_{E}(B)\rangle_{TM}+\langle K(A),K(B)%
\rangle_{E},\quad A,B\in T_{a}E.$$
For any $r>0$, the sphere bundle of radius $r$ is the hypersurface $E^{(r)}=\left\{a\in E,\langle a,a\rangle_{E}=r^{2}\right\}$.
There are two classes of such Euclidean vector bundles naturally associated to a Riemannian manifold.
We refer to the first one as the classical case. It is the case where $E=TM$, $\langle\;,\;\rangle_{E}=\langle\;,\;\rangle_{TM}$ and $\nabla^{E}$ is the Levi-Civita connection of $(M,\langle\;,\;\rangle_{TM})$.
The second case will be called the Euclidean Atiyah vector bundle associated to a Riemannian manifold. It was introduced by the authors in [boucettaessoufi] and is defined as follows.
Let $(M,\langle\;,\;\rangle_{TM})$ be a Riemannian manifold, $\mathrm{so}(TM)=\bigcup_{x\in M}\mathrm{so}(T_{x}M)$ where $\mathrm{so}(T_{x}M)$ is the vector space of skew-symmetric endomorphisms of $T_{x}M$, and $k>0$. The Levi-Civita connection $\nabla^{M}$ of $(M,\langle\;,\;\rangle_{TM})$ defines a connection on the vector bundle $\mathrm{so}(TM)$, which we will denote in the same way; it is given, for any $X\in\Gamma(TM)$ and $F\in\Gamma(\mathrm{so}(TM))$, by
$$\nabla^{M}_{X}F(Y)=\nabla^{M}_{X}(F(Y))-F(\nabla_{X}^{M}Y).$$
The Atiyah Euclidean vector bundle (the origin of this vector bundle and the justification of its name can be found in [boucettaessoufi]) associated to $(M,\langle\;,\;\rangle_{TM},k)$ is the triple $(E(M,k),\langle\;,\;\rangle_{k},\nabla^{E})$ where
$E(M,k)=TM\oplus\mathrm{so}(TM)\longrightarrow M$, $\langle\;,\;\rangle_{k}$ and $\nabla^{E}$ are a Euclidean product and a connection on $E(M,k)$ given, for any $X,Y\in\Gamma(TM)$ and $F,G\in\Gamma(\mathrm{so}(TM)),$ by
$$\displaystyle\nabla^{E}_{X}Y$$
$$\displaystyle=$$
$$\displaystyle\nabla^{M}_{X}Y+H_{X}Y,\;\nabla^{E}_{X}F=H_{X}F+\nabla^{M}_{X}F,$$
$$\displaystyle\langle X+F,Y+G\rangle_{k}$$
$$\displaystyle=$$
$$\displaystyle\langle X,Y\rangle_{TM}-k\;\mathrm{tr}(F\circ G),$$
where $R^{M}$ is the curvature tensor of $\nabla^{M}$ given by $R^{M}(X,Y)=\nabla^{M}_{[X,Y]}-\left(\nabla_{X}^{M}\nabla_{Y}^{M}-\nabla_{Y}^{M%
}\nabla_{X}^{M}\right),$
$$H_{X}Y=-\frac{1}{2}R^{M}(X,Y)\;\quad\mbox{and}\quad\;\langle H_{X}F,Y\rangle_{%
TM}=-\frac{1}{2}k\;\mathrm{tr}(F\circ R^{M}(X,Y)).$$
(1)
The connection $\nabla^{E}$ preserves $\langle\;,\;\rangle_{k}$ and its curvature $R^{\nabla^{E}}$ plays a key role in the study of $E^{(r)}(M,k)$ endowed with the Sasaki metric. Since $R^{\nabla^{E}}$ depends only on $(M,\langle\;,\;\rangle_{TM},k)$, we will call it the supra-curvature of $(M,\langle\;,\;\rangle_{TM},k)$.
This paper has two goals:
1.
The study of the Riemannian geometry of $E^{(r)}$ endowed with the Riemannian metric $h$, the restriction of $g_{s}$, in order to generalize all the results obtained in the classical case. We refer to [bo, kowalski] for a survey on the geometry of $(T^{(r)}M,h)$.
2.
The application of the results obtained in the general case to the Euclidean Atiyah vector bundle $E^{(r)}(M,k)$ endowed with the Sasaki metric. We will show that the geometry of $(E^{(r)}(M,k),h)$ is remarkably rich, and in doing so we open new horizons for further exploration.
Let us give now the organization of this paper. In Section 2, we give the different curvatures of $(E^{(r)},h)$. In Section 3 we derive sufficient conditions for which $(E^{(r)},h)$ has either nonnegative sectional curvature, positive Ricci curvature, positive or constant scalar curvature. In Section 4, we first compute the supra-curvature of different classes of Riemannian manifolds and we characterize those with vanishing supra-curvature (see Theorem 4.1). Then we perform a detailed study of $(E^{(r)}(M,k),h)$ having in mind the results obtained in Section 3.
In Section 5, we prove that any unimodular three dimensional Lie group $G$ carries a left invariant Riemannian metric such that $(T^{(1)}G,h)$ has a positive scalar curvature.
2 Sectional curvature, Ricci curvature and scalar curvature of the Sasaki metric on sphere bundles
Throughout this section, $(M,\langle\;,\;\rangle_{TM})$ is an $n$-dimensional Riemannian manifold and $\pi_{E}:E\longrightarrow M$ a vector bundle of rank $m$ endowed with a Euclidean product $\langle\;,\;\rangle_{E}$ and a linear connection $\nabla^{E}$ for which $\langle\;,\;\rangle_{E}$ is parallel. We shall denote by $\nabla^{M}$ the Levi-Civita connection of $(M,\langle\;,\;\rangle_{TM})$, and by $R^{M}$ and $R^{\nabla^{E}}$ the curvature tensors of $\nabla^{M}$ and $\nabla^{E}$, respectively. We use the convention
$$R^{M}(X,Y)=\nabla^{M}_{[X,Y]}-\left(\nabla_{X}^{M}\nabla_{Y}^{M}-\nabla_{Y}^{M%
}\nabla_{X}^{M}\right)\quad\mbox{and}\quad R^{\nabla^{E}}(X,Y)=\nabla^{E}_{[X,%
Y]}-\left(\nabla^{E}_{X}\nabla^{E}_{Y}-\nabla^{E}_{Y}\nabla^{E}_{X}\right).$$
The derivative of $R^{\nabla^{E}}$ with respect to $\nabla^{M}$ and $\nabla^{E}$ is the tensor field $\nabla_{X}^{M,E}(R^{\nabla^{E}})$ given, for any $X,Y,Z\in\Gamma(TM)$, $\alpha\in\Gamma(E)$, by
$$\nabla_{X}^{M,E}(R^{\nabla^{E}})(Y,Z,\alpha)=\nabla_{X}^{E}(R^{\nabla^{E}}(Y,Z%
)\alpha)-R^{\nabla^{E}}(\nabla_{X}^{M}Y,Z)\alpha-R^{\nabla^{E}}(Y,\nabla_{X}^{%
M}Z)\alpha-R^{\nabla^{E}}(Y,Z)\nabla_{X}^{E}\alpha.$$
(2)
Let $K^{M}$, ${\mathrm{ric}}^{M}$ and $s^{M}$ denote the sectional curvature, the Ricci curvature and the scalar curvature of $(M,\langle\;,\;\rangle_{TM})$, respectively. An element of $E$ will be denoted by $(x,a)$ with $x\in M$ and $a\in E_{x}$.
We recall the definition of the Sasaki metric $g_{s}$ on $E$, consider its restriction $h$ to the sphere bundle $E^{(r)}=\left\{a\in E,\langle a,a\rangle_{E}=r^{2}\right\}$ $(r>0)$, and give the expressions of the different curvatures of $(E^{(r)},h)$.
For any $(x,a)\in E$ there exists an injective linear map $h^{(x,a)}:T_{x}M\longrightarrow T_{(x,a)}E$ given, in a coordinate system $(x_{i},\beta_{j})$ on $E$ associated to coordinates $(x_{i})_{i=1}^{n}$ on $M$ and a local trivialization $(s_{1},\ldots,s_{m})$ of $E$, by
$$h^{(x,a)}(u)=\sum_{i=1}^{n}u_{i}\partial_{x_{i}}-\sum_{k=1}^{m}\left(\sum_{i=1%
}^{n}\sum_{j=1}^{m}u_{i}\beta_{j}\Gamma_{ij}^{k}\right)\partial_{\beta_{k}},$$
where
$$u=\sum_{i=1}^{n}u_{i}\partial_{x_{i}},\;\nabla^{E}_{\partial_{x_{i}}}s_{j}=%
\sum_{k=1}^{m}\Gamma_{ij}^{k}s_{k}\quad\mbox{and}\quad a=\sum_{i=1}^{m}\beta_{%
i}s_{i}.$$
Moreover, if $\mathcal{H}_{(x,a)}E$ denotes the image of $h^{(x,a)}$ then
$$TE={\mathcal{V}}E\oplus\mathcal{H}E,$$
where ${\mathcal{V}}E=\ker d\pi_{E}$.
For any $\alpha\in\Gamma(E)$ and $X\in\Gamma(TM)$, we denote by $\alpha^{v}\in\Gamma(TE)$ and $X^{h}\in\Gamma(TE)$ the vertical and horizontal vector fields associated to $\alpha$ and $X$. The flow of $\alpha^{v}$ is given by $\Phi^{\alpha}(t,(x,a))=(x,a+t\alpha(x))$ and $X^{h}$ is given by $X^{h}(x,a)=h^{(x,a)}(X(x))$.
The Sasaki metric $g_{s}$ on $E$ is determined by the formulas
$$\displaystyle g_{s}(X^{h},Y^{h})$$
$$\displaystyle=$$
$$\displaystyle\langle X,Y\rangle_{TM}\circ\pi_{E},\;g_{s}(\alpha^{v},\beta^{v})%
=\langle\alpha,\beta\rangle_{E}\circ\pi_{E}\quad\mbox{and}\quad g_{s}(X^{h},%
\alpha^{v})=0,$$
for all $X,Y\in\Gamma(TM)$ and $\alpha,\beta\in\Gamma(E)$.
For any $X\in\Gamma(TM)$ and $\alpha\in\Gamma(E)$, $X^{h}$ is tangent to $E^{(r)}$, whereas $\alpha^{v}$ need not be. So we define
the tangential lift of $\alpha$ by
$$\alpha^{t}(x,a)=\alpha^{v}(x,a)-\langle\alpha,a\rangle_{E}\frac{U(x,a)}{r^{2}}%
,\quad(x,a)\in E,$$
where $U$ is the vertical vector field on $E$ whose flow is given by $\Phi(t,(x,a))=(x,e^{t}a)$.
We have
$$T_{(x,a)}E^{(r)}=\left\{X^{h}+\alpha^{t}\;/\;X\in T_{x}M\;\text{and}\;\alpha%
\in E_{x}\;\text{with}\;\langle\alpha,a\rangle_{E}=0\right\}.$$
The restriction $h$ of $g_{s}$ to $E^{(r)}$ is given by
$$\displaystyle h(X^{h},Y^{h})$$
$$\displaystyle=$$
$$\displaystyle\langle X,Y\rangle_{TM}\circ\pi_{E},\quad h(X^{h},\alpha^{t})=0,$$
$$\displaystyle h(\alpha^{t},\beta^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle\langle\alpha,\beta\rangle_{E}-\frac{\langle\alpha,a\rangle_{E}%
\langle\beta,a\rangle_{E}}{r^{2}}=\langle\bar{\alpha},\bar{\beta}\rangle_{E},$$
where $\alpha,\beta\in\Gamma(E),\;X,Y\in\Gamma(TM)$ and $\bar{\alpha}=\alpha-\frac{\langle\alpha,a\rangle_{E}}{r^{2}}a$.
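The last equality is a simple algebraic identity using $\langle a,a\rangle_{E}=r^{2}$. A quick numerical check in plain Python with random fiber data (illustrative only; the variable names are ours):

```python
import random

random.seed(0)
m, r = 5, 2.0
dot = lambda u, v: sum(x * y for x, y in zip(u, v))

# a random point a of the fiber sphere: <a, a> = r**2
a = [random.gauss(0, 1) for _ in range(m)]
scale = r / dot(a, a) ** 0.5
a = [scale * x for x in a]

alpha = [random.gauss(0, 1) for _ in range(m)]
beta = [random.gauss(0, 1) for _ in range(m)]

# bar(u) = u - (<u, a>/r**2) a, the component of u orthogonal to a
bar = lambda u: [ui - dot(u, a) / r ** 2 * ai for ui, ai in zip(u, a)]

lhs = dot(alpha, beta) - dot(alpha, a) * dot(beta, a) / r ** 2
rhs = dot(bar(alpha), bar(beta))
assert abs(lhs - rhs) < 1e-12
```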
The following proposition can be established in the same way as in the classical case where $E=TM$, $\langle\;,\;\rangle_{E}=\langle\;,\;\rangle_{TM}$ and $\nabla^{E}=\nabla^{M}$.
Proposition 2.1.
We have
$$[\alpha^{t},\beta^{t}]=\frac{\langle\alpha,a\rangle_{E}}{r^{2}}\beta^{t}-\frac%
{\langle\beta,a\rangle_{E}}{r^{2}}\alpha^{t},\;[X^{h},\alpha^{t}]=\left(\nabla%
^{E}_{X}\alpha\right)^{t}\quad\mbox{and}\quad[X^{h},Y^{h}](x,a)=[X,Y]^{h}(x,a)%
+(R^{\nabla^{E}}(X,Y)a)^{t},$$
where $R^{\nabla^{E}}$ is the curvature of ${\nabla^{E}}$ given by $R^{\nabla^{E}}(X,Y)=\nabla^{E}_{[X,Y]}-\left(\nabla^{E}_{X}\nabla^{E}_{Y}-%
\nabla^{E}_{Y}\nabla^{E}_{X}\right)$.
To compute the Riemannian invariants of $(E^{(r)},h)$ (Levi-Civita connection and the different curvatures), we will use the following facts:
$(i)$
The projection $\pi_{E}:(E^{(r)},h)\longrightarrow(M,\langle\;,\;\rangle_{TM})$ is a Riemannian submersion with totally geodesic fibers and hence the different Riemannian invariants can be computed by using O’Neill’s formulas (see [bes, chap. 9]). Here the O’Neill shape tensor, say $B$, is given by the expression of $[X^{h},Y^{h}]$. So, by virtue of Proposition 2.1, we get
$$B_{X^{h}}Y^{h}((x,a))=\frac{1}{2}{\mathcal{V}}[X^{h},Y^{h}](x,a)=\frac{1}{2}(R%
^{\nabla^{E}}(X,Y)a)^{v}=\frac{1}{2}(R^{\nabla^{E}}(X,Y)a)^{t},$$
(3)
$B_{\alpha^{t}}=0$ and $h(B_{X^{h}}\alpha^{t},Y^{h})=-h(B_{X^{h}}Y^{h},\alpha^{t})$ for any $\alpha\in\Gamma(E)$, $X,Y\in\Gamma(TM)$ and $(x,a)\in E$.
$(ii)$
O’Neill’s formulas involve the Riemannian invariants of $(M,\langle\;,\;\rangle_{TM})$, the tensor $B$ and the Riemannian invariants of the restriction of $h$ to the fibers.
Based on these facts,
the Levi-Civita connection $\overline{\nabla}$ of $(E^{(r)},h)$ is given by
$$\displaystyle\overline{\nabla}_{X^{h}}Y^{h}(x,a)$$
$$\displaystyle=$$
$$\displaystyle(\nabla_{X}^{M}Y)^{h}(x,a)+\frac{1}{2}(R^{\nabla^{E}}(X,Y)a)^{t},%
\;\overline{\nabla}_{X^{h}}\alpha^{t}=B_{X^{h}}\alpha^{t}+(\nabla^{E}_{X}%
\alpha)^{t},\;\overline{\nabla}_{\alpha^{t}}X^{h}=B_{X^{h}}\alpha^{t},$$
$$\displaystyle(\overline{\nabla}_{\alpha^{t}}\beta^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle-\frac{\langle\beta,a\rangle}{r^{2}}\alpha^{t}\quad\mbox{and}%
\quad h(B_{X^{h}}\alpha^{t},Y^{h})=-h(B_{X^{h}}Y^{h},\alpha^{t}),$$
(4)
$X,Y\in\Gamma(TM)$, $\alpha,\beta\in\Gamma(E)$ and $(x,a)\in E$.
Note that if $(X_{i})_{i=1}^{n}$ is a local orthonormal frame of $TM$, $X\in\Gamma(TM)$ and $\alpha\in\Gamma(E)$, then
$$B_{X^{h}}\alpha^{t}=\frac{1}{2}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(X,X_{i})%
\alpha,a\rangle_{E}X_{i}^{h}.$$
(5)
Remark 1.
When $E=TM$, $\langle\;,\;\rangle_{E}=\langle\;,\;\rangle_{TM}$ and $\nabla^{E}=\nabla^{M}$ we have a simple expression of $B_{X^{h}}\alpha^{t}$ thanks to the symmetries of $R^{\nabla^{E}}=R^{M}$, namely,
$$(B_{X^{h}}Y^{t})(x,a)=\frac{1}{2}R^{M}(Y(x),a)X(x),\;\quad X,Y\in\Gamma(TM).$$
(6)
A direct computation shows that the curvature tensor, the Ricci curvature and the scalar curvature of the fibers are given by
$$\displaystyle R^{v}(\alpha^{t},\beta^{t})\gamma^{t}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{r^{2}}\left(h(\alpha^{t},\gamma^{t})\beta^{t}-h(\beta^{t%
},\gamma^{t})\alpha^{t}\right),\;{\mathrm{ric}}^{v}(\alpha^{t},\beta^{t})=%
\frac{1}{r^{2}}(m-2)h(\alpha^{t},\beta^{t})\quad\mbox{and}\quad s^{v}=\frac{1}%
{r^{2}}(m-1)(m-2).$$
In order to compute the different curvatures of $(E^{(r)},h)$, we need the following formulas.
Proposition 2.2.
For any $X,Y,Z\in\Gamma(TM)$, $\alpha,\beta\in\Gamma(E)$ and $(x,a)\in E$, we have
$$h((\overline{\nabla}_{X^{h}}B)_{Y^{h}}Z^{h},\alpha^{t})(x,a)=-\frac{1}{2}%
\langle\nabla_{X}^{M,E}(R^{\nabla^{E}})(Y,Z,\alpha),a\rangle_{E}.$$
Moreover, if
$\langle\alpha(x),a\rangle_{E}=\langle\beta(x),a\rangle_{E}=0$ then
$$\displaystyle h((\overline{\nabla}_{\alpha^{t}}B)_{X^{h}}Y^{h},\beta^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\langle R^{\nabla^{E}}(X,Y)\alpha,\beta\rangle_{E}(x)+%
h(B_{Y^{h}}\alpha^{t},B_{X^{h}}\beta^{t})(x,a)-h(B_{X^{h}}\alpha^{t},B_{Y^{h}}%
\beta^{t})(x,a).$$
Proof.
Suppose first that $\langle\alpha(x),a\rangle_{E}=\langle\beta(x),a\rangle_{E}=0$. We have
$$\displaystyle h((\overline{\nabla}_{\alpha^{t}}B)_{X^{h}}Y^{h},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle h(\overline{\nabla}_{\alpha^{t}}(B_{X_{h}}Y^{h}),\beta^{t})-h(B_%
{\overline{\nabla}_{\alpha^{t}}X^{h}}Y^{h},\beta^{t})-h(B_{X^{h}}\overline{%
\nabla}_{\alpha^{t}}Y^{h},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle\alpha^{t}.h(B_{X_{h}}Y^{h},\beta^{t})-h(B_{X_{h}}Y^{h},\overline%
{\nabla}_{\alpha^{t}}\beta^{t})+h(B_{Y^{h}}\overline{\nabla}_{\alpha^{t}}X^{h}%
,\beta^{t})+h(\overline{\nabla}_{\alpha^{t}}Y^{h},B_{X^{h}}\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle\alpha^{t}.h(B_{X_{h}}Y^{h},\beta^{t})-h(B_{X_{h}}Y^{h},\overline%
{\nabla}_{\alpha^{t}}\beta^{t})+h(B_{Y^{h}}\alpha^{t},B_{X^{h}}\beta^{t})-h(B_%
{X^{h}}\alpha^{t},B_{Y^{h}}\beta^{t}).$$
From (4) and the definition of $\alpha^{t}$ we get
$$\overline{\nabla}_{\alpha^{t}}\beta^{t}(x,a)=0\quad\mbox{and}\quad(\alpha^{t}.%
h(B_{X_{h}}Y^{h},\beta^{t}))(x,a)=(\alpha^{v}.h(B_{X_{h}}Y^{h},\beta^{t}))(x,a).$$
But
$$\displaystyle\alpha^{v}.h(B_{X_{h}}Y^{h},\beta^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle\frac{d}{dt}_{|t=0}h(B_{X_{h}}Y^{h}(a+t\alpha),\beta^{t}(a+t%
\alpha))$$
$$\displaystyle=$$
$$\displaystyle\frac{d}{dt}_{|t=0}\left[h(B_{X_{h}}Y^{h}(a+t\alpha),\beta^{v}(a+%
t\alpha))-\frac{1}{r^{2}}\langle\beta,a+t\alpha\rangle_{E}h(B_{X_{h}}Y^{h}(a+t%
\alpha),U(a+t\alpha))\right]$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\frac{d}{dt}_{|t=0}\langle R^{\nabla^{E}}(X,Y)(a+t%
\alpha),\beta\rangle_{E}(x)$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\langle R^{\nabla^{E}}(X,Y)\alpha,\beta\rangle_{E}(x),$$
which establishes the second formula.
On the other hand,
$$\displaystyle h((\overline{\nabla}_{X^{h}}B)_{Y^{h}}Z^{h},\alpha^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle h(\overline{\nabla}_{X^{h}}(B_{Y^{h}}Z^{h}),\alpha^{t})(x,a)-h(B%
_{\overline{\nabla}_{X^{h}}Y^{h}}Z^{h},\alpha^{t})(x,a)-h(B_{Y^{h}}\overline{%
\nabla}_{X^{h}}Z^{h},\alpha^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle X^{h}.h(B_{Y^{h}}Z^{h},\alpha^{t})(x,a)-\frac{1}{2}\langle R^{%
\nabla^{E}}(Y,Z)a,\nabla^{E}_{X}\alpha\rangle_{E}-\frac{1}{2}\langle R^{\nabla%
^{E}}(\nabla^{M}_{X}Y,Z)a,\alpha\rangle_{E}$$
$$\displaystyle-\frac{1}{2}\langle R^{\nabla^{E}}(Y,\nabla^{M}_{X}Z)a,\alpha%
\rangle_{E}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}\langle R^{\nabla^{E}}(Y,Z)\nabla^{E}_{X}\alpha+R^{%
\nabla^{E}}(\nabla^{M}_{X}Y,Z)\alpha+R^{\nabla^{E}}(Y,\nabla^{M}_{X}Z)\alpha,a%
\rangle_{E}+X^{h}.h(B_{Y^{h}}Z^{h},\alpha^{t})(x,a).$$
The key point is that if $\phi_{t}^{X}(x)$ is the integral curve of $X$ passing through $x$ then the integral curve of $X^{h}$ at $a$ is the $\nabla^{E}$-parallel section $a^{t}$ along $\phi_{t}^{X}(x)$ with $a^{0}=a$. So
$$\displaystyle X^{h}.h(B_{Y^{h}}Z^{h},\alpha^{t})(x,a)$$
$$\displaystyle=$$
$$\displaystyle\frac{d}{dt}_{|t=0}h(B_{Y^{h}}Z^{h},\alpha^{t})(a^{t})$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}\frac{d}{dt}_{|t=0}\langle R^{\nabla^{E}}(Y(\phi_{t}^%
{X}(x)),Z(\phi_{t}^{X}(x)))\alpha(\phi_{t}^{X}(x)),a^{t}\rangle_{E}$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}\langle\nabla_{X}^{E}(R^{\nabla^{E}}(Y,Z)\alpha)(x),a%
\rangle_{E}.$$
This completes the proof.
∎
Proposition 2.3.
Let $P\subset T_{(x,a)}E^{(r)}$ be a plane. Then:
1.
If $\mathrm{rank}(E)=2$ then there exists a basis $\{X^{h}+\alpha^{t},Y^{h}\}$ of $P$ satisfying
$$\alpha\in E_{x},\;X,Y\in T_{x}M,\;|X|^{2}+|\alpha|^{2}=|Y|^{2}=1,\;\langle X,Y%
\rangle_{TM}=0\quad\mbox{and}\quad\langle\alpha,a\rangle_{E}=0.$$
The sectional curvature of $(E^{(r)},h)$ at $P$ is given by
$$\displaystyle K(P)$$
$$\displaystyle=$$
$$\displaystyle\langle R^{M}(X,Y)X,Y\rangle_{TM}-\frac{3}{4}|R^{\nabla^{E}}(X,Y)%
a|^{2}+\frac{1}{4}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(Y,X_{i})\alpha,a\rangle%
_{E}^{2}$$
$$\displaystyle+\langle\nabla_{Y}^{M,E}(R^{\nabla^{E}})(X,Y,\alpha),a\rangle_{E}.$$
2.
If $\mathrm{rank}(E)\geq 3$ then there exists a basis $\{X^{h}+\alpha^{t},Y^{h}+\beta^{t}\}$ of $P$ satisfying
$$\alpha,\beta\in E_{x},\;X,Y\in T_{x}M,\;|X|^{2}+|\alpha|^{2}=|Y|^{2}+|\beta|^{%
2}=1,\;\langle X,Y\rangle_{TM}=\langle\alpha,\beta\rangle_{E}=0\quad\mbox{and}%
\quad\langle\alpha,a\rangle_{E}=\langle\beta,a\rangle_{E}=0.$$
The sectional curvature of $(E^{(r)},h)$ at $P$ is given by,
$$\displaystyle K(P)$$
$$\displaystyle=$$
$$\displaystyle\langle R^{M}(X,Y)X,Y\rangle_{TM}+\frac{1}{r^{2}}|\alpha|^{2}|%
\beta|^{2}+3\langle R^{\nabla^{E}}(X,Y)\alpha,\beta\rangle_{E}-\frac{3}{4}%
\langle R^{\nabla^{E}}(X,Y)a,R^{\nabla^{E}}(X,Y)a\rangle_{E}$$
$$\displaystyle+\frac{1}{4}\sum_{i=1}^{n}\left(\langle R^{\nabla^{E}}(X,X_{i})%
\beta,a\rangle_{E}+\langle R^{\nabla^{E}}(Y,X_{i})\alpha,a\rangle_{E}\right)^{%
2}-\sum_{i=1}^{n}\langle R^{\nabla^{E}}(X,X_{i})\alpha,a\rangle_{E}\langle R^{%
\nabla^{E}}(Y,X_{i})\beta,a\rangle_{E}$$
$$\displaystyle+\langle\nabla_{Y}^{M,E}(R^{\nabla^{E}})(X,Y,\alpha)-\nabla_{X}^{%
M,E}(R^{\nabla^{E}})(X,Y,\beta),a\rangle_{E},$$
where $(X_{i})_{i=1}^{n}$ is any orthonormal basis of $T_{x}M$.
Proof.
If the rank of $E$ is equal to 2 then $\dim T_{(x,a)}E^{(r)}=n+1$ and $P\cap\{X^{h},X\in T_{x}M\}\not=0$, and hence $P$ contains a unit vector $Y^{h}$. We take a unit vector $X^{h}+\alpha^{t}$ orthogonal to $Y^{h}$ to get a basis $(X^{h}+\alpha^{t},Y^{h})$ of $P$.
If $\mathrm{rank}(E)>2$ we take an orthonormal basis $(X^{h}+\alpha^{t},Y^{h}+\beta^{t})$ of $P$, i.e.,
$$|X|^{2}+|\alpha|^{2}=|Y|^{2}+|\beta|^{2}=1,\;\langle X,Y\rangle_{TM}+\langle%
\alpha,\beta\rangle_{E}=0\quad\mbox{and}\quad\langle\alpha,a\rangle_{E}=%
\langle\beta,a\rangle_{E}=0.$$
We suppose that $\langle X,Y\rangle_{TM}\not=0$ and write
$(\frac{1}{2}(|X|^{2}-|Y|^{2}),\langle X,Y\rangle_{TM})=\rho(\cos(\mu),\sin(\mu))$ with $\mu\in[0,\frac{\pi}{2})$.
Then the vectors
$$U=\cos\left(\frac{\mu}{2}\right)(X^{h}+\alpha^{t})+\sin\left(\frac{\mu}{2}%
\right)(Y^{h}+\beta^{t})\quad\mbox{and}\quad V=-\sin\left(\frac{\mu}{2}\right)%
(X^{h}+\alpha^{t})+\cos\left(\frac{\mu}{2}\right)(Y^{h}+\beta^{t})$$
constitute a basis of $P$ satisfying the desired relations.
Let us compute the sectional curvature at $P$. We denote by $R$ the curvature tensor of $(E^{(r)},h)$.
$$\displaystyle K(P)$$
$$\displaystyle=$$
$$\displaystyle h(R(X^{h}+\alpha^{t},Y^{h}+\beta^{t})(X^{h}+\alpha^{t}),Y^{h}+%
\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle h(R(X^{h}+\alpha^{t},Y^{h}+\beta^{t})X^{h},Y^{h})+h(R(X^{h}+%
\alpha^{t},Y^{h}+\beta^{t})X^{h},\beta^{t})+h(R(X^{h}+\alpha^{t},Y^{h}+\beta^{%
t})\alpha^{t},Y^{h})$$
$$\displaystyle+h(R(X^{h}+\alpha^{t},Y^{h}+\beta^{t})\alpha^{t},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle h(R(X^{h},Y^{h})X^{h},Y^{h})+h(R(X^{h},\beta^{t})X^{h},Y^{h})+h(%
R(\alpha^{t},Y^{h})X^{h},Y^{h})+h(R(\alpha^{t},\beta^{t})X^{h},Y^{h})$$
$$\displaystyle+h(R(X^{h},Y^{h})X^{h},\beta^{t})+h(R(X^{h},\beta^{t})X^{h},\beta%
^{t})+h(R(\alpha^{t},Y^{h})X^{h},\beta^{t})+h(R(\alpha^{t},\beta^{t})X^{h},%
\beta^{t})$$
$$\displaystyle+h(R(X^{h},Y^{h})\alpha^{t},Y^{h})+h(R(X^{h},\beta^{t})\alpha^{t}%
,Y^{h})+h(R(\alpha^{t},Y^{h})\alpha^{t},Y^{h})+h(R(\alpha^{t},\beta^{t})\alpha%
^{t},Y^{h})$$
$$\displaystyle+h(R(X^{h},Y^{h})\alpha^{t},\beta^{t})+h(R(X^{h},\beta^{t})\alpha%
^{t},\beta^{t})+h(R(\alpha^{t},Y^{h})\alpha^{t},\beta^{t})+h(R(\alpha^{t},%
\beta^{t})\alpha^{t},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle h(R(X^{h},Y^{h})X^{h},Y^{h})+2h(R(X^{h},Y^{h})X^{h},\beta^{t})+2%
h(R(X^{h},Y^{h})\alpha^{t},Y^{h})+2h(R(X^{h},Y^{h})\alpha^{t},\beta^{t})$$
$$\displaystyle+h(R(X^{h},\beta^{t})X^{h},\beta^{t})+2h(R(\alpha^{t},Y^{h})X^{h}%
,\beta^{t})+2h(R(\alpha^{t},\beta^{t})X^{h},\beta^{t})$$
$$\displaystyle+h(R(\alpha^{t},Y^{h})\alpha^{t},Y^{h})+2h(R(\alpha^{t},\beta^{t}%
)\alpha^{t},Y^{h})+h(R(\alpha^{t},\beta^{t})\alpha^{t},\beta^{t}).$$
Recall that the projection $\pi_{E}:(E^{(r)},h)\longrightarrow(M,\langle\;,\;\rangle_{TM})$ is a Riemannian submersion with totally geodesic fibers and
O’Neill shape tensor $B$ is given by (3). So we can use O’Neill’s formulas for curvature given in [bes, chap. 9, p. 241]. From these formulas we have $h(R(\alpha^{t},\beta^{t})X^{h},\beta^{t})=h(R(\alpha^{t},\beta^{t})\alpha^{t},%
Y^{h})=0$ and hence
$$\displaystyle K(P)$$
$$\displaystyle=$$
$$\displaystyle h(R(X^{h},Y^{h})X^{h},Y^{h})+h(R(X^{h},\beta^{t})X^{h},\beta^{t}%
)+h(R(\alpha^{t},Y^{h})\alpha^{t},Y^{h})+h(R(\alpha^{t},\beta^{t})\alpha^{t},%
\beta^{t})$$
$$\displaystyle+2h(R(X^{h},Y^{h})X^{h},\beta^{t})+2h(R(X^{h},Y^{h})\alpha^{t},Y^%
{h})+2h(R(X^{h},Y^{h})\alpha^{t},\beta^{t})+2h(R(\alpha^{t},Y^{h})X^{h},\beta^%
{t}).$$
Let us compute every term in this expression by using O’Neill’s formulas and Proposition 2.2.
$$\displaystyle h(R(X^{h},Y^{h})X^{h},Y^{h})$$
$$\displaystyle=$$
$$\displaystyle\langle R^{M}(X,Y)X,Y\rangle_{TM}-\frac{3}{4}\langle R^{\nabla^{E%
}}(X,Y)a,R^{\nabla^{E}}(X,Y)a\rangle_{E},$$
$$\displaystyle h(R(X^{h},\beta^{t})X^{h},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle h((\overline{\nabla}_{\beta^{t}}B)_{X^{h}}X^{h},\beta^{t})+h(B_{%
X^{h}}\beta^{t},B_{X^{h}}\beta^{t})=h(B_{X^{h}}\beta^{t},B_{X^{h}}\beta^{t}),$$
$$\displaystyle h(R(\alpha^{t},Y^{h})\alpha^{t},Y^{h})$$
$$\displaystyle=$$
$$\displaystyle h((\overline{\nabla}_{\alpha^{t}}B)_{Y^{h}}Y^{h},\alpha^{t})+h(B%
_{Y^{h}}\alpha^{t},B_{Y^{h}}\alpha^{t})=h(B_{Y^{h}}\alpha^{t},B_{Y^{h}}\alpha^%
{t}),$$
$$\displaystyle h(R(\alpha^{t},\beta^{t})\alpha^{t},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{r^{2}}|\alpha|^{2}|\beta|^{2},$$
$$\displaystyle 2h(R(X^{h},Y^{h})X^{h},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle 2h((\overline{\nabla}_{X^{h}}B)_{X^{h}}Y^{h},\beta^{t})=-%
\langle\nabla_{X}^{M,E}(R^{\nabla^{E}})(X,Y,\beta),a\rangle_{E},$$
$$\displaystyle 2h(R(X^{h},Y^{h})\alpha^{t},Y^{h})$$
$$\displaystyle=$$
$$\displaystyle-2h((\overline{\nabla}_{Y^{h}}B)_{X^{h}}Y^{h},\alpha^{t})=%
\langle\nabla_{Y}^{M,E}(R^{\nabla^{E}})(X,Y,\alpha),a\rangle_{E},$$
$$\displaystyle 2h(R(X^{h},Y^{h})\alpha^{t},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle 2h((\overline{\nabla}_{\alpha^{t}}B)_{X^{h}}Y^{h},\beta^{t})-2h(%
(\overline{\nabla}_{\beta^{t}}B)_{X^{h}}Y^{h},\alpha^{t})+2h(B_{X^{h}}\alpha^{%
t},B_{Y^{h}}\beta^{t})-2h(B_{X^{h}}\beta^{t},B_{Y^{h}}\alpha^{t})$$
$$\displaystyle=$$
$$\displaystyle 2\langle R^{\nabla^{E}}(X,Y)\alpha,\beta\rangle_{E}-2h(B_{X^{h}}%
\alpha^{t},B_{Y^{h}}\beta^{t})+2h(B_{X^{h}}\beta^{t},B_{Y^{h}}\alpha^{t})$$
$$\displaystyle 2h(R(\alpha^{t},Y^{h})X^{h},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle-2h(R(X^{h},\beta^{t})Y^{h},\alpha^{t})=-2h((\overline{\nabla}_{%
\beta^{t}}B)_{X^{h}}Y^{h},\alpha^{t})-2h(B_{Y^{h}}\beta^{t},B_{X^{h}}\alpha^{t})$$
$$\displaystyle=$$
$$\displaystyle\langle R^{\nabla^{E}}(X,Y)\alpha,\beta\rangle_{E}-2h(B_{X^{h}}%
\alpha^{t},B_{Y^{h}}\beta^{t}).$$
To complete the proof, we need to compute the quantity
$$Q=h(B_{X^{h}}\beta^{t},B_{X^{h}}\beta^{t})+h(B_{Y^{h}}\alpha^{t},B_{Y^{h}}%
\alpha^{t})-4h(B_{X^{h}}\alpha^{t},B_{Y^{h}}\beta^{t})+2h(B_{Y^{h}}\alpha^{t},%
B_{X^{h}}\beta^{t}).$$
When $E=TM$, $\langle\;,\;\rangle_{E}=\langle\;,\;\rangle_{TM}$ and $\nabla^{E}=\nabla^{M}$, one can use formula (6) to recover the expression of the sectional curvature given in [kowal].
In the general case, we use instead (5) and we get
$$\displaystyle Q$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{4}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(X,X_{i})\beta,a%
\rangle_{E}^{2}+\frac{1}{4}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(Y,X_{i})\alpha%
,a\rangle_{E}^{2}-\sum_{i=1}^{n}\langle R^{\nabla^{E}}(X,X_{i})\alpha,a\rangle%
_{E}\langle R^{\nabla^{E}}(Y,X_{i})\beta,a\rangle_{E}$$
$$\displaystyle+\frac{1}{2}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(Y,X_{i})\alpha,a%
\rangle_{E}\langle R^{\nabla^{E}}(X,X_{i})\beta,a\rangle_{E}.$$
This completes the proof.
∎
Example 1.
Let $M=S^{2}$ with its canonical metric $\langle\;,\;\rangle_{TM}$, $E=TM$ and $\nabla^{E}=\nabla^{M}$. Let us compute the sectional curvature of $(T^{(1)}M,h)$. According to Proposition 2.3, if $P$ is a plane in $T_{(x,u)}T^{(1)}M$ then $P=\mathrm{span}\{X^{h}+Z^{t},Y^{h}\}$ with $X,Y,Z\in T_{x}M$, $|X|^{2}+|Z|^{2}=|Y|^{2}=1$ and $\langle Z,u\rangle_{TM}=0$.
The curvature $R^{M}$ is given by
$R^{M}(X,Y)Z=\langle X,Z\rangle_{TM}Y-\langle Y,Z\rangle_{TM}X$. Hence
$$\displaystyle K(P)$$
$$\displaystyle=$$
$$\displaystyle\langle R^{M}(X,Y)X,Y\rangle_{TM}-\frac{3}{4}|R^{M}(X,Y)u|^{2}+%
\frac{1}{4}|R^{M}(Z,u)Y|^{2}$$
$$\displaystyle=$$
$$\displaystyle|X|^{2}-\frac{3}{4}\left(\langle X,u\rangle_{TM}^{2}+\langle Y,u%
\rangle_{TM}^{2}|X|^{2}\right)+\frac{1}{4}\left(\langle Z,Y\rangle_{TM}^{2}+%
\langle u,Y\rangle_{TM}^{2}|Z|^{2}\right).$$
If $Z=0$ then $K(P)=\frac{1}{4}$. If $Z\not=0$ then $\{Z,u\}$ is an orthogonal basis of $T_{x}M$ and
$$1=|Y|^{2}=\langle Y,u\rangle_{TM}^{2}+\frac{1}{|Z|^{2}}\langle Y,Z\rangle_{TM}%
^{2}.$$
Thus
$$K(P)=|X|^{2}+\frac{1}{4}|Z|^{2}-\frac{3}{4}\left(\langle X,u\rangle_{TM}^{2}+%
\langle Y,u\rangle_{TM}^{2}|X|^{2}\right).$$
If $X=0$ then $K(P)=\frac{1}{4}$. If $X\not=0$ then $\{X,Y\}$ is an orthogonal basis and hence
$$1=|u|^{2}=\langle Y,u\rangle_{TM}^{2}+\frac{1}{|X|^{2}}\langle X,u\rangle_{TM}%
^{2}$$
so that $K(P)=\frac{1}{4}$. Therefore $(T^{(1)}M,h)$ has constant sectional curvature $\frac{1}{4}$. This was first proved in [nagy].
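The constancy of the curvature in Example 1 can also be checked numerically. The following sketch (an illustration only, not part of the proof; the helper name `sectional_curvature` and the random sampling scheme are our own) evaluates the formula for $K(P)$ derived above on randomly sampled admissible planes, using the constraints of Proposition 2.3 together with the orthogonality $\langle X,Y\rangle_{TM}=0$ used in the case analysis:

```python
import numpy as np

# Sanity check of Example 1: every admissible plane P = span{X^h + Z^t, Y^h}
# on the unit tangent bundle T^(1)S^2 has sectional curvature 1/4.
# We work in a single tangent plane T_x S^2 ~ R^2 and sample the constraints
# |Y| = 1, |X|^2 + |Z|^2 = 1, <Z,u> = 0, <X,Y> = 0.
rng = np.random.default_rng(0)

def sectional_curvature(X, Y, Z, u):
    # Formula derived in Example 1 for M = S^2:
    # K(P) = |X|^2 + |Z|^2/4 - (3/4)(<X,u>^2 + <Y,u>^2 |X|^2)
    return (X @ X) + 0.25 * (Z @ Z) \
        - 0.75 * ((X @ u) ** 2 + (Y @ u) ** 2 * (X @ X))

for _ in range(1000):
    theta, phi = rng.uniform(0, 2 * np.pi, size=2)
    u = np.array([np.cos(theta), np.sin(theta)])      # unit vector in T_x S^2
    Y = np.array([np.cos(phi), np.sin(phi)])          # |Y| = 1
    a = rng.uniform(-1.0, 1.0)
    X = a * np.array([-Y[1], Y[0]])                   # X orthogonal to Y
    Z = np.sqrt(1 - a * a) * np.array([-u[1], u[0]])  # Z orthogonal to u
    assert abs(sectional_curvature(X, Y, Z, u) - 0.25) < 1e-12
print("K(P) = 1/4 for all sampled planes")
```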
Proposition 2.4.
Let $X,Y\in T_{x}M$, $\alpha,\beta\in E_{x}$ and $(x,a)\in E^{(r)}$ and $(X_{i})_{i=1}^{n}$ any orthonormal basis of $T_{x}M$. Then:
1.
The Ricci curvature of $(E^{(r)},h)$ is given by
$$\displaystyle{\mathrm{ric}}(X^{h}+\alpha^{t},Y^{h}+\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle\frac{(m-2)}{r^{2}}\langle\overline{\alpha},\overline{\beta}%
\rangle_{E}+{\mathrm{ric}}^{M}(X,Y)-\frac{1}{2}\sum_{i=1}^{n}\langle R^{\nabla%
^{E}}(X,X_{i})a,R^{\nabla^{E}}(Y,X_{i})a\rangle_{E}$$
$$\displaystyle-\frac{1}{2}\sum_{i=1}^{n}\langle\nabla^{M,E}_{X_{i}}(R^{\nabla^{%
E}})(X_{i},X,\beta)+\nabla^{M,E}_{X_{i}}(R^{\nabla^{E}})(X_{i},Y,\alpha),a%
\rangle_{E}$$
$$\displaystyle+\frac{1}{4}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle R^{\nabla^{E}}(X_%
{i},X_{j})a,\alpha\rangle_{E}\langle R^{\nabla^{E}}(X_{i},X_{j})a,\beta\rangle%
_{E}.$$
2.
The scalar curvature of $(E^{(r)},h)$ is given by
$$\tau^{r}(x,a)=s^{M}(x)+\frac{1}{r^{2}}(m-1)(m-2)-\frac{1}{4}\xi_{x}(a,a),$$
where
$$\xi_{x}(a,b)=\sum_{j=1}^{n}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(X_{i},X_{j})a,%
R^{\nabla^{E}}(X_{i},X_{j})b\rangle_{E},\quad a,b\in E_{x}.$$
Proof.
We will use the O’Neill formulas for the Ricci curvature and scalar curvature given in [bes, Proposition 9.36, Corollary 9.37]. From these formulas, Proposition 2.2 and the fact that the fibers are Einstein, we get
$$\displaystyle{\mathrm{ric}}(X^{h},Y^{h})$$
$$\displaystyle=$$
$$\displaystyle{\mathrm{ric}}^{M}(X,Y)-2\sum_{i=1}^{n}h(B_{X^{h}}X_{i}^{h},B_{Y^%
{h}}X_{i}^{h})={\mathrm{ric}}^{M}(X,Y)-\frac{1}{2}\sum_{i=1}^{n}\langle R^{%
\nabla^{E}}(X,X_{i})a,R^{\nabla^{E}}(Y,X_{i})a\rangle_{E},$$
$$\displaystyle{\mathrm{ric}}(\alpha^{t},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle\frac{(m-2)}{r^{2}}\langle\overline{\alpha},\overline{\beta}%
\rangle_{E}+\sum_{i=1}^{n}h(B_{X_{i}^{h}}\alpha^{t},B_{X_{i}^{h}}\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle\frac{(m-2)}{r^{2}}\langle\overline{\alpha},\overline{\beta}%
\rangle_{E}+\frac{1}{4}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle R^{\nabla^{E}}(X_{i%
},X_{j})a,\alpha\rangle_{E}\langle R^{\nabla^{E}}(X_{i},X_{j})a,\beta\rangle_{%
E},$$
$$\displaystyle{\mathrm{ric}}(X^{h},\beta^{t})$$
$$\displaystyle=$$
$$\displaystyle-h(\check{\delta}{B}X^{h},\beta^{t})=\sum_{i=1}^{n}h((\overline{%
\nabla}_{X_{i}^{h}}B)_{X_{i}^{h}}X,\beta^{t})=-\frac{1}{2}\sum_{i=1}^{n}%
\langle\nabla^{M,E}_{X_{i}}(R^{\nabla^{E}})(X_{i},X,\beta),a\rangle_{E}.$$
This establishes the expression of the Ricci curvature. The scalar curvature is given by $\tau^{r}=s^{M}\circ\pi_{E}+s^{v}+|B|^{2}$, which completes the proof.
∎
3 On the sign of the different curvatures of $(E^{(r)},h)$
In this section, we study the signs of the sectional, Ricci and scalar curvatures of the sphere bundle $E^{(r)}$ equipped with the Sasaki metric $h$.
Throughout this section, $(M,\langle\;,\;\rangle_{TM})$ is a Riemannian manifold of dimension $n$ and $(E,\langle\;,\;\rangle_{E})$ is a Euclidean vector bundle of rank $m$ with an invariant connection $\nabla^{E}$.
3.1 The case $R^{\nabla^{E}}=0$
Note that $R^{\nabla^{E}}=0$ if and only if the O’Neill shape tensor of the Riemannian submersion $\pi_{E}:(E^{(r)},h)\longrightarrow(M,\langle\;,\;\rangle_{TM})$ vanishes, which is equivalent to $E^{(r)}$ being locally the Riemannian product of $M$ and the fiber. So we have the following results.
Proposition 3.1.
Suppose $R^{\nabla^{E}}=0$ and $m=2$. Then, with the notation of Propositions 2.3 and 2.4,
$$K(P)=\langle R^{M}(X,Y)X,Y\rangle_{TM},\;{\mathrm{ric}}(X^{h}+\alpha^{t},Y^{h}+\beta^{t})={\mathrm{ric}}^{M}(X,Y)\quad\mbox{and}\quad\tau^{r}(x,a)=s^{M}(x).$$
Proposition 3.2.
Suppose $R^{\nabla^{E}}=0$ and $m\geq 3$. Then
1.
$(M,\langle\;,\;\rangle_{TM})$ has constant scalar curvature if and only if $(E^{(r)},h)$ has constant scalar curvature,
2.
$(M,\langle\;,\;\rangle_{TM})$ is locally symmetric if and only if $(E^{(r)},h)$ is locally symmetric,
3.
$(M,\langle\;,\;\rangle_{TM})$ is Einstein with Einstein constant $\frac{m-2}{r^{2}}$ if and only if $(E^{(r)},h)$ is Einstein with the same Einstein constant,
4.
$(E^{(r)},h)$ can never have a constant sectional curvature.
For the Euclidean vector bundles with large rank compared to the dimension of the base, the following theorem constitutes a converse to the third assertion in Proposition 3.2. Note that the rank of the Atiyah vector bundle $E(M,k)$ is $\frac{n(n+1)}{2}$ and hence it satisfies the hypothesis of the next theorem.
Theorem 3.1.
Suppose that $m-1>\frac{n(n-1)}{2}$ where $m$ is the rank of $E$ and $n=\dim M$. Then:
1.
$(E^{(r)},h)$ is Einstein with Einstein constant $\lambda$ if and only if $R^{\nabla^{E}}=0$, $\lambda=\frac{(m-2)}{r^{2}}$ and $M$ is Einstein with Einstein constant $\frac{(m-2)}{r^{2}}$.
2.
$(E^{(r)},h)$ can never have constant sectional curvature.
Proof.
1.
If $(E^{(r)},h)$ is Einstein then, according to Proposition 2.4, we have for any $x\in M$, $X\in T_{x}M$, $a\in E_{x}^{(r)}$ and $\alpha\in E_{x}$ with $\langle\alpha,a\rangle_{E}=0,$
$$\displaystyle\lambda|\alpha|^{2}$$
$$\displaystyle=$$
$$\displaystyle\frac{(m-2)}{r^{2}}|\alpha|^{2}+\frac{1}{4}\sum_{i=1}^{n}\sum_{j=%
1}^{n}\langle R^{\nabla^{E}}(X_{i},X_{j})a,\alpha\rangle_{E}^{2}.$$
(7)
Fix $x\in M$, $a\in E_{x}^{(r)}$ and an orthonormal basis $(X_{i})$ of $T_{x}M$, and choose an orthonormal family $(\alpha_{1},\ldots,\alpha_{m-1})$ of elements in the orthogonal complement of $a$. For any $k=1,\ldots,m-1$ define the vector $U_{k}\in\mathbb{R}^{\frac{n(n-1)}{2}}$ by putting
$$U_{k}=\left(\langle R^{\nabla^{E}}(X_{1},X_{2})a,\alpha_{k}\rangle_{E},\langle R^{\nabla^{E}}(X_{1},X_{3})a,\alpha_{k}\rangle_{E},\ldots,\langle R^{\nabla^{E}}(X_{n-1},X_{n})a,\alpha_{k}\rangle_{E}\right).$$
If we take $\alpha=\alpha_{k}$ in (7), we get that the Euclidean norm of $U_{k}$ satisfies $|U_{k}|^{2}=2\left(\lambda-\frac{(m-2)}{r^{2}}\right)$. Moreover, if we take $\alpha=\alpha_{k}+\alpha_{l}$ with $l\not=k$, we get that $\langle U_{l},U_{k}\rangle=0$. Thus $(U_{1},\ldots,U_{m-1})$ is an orthogonal family of vectors in $\mathbb{R}^{\frac{n(n-1)}{2}}$. Since $m-1>\frac{n(n-1)}{2}$, they must be linearly dependent; but they all have the same norm, so they must all vanish. This completes the proof of the first assertion.
2.
If $(E^{(r)},h)$ has constant sectional curvature then it is Einstein and hence $R^{\nabla^{E}}=0$. But, according to the expression of the sectional curvature given in Proposition 2.3, it cannot be constant. This completes the proof.∎
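The key linear-algebra step in the proof above is that a family of more than $d$ pairwise orthogonal vectors of equal norm in $\mathbb{R}^{d}$ must consist of zero vectors. This can be illustrated via Gram matrices (a sketch, not part of the proof; the choice $k=6$, $d=3$ is ours):

```python
import numpy as np

# The pigeonhole step in the proof of Theorem 3.1: for ANY k x d matrix U,
# rank(U U^T) <= d.  The Gram matrix of k pairwise orthogonal vectors of
# common norm s equals s^2 * I_k, which has rank k when s != 0; hence
# k > d forces s = 0.
rng = np.random.default_rng(1)
k, d = 6, 3  # playing the roles of m - 1 and n(n-1)/2 with m - 1 > n(n-1)/2

for _ in range(100):
    U = rng.normal(size=(k, d))
    G = U @ U.T                       # Gram matrix of the rows of U
    # the rank of a Gram matrix never exceeds the ambient dimension d,
    # so G can never equal s^2 * I_k (rank k) unless s = 0
    assert np.linalg.matrix_rank(G) <= d
```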
3.2 The case $\nabla^{M,E}(R^{\nabla^{E}})=0$
If $\nabla^{M,E}(R^{\nabla^{E}})=0$ then $R^{\nabla^{E}}$ is invariant under parallel transport of $\nabla^{M}$ and $\nabla^{E}$ and hence
there exists a constant $\mathbf{K}>0$ such that for any $X,Y\in\Gamma(TM)$, $\alpha\in\Gamma(E)$,
$$|R^{\nabla^{E}}(X,Y)\alpha|\leq\mathbf{K}|X||Y||\alpha|.$$
(8)
The following theorem generalizes a result obtained in [kowal].
Theorem 3.2.
Suppose that $\nabla^{M,E}(R^{\nabla^{E}})=0$ and the sectional curvature of $M$ is bounded below by a positive constant $C$. Then
1.
The sectional curvature of $(E^{(r)},h)$ can never be nonpositive.
2.
If $\mathrm{rank}(E)=2$, then the sectional curvature of $(E^{(r)},h)$ is nonnegative if $r^{2}\leq\frac{4C}{3\mathbf{K}^{2}}$.
3.
If $\mathrm{rank}(E)\geq 3$, then the sectional curvature of $(E^{(r)},h)$ is nonnegative if
$$C-\frac{3}{4}r^{2}\mathbf{K}^{2}\left(4+3r^{2}(n-2)\mathbf{K}+\frac{3}{4}r^{4}%
(n-2)^{2}\mathbf{K}^{2}\right)\geq 0.$$
(9)
In particular, for $r$ sufficiently small the sectional curvature of $(E^{(r)},h)$ is nonnegative.
Proof.
Let $P\subset T_{(x,a)}E^{(r)}$ be a plane. Then there exists an orthonormal basis $\{X^{h}+\alpha^{t},Y^{h}+\beta^{t}\}$ of $P$ satisfying $|X|^{2}+|\alpha|^{2}=|Y|^{2}+|\beta|^{2}=1,\;\langle X,Y\rangle_{TM}=\langle\alpha,\beta\rangle_{E}=0$ and $\langle\alpha,a\rangle_{E}=\langle\beta,a\rangle_{E}=0.$ Put $X=\cos(t)\widetilde{X}$, $\alpha=\sin(t)\widetilde{\alpha}$, $Y=\cos(s)\widetilde{Y}$, $\beta=\sin(s)\widetilde{\beta}$ and $a=r\widetilde{a}$ with $s,t\in[0,\pi/2]$ and $|\widetilde{X}|=|\widetilde{Y}|=|\widetilde{\alpha}|=|\widetilde{\beta}|=1$. Substituting into the expression of $K(P)$ given in Proposition 2.3, we get
$$\displaystyle K(P)$$
$$\displaystyle=$$
$$\displaystyle A\cos^{2}(t)\cos^{2}(s)+\frac{1}{r^{2}}\sin^{2}(t)\sin^{2}(s)+B%
\cos(t)\cos(s)\sin(t)\sin(s)+D\cos^{2}(t)\sin^{2}(s)$$
$$\displaystyle+E\sin^{2}(t)\cos^{2}(s),$$
where
$$\displaystyle A$$
$$\displaystyle=$$
$$\displaystyle K^{M}(\{\widetilde{X},\widetilde{Y}\})-\frac{3}{4}r^{2}|R^{%
\nabla^{E}}(\widetilde{X},\widetilde{Y})\widetilde{a}|^{2},$$
$$\displaystyle B$$
$$\displaystyle=$$
$$\displaystyle 3\langle R^{\nabla^{E}}(\widetilde{X},\widetilde{Y})\widetilde{%
\alpha},\widetilde{\beta}\rangle_{E}-r^{2}\sum_{i=1}^{n}\langle R^{\nabla^{E}}%
(\widetilde{X},X_{i})\widetilde{\alpha},\widetilde{a}\rangle_{E}\langle R^{%
\nabla^{E}}(\widetilde{Y},X_{i})\widetilde{\beta},\widetilde{a}\rangle_{E}$$
$$\displaystyle+\frac{r^{2}}{2}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(\widetilde{X%
},X_{i})\widetilde{\beta},\widetilde{a}\rangle_{E}\langle R^{\nabla^{E}}(%
\widetilde{Y},X_{i})\widetilde{\alpha},\widetilde{a}\rangle_{E},$$
$$\displaystyle D$$
$$\displaystyle=$$
$$\displaystyle\frac{r^{2}}{4}\sum_{i=1}^{n}\langle R^{\nabla^{E}}(\widetilde{X}%
,X_{i})\widetilde{\beta},\widetilde{a}\rangle_{E}^{2},\quad E=\frac{r^{2}}{4}%
\sum_{i=1}^{n}\langle R^{\nabla^{E}}(\widetilde{Y},X_{i})\widetilde{\alpha},%
\widetilde{a}\rangle_{E}^{2}.$$
1.
If $\cos(t)=\cos(s)=0$ then $K(P)=\frac{1}{r^{2}}>0$, and hence the sectional curvature of $(E^{(r)},h)$ can never be nonpositive.
Let us now prove the second and third assertions.
If $X=0$ or $Y=0$ then $K(P)\geq 0$. Suppose now that $X\not=0$ and $Y\not=0$, so we can choose $X_{1}=\widetilde{X}$ and $X_{2}=\widetilde{Y}$ and get
$$A\geq C-\frac{3}{4}r^{2}\mathbf{K}^{2}\quad\mbox{and}\quad B\geq-\frac{3%
\mathbf{K}}{2}\left(2+r^{2}(n-2)\mathbf{K}\right).$$
2.
If $\mathrm{rank}(E)=2$, we can choose $\beta=0$ and hence
$$K(P)\geq\left(C-\frac{3}{4}r^{2}\mathbf{K}^{2}\right)\cos^{2}(t)\cos^{2}(s)+\frac{1}{r^{2}}\sin^{2}(t)\sin^{2}(s).$$
Thus
the sectional curvature is nonnegative if $r^{2}\leq\frac{4C}{3\mathbf{K}^{2}}$.
3.
Suppose that $\mathrm{rank}(E)>2$. Then, by using the estimates of $A$ and $B$ given above, we get
$$K(P)\geq\left(C-\frac{3}{4}r^{2}\mathbf{K}^{2}\right)\cos^{2}(t)\cos^{2}(s)+%
\frac{1}{r^{2}}\sin^{2}(t)\sin^{2}(s)-\frac{3\mathbf{K}}{2}\left(2+r^{2}(n-2)%
\mathbf{K}\right)\cos(t)\cos(s)\sin(t)\sin(s).$$
The right side of this inequality, say $Q$, can be arranged in the following way:
$$\displaystyle Q$$
$$\displaystyle=$$
$$\displaystyle\left[\frac{1}{r}\sin(t)\sin(s)-\frac{3r\mathbf{K}}{4}\left(2+r^{%
2}(n-2)\mathbf{K}\right)\cos(t)\cos(s)\right]^{2}$$
$$\displaystyle+\left(C-\frac{3}{4}r^{2}\mathbf{K}^{2}\left(4+3r^{2}(n-2)\mathbf%
{K}+\frac{3}{4}r^{4}(n-2)^{2}\mathbf{K}^{2}\right)\right)\cos^{2}(t)\cos^{2}(s).$$
This ends the proof of the last assertion.∎
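The completing-the-square rearrangement of $Q$ at the end of this proof can be verified symbolically. The following sketch (an illustration only; the symbols `cs` and `ss` stand for $\cos(t)\cos(s)$ and $\sin(t)\sin(s)$) checks the identity term by term:

```python
import sympy as sp

# Symbolic check of the rearrangement of Q in the proof of Theorem 3.2:
# A*cs^2 + ss^2/r^2 - (3K/2)(2 + r^2(n-2)K)*cs*ss equals a square plus
# the coefficient appearing in condition (9) times cs^2.
r, K, C, n, cs, ss = sp.symbols('r K C n cs ss', positive=True)

lhs = (C - sp.Rational(3, 4) * r**2 * K**2) * cs**2 + ss**2 / r**2 \
    - sp.Rational(3, 2) * K * (2 + r**2 * (n - 2) * K) * cs * ss

rhs = (ss / r - sp.Rational(3, 4) * r * K * (2 + r**2 * (n - 2) * K) * cs)**2 \
    + (C - sp.Rational(3, 4) * r**2 * K**2
       * (4 + 3 * r**2 * (n - 2) * K
          + sp.Rational(3, 4) * r**4 * (n - 2)**2 * K**2)) * cs**2

assert sp.simplify(lhs - rhs) == 0
print("rearrangement of Q verified")
```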
Remark 2.
1.
In the classical case, i.e., $E=TM$, $\langle\;,\;\rangle_{E}=\langle\;,\;\rangle_{TM}$ and $\nabla^{E}=\nabla^{M}$, the hypotheses $\nabla^{M}(R^{M})=0$ and that $M$ has positive sectional curvature imply that the sectional curvature of $M$ is bounded below by a positive constant. Thus, in this case, our result is the same as the one obtained in [kowal].
2.
The left-hand side of inequality (9), say $Q$, goes to $C$ as $r$ goes to $0$, which is what allowed us to obtain our result. In some cases the constant $\mathbf{K}$ can depend on a parameter, and by varying this parameter one can make $Q>0$. This is the case in Theorem 4.3.
Theorem 3.3.
Suppose that $\nabla^{M,E}(R^{\nabla^{E}})=0$, $R^{\nabla^{E}}\not=0$, and that there exists a positive constant $\rho$ such that ${\mathrm{ric}}^{M}(X,X)\geq\rho|X|^{2}$ for any $X\in\Gamma(TM)$. Then:
1.
If $\mathrm{rank}(E)=2$ then $(E^{(r)},h)$ has nonnegative Ricci curvature for $r^{2}\leq\frac{2\rho}{n\mathbf{K}^{2}}$, where the constant $\mathbf{K}$ is given in (8).
2.
If $\mathrm{rank}(E)>2$ then $(E^{(r)},h)$ has positive Ricci curvature for $r^{2}<\frac{2\rho}{n\mathbf{K}^{2}}$, where the constant $\mathbf{K}$ is given in (8).
Proof.
For any $x\in M$, $a\in E_{x}^{(r)}$, $X\in T_{x}M$ and $\alpha\in E_{x}$ such that $|X|^{2}+|\alpha|^{2}=1$ and $\langle\alpha,a\rangle_{E}=0$, we have from Proposition 2.4 that
$$\displaystyle{\mathrm{ric}}(X^{h}+\alpha^{t},X^{h}+\alpha^{t})$$
$$\displaystyle=$$
$$\displaystyle\frac{(m-2)}{r^{2}}|\alpha|^{2}+{\mathrm{ric}}^{M}(X,X)-\frac{1}{%
2}\sum_{i=1}^{n}|R^{\nabla^{E}}(X,X_{i})a|^{2}$$
$$\displaystyle-\sum_{i=1}^{n}\langle\nabla^{M,E}_{X_{i}}(R^{\nabla^{E}})(X_{i},%
X,\alpha),a\rangle_{E}+\frac{1}{4}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle R^{%
\nabla^{E}}(X_{i},X_{j})a,\alpha\rangle_{E}^{2}.$$
Let us write $X=\mathrm{cos}(t)\hat{X}$, $\alpha=\mathrm{sin}(t)\hat{\alpha}$ and $\hat{a}=a/r$ where $\hat{X}$ and $\hat{\alpha}$ are unit vectors.
Since $\nabla^{M,E}(R^{\nabla^{E}})=0$, we obtain
$$\displaystyle{\mathrm{ric}}(X^{h}+\alpha^{t},X^{h}+\alpha^{t})$$
$$\displaystyle=$$
$$\displaystyle\mathrm{cos}^{2}(t)\left({\mathrm{ric}}^{M}(\hat{X},\hat{X})-%
\frac{r^{2}}{2}\sum_{i=1}^{n}|R^{\nabla^{E}}(\hat{X},X_{i})\hat{a}|^{2}\right)$$
$$\displaystyle+\mathrm{sin}^{2}(t)\left(\frac{(m-2)}{r^{2}}+\frac{r^{2}}{4}\sum%
_{i=1}^{n}\sum_{j=1}^{n}\langle R^{\nabla^{E}}(X_{i},X_{j})\hat{a},\hat{\alpha%
}\rangle_{E}^{2}\right).$$
From the hypothesis on ${\mathrm{ric}}^{M}$ and (8), we get
$${\mathrm{ric}}(X^{h}+\alpha^{t},X^{h}+\alpha^{t})\geq\left(\rho-\frac{nr^{2}%
\mathbf{K}^{2}}{2}\right)\mathrm{cos}^{2}(t)+\frac{(m-2)}{r^{2}}\mathrm{sin}^{%
2}(t).$$
This proves both assertions.
∎
3.3 Ricci and scalar curvatures
The following two theorems generalize [kowal, Theorem 3, Theorem 1], established in the case $E=TM$.
Theorem 3.4.
If $M$ is compact with positive Ricci curvature and $\mathrm{rank}(E)\geq 3$, then for $r$ sufficiently small the Ricci curvature of the sphere bundle $(E^{(r)},h)$ is positive.
Proof.
Suppose now that $M$ is compact with positive Ricci curvature and put $X=\cos(t)\hat{X}$, $\alpha=\sin(t)\hat{\alpha}$ and $\hat{a}=\frac{a}{r}$ where $\hat{X}\in T_{x}M$, $\hat{\alpha}\in E_{x}$, $|\hat{X}|=|\hat{\alpha}|=1$ and $(x,a)\in E^{(r)}$.
We have
$$\displaystyle{\mathrm{ric}}(X^{h}+\alpha^{t},X^{h}+\alpha^{t})$$
$$\displaystyle=$$
$$\displaystyle\mathrm{cos}^{2}(t)\;{\mathrm{ric}}^{M}(\hat{X},\hat{X})+\frac{(m%
-2)}{r^{2}}\mathrm{sin}^{2}(t)-\frac{1}{2}r^{2}\mathrm{cos}^{2}(t)\sum_{i=1}^{%
n}|R^{\nabla^{E}}(\hat{X},X_{i})\hat{a}|^{2}$$
$$\displaystyle-r\mathrm{cos}(t)\mathrm{sin}(t)\sum_{i=1}^{n}\langle\nabla^{M,E}%
_{X_{i}}(R^{\nabla^{E}})(X_{i},\hat{X})\hat{\alpha},\hat{a}\rangle_{E}+\frac{1%
}{4}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle R^{\nabla^{E}}(X_{i},X_{j})a,\alpha%
\rangle_{E}^{2},$$
$$\displaystyle\geq\mathrm{cos}^{2}(t){\mathrm{ric}}^{M}(\hat{X},\hat{X})+\frac{%
(m-2)}{r^{2}}\mathrm{sin}^{2}(t)-\frac{1}{2}r^{2}\mathrm{cos}^{2}(t)\sum_{i=1}%
^{n}|R^{\nabla^{E}}(\hat{X},X_{i})\hat{a}|^{2}$$
$$\displaystyle-r\;\mathrm{cos}(t)\mathrm{sin}(t)\sum_{i=1}^{n}\langle\nabla^{M,%
E}_{X_{i}}(R^{\nabla^{E}})(X_{i},\hat{X})\hat{\alpha},\hat{a}\rangle_{E}.$$
Since $M$ is compact, there exist positive constants $L_{1}$ and $L_{2}$ such that for any $x\in M$ and any unit vectors $\hat{X},\hat{Y},\hat{Z}\in T_{x}M$ and $\hat{\alpha},\hat{\beta}\in E_{x}$,
$$|R^{\nabla^{E}}(\hat{X},\hat{Y})\hat{Z}|\leq L_{1}\quad\mbox{and}\quad|\langle%
\nabla^{M,E}_{\hat{X}}(R^{\nabla^{E}})(\hat{Y},\hat{Z})\hat{\alpha},\hat{\beta%
}\rangle_{E}|\leq L_{2}.$$
On the other hand, there is a positive number $\epsilon$ such that ${\mathrm{ric}}^{M}(\hat{X},\hat{X})\geq\epsilon$ for every unit vector $\hat{X}$.
Then, by using the above estimations, we get
$$\displaystyle{\mathrm{ric}}(X^{h}+\alpha^{t},X^{h}+\alpha^{t})$$
$$\displaystyle\geq$$
$$\displaystyle\mathrm{cos}^{2}(t)(\epsilon-\frac{1}{2}r^{2}nL_{1}^{2})+\frac{(m%
-2)}{r^{2}}\mathrm{sin}^{2}(t)-rnL_{2}\mathrm{cos}(t)\mathrm{sin}(t)$$
$$\displaystyle=$$
$$\displaystyle\left(\sqrt{A}\cos(t)-\frac{B}{2\sqrt{A}}\sin(t)\right)^{2}+C\sin%
^{2}(t),$$
where $A=\epsilon-\frac{1}{2}r^{2}nL_{1}^{2}$, $B=rnL_{2}$, $C=\frac{m-2}{r^{2}}-\frac{B^{2}}{4A}$, and $r$ is chosen such that $A,C>0$. Then the right-hand side of this inequality is positive for every $t$.
∎
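The completing-the-square step at the end of the proof of Theorem 3.4 can likewise be checked symbolically (a sketch, not part of the proof; `Cc` denotes the constant $C=\frac{m-2}{r^{2}}-\frac{B^{2}}{4A}$):

```python
import sympy as sp

# Symbolic check of the identity
# A*c^2 + (m-2)/r^2 * s^2 - B*c*s
#   = (sqrt(A)*c - B/(2*sqrt(A))*s)^2 + Cc*s^2,  valid for A > 0,
# used to conclude the proof of Theorem 3.4.
A, B, m, r, c, s = sp.symbols('A B m r c s', positive=True)

Cc = (m - 2) / r**2 - B**2 / (4 * A)
lhs = A * c**2 + (m - 2) / r**2 * s**2 - B * c * s
rhs = (sp.sqrt(A) * c - B / (2 * sp.sqrt(A)) * s)**2 + Cc * s**2
assert sp.simplify(lhs - rhs) == 0
print("completing the square verified")
```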
Theorem 3.5.
Let $(M,\langle\;,\;\rangle_{TM})$ be a compact Riemannian manifold and $(E,\langle\;,\;\rangle_{E})$ be a Euclidean vector bundle with an invariant connection $\nabla^{E}$. Then for $r$ sufficiently small the scalar curvature of $(E^{(r)},h)$ is positive.
Proof.
Suppose now that $M$ is compact and put $\hat{a}=\frac{a}{r}$ where $(x,a)\in E^{(r)}$.
We have
$$\tau^{r}(x,a)=s^{M}(x)+\frac{1}{r^{2}}(m-1)(m-2)-\frac{1}{4}r^{2}\xi_{x}(\hat{%
a},\hat{a}).$$
Since $M$ is compact, there exist positive constants $L_{1}$ and $L_{2}$ such that for any $x\in M$ and any unit vectors $X,Y\in T_{x}M$, $\alpha,\beta\in E_{x}$,
$$|\langle R^{M}(X,Y)X,Y\rangle_{TM}|\leq L_{1}\quad\mbox{and}\quad|R^{\nabla^{E%
}}(X,Y)\alpha|\leq L_{2}.$$
Then,
$$\tau^{r}(x,a)\geq\frac{1}{r^{2}}(m-1)(m-2)-\frac{1}{4}n(n-1)\left(4L_{1}+r^{2}L_{2}^{2}\right).$$
This means that $\tau^{r}$ is positive on $E^{(r)}$ when $r$ is sufficiently small.
∎
Let $E\longrightarrow M$ be a vector bundle. Recall that its associated sphere bundle is the quotient $S(E)=E/\sim$, where $a\sim b$ if there exists $t>0$ such that $a=tb$. Let $\langle\;,\;\rangle_{E}$ be a Euclidean product on $E$. The associated $O(m)$-principal bundle admits a connection, so there exists a connection $\nabla^{E}$ on $E$ which preserves the metric $\langle\;,\;\rangle_{E}$. Since $S(E)$ can be identified with $E^{(r)}$ for any $r$, by using Theorems 3.4 and 3.5 we get the following corollary, which was proved in [nash] by a different method.
Corollary 3.1.
Let $E\longrightarrow M$ be a vector bundle over a compact Riemannian manifold and $S(E)\longrightarrow M$ its associated sphere bundle. Then
1.
If the Ricci curvature of $M$ is positive then $S(E)$ admits a complete Riemannian metric of positive Ricci curvature.
2.
$S(E)$ admits a complete Riemannian metric of positive scalar curvature.
We end this section with a result which was proved in [book] when $E=TM$, $\langle\;,\;\rangle_{TM}=\langle\;,\;\rangle_{E}$ and $\nabla^{E}$ is the Levi-Civita connection of $\langle\;,\;\rangle_{TM}$.
Theorem 3.6.
Let $(M,\langle\;,\;\rangle_{TM})$ be a Riemannian manifold and $(E,\langle\;,\;\rangle_{E})$ a Euclidean vector bundle with an invariant connection $\nabla^{E}$. Then, the sphere bundle $(E^{(r)},h)$ equipped with the Sasaki metric has constant scalar curvature if and only if
$$\displaystyle\xi$$
$$\displaystyle=$$
$$\displaystyle\frac{|R^{\nabla^{E}}|^{2}}{m}\langle\;,\;\rangle_{E},$$
(10)
$$\displaystyle 4ms^{M}-r^{2}|R^{\nabla^{E}}|^{2}$$
$$\displaystyle=$$
$$\displaystyle\mbox{constant}.$$
(11)
where $\xi(a,b)=\sum_{j=1}^{n}\left(\sum_{i=1}^{n}\langle R^{\nabla^{E}}(X_{i},X_{j})%
a,R^{\nabla^{E}}(X_{i},X_{j})b\rangle_{E}\right)$ for any $a,b\in\Gamma(E)$.
Proof.
The scalar curvature $\tau^{r}$ is given by, for $(x,a)\in E^{(r)}$,
$$\tau^{r}(x,a)=s^{M}(x)+\frac{1}{r^{2}}(m-1)(m-2)-\frac{1}{4}\xi_{x}(a,a).$$
Suppose that $\tau^{r}$ is constant along $E^{(r)}$. For fixed $x\in M$, $\tau^{r}(x,a)$ does not depend on the choice of the vector $a\in E^{(r)}_{x}$. This implies that $\xi_{x}$ is proportional to the metric $\langle\;,\;\rangle_{E}$, and the coefficient of proportionality is necessarily equal to $|R^{\nabla^{E}}|^{2}/m$.
∎
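The step in this proof — a quadratic form that is constant on a sphere must be proportional to the metric — can be illustrated numerically (a sketch only; the matrices `Q` and `Q2` and the dimension are our own choices):

```python
import numpy as np

# Illustration of the step in the proof of Theorem 3.6: a quadratic form
# a -> xi(a, a) that is constant on the unit sphere is proportional to the
# metric, with ratio tr(xi)/m; a non-proportional form is not constant.
rng = np.random.default_rng(3)
m = 4
lam = 2.5
Q = lam * np.eye(m)                    # a form proportional to the metric
for _ in range(100):
    a = rng.normal(size=m)
    a /= np.linalg.norm(a)             # |a| = 1
    assert np.isclose(a @ Q @ a, lam)  # constant on the unit sphere

# conversely, a symmetric form not proportional to the identity varies:
Q2 = np.diag([1.0, 2.0, 2.0, 2.0])
e1, e2 = np.eye(m)[0], np.eye(m)[1]
assert not np.isclose(e1 @ Q2 @ e1, e2 @ Q2 @ e2)
```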
4 Sasaki metric on the sphere bundle of the Atiyah Euclidean vector bundle associated to a Riemannian manifold
We have seen in the last section that many results obtained on the sphere bundles of tangent bundles over Riemannian manifolds can be generalized to any Euclidean vector bundle. In this section, we express these results in the case of the sphere bundle of the Atiyah Euclidean vector bundle introduced in the introduction, obtaining some new geometric situations and opening new directions for further exploration.
4.1 The Atiyah Euclidean vector bundle and the supra-curvature of a Riemannian manifold
Let $(M,\langle\;,\;\rangle_{TM})$ be a Riemannian manifold, $k>0$ and $(E(M,k),\langle\;,\;\rangle_{k},\nabla^{E})$ the associated
Atiyah Euclidean vector bundle defined in the introduction.
Let $K^{M}:\mathrm{so}(TM)\to\mathrm{so}(TM)$ be the curvature operator given by $K^{M}(X\wedge Y)=R^{M}(X,Y)$ where $X\wedge Y(Z)=\langle Y,Z\rangle_{TM}X-\langle X,Z\rangle_{TM}Y.$
The curvature $R^{\nabla^{E}}$ of $\nabla^{E}$ (which we refer to as the supra-curvature of $(M,\langle\;,\;\rangle_{TM},k)$) was computed in [boucettaessoufi, Theorem 3.1]. It is given by the following formulas:
$$\displaystyle R^{\nabla^{E}}(X,Y)Z$$
$$\displaystyle=$$
$$\displaystyle\left\{R^{M}(X,Y)Z+H_{Y}H_{X}Z-H_{X}H_{Y}Z\right\}+\left\{-\frac{%
1}{2}\nabla_{Z}^{M}(K^{M})(X\wedge Y)\right\},$$
$$\displaystyle R^{\nabla^{E}}(X,Y)F$$
$$\displaystyle=$$
$$\displaystyle\left\{(R^{\nabla^{E}}(X,Y)F)_{TM}\right\}+\left\{[R^{M}(X,Y),F]+%
H_{Y}H_{X}F-H_{X}H_{Y}F\right\},$$
(12)
$$\displaystyle\langle(R^{\nabla^{E}}(X,Y)F)_{TM},Z\rangle_{k}$$
$$\displaystyle=$$
$$\displaystyle-\langle R^{\nabla^{E}}(X,Y)Z,F\rangle_{k},$$
where $X,Y,Z\in\Gamma(TM)$ and $F\in\Gamma(\mathrm{so}(TM))$. We denote by $E^{(r)}(M,k)$ the sphere bundle of radius $r$ associated to $E(M,k)$ and by $h$ the Sasaki metric on $E^{(r)}(M,k)$.
The supra-curvature is deeply related to the geometry of $(M,\langle\;,\;\rangle_{TM})$. Let us compute it in some particular cases. This computation will be useful in the proof of Theorem 4.1 where we will characterize the Riemannian manifolds with vanishing supra-curvature.
Supra-curvature of the Riemannian product of Riemannian manifolds
Proposition 4.1.
Let $(M,\langle\;,\;\rangle_{TM})$ be the Riemannian product of $p$ Riemannian manifolds $(M_{1},\langle\;,\;\rangle_{1}),\ldots,(M_{p},\langle\;,\;\rangle_{p})$. Then the supra-curvature of $(M,\langle\;,\;\rangle_{TM})$ at a point $x=(x_{1},\ldots,x_{p})$ is given by
$$\left\{\begin{matrix}R^{\nabla^{E}}[(X_{1},\ldots,X_{p}),(Y_{1},\ldots,Y_{p})]%
(Z_{1},\ldots,Z_{p})=R^{\nabla^{E_{1}}}(X_{1},Y_{1})Z_{1}+\ldots+R^{\nabla^{E_%
{p}}}(X_{p},Y_{p})Z_{p},\\
R^{\nabla^{E}}[(X_{1},\ldots,X_{p}),(Y_{1},\ldots,Y_{p})](F)=R^{\nabla^{E_{1}}%
}(X_{1},Y_{1})F_{1}+\ldots+R^{\nabla^{E_{p}}}(X_{p},Y_{p})F_{p},\end{matrix}\right.$$
where $X_{i},Y_{i},Z_{i}\in T_{x_{i}}M_{i}$, $F\in\mathrm{so}(T_{x}M)$, $F_{i}=\mathrm{pr}_{i}\circ F_{|TM_{i}}$, $R^{\nabla^{E_{i}}}$ is the supra-curvature of $(M_{i},\langle\;,\;\rangle_{i},k)$ and $i=1,\ldots,p$.
Proof.
It is an immediate consequence of the following formulas
$$\displaystyle R^{M}[X,Y](Z)$$
$$\displaystyle=$$
$$\displaystyle(R^{M_{1}}(X_{1},Y_{1})Z_{1},\ldots,R^{M_{p}}(X_{p},Y_{p})Z_{p}),$$
$$\displaystyle H_{X}Y$$
$$\displaystyle=$$
$$\displaystyle H_{X_{1}}^{1}Y_{1}+\ldots+H_{X_{p}}^{p}Y_{p},$$
$$\displaystyle H_{X}F$$
$$\displaystyle=$$
$$\displaystyle H_{X_{1}}^{1}F_{1}+\ldots+H_{X_{p}}^{p}F_{p},$$
$$\displaystyle\nabla^{M}_{Z}(K^{M})(X\wedge Y)$$
$$\displaystyle=$$
$$\displaystyle\nabla_{Z_{1}}(K^{M_{1}})(X_{1}\wedge Y_{1})+\ldots+\nabla_{Z_{p}%
}(K^{M_{p}})(X_{p}\wedge Y_{p}),$$
where $X=(X_{1},\ldots,X_{p})$, $Y=(Y_{1},\ldots,Y_{p})$, $Z=(Z_{1},\ldots,Z_{p})$ and $F_{i}=\mathrm{pr}_{i}\circ F_{|TM_{i}}$.
∎
Supra-curvature of Riemannian manifolds with constant curvature
Proposition 4.2.
Suppose that $(M,\langle\;,\;\rangle_{TM})$ has constant sectional curvature $c$ and put $\varpi=\frac{1}{4}c(2-ck)$. Then, for any $X,Y\in\Gamma(TM)$ and $F\in\Gamma(\mathrm{so}(TM))$,
$$R^{\nabla^{E}}(X,Y)Z=-2\varpi X\wedge Y(Z)\quad\mbox{and}\quad R^{\nabla^{E}}(%
X,Y)F=-2\varpi[X\wedge Y,F].$$
Proof.
The expression of $R^{\nabla^{E}}$ is given by (12). We have $H_{X}Y=-\frac{1}{2}R^{M}(X,Y)=\frac{1}{2}cX\wedge Y$. Moreover, since the curvature is constant, $\nabla^{M}(K^{M})=0$.
Now if $(X_{i})_{i=1}^{n}$ is a local frame of orthonormal vector fields then
$$\displaystyle\langle H_{X}F,Y\rangle_{TM}$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}k\;{\mathrm{tr}}(F\circ R^{M}(X,Y))=-\frac{1}{2}ck%
\sum_{i=1}^{n}\langle F(X_{i}),X\wedge Y(X_{i})\rangle_{TM}$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}ck\sum_{i=1}^{n}\left(\langle Y,X_{i}\rangle_{TM}%
\langle F(X_{i}),X\rangle_{TM}-\langle X,X_{i}\rangle_{TM}\langle F(X_{i}),Y%
\rangle_{TM}\right)$$
$$\displaystyle=$$
$$\displaystyle-ck\langle F(Y),X\rangle_{TM}.$$
Thus $H_{X}F=ckF(X)$. So
$$\displaystyle\;[H_{Y},H_{X}]Z$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}(H_{Y}R^{M}(Z,X)+H_{X}R^{M}(Y,Z))$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}ck(R^{M}(Z,X)Y+R^{M}(Y,Z)X)$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}ckR^{M}(X,Y)Z.$$
Thus
$$R^{\nabla^{E}}(X,Y)Z=\frac{1}{2}(2-ck)R^{M}(X,Y)Z=-\frac{1}{2}c(2-ck)X\wedge Y(Z).$$
On the other hand,
$$\displaystyle\;[H_{Y},H_{X}]F$$
$$\displaystyle=$$
$$\displaystyle ck(H_{Y}F(X)-H_{X}F(Y))$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}ck(R^{M}(Y,F(X))+R^{M}(F(Y),X)),$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}c^{2}k([F,X\wedge Y]).$$
This completes the proof.
∎
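The last step of this proof rests on the $\mathrm{so}(n)$ identity $[F,X\wedge Y]=F(X)\wedge Y+X\wedge F(Y)$. A quick numerical check of this identity (an illustration only; the matrix realization of $X\wedge Y$ as $XY^{\top}-YX^{\top}$ follows from the definition of the wedge given above):

```python
import numpy as np

# Check of the so(n) identity [F, X ^ Y] = F(X) ^ Y + X ^ F(Y),
# where (X ^ Y)(Z) = <Y,Z>X - <X,Z>Y, used in the proof of Proposition 4.2.
rng = np.random.default_rng(2)
n = 5

def wedge(X, Y):
    # matrix of the operator Z -> <Y,Z>X - <X,Z>Y
    return np.outer(X, Y) - np.outer(Y, X)

for _ in range(100):
    A = rng.normal(size=(n, n))
    F = A - A.T                        # a random element of so(n)
    X, Y = rng.normal(size=n), rng.normal(size=n)
    lhs = F @ wedge(X, Y) - wedge(X, Y) @ F
    rhs = wedge(F @ X, Y) + wedge(X, F @ Y)
    assert np.allclose(lhs, rhs)
print("bracket identity verified")
```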
Supra-curvature of some locally symmetric spaces
Let $G$ be a compact connected Lie group with Lie algebra $\mathfrak{g}$ and $K$ a closed subgroup of $G$ with Lie algebra $\mathfrak{k}$. Denote by $\pi:G\longrightarrow G/K$ the canonical projection. Suppose that $\mathfrak{g}=\mathfrak{k}\oplus{\mathfrak{p}}$, where ${\mathfrak{p}}$ is ${\mathrm{Ad}}_{K}$-invariant, $[{\mathfrak{p}},{\mathfrak{p}}]\subset\mathfrak{k}$ and the restriction of the Killing form $B$ of $\mathfrak{g}$ to ${\mathfrak{p}}$ is negative definite. The scalar product $\langle\;,\;\rangle_{\mathfrak{p}}=\lambda B_{|{\mathfrak{p}}\times{\mathfrak{p}}}$ with $\lambda<0$ defines a $G$-invariant Riemannian metric $\langle\;,\;\rangle_{G/K}$ on $G/K$ which is locally symmetric. For any $X\in\mathfrak{k}$, we denote by $\Phi_{X}$ the restriction of ${\mathrm{ad}}_{X}$ to ${\mathfrak{p}}$; then
$$\mathrm{so}({\mathfrak{p}},\langle\;,\;\rangle_{\mathfrak{p}})=\Phi_{\mathfrak%
{k}}\oplus(\Phi_{\mathfrak{k}})^{\perp},$$
(13)
where $(\Phi_{\mathfrak{k}})^{\perp}$ is the orthogonal with respect to the invariant scalar product on $\mathrm{so}({\mathfrak{p}},\langle\;,\;\rangle_{\mathfrak{p}})$, $(A,B)\mapsto-{\mathrm{tr}}(AB)$.
Proposition 4.3.
The supra-curvature of $(G/K,\langle\;,\;\rangle_{G/K},k)$ at $\pi(e)$ is given by
$$\displaystyle R^{\nabla^{E}}(X,Y)Z$$
$$\displaystyle=$$
$$\displaystyle[[X,Y],Z]-\frac{k}{4}\left([Y,U(\Phi_{[X,Z]})]-[X,U(\Phi_{[Y,Z]})%
]\right),$$
$$\displaystyle R^{\nabla^{E}}(X,Y)F$$
$$\displaystyle=$$
$$\displaystyle[\Phi_{[X,Y]},\Phi_{X^{F}+\frac{k}{4}U(F)}]+[\Phi_{[X,Y]},F^{%
\perp}],$$
where $X,Y,Z\in T_{\pi(e)}G/K={\mathfrak{p}},$
$F={\mathrm{ad}}_{X^{F}}+F^{\perp}\in\mathrm{so}({\mathfrak{p}},\langle\;,\;%
\rangle_{\mathfrak{p}})=\Phi_{\mathfrak{k}}\oplus(\Phi_{\mathfrak{k}})^{\perp}$ and $U(F)$ is the element of $\mathfrak{k}$ given by
$$U(F)=\sum_{i=1}^{n}[X_{i},F(X_{i})],$$
$(X_{1},\ldots,X_{n})$ an orthonormal basis of ${\mathfrak{p}}$.
Proof.
The expression of $R^{\nabla^{E}}$ is given by (12). The curvature of $G/K$ at $\pi(e)$ is given by (see [bes, Proposition 7.72])
$$R^{G/K}(X,Y)Z=[[X,Y],Z],\quad X,Y,Z\in{\mathfrak{p}},$$
and $\nabla^{G/K}(K^{G/K})=0$. Choose $(X_{i})_{i=1}^{n}$ an orthonormal basis of ${\mathfrak{p}}$.
We have
$$\displaystyle\langle H_{X}F,Y\rangle_{k}$$
$$\displaystyle=$$
$$\displaystyle\langle H_{X}F,Y\rangle_{{\mathfrak{p}}}$$
$$\displaystyle=$$
$$\displaystyle-\frac{k}{2}\sum_{i}\langle F(X_{i}),[[X,Y],X_{i}]\rangle_{%
\mathfrak{p}}$$
$$\displaystyle=$$
$$\displaystyle\frac{\lambda k}{2}\sum_{i}B(F(X_{i}),[[X,Y],X_{i}])$$
$$\displaystyle=$$
$$\displaystyle\frac{k}{2}\sum_{i}\langle[X,[X_{i},F(X_{i})]],Y\rangle_{%
\mathfrak{p}}.$$
Thus
$H_{X}F=\frac{k}{2}[X,U(F)]$. We deduce that
$$\displaystyle H_{Y}H_{X}Z-H_{X}H_{Y}Z$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}H_{Y}(\Phi_{[X,Z]})+\frac{1}{2}H_{X}(\Phi_{[Y,Z]})$$
$$\displaystyle=$$
$$\displaystyle-\frac{k}{4}[Y,U(\Phi_{[X,Z]})]+\frac{k}{4}[X,U(\Phi_{[Y,Z]})],$$
$$\displaystyle H_{Y}H_{X}F-H_{X}H_{Y}F$$
$$\displaystyle=$$
$$\displaystyle-\frac{k}{4}\Phi_{[Y,[X,U(F)]]}+\frac{k}{4}\Phi_{[X,[Y,U(F)]]}$$
$$\displaystyle=$$
$$\displaystyle\frac{k}{4}[\Phi_{[X,Y]},\Phi_{U(F)}].$$
This gives the desired formulas.
∎
Supra-curvature of complex projective spaces
Let $\pi:\mathbb{C}^{n+1}\setminus\{0\}\longrightarrow P^{n}(\mathbb{C})$ be the natural projection and $\pi_{s}:S^{2n+1}\longrightarrow P^{n}(\mathbb{C})$ its restriction to $S^{2n+1}\subset\mathbb{C}^{n+1}\setminus\{0\}$. For any $m\in S^{2n+1}$, put $F_{m}=\ker((\pi_{s})_{*})_{m}$ and let $F_{m}^{\perp}$ be the orthogonal complementary subspace to $F_{m}$ in $T_{m}(S^{2n+1})$;
$$T_{m}(S^{2n+1})=F_{m}\oplus F_{m}^{\perp}.$$
We introduce the Riemannian metric $\langle\;,\;\rangle_{P^{n}(\mathbb{C})}$ on $P^{n}(\mathbb{C})$ so that the restriction of $(\pi_{s})_{*}$ to $F_{m}^{\perp}$ is an isometry onto $T_{\pi(m)}(P^{n}(\mathbb{C}))$. Let $J_{0}$ be the canonical complex structure on $\mathbb{C}^{n+1}$; the standard complex structure $J$ on $P^{n}(\mathbb{C})$ is given by
$$J(\pi_{s})_{*}v=(\pi_{s})_{*}J_{0}v,\;v\in F_{m}^{\perp}.$$
Proposition 4.4.
The curvature and the supra-curvature of $(P^{n}(\mathbb{C}),g,k)$ are given by
$$\displaystyle R^{P^{n}(\mathbb{C})}(X,Y)Z$$
$$\displaystyle=$$
$$\displaystyle\langle X,Z\rangle_{P^{n}(\mathbb{C})}Y-\langle Y,Z\rangle_{P^{n}%
(\mathbb{C})}X-2\langle JY,X\rangle_{P^{n}(\mathbb{C})}JZ+\langle JZ,Y\rangle_%
{P^{n}(\mathbb{C})}JX-\langle JZ,X\rangle_{P^{n}(\mathbb{C})}JY,$$
$$\displaystyle R^{\nabla^{E}}(X,Y)Z$$
$$\displaystyle=$$
$$\displaystyle(k-1)\left(\langle Y,Z\rangle_{P^{n}(\mathbb{C})}X-\langle X,Z%
\rangle_{P^{n}(\mathbb{C})}Y+2\langle JY,X\rangle_{P^{n}(\mathbb{C})}JZ\right)$$
$$\displaystyle+((2n+3)k-1)\left(\langle JZ,X\rangle_{P^{n}(\mathbb{C})}JY-%
\langle JZ,Y\rangle_{P^{n}(\mathbb{C})}JX\right),$$
$$\displaystyle R^{\nabla^{E}}(X,Y)F$$
$$\displaystyle=$$
$$\displaystyle\left(\frac{k}{2}-1\right)[F,X\wedge Y+JX\wedge JY]+2\langle JY,X%
\rangle_{P^{n}(\mathbb{C})}[F,J]$$
$$\displaystyle+\frac{k}{2}\left([J\circ F\circ J,X\wedge Y]-J\circ F(X)\wedge JY%
-JX\wedge J\circ F(Y)\right),$$
where $X,Y,Z\in\Gamma(TP^{n}(\mathbb{C}))$ and $F\in\Gamma(\mathrm{so}(TP^{n}(\mathbb{C})))$.
Proof.
The projection $\pi_{s}:S^{2n+1}\longrightarrow P^{n}(\mathbb{C})$ is a Riemannian submersion with totally geodesic fibers and its O’Neill shape tensor is given by $A_{X^{h}}Y^{h}=-\langle J_{0}X^{h},Y^{h}\rangle_{\mathbb{C}^{n+1}}J_{0}N$, where $N$ is the radial vector field and $X^{h},Y^{h}$ are the horizontal lifts of $X,Y\in\Gamma(TP^{n}(\mathbb{C}))$. The expression of $R^{P^{n}(\mathbb{C})}$ follows from the formulas
$$\displaystyle\langle R^{S^{2n+1}}(X^{h},Y^{h})Z^{h},T^{h}\rangle_{S^{2n+1}}$$
$$\displaystyle=$$
$$\displaystyle\langle R^{P^{n}(\mathbb{C})}(X,Y)Z,T\rangle_{P^{n}(\mathbb{C})}%
\circ\pi_{s}-2\langle A_{X^{h}}Y^{h},A_{Z^{h}}T^{h}\rangle_{S^{2n+1}}$$
$$\displaystyle+\langle A_{Y^{h}}Z^{h},A_{X^{h}}T^{h}\rangle_{S^{2n+1}}-\langle A%
_{X^{h}}Z^{h},A_{Y^{h}}T^{h}\rangle_{S^{2n+1}},$$
$$\displaystyle R^{S^{2n+1}}(X^{h},Y^{h})Z^{h}$$
$$\displaystyle=$$
$$\displaystyle-(X^{h}\wedge Y^{h})Z^{h}.$$
To compute the supra-curvature, we use (12). We choose a local orthonormal frame $(X_{i})_{i=1}^{2n}$ of $TP^{n}(\mathbb{C})$. We have
$$\displaystyle\langle H_{X}F,Y\rangle_{P^{n}(\mathbb{C})}$$
$$\displaystyle=$$
$$\displaystyle\frac{k}{2}\sum_{i=1}^{2n}\langle R^{P^{n}(\mathbb{C})}(X,Y)X_{i}%
,F(X_{i})\rangle_{P^{n}(\mathbb{C})}$$
$$\displaystyle=$$
$$\displaystyle\frac{k}{2}\sum_{i=1}^{2n}\left[\langle X,X_{i}\rangle_{P^{n}(%
\mathbb{C})}\langle Y,F(X_{i})\rangle_{P^{n}(\mathbb{C})}-\langle Y,X_{i}%
\rangle_{P^{n}(\mathbb{C})}\langle X,F(X_{i})\rangle_{P^{n}(\mathbb{C})}-2%
\langle JY,X\rangle_{P^{n}(\mathbb{C})}\langle JX_{i},F(X_{i})\rangle_{P^{n}(%
\mathbb{C})}\right.$$
$$\displaystyle\left.+\langle JX_{i},Y\rangle_{P^{n}(\mathbb{C})}\langle JX,F(X_%
{i})\rangle_{P^{n}(\mathbb{C})}-\langle JX_{i},X\rangle_{P^{n}(\mathbb{C})}%
\langle JY,F(X_{i})\rangle_{P^{n}(\mathbb{C})}\right]$$
$$\displaystyle=$$
$$\displaystyle\frac{k}{2}\left(2\langle F(X),Y\rangle_{P^{n}(\mathbb{C})}-2{%
\mathrm{tr}}(F\circ J)\langle JX,Y\rangle_{P^{n}(\mathbb{C})}-\langle JX,F(JY)%
\rangle_{P^{n}(\mathbb{C})}+\langle JY,F(JX)\rangle_{P^{n}(\mathbb{C})}\right).$$
Thus
$$H_{X}F=k(F(X)-{\mathrm{tr}}(F\circ J)JX-J\circ F\circ J(X)).$$
So
$$\displaystyle H_{Y}H_{X}Z$$
$$\displaystyle=$$
$$\displaystyle-\frac{k}{2}(R^{P^{n}(\mathbb{C})}(X,Z)Y-{\mathrm{tr}}(R^{P^{n}(%
\mathbb{C})}(X,Z)\circ J)JY-J\circ R^{P^{n}(\mathbb{C})}(X,Z)\circ J(Y))$$
But $R^{P^{n}(\mathbb{C})}(X,Z)\circ J=J\circ R^{P^{n}(\mathbb{C})}(X,Z)$ and a direct computation gives that ${\mathrm{tr}}(J\circ R^{P^{n}(\mathbb{C})}(X,Y))=4(n+1)\langle JY,X\rangle_{P^%
{n}(\mathbb{C})}$.
So
$$\displaystyle H_{Y}H_{X}Z$$
$$\displaystyle=$$
$$\displaystyle k\left(2(n+1)\langle JZ,X\rangle_{P^{n}(\mathbb{C})}JY-R^{P^{n}(%
\mathbb{C})}(X,Z)Y\right)$$
$$\displaystyle=$$
$$\displaystyle k\left(\langle Y,Z\rangle_{P^{n}(\mathbb{C})}X-\langle X,Y\rangle_{P^{n}(\mathbb{C})}Z-\langle JY,Z\rangle_{P^{n}(\mathbb{C})}JX+\langle JY,X\rangle_{P^{n}(\mathbb{C})}JZ+2(n+2)\langle JZ,X\rangle_{P^{n}(\mathbb{C})}JY\right).$$
Thus
$$\displaystyle H_{Y}H_{X}Z-H_{X}H_{Y}Z$$
$$\displaystyle=$$
$$\displaystyle k(\langle Y,Z\rangle_{P^{n}(\mathbb{C})}X-\langle X,Z\rangle_{P^%
{n}(\mathbb{C})}Y+2\langle JY,X\rangle_{P^{n}(\mathbb{C})}JZ$$
$$\displaystyle+(2n+3)\left(\langle JZ,X\rangle_{P^{n}(\mathbb{C})}JY-\langle JZ%
,Y\rangle_{P^{n}(\mathbb{C})}JX\right)).$$
Then
$$\displaystyle R^{\nabla^{E}}(X,Y)Z$$
$$\displaystyle=$$
$$\displaystyle(k-1)\left(\langle Y,Z\rangle_{P^{n}(\mathbb{C})}X-\langle X,Z%
\rangle_{P^{n}(\mathbb{C})}Y+2\langle JY,X\rangle_{P^{n}(\mathbb{C})}JZ\right)$$
$$\displaystyle+((2n+3)k-1)\left(\langle JZ,X\rangle_{P^{n}(\mathbb{C})}JY-%
\langle JZ,Y\rangle_{P^{n}(\mathbb{C})}JX\right).$$
On the other hand,
$$\displaystyle H_{Y}H_{X}F$$
$$\displaystyle=$$
$$\displaystyle k\left(H_{Y}F(X)-{\mathrm{tr}}(F\circ J)H_{Y}JX-H_{Y}J\circ F%
\circ J(X)\right)$$
$$\displaystyle=$$
$$\displaystyle\frac{k}{2}(Y\wedge F(X)+JY\wedge F\circ J(X)+JY\wedge J\circ F(X%
)-Y\wedge J\circ F\circ J(X)$$
$$\displaystyle+2\langle J\circ F(X)-F\circ J(X),Y\rangle_{P^{n}(\mathbb{C})}J-{%
\mathrm{tr}}(F\circ J)\left(Y\wedge JX-JY\wedge X+2\langle X,Y\rangle_{P^{n}(%
\mathbb{C})}J\right)).$$
So, since $F(X)\wedge Y+X\wedge F(Y)=[F,X\wedge Y]$
$$H_{Y}H_{X}F-H_{X}H_{Y}F=\frac{k}{2}\left([X\wedge Y+JX\wedge JY,F]+[J\circ F%
\circ J,X\wedge Y]-J\circ F(X)\wedge JY-JX\wedge J\circ F(Y)\right),$$
and
$$[R^{P^{n}(\mathbb{C})}(X,Y),F]=-[X\wedge Y+JX\wedge JY+2\langle JY,X\rangle_{P^{n}(\mathbb{C})}J,F]=-[X\wedge Y+JX\wedge JY,F]-2\langle JY,X\rangle_{P^{n}(\mathbb{C})}[J,F].$$
Thus
$$\displaystyle R^{\nabla^{E}}(X,Y)F$$
$$\displaystyle=$$
$$\displaystyle[R^{P^{n}(\mathbb{C})}(X,Y),F]+H_{Y}H_{X}F-H_{X}H_{Y}F$$
$$\displaystyle=$$
$$\displaystyle(\frac{k}{2}-1)[F,X\wedge Y+JX\wedge JY]+2\langle JY,X\rangle_{P^%
{n}(\mathbb{C})}[F,J]+\frac{k}{2}[J\circ F\circ J,X\wedge Y]$$
$$\displaystyle-\frac{k}{2}(J\circ F(X)\wedge JY+JX\wedge J\circ F(Y)).$$
∎
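The closed-form expression $H_{X}F=k(F(X)-{\mathrm{tr}}(F\circ J)JX-J\circ F\circ J(X))$ can be checked numerically against its defining sum $\langle H_{X}F,Y\rangle=\frac{k}{2}\sum_{i}\langle R^{P^{n}(\mathbb{C})}(X,Y)X_{i},F(X_{i})\rangle$, using the curvature formula of Proposition 4.4. A minimal sketch in Python (the values of $n$, $k$ and the random data are arbitrary test inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2                       # complex dimension; real dimension 2n
d = 2 * n
# standard complex structure J on R^{2n}
J = np.zeros((d, d))
J[:n, n:] = -np.eye(n)
J[n:, :n] = np.eye(n)

def ip(u, v):               # Euclidean inner product
    return float(u @ v)

def R(X, Y, Z):
    # curvature of P^n(C) in the convention of Proposition 4.4
    return (ip(X, Z) * Y - ip(Y, Z) * X
            - 2 * ip(J @ Y, X) * (J @ Z)
            + ip(J @ Z, Y) * (J @ X)
            - ip(J @ Z, X) * (J @ Y))

k = 0.7
A = rng.standard_normal((d, d))
F = A - A.T                 # random skew-symmetric endomorphism
X = rng.standard_normal(d)

# component j of H_X F from the defining sum:
E = np.eye(d)
H = np.array([k / 2 * sum(ip(R(X, E[j], E[i]), F @ E[i]) for i in range(d))
              for j in range(d)])

# closed form: k (F(X) - tr(F o J) JX - J o F o J (X))
closed = k * (F @ X - np.trace(F @ J) * (J @ X) - J @ F @ J @ X)
assert np.allclose(H, closed)
```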
It is obvious that if $(M,\langle\;,\;\rangle_{TM})$ is flat then, for any $k>0$, the supra-curvature of $(M,\langle\;,\;\rangle_{TM},k)$ vanishes. Furthermore, according to Propositions 4.1 and 4.2, if $(M,\langle\;,\;\rangle_{TM})$ is the Riemannian product of $p$ Riemannian manifolds all having constant sectional curvature $\frac{2}{k}$ then the supra-curvature of $(M,\langle\;,\;\rangle_{TM},k)$ vanishes. Actually, these are the only cases where the supra-curvature vanishes.
Theorem 4.1.
Let $(M,\langle\;,\;\rangle_{TM})$ be a connected Riemannian manifold. Then the supra-curvature of $(M,\langle\;,\;\rangle_{TM},k)$ vanishes if and only if the Riemannian universal cover of $(M,\langle\;,\;\rangle_{TM})$ is isometric to $(\hbox{\bb R}^{n},\langle\;,\;\rangle_{0})\times{\mathbb{S}}^{n_{1}}\left(%
\sqrt{\frac{k}{2}}\right)\times\ldots\times{\mathbb{S}}^{n_{p}}\left(\sqrt{%
\frac{k}{2}}\right)$ where ${\mathbb{S}}^{n_{i}}\left(\sqrt{\frac{k}{2}}\right)$ is the Riemannian sphere of dimension $n_{i}$, of radius $\sqrt{\frac{k}{2}}$ and constant curvature $\frac{2}{k}$.
Proof.
Suppose that the supra-curvature of $(M,\langle\;,\;\rangle_{TM},k)$ vanishes and consider the universal Riemannian cover $(N,\langle\;,\;\rangle_{TN})$ of $(M,\langle\;,\;\rangle_{TM})$. Since $(M,\langle\;,\;\rangle_{TM})$ and
$(N,\langle\;,\;\rangle_{TN})$ are locally isometric, the supra-curvature of $(N,\langle\;,\;\rangle_{TN},k)$ vanishes.
This implies by virtue of (12) that $(N,\langle\;,\;\rangle_{TN})$ is locally symmetric
and for any $X,Y\in\Gamma(TN)$,
$$\langle R^{N}(X,Y)X,Y\rangle_{TN}=\langle H_{X}Y,H_{X}Y\rangle_{k}\geq 0.$$
Thus
$(N,\langle\;,\;\rangle_{TN})$ has non-negative sectional curvature. Since $N$ is simply connected, $(N,\langle\;,\;\rangle_{TN})$ is a symmetric space. But a simply-connected symmetric space is the Riemannian product of a Euclidean space and a finite family of irreducible symmetric spaces (see [2, Theorem 7.76]). Thus, $(N,\langle\;,\;\rangle_{TN})=(E,\langle\;,\;\rangle_{0})\times(N_{1},\langle\;,\;\rangle_{1})\times\ldots\times(N_{p},\langle\;,\;\rangle_{p})$ where $(E,\langle\;,\;\rangle_{0})$ is flat and the $(N_{i},\langle\;,\;\rangle_{i})$ are irreducible symmetric spaces with non-negative sectional curvature. This implies that the $N_{i}$ are compact and Einstein. According to Proposition 4.1, the vanishing of the supra-curvature of $(N,\langle\;,\;\rangle_{TN},k)$ implies the vanishing of the supra-curvature of $(N_{i},\langle\;,\;\rangle_{i},k)$ for $i=1,\ldots,p$.
Let $i\in\{1,\ldots,p\}$ and denote by $n_{i}$ the dimension of $N_{i}$. The symmetric space $N_{i}$ can be identified to $G/K$, where $G$ is the component of the identity of the group of isometries of $(N_{i},\langle\;,\;\rangle_{i})$ and $K$ is the isotropy at some point. Moreover, the Lie algebra $\mathfrak{g}$ of $G$ has a splitting $\mathfrak{g}=\mathfrak{k}\oplus{\mathfrak{p}}$ where $\mathfrak{k}$ is the Lie algebra of $K$ and $[{\mathfrak{p}},{\mathfrak{p}}]\subset\mathfrak{k}$. Since $N_{i}$ is Einstein, the metric in restriction to ${\mathfrak{p}}$ is proportional to the restriction of the Killing form.
The vanishing of the supra-curvature of $(N_{i},\langle\;,\;\rangle_{i},k)$ implies, by virtue of the second formula in Proposition 4.3,
$[\Phi_{[{\mathfrak{p}},{\mathfrak{p}}]},\Phi_{\mathfrak{k}}^{\perp}]=0$. This relation and the fact that $[{\mathfrak{p}},{\mathfrak{p}}]$ is an ideal of $\mathfrak{k}$ imply that
$\Phi_{[{\mathfrak{p}},{\mathfrak{p}}]}$ is an ideal of $\mathrm{so}({\mathfrak{p}})$. But if $\dim{\mathfrak{p}}\not=4$ then the real Lie algebra $\mathrm{so}({\mathfrak{p}})$ is simple (see [9, Theorem 6.105]) and, in this case,
$\Phi_{[{\mathfrak{p}},{\mathfrak{p}}]}=0$ or $\Phi_{[{\mathfrak{p}},{\mathfrak{p}}]}=\mathrm{so}({\mathfrak{p}})$. If $\Phi_{[{\mathfrak{p}},{\mathfrak{p}}]}=0$ then $R^{N_{i}}=0$ and we get the result. Otherwise, $\dim\mathfrak{k}\geq\dim[{\mathfrak{p}},{\mathfrak{p}}]\geq\dim\Phi_{[{\mathfrak{p}},{\mathfrak{p}}]}=\dim\mathrm{so}({\mathfrak{p}})=\frac{n_{i}(n_{i}-1)}{2}$. So
$$\dim G=\dim\mathfrak{k}+n_{i}\geq\frac{n_{i}(n_{i}+1)}{2}.$$
But the dimension of the isometry group of an $n_{i}$-dimensional Riemannian manifold is always less than or equal to $\frac{n_{i}(n_{i}+1)}{2}$, with equality only when the manifold has constant curvature.
Thus $\dim G=\frac{n_{i}(n_{i}+1)}{2}$ and hence $N_{i}$ has constant curvature. If $\dim{\mathfrak{p}}=4$, $(N_{i},\langle\;,\;\rangle_{i})$ is an Einstein four-dimensional homogeneous space and, according to the main result in [6], $(N_{i},\langle\;,\;\rangle_{i})$ is isometric to ${\mathbb{S}}^{4}(r)$, ${\mathbb{S}}^{2}(r)\times{\mathbb{S}}^{2}(r)$ or $P^{2}(\hbox{\bb C})$. But Proposition 4.4 shows that the supra-curvature of $P^{2}(\hbox{\bb C})$ does not vanish and Proposition 4.2 shows that ${\mathbb{S}}^{n}(r)$ has vanishing supra-curvature if and only if $r={\sqrt{\frac{k}{2}}}$. This completes the proof.
∎
4.2 Geometry of $(E^{(r)}(M,k),h)$ when $M$ is locally symmetric
The following proposition is a key step in order to apply Theorems 3.2 and 3.3 to $E(M,k)$.
Proposition 4.5.
If $M$ is locally symmetric then $\nabla^{M,E}(R^{\nabla^{E}})=0$.
Proof.
Assume that $M$ is locally symmetric, which is equivalent to $\nabla^{M}(K^{M})=0$. Note first that $\nabla^{M,E}(R^{\nabla^{E}})=0$ if and only if, for any curve $\gamma:[a,b]\longrightarrow M$, any parallel vector fields $V_{1},V_{2},V_{3}:[a,b]\longrightarrow TM$ along $\gamma$ and any parallel section $F:[a,b]\longrightarrow\mathrm{so}(TM)$ along $\gamma$, the fields $R^{\nabla^{E}}(V_{1},V_{2})V_{3}$ and
$R^{\nabla^{E}}(V_{1},V_{2})F$ are parallel along $\gamma$. But $R^{M}(V_{1},V_{2})V_{3}$ is parallel, $H_{V_{1}}V_{2}$ and $H_{V_{1}}F$ are also parallel, and by using (12) we can conclude.
∎
The following theorem is an immediate consequence of Theorem 3.2, Theorem 3.3 and Proposition 4.5.
Theorem 4.2.
1.
If $(M,\langle\;,\;\rangle_{TM})$ is locally symmetric and its sectional curvature is positive then, for $r$ sufficiently small, $(E^{(r)}(M,k),h)$ has nonnegative sectional curvature.
2.
If $M$ is compact with positive Ricci curvature or locally symmetric with positive Ricci curvature, then for $r$ sufficiently small the Ricci curvature of $(E^{(r)}(M,k),h)$ is positive.
When $M$ has positive constant sectional curvature one can apply Theorem 4.2, but in this case Remark 2 gives a better result.
Theorem 4.3.
Let $(M,\langle\;,\;\rangle_{TM})$ be a Riemannian manifold with positive constant sectional curvature $c$. Then, for $k$ close to $\frac{2}{c}$, $(E^{(r)}(M,k),h)$ has nonnegative sectional curvature.
Proof.
Suppose that $M$ is of constant curvature $c$. Let us find in this case a $\mathbf{K}$ as in (8). For any $X,Y,Z\in\Gamma(TM)$ and $F\in\Gamma(\mathrm{so}(TM))$, we have
$$|R^{\nabla^{E}}(X,Y)(Z+F)|\leq|R^{\nabla^{E}}(X,Y)Z|+|R^{\nabla^{E}}(X,Y)F|.$$
From Proposition 4.2, we get that
$$|R^{\nabla^{E}}(X,Y)Z|\leq 4|\varpi||X||Y||Z|\quad\mbox{and}\quad R^{\nabla^{E%
}}(X,Y)F=2\varpi\left(F(X)\wedge Y+X\wedge F(Y)\right).$$
Let us compute $|F(X)\wedge Y|$. Let $(X_{i})_{i=1}^{n}$ be a local orthonormal frame of $TM$. Then
$$\displaystyle|F(X)\wedge Y|^{2}$$
$$\displaystyle=$$
$$\displaystyle-k{\mathrm{tr}}((F(X)\wedge Y)^{2})$$
$$\displaystyle=$$
$$\displaystyle k\sum_{i=1}^{n}\langle F(X)\wedge Y(X_{i}),F(X)\wedge Y(X_{i})\rangle_{TM}$$
$$\displaystyle=$$
$$\displaystyle k\sum_{i=1}^{n}\langle\langle Y,X_{i}\rangle_{TM}F(X)-\langle F(X),X_{i}\rangle_{TM}Y,\langle Y,X_{i}\rangle_{TM}F(X)-\langle F(X),X_{i}\rangle_{TM}Y\rangle_{TM}$$
$$\displaystyle=$$
$$\displaystyle 2k|F(X)|^{2}|Y|^{2}-2k\langle F(X),Y\rangle_{TM}^{2}\leq 4|F|^{2}|X|^{2}|Y|^{2}.$$
Finally,
$$|R^{\nabla^{E}}(X,Y)(Z+F)|\leq 8|\varpi||X||Y|(|Z|+|F|).$$
So we can take $\mathbf{K}=8|\varpi|$ which goes to zero when $k$ goes to $\frac{2}{c}$. Thus when $k$ is close to $\frac{2}{c}$ the inequality (9) holds and we get the desired result.
∎
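The norm computation in the proof can be verified numerically; note that the cross term $\langle F(X),Y\rangle_{TM}^{2}$ enters with a minus sign, which only strengthens the bound $|F(X)\wedge Y|^{2}\leq 4|F|^{2}|X|^{2}|Y|^{2}$. A small Python sketch (the dimension, $k$ and the random data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
nn, k = 5, 0.7                      # arbitrary dimension and k > 0
A = rng.standard_normal((nn, nn))
F = A - A.T                         # random skew-symmetric F
X, Y = rng.standard_normal(nn), rng.standard_normal(nn)

wedge = lambda u, v: np.outer(u, v) - np.outer(v, u)  # (u^v)(z) = <v,z>u - <u,z>v
norm2 = lambda B: -k * np.trace(B @ B)                # |B|^2 = -k tr(B^2) on so(TM)

u = F @ X
lhs = norm2(wedge(u, Y))
# exact identity (cross term negative):
assert np.isclose(lhs, 2 * k * ((u @ u) * (Y @ Y) - (u @ Y) ** 2))
# the bound used to obtain K = 8|varpi|:
assert lhs <= 4 * norm2(F) * (X @ X) * (Y @ Y) + 1e-9
```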
4.3 Riemannian manifolds whose $(E^{(r)}(M,k),h)$ is Einstein
It has been proved in book that $(T^{(r)}M,h)$ is Einstein if and only if $\dim M=2$ and either $M$ is flat or has constant curvature $\frac{1}{r^{2}}$. The situation is richer in the case of $(E^{(r)}(M,k),h)$.
Theorem 4.4.
Let $(M,\langle\;,\;\rangle_{TM})$ be a connected Riemannian manifold. Then:
1.
$(E^{(r)}(M,k),h)$ is Einstein with Einstein constant $\lambda$ if and only if the universal Riemannian cover of $(M,\langle\;,\;\rangle_{TM})$ is isometric to the Riemannian product ${\mathbb{S}}^{p}\left(\sqrt{\frac{k}{2}}\right)\times\ldots\times{\mathbb{S}}^{p}\left(\sqrt{\frac{k}{2}}\right)$ of $q$ spheres of dimension $p$ and radius $\sqrt{\frac{k}{2}}$ with
$$\lambda=\frac{2(p-1)}{k}=\frac{qp(qp+1)-4}{2r^{2}}.$$
2.
$(E^{(r)}(M,k),h)$ can never have a constant sectional curvature.
Proof.
This is an immediate consequence of Theorems 3.1 and 4.1.
∎
4.4 Scalar curvature of $(E^{(r)}(M,k),h)$
As an application of Theorem 3.6 , we have the following result:
Theorem 4.5.
Suppose that $(M,\langle\;,\;\rangle_{TM})$ has constant sectional curvature $c$. Then
$(E^{(r)}(M,k),h)$ has constant scalar curvature if and only if either $n=3$, $c=0$ or $c>0$ and $k=\frac{2}{c}$.
Proof.
The scalar curvature $\tau$ is given, for $(x,Z+F)\in E^{(r)}(M,k)$, by
$$\tau(x,Z+F)=n(n-1)c+\frac{1}{r^{2}}(m-1)(m-2)-\frac{1}{4}\xi_{x}(Z+F,Z+F),$$
where
$$\xi_{x}(Z+F,Z+F)=2\varpi^{2}(n-1)|Z+F|^{2}+2\varpi^{2}(n-3)|F|^{2},\quad\varpi%
=\frac{1}{4}c(2-ck).$$
On $E^{(r)}(M,k)$ we have $|Z+F|^{2}=r^{2}$, so $\tau$ is constant if and only if $\varpi^{2}(n-3)|F|^{2}$ is constant, i.e. $n=3$ or $\varpi=0$; since $k>0$, $\varpi=0$ means $c=0$ or $c>0$ and $k=\frac{2}{c}$. So we get the desired result.
∎
We end this subsection by giving all two-dimensional Riemannian manifolds $(M,\langle\;,\;\rangle_{TM})$ for which $(E^{(r)}(M,k),h)$ has constant scalar curvature.
Proposition 4.6.
Let $(M,\langle\;,\;\rangle_{TM})$ be a 2-dimensional Riemannian manifold with curvature $R^{M}(X,Y)=-CX\wedge Y$ with $C\in C^{\infty}(M)$. Then,
for any $X,Y,Z\in\Gamma(TM)$ and $F\in\Gamma(\mathrm{so}(TM))$,
$$R^{\nabla^{E}}(X,Y)Z=-\varpi X\wedge Y(Z)+\frac{1}{2}Z(C)X\wedge Y\quad\mbox{%
and}\quad R^{\nabla^{E}}(X,Y)F=-\varpi[X\wedge Y,F]+k\langle F(X),Y\rangle_{TM%
}\mathrm{grad}(C),$$
where $\varpi=\frac{1}{2}C(2-kC)$ and $X\wedge Y$ is the skew-symmetric endomorphism of $TM$ given by
$$X\wedge Y(Z)=\langle Y,Z\rangle_{TM}X-\langle X,Z\rangle_{TM}Y.$$
Proof.
According to (12),
$$R^{\nabla^{E}}(X,Y)Z=R^{M}(X,Y)Z+H_{Y}H_{X}Z-H_{X}H_{Y}Z-\frac{1}{2}\nabla^{M}_{Z}(K^{M})(X\wedge Y),$$
where
$H_{X}Y=-\frac{1}{2}R^{M}(X,Y)=\frac{1}{2}CX\wedge Y$ and
$$\langle H_{X}F,Y\rangle_{TM}=-\frac{1}{2}k\;{\mathrm{tr}}(F\circ R^{M}(X,Y))=-%
\frac{1}{2}Ck\sum_{i=1}^{n}\langle F(X_{i}),X\wedge Y(X_{i})\rangle_{TM}=-Ck%
\langle F(Y),X\rangle_{TM}.$$
Thus $H_{X}F=CkF(X)$ and
$$H_{Y}H_{X}Z-H_{X}H_{Y}Z=\frac{1}{2}C^{2}k(X\wedge Z(Y)-Y\wedge Z(X))=\frac{1}{%
2}C^{2}kX\wedge Y(Z).$$
Moreover,
$$\displaystyle\nabla^{M}_{Z}(K^{M})(X\wedge Y)$$
$$\displaystyle=$$
$$\displaystyle\nabla^{M}_{Z}(K^{M}(X\wedge Y))-K^{M}(\nabla^{M}_{Z}X\wedge Y)-K%
^{M}(X\wedge\nabla^{M}_{Z}Y)$$
$$\displaystyle=$$
$$\displaystyle-\nabla^{M}_{Z}(CX\wedge Y)+C\nabla^{M}_{Z}X\wedge Y+CX\wedge%
\nabla^{M}_{Z}Y$$
$$\displaystyle=$$
$$\displaystyle-Z(C)X\wedge Y.$$
By adding the expressions above we get the first formula.
On the other hand,
$$R^{\nabla^{E}}(X,Y)F=\left\{(R^{\nabla^{E}}(X,Y)F)_{TM}\right\}+\left\{[R^{M}(%
X,Y),F]+H_{Y}H_{X}F-H_{X}H_{Y}F\right\},$$
where
$$\displaystyle\langle(R^{\nabla^{E}}(X,Y)F)_{TM},Z\rangle_{k}$$
$$\displaystyle=$$
$$\displaystyle-\langle R^{\nabla^{E}}(X,Y)Z,F\rangle_{k}$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{2}Z(C)\langle X\wedge Y,F\rangle_{k}$$
$$\displaystyle=$$
$$\displaystyle-\frac{k}{2}\langle\mathrm{grad}(C),Z\rangle_{TM}\sum_{i=1}^{n}%
\langle X\wedge Y(X_{i}),F(X_{i})\rangle_{TM}$$
$$\displaystyle=$$
$$\displaystyle k\langle F(X),Y\rangle_{TM}\langle\mathrm{grad}(C),Z\rangle_{TM}.$$
Thus $(R^{\nabla^{E}}(X,Y)F)_{TM}=k\langle F(X),Y\rangle_{TM}\mathrm{grad}(C).$ Furthermore,
$$[H_{Y},H_{X}]F=Ck(H_{Y}F(X)-H_{X}F(Y))=-\frac{1}{2}Ck(R^{M}(Y,F(X))+R^{M}(F(Y)%
,X))=-\frac{1}{2}C^{2}k([F,X\wedge Y]).$$
This completes the proof.
∎
Theorem 4.6.
Let $(M,\langle\;,\;\rangle_{TM})$ be a 2-dimensional Riemannian manifold. Then
$(E^{(r)}(M,k),h)$ has constant scalar curvature if and only if $(M,\langle\;,\;\rangle_{TM})$ has constant curvature $C=0$ or $C=\frac{2}{k}$.
Proof.
We choose an orthonormal basis $(X_{1},X_{2})$ such that $Ric^{M}(X_{i})=\rho_{i}X_{i}$ and we put $F_{12}=\frac{1}{\sqrt{2k}}X_{1}\wedge X_{2}$. The family $(X_{1},X_{2},F_{12})$ is a local orthonormal frame of $E(M,k)$. We have, for any vector field $Z$,
$$R^{\nabla^{E}}(X_{1},X_{2})Z=-\frac{1}{2}C(2-kC)X_{1}\wedge X_{2}(Z)\quad\mbox%
{and}\quad R^{\nabla^{E}}(X_{1},X_{2})F_{12}=-\sqrt{\frac{k}{2}}\mathrm{grad}(%
C).$$
Then,
$$\displaystyle\xi(X_{i},X_{i})$$
$$\displaystyle=$$
$$\displaystyle 2\varpi^{2}+k(X_{i}(C))^{2},\;i=1,2$$
$$\displaystyle\xi(F_{12},F_{12})$$
$$\displaystyle=$$
$$\displaystyle k|\mathrm{grad}(C)|^{2},$$
$$\displaystyle\xi(X_{1},X_{2})$$
$$\displaystyle=$$
$$\displaystyle kX_{1}(C)X_{2}(C),$$
$$\displaystyle\xi(X_{i},F_{12})$$
$$\displaystyle=$$
$$\displaystyle\varpi^{2}\sqrt{2k}\langle\mathrm{grad}(C),X_{1}\wedge X_{2}(X_{i%
})\rangle,\;i=1,2.$$
On the other hand
$$|R^{\nabla^{E}}|^{2}=4\varpi^{2}+2k|\mathrm{grad}(C)|^{2}.$$
Suppose that $(E^{(r)}(M,k),h)$ has constant scalar curvature. Equation (10), applied to $F_{12}$, gives
$$4\varpi^{2}-k|\mathrm{grad}(C)|^{2}=0.$$
Eliminating $|\mathrm{grad}(C)|^{2}$ between this relation and equation (11), we find
$$24C-3C^{2}(2-kC)^{2}=\mathrm{constant}.$$
Hence $C$ takes values in the zero set of a non-constant polynomial, which is finite; since $C$ is continuous and $M$ is connected, $C$ must be constant. Then $\mathrm{grad}(C)=0$, so $\varpi=0$, i.e. $C=0$ or $C=\frac{2}{k}$.
∎
5 The Sasaki metric with positive scalar curvature on the unit tangent bundle of three-dimensional unimodular Lie groups
The purpose of this section is to prove the following result.
Theorem 5.1.
Let $G$ be a three dimensional connected unimodular Lie group. Then there exists a left invariant Riemannian metric on $G$ such that $(T^{(1)}G,h)$ has positive scalar curvature.
Proof.
Let $G$ be a connected $3$-dimensional unimodular Lie group with left invariant metric. By using an argument developed in [13],
there exists an orthonormal basis $(X_{1},X_{2},X_{3})$ of left invariant vector fields such that
$$[X_{1},X_{2}]=mX_{3},\hskip 5.690551pt\;[X_{1},X_{3}]=nX_{2}\quad\mbox{and}%
\quad[X_{2},X_{3}]=pX_{1}.$$
By a straightforward computation using the Koszul formula, we get that the Levi-Civita connection in this case is given by
$$\displaystyle\nabla_{X_{1}}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}(-m+n+p)X_{2}\wedge X_{3},$$
$$\displaystyle\nabla_{X_{2}}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}(m+n+p)X_{1}\wedge X_{3},$$
$$\displaystyle\nabla_{X_{3}}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2}(m+n-p)X_{1}\wedge X_{2}.$$
Thus, we obtain the following formula for the Riemann curvature tensor $R^{G}$
$$R^{G}(X_{i},X_{j})=\mu_{ij}X_{i}\wedge X_{j},$$
where $i,j\in\{1,2,3\}$, $i<j$ and $\mu_{ij}$ are constants given by
$$\displaystyle\mu_{12}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{4}\left((p+n+m)(-n-p+m)+(-p+m+n)(-n-p+m)+(-p+n+m)(p+n+m)%
\right),$$
$$\displaystyle\mu_{13}$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{4}\left((-p+n+m)(-n-p+m)+(p+n+m)(-n-p+m)-(-p+m+n)(p+n+m%
)\right),$$
$$\displaystyle\mu_{23}$$
$$\displaystyle=$$
$$\displaystyle-\frac{1}{4}\left((-p+m+n)(p+n+m)-(-p+m+n)(-n-p+m)+(p+n+m)(-n-p+m%
)\right).$$
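The expressions for the $\mu_{ij}$ can be cross-checked numerically: compute the Levi-Civita connection directly from the Koszul formula for left-invariant vector fields and form the curvature in the sign convention consistent with the formulas above, $R(X,Y)=\nabla_{[X,Y]}-[\nabla_{X},\nabla_{Y}]$. A Python sketch (the sample values of $m,n,p$ are arbitrary):

```python
import numpy as np

def mu_check(m, n, p):
    E = np.eye(3)
    # structure constants: [X1,X2] = m X3, [X1,X3] = n X2, [X2,X3] = p X1
    C = np.zeros((3, 3, 3))
    C[0, 1], C[1, 0] = m * E[2], -m * E[2]
    C[0, 2], C[2, 0] = n * E[1], -n * E[1]
    C[1, 2], C[2, 1] = p * E[0], -p * E[0]
    # Koszul formula for left-invariant fields:
    # <nabla_{Xi}Xj, Xk> = (1/2)(<[Xi,Xj],Xk> - <[Xj,Xk],Xi> + <[Xk,Xi],Xj>)
    G = np.zeros((3, 3, 3))          # G[i][k, j] = <nabla_{Xi}Xj, Xk>
    for i in range(3):
        for j in range(3):
            for k in range(3):
                G[i, k, j] = 0.5 * (C[i, j][k] - C[j, k][i] + C[k, i][j])
    nabla = lambda u: sum(u[l] * G[l] for l in range(3))
    lie = lambda u, v: sum(u[a] * v[b] * C[a, b] for a in range(3) for b in range(3))
    wedge = lambda u, v: np.outer(u, v) - np.outer(v, u)  # (X^Y)Z = <Y,Z>X - <X,Z>Y
    R = lambda u, v: nabla(lie(u, v)) - (nabla(u) @ nabla(v) - nabla(v) @ nabla(u))
    mu12 = ((p+n+m)*(-n-p+m) + (-p+m+n)*(-n-p+m) + (-p+n+m)*(p+n+m)) / 4
    mu13 = -((-p+n+m)*(-n-p+m) + (p+n+m)*(-n-p+m) - (-p+m+n)*(p+n+m)) / 4
    mu23 = -((-p+m+n)*(p+n+m) - (-p+m+n)*(-n-p+m) + (p+n+m)*(-n-p+m)) / 4
    assert np.allclose(R(E[0], E[1]), mu12 * wedge(E[0], E[1]))
    assert np.allclose(R(E[0], E[2]), mu13 * wedge(E[0], E[2]))
    assert np.allclose(R(E[1], E[2]), mu23 * wedge(E[1], E[2]))
    return mu12, mu13, mu23

m12, m13, m23 = mu_check(0.5, 1/3, 0.25)
assert abs(m12 - 71/576) < 1e-12
```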
The scalar curvature of the unit tangent sphere bundle $(T^{(1)}G,h)$ of $G$ equipped with the Sasaki metric is given, for any $(x,a)\in T^{(1)}G$, by
$$\tau(x,a)=1-\mu_{12}-\mu_{13}-\mu_{23}-\frac{1}{4}\xi_{x}(a,a),$$
where $\xi(a,a)=\sum_{i,j=1}^{3}|R^{G}(X_{i},X_{j})a|^{2}$. We have
$$\xi(X_{1},X_{1})=2(\mu_{12}^{2}+\mu_{13}^{2}),\hskip 5.690551pt\;\xi(X_{2},X_{%
2})=2(\mu_{12}^{2}+\mu_{23}^{2})\quad\mbox{and}\quad\xi(X_{3},X_{3})=2(\mu_{13%
}^{2}+\mu_{23}^{2}).$$
Put
$$\displaystyle\lambda_{1}$$
$$\displaystyle=$$
$$\displaystyle\mu_{12}^{2}+\mu_{13}^{2}+4(\mu_{12}+\mu_{13}+\mu_{23}-1),$$
$$\displaystyle\lambda_{2}$$
$$\displaystyle=$$
$$\displaystyle\mu_{12}^{2}+\mu_{23}^{2}+4(\mu_{12}+\mu_{13}+\mu_{23}-1),$$
$$\displaystyle\lambda_{3}$$
$$\displaystyle=$$
$$\displaystyle\mu_{13}^{2}+\mu_{23}^{2}+4(\mu_{12}+\mu_{13}+\mu_{23}-1).$$
Then, the scalar curvature $\tau$ of $(T^{(1)}G,h)$ is positive if and only if $\lambda_{i}<0$ for all $i\in\{1,2,3\}$.
There are values of the parameters $m,n$ and $p$ for which $\lambda_{i}$ is negative for all $i\in\{1,2,3\}:$
1.
For $m=\frac{1}{2}$, $n=\frac{1}{3}$ and $p=\frac{1}{4}:$ In this case, the Lie group $G$ is isomorphic to $SO(3)$ or $SU(2)$,
$$\lambda_{1}=-\frac{543127}{165888},\hskip 5.690551pt\;\lambda_{2}=-\frac{54567%
5}{165888}\quad\mbox{and}\quad\lambda_{3}=-\frac{542035}{165888}.$$
2.
For $m=\frac{1}{2}$, $n=\frac{1}{3}$ and $p=-\frac{1}{4}:$ $G\cong SL(2,\hbox{\bb R})$ or $O(1,2)$,
$$\lambda_{1}=-\frac{505879}{165888},\hskip 5.690551pt\;\lambda_{2}=-\frac{50405%
9}{165888}\quad\mbox{and}\quad\lambda_{3}=-\frac{522259}{165888}.$$
3.
For $m=\frac{1}{2}$, $n=\frac{1}{3}$ and $p=0:$ $G\cong E(2)$,
$$\lambda_{1}=-\frac{33547}{10368},\hskip 5.690551pt\;\lambda_{2}=-\frac{33347}{%
10368}\quad\mbox{and}\quad\lambda_{3}=-\frac{33847}{10368}.$$
4.
For $m=\frac{1}{2}$, $n=-\frac{1}{3}$ and $p=0:$ $G\cong E(1,1),$
$$\lambda_{1}=-\frac{41083}{10368},\hskip 5.690551pt\;\lambda_{2}=-\frac{41123}{10368}\quad\mbox{and}\quad\lambda_{3}=-\frac{41143}{10368}.$$
5.
For $m=\frac{1}{2}$, $n=0$ and $p=0:$ $G\cong H(3,\hbox{\bb R}),$
$$\lambda_{1}=-\frac{475}{128},\hskip 5.690551pt\;\lambda_{2}=-\frac{475}{128}\quad\mbox{and}\quad\lambda_{3}=-\frac{479}{128}.$$
∎
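The rational values of $\lambda_{1},\lambda_{2},\lambda_{3}$ listed in cases 1–3 above can be reproduced with exact arithmetic from the formulas for $\mu_{ij}$ and $\lambda_{i}$; a short Python check:

```python
from fractions import Fraction as Fr

def lambdas(m, n, p):
    # mu_ij as in the proof, with a = m+n+p, b = m-n-p, c = m+n-p
    a, b, c = m + n + p, m - n - p, m + n - p
    mu12 = (a * b + c * b + c * a) / 4
    mu13 = -(c * b + a * b - c * a) / 4
    mu23 = -(c * a - c * b + a * b) / 4
    s = 4 * (mu12 + mu13 + mu23 - 1)
    return mu12**2 + mu13**2 + s, mu12**2 + mu23**2 + s, mu13**2 + mu23**2 + s

# case 1: m = 1/2, n = 1/3, p = 1/4
assert lambdas(Fr(1, 2), Fr(1, 3), Fr(1, 4)) == \
    (Fr(-543127, 165888), Fr(-545675, 165888), Fr(-542035, 165888))
# case 2: p = -1/4
assert lambdas(Fr(1, 2), Fr(1, 3), Fr(-1, 4)) == \
    (Fr(-505879, 165888), Fr(-504059, 165888), Fr(-522259, 165888))
# case 3: p = 0
assert lambdas(Fr(1, 2), Fr(1, 3), Fr(0)) == \
    (Fr(-33547, 10368), Fr(-33347, 10368), Fr(-33847, 10368))
```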
References
(1)
M.T.K. Abbassi, $g$-natural metrics: new horizons
in the geometry of tangent bundles of Riemannian manifolds, Note Mat. 1 (2008), suppl. n. 1, 6-35.
(2)
A. Besse, Einstein manifolds, Springer-Verlag, Berlin-Heidelberg-New York (1987).
(3)
E. Boeckx and L. Vanhecke, Unit tangent sphere bundle with constant scalar curvature, Czechoslovak Mathematical Journal 51 (126) (2001), 523-544.
(4)
Borisenko, A. A., Yampolsky, A. L., On the Sasaki metric of the tangent and the normal bundles, Sov. Math., Dokl. 35 (1987), 479-482.
(5)
M. Boucetta and H. Essoufi, The geometry of generalized Cheeger-Gromoll metrics on the total space of transitive Euclidean Lie algebroids, arXiv preprint arXiv:1808.01254 (2018). To appear in Journal of Geometry and Physics.
(6)
G. R. Jensen, Homogeneous Einstein spaces of dimension four, J. Differential Geometry
3 (1969) 309-349.
(7)
O. Kowalski: Curvature of the induced Riemannian metric of the tangent bundle of Riemannian manifold, J. Reine Angew. Math. 250 (1971), 124-129.
(8)
Kowalski, O., Sekizawa, M., On tangent sphere bundles with small or large constant radius, Ann. Global Anal. Geom. 18 (2000), 207-219.
(9)
A. Knapp, Lie Groups Beyond an Introduction, Progress in Mathematics 140 (1996).
(10)
E. Musso and F. Tricerri: Riemannian metrics on tangent bundles, Ann. Math. Pura Appl. (4) 150 (1988), 1-20.
(11)
Nagy, P. T., Geodesics on the tangent sphere bundle of a Riemannian manifold, Geom. Dedicata 7 (1978), 233-243.
(12)
John C. Nash, Positive Ricci curvature on fibre bundles. J. Differential Geom. 14 (1979), no. 2, 241-254.
(13)
J. Milnor, Curvatures of left invariant metrics on Lie groups, Advances in Mathematics 21 (1976), 293-329.
General solution of the Jeans
equations for triaxial galaxies with separable potentials
G. van de Ven,${}^{1}$ C. Hunter,${}^{2}$ E.K. Verolme,${}^{1}$ P.T. de Zeeuw${}^{1}$
${}^{1}$Sterrewacht Leiden, Postbus 9513, 2300 RA Leiden, The Netherlands
${}^{2}$Department of Mathematics, Florida State University,
Tallahassee, FL 32306-4510
E-mail: glenn@strw.leidenuniv.nl
(Accepted 0000 Month 00. Received 0000 Month 00; in original 0000 Month 00)
Abstract
The Jeans equations relate the second-order velocity moments to the
density and potential of a stellar system.
For general three-dimensional stellar systems, there are three
equations and six independent moments.
By assuming that the potential is triaxial and of separable Stäckel
form, the mixed moments vanish in confocal ellipsoidal coordinates.
Consequently, the three Jeans equations and three remaining
non-vanishing moments form a closed system of three highly-symmetric
coupled first-order partial differential equations in three
variables.
These equations were first derived by Lynden–Bell, over 40 years ago,
but have resisted solution by standard methods.
We present the general solution here.
We consider the two-dimensional limiting cases first.
We solve their Jeans equations by a new method which superposes
singular solutions.
The singular solutions, which are new, are standard Riemann–Green
functions.
The resulting solutions of the Jeans equations give the second moments
throughout the system in terms of prescribed boundary values of certain
second moments.
The two-dimensional solutions are applied to non-axisymmetric discs,
oblate and prolate spheroids, and also to the scale-free triaxial
limit.
There are restrictions on the boundary conditions which we discuss in
detail.
We then extend the method of singular solutions to the triaxial case,
and obtain a full solution, again in terms of prescribed boundary
values of second moments.
There are restrictions on these boundary values as well, but the
boundary conditions can all be specified in a single plane.
The general solution can be expressed in terms of complete
(hyper)elliptic integrals which can be evaluated in a straightforward
way, and provides the full set of second moments which can support a
triaxial density distribution in a separable triaxial potential.
keywords:
celestial mechanics, stellar dynamics – galaxies: elliptical and
lenticular, cD – galaxies: kinematics and dynamics –
galaxies: structure
1 Introduction
Much has been learned about the mass distribution and internal
dynamics of galaxies by modeling their observed kinematics with
solutions of the Jeans equations (e.g., Binney & Tremaine
1987). These are obtained
by taking velocity moments of the collisionless Boltzmann equation for
the phase-space distribution function $f$, and connect the second
moments (or the velocity dispersions, if the mean streaming motion is
known) directly to the density and the gravitational potential of the
galaxy, without the need to know $f$. In nearly all cases there are
fewer Jeans equations than velocity moments, so that additional
assumptions have to be made about the degree of anisotropy.
Furthermore, the resulting second moments may not correspond to a
physical distribution function $f\geq 0$. These significant drawbacks
have not prevented wide application of the Jeans approach to
the kinematics of galaxies, even though the results need to be
interpreted with care.
Fortunately, efficient analytic and numerical methods have been
developed in the past decade to calculate the full range of
distribution functions $f$ that correspond to spherical or
axisymmetric galaxies, and to fit them directly to kinematic
measurements (e.g., Gerhard 1993; Qian et
al. 1995; Rix et al. 1997; van der Marel et al. 1998). This has provided, for example,
accurate intrinsic shapes, mass-to-light ratios, and central black
hole masses (e.g., Verolme et al. 2002;
Gebhardt et al. 2003).
Many galaxy components are not spherical or axisymmetric, but have
triaxial shapes (Binney 1976,
1978). These include early-type bulges,
bars, and giant elliptical galaxies. In this geometry, there are three
Jeans equations, but little use has been made of them, as they contain
six independent second moments, three of which have to be
chosen ad-hoc (see, e.g., Evans, Carollo & de Zeeuw
2000). At the same time,
not much is known about the range of physical solutions, as very few
distribution functions have been computed, and even fewer have been
compared with kinematic data
(but see Zhao 1996).
An exception is provided by the special set of triaxial mass models
that have a gravitational potential of Stäckel form.
In these systems, the Hamilton–Jacobi equation separates in
orthogonal curvilinear coordinates
(Stäckel 1891),
so that all orbits have three exact integrals of motion, which are
quadratic in the velocities.
The associated mass distributions can have arbitrary central axis
ratios and a large range of density profiles, but they all have cores
rather than central density cusps, which implies that they do not
provide perfect fits to galaxies
(de Zeeuw, Peletier & Franx 1986).
Even so, they capture much of the rich internal dynamics of large
elliptical galaxies
(de Zeeuw 1985a, hereafter Z85;
Statler 1987,
1991;
Arnold, de Zeeuw & Hunter 1994).
Numerical and analytic distribution functions have been constructed
for these models
(e.g., Bishop 1986;
Statler 1987;
Dejonghe & de Zeeuw 1988;
Hunter & de Zeeuw 1992, hereafter HZ92;
Mathieu & Dejonghe 1999),
and their projected properties have been used to provide constraints
on the intrinsic shapes of individual galaxies (e.g.,
Statler 1994a,
b;
Statler & Fry 1994;
Statler, DeJonghe & Smecker-Hane 1999;
Bak & Statler 2000;
Statler 2001).
The Jeans equations for triaxial Stäckel systems have
received little attention. This is remarkable, as
Eddington (1915)
already knew that the velocity ellipsoid in these models is everywhere
aligned with the confocal ellipsoidal coordinate system in which the
motion separates. This means that there are only three, and not six,
non-vanishing second-order velocity moments in these coordinates, so
that the Jeans equations form a closed system. However, Eddington, and
later Chandrasekhar (1939,
1940), did not study the velocity moments,
but instead assumed a form for the distribution function, and then
determined which potentials are consistent with it. Lynden–Bell
(1960) was the first to derive the explicit
form of the Jeans equations for the triaxial Stäckel models. He
showed that they constitute a highly symmetric set of three
first-order partial differential equations (PDEs) for three unknowns,
each of which is a function of the three confocal ellipsoidal
coordinates, but he did not derive solutions. When it was realized
that the orbital structure in the triaxial Stäckel models is very
similar to that in the early numerical models for triaxial galaxies
with cores (Schwarzschild 1979; Z85),
interest in the second moments increased, and the Jeans equations were
solved for a number of special cases. These include the axisymmetric
limits and elliptic discs (Dejonghe & de Zeeuw
1988; Evans & Lynden–Bell
1989, hereafter EL89), triaxial galaxies
with only thin tube orbits (HZ92), and, most recently,
the scale-free limit
(Evans et al. 2000).
In all these cases the equations
simplify to a two-dimensional problem, which can be solved with
standard techniques after recasting two first-order equations into a
single second-order equation in one dependent variable. However, these
techniques do not carry over to a single third-order equation in one dependent
variable, which is the best that one could expect to have in the general case.
As a result, the general case has remained unsolved.
In this paper, we first present an alternative solution method for the
two-dimensional limiting cases which does not use the standard
approach, but instead uses superpositions of singular solutions. We
show that this approach can be extended to three dimensions, and
provides the general solution for the triaxial case in
closed form, which we give explicitly. We will apply our solutions in
a follow-up paper, and will use them together with the mean streaming
motions (Statler 1994a) to study the
properties of the observed velocity and dispersion fields of triaxial
galaxies.
In §2, we define our notation and
derive the Jeans equations for the triaxial Stäckel models in
confocal ellipsoidal coordinates, together with the continuity
conditions. We summarise the limiting cases, and show that the Jeans
equations for all the cases with two degrees of freedom correspond to
the same two-dimensional problem. We solve this problem in
§3, first by employing a standard approach with a
Riemann–Green function, and then via the singular solution
superposition method. We also discuss the choice of boundary
conditions in detail. We relate our solution to that derived by EL89
in Appendix A, and explain why it is different.
In §4, we
extend the singular solution approach to the three-dimensional
problem, and derive the general solution of the Jeans equations for
the triaxial case.
It contains complete (hyper)elliptic integrals, which we express as
single quadratures that can be numerically evaluated in a
straightforward way.
We summarise our conclusions in §5.
2 The Jeans equations for separable models
We first summarise the essential properties of the triaxial Stäckel
models in confocal ellipsoidal coordinates. Further details can be
found in Z85. We show that for these models the mixed second-order
velocity moments vanish, so that the Jeans equations form a closed
system. We derive the Jeans equations and find the corresponding
continuity conditions for the general case of a triaxial galaxy. We
then give an overview of the limiting cases and show that solving the
Jeans equations for the various cases with two degrees of freedom
reduces to an equivalent two-dimensional problem.
2.1 Triaxial Stäckel models
We define confocal ellipsoidal coordinates ($\lambda,\mu,\nu$) as the
three roots for $\tau$ of
$$\frac{x^{2}}{\tau+\alpha}+\frac{y^{2}}{\tau+\beta}+\frac{z^{2}}{\tau+\gamma}=1,$$
(2.1)
with ($x,y,z$) the usual Cartesian coordinates, and with constants
$\alpha,\beta$ and $\gamma$ such that $-\gamma\leq\nu\leq-\beta\leq\mu\leq-\alpha\leq\lambda$.
For each point ($x,y,z$), there is a unique set ($\lambda,\mu,\nu$),
but a given combination ($\lambda,\mu,\nu$) generally corresponds
to eight different points ($\pm x,\pm y,\pm z$).
We assume all three-dimensional Stäckel models in this paper to be
likewise eightfold symmetric.
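As a concrete numerical illustration (not part of the original derivation), the roots ($\lambda,\mu,\nu$) can be computed by clearing denominators in equation (2.1), which turns it into a monic cubic in $\tau$; the function name and the constants below are illustrative choices:

```python
import numpy as np

def confocal_coords(x, y, z, alpha, beta, gamma):
    """Solve eq. (2.1) for its three roots (nu <= mu <= lam).

    Multiplying (2.1) by (tau+alpha)(tau+beta)(tau+gamma) turns it
    into a monic cubic in tau; its three real roots are the confocal
    ellipsoidal coordinates of the point (x, y, z).
    """
    pa, pb, pc = (np.poly1d([1.0, c]) for c in (alpha, beta, gamma))
    cubic = pa * pb * pc - x * x * pb * pc - y * y * pa * pc - z * z * pa * pb
    nu, mu, lam = np.sort(np.roots(cubic.coeffs).real)
    return lam, mu, nu

# Illustrative constants, chosen so that -gamma <= nu <= -beta <= mu <= -alpha <= lam:
alpha, beta, gamma = -4.0, -2.0, -1.0
lam, mu, nu = confocal_coords(1.0, 2.0, 3.0, alpha, beta, gamma)
```

Each returned root satisfies (2.1), and their ordering reproduces the inequality chain stated above.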
Surfaces of constant $\lambda$ are ellipsoids, and surfaces of
constant $\mu$ and $\nu$ are hyperboloids of one and two sheets,
respectively (Fig. 1).
The confocal ellipsoidal coordinates are approximately Cartesian near
the origin and become a conical coordinate system at large radii with
a system of spheres together with elliptic and hyperbolic
cones (Fig. 3).
At each point, the three coordinate surfaces are perpendicular to each
other.
Therefore, the line element is of the form
$ds^{2}=P^{2}d\lambda^{2}+Q^{2}d\mu^{2}+R^{2}d\nu^{2}$, with the metric coefficients
$$P^{2}=\frac{(\lambda-\mu)(\lambda-\nu)}{4(\lambda+\alpha)(\lambda+\beta)(\lambda+\gamma)},\qquad Q^{2}=\frac{(\mu-\nu)(\mu-\lambda)}{4(\mu+\alpha)(\mu+\beta)(\mu+\gamma)},\qquad R^{2}=\frac{(\nu-\lambda)(\nu-\mu)}{4(\nu+\alpha)(\nu+\beta)(\nu+\gamma)}.$$
(2.2)
We restrict attention to models with a gravitational potential
$V_{S}(\lambda,\mu,\nu)$ of Stäckel form
(Weinacht 1924)
$$V_{S}=-\frac{F(\lambda)}{(\lambda-\mu)(\lambda-\nu)}-\frac{F(\mu)}{(\mu-\nu)(\mu-\lambda)}-\frac{F(\nu)}{(\nu-\lambda)(\nu-\mu)},$$
(2.3)
where $F(\tau)$ is an arbitrary smooth function.
Adding any linear function of $\tau$ to $F(\tau)$ changes $V_{S}$ by at
most a constant, and hence has no effect on the dynamics. Following
Z85, we use this freedom to write
$$F(\tau)=(\tau+\alpha)(\tau+\gamma)G(\tau),$$
(2.4)
where $G(\tau)$ is smooth. It equals the potential along the
intermediate axis. This choice will simplify the analysis of the large
radii behaviour of the various limiting cases.¹
¹Other, equivalent, choices include
$F(\tau)=-(\tau+\alpha)(\tau+\gamma)G(\tau)$ by HZ92, and
$F(\tau)=(\tau+\alpha)(\tau+\beta)U(\tau)$ by
de Zeeuw et al. (1986),
with $U(\tau)$ the potential along the short axis.
The density $\rho_{S}$ that corresponds to $V_{S}$ can be found from
Poisson’s equation or by application of Kuzmin’s
(1973) formula (see de Zeeuw
1985b). This formula shows that, once we
have chosen the central axis ratios and the density along the short
axis, the mass model is fixed everywhere by the requirement of
separability. For centrally concentrated mass models, $V_{S}$ has the
$x$-axis as long axis and the $z$-axis as short axis. In most cases
this is also true for the associated density
(de Zeeuw et al. 1986).
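The gauge freedom in $F(\tau)$ is easy to verify numerically. In the sketch below (the function $G$ is an arbitrary toy choice, not a model from the paper), adding a linear function of $\tau$ to $F$ in fact leaves $V_{S}$ of eq. (2.3) unchanged pointwise, which is consistent with the weaker statement above that it changes $V_{S}$ by at most a constant:

```python
import numpy as np

alpha, beta, gamma = -4.0, -2.0, -1.0            # illustrative constants

def V_S(lam, mu, nu, F):
    """Stackel potential, eq. (2.3)."""
    return (- F(lam) / ((lam-mu)*(lam-nu))
            - F(mu)  / ((mu-nu)*(mu-lam))
            - F(nu)  / ((nu-lam)*(nu-mu)))

G = lambda tau: -1.0 / np.sqrt(tau + 5.0)        # toy choice for G(tau)
F = lambda tau: (tau+alpha)*(tau+gamma)*G(tau)   # eq. (2.4)
F_lin = lambda tau: F(tau) + 0.7*tau - 1.3       # add an arbitrary linear function of tau

lam, mu, nu = 6.0, 3.0, 1.5
# V_S is unchanged to machine precision (a Lagrange divided-difference identity)
```

The underlying reason is that $\sum_{\rm cyc}\tau/[(\tau-\mu)(\tau-\nu)]$ and $\sum_{\rm cyc}1/[(\tau-\mu)(\tau-\nu)]$ both vanish identically for three distinct arguments.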
2.2 Velocity moments
A stellar system is completely described by its distribution function
(DF), which in general is a time-dependent function $f$ of the six
phase-space coordinates ($\mathbf{x},\mathbf{v}$). Assuming the system
to be in equilibrium ($df/dt=0$) and in steady-state ($\partial f/\partial t=0$), the DF is independent of time $t$ and satisfies the
(stationary) collisionless Boltzmann equation (CBE). Integration of
the DF over all velocities yields the zeroth-order velocity moment,
which is the density $\rho$ of the stellar system. The first- and
second-order velocity moments are defined as
$$\langle v_{i}\rangle(\mathbf{x})=\frac{1}{\rho}\iiint v_{i}\,f(\mathbf{x},\mathbf{v})\;{\mathrm{d}}^{3}v,\qquad\langle v_{i}v_{j}\rangle(\mathbf{x})=\frac{1}{\rho}\iiint v_{i}v_{j}\,f(\mathbf{x},\mathbf{v})\;{\mathrm{d}}^{3}v,$$
(2.5)
where $i,j=1,2,3$. The streaming motions $\langle v_{i}\rangle$
together with the symmetric second-order velocity moments $\langle v_{i}v_{j}\rangle$ provide the velocity dispersions $\sigma_{ij}^{2}=\langle v_{i}v_{j}\rangle-\langle v_{i}\rangle\langle v_{j}\rangle$.
The continuity equation that results from integrating the CBE over
all velocities, relates the streaming motion to the density $\rho$ of
the system. Integrating the CBE over all velocities
after multiplication by each of the three velocity components,
provides the Jeans equations, which relate the second-order velocity
moments to $\rho$ and $V$, the potential of the system.
Therefore, if the density and potential are known, we in general have one
continuity equation with three unknown first-order
velocity moments and three Jeans equations with six unknown
second-order velocity moments.
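The moment definitions in (2.5) translate directly into numerical estimates when the DF is available as a sample. The sketch below uses mock Gaussian velocities at a single point (purely illustrative, not a model from the paper) to compute $\langle v_{i}\rangle$, $\langle v_{i}v_{j}\rangle$ and the dispersion tensor $\sigma_{ij}^{2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Mock velocity samples at one position: anisotropic Gaussian, purely illustrative.
v = rng.normal(loc=[1.0, 0.0, 0.0], scale=[2.0, 1.0, 0.5], size=(200_000, 3))

mean_v = v.mean(axis=0)                          # <v_i>
second = np.einsum('ni,nj->ij', v, v) / len(v)   # <v_i v_j>
sigma2 = second - np.outer(mean_v, mean_v)       # sigma_ij^2 = <v_i v_j> - <v_i><v_j>
```

For this mock DF the recovered dispersion tensor is diagonal, with `diag(sigma2)` close to the input variances (4, 1, 0.25).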
The potential (2.3) is the most general form
for which the Hamilton–Jacobi equation separates (Stäckel
1890; Lynden–Bell
1962b; Goldstein
1980). All orbits have three exact isolating
integrals
of motion, which are quadratic in the velocities (e.g., Z85). It
follows that there are no irregular orbits, so that Jeans’
(1915) theorem is strictly valid
(Lynden–Bell 1962a; Binney
1982) and the DF is a function of the three
integrals. The orbital motion is a combination of three independent
one-dimensional motions — either an oscillation or a rotation — in
each of the three ellipsoidal coordinates. Different combinations of
rotations and oscillations result in four families of orbits in
triaxial Stäckel models (Kuzmin 1973; Z85): inner
(I) and outer (O) long-axis tubes, short (S) axis tubes and box
orbits. Stars on box orbits carry out an oscillation in all three
coordinates, so that they provide no net contribution to the mean
streaming. Stars on I- and O-tubes carry out a rotation in $\nu$ and
those on S-tubes a rotation in $\mu$, and oscillations in the other
two coordinates. The fractions of clockwise and counterclockwise stars
on these orbits may be unequal. This means that each of the tube
families can have at most one nonzero first-order velocity moment,
related to $\rho$ by the continuity equation.
Statler (1994a)
used this property to construct velocity fields for triaxial Stäckel
models.
It is not difficult to show by similar arguments (e.g., HZ92) that all
mixed second-order velocity moments also vanish
$$\langle v_{\lambda}v_{\mu}\rangle=\langle v_{\mu}v_{\nu}\rangle=\langle v_{\nu}v_{\lambda}\rangle=0.$$
(2.6)
Eddington (1915) already knew that in a
potential of the form (2.3), the axes of the
velocity ellipsoid at any given point are perpendicular to the
coordinate surfaces, so that the mixed second-order velocity moments are
zero. We are left with three second-order velocity moments, $\langle v_{\lambda}^{2}\rangle$, $\langle v_{\mu}^{2}\rangle$ and $\langle v_{\nu}^{2}\rangle$, related by three Jeans equations.
2.3 The Jeans equations
The Jeans equations for triaxial Stäckel models in confocal
ellipsoidal coordinates were first derived by Lynden–Bell
(1960).
We give an alternative derivation here, using the Hamilton equations.
We first write the DF as a function of ($\lambda,\mu,\nu$) and the
conjugate momenta
$$p_{\lambda}=P^{2}\frac{d\lambda}{dt},\quad p_{\mu}=Q^{2}\frac{d\mu}{dt},\quad p_{\nu}=R^{2}\frac{d\nu}{dt},$$
(2.7)
with the metric coefficients $P$, $Q$ and $R$ given in
(2.2).
In these phase-space coordinates the steady-state CBE reads
$$\frac{d\tau}{dt}\frac{\partial f}{\partial\tau}+\frac{dp_{\tau}}{dt}\frac{\partial f}{\partial p_{\tau}}=0,$$
(2.8)
where we have used the summation convention with respect to
$\tau=\lambda,\mu,\nu$. The Hamilton equations are
$$\frac{d\tau}{dt}=\frac{\partial H}{\partial p_{\tau}},\quad\frac{dp_{\tau}}{dt}=-\frac{\partial H}{\partial\tau},$$
(2.9)
with the Hamiltonian defined as
$$H=\frac{p_{\lambda}^{2}}{2P^{2}}+\frac{p_{\mu}^{2}}{2Q^{2}}+\frac{p_{\nu}^{2}}{2R^{2}}+V(\lambda,\mu,\nu).$$
(2.10)
The first Hamilton equation in (2.9) defines the
momenta (2.7) and gives no new information. The
second gives
$$\frac{dp_{\lambda}}{dt}=\frac{p_{\lambda}^{2}}{P^{3}}\frac{\partial P}{\partial\lambda}+\frac{p_{\mu}^{2}}{Q^{3}}\frac{\partial Q}{\partial\lambda}+\frac{p_{\nu}^{2}}{R^{3}}\frac{\partial R}{\partial\lambda}-\frac{\partial V}{\partial\lambda},$$
(2.11)
and similarly for $p_{\mu}$ and $p_{\nu}$, with the derivatives
with respect to $\lambda$ replaced by derivatives with respect to
$\mu$ and $\nu$, respectively.
We assume the potential to be of the form $V_{S}$ defined in
(2.3), and we substitute
(2.7) and
(2.11) in the CBE
(2.8).
We multiply this equation by $p_{\lambda}$ and integrate over all
momenta. The mixed second moments vanish
(2.6), so that we are left
with
$$\frac{3\langle fp_{\lambda}^{2}\rangle}{P^{3}}\frac{\partial P}{\partial\lambda}+\frac{\langle fp_{\mu}^{2}\rangle}{Q^{3}}\frac{\partial Q}{\partial\lambda}+\frac{\langle fp_{\nu}^{2}\rangle}{R^{3}}\frac{\partial R}{\partial\lambda}-\frac{1}{P^{2}}\frac{\partial}{\partial\lambda}\langle fp_{\lambda}^{2}\rangle-\langle f\rangle\frac{\partial V_{S}}{\partial\lambda}=0,$$
(2.12)
where we have defined the moments
$$\langle f\rangle\equiv\int f\,{\mathrm{d}}^{3}p=PQR\,\rho,\qquad\langle fp_{\lambda}^{2}\rangle\equiv\int p_{\lambda}^{2}f\,{\mathrm{d}}^{3}p=P^{3}QR\,T_{\lambda\lambda},$$
(2.13)
with the diagonal components of the stress tensor
$$T_{\tau\tau}(\lambda,\mu,\nu)\equiv\rho\langle v_{\tau}^{2}\rangle,\qquad\tau=\lambda,\mu,\nu.$$
(2.14)
The moments $\langle fp_{\mu}^{2}\rangle$ and $\langle fp_{\nu}^{2}\rangle$
follow from $\langle fp_{\lambda}^{2}\rangle$ by cyclic permutation
$\lambda\to\mu\to\nu\to\lambda$, for which
$P\!\to\!Q\!\to\!R\!\to\!P$. We substitute the definitions
(2.13) in eq. (2.12)
and carry out the partial differentiation in the fourth term. The
first term in (2.12) then cancels, and,
after rearranging the remaining terms and dividing by $PQR$, we obtain
$$\frac{\partial T_{\lambda\lambda}}{\partial\lambda}+\frac{T_{\lambda\lambda}-T_{\mu\mu}}{Q}\frac{\partial Q}{\partial\lambda}+\frac{T_{\lambda\lambda}-T_{\nu\nu}}{R}\frac{\partial R}{\partial\lambda}=-\rho\frac{\partial V_{S}}{\partial\lambda}.$$
(2.15)
Substituting the metric coefficients (2.2)
and carrying out the partial differentiations results in the
Jeans equations
$$\frac{\partial T_{\lambda\lambda}}{\partial\lambda}+\frac{T_{\lambda\lambda}-T_{\mu\mu}}{2(\lambda-\mu)}+\frac{T_{\lambda\lambda}-T_{\nu\nu}}{2(\lambda-\nu)}=-\rho\frac{\partial V_{S}}{\partial\lambda},$$
(2.16a)
$$\frac{\partial T_{\mu\mu}}{\partial\mu}+\frac{T_{\mu\mu}-T_{\nu\nu}}{2(\mu-\nu)}+\frac{T_{\mu\mu}-T_{\lambda\lambda}}{2(\mu-\lambda)}=-\rho\frac{\partial V_{S}}{\partial\mu},$$
(2.16b)
$$\frac{\partial T_{\nu\nu}}{\partial\nu}+\frac{T_{\nu\nu}-T_{\lambda\lambda}}{2(\nu-\lambda)}+\frac{T_{\nu\nu}-T_{\mu\mu}}{2(\nu-\mu)}=-\rho\frac{\partial V_{S}}{\partial\nu},$$
(2.16c)
where the equations for $\mu$ and $\nu$ follow from the one for
$\lambda$ by cyclic permutation. These equations are identical to
those derived by Lynden–Bell (1960).
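The step from (2.15) to (2.16a) uses only the logarithmic $\lambda$-derivatives of the metric coefficients $Q$ and $R$. This can be checked symbolically; the sketch below is our own verification (symbol names are arbitrary), not code from the paper:

```python
import sympy as sp

lam, mu, nu, a, b, g = sp.symbols('lambda mu nu alpha beta gamma')
# metric coefficients squared, eq. (2.2)
Q2 = (mu - nu)*(mu - lam) / (4*(mu + a)*(mu + b)*(mu + g))
R2 = (nu - lam)*(nu - mu) / (4*(nu + a)*(nu + b)*(nu + g))

# (1/Q) dQ/dlam = (1/2) d ln(Q^2)/dlam, and similarly for R
dlnQ = sp.diff(sp.log(Q2), lam) / 2
dlnR = sp.diff(sp.log(R2), lam) / 2
# these equal 1/(2(lam-mu)) and 1/(2(lam-nu)), the coefficients in (2.16a)
```

Both differences simplify to zero, confirming the coefficients of the stress differences in (2.16a).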
In self-consistent models, the density $\rho$ must equal $\rho_{S}$,
with $\rho_{S}$ related to the potential $V_{S}$
(2.3) by Poisson’s equation. The Jeans
equations, however, do not require self-consistency. Hence, we make no
assumptions on the form of the density other than that it is
triaxial, i.e., a function of $(\lambda,\mu,\nu)$, and that it tends
to zero at infinity. The resulting solutions for the stresses
$T_{\tau\tau}$ do not all correspond to physical distribution
functions $f\geq 0$. The requirement that the $T_{\tau\tau}$ are
non-negative removes many (but not all) of the unphysical solutions.
2.4 Continuity conditions
We saw in §2.2 that the velocity ellipsoid is
everywhere aligned with the confocal ellipsoidal coordinates. When
$\lambda\to-\alpha$, the ellipsoidal coordinate surface degenerates
into the area inside the focal ellipse
(Fig. 2). The area outside the focal ellipse is
labeled by $\mu=-\alpha$. Hence, $T_{\lambda\lambda}$ is perpendicular
to the surface inside and $T_{\mu\mu}$ is perpendicular to
the surface outside the focal ellipse. On the focal ellipse,
i.e. when $\lambda=\mu=-\alpha$, both stress components therefore have
to be equal. Similarly, $T_{\mu\mu}$ and $T_{\nu\nu}$ are
perpendicular to the area inside ($\mu=-\beta$) and outside
($\nu=-\beta$) the two branches of the focal hyperbola, respectively,
and have to be equal on the focal hyperbola itself
($\mu=\nu=-\beta$). This results in the following two continuity
conditions
$$T_{\lambda\lambda}(-\alpha,-\alpha,\nu)=T_{\mu\mu}(-\alpha,-\alpha,\nu),$$
(2.17a)
$$T_{\mu\mu}(\lambda,-\beta,-\beta)=T_{\nu\nu}(\lambda,-\beta,-\beta).$$
(2.17b)
These conditions not only follow from geometrical arguments, but are
also precisely the conditions necessary to avoid singularities in the
Jeans equations (2.16) when $\lambda=\mu=-\alpha$
and $\mu=\nu=-\beta$.
For the sake of physical understanding, we will also obtain the
corresponding continuity conditions by geometrical arguments for the
limiting cases that follow.
2.5 Limiting cases
When two or all three of the constants $\alpha$, $\beta$ or $\gamma$
are equal, the triaxial Stäckel models reduce to limiting cases with
more symmetry and thus with fewer degrees of freedom.
We show in §2.6 that solving the Jeans equations
for all the models with two degrees of freedom reduces to the same
two-dimensional problem.
EL89 first solved this generalised problem and applied it to the disc,
oblate and prolate case.
Evans et al. (2000) showed
that the large radii case with scale-free DF reduces to the problem
solved by EL89.
We solve the same problem in a different way in §3,
and obtain a simpler expression than EL89.
In order to make application of the resulting solution
straightforward, and to define a unified notation, we first give an
overview of the limiting cases.
2.5.1 Oblate spheroidal coordinates: prolate potentials
When $\gamma=\beta$, the coordinate surfaces for constant $\lambda$ and
$\mu$ reduce to oblate spheroids and hyperboloids of revolution around
the $x$-axis. Since the range of $\nu$ is zero, it cannot be used as a
coordinate. The hyperboloids of two sheets are now planes containing
the $x$-axis. We label these planes by an azimuthal angle $\chi$,
defined as $\tan\chi=z/y$. In these oblate spheroidal coordinates
($\lambda,\mu,\chi$) the potential $V_{S}$ has the form (cf. Lynden–Bell 1962b)
$$V_{S}=-\frac{f(\lambda)-f(\mu)}{\lambda-\mu}-\frac{g(\chi)}{(\lambda+\beta)(\mu+\beta)},$$
(2.18)
where the function $g(\chi)$ is arbitrary, and
$f(\tau)=(\tau+\alpha)G(\tau)$, with $G(\tau)$ as in
eq. (2.4). The denominator of the second term is
proportional to $y^{2}+z^{2}$, so that these potentials are singular along
the entire $x$-axis unless $g(\chi)\equiv 0$. In this case, the
potential is prolate axisymmetric, and the associated density $\rho_{S}$
is generally prolate as well
(de Zeeuw et al. 1986).
The Jeans equations (2.16) reduce to
$$\displaystyle\frac{\partial T_{\lambda\lambda}}{\partial\lambda}+\frac{T_{%
\lambda\lambda}-T_{\mu\mu}}{2(\lambda-\mu)}+\frac{T_{\lambda\lambda}-T_{\chi%
\chi}}{2(\lambda+\beta)}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\lambda},$$
$$\displaystyle\frac{\partial T_{\mu\mu}}{\partial\mu}+\frac{T_{\mu\mu}-T_{%
\lambda\lambda}}{2(\mu-\lambda)}+\frac{T_{\mu\mu}-T_{\chi\chi}}{2(\mu+\beta)}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\mu},$$
(2.19)
$$\displaystyle\frac{\partial T_{\chi\chi}}{\partial\chi}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\chi}.$$
The continuity condition (2.17a) still holds,
except that the focal ellipse has become a focal circle. For
$\mu=-\beta$, the one-sheeted hyperboloid degenerates into the
$x$-axis, so that $T_{\mu\mu}$ is perpendicular to the $x$-axis and
coincides with $T_{\chi\chi}$. This gives the following two continuity
conditions
$$T_{\lambda\lambda}(-\alpha,-\alpha,\chi)=T_{\mu\mu}(-\alpha,-\alpha,\chi),$$
(2.20)
$$T_{\mu\mu}(\lambda,-\beta,\chi)=T_{\chi\chi}(\lambda,-\beta,\chi).$$
By integrating along characteristics, Hunter et al. (1990) obtained the solution of
(2.19) for the special prolate models in which only
the thin I- and O-tube orbits are populated, so that $T_{\mu\mu}\equiv 0$ and $T_{\lambda\lambda}\equiv 0$, respectively (cf. §2.5.6).
2.5.2 Prolate spheroidal coordinates: oblate potentials
When $\beta=\alpha$, we cannot use $\mu$ as a coordinate and replace it
by the azimuthal angle $\phi$, defined as $\tan\phi=y/x$. Surfaces of
constant $\lambda$ and $\nu$ are confocal prolate spheroids and
two-sheeted hyperboloids of revolution around the $z$-axis. The
prolate spheroidal coordinates ($\lambda,\phi,\nu$) follow from the
oblate spheroidal coordinates ($\lambda,\mu,\chi$) by taking
$\mu\!\to\!\nu$, $\chi\!\to\!\phi$ and
$\beta\!\to\!\alpha\!\to\!\gamma$.
The potential $V_{S}(\lambda,\phi,\nu)$ is (cf. Lynden–Bell
1962b)
$$V_{S}=-\frac{f(\lambda)-f(\nu)}{\lambda-\nu}-\frac{g(\phi)}{(\lambda+\alpha)(\nu+\alpha)}.$$
(2.21)
In this case, the denominator of the second term is proportional to
$R^{2}=x^{2}+y^{2}$, so that the potential is singular along the entire
$z$-axis, unless $g(\phi)$ vanishes. When $g(\phi)\equiv 0$, the
potential is oblate, and the same is generally true for the associated
density $\rho_{S}$.
The Jeans equations (2.16) reduce to
$$\displaystyle\frac{\partial T_{\lambda\lambda}}{\partial\lambda}+\frac{T_{%
\lambda\lambda}-T_{\phi\phi}}{2(\lambda+\alpha)}+\frac{T_{\lambda\lambda}-T_{%
\nu\nu}}{2(\lambda-\nu)}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\lambda},$$
$$\displaystyle\frac{\partial T_{\phi\phi}}{\partial\phi}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\phi}.$$
(2.22)
$$\displaystyle\frac{\partial T_{\nu\nu}}{\partial\nu}+\frac{T_{\nu\nu}-T_{%
\lambda\lambda}}{2(\nu-\lambda)}+\frac{T_{\nu\nu}-T_{\phi\phi}}{2(\nu+\alpha)}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\nu}.$$
For $\lambda=-\alpha$, the prolate spheroidal coordinate surfaces
reduce to the part of the $z$-axis between the foci. The part beyond
the foci is reached if $\nu=-\alpha$.
Hence, in this case, $T_{\lambda\lambda}$ is perpendicular to part of
the $z$-axis between, and $T_{\nu\nu}$ is perpendicular to the part of
the $z$-axis beyond the foci. They coincide at the foci
($\lambda=\nu=-\alpha$), resulting in one continuity condition. Two
more follow from the fact that $T_{\phi\phi}$ is perpendicular to the
(complete) $z$-axis, and thus coincides with
$T_{\lambda\lambda}$ and $T_{\nu\nu}$ on the part between and beyond
the foci, respectively:
$$T_{\lambda\lambda}(-\alpha,\phi,-\alpha)=T_{\nu\nu}(-\alpha,\phi,-\alpha),$$
$$T_{\lambda\lambda}(-\alpha,\phi,\nu)=T_{\phi\phi}(-\alpha,\phi,\nu),$$
(2.23)
$$T_{\nu\nu}(\lambda,\phi,-\alpha)=T_{\phi\phi}(\lambda,\phi,-\alpha).$$
For oblate models with thin S-tube orbits ($T_{\lambda\lambda}\equiv 0$, see §2.5.6), the analytical solution of
(2.22) was derived by Bishop
(1987) and by de Zeeuw & Hunter
(1990). Robijn & de Zeeuw
(1996) obtained the second-order velocity
moments for models in which the thin tube orbits were thickened
iteratively. Dejonghe & de Zeeuw (1988,
Appendix D) found a general solution by integrating along
characteristics. Evans (1990) gave an
algorithm for solving (2.22) numerically, and
Arnold (1995) computed a solution using
characteristics without assuming a separable potential.
2.5.3 Confocal elliptic coordinates: non-circular discs
In the principal plane $z=0$, the ellipsoidal coordinates reduce to
confocal elliptic coordinates ($\lambda,\mu$), with coordinate curves
that are ellipses ($\lambda$) and hyperbolae ($\mu$), that share their
foci on the symmetry $y$-axis. The potential of the perfect elliptic
disc, with its surface density distribution stratified on concentric
ellipses in the plane $z=0$ ($\nu=-\gamma$), is of Stäckel form both
in and outside this plane. By a superposition of perfect elliptic
discs, one can construct other surface densities and corresponding
disc potentials that are of Stäckel form in the plane $z=0$, but not
necessarily outside it (Evans & de Zeeuw
1992). The expression for the potential in
the disc is of the form (2.18) with
$g(\chi)\equiv 0$:
$$V_{S}=-\frac{f(\lambda)-f(\mu)}{\lambda\!-\!\mu},$$
(2.24)
where again $f(\tau)=(\tau+\alpha)G(\tau)$, so that $G(\tau)$ equals
the potential along the $y$-axis.
Omitting all terms with $\nu$ in (2.16), we
obtain the Jeans equations for non-circular Stäckel discs
$$\displaystyle\frac{\partial T_{\lambda\lambda}}{\partial\lambda}+\frac{T_{%
\lambda\lambda}-T_{\mu\mu}}{2(\lambda-\mu)}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\lambda},$$
(2.25)
$$\displaystyle\frac{\partial T_{\mu\mu}}{\partial\mu}+\frac{T_{\mu\mu}-T_{%
\lambda\lambda}}{2(\mu-\lambda)}$$
$$\displaystyle=$$
$$\displaystyle-\rho\frac{\partial V_{S}}{\partial\mu},$$
where now $\rho$ denotes a surface density. The parts of the $y$-axis
between and beyond the foci are labeled by $\lambda=-\alpha$
and $\mu=-\alpha$, resulting in the continuity condition
$$T_{\lambda\lambda}(-\alpha,-\alpha)=T_{\mu\mu}(-\alpha,-\alpha).$$
(2.26)
2.5.4 Conical coordinates: scale-free triaxial limit
At large radii, the confocal ellipsoidal coordinates
($\lambda,\mu,\nu$) reduce to conical coordinates ($r,\mu,\nu$), with
$r$ the usual distance to the origin, i.e., $r^{2}=x^{2}+y^{2}+z^{2}$ and
$\mu$ and $\nu$ angular coordinates on the sphere
(Fig. 3). The potential
$V_{S}(r,\mu,\nu)$ is scale-free, and of the form
$$V_{S}=-{\tilde{F}}(r)+\frac{F(\mu)-F(\nu)}{r^{2}(\mu\!-\!\nu)},$$
(2.27)
where ${\tilde{F}}(r)$ is arbitrary, and $F(\tau)=(\tau+\alpha)(\tau+\gamma)G(\tau)$, as in eq. (2.4).
The Jeans equations in conical coordinates follow from the general
triaxial case (2.16) by going to large
radii. Taking $\lambda\to r^{2}\gg-\alpha\geq\mu,\nu$, the stress
components approach each other and we have
$$\frac{T_{\lambda\lambda}-T_{\mu\mu}}{2(\lambda-\mu)},\;\frac{T_{\lambda\lambda}-T_{\nu\nu}}{2(\lambda-\nu)}\sim\frac{1}{r}\to 0,\quad\frac{\partial}{\partial\lambda}\to\frac{1}{2r}\frac{\partial}{\partial r}.$$
(2.28)
Hence, after multiplying (2.16a) by $2r$, the
Jeans equations for scale-free Stäckel models are
$$\frac{\partial T_{rr}}{\partial r}+\frac{2T_{rr}-T_{\mu\mu}-T_{\nu\nu}}{r}=-\rho\frac{\partial V_{S}}{\partial r},$$
$$\frac{\partial T_{\mu\mu}}{\partial\mu}+\frac{T_{\mu\mu}-T_{\nu\nu}}{2(\mu-\nu)}=-\rho\frac{\partial V_{S}}{\partial\mu},$$
(2.29)
$$\frac{\partial T_{\nu\nu}}{\partial\nu}+\frac{T_{\nu\nu}-T_{\mu\mu}}{2(\nu-\mu)}=-\rho\frac{\partial V_{S}}{\partial\nu}.$$
The general Jeans equations in conical coordinates, as derived by
Evans et al. (2000),
reduce to (2.29) for vanishing mixed second
moments. At the transition points between the curves of constant $\mu$
and $\nu$ ($\mu=\nu=-\beta$), the tensor components $T_{\mu\mu}$
and $T_{\nu\nu}$ coincide, resulting in the continuity condition
$$T_{\mu\mu}(r,-\beta,-\beta)=T_{\nu\nu}(r,-\beta,-\beta).$$
(2.30)
2.5.5 One-dimensional limits
There are several additional limiting cases with more symmetry for
which the form of $V_{S}$ (Lynden–Bell 1962b)
and the associated Jeans equations follow in a straightforward way
from the expressions that were given above.
We only mention spheres and circular discs.
When $\alpha\!=\!\beta\!=\!\gamma$, the variables $\mu$ and $\nu$ lose their
meaning and the ellipsoidal coordinates reduce to spherical
coordinates $(r,\theta,\phi)$. A steady-state spherical model
without a preferred axis is invariant under a rotation over the angles
$\theta$ and $\phi$, so that we are left with only one Jeans equation
in $r$, and $T_{\theta\theta}=T_{\phi\phi}$. This equation can
readily be obtained from the CBE in spherical coordinates (e.g.,
Binney & Tremaine 1987).
It also follows as a limit from the Jeans equations
(2.16) for triaxial Stäckel models or from any
of the above two-dimensional limiting cases. Consider for example
the Jeans equations in conical coordinates (2.29), and
take $\mu\to\theta$ and $\nu\to\phi$. The stress components $T_{rr}$
and $T_{\mu\mu}=T_{\nu\nu}=T_{\phi\phi}=T_{\theta\theta}$ depend only on
$r$, so that we are left with
$$\frac{dT_{rr}}{dr}+\frac{2(T_{rr}-T_{\theta\theta})}{r}=-\rho\frac{dV_{S}}{dr},$$
(2.31)
which is the well-known result for non-rotating spherical systems
(Binney & Tremaine 1987).
In a similar way, the one Jeans equation for the circular disc-case
follows from, e.g., the first equation of (2.25) by
taking $\mu=-\alpha$ and replacing $T_{\mu\mu}$ by $T_{\phi\phi}$,
where $\phi$ is the azimuthal angle defined in
§2.5.2.
With $\lambda+\alpha=R^{2}$ this gives
$$\frac{dT_{RR}}{dR}+\frac{T_{RR}-T_{\phi\phi}}{R}=-\rho\frac{dV_{S}}{dR},$$
(2.32)
which may be compared with Binney & Tremaine
(1987), their eq. (4.29).
2.5.6 Thin tube orbits
Each of the three tube orbit families in a triaxial Stäckel model
consists of a rotation in one of the ellipsoidal coordinates
and oscillations in the other two (§2.2).
The I-tubes, for example, rotate in $\nu$ and oscillate in $\lambda$
and $\mu$, with turning points $\mu_{1}$, $\mu_{2}$ and $\lambda_{0}$, so
that a typical orbit fills the volume
$$-\gamma\leq\nu\leq-\beta,\quad\mu_{1}\leq\mu\leq\mu_{2},\quad-\alpha\leq\lambda\leq\lambda_{0}.$$
(2.33)
When we restrict ourselves to infinitesimally thin I-tubes, i.e.,
$\mu_{1}=\mu_{2}$, there is no motion in the $\mu$-coordinate.
Therefore, the second-order velocity moment in this coordinate is
zero, and thus also the corresponding stress component
$T_{\mu\mu}^{\mathrm{I}}\equiv 0$.
As a result, eq. (2.16b) reduces to an
algebraic relation between $T_{\lambda\lambda}^{\mathrm{I}}$ and
$T_{\nu\nu}^{\mathrm{I}}$.
This relation can be used to eliminate $T_{\nu\nu}^{\mathrm{I}}$ and
$T_{\lambda\lambda}^{\mathrm{I}}$ from the remaining Jeans equations
(2.16a) and (2.16c)
respectively.
HZ92 solved the resulting two first-order PDEs (their Appendix B) and
showed that the same result is obtained by direct evaluation of the
second-order velocity moments, using the thin I-tube DF.
They derived similar solutions for thin O- and S-tubes, for which
there is no motion in the $\lambda$-coordinate, so that
$T_{\lambda\lambda}^{\mathrm{O}}\equiv 0$ and
$T_{\lambda\lambda}^{\mathrm{S}}\equiv 0$, respectively.
In Stäckel discs we have – besides the flat box orbits – only one
family of (flat) tube orbits.
For infinitesimally thin tube orbits $T_{\lambda\lambda}\equiv 0$,
so that the Jeans equations (2.25) reduce to two
different relations between $T_{\mu\mu}$ and the density and
potential.
In §3.4.4, we show how this places restrictions on
the form of the density and we give the solution for $T_{\mu\mu}$.
We also show that the general solution of (2.25),
which we obtain in §3, contains the thin tube result.
The same is true for the triaxial case: the general solution of
(2.16), which we derive in §4,
contains the three thin tube orbit solutions as special cases (§4.6.6).
2.6 All two-dimensional cases are similar
EL89 showed that the Jeans equations in oblate and prolate spheroidal
coordinates, (2.19) and (2.22), can
be transformed to a system that is equivalent to the two Jeans
equations (2.25) in confocal elliptic
coordinates.
Evans et al. (2000)
arrived at the same two-dimensional form for
Stäckel models with a scale-free DF. We introduce a transformation
which differs slightly from that of EL89, but has the advantage that
it removes the singular denominators in the Jeans equations.
The Jeans equations (2.19) for prolate potentials
can be simplified by introducing as dependent variables
$$\mathcal{T}_{\tau\tau}(\lambda,\mu)=(\lambda+\beta)^{\frac{1}{2}}(\mu+\beta)^{\frac{1}{2}}(T_{\tau\tau}-T_{\chi\chi}),\quad\tau=\lambda,\mu,$$
(2.34)
so that the first two equations in (2.19) transform to
$$\frac{\partial\mathcal{T}_{\lambda\lambda}}{\partial\lambda}+\frac{\mathcal{T}_{\lambda\lambda}-\mathcal{T}_{\mu\mu}}{2(\lambda-\mu)}=-(\lambda+\beta)^{\frac{1}{2}}(\mu+\beta)^{\frac{1}{2}}\biggl[\rho\frac{\partial V_{S}}{\partial\lambda}+\frac{\partial T_{\chi\chi}}{\partial\lambda}\biggr],$$
(2.35)
$$\frac{\partial\mathcal{T}_{\mu\mu}}{\partial\mu}+\frac{\mathcal{T}_{\mu\mu}-\mathcal{T}_{\lambda\lambda}}{2(\mu-\lambda)}=-(\mu+\beta)^{\frac{1}{2}}(\lambda+\beta)^{\frac{1}{2}}\biggl[\rho\frac{\partial V_{S}}{\partial\mu}+\frac{\partial T_{\chi\chi}}{\partial\mu}\biggr].$$
The third Jeans equation (2.5.1) can be integrated in
a straightforward fashion to give the $\chi$-dependence of
$T_{\chi\chi}$. It is trivially satisfied for prolate models with
$g(\chi)\equiv 0$. Hence if, following EL89, we regard
$T_{\chi\chi}(\lambda,\mu)$ as a function which can be prescribed,
then equations (2.35) have known right hand
sides, and are therefore of the same form as those of the disc case
(2.25). The singular denominator $(\mu+\beta)$ of
(2.5.1) has disappeared, and there is a boundary
condition
$$\mathcal{T}_{\mu\mu}(\lambda,-\beta)=0,$$
(2.36)
due to the second continuity condition of (2.20)
and the definition (2.34).
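As a concrete illustration, the change of variables (2.34) and its inverse are simple enough to sketch in a few lines (a hypothetical helper, not part of the analysis; `beta` is the focal parameter of the prolate coordinates, and the stresses are assumed given as scalars or arrays on a grid):

```python
import numpy as np

def to_calligraphic_T(lam, mu, T_tautau, T_chichi, beta):
    """Transformation (2.34): absorb a factor sqrt((lam+beta)*(mu+beta))
    and subtract T_chichi, which removes the singular denominators
    from the prolate Jeans equations."""
    return np.sqrt(lam + beta) * np.sqrt(mu + beta) * (T_tautau - T_chichi)

def from_calligraphic_T(lam, mu, calT, T_chichi, beta):
    """Invert (2.34) to recover the physical stress component T_tautau."""
    return calT / (np.sqrt(lam + beta) * np.sqrt(mu + beta)) + T_chichi
```

Note that this is consistent with the boundary condition (2.36): the prefactor $\sqrt{\mu+\beta}$ vanishes at $\mu=-\beta$, so $\mathcal{T}_{\mu\mu}(\lambda,-\beta)=0$ for any bounded stresses.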
A similar reduction applies for oblate potentials. The middle equation
of (2.5.2) can be integrated to give the
$\phi$-dependence of $T_{\phi\phi}$, and is trivially satisfied for
oblate models. The remaining two equations (2.5.2)
transform to
$$\frac{\partial\mathcal{T}_{\lambda\lambda}}{\partial\lambda}+\frac{\mathcal{T}_{\lambda\lambda}-\mathcal{T}_{\nu\nu}}{2(\lambda-\nu)}=-(\lambda+\alpha)^{\frac{1}{2}}(-\alpha-\nu)^{\frac{1}{2}}\biggl[\rho\frac{\partial V_{S}}{\partial\lambda}+\frac{\partial T_{\phi\phi}}{\partial\lambda}\biggr],$$
(2.37)
$$\frac{\partial\mathcal{T}_{\nu\nu}}{\partial\nu}+\frac{\mathcal{T}_{\nu\nu}-\mathcal{T}_{\lambda\lambda}}{2(\nu-\lambda)}=-(-\alpha-\nu)^{\frac{1}{2}}(\lambda+\alpha)^{\frac{1}{2}}\biggl[\rho\frac{\partial V_{S}}{\partial\nu}+\frac{\partial T_{\phi\phi}}{\partial\nu}\biggr],$$
in terms of the dependent variables
$$\mathcal{T}_{\tau\tau}(\lambda,\nu)=(\lambda+\alpha)^{\frac{1}{2}}(-\alpha-\nu)^{\frac{1}{2}}(T_{\tau\tau}-T_{\phi\phi}),\quad\tau=\lambda,\nu.$$
(2.38)
We now have two boundary conditions
$$\mathcal{T}_{\lambda\lambda}(-\alpha,\nu)=0,\quad\mathcal{T}_{\nu\nu}(\lambda,-\alpha)=0,$$
(2.39)
as a consequence of the last two continuity conditions of
(2.5.2) and the definitions
(2.38).
In the case of a scale-free DF, the stress components in the Jeans
equations in conical coordinates (2.5.4) have the form
$T_{\tau\tau}=r^{-\zeta}\mathcal{T}_{\tau\tau}(\mu,\nu)$, with
$\zeta>0$ and $\tau=r,\mu,\nu$. After substitution and multiplication
by $r^{\zeta+1}$, the first equation of (2.5.4)
reduces to
$$(2-\zeta)\mathcal{T}_{rr}+\mathcal{T}_{\mu\mu}+\mathcal{T}_{\nu\nu}=r^{\zeta+1}\rho\frac{\partial V_{S}}{\partial r}.$$
(2.40)
When $\zeta=2$, $\mathcal{T}_{rr}$ drops out, so that the relation
between $\mathcal{T}_{\mu\mu}$ and $\mathcal{T}_{\nu\nu}$ is known and
the remaining two Jeans equations can be readily solved
(Evans et al. 2000).
In all other cases, $\mathcal{T}_{rr}$ can be obtained from
(2.40) once we have solved the last two
equations of (2.5.4) for $\mathcal{T}_{\mu\mu}$ and
$\mathcal{T}_{\nu\nu}$.
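The algebraic step in (2.40) can be sketched directly (a minimal sketch; the function name and its arguments are hypothetical, with `rho_dVS_dr` standing in for $\rho\,\partial V_{S}/\partial r$ at the point of interest):

```python
def T_rr_scalefree(r, zeta, T_mumu, T_nunu, rho_dVS_dr):
    """Solve (2.40) for the radial stress coefficient:
       (2 - zeta)*T_rr + T_mumu + T_nunu = r**(zeta+1) * rho * dV_S/dr.
    For zeta == 2 the radial term drops out, and (2.40) instead fixes the
    relation between T_mumu and T_nunu (the case solved by Evans et al. 2000)."""
    if zeta == 2:
        raise ValueError("zeta = 2: T_rr drops out of (2.40)")
    return (r**(zeta + 1) * rho_dVS_dr - T_mumu - T_nunu) / (2 - zeta)
```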
This pair of equations is identical to the system of Jeans equations
(2.25) for the case of disc potentials.
The latter is the simplest form of the equivalent two-dimensional
problem for all Stäckel models with two degrees of freedom.
We solve it in the next section.
Once we have derived the solution of (2.25), we may
obtain the solution for prolate Stäckel potentials by replacing all
terms $-\rho\partial V_{s}/\partial\tau$ $(\tau=\lambda,\mu)$ by the
right-hand side of (2.35) and
substituting the transformations (2.34) for
$T_{\lambda\lambda}$ and $T_{\mu\mu}$. Similarly, our unified notation
makes the application of the solution of (2.25) to
the oblate case and to models with a scale-free DF straightforward
(§3.4).
3 The two-dimensional case
We first apply Riemann’s method to solve the Jeans equations
(2.25) in confocal elliptic coordinates for
Stäckel discs (§2.5.3).
This involves finding a Riemann–Green function that describes the
solution for a source point of stress.
The full solution is then obtained in compact form by representing the
known right-hand side terms as a sum of sources.
In §3.2, we introduce an alternative
approach, the singular solution method.
Unlike Riemann’s method, this can be extended to the three-dimensional
case, as we show in §4.
We analyse the choice of the boundary conditions in detail in
§3.3.
In §3.4, we apply the two-dimensional
solution to the axisymmetric and scale-free limits, and we also
consider a Stäckel disc built with thin tube orbits.
3.1 Riemann’s method
After differentiating the first Jeans equation of
(2.25) with respect to $\mu$ and eliminating terms
in $T_{\mu\mu}$ by applying the second equation, we obtain a
second-order partial differential equation (PDE) for
$T_{\lambda\lambda}$ of the form
$$\frac{\partial^{2}T_{\lambda\lambda}}{\partial\lambda\,\partial\mu}-\frac{3}{2(\lambda-\mu)}\frac{\partial T_{\lambda\lambda}}{\partial\lambda}+\frac{1}{2(\lambda-\mu)}\frac{\partial T_{\lambda\lambda}}{\partial\mu}=U_{\lambda\lambda}(\lambda,\mu).$$
(3.1)
Here $U_{\lambda\lambda}$ is a known function given by
$$U_{\lambda\lambda}=-\frac{1}{(\lambda-\mu)^{\frac{3}{2}}}\frac{\partial}{\partial\mu}\biggl[(\lambda-\mu)^{\frac{3}{2}}\rho\frac{\partial V_{S}}{\partial\lambda}\biggr]-\frac{\rho}{2(\lambda-\mu)}\frac{\partial V_{S}}{\partial\mu}.$$
(3.2)
We obtain a similar second-order PDE for $T_{\mu\mu}$ by interchanging
$\lambda\leftrightarrow\mu$. Both PDEs can be solved by Riemann’s
method. To solve them simultaneously, we define the linear
second-order differential operator
$$\mathcal{L}=\frac{\partial^{2}}{\partial\lambda\,\partial\mu}-\frac{c_{1}}{\lambda-\mu}\frac{\partial}{\partial\lambda}+\frac{c_{2}}{\lambda-\mu}\frac{\partial}{\partial\mu},$$
(3.3)
with $c_{1}$ and $c_{2}$ constants to be specified. Hence, the more
general second-order PDE
$$\mathcal{L}\,T=U,$$
(3.4)
with $T$ and $U$ functions of $\lambda$ and $\mu$ alone, reduces to
those for the two stress components by taking
$$T=T_{\lambda\lambda}:\quad c_{1}=\tfrac{3}{2},\quad c_{2}=\tfrac{1}{2},\quad U=U_{\lambda\lambda},$$
(3.5)
$$T=T_{\mu\mu}:\quad c_{1}=\tfrac{1}{2},\quad c_{2}=\tfrac{3}{2},\quad U=U_{\mu\mu}.$$
In what follows, we introduce a Riemann–Green function $\mathcal{G}$
and incorporate the left-hand side of (3.4)
into a divergence. Green’s theorem then allows us to rewrite the
surface integral as a line integral over its closed boundary, which
can be evaluated if $\mathcal{G}$ is chosen suitably. We determine the
Riemann–Green function $\mathcal{G}$ which satisfies the required
conditions, and then construct the solution.
3.1.1 Application of Riemann’s method
We form a divergence by defining a linear operator
$\mathcal{L}^{\star}$, called the adjoint of $\mathcal{L}$
(e.g., Copson 1975), as
$$\mathcal{L}^{\star}=\frac{\partial^{2}}{\partial\lambda\,\partial\mu}+\frac{\partial}{\partial\lambda}\biggl(\frac{c_{1}}{\lambda-\mu}\biggr)-\frac{\partial}{\partial\mu}\biggl(\frac{c_{2}}{\lambda-\mu}\biggr).$$
(3.6)
The combination $\mathcal{G}\mathcal{L}T-T\mathcal{L}^{\star}\mathcal{G}$ is a divergence for any twice differentiable function
$\mathcal{G}$ because
$$\mathcal{G}\mathcal{L}T-T\mathcal{L}^{\star}\mathcal{G}=\frac{\partial L}{\partial\lambda}+\frac{\partial M}{\partial\mu},$$
(3.7)
where
$$L(\lambda,\mu)=\frac{\mathcal{G}}{2}\frac{\partial T}{\partial\mu}-\frac{T}{2}\frac{\partial\mathcal{G}}{\partial\mu}-\frac{c_{1}\,\mathcal{G}\,T}{\lambda-\mu},$$
(3.8)
$$M(\lambda,\mu)=\frac{\mathcal{G}}{2}\frac{\partial T}{\partial\lambda}-\frac{T}{2}\frac{\partial\mathcal{G}}{\partial\lambda}+\frac{c_{2}\,\mathcal{G}\,T}{\lambda-\mu}.$$
We now apply the PDE (3.4) and the definition
(3.6) in zero-subscripted variables $\lambda_{0}$
and $\mu_{0}$. We integrate the divergence
(3.7) over the domain $D=\{(\lambda_{0},\mu_{0})$: $\lambda\leq\lambda_{0}\leq\infty,\mu\leq\mu_{0}\leq-\alpha\}$, with closed boundary $\Gamma$
(Fig. 4). It follows by Green’s theorem
that
$$\iint\limits_{D}{\mathrm{d}}\lambda_{0}\,{\mathrm{d}}\mu_{0}\,\Bigl(\mathcal{G}\mathcal{L}_{0}T-T\mathcal{L}_{0}^{\star}\mathcal{G}\Bigr)=\oint\limits_{\Gamma}{\mathrm{d}}\mu_{0}\,L(\lambda_{0},\mu_{0})-\oint\limits_{\Gamma}{\mathrm{d}}\lambda_{0}\,M(\lambda_{0},\mu_{0}),$$
(3.9)
where $\Gamma$ is circumnavigated counter-clockwise. Here $\mathcal{L}_{0}$
and $\mathcal{L}_{0}^{\star}$ denote the operators
(3.3) and (3.6) in
zero-subscripted variables. We shall seek a Riemann–Green function
$\mathcal{G}(\lambda_{0},\mu_{0})$ which solves the PDE
$$\mathcal{L}_{0}^{\star}\mathcal{G}=0,$$
(3.10)
in the interior of $D$. Then the left-hand side of
(3.9) becomes $\iint_{D}{\mathrm{d}}\lambda_{0}\,{\mathrm{d}}\mu_{0}\,\mathcal{G}(\lambda_{0},\mu_{0})\,U(\lambda_{0},\mu_{0})$. The
right-hand side of (3.9) has a
contribution from each of the four sides of the rectangular boundary
$\Gamma$. We suppose that $M(\lambda_{0},\mu_{0})$ and $L(\lambda_{0},\mu_{0})$
decay sufficiently rapidly as $\lambda_{0}\to\infty$ so that the
contribution from the boundary at $\lambda_{0}=\infty$ vanishes and the
infinite integration over $\lambda_{0}$ converges. Partial integration
of the remaining terms then gives for the boundary integral
$$\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\Bigl[\Bigl(\frac{\partial\mathcal{G}}{\partial\lambda_{0}}-\frac{c_{2}\,\mathcal{G}}{\lambda_{0}-\mu_{0}}\Bigr)T\Bigr]_{\mu_{0}=\mu}+\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\Bigl[\Bigl(\frac{\partial\mathcal{G}}{\partial\mu_{0}}+\frac{c_{1}\,\mathcal{G}}{\lambda_{0}-\mu_{0}}\Bigr)T\Bigr]_{\lambda_{0}=\lambda}+\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\Bigl[\Bigl(\frac{\partial T}{\partial\lambda_{0}}+\frac{c_{2}\,T}{\lambda_{0}-\mu_{0}}\Bigr)\mathcal{G}\Bigr]_{\mu_{0}=-\alpha}+\mathcal{G}(\lambda,\mu)\,T(\lambda,\mu).$$
(3.11)
We now impose on $\mathcal{G}$ the additional conditions
$$\mathcal{G}(\lambda,\mu)=1,$$
(3.12)
and
$$\frac{\partial\mathcal{G}}{\partial\lambda_{0}}-\frac{c_{2}\,\mathcal{G}}{\lambda_{0}-\mu_{0}}=0\quad\mathrm{on}\quad\mu_{0}=\mu,$$
(3.13)
$$\frac{\partial\mathcal{G}}{\partial\mu_{0}}+\frac{c_{1}\,\mathcal{G}}{\lambda_{0}-\mu_{0}}=0\quad\mathrm{on}\quad\lambda_{0}=\lambda.$$
Then eq. (3.9) gives the explicit
solution
$$T(\lambda,\mu)=\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\,\mathcal{G}(\lambda_{0},\mu_{0})\,U(\lambda_{0},\mu_{0})-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\Bigl[\Bigl(\frac{\partial T}{\partial\lambda_{0}}+\frac{c_{2}\,T}{\lambda_{0}-\mu_{0}}\Bigr)\mathcal{G}\Bigr]_{\mu_{0}=-\alpha},$$
(3.14)
for the stress component, once we have found the Riemann–Green
function $\mathcal{G}$.
3.1.2 The Riemann–Green function
Our prescription for the Riemann–Green function
$\mathcal{G}(\lambda_{0},\mu_{0})$ is that it satisfies the PDE
(3.10) as a function of $\lambda_{0}$ and $\mu_{0}$, and
that it satisfies the boundary conditions (3.12) and
(3.13) at the specific values $\lambda_{0}=\lambda$
and $\mu_{0}=\mu$. Consequently $\mathcal{G}$ depends on two sets of
coordinates. Henceforth, we denote it as
$\mathcal{G}(\lambda,\mu;\lambda_{0},\mu_{0})$.
An explicit expression for the Riemann–Green function which solves
(3.10) is (Copson 1975)
$$\mathcal{G}(\lambda,\mu;\lambda_{0},\mu_{0})=\frac{(\lambda_{0}-\mu_{0})^{c_{2}}(\lambda-\mu_{0})^{c_{1}-c_{2}}}{(\lambda-\mu)^{c_{1}}}F(w),$$
(3.15)
where the parameter $w$ is defined as
$$w=\frac{(\lambda_{0}-\lambda)(\mu_{0}-\mu)}{(\lambda_{0}-\mu_{0})(\lambda-\mu)},$$
(3.16)
and $F(w)$ is to be determined. Since $w=0$ when $\lambda_{0}=\lambda$
or $\mu_{0}=\mu$, it follows from (3.12) that the
function $F$ has to satisfy $F(0)=1$. It is straightforward to verify
that $\mathcal{G}$ satisfies the conditions (3.13),
and that eq. (3.10) reduces to the following
ordinary differential equation for $F(w)$
$$w(1-w)F^{\prime\prime}+[1-(2+c_{1}-c_{2})w]F^{\prime}-c_{1}(1-c_{2})F=0.$$
(3.17)
This is a hypergeometric equation (e.g., Abramowitz & Stegun
1965), and its unique solution satisfying
$F(0)=1$ is
$$F(w)={}_{2}F_{1}(c_{1},1\!-\!c_{2};1;w).$$
(3.18)
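This can be checked numerically (a sketch using SciPy's `hyp2f1`; the finite-difference step `h` is an arbitrary choice): the residual of the ODE (3.17) vanishes to discretisation accuracy for both parameter pairs in (3.5).

```python
import numpy as np
from scipy.special import hyp2f1

def hypergeometric_residual(c1, c2, w, h=1e-4):
    """Residual of the ODE (3.17) for F(w) = 2F1(c1, 1-c2; 1; w),
    with F' and F'' estimated by central differences."""
    F = lambda x: hyp2f1(c1, 1.0 - c2, 1.0, x)
    Fp = (F(w + h) - F(w - h)) / (2.0 * h)
    Fpp = (F(w + h) - 2.0 * F(w) + F(w - h)) / h**2
    return (w * (1.0 - w) * Fpp
            + (1.0 - (2.0 + c1 - c2) * w) * Fp
            - c1 * (1.0 - c2) * F(w))

# Residuals for the two parameter pairs of (3.5), at an interior point:
assert abs(hypergeometric_residual(1.5, 0.5, 0.3)) < 1e-4
assert abs(hypergeometric_residual(0.5, 1.5, 0.3)) < 1e-4
assert abs(hyp2f1(1.5, 0.5, 1.0, 0.0) - 1.0) < 1e-14   # F(0) = 1
```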
The Riemann–Green function (3.15) represents the
influence at a field point at $(\lambda,\mu)$ due to a source point at
$(\lambda_{0},\mu_{0})$. Hence it satisfies the PDE
$$\mathcal{L}\,\mathcal{G}(\lambda,\mu;\lambda_{0},\mu_{0})=\delta(\lambda_{0}-\lambda)\,\delta(\mu_{0}-\mu).$$
(3.19)
The first right-hand side term of the solution (3.14)
is a sum over the sources in $D$ which are due to the inhomogeneous term
$U$ in the PDE (3.4). That PDE is hyperbolic
with characteristic variables $\lambda$ and $\mu$. By choosing to
apply Green’s theorem to the domain $D$, we made it the domain of
dependence (Strauss 1992) of the field point $(\lambda,\mu)$
for (3.4), and hence we implicitly decided to
integrate that PDE in the direction of decreasing $\lambda$ and
decreasing $\mu$.
The second right-hand side term of the solution
(3.14) represents the solution to the homogeneous
PDE $\mathcal{L}\,T=0$ due to the boundary values of $T$ on the part
of the boundary $\mu=-\alpha$ which lies within the domain of
dependence. There is only one boundary term because we implicitly
require that $T(\lambda,\mu)\to 0$ as $\lambda\to\infty$. We verify
in §3.1.4 that this requirement is indeed
satisfied.
3.1.3 The disc solution
We obtain the Riemann–Green functions for $T_{\lambda\lambda}$ and
$T_{\mu\mu}$, labelled $\mathcal{G}_{\lambda\lambda}$ and
$\mathcal{G}_{\mu\mu}$ respectively, from expressions
(3.15) and (3.18) by substitution
of the values for the constants $c_{1}$ and $c_{2}$ from
(3.5). The hypergeometric function in
$\mathcal{G}_{\lambda\lambda}$ is the complete elliptic integral of
the second kind, $E(w)$, for which we use the definition
$E(w)=\int_{0}^{\frac{\pi}{2}}{\mathrm{d}}\theta\,\sqrt{1-w\sin^{2}\theta}$.
The hypergeometric function in $\mathcal{G}_{\mu\mu}$ can also be
expressed in terms of $E(w)$ using eq. (15.2.15) of
Abramowitz & Stegun (1965), so that we can write
$$\mathcal{G}_{\lambda\lambda}(\lambda,\mu;\lambda_{0},\mu_{0})=\frac{(\lambda_{0}-\mu_{0})^{\frac{3}{2}}}{(\lambda-\mu)^{\frac{1}{2}}}\,\frac{2E(w)}{\pi(\lambda_{0}-\mu)},$$
(3.20a)
$$\mathcal{G}_{\mu\mu}(\lambda,\mu;\lambda_{0},\mu_{0})=\frac{(\lambda_{0}-\mu_{0})^{\frac{3}{2}}}{(\lambda-\mu)^{\frac{1}{2}}}\,\frac{2E(w)}{\pi(\lambda-\mu_{0})}.$$
(3.20b)
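The reduction to $E(w)$ can also be verified numerically (a sketch; SciPy's `ellipe` follows the same parameter convention as the definition above, and for the $\lambda\lambda$ case we use Euler's transformation ${}_2F_1(a,b;c;w)=(1-w)^{c-a-b}\,{}_2F_1(c-a,c-b;c;w)$, which is standard but not quoted in the text):

```python
import numpy as np
from scipy.special import hyp2f1, ellipe

w = np.linspace(0.05, 0.95, 10)

# mu-mu case (c1 = 1/2, c2 = 3/2): 2F1(1/2, -1/2; 1; w) = (2/pi) E(w)
F_mumu = hyp2f1(0.5, -0.5, 1.0, w)

# lambda-lambda case (c1 = 3/2, c2 = 1/2): Euler's transformation gives
# 2F1(3/2, 1/2; 1; w) = (1-w)**(-1) 2F1(-1/2, 1/2; 1; w) = 2E(w)/(pi(1-w))
F_lamlam = hyp2f1(1.5, 0.5, 1.0, w)

assert np.allclose(F_mumu, 2.0 * ellipe(w) / np.pi)
assert np.allclose(F_lamlam, 2.0 * ellipe(w) / (np.pi * (1.0 - w)))
```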
Substituting these into (3.14) gives the solution
of the stress components throughout the disc as
$$T_{\lambda\lambda}(\lambda,\mu)=\frac{2}{\pi(\lambda-\mu)^{\frac{1}{2}}}\Biggl\{\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\,\frac{E(w)}{\lambda_{0}-\mu}\biggl\{\frac{\partial}{\partial\mu_{0}}\biggl[-(\lambda_{0}-\mu_{0})^{\frac{3}{2}}\rho\frac{\partial V_{S}}{\partial\lambda_{0}}\biggr]-\frac{(\lambda_{0}-\mu_{0})^{\frac{1}{2}}}{2}\rho\frac{\partial V_{S}}{\partial\mu_{0}}\biggr\}-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,\biggl[\frac{E(w)}{\lambda_{0}-\mu}\biggr]_{\mu_{0}=-\alpha}(\lambda_{0}+\alpha)\frac{{\mathrm{d}}}{{\mathrm{d}}\lambda_{0}}\Bigl[(\lambda_{0}+\alpha)^{\frac{1}{2}}T_{\lambda\lambda}(\lambda_{0},-\alpha)\Bigr]\Biggr\},$$
(3.21a)
$$T_{\mu\mu}(\lambda,\mu)=\frac{2}{\pi(\lambda-\mu)^{\frac{1}{2}}}\Biggl\{\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\,\frac{E(w)}{\lambda-\mu_{0}}\biggl\{\frac{\partial}{\partial\lambda_{0}}\biggl[-(\lambda_{0}-\mu_{0})^{\frac{3}{2}}\rho\frac{\partial V_{S}}{\partial\mu_{0}}\biggr]+\frac{(\lambda_{0}-\mu_{0})^{\frac{1}{2}}}{2}\rho\frac{\partial V_{S}}{\partial\lambda_{0}}\biggr\}-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,\biggl[\frac{E(w)}{\lambda-\mu_{0}}\biggr]_{\mu_{0}=-\alpha}\frac{{\mathrm{d}}}{{\mathrm{d}}\lambda_{0}}\Bigl[(\lambda_{0}+\alpha)^{\frac{3}{2}}T_{\mu\mu}(\lambda_{0},-\alpha)\Bigr]\Biggr\}.$$
(3.21b)
This solution depends on $\rho$ and $V_{S}$, which are assumed to be
known, and on $T_{\lambda\lambda}(\lambda,-\alpha)$ and
$T_{\mu\mu}(\lambda,-\alpha)$, i.e., the stress components on the part
of the $y$-axis beyond the foci.
Because these two stress components satisfy the first Jeans equation
of (2.25) at $\mu=-\alpha$, we are only free to choose
one of them, say $T_{\mu\mu}(\lambda,-\alpha)$.
$T_{\lambda\lambda}(\lambda,-\alpha)$ then follows by integrating this
first Jeans equation with respect to $\lambda$, using the continuity
condition (2.26) and requiring that
$T_{\lambda\lambda}(\lambda,-\alpha)\to 0$ as $\lambda\to\infty$.
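This last step is an ordinary first-order linear ODE along the boundary, which can be integrated inward from large $\lambda$ where $T_{\lambda\lambda}\to 0$. A sketch with toy, hypothetical profiles (the true inputs would be $\rho\,\partial V_{S}/\partial\lambda$ and the prescribed $T_{\mu\mu}(\lambda,-\alpha)$; consistent with the transformed system below, the first Jeans equation of (2.25) on $\mu=-\alpha$ reads ${\mathrm{d}}T_{\lambda\lambda}/{\mathrm{d}}\lambda+(T_{\lambda\lambda}-T_{\mu\mu})/[2(\lambda+\alpha)]=-\rho\,\partial V_{S}/\partial\lambda$):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = -1.0             # hypothetical coordinate parameter; lam >= -alpha = 1
lam_max = 1.0e4          # proxy for infinity, where T_lamlam ~ 0
lam_min = -alpha + 1e-3  # stop just above the focal point lam = -alpha

# Toy inputs, chosen only to decay fast enough at large lam (s > 1):
T_mumu  = lambda lam: (lam - alpha)**-2.5    # prescribed boundary stress
rho_dVS = lambda lam: (lam - alpha)**-3.0    # stands in for rho * dV_S/dlam

def rhs(lam, y):
    # First Jeans equation of (2.25), restricted to mu = -alpha.
    return -rho_dVS(lam) - (y[0] - T_mumu(lam)) / (2.0 * (lam + alpha))

# Integrate backwards from lam_max, where T_lamlam(lam, -alpha) -> 0.
sol = solve_ivp(rhs, (lam_max, lam_min), [0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
T_lamlam_boundary = sol.sol  # callable approximation to T_lamlam(lam, -alpha)
```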
3.1.4 Consistency check
We now investigate the behaviour of our solutions at large distances
and verify that our working hypothesis concerning the radial fall-off
of the functions $L$ and $M$ in eq. (3.8) is
correct. The solution (3.14) consists of two
components: an area integral due to the inhomogeneous right-hand side
term of the PDE (3.4), and a single integral
due to the boundary values. We examine them
in turn to obtain the conditions for the integrals to converge. Next,
we parameterise the behaviour of the density and potential at large
distances and apply it to the solution (3.21)
and to the energy equation (2.10) to check
if the convergence conditions are satisfied for physical
potential-density pairs.
As $\lambda_{0}\to\infty$, $w$ tends to the finite limit
$(\mu_{0}-\mu)/(\lambda-\mu)$. Hence $E(w)$ is finite, and so, by
(3.20),
$\mathcal{G}_{\lambda\lambda}=\mathcal{O}(\lambda_{0}^{1/2})$ and
$\mathcal{G}_{\mu\mu}=\mathcal{O}(\lambda_{0}^{3/2})$. Suppose now that
$U_{\lambda\lambda}(\lambda_{0},\mu_{0})=\mathcal{O}(\lambda_{0}^{-l_{1}-1})$
and $U_{\mu\mu}(\lambda_{0},\mu_{0})=\mathcal{O}(\lambda_{0}^{-m_{1}-1})$ as
$\lambda_{0}\to\infty$. The area integrals in the solution
(3.14) then converge, provided that
$l_{1}>\frac{1}{2}$ and $m_{1}>\frac{3}{2}$. These requirements place
restrictions on the behaviour of the density $\rho$ and potential
$V_{S}$ which we examine below. Since
$\mathcal{G}_{\lambda\lambda}(\lambda,\mu;\lambda_{0},\mu_{0})$ is
$\mathcal{O}(\lambda^{-1/2})$ as $\lambda\to\infty$, the area integral
component of $T_{\lambda\lambda}(\lambda,\mu)$ behaves as
$\mathcal{O}(\lambda^{-1/2}\int_{\lambda}^{\infty}\lambda_{0}^{-l_{1}-1/2}\,{\mathrm{d}}\lambda_{0})$
and so is $\mathcal{O}(\lambda^{-l_{1}})$. Similarly, with
$\mathcal{G}_{\mu\mu}(\lambda,\mu;\lambda_{0},\mu_{0})=\mathcal{O}(\lambda^{-3/2})$
as $\lambda\to\infty$, the first
component of $T_{\mu\mu}(\lambda,\mu)$ is
$\mathcal{O}(\lambda^{-m_{1}})$.
To analyse the second component of the solution
(3.14), we suppose that the boundary value
$T_{\lambda\lambda}(\lambda_{0},-\alpha)=\mathcal{O}(\lambda_{0}^{-l_{2}})$
and $T_{\mu\mu}(\lambda_{0},-\alpha)=\mathcal{O}(\lambda_{0}^{-m_{2}})$ as
$\lambda_{0}\to\infty$.
A similar analysis then shows that the boundary integrals
converge, provided that $l_{2}>\frac{1}{2}$ and $m_{2}>\frac{3}{2}$, and
that the second components of $T_{\lambda\lambda}(\lambda,\mu)$ and
$T_{\mu\mu}(\lambda,\mu)$ are $\mathcal{O}(\lambda^{-l_{2}})$ and
$\mathcal{O}(\lambda^{-m_{2}})$ as $\lambda\to\infty$, respectively.
We conclude that the convergence of the integrals in the solution
(3.14) requires that
$T_{\lambda\lambda}(\lambda,\mu)$ and $T_{\mu\mu}(\lambda,\mu)$ decay
at large distance as
$\mathcal{O}(\lambda^{-l})$ with $l>\frac{1}{2}$ and
$\mathcal{O}(\lambda^{-m})$ with $m>\frac{3}{2}$, respectively.
The requirements which we have imposed on
$U(\lambda_{0},\mu_{0})$ and $T(\lambda_{0},-\alpha)$ cause the
contributions to $\oint_{\Gamma}d\mu_{0}L(\lambda_{0},\mu_{0})$ in Green’s
formula (3.9) from the segment of the
path at large $\lambda_{0}$ to be negligible in all cases.
Having obtained the requirements for the Riemann–Green function analysis
to be valid, we now investigate the circumstances in which they apply.
Following Arnold et al. (1994), we
consider densities $\rho$ that decay as $N(\mu)\lambda^{-s/2}$
at large distances.
We suppose that the function $G(\tau)$ introduced in
eq. (2.4) is $\mathcal{O}(\tau^{\delta})$ for
$-\frac{1}{2}\leq\delta<0$ as $\tau\to\infty$.
The lower limit $\delta=-\frac{1}{2}$ corresponds to a potential
due to a finite total mass, while the upper limit restricts it to
potentials that decay to zero at large distances.
For the disc potential (2.24), we then have that
$f(\tau)=\mathcal{O}(\tau^{\delta+1})$ when $\tau\to\infty$.
Using the definition (3.2), we obtain
$$U_{\lambda\lambda}(\lambda,\mu)=\frac{f^{\prime}(\mu)-f^{\prime}(\lambda)}{2(\lambda-\mu)^{2}}\,\rho+\frac{V_{S}+f^{\prime}(\lambda)}{\lambda-\mu}\,\frac{\partial\rho}{\partial\mu},$$
(3.22a)
$$U_{\mu\mu}(\lambda,\mu)=\frac{f^{\prime}(\lambda)-f^{\prime}(\mu)}{2(\lambda-\mu)^{2}}\,\rho-\frac{V_{S}+f^{\prime}(\mu)}{\lambda-\mu}\,\frac{\partial\rho}{\partial\lambda},$$
(3.22b)
where $\rho$ is the surface density of the disc.
It follows that $U_{\lambda\lambda}(\lambda,\mu)$ is generally the
larger and is $\mathcal{O}(\lambda^{\delta-s/2-1})$ as $\lambda\to\infty$, whereas $U_{\mu\mu}(\lambda,\mu)$ is
$\mathcal{O}(\lambda^{-2-s/2})$. Hence, for the components of the
stresses (3.21) we have
$T_{\lambda\lambda}=\mathcal{O}(\lambda^{\delta-s/2})$ and
$T_{\mu\mu}=\mathcal{O}(\lambda^{-1-s/2})$. This estimate for
$U_{\lambda\lambda}$ assumes that $\partial\rho/\partial\mu$ is also
$\mathcal{O}(\lambda^{-s/2})$. It is too high if the density becomes
independent of angle at large distances, as it does for discs with
$s<3$ (Evans & de Zeeuw 1992).
Using these estimates with the requirements for integral convergence
that were obtained earlier, we obtain the conditions $s>2\delta+1$ and
$s>1$, respectively, for inhomogeneous terms in
$T_{\lambda\lambda}(\lambda,\mu)$ and $T_{\mu\mu}(\lambda,\mu)$ to be
valid solutions. The second condition implies the first because
$\delta<0$.
With $V_{S}(\lambda,\mu)=\mathcal{O}(\lambda^{\delta})$ at large
$\lambda$, it follows from the energy equation
(2.10) for bound orbits that the
second-order velocity moments $\langle v_{\tau}^{2}\rangle$ cannot exceed
$\mathcal{O}(\lambda^{\delta})$, and hence that stresses
$T_{\tau\tau}=\rho\langle v_{\tau}^{2}\rangle$ cannot exceed
$\mathcal{O}(\lambda^{\delta-s/2})$. This implies for
$T_{\lambda\lambda}(\lambda,\mu)$ that $s>2\delta+1$, and for
$T_{\mu\mu}(\lambda,\mu)$ we have the more stringent requirement that
$s>2\delta+3$. This last requirement is unnecessarily restrictive, but
an alternative form of the solution is needed to do better. Since that
alternative form arises naturally with the singular solution method,
we return to this issue in §3.2.6.
Thus, for the Riemann–Green solution to apply, we find the conditions
$s>1$ and $-\frac{1}{2}\leq\delta<0$. These conditions are satisfied
for the perfect elliptic disc $(s=3,\delta=-\frac{1}{2})$, and for
many other separable discs (Evans & de Zeeuw
1992).
3.1.5 Relation to the EL89 analysis
EL89 solve for the difference $\Delta\equiv T_{\lambda\lambda}-T_{\mu\mu}$ using a Green’s function method which is essentially
equivalent to the approach used here. EL89 give the Fourier transform
of their Green’s function, but do not invert it. We give the
Riemann–Green function for $\Delta$ in Appendix A, and then rederive
it by a Laplace transform analysis. Our Laplace transform analysis can
be recast in terms of Fourier transforms. When we do this, we obtain
a result which differs from that of EL89.
3.2 Singular Solution Superposition
We have solved the disc problem (2.25) by combining
the two Jeans equations into a single second-order PDE in one of the
stress components, and then applying Riemann’s method to it.
However, Riemann’s method and other standard techniques do not carry
over to a single third-order PDE in one dependent variable, which is
the best that one could expect to have in the general case.
We therefore introduce an alternative but equivalent method of
solution, also based on the superposition of source points.
In contrast to Riemann’s method, this singular solution method is
applicable to the general case of triaxial Stäckel models.
3.2.1 Simplified Jeans equations
We define new dependent variables
$$S_{\lambda\lambda}(\lambda,\mu)=|\lambda-\mu|^{\frac{1}{2}}\,T_{\lambda\lambda}(\lambda,\mu),$$
(3.23)
$$S_{\mu\mu}(\lambda,\mu)=|\mu-\lambda|^{\frac{1}{2}}\,T_{\mu\mu}(\lambda,\mu),$$
where $|.|$ denotes absolute value, introduced to make the square root
single-valued with respect to cyclic permutation of $\lambda\to\mu\to\lambda$. The Jeans equations (2.25) can then be
written in the form
$$\frac{\partial S_{\lambda\lambda}}{\partial\lambda}-\frac{S_{\mu\mu}}{2(\lambda-\mu)}=-|\lambda-\mu|^{\frac{1}{2}}\rho\frac{\partial V_{S}}{\partial\lambda}\equiv g_{1}(\lambda,\mu),$$
(3.24a)
$$\frac{\partial S_{\mu\mu}}{\partial\mu}-\frac{S_{\lambda\lambda}}{2(\mu-\lambda)}=-|\mu-\lambda|^{\frac{1}{2}}\rho\frac{\partial V_{S}}{\partial\mu}\equiv g_{2}(\lambda,\mu).$$
(3.24b)
For given density and potential, $g_{1}$ and $g_{2}$ are known functions
of $\lambda$ and $\mu$.
Next, we consider a simplified form of
(3.24) by taking for $g_{1}$ and $g_{2}$,
respectively
$$\tilde{g}_{1}(\lambda,\mu)=0,\quad\tilde{g}_{2}(\lambda,\mu)=\delta(\lambda_{0}-\lambda)\,\delta(\mu_{0}-\mu),$$
(3.25)
with $-\beta\leq\mu\leq\mu_{0}\leq-\alpha\leq\lambda\leq\lambda_{0}$. A similar
set of simplified equations is obtained by interchanging the
expressions for $\tilde{g}_{1}$ and $\tilde{g}_{2}$.
We refer to solutions of these simplified Jeans equations as
singular solutions.
Singular solutions can be interpreted as contributions to the stresses
at a fixed point $(\lambda,\mu)$ due to a source point in
$(\lambda_{0},\mu_{0})$ (Fig. 4). The full
stress at the field point can be obtained by adding all source point
contributions, each with a weight that depends on the local density and
potential. In what follows, we derive the singular solutions, and then
use this superposition principle to construct the solution for the
Stäckel discs in §3.2.6.
3.2.2 Homogeneous boundary problem
The choice (3.25) places constraints on the
functional form of $S_{\lambda\lambda}$ and $S_{\mu\mu}$.
The presence of the delta-functions in $\tilde{g}_{2}$
requires that $S_{\mu\mu}$ contains a term
$-\delta(\lambda_{0}\!-\!\lambda)\mathcal{H}(\mu_{0}\!-\!\mu)$,
with the step-function
$$\mathcal{H}(x-x_{0})=\begin{cases}0,&x<x_{0},\\ 1,&x\geq x_{0}.\end{cases}$$
(3.26)
Since $\mathcal{H}^{\prime}(y)=\delta(y)$, it follows that, by taking the
partial derivative of
$-\delta(\lambda_{0}\!-\!\lambda)\mathcal{H}(\mu_{0}\!-\!\mu)$ with
respect to $\mu$, the delta-functions are balanced. There is no
balance when $S_{\lambda\lambda}$ contains
$\delta(\lambda_{0}\!-\!\lambda)$, and similarly neither stress
component can contain $\delta(\mu_{0}\!-\!\mu)$. We can, however, add a
function of $\lambda$ and $\mu$ to both components, multiplied by
$\mathcal{H}(\lambda_{0}\!-\!\lambda)\mathcal{H}(\mu_{0}\!-\!\mu)$. In this way, we obtain a singular solution of the form
$$S_{\lambda\lambda}=A(\lambda,\mu)\,\mathcal{H}(\lambda_{0}-\lambda)\,\mathcal{H}(\mu_{0}-\mu),$$
(3.27)
$$S_{\mu\mu}=B(\lambda,\mu)\,\mathcal{H}(\lambda_{0}-\lambda)\,\mathcal{H}(\mu_{0}-\mu)-\delta(\lambda_{0}-\lambda)\,\mathcal{H}(\mu_{0}-\mu),$$
in terms of functions $A$ and $B$ that have to be determined.
Substituting these forms in the simplified Jeans equations and
matching terms yields two homogeneous equations
$$\frac{\partial A}{\partial\lambda}-\frac{B}{2(\lambda-\mu)}=0,\quad\frac{\partial B}{\partial\mu}-\frac{A}{2(\mu-\lambda)}=0,$$
(3.28)
and two boundary conditions
$$A(\lambda_{0},\mu)=\frac{1}{2(\lambda_{0}\!-\!\mu)},\quad B(\lambda,\mu_{0})=0.$$
(3.29)
Two alternative boundary conditions which are useful below can be
found as follows. Integrating the first of the
equations (3.28) with respect to $\lambda$ on $\mu=\mu_{0}$,
where $B(\lambda,\mu_{0})=0$, gives
$$A(\lambda,\mu_{0})=\frac{1}{2(\lambda_{0}\!-\!\mu_{0})}.$$
(3.30)
Similarly, integrating the second of equations (3.28) with
respect to $\mu$ on $\lambda=\lambda_{0}$ where $A$ is known gives
$$B(\lambda_{0},\mu)=\frac{\mu_{0}-\mu}{4(\lambda_{0}-\mu_{0})(\lambda_{0}-\mu)}.$$
(3.31)
Even though expressions (3.30) and
(3.31) do not add new information, they will be useful
for identifying contour integral formulas in the analysis which
follows.
We have reduced the problem of solving the Jeans equations
(2.25) for Stäckel discs to a two-dimensional
boundary problem. We solve this problem by first deriving a
one-parameter particular solution (§3.2.3)
and then making a linear combination of particular solutions with
different values of their free parameter, such that the four boundary
expressions are satisfied simultaneously
(§3.2.4). This gives the solution of
the homogeneous boundary problem.
3.2.3 Particular solution
To find a particular solution of the homogeneous equations
(3.28) with one free parameter $z$, we take as an
Ansatz
$$A(\lambda,\mu)\propto(\lambda-\mu)^{a_{1}}(z-\lambda)^{a_{2}}(z-\mu)^{a_{3}},$$
(3.32)
$$B(\lambda,\mu)\propto(\lambda-\mu)^{b_{1}}(z-\lambda)^{b_{2}}(z-\mu)^{b_{3}},$$
with $a_{i}$ and $b_{i}$ $(i=1,2,3)$ all constants. Hence,
$$\frac{\partial A}{\partial\lambda}=A\biggl(\frac{a_{1}}{\lambda-\mu}-\frac{a_{2}}{z-\lambda}\biggr)=\frac{1}{2(\lambda-\mu)}\biggl(2a_{1}A\,\frac{z-\mu}{z-\lambda}\biggr),$$
(3.33)
$$\frac{\partial B}{\partial\mu}=B\biggl(\frac{b_{1}}{\mu-\lambda}-\frac{b_{3}}{z-\mu}\biggr)=\frac{1}{2(\mu-\lambda)}\biggl(2b_{1}B\,\frac{z-\lambda}{z-\mu}\biggr),$$
where we have set $a_{2}=-a_{1}$ and $b_{3}=-b_{1}$. Taking $a_{1}=b_{1}=\frac{1}{2}$,
the homogeneous equations are satisfied if
$$\frac{z-\lambda}{z-\mu}=\frac{A}{B}=\frac{(z-\lambda)^{-\frac{1}{2}-b_{2}}}{(z-\mu)^{-\frac{1}{2}-a_{3}}},$$
(3.34)
so, $a_{3}=b_{2}=-\frac{3}{2}$. We denote the resulting solutions as
$$A^{P}(\lambda,\mu)=\frac{|\lambda-\mu|^{\frac{1}{2}}}{(z-\lambda)^{\frac{1}{2}}(z-\mu)^{\frac{3}{2}}},$$
(3.35a)
$$B^{P}(\lambda,\mu)=\frac{|\mu-\lambda|^{\frac{1}{2}}}{(z-\mu)^{\frac{1}{2}}(z-\lambda)^{\frac{3}{2}}}.$$
(3.35b)
These particular solutions follow from each other by interchanging
$\lambda$ and $\mu$, as required by the symmetry of the homogeneous
equations (3.28).
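The verification can be sketched symbolically. The homogeneous equations (3.28) are quoted earlier in the paper; here we assume the form that can be read off from (3.33), namely $\partial A/\partial\lambda=B/[2(\lambda-\mu)]$ and $\partial B/\partial\mu=A/[2(\mu-\lambda)]$, and check that the particular solutions (3.35) satisfy it (a sketch using sympy; the sample point is an arbitrary choice with $z>\lambda>\mu$):

```python
import sympy as sp

lam, mu, z = sp.symbols('lambda mu z', real=True)
half = sp.Rational(1, 2)

# Particular solutions (3.35), written for the ordering z > lambda > mu
A = (lam - mu)**half * (z - lam)**(-half) * (z - mu)**(-3*half)
B = (lam - mu)**half * (z - mu)**(-half) * (z - lam)**(-3*half)

# Homogeneous equations in the form read off from (3.33) -- an assumption,
# since (3.28) itself is quoted earlier in the paper:
#   dA/dlambda = B / (2 (lambda - mu)),  dB/dmu = A / (2 (mu - lambda))
res1 = sp.simplify(sp.diff(A, lam) - B / (2*(lam - mu)))
res2 = sp.simplify(sp.diff(B, mu) - A / (2*(mu - lam)))

# Numerical confirmation at a sample point with z > lambda > mu
pt = {z: 7, lam: 3, mu: 1}
print(res1.evalf(subs=pt), res2.evalf(subs=pt))  # both residuals vanish
```

Both residuals are identically zero: factoring out the common fractional powers in $\partial A/\partial\lambda - B/[2(\lambda-\mu)]$ leaves $(z-\lambda)+(\lambda-\mu)-(z-\mu)=0$.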
3.2.4 The homogeneous solution
We now consider a linear combination of the particular solution
(3.35) by integrating it over the free parameter $z$,
which we assume to be complex. We choose the integration contours in
the complex $z$-plane, such that the four boundary expressions can be
satisfied simultaneously.
We multiply $B^{P}(\lambda,\mu)$ by $(z\!-\!\mu_{0})^{\frac{1}{2}}$, and
integrate it over the closed contour $C^{\mu}$
(Fig. 5). When $\mu=\mu_{0}$, the
integrand is analytic within $C^{\mu}$, so that the integral vanishes by
Cauchy’s theorem.
Since both the
multiplication factor and the integration are independent of $\lambda$
and $\mu$, it follows from the superposition principle that the
homogeneous equations are still satisfied. In this way, the second of
the boundary expressions (3.29) is
satisfied.
Next, we also multiply $B^{P}(\lambda,\mu)$ by
$(z-\lambda_{0})^{-\frac{1}{2}}$, so that the contour $C^{\lambda}$
(Fig. 5) encloses a double pole when
$\lambda=\lambda_{0}$. From the Residue theorem
(e.g., Conway 1973), it then follows that
$$\oint\limits_{C^{\lambda}}\frac{(z-\mu_{0})^{\frac{1}{2}}}{(z-\lambda_{0})^{\frac{1}{2}}}B^{P}(\lambda_{0},\mu)\,{\mathrm{d}}z=\oint\limits_{C^{\lambda}}\frac{(z-\mu_{0})^{\frac{1}{2}}(\lambda_{0}-\mu)^{\frac{1}{2}}}{(z-\mu)^{\frac{1}{2}}(z-\lambda_{0})^{2}}\,{\mathrm{d}}z=2\pi i(\lambda_{0}-\mu)^{\frac{1}{2}}\biggl[\frac{{\mathrm{d}}}{{\mathrm{d}}z}\biggl(\frac{z-\mu_{0}}{z-\mu}\biggr)^{\frac{1}{2}}\biggr]_{z=\lambda_{0}}=\frac{\pi i(\mu_{0}-\mu)}{(\lambda_{0}-\mu_{0})^{\frac{1}{2}}(\lambda_{0}-\mu)},$$
(3.36)
which equals the boundary expression (3.31), up to the
factor $4\pi i(\lambda_{0}\!-\!\mu_{0})^{\frac{1}{2}}$.
Taking into account the latter factor, and the ratio
(3.34) of $A$ and $B$, we postulate as homogeneous
solution
$$A(\lambda,\mu)=\frac{1}{4\pi i}\frac{|\lambda-\mu|^{\frac{1}{2}}}{|\lambda_{0}-\mu_{0}|^{\frac{1}{2}}}\oint\limits_{C}\frac{(z-\mu_{0})^{\frac{1}{2}}\,{\mathrm{d}}z}{(z-\lambda)^{\frac{1}{2}}(z-\mu)^{\frac{3}{2}}(z-\lambda_{0})^{\frac{1}{2}}},$$
(3.37a)
$$B(\lambda,\mu)=\frac{1}{4\pi i}\frac{|\mu-\lambda|^{\frac{1}{2}}}{|\lambda_{0}-\mu_{0}|^{\frac{1}{2}}}\oint\limits_{C}\frac{(z-\mu_{0})^{\frac{1}{2}}\,{\mathrm{d}}z}{(z-\mu)^{\frac{1}{2}}(z-\lambda)^{\frac{3}{2}}(z-\lambda_{0})^{\frac{1}{2}}},$$
(3.37b)
with the choice for the contour $C$ still to be specified.
The integrands in (3.37) consist of multi-valued
functions that all come in pairs
$(z\!-\!\tau)^{1/2-m}(z\!-\!\tau_{0})^{1/2-n}$,
for integer $m$ and $n$, and for $\tau$ being
either $\lambda$ or $\mu$. Hence, we can make the integrands
single-valued by specifying two cuts in the complex $z$-plane, one
from $\mu$ to $\mu_{0}$ and one from $\lambda$ to $\lambda_{0}$. The
integrands are now analytic in the cut plane away from its cuts and
behave as $z^{-2}$ at large distances, so that the integral over a
circular contour with infinite radius is zero. (We evaluate the
square roots as $(z-\tau)^{\frac{1}{2}}=|z-\tau|^{\frac{1}{2}}\exp[\tfrac{i}{2}\arg(z-\tau)]$, with $|\arg(z-\tau)|\leq\pi$.)
Connecting the simple contours $C^{\lambda}$ and $C^{\mu}$ with this
circular contour shows that the cumulative contribution from each of
these contours cancels. As a consequence, every time we integrate over
the contour $C^{\lambda}$, we will obtain the same result by integrating
over $-C^{\mu}$ instead.
This means we integrate over $C^{\mu}$ and take the negative of the
result or, equivalently, integrate over $C^{\mu}$ in the clockwise
direction.
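The $z^{-2}$ decay can be illustrated numerically. Because the integrand of (3.37a) is single-valued outside the two finite cuts and its Laurent expansion at large $|z|$ starts at $z^{-2}$ (so it has no $z^{-1}$ term), its integral over any circle enclosing all four branch points vanishes. A sketch, with arbitrary sample values for $\lambda_{0}$, $\lambda$, $\mu_{0}$, $\mu$ (these numbers are assumptions for illustration only):

```python
import numpy as np

# Sample branch points with lambda0 > lambda > mu0 > mu (assumed values)
lam0, lam, mu0, mu = 5.0, 3.0, 2.0, 1.0

def f(zv):
    """Integrand of (3.37a).  Principal square roots are used; the cuts of
    the four factors pair up, so the product is single-valued on a large
    circle enclosing all branch points."""
    return np.sqrt(zv - mu0) / (
        np.sqrt(zv - lam) * (zv - mu) * np.sqrt(zv - mu) * np.sqrt(zv - lam0))

# Closed circle of radius R enclosing all four branch points.  The
# trapezoidal rule on a smooth periodic integrand is spectrally accurate.
R, n = 50.0, 4000
theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
zv = R * np.exp(1j * theta)
integral = np.sum(f(zv) * 1j * zv) * (2*np.pi / n)
print(abs(integral))  # vanishes to rounding error
```

This is the numerical counterpart of the statement that the cumulative contributions of $C^{\lambda}$ and $C^{\mu}$, joined through the large circle, cancel.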
For example, we obtained the boundary expression for $B$ in
(3.36) by applying the Residue theorem to the
double pole
enclosed by the contour $C^{\lambda}$. The evaluation of the integral
becomes less straightforward when we consider the contour $-C^{\mu}$
instead. Wrapping the contour around the branch points $\mu$ and
$\mu_{0}$ (Fig. 6), one may easily
verify that the contribution from the two arcs vanishes if their
radius goes to zero. Taking into account the change in phase when
going around the two branch points, one may show that the
contributions from the two remaining parts of the contour, parallel to
the real axis, are equivalent. Hence, we arrive at the following
(real) integral
$$B(\lambda_{0},\mu)=\frac{1}{2\pi}\frac{(\lambda_{0}-\mu)^{\frac{1}{2}}}{(\lambda_{0}-\mu_{0})^{\frac{1}{2}}}\int\limits_{\mu}^{\mu_{0}}\frac{{\mathrm{d}}t}{(\lambda_{0}-t)^{2}}\sqrt{\frac{\mu_{0}-t}{t-\mu}}.$$
(3.38)
The substitution
$$t=\mu+\frac{(\mu_{0}-\mu)(\lambda_{0}-\mu_{0})\sin^{2}\theta}{(\lambda_{0}-\mu)-(\mu_{0}-\mu)\sin^{2}\theta}$$
(3.39)
then indeed gives the correct boundary expression
(3.31).
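The agreement can be checked numerically. The sketch below (sample values for $\lambda_{0}$, $\mu_{0}$, $\mu$ are arbitrary assumptions) evaluates the real integral of (3.38) with `scipy`, using the simpler substitution $t=\mu+(\mu_{0}-\mu)\sin^{2}\theta$ to remove the inverse-square-root endpoint singularity, and compares the result with the closed-form boundary expression (3.31); the prefactor $(\lambda_{0}-\mu)^{1/2}/(\lambda_{0}-\mu_{0})^{1/2}$ is the one required for consistency with the contour result (3.36):

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary sample points with lambda0 > mu0 > mu (assumed for illustration)
lam0, mu0, mu = 5.0, 2.0, 1.0

# Integral of eq. (3.38), regularised by t = mu + (mu0 - mu) sin^2(theta):
#   dt = 2 (mu0 - mu) sin(theta) cos(theta) dtheta,
#   sqrt((mu0 - t)/(t - mu)) = cos(theta)/sin(theta),
# so the transformed integrand is smooth on [0, pi/2].
def integrand(theta):
    t = mu + (mu0 - mu) * np.sin(theta)**2
    return 2.0 * (mu0 - mu) * np.cos(theta)**2 / (lam0 - t)**2

I, _ = quad(integrand, 0.0, np.pi/2)
B_quad = np.sqrt((lam0 - mu) / (lam0 - mu0)) * I / (2.0 * np.pi)

# Closed-form boundary expression (3.31)
B_exact = (mu0 - mu) / (4.0 * (lam0 - mu0) * (lam0 - mu))

print(B_quad, B_exact)  # both equal 1/48
```

For these sample values both expressions give $1/48$, confirming the quoted factor $4\pi i(\lambda_{0}-\mu_{0})^{1/2}$ between (3.36) and (3.31).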
When we take $\mu=\mu_{0}$ in (3.37b), we are left with the
integrand $(z-\lambda)^{-3/2}(z-\lambda_{0})^{-1/2}$. This is analytic
within the contour $C^{\mu}$ and hence it follows from Cauchy’s theorem
that there is no contribution. However, if we take the contour
$-C^{\lambda}$ instead, it is not clear at once that the integral indeed
is zero. To evaluate the complex integral we wrap the contour
$C^{\lambda}$ around the branch points $\lambda$ and $\lambda_{0}$
(Fig. 6). There will be no
contribution from the arc around $\lambda_{0}$ if its radius goes to
zero. However, since the integrand involves the term $z-\lambda$ with
power $-\frac{3}{2}$, the contribution from the arc around $\lambda$ is of
the order $\epsilon^{-1/2}$ and hence goes to infinity if its radius
$\epsilon>0$ reduces to zero.
If we let the two remaining straight parts
of the contour run from $\lambda+\epsilon$ to $\lambda_{0}$, then their
cumulative contribution becomes proportional to $\tan\theta(\epsilon)$, with $\theta(\epsilon)$ approaching $\frac{\pi}{2}$
when $\epsilon$ reduces to zero.
Hence, both the latter contribution and the contribution from
the arc around $\lambda$ approach infinity. However, careful
investigation of their limiting behaviour shows that they cancel as
$\epsilon$ tends to zero, as is required for the boundary expression
$B(\lambda,\mu_{0})=0$.
We have shown that the use of $C^{\lambda}$ and $-C^{\mu}$ gives the
same result, but the effort to evaluate the contour integral
varies between the two choices.
The boundary expressions for $A(\lambda,\mu)$,
(3.29) and (3.30) are obtained most
easily if we consider $C^{\lambda}$ when $\lambda=\lambda_{0}$ and
$-C^{\mu}$ when $\mu=\mu_{0}$. In both cases the integrand in
(3.37a) has a single pole within the chosen contour, so
that the boundary expressions follow by straightforward application of
the Residue theorem.
We have now proven that the homogeneous solution
(3.37) solves the homogeneous equations
(3.28), satisfies the boundary values
(3.29)–(3.31) separately and, from
the observation that $C^{\lambda}$ and $-C^{\mu}$ produce the same result,
also simultaneously.
3.2.5 Evaluation of the homogeneous solution
The homogeneous solution (3.37) consists of complex
contour integrals, which we transform to real integrals by wrapping
the contours $C^{\lambda}$ and $C^{\mu}$ around the corresponding pair of
branch points (Fig. 6). To have no
contribution from the arcs around the branch points, we choose the
(combination of) contours such that the terms in the integrand
involving these branch points have powers larger than $-1$. In this
way, we can always evaluate the complex integral as a (real) integral
running from one branch point to the other.
In the homogeneous solution (3.37a) for $A$ we choose
$C=C^{\lambda}$ and in (3.37b) for $B$ we take $C=-C^{\mu}$.
Taking into account the changes in phase when going around the branch
points, we obtain the following expressions for the homogeneous
solution
$$A(\lambda,\mu)=\frac{1}{2\pi}\frac{|\lambda-\mu|^{\frac{1}{2}}}{|\lambda_{0}-\mu_{0}|^{\frac{1}{2}}}\int\limits_{\lambda}^{\lambda_{0}}\frac{{\mathrm{d}}t}{t-\mu}\sqrt{\frac{t-\mu_{0}}{(t-\lambda)(t-\mu)(\lambda_{0}-t)}},$$
(3.40a)
$$B(\lambda,\mu)=\frac{1}{2\pi}\frac{|\lambda-\mu|^{\frac{1}{2}}}{|\lambda_{0}-\mu_{0}|^{\frac{1}{2}}}\int\limits_{\mu}^{\mu_{0}}\frac{{\mathrm{d}}t}{\lambda-t}\sqrt{\frac{\mu_{0}-t}{(\lambda-t)(t-\mu)(\lambda_{0}-t)}}.$$
(3.40b)
By a parameterisation of the form (3.39), or by
using an integral table (e.g., Byrd & Friedman
1971), expressions
(3.40) can be written conveniently in terms of the
complete elliptic integral of the second kind, $E$, and its
derivative $E^{\prime}$
$$A(\lambda,\mu;\lambda_{0},\mu_{0})=\frac{E(w)}{\pi(\lambda_{0}\!-\!\mu)},$$
(3.41a)
$$B(\lambda,\mu;\lambda_{0},\mu_{0})=-\frac{2wE^{\prime}(w)}{\pi(\lambda_{0}-\lambda)},$$
(3.41b)
with $w$ defined as in (3.16). The second set of arguments added to
$A$ and $B$ makes explicit the position $(\lambda_{0},\mu_{0})$ of the
source point that causes the stresses at the field point
$(\lambda,\mu)$.
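The real-integral forms can be spot-checked against the boundary values without knowing $w$. Setting $\mu=\mu_{0}$ in (3.40a) must reproduce the constant boundary value (3.30); the substitution $t=\lambda+(\lambda_{0}-\lambda)\sin^{2}\theta$ removes both endpoint singularities, since $\sqrt{(t-\lambda)(\lambda_{0}-t)}=(\lambda_{0}-\lambda)\sin\theta\cos\theta$. A numerical sketch (the sample values are assumptions):

```python
import numpy as np
from scipy.integrate import quad

# Sample points with lambda0 > lambda > mu0 (assumed for illustration)
lam0, lam, mu0 = 5.0, 3.0, 1.0

# Eq. (3.40a) at mu = mu0, with t = lam + (lam0 - lam) sin^2(theta):
#   dt = 2 (lam0 - lam) sin(theta) cos(theta) dtheta,
# so dt / sqrt((t - lam)(lam0 - t)) = 2 dtheta and the integrand is regular.
def integrand(theta):
    t = lam + (lam0 - lam) * np.sin(theta)**2
    return 2.0 / (t - mu0)

I, _ = quad(integrand, 0.0, np.pi/2)
A_quad = np.sqrt((lam - mu0) / (lam0 - mu0)) * I / (2.0 * np.pi)

# Boundary value (3.30): A(lambda, mu0) = 1 / (2 (lambda0 - mu0)),
# independent of lambda
A_exact = 1.0 / (2.0 * (lam0 - mu0))

print(A_quad, A_exact)  # both 0.125
```

The quadrature result is independent of the chosen $\lambda$, as (3.30) requires.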
3.2.6 The disc solution
The solution of equations (3.24) with
right hand sides of the simplified form
$$\tilde{g}_{1}(\lambda,\mu)=\delta(\lambda_{0}-\lambda)\delta(\mu_{0}-\mu),\quad\tilde{g}_{2}(\lambda,\mu)=0,$$
(3.42)
is obtained from the solution (3.27)
by interchanging $\lambda\leftrightarrow\mu$ and
$\lambda_{0}\leftrightarrow\mu_{0}$. It is
$$S_{\lambda\lambda}=B(\mu,\lambda;\mu_{0},\lambda_{0})\mathcal{H}(\lambda_{0}-\lambda)\mathcal{H}(\mu_{0}-\mu)-\delta(\mu_{0}-\mu)\mathcal{H}(\lambda_{0}-\lambda),$$
(3.43)
$$S_{\mu\mu}=A(\mu,\lambda;\mu_{0},\lambda_{0})\mathcal{H}(\lambda_{0}-\lambda)\mathcal{H}(\mu_{0}-\mu).$$
To find the solution to the full equations
(3.24) at $(\lambda,\mu)$, we multiply
the singular solutions (3.27) and
(3.43) by $g_{1}(\lambda_{0},\mu_{0})$
and $g_{2}(\lambda_{0},\mu_{0})$ respectively and integrate over $D$, the
domain of dependence of $(\lambda,\mu)$. This gives the first two
lines of the two equations (3.44) below. The terms in
the third lines are due to the boundary values of $S_{\mu\mu}$ at
$\mu=-\alpha$. They are found by multiplying the singular solution
(3.27) evaluated for $\mu_{0}=-\alpha$ by
$-S_{\mu\mu}(\lambda_{0},-\alpha)$ and integrating over $\lambda_{0}$ in
$D$. It is easily verified that this procedure correctly represents
the boundary values with singular solutions. The final result for the
general solution of the Jeans equations
(3.24) for Stäckel discs, after using
the evaluations (3.41), is
$$\displaystyle S_{\lambda\lambda}(\lambda,\mu)=-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\;g_{1}(\lambda_{0},\mu)\\
\displaystyle+\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\biggl[-g_{1}(\lambda_{0},\mu_{0})\frac{2wE^{\prime}(w)}{\pi(\mu_{0}-\mu)}+g_{2}(\lambda_{0},\mu_{0})\frac{E(w)}{\pi(\lambda_{0}-\mu)}\biggr]\\
\displaystyle-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},-\alpha)\,\biggl[\frac{E(w)}{\pi(\lambda_{0}-\mu)}\biggr]_{\mu_{0}=-\alpha},$$
(3.44a)
$$\displaystyle S_{\mu\mu}(\lambda,\mu)=-\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\;g_{2}(\lambda,\mu_{0})\\
\displaystyle+\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\biggl[-g_{1}(\lambda_{0},\mu_{0})\frac{E(w)}{\pi(\lambda-\mu_{0})}-g_{2}(\lambda_{0},\mu_{0})\frac{2wE^{\prime}(w)}{\pi(\lambda_{0}-\lambda)}\biggr]\\
\displaystyle+S_{\mu\mu}(\lambda,-\alpha)-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},-\alpha)\biggl[-\frac{2wE^{\prime}(w)}{\pi(\lambda_{0}-\lambda)}\biggr]_{\mu_{0}=-\alpha}.$$
(3.44b)
The terms $(\mu_{0}\!-\!\mu)^{-1}$ and $(\lambda_{0}\!-\!\lambda)^{-1}$ do
not cause singularities because they are canceled by components of
$w$. In order to show that equations (3.44) are equivalent
to the solution (3.21) given by Riemann’s
method, integrate the terms in $E^{\prime}(w)$ by parts, and use the
definitions of $S_{\tau\tau}$, $g_{1}$ and $g_{2}$.
3.2.7 Convergence of the disc solution
We now return to the convergence issues first discussed in
§3.1.4, where we assumed that the density
$\rho$ decays as $N(\mu)\lambda^{-s/2}$ at large distances
and the Stäckel potential as $\mathcal{O}(\lambda^{\delta})$.
For the physical reasons given there, the assigned
boundary stress $T_{\mu\mu}(\lambda,-\alpha)$ cannot exceed
$\mathcal{O}(\lambda^{\delta-s/2})$ at large $\lambda$, giving an
$S_{\mu\mu}(\lambda,-\alpha)$ of $\mathcal{O}(\lambda^{\delta-s/2+1/2})$.
It follows that the infinite integrals in $S_{\mu\mu}(\lambda_{0},-\alpha)$
in the solution (3.44) require only that $s>2\delta+1$
for their convergence. This is the less restrictive result to which
we referred earlier.
The terms in the boundary stress are seen to contribute terms of the
correct order $\mathcal{O}(\lambda^{\delta-s/2+1/2})$ to
$S_{\lambda\lambda}(\lambda,\mu)$ and $S_{\mu\mu}(\lambda,\mu)$.
The formulas for the density and potential show that
$g_{1}(\lambda,\mu)=\mathcal{O}(\lambda^{\delta-s/2-1/2})$ while
$g_{2}(\lambda,\mu)$ is larger and
$\mathcal{O}(\lambda^{-s/2-1/2})$ as $\lambda\to\infty$. The
$\lambda_{0}$ integrations with $g_{1}$ and $g_{2}$ in their integrands all
converge provided $s>2\delta+1$. Hence, both
$S_{\lambda\lambda}(\lambda,\mu)$ and $S_{\mu\mu}(\lambda,\mu)$ are
$\mathcal{O}(\lambda^{\delta-s/2+1/2})$, so that the stress components
$T_{\tau\tau}(\lambda,\mu)$ ($\tau=\lambda,\mu$) are
$\mathcal{O}(\lambda^{\delta-s/2})$,
which is consistent with the physical reasoning of
§3.1.4.
Hence, all the conditions necessary for (3.44) to be
a valid solution of the Jeans equations
(3.24) for a Stäckel disc are
satisfied provided that $s>2\delta+1$. We have seen in
§3.1.4 that $\delta$ must lie in the range
$[-\frac{1}{2},0)$. When $\delta\to 0$ the models approach the isothermal
disc, for which also $s=1$ when the density is consistent with the
potential. Only then is our requirement $s>2\delta+1$ violated.
3.3 Alternative boundary conditions
We now derive the alternative form of the general disc solution when
the boundary conditions are not specified on $\mu=-\alpha$ but on
$\mu=-\beta$, or on $\lambda=-\alpha$ rather than in the limit
$\lambda\to\infty$. While the former switch is straightforward, the
latter is non-trivial, and leads to non-physical solutions.
3.3.1 Boundary condition for $\mu$
The analysis in §3.1 and
§3.2 is that needed when the boundary
conditions are imposed at large $\lambda$ and at $\mu=-\alpha$. The
Jeans equations (2.25) can be solved in a similar
way when one or both of those conditions are imposed instead at the
opposite boundaries $\lambda=-\alpha$ and/or $\mu=-\beta$. The
solution by Riemann’s method is accomplished by applying Green’s
theorem to a different domain, for example
$D^{\prime}=\{(\lambda_{0},\mu_{0})$: $\lambda\leq\lambda_{0}\leq\infty,-\beta\leq\mu_{0}\leq\mu\}$ when the boundary conditions are at
$\mu=-\beta$ and as $\lambda\to\infty$. The Riemann–Green
functions have to satisfy the same PDE (3.10) and
the same boundary conditions (3.12) and
(3.13), and so again are given by equations
(3.20a) and (3.20b). The variable
$w$ is negative in $D^{\prime}$ instead of positive as in $D$, but
this is unimportant. The only significant difference in the solution
of eq. (3.4) is that of a sign due to changes
in the limits of the line integrals. The final result, in place of
eq. (3.14), is
$$T(\lambda,\mu)=-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{-\beta}^{\mu}{\mathrm{d}}\mu_{0}\,\mathcal{G}(\lambda_{0},\mu_{0})\,U(\lambda_{0},\mu_{0})-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\Bigl[\Bigl(\frac{\partial T}{\partial\lambda_{0}}+\frac{c_{2}\,T}{\lambda_{0}-\mu_{0}}\Bigr)\mathcal{G}\Bigr]_{\mu_{0}=-\beta}.$$
(3.45)
To apply the method of singular solutions to solve for the stresses
when the boundary stresses are specified at $\mu=-\beta$ rather than at
$\mu=-\alpha$, we modify the singular solutions
(3.27) by replacing the step-function
$\mathcal{H}(\mu_{0}\!-\!\mu)$ by $-\mathcal{H}(\mu\!-\!\mu_{0})$ throughout.
No other change is needed because both functions give $-\delta(\mu-\mu_{0})$
on partial differentiation with respect to $\mu$.
The two-dimensional problem for $A$ and $B$ remains the same, and so,
as with Riemann’s method, its solution remains the same.
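The distributional claim about the step functions can be confirmed symbolically: differentiating either $\mathcal{H}(\mu_{0}-\mu)$ or $-\mathcal{H}(\mu-\mu_{0})$ with respect to $\mu$ yields $-\delta(\mu-\mu_{0})$. A short sympy sketch, integrating each derivative across $\mu=\mu_{0}$ to exhibit the common unit weight:

```python
import sympy as sp

mu, mu0 = sp.symbols('mu mu0', real=True)

# Partial derivatives with respect to mu of the two step-function choices
d1 = sp.diff(sp.Heaviside(mu0 - mu), mu)    # from H(mu0 - mu)
d2 = sp.diff(-sp.Heaviside(mu - mu0), mu)   # from -H(mu - mu0)

# Both derivatives are -delta(mu - mu0): integrating each across mu = mu0
# picks up the same weight of -1
w1 = sp.integrate(d1, (mu, mu0 - 1, mu0 + 1))
w2 = sp.integrate(d2, (mu, mu0 - 1, mu0 + 1))
print(d1, d2, w1, w2)
```

Both integrals evaluate to $-1$, so the replacement leaves the delta-function source of the singular solution unchanged, exactly as stated.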
Summing over sources in $D^{\prime}$ now gives
$$\displaystyle S_{\lambda\lambda}(\lambda,\mu)=-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\;g_{1}(\lambda_{0},\mu)\\
\displaystyle-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{-\beta}^{\mu}{\mathrm{d}}\mu_{0}\biggl[-g_{1}(\lambda_{0},\mu_{0})\frac{2wE^{\prime}(w)}{\pi(\mu_{0}-\mu)}+g_{2}(\lambda_{0},\mu_{0})\frac{E(w)}{\pi(\lambda_{0}-\mu)}\biggr]\\
\displaystyle-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},-\beta)\,\biggl[\frac{E(w)}{\pi(\lambda_{0}-\mu)}\biggr]_{\mu_{0}=-\beta},$$
(3.46a)
$$\displaystyle S_{\mu\mu}(\lambda,\mu)=\int\limits_{-\beta}^{\mu}{\mathrm{d}}\mu_{0}\;g_{2}(\lambda,\mu_{0})\\
\displaystyle-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int\limits_{-\beta}^{\mu}{\mathrm{d}}\mu_{0}\biggl[-g_{1}(\lambda_{0},\mu_{0})\frac{E(w)}{\pi(\lambda-\mu_{0})}-g_{2}(\lambda_{0},\mu_{0})\frac{2wE^{\prime}(w)}{\pi(\lambda_{0}-\lambda)}\biggr]\\
\displaystyle+S_{\mu\mu}(\lambda,-\beta)-\int\limits_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},-\beta)\biggl[-\frac{2wE^{\prime}(w)}{\pi(\lambda_{0}-\lambda)}\biggr]_{\mu_{0}=-\beta},$$
(3.46b)
as an alternative to equations (3.44).
3.3.2 Boundary condition for $\lambda$
There is a much more significant difference when one assigns boundary
values at $\lambda=-\alpha$ rather than at $\lambda\to\infty$.
It is still necessary that stresses decay to zero at large distances.
The stresses induced by arbitrary boundary data at the finite
boundary $\lambda=-\alpha$ do decay to zero as a consequence of
geometric divergence. The issue is that of the rate of this decay.
We find that it is generally less than that required by our analysis in
§3.1.4.
To isolate the effect of boundary data at $\lambda=-\alpha$,
we study solutions of the two-dimensional Jeans equations
(2.25) when the inhomogeneous right hand side
terms are set to zero and homogeneous boundary conditions of
zero stress are applied at either $\mu=-\alpha$ or $\mu=-\beta$.
These solutions can be derived either by Riemann’s method or
by singular solutions. The solution of the homogeneous PDE
$\mathcal{L}T=0$ is
$$T(\lambda,\mu)=-\int\limits_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\Bigl[\Bigl(\frac{\partial T}{\partial\mu_{0}}-\frac{c_{1}\,T}{\lambda_{0}-\mu_{0}}\Bigr)\mathcal{G}(\lambda,\mu;\lambda_{0},\mu_{0})\Bigr]_{\lambda_{0}=-\alpha},$$
(3.47)
for the case of zero stress at $\mu=-\alpha$, and
$$T(\lambda,\mu)=\int\limits_{-\beta}^{\mu}{\mathrm{d}}\mu_{0}\Bigl[\Bigl(\frac{\partial T}{\partial\mu_{0}}-\frac{c_{1}\,T}{\lambda_{0}-\mu_{0}}\Bigr)\mathcal{G}(\lambda,\mu;\lambda_{0},\mu_{0})\Bigr]_{\lambda_{0}=-\alpha},$$
(3.48)
for the case of zero stress at $\mu=-\beta$.
The behaviour of the stresses at large distances is governed by the
behaviour of the Riemann–Green functions $\mathcal{G}$ for distant
field points $(\lambda,\mu)$ and source points at $\lambda_{0}=-\alpha$.
It follows from equations (3.20) that
$T_{\lambda\lambda}(\lambda,\mu)=\mathcal{O}(\lambda^{-1/2})$ and
$T_{\mu\mu}(\lambda,\mu)=\mathcal{O}(\lambda^{-3/2})$.
As a result, the radial stresses dominate at large distances, and they
decay only as the inverse first power of distance.
This decay is slower than the rate $\mathcal{O}(\lambda^{\delta-s/2})$
obtained in §3.1.4 from physical arguments
whenever the requirement $s>2\delta+1$ is satisfied.
This inequality is the necessary condition which we derived in
§3.2.6 for (3.44) to be a
valid solution of the disc Jeans equations
(3.24).
It is violated in the isothermal limit.
There is a physical implication of radial stresses which decay as only
the inverse first power of distance. It implies that net forces of
finite magnitude are needed at an outer boundary to maintain the
system, the finite magnitudes arising from the product of the decaying
radial stresses and the increasing length of the boundary over which
they act. That length grows as the first power of distance.
Because this situation is perhaps more naturally understood in three
dimensions, we return to it in our discussion of oblate models in
§3.4.2. For now, lacking any physical reason for
allowing a stellar system to have such an external constraint, we
conclude that boundary conditions can be applied only at large
$\lambda$ and not at $\lambda=-\alpha$.
3.3.3 Disc solution for a general finite region
We now apply the singular solution method to solve equations
(3.24) in some rectangle $\mu_{{\rm min}}\leq\mu\leq\mu_{{\rm max}}$, $\lambda_{{\rm min}}\leq\lambda\leq\lambda_{{\rm max}}$, when the stress $S_{\mu\mu}$ is
given on a boundary in $\mu$, and $S_{\lambda\lambda}$ is given on a
boundary in $\lambda$. This solution includes
(3.44) and (3.46) as special
cases. It will be needed for the large-radii scale-free case of
§3.4.3.
As we saw in §3.3.1, singular solutions
can easily be adapted to alternative choices for the domain of
dependence of a field point $(\lambda,\mu)$. Originally this was $D$,
the first of the four quadrants into which $(\lambda_{0},\mu_{0})$-space
is split by the lines $\lambda_{0}=\lambda$ and $\mu_{0}=\mu$
(Fig. 4). It has the
singular solution (3.27). We then
obtained the singular solution for the fourth quadrant $D^{\prime}$
simply by replacing $\mathcal{H}(\mu_{0}\!-\!\mu)$ by
$-\mathcal{H}(\mu\!-\!\mu_{0})$ in (3.27).
We can similarly find the singular solution for the second quadrant
$\lambda_{{\rm min}}\leq\lambda_{0}\leq\lambda$, $\mu\leq\mu_{0}\leq\mu_{{\rm max}}$ by replacing
$\mathcal{H}(\lambda_{0}\!-\!\lambda)$ by
$-\mathcal{H}(\lambda\!-\!\lambda_{0})$, and for the third quadrant
$\lambda_{{\rm min}}\leq\lambda_{0}\leq\lambda$, $\mu_{{\rm min}}\leq\mu_{0}\leq\mu$ by replacing $\mathcal{H}(\lambda_{0}\!-\!\lambda)$
by $-\mathcal{H}(\lambda\!-\!\lambda_{0})$ and
$\mathcal{H}(\mu_{0}\!-\!\mu)$ by $-\mathcal{H}(\mu\!-\!\mu_{0})$. We
find the part of the solution of equations
(3.24) due to the right hand side $g$
terms by multiplying the first and second terms of the singular
solutions by $g_{1}(\lambda_{0},\mu_{0})$ and $g_{2}(\lambda_{0},\mu_{0})$,
respectively, and integrating over the relevant domain. We use
$\lambda=\lambda_{e}$ and $\mu=\mu_{e}$ to denote the boundaries at which
stresses are specified. We find the part of the solution generated by
the boundary values of $S_{\mu\mu}$ by multiplying the singular
solution (3.27), modified for the domain
and evaluated at $\mu_{0}=\mu_{e}$, by $\pm S_{\mu\mu}(\lambda_{0},\mu_{e})$
and integrating over $\lambda_{0}$ in the domain. The plus sign is
needed when $\mu_{e}=\mu_{{\rm min}}$ and the minus when
$\mu_{e}=\mu_{{\rm max}}$. Similarly, the part of the solution generated
by the boundary values of $S_{\lambda\lambda}$ is obtained by
multiplying the singular solution
(3.43), modified for the domain and
evaluated at $\lambda_{0}=\lambda_{e}$, by $\pm S_{\lambda\lambda}(\lambda_{e},\mu_{0})$ and integrating over $\mu_{0}$ in
the domain. The sign is plus if $\lambda_{e}=\lambda_{{\rm min}}$ and
minus if $\lambda_{e}=\lambda_{{\rm max}}$. The final solution is
$$\displaystyle S_{\lambda\lambda}(\lambda,\mu)=S_{\lambda\lambda}(\lambda_{e},\mu)-\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\,g_{1}(\lambda_{0},\mu)\\
\displaystyle+\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\bigl[g_{1}(\lambda_{0},\mu_{0})B(\mu,\lambda;\mu_{0},\lambda_{0})+g_{2}(\lambda_{0},\mu_{0})A(\lambda,\mu;\lambda_{0},\mu_{0})\bigr]\\
\displaystyle-\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},\mu_{e})\,A(\lambda,\mu;\lambda_{0},\mu_{e})-\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\,S_{\lambda\lambda}(\lambda_{e},\mu_{0})\,B(\mu,\lambda;\mu_{0},\lambda_{e}),$$
(3.49a)
$$\displaystyle S_{\mu\mu}(\lambda,\mu)=S_{\mu\mu}(\lambda,\mu_{e})-\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\,g_{2}(\lambda,\mu_{0})\\
\displaystyle+\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\bigl[g_{1}(\lambda_{0},\mu_{0})A(\mu,\lambda;\mu_{0},\lambda_{0})+g_{2}(\lambda_{0},\mu_{0})B(\lambda,\mu;\lambda_{0},\mu_{0})\bigr]\\
\displaystyle-\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},\mu_{e})\,B(\lambda,\mu;\lambda_{0},\mu_{e})-\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\,S_{\lambda\lambda}(\lambda_{e},\mu_{0})\,A(\mu,\lambda;\mu_{0},\lambda_{e}).$$
(3.49b)
This solution is uniquely determined once $g_{1}$ and $g_{2}$ are given,
and the boundary values $S_{\mu\mu}(\lambda_{0},\mu_{e})$ and
$S_{\lambda\lambda}(\lambda_{e},\mu_{0})$ are prescribed. It shows that
the hyperbolic equations (3.24) can
equally well be integrated in either direction in the characteristic
variables $\lambda$ and $\mu$. Solutions (3.44) and
(3.46) are obtained by taking $\lambda_{e}\to\infty$, $S_{\lambda\lambda}(\lambda_{e},\mu_{0})\to 0$, setting
$\mu_{e}=-\alpha$ and $\mu_{e}=-\beta$ respectively, and evaluating $A$
and $B$ by equations (3.41).
3.4 Applying the disc solution to limiting cases
We showed in §2.6 that the Jeans equations for
prolate and oblate potentials and for three-dimensional Stäckel
models with a scale-free DF all reduce to a set of two equations
equivalent to those for the Stäckel disc. Here we apply our solution
for the Stäckel disc to these special three-dimensional cases, with
particular attention to the behaviour at large radii and the boundary
conditions. This provides further insight into some of the previously
published solutions. We also consider the case of a Stäckel disc
built with thin tube orbits.
3.4.1 Prolate potentials
We can apply the disc solution (3.46) to solve the
Jeans equations (2.35) by setting
$S_{\lambda\lambda}(\lambda,\mu)=|\lambda-\mu|^{\frac{1}{2}}\mathcal{T}_{\lambda\lambda}(\lambda,\mu)$ and
$S_{\mu\mu}(\lambda,\mu)=|\mu-\lambda|^{\frac{1}{2}}\mathcal{T}_{\mu\mu}(\lambda,\mu)$, and taking
$$g_{1}(\lambda,\mu)=-|\lambda-\mu|^{\frac{1}{2}}(\lambda+\beta)^{\frac{1}{2}}(\mu+\beta)^{\frac{1}{2}}\biggl[\rho\frac{\partial V_{S}}{\partial\lambda}+\frac{\partial T_{\chi\chi}}{\partial\lambda}\biggr],$$
(3.50)
$$g_{2}(\lambda,\mu)=-|\mu-\lambda|^{\frac{1}{2}}(\lambda+\beta)^{\frac{1}{2}}(\mu+\beta)^{\frac{1}{2}}\biggl[\rho\frac{\partial V_{S}}{\partial\mu}+\frac{\partial T_{\chi\chi}}{\partial\mu}\biggr].$$
The boundary terms in $S_{\mu\mu}(\lambda,-\beta)$ vanish because of
the boundary condition (2.36). As before, we regard the
azimuthal stress $T_{\chi\chi}$ as a variable that can be arbitrarily
assigned, provided that it has the correct behaviour at large
$\lambda$ (§3.1.4).
The choice of $T_{\chi\chi}$ is also restricted by
the requirement that the resulting solutions for the stresses
$T_{\lambda\lambda}$ and $T_{\mu\mu}$ must be non-negative (see
§2.3).
The analysis needed to show that the solution obtained in this way
is valid requires only minor modifications of that of
§3.2.7. We suppose that the prescribed azimuthal
stresses also decay as $\mathcal{O}(\lambda^{\delta-s/2})$ as
$\lambda\to\infty$. As a result of the extra factor in the
definitions (3.50), we now have
$g_{1}(\lambda,\mu)=\mathcal{O}(\lambda^{\delta-s/2})$
and $g_{2}(\lambda,\mu)=\mathcal{O}(\lambda^{-s/2})$ as
$\lambda\to\infty$. The $\lambda_{0}$ integrations
converge provided $s>2\delta+2$, and $S_{\lambda\lambda}$
and $S_{\mu\mu}$ are $\mathcal{O}(\lambda^{\delta-s/2+1})$.
Hence the stresses $T_{\lambda\lambda}$ and $T_{\mu\mu}$, which follow
from
$T_{\tau\tau}=T_{\chi\chi}+S_{\tau\tau}/\sqrt{(\lambda-\mu)(\lambda+\beta)(\mu+\beta)}$,
are once again $\mathcal{O}(\lambda^{\delta-s/2})$.
The requirement $s>2\delta+2$ is no stronger than the requirement
$s>2\delta+1$ of §3.2.7; it is simply the
three-dimensional version of that requirement. It also does not break
down until the isothermal limit. That limit is still $\delta\to 0$,
but now $s\to 2$.
3.4.2 Oblate potentials
The oblate case with Jeans equations
(2.37) differs significantly from the
prolate case.
Now $S_{\lambda\lambda}(\lambda,\nu)=|\lambda-\nu|^{\frac{1}{2}}\mathcal{T}_{\lambda\lambda}(\lambda,\nu)$
vanishes at $\lambda=-\alpha$ and
$S_{\nu\nu}(\lambda,\nu)=|\nu-\lambda|^{\frac{1}{2}}\mathcal{T}_{\nu\nu}(\lambda,\nu)$
vanishes at $\nu=-\alpha$.
If one again supposes that the azimuthal stresses $T_{\phi\phi}$ can
be assigned initially, then one encounters the problem discussed in
§3.3.2 of excessively large radial stresses
at large distances.
To relate that analysis to the present case, we use the solution
(3.44) with $\mu$ replaced by $\nu$, and with zero
boundary value $S_{\nu\nu}(\lambda,-\alpha)$, and for $g_{1}$ and $g_{2}$
the right hand side of (2.37) multiplied
by $|\lambda-\nu|^{\frac{1}{2}}$ and $|\nu-\lambda|^{\frac{1}{2}}$,
respectively.
The estimates we obtained for the prolate case are still valid, so the
stresses $T_{\lambda\lambda}$ and $T_{\nu\nu}$ are
$\mathcal{O}(\lambda^{\delta-s/2})$. Difficulties arise when this
solution for $S_{\lambda\lambda}$ does not vanish at
$\lambda=-\alpha$, but instead has some nonzero value $\kappa(\nu)$
there. To obtain a physically acceptable solution, we must add to it a
solution of the homogeneous equations
(2.37) with boundary values
$\mathcal{T}_{\lambda\lambda}(-\alpha,\nu)=-\kappa(\nu)/\sqrt{-\alpha-\nu}$ and
$\mathcal{T}_{\nu\nu}(\lambda,-\alpha)=0$. This is precisely the
problem we discussed in §3.3.2 where we showed
that the resulting solution gives $\mathcal{T}_{\lambda\lambda}(\lambda,\nu)=\mathcal{O}(\lambda^{-1/2})$, and hence
$T_{\lambda\lambda}(\lambda,\nu)=\mathcal{O}(\lambda^{-1})$. This is
larger than $\mathcal{O}(\lambda^{\delta-s/2})$ when the
three-dimensional requirement $s>2\delta+2$ is met. We therefore
conclude that the approach in which one first selects the azimuthal
stress $T_{\phi\phi}$ and then calculates the other two stresses will
be unsuccessful unless the choice of $T_{\phi\phi}$ is fortunate, and
leads to $\kappa(\nu)\equiv 0$. Otherwise, it leads only to models
which either violate the continuity condition $T_{\lambda\lambda}-T_{\phi\phi}=0$ at $\lambda=-\alpha$, or else have radial stresses
which require external forces at large distances.
The physical implication of radial stresses which decay as only
$\mathcal{O}(\lambda^{-1})$, or the inverse second power of distance,
is that net forces of finite magnitude are needed at an outer boundary
to maintain the system. This finite magnitude arises from the product
of the decaying radial stresses and the increasing surface area of the
boundary over which they act, which grows as the second power of
distance. This situation is analogous to that of an isothermal sphere,
as illustrated in problem 4–9 of Binney & Tremaine
(1987), for which the contribution
from an outer surface integral must be taken into account in the
balance between energies required by the virial theorem.
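The arithmetic of this force balance is easy to make explicit. The sketch below is purely illustrative (the stress normalisation is an arbitrary, hypothetical number): a radial stress decaying as the inverse second power of distance, acting over a boundary whose area grows as the second power, exerts a net force that is independent of the radius of the boundary.

```python
import math

T0 = 3.0                          # stress normalisation (hypothetical value)
for r in (1e2, 1e4, 1e6):
    stress = T0 / r**2            # radial stress decaying as r**-2
    area = 4.0 * math.pi * r**2   # surface area of the outer boundary
    print(stress * area)          # net force 4*pi*T0, independent of r
```

The product is the constant $4\pi T_{0}$ at every radius, which is the finite boundary force referred to above.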
There are, of course, many physical models which satisfy the
continuity condition and whose radial stresses decay in the physically
correct manner at large distances, but some strategy other than that
of assigning $T_{\phi\phi}$ initially is needed to find them. In fact,
only Evans (1990) used the approach of
assigning $T_{\phi\phi}$ initially. He computed a numerical solution
for a mass model with $s=3$ and $V_{S}\propto\mathcal{O}(\lambda^{-1/2}\ln\lambda)$ for large $\lambda$, so that the
stresses there should be $\mathcal{O}(\lambda^{-2}\ln\lambda)$. He
set $T_{\phi\phi}=-\frac{1}{3}\rho V_{S}$, which is of this magnitude,
and integrated from $\lambda=-\alpha$ in the direction of increasing
$\lambda$ for a finite range. Evans does not report on the large
$\lambda$ behaviour, and it is possible that his choice of
$T_{\phi\phi}$ gives $\kappa(\nu)=0$, but his Figure 2 especially
shows velocity ellipsoids which become increasingly elongated in the
radial direction, consistent with our prediction that
$T_{\lambda\lambda}$ generally decays only as $\mathcal{O}(\lambda^{-1})$
when the boundary value of $T_{\lambda\lambda}$ is assigned at
$\lambda=-\alpha$.
A more common and effective approach to solve the Jeans equations
for oblate models has been to specify the ratio
$T_{\lambda\lambda}/T_{\nu\nu}$, and then to solve for one of those
stresses and $T_{\phi\phi}$
(Bacon, Simien & Monnet 1983;
Dejonghe & de Zeeuw 1988;
Evans & Lynden–Bell 1991;
Arnold 1995).
This leads
to a much simpler mathematical problem with just a single first-order
PDE. The characteristics of that PDE have non-negative slopes
$d\lambda/d\nu$, and therefore cut across the coordinate lines of constant
$\lambda$ and $\nu$. The solution is obtained by integrating inwards along
the characteristics from large $\lambda$. The continuity conditions
(2.5.2) are taken care of automatically, the region
$-\gamma\leq\nu\leq-\alpha\leq\lambda<\infty$ is covered, and it is easy to
verify that the stresses so obtained are everywhere positive.
3.4.3 Large radii limit with scale-free DF
We found in §2.5.4 that the first of the Jeans
equations in conical coordinates (2.5.4) reduces to an
algebraic relation for the radial stress $T_{rr}$. The problem that
remains is that of solving the second and third Jeans equations for
$T_{\mu\mu}$ and $T_{\nu\nu}$. Those equations are exactly the same as
those of the disc case after we apply the coordinate permutation
$\lambda\to\mu\to\nu$, and the physical domain is $-\gamma\leq\nu\leq-\beta\leq\mu\leq-\alpha$ with finite ranges of both
variables. Hence, the solution (3.49) can be
applied with $T_{\mu\mu}$ assigned at either $\mu_{e}=-\alpha$ or
$\mu_{e}=-\beta$, and $T_{\nu\nu}$ at either $\nu_{e}=-\beta$ or $\nu_{e}=-\gamma$. For $g_{1}$ and $g_{2}$ we take the same expressions as for
the disc case, i.e., the right-hand side of
(3.24), but with $\lambda\to\mu\to\nu$ and multiplied by $r^{\zeta}$. To obtain $T_{\mu\mu}$ and
$T_{\nu\nu}$ from $S_{\mu\mu}$ and $S_{\nu\nu}$,
respectively, we use the transformation
$$S_{\tau\tau}=(\mu\!-\!\nu)^{\frac{1}{2}}\,r^{\zeta}T_{\tau\tau},\quad\tau=\mu,\nu,$$
(3.51)
with $\zeta>0$ the scaling factor. We can choose to specify the stress
components on the two boundaries $\mu=-\beta$ and $\nu=-\beta$. For a
given radius $r$ these boundaries cover the circular cross section
with the $(x,z)$-plane (Fig. 3).
We can consider the $(x,z)$-plane as the starting space for the
solution. It turns out that the latter also applies to the triaxial
solution (§4.6.3) and compares well with
Schwarzschild (1993), who started his
numerically calculated orbits in the same plane.
3.4.4 Thin tube orbits
For infinitesimally thin tube orbits in Stäckel discs we have that
$S_{\lambda\lambda}\equiv 0$ (§2.5.6),
so that equations (3.24) reduce to
$$-\frac{S_{\mu\mu}}{2(\lambda\!-\!\mu)}=g_{1}(\lambda,\mu),\quad\frac{\partial S%
_{\mu\mu}}{\partial\mu}=g_{2}(\lambda,\mu).$$
(3.52)
A solution is possible only if the right hand side terms satisfy the
subsidiary equation
$$g_{2}(\lambda,\mu)=-2\frac{\partial}{\partial\mu}\left[(\lambda\!-\!\mu)g_{1}(%
\lambda,\mu)\right].$$
(3.53)
We find below that this equation places restrictions on the form of
the (surface) density $\rho$, and we use this relation between $g_{1}$
and $g_{2}$ to show that the disc solution (3.44)
yields the right results for the stress components.
If we write the disc potential (2.24) as a divided
difference, $V_{S}=-f[\lambda,\mu]$, we have that
$$g_{1}=(\lambda\!-\!\mu)^{\frac{1}{2}}\rho f[\lambda,\lambda,\mu],\quad g_{2}=(%
\lambda\!-\!\mu)^{\frac{1}{2}}\rho f[\lambda,\mu,\mu].$$
(3.54)
Upon substitution of these expressions in
(3.53) we obtain a PDE in $\mu$, of which the
solution implies the following form for the density
$$\rho(\lambda,\mu)=\frac{\tilde{f}(\lambda)}{(\lambda\!-\!\mu)\sqrt{f[\lambda,%
\lambda,\mu]}},$$
(3.55)
where $\tilde{f}(\lambda)$ is an arbitrary function independent of
$\mu$. From (3.52) and the definition
(3.23) it then follows that
$T_{\mu\mu}(\lambda,\mu)=-2\tilde{f}(\lambda)\sqrt{f[\lambda,\lambda,\mu]}$.
The tube density that de Zeeuw, Hunter & Schwarzschild
(1987) derive from the DF for thin tube
orbits in the perfect elliptic disc (their eq. [4.25]) is indeed of
the form (3.55).
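The consistency of (3.53)–(3.55) is straightforward to verify symbolically for a concrete potential. In the sketch below, $f(x)=x^{3}$ and $\tilde f\equiv 1$ are illustrative choices (not taken from the text); the divided differences then reduce to $f[\lambda,\lambda,\mu]=2\lambda+\mu$ and $f[\lambda,\mu,\mu]=\lambda+2\mu$.

```python
import sympy as sp

lam, mu = sp.symbols('lamda mu', positive=True)

# illustrative potential f(x) = x**3 and f~(lambda) = 1 (hypothetical choices)
f = lambda x: x**3
f_lm  = (f(lam) - f(mu))/(lam - mu)                  # f[lambda, mu]
f_llm = (sp.diff(f(lam), lam) - f_lm)/(lam - mu)     # f[lambda, lambda, mu]
f_lmm = (f_lm - sp.diff(f(mu), mu))/(lam - mu)       # f[lambda, mu, mu]

rho = 1/((lam - mu)*sp.sqrt(f_llm))                  # density of the form (3.55)
g1  = sp.sqrt(lam - mu)*rho*f_llm                    # right-hand sides (3.54)
g2  = sp.sqrt(lam - mu)*rho*f_lmm

# subsidiary equation (3.53): g2 = -2 d/dmu [ (lambda - mu) g1 ]
residual = g2 + 2*sp.diff((lam - mu)*g1, mu)
check = residual.subs({lam: sp.Rational(7, 2), mu: sp.Rational(3, 4)})
print(abs(float(check.evalf())) < 1e-10)   # True: (3.53) is satisfied
```

The residual vanishes identically in $\lambda$ and $\mu$; the numerical evaluation at one admissible point is just a robust way of confirming this.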
To show that the general disc solution (3.44) gives
$S_{\lambda\lambda}(\lambda,\mu)=0$, we substitute
eq. (3.53) for $g_{2}(\lambda,\mu)$ in
(3.44a).
After partial integration and using
$$2(\lambda_{0}\!-\!\mu_{0})\frac{\partial}{\partial\mu_{0}}\frac{E(w)}{\pi(%
\lambda_{0}\!-\!\mu)}=\frac{2wE^{\prime}(w)}{\pi(\mu_{0}\!-\!\mu)},$$
(3.56)
we find that the area integral reduces to
$$\int\limits_{\lambda}^{\infty}\hskip-3.0pt{\mathrm{d}}\lambda_{0}\,\biggl\{g_{1}(\lambda_{0},\mu)-2(\lambda_{0}\!+\!\alpha)\,g_{1}(\lambda_{0},-\alpha)\Bigl[\frac{E(w)}{\pi(\lambda_{0}\!-\!\mu)}\Bigr]_{\mu_{0}=-\alpha}\biggr\}.$$
(3.57)
The first part cancels the first line of (3.44a) and
since from (3.52) we have that
$-2(\lambda_{0}\!+\!\alpha)g_{1}(\lambda_{0},-\alpha)=S_{\mu\mu}(\lambda_{0},-\alpha)$, the second part cancels the third
line.
Hence, we have $S_{\lambda\lambda}(\lambda,\mu)=0$ as required.
To see that the general disc solution also yields
$S_{\mu\mu}(\lambda,\mu)$ correctly, we apply similar steps to
(3.44b), where we use the relation
$$-2(\lambda_{0}\!-\!\mu_{0})\frac{\partial}{\partial\mu_{0}}\frac{2wE^{\prime}(%
w)}{\pi(\lambda_{0}\!-\!\lambda)}=\frac{E(w)}{\pi(\lambda\!-\!\mu_{0})}.$$
(3.58)
We are finally left with
$$S_{\mu\mu}(\lambda,\mu)=S_{\mu\mu}(\lambda,-\alpha)-\int\limits_{\mu}^{-\alpha%
}\hskip-3.0pt{\mathrm{d}}\mu_{0}g_{2}(\lambda,\mu_{0}),$$
(3.59)
which is just the second equation of (3.52)
integrated with respect to $\mu$.
4 The general case
We now solve the system of three Jeans equations
(2.16) for triaxial Stäckel models by applying the
singular solution superposition method, introduced in
§3.2 for the two-dimensional case.
Although the calculations are more complex for a triaxial model, the
step-wise solution method is similar to that in two dimensions.
Specifically, we first simplify the Jeans equations and show that they
reduce to a three-dimensional homogeneous boundary problem.
We then find a two-parameter particular solution and apply contour
integration to both complex parameters to obtain the general
homogeneous solution.
The latter yields the three singular solutions of the simplified Jeans
equations, from which, by superposition, we construct the general
solution.
4.1 Simplified Jeans equations
We start by introducing the functions
$$S_{\tau\tau}(\lambda,\mu,\nu)=\sqrt{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-%
\!\nu)}\,T_{\tau\tau}(\lambda,\mu,\nu),$$
(4.1)
with $\tau=\lambda,\mu,\nu$, to write the Jeans equations for triaxial
Stäckel models (2.16) in the more convenient form
$$\frac{\partial S_{\lambda\lambda}}{\partial\lambda}-\frac{S_{\mu\mu}}{2(%
\lambda\!-\!\mu)}-\frac{S_{\nu\nu}}{2(\lambda\!-\!\nu)}=g_{1}(\lambda,\mu,\nu),$$
(4.2a)
$$\frac{\partial S_{\mu\mu}}{\partial\mu}-\frac{S_{\nu\nu}}{2(\mu\!-\!\nu)}-%
\frac{S_{\lambda\lambda}}{2(\mu\!-\!\lambda)}=g_{2}(\lambda,\mu,\nu),$$
(4.2b)
$$\frac{\partial S_{\nu\nu}}{\partial\nu}-\frac{S_{\lambda\lambda}}{2(\nu\!-\!%
\lambda)}-\frac{S_{\mu\mu}}{2(\nu\!-\!\mu)}=g_{3}(\lambda,\mu,\nu),$$
(4.2c)
where the function $g_{1}$ is defined as
$$g_{1}(\lambda,\mu,\nu)=-\sqrt{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-\!\nu)}%
\,\rho\,\frac{\partial V_{S}}{\partial\lambda},$$
(4.3)
and $g_{2}$ and $g_{3}$ follow by cyclic permutation
$\lambda\to\mu\to\nu\to\lambda$. We keep the three terms $\lambda\!-\!\mu$,
$\lambda\!-\!\nu$ and $\mu\!-\!\nu$ under one square root. With each
cyclic permutation two of the three terms change sign, so that the
combination of the three terms is always real and positive.
Therefore, the square root of the combination is always single-valued,
whereas three separate square roots would give a multi-valued
function.
We simplify equations (4.2) by substituting for
$g_{1}$, $g_{2}$ and $g_{3}$, respectively
$$\displaystyle\tilde{g}_{1}(\lambda,\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle 0,$$
$$\displaystyle\tilde{g}_{2}(\lambda,\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\,\delta(\lambda_{0}\!-\!\lambda)\,\delta(\mu_{0}\!-\!\mu)\,%
\delta(\nu_{0}\!-\!\nu),$$
(4.4)
$$\displaystyle\tilde{g}_{3}(\lambda,\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle 0,$$
with
$$-\gamma\leq\nu\leq\nu_{0}\leq-\beta\leq\mu\leq\mu_{0}\leq-\alpha\leq\lambda%
\leq\lambda_{0}.$$
(4.5)
We obtain two similar systems of simplified equations by cyclic
permutation of the left-hand side of (4.2).
Once we have obtained the singular solutions of the simplified system
with the right-hand side given by (4.4), those for
the other two systems follow via cyclic permutation.
4.2 Homogeneous boundary problem
The choice (4.4) implies that the functions
$S_{\tau\tau}(\lambda,\mu,\nu)$ (4.1) must have the
following forms
$$\displaystyle S_{\lambda\lambda}$$
$$\displaystyle=$$
$$\displaystyle A(\lambda,\mu,\nu)\,\mathcal{H}(\lambda_{0}\!-\!\lambda)\mathcal%
{H}(\mu_{0}\!-\!\mu)\mathcal{H}(\nu_{0}\!-\!\nu)$$
$$\displaystyle+\;F(\lambda,\mu)\,\delta(\nu_{0}\!-\!\nu)\mathcal{H}(\lambda_{0}%
\!-\!\lambda)\mathcal{H}(\mu_{0}\!-\!\mu),$$
$$\displaystyle S_{\mu\mu}$$
$$\displaystyle=$$
$$\displaystyle B(\lambda,\mu,\nu)\,\mathcal{H}(\lambda_{0}\!-\!\lambda)\mathcal%
{H}(\mu_{0}\!-\!\mu)\mathcal{H}(\nu_{0}\!-\!\nu)$$
$$\displaystyle+\;G(\lambda,\mu)\,\delta(\nu_{0}\!-\!\nu)\mathcal{H}(\lambda_{0}%
\!-\!\lambda)\mathcal{H}(\mu_{0}\!-\!\mu)$$
$$\displaystyle+\;H(\mu,\nu)\,\delta(\lambda_{0}\!-\!\lambda)\mathcal{H}(\mu_{0}%
\!-\!\mu)\mathcal{H}(\nu_{0}\!-\!\nu)$$
$$\displaystyle-\;\delta(\lambda_{0}\!-\!\lambda)\delta(\nu_{0}\!-\!\nu)\mathcal%
{H}(\mu_{0}\!-\!\mu),$$
$$\displaystyle S_{\nu\nu}$$
$$\displaystyle=$$
$$\displaystyle C(\lambda,\mu,\nu)\,\mathcal{H}(\lambda_{0}\!-\!\lambda)\mathcal%
{H}(\mu_{0}\!-\!\mu)\mathcal{H}(\nu_{0}\!-\!\nu)$$
$$\displaystyle+\;I(\mu,\nu)\,\delta(\lambda_{0}\!-\!\lambda)\mathcal{H}(\mu_{0}%
\!-\!\mu)\mathcal{H}(\nu_{0}\!-\!\nu),$$
(4.6)
with $A$, $B$, $C$ and $F$, $G$, $H$, $I$ yet unknown functions of
three and two coordinates, respectively, and $\mathcal{H}$ the
step-function (3.26). After substituting these forms
into the simplified Jeans equations and matching terms we obtain 14
equations. Eight of them comprise the following two homogeneous
systems with two boundary conditions each
$$\left\{\begin{array}[]{lclclcl}\displaystyle\frac{\partial F}{\partial\lambda}%
-\frac{G}{2(\lambda\!-\!\mu)}&\hskip-7.0pt=&0,&&\displaystyle F(\lambda_{0},%
\mu)&\hskip-7.0pt=&\displaystyle\frac{1}{2(\lambda_{0}\!-\!\mu)},\\
\displaystyle\frac{\partial G}{\partial\mu}-\frac{F}{2(\mu\!-\!\lambda)}&%
\hskip-7.0pt=&0,&&\displaystyle G(\lambda,\mu_{0})&\hskip-7.0pt=&0,\end{array}\right.$$
(4.7)
and
$$\left\{\begin{array}[]{lclclcl}\displaystyle\frac{\partial H}{\partial\mu}-%
\frac{I}{2(\mu\!-\!\nu)}&\hskip-7.0pt=&0,&&\displaystyle H(\mu_{0},\nu)&\hskip%
-7.0pt=&0,\\
\displaystyle\frac{\partial I}{\partial\nu}-\frac{H}{2(\nu\!-\!\mu)}&\hskip-7.%
0pt=&0,&&\displaystyle I(\mu,\nu_{0})&\hskip-7.0pt=&\displaystyle\frac{1}{2(%
\nu_{0}\!-\!\mu)}.\end{array}\right.$$
(4.8)
We have shown in §3 how to solve these
two-dimensional homogeneous boundary problems in terms of the complete
elliptic integral of the second kind $E$ and its derivative
$E^{\prime}$. The solutions are
$$\displaystyle F(\lambda,\mu)$$
$$\displaystyle=\frac{E(w)}{\pi(\lambda_{0}-\mu)},$$
$$\displaystyle G(\lambda,\mu)$$
$$\displaystyle=-\frac{2wE^{\prime}(w)}{\pi(\lambda_{0}-\lambda)},$$
$$\displaystyle H(\mu,\nu)$$
$$\displaystyle=-\frac{2uE^{\prime}(u)}{\pi(\nu_{0}-\nu)},$$
$$\displaystyle I(\mu,\nu)$$
$$\displaystyle=-\frac{E(u)}{\pi(\mu-\nu_{0})},$$
where $u$ and similarly $v$, which we will encounter later on,
follow from $w$ (3.16) by cyclic permutation
$\lambda\to\mu\to\nu\to\lambda$ and $\lambda_{0}\to\mu_{0}\to\nu_{0}\to\lambda_{0}$, so that
$$u=\frac{(\mu_{0}\!-\!\mu)(\nu_{0}\!-\!\nu)}{(\mu_{0}\!-\!\nu_{0})(\mu\!-\!\nu)%
},\quad v=\frac{(\nu_{0}\!-\!\nu)(\lambda_{0}\!-\!\lambda)}{(\lambda_{0}\!-\!%
\nu_{0})(\lambda\!-\!\nu)}.$$
(4.10)
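As a spot-check, the solutions for $F$ and $G$ quoted above can be verified numerically against the system (4.7). The sketch below assumes scipy's parameter convention $E(m)$, $K(m)$, for which $\mathrm{d}E/\mathrm{d}m=(E-K)/(2m)$, together with arbitrary admissible coordinate values (the specific numbers are not from the text), and $w$ built by the same pattern as $u$ and $v$ in (4.10).

```python
import math
from scipy.special import ellipe, ellipk

lam0, mu0 = 4.0, -1.5            # arbitrary values with mu <= mu0 <= lam <= lam0
lam, mu = 2.5, -2.0

def w_of(lam, mu):
    # argument w, by the same pattern as u and v in (4.10)
    return (lam0 - lam)*(mu0 - mu)/((lam0 - mu0)*(lam - mu))

def F(lam, mu):
    return ellipe(w_of(lam, mu))/(math.pi*(lam0 - mu))

def G(lam, mu):
    w = w_of(lam, mu)
    Eprime = (ellipe(w) - ellipk(w))/(2*w)   # dE/dm in scipy's convention
    return -2*w*Eprime/(math.pi*(lam0 - lam))

h = 1e-6                          # central finite differences
dF_dlam = (F(lam + h, mu) - F(lam - h, mu))/(2*h)
dG_dmu  = (G(lam, mu + h) - G(lam, mu - h))/(2*h)

print(abs(dF_dlam - G(lam, mu)/(2*(lam - mu))) < 1e-8)   # True
print(abs(dG_dmu  - F(lam, mu)/(2*(mu - lam))) < 1e-8)   # True
```

The boundary condition $F(\lambda_{0},\mu)=1/[2(\lambda_{0}\!-\!\mu)]$ also follows immediately, since $w=0$ and $E(0)=\pi/2$ there.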
The remaining six equations form a three-dimensional homogeneous
boundary problem, consisting of three homogeneous Jeans equations
$$\displaystyle\frac{\partial A}{\partial\lambda}-\frac{B}{2(\lambda\!-\!\mu)}-%
\frac{C}{2(\lambda\!-\!\nu)}$$
$$\displaystyle=$$
$$\displaystyle 0,$$
$$\displaystyle\frac{\partial B}{\partial\mu}-\frac{C}{2(\mu\!-\!\nu)}-\frac{A}{%
2(\mu\!-\!\lambda)}$$
$$\displaystyle=$$
$$\displaystyle 0,$$
(4.11)
$$\displaystyle\frac{\partial C}{\partial\nu}-\frac{A}{2(\nu\!-\!\lambda)}-\frac%
{B}{2(\nu\!-\!\mu)}$$
$$\displaystyle=$$
$$\displaystyle 0,$$
and three boundary conditions, specifically the values of
$A(\lambda_{0},\mu,\nu)$, $B(\lambda,\mu_{0},\nu)$, and
$C(\lambda,\mu,\nu_{0})$.
As in §3.2.2, it is useful to
supplement these boundary conditions with the values of $A$, $B$, and
$C$ at the other boundary surfaces.
These are obtained by integrating the pairs of equations
(4.2) which apply at those surfaces, and using the
boundary conditions.
This results in the following nine boundary values
$$\displaystyle A(\lambda_{0},\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2\pi}\!\Biggl{[}\!\frac{E(u)}{(\lambda_{0}\!\!-\!\nu)(%
\mu\!-\!\nu_{0})}\!+\!\frac{2uE^{\prime}(u)}{(\lambda_{0}\!\!-\!\mu)(\nu_{0}\!%
\!-\!\nu)}\!\Biggr{]},$$
$$\displaystyle A(\lambda,\mu_{0},\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2\pi}\!\Biggl{[}\!\frac{E(v)}{(\lambda_{0}\!\!-\!\nu)(%
\mu_{0}\!\!-\!\nu_{0})}\!+\!\frac{2vE^{\prime}(v)}{(\lambda_{0}\!\!-\!\mu_{0})%
(\nu_{0}\!\!-\!\nu)}\!\Biggr{]},$$
$$\displaystyle A(\lambda,\mu,\nu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{E(w)}{4\pi(\lambda_{0}\!\!-\!\mu)}\!\Biggl{[}\!\frac{%
\lambda\!-\!\mu}{(\lambda\!-\!\nu_{0})(\mu\!-\!\nu_{0})}\!+\!\frac{\lambda_{0}%
\!\!-\!\mu_{0}}{(\lambda_{0}\!\!-\!\nu_{0})(\mu_{0}\!\!-\!\nu_{0})}\!\Biggr{]},$$
$$\displaystyle B(\lambda_{0},\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{uE^{\prime}(u)}{2\pi(\nu_{0}\!\!-\!\nu)}\!\Biggl{[}\!\frac{%
\mu_{0}\!\!-\!\mu}{(\lambda_{0}\!-\!\!\mu_{0})(\lambda_{0}\!\!-\!\mu)}\!-\!%
\frac{\nu_{0}\!\!-\!\nu}{(\lambda_{0}\!\!-\!\nu_{0})(\lambda_{0}\!\!-\!\nu)}\!%
\Biggr{]},$$
$$\displaystyle B(\lambda,\mu_{0},\nu)$$
$$\displaystyle=$$
$$\displaystyle\,0,$$
(4.12)
$$\displaystyle B(\lambda,\mu,\nu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{wE^{\prime}(w)}{2\pi(\lambda_{0}\!\!-\!\lambda)}\!\Biggl{[}%
\!\frac{\mu_{0}\!\!-\!\mu}{(\mu_{0}\!\!-\!\nu_{0})(\mu\!-\!\nu_{0})}\!-\!\frac%
{\lambda_{0}\!\!-\!\lambda}{(\lambda_{0}\!\!-\!\nu_{0})(\lambda\!-\!\nu_{0})}%
\!\Biggr{]},$$
$$\displaystyle C(\lambda_{0},\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{E(u)}{4\pi(\mu\!-\!\nu_{0})}\!\Biggl{[}\!\frac{\mu\!-\!\nu}%
{(\lambda_{0}\!\!-\!\mu)(\lambda_{0}\!\!-\!\nu)}\!+\!\frac{\mu_{0}\!\!-\!\nu_{%
0}}{(\lambda_{0}\!\!-\!\mu_{0})(\lambda_{0}\!\!-\!\nu_{0})}\!\Biggr{]},$$
$$\displaystyle C(\lambda,\mu_{0},\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2\pi}\!\Biggl{[}\!\frac{E(v)}{(\lambda_{0}\!\!-\!\mu_{0}%
)(\lambda\!-\!\nu_{0})}\!-\!\frac{2vE^{\prime}(v)}{(\mu_{0}\!\!-\!\nu_{0})(%
\lambda_{0}\!\!-\!\lambda)}\!\Biggr{]},$$
$$\displaystyle C(\lambda,\mu,\nu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2\pi}\!\Biggl{[}\!\frac{E(w)}{(\lambda_{0}\!\!-\!\mu)(%
\lambda\!-\!\nu_{0})}\!-\!\frac{2wE^{\prime}(w)}{(\mu\!-\!\nu_{0})(\lambda_{0}%
\!\!-\!\lambda)}\!\Biggr{]}.$$
If we can solve the three homogeneous equations
(4.11) and satisfy the nine boundary expressions
(4.12) simultaneously, then we obtain the
singular solutions (4.6).
By superposition, we can then construct the solution of the Jeans
equations for triaxial Stäckel models.
4.3 Particular solution
By analogy with the two-dimensional case, we look for particular
solutions of the homogeneous equations (4.11) and by
superposition of these particular solutions we try to satisfy the
boundary expressions (4.12) simultaneously,
in order to obtain the homogeneous solution for $A$, $B$ and $C$.
4.3.1 One-parameter particular solution
By substitution one can verify that
$$A^{P}(\lambda,\mu,\nu)=\frac{\sqrt{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-\!%
\nu)}}{(\lambda\!-\!\mu)(\lambda\!-\!\nu)}\frac{(z\!-\!\lambda)}{(z\!-\!\mu)(z%
\!-\!\nu)},$$
(4.13)
with $B^{P}$ and $C^{P}$ following from $A^{P}$ by cyclic permutation,
solves the homogeneous equations (4.11). To satisfy
the nine boundary expressions (4.12), we
could integrate this particular solution over its free parameter $z$,
in the complex plane. From §3.2.4, it
follows that, at the boundaries, this results in simple polynomials in
$(\lambda,\mu,\nu)$ and $(\lambda_{0},\mu_{0},\nu_{0})$. This means that the
nine boundary expressions (4.12) cannot be
satisfied, since in addition to these simple polynomials they also
contain $E$ and $E^{\prime}$. The latter are functions of one variable, so
that at least one extra freedom is necessary. Hence, we look for a
particular solution with two free parameters.
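The substitution check for (4.13) is easily automated. The sympy sketch below (illustrative, using the common square-root factor of the text, which is invariant under the cyclic permutation $\lambda\to\mu\to\nu\to\lambda$) verifies the first homogeneous equation; the other two follow by the cyclic symmetry.

```python
import sympy as sp

lam, mu, nu, z = sp.symbols('lamda mu nu z')

# common square-root factor; invariant under lam -> mu -> nu -> lam,
# since (mu - nu)(mu - lam)(nu - lam) = (lam - mu)(lam - nu)(mu - nu)
R = sp.sqrt((lam - mu)*(lam - nu)*(mu - nu))

A = R/((lam - mu)*(lam - nu)) * (z - lam)/((z - mu)*(z - nu))   # eq. (4.13)
B = R/((mu - nu)*(mu - lam))  * (z - mu)/((z - nu)*(z - lam))
C = R/((nu - lam)*(nu - mu))  * (z - nu)/((z - lam)*(z - mu))

# first homogeneous equation of (4.11)
residual = sp.diff(A, lam) - B/(2*(lam - mu)) - C/(2*(lam - nu))

# evaluate at an arbitrary admissible point with nu < mu < lam
point = {lam: sp.Rational(23, 10), mu: sp.Rational(7, 10),
         nu: sp.Rational(-11, 10), z: sp.Rational(52, 10)}
print(abs(float(residual.subs(point).evalf())) < 1e-10)   # True
```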
4.3.2 Two-parameter particular solution
A particular solution with two free parameters $z_{1}$ and $z_{2}$
can be found by splitting the $z$-dependent terms
of the one-parameter solution (4.13) into two similar
parts and then relabelling them. The result is the following
two-parameter particular solution
$$\displaystyle A^{P}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sqrt{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-\!\nu)}}{(%
\lambda\!-\!\mu)(\lambda\!-\!\nu)}\prod_{i=1}^{2}\frac{(z_{i}\!-\!\lambda)^{%
\frac{1}{2}}}{(z_{i}\!-\!\mu)^{\frac{1}{2}}(z_{i}\!-\!\nu)^{\frac{1}{2}}},$$
$$\displaystyle B^{P}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sqrt{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-\!\nu)}}{(%
\mu\!-\!\nu)(\mu\!-\!\lambda)}\prod_{i=1}^{2}\frac{(z_{i}\!-\!\mu)^{\frac{1}{2%
}}}{(z_{i}\!-\!\nu)^{\frac{1}{2}}\!(z_{i}\!-\!\lambda)^{\frac{1}{2}}},$$
(4.14)
$$\displaystyle C^{P}$$
$$\displaystyle=$$
$$\displaystyle\frac{\sqrt{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-\!\nu)}}{(%
\nu\!-\!\lambda)(\nu\!-\!\mu)}\prod_{i=1}^{2}\frac{(z_{i}\!-\!\nu)^{\frac{1}{2%
}}}{(z_{i}\!-\!\lambda)^{\frac{1}{2}}(z_{i}\!-\!\mu)^{\frac{1}{2}}}.$$
These functions are cyclic in $(\lambda,\mu,\nu)$, as is required from
the symmetry of the homogeneous equations (4.11).
The presence of the square roots, such as occurred earlier in the
solution (3.32) for the disc case, allows us
to fit boundary values that contain elliptic integrals.
To show that this particular solution solves the homogeneous Jeans
equations, we calculate the derivative of $A^{P}(\lambda,\mu,\nu)$ with
respect to $\lambda$:
$$\frac{\partial A^{P}}{\partial\lambda}=\frac{A^{P}}{2}\Biggl{(}\frac{1}{%
\lambda\!-\!z_{1}}+\frac{1}{\lambda\!-\!z_{2}}-\frac{1}{\lambda\!-\!\mu}-\frac%
{1}{\lambda\!-\!\nu}\Biggr{)}.$$
(4.15)
This can be written as
$$\displaystyle\frac{\partial A^{P}}{\partial\lambda}$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{2(\lambda\!-\!\mu)}\Biggl{[}-\frac{(z_{1}\!-\!\mu)(z_{2}%
\!-\!\mu)(\lambda\!-\!\nu)}{(z_{1}\!-\!\lambda)(z_{2}\!-\!\lambda)(\mu\!-\!\nu%
)}A^{P}\Biggr{]}$$
$$\displaystyle+\;\frac{1}{2(\lambda\!-\!\nu)}\Biggl{[}\frac{(z_{1}\!-\!\nu)(z_{%
2}\!-\!\nu)(\lambda\!-\!\mu)}{(z_{1}\!-\!\lambda)(z_{2}\!-\!\lambda)(\mu\!-\!%
\nu)}A^{P}\Biggr{]}.$$
From the two-parameter particular solution we have
$$\displaystyle\frac{B^{P}}{A^{P}}$$
$$\displaystyle=$$
$$\displaystyle-\frac{(z_{1}-\mu)(z_{2}-\mu)(\lambda-\nu)}{(z_{1}-\lambda)(z_{2}%
-\lambda)(\mu-\nu)},$$
(4.17)
$$\displaystyle\frac{C^{P}}{A^{P}}$$
$$\displaystyle=$$
$$\displaystyle\frac{(z_{1}-\nu)(z_{2}-\nu)(\lambda-\mu)}{(z_{1}-\lambda)(z_{2}-%
\lambda)(\mu-\nu)},$$
so that, after substitution of these ratios, the first homogeneous
equation of (4.11) is indeed satisfied. The
remaining two homogeneous equations can be checked in the same way.
4.4 The homogeneous solution
In order to satisfy the four boundary expressions of the
two-dimensional case, we multiplied the one-parameter particular
solution by terms depending on $\lambda_{0}$, $\mu_{0}$ and the free
complex parameter $z$, followed by contour integration over the
latter. Similarly, in the triaxial case we multiply the two-parameter
particular solution (4.14) by terms depending on
$\lambda_{0}$, $\mu_{0}$, $\nu_{0}$ and the two free parameters $z_{1}$ and
$z_{2}$, in such a way that by contour integration over the latter two
complex parameters the nine boundary expressions
(4.12) can be satisfied. Since these terms
and the integration are independent of $\lambda$, $\mu$ and $\nu$, it
follows from the superposition principle that the homogeneous
equations (4.11) remain satisfied.
The contour integrations over $z_{1}$ and $z_{2}$ are mutually
independent, since we can separate the two-parameter particular
solution (4.14) with respect to these two
parameters. This allows us to choose a pair of contours, one contour
in the $z_{1}$-plane and the other contour in the $z_{2}$-plane, and
integrate over them separately. We consider the same simple contours
as in the disc case (Fig. 5) around
the pairs of branch points $(\lambda,\lambda_{0})$ and $(\mu,\mu_{0})$,
and a similar contour around $(\nu,\nu_{0})$. We denote these contours
by $C_{i}^{\lambda}$, $C_{i}^{\mu}$ and $C_{i}^{\nu}$ respectively, with $i=1,2$
indicating in which of the two complex planes we apply the contour
integration.
4.4.1 Boundary expressions for $B$
It follows from (4.12) that $B=0$ at the
boundary $\mu=\mu_{0}$. By Cauchy’s theorem, $B$ indeed vanishes if,
in this case, the integrand for $B$ is analytic within the chosen
integration contour in either the $z_{1}$-plane or the $z_{2}$-plane.
The boundary expression for $B$ at $\nu=\nu_{0}$ follows from the one at
$\lambda=\lambda_{0}$ by taking $\nu\leftrightarrow\lambda$ and $\nu_{0}\leftrightarrow\lambda_{0}$. In addition to this symmetry, the
form of both boundary expressions also constrains the solution for
$B$. The boundary expressions can be separated into two parts, one
involving the complete elliptic integral $E^{\prime}$ and the other
consisting of a two-component polynomial in $\tau$ and $\tau_{0}$
($\tau=\lambda,\mu,\nu$). Each of the two parts follows from a contour
integration in one of the two complex planes. For either of the
complex parameters, $z_{1}$ or $z_{2}$, the integrands will consist of a
combination of the six terms $z_{i}-\tau$ and $z_{i}-\tau_{0}$ with powers
that are half-odd integers, i.e., the integrals are of
hyperelliptic form. If two of the six terms cancel on one of
the boundaries, we will be left with an elliptic integral. We expect
the polynomial to result from applying the Residue theorem to a double
pole, as this would involve a first derivative and hence give two
components. This leads to the following Ansatz
$$\displaystyle B(\lambda,\mu,\nu)\propto\frac{\sqrt{(\lambda\!-\!\mu)(\lambda\!%
-\!\nu)(\mu\!-\!\nu)}}{(\mu\!-\!\nu)(\mu\!-\!\lambda)}\times\\
\displaystyle\hskip 20.0pt\oint\limits_{C_{1}}\frac{(z_{1}\!-\!\mu)^{\frac{1}{%
2}}(z_{1}\!-\!\lambda_{0})^{\frac{1}{2}}\,{\mathrm{d}}z_{1}}{(z_{1}\!-\!\nu)^{%
\frac{1}{2}}(z_{1}\!-\!\lambda)^{\frac{1}{2}}(z_{1}\!-\!\mu_{0})^{\frac{1}{2}}%
(z_{1}\!-\!\nu_{0})^{\frac{3}{2}}}\times\\
\displaystyle\oint\limits_{C_{2}}\frac{(z_{2}\!-\!\mu)^{\frac{1}{2}}(z_{2}\!-%
\!\nu_{0})^{\frac{1}{2}}\,{\mathrm{d}}z_{2}}{(z_{2}\!-\!\nu)^{\frac{1}{2}}(z_{%
2}\!-\!\lambda)^{\frac{1}{2}}(z_{2}\!-\!\mu_{0})^{\frac{1}{2}}(z_{2}\!-\!%
\lambda_{0})^{\frac{3}{2}}}.$$
(4.18)
Upon substitution of $\mu=\mu_{0}$, the terms involving $\mu_{0}$ cancel
in both integrals, so that the integrands are analytic in both
contours $C_{1}^{\mu}$ and $C_{2}^{\mu}$. Hence, by choosing either of these
contours as integration contour, the boundary expression
$B(\lambda,\mu_{0},\nu)=0$ is satisfied.
When $\lambda=\lambda_{0}$, the terms with $\lambda_{0}$ in the first
integral in (4.18) cancel, while in the second integral we
have $(z_{2}-\lambda_{0})^{-2}$. The first integral is analytic within
$C_{1}^{\lambda}$, so that there is no contribution from this
contour. However, the integral over $C_{1}^{\mu}$ is elliptic and can be
evaluated in terms of $E^{\prime}$ (cf. §3.2.5). We apply the Residue
theorem to the second integral, for which there is a double pole
inside the contour $C_{2}^{\lambda}$. Considering $C_{1}^{\mu}$ and
$C_{2}^{\lambda}$ as a pair of contours, the expression for $B$ at
$\lambda=\lambda_{0}$ becomes
$$\displaystyle B(\lambda_{0},\mu,\nu)\propto-16\pi^{2}\frac{\sqrt{(\lambda_{0}\!-\!%
\mu_{0})(\lambda_{0}\!-\!\nu_{0})(\mu_{0}\!-\!\nu_{0})}}{(\mu_{0}\!-\!\nu_{0})%
(\mu_{0}\!-\!\lambda_{0})}\times\\
\displaystyle\frac{uE^{\prime}(u)}{2\pi(\nu_{0}\!\!-\!\nu)}\!\Biggl{[}\!\frac{%
\mu_{0}\!\!-\!\mu}{(\lambda_{0}\!\!-\!\mu_{0})(\lambda_{0}\!\!-\!\mu)}\!-\!%
\frac{\nu_{0}\!\!-\!\nu}{(\lambda_{0}\!\!-\!\nu_{0})(\lambda_{0}\!\!-\!\nu)}\!%
\Biggr{]},$$
(4.19)
which is the required boundary expression up to a scaling factor. As
before, we keep the terms $\lambda_{0}\!-\!\mu_{0}$, $\lambda_{0}\!-\!\nu_{0}$
and $\mu_{0}\!-\!\nu_{0}$ under one square root, so that it is
single-valued with respect to cyclic permutation in these coordinates.
The boundary expression for $B$ at $\nu=\nu_{0}$ is symmetric with the
one at $\lambda=\lambda_{0}$, so that a similar approach can be used.
In this case, for the second integral, there is no contribution from
$C_{2}^{\nu}$, whereas it can be expressed in terms of $E^{\prime}$ if
$C_{2}=C_{2}^{\mu}$.
The first integrand has a double pole in $C_{1}^{\nu}$.
The total contribution from the pair ($C_{1}^{\nu}$,$C_{2}^{\mu}$) gives the
correct boundary expression, up to a scaling factor that is the same
as in (4.19).
Taking into account the latter scaling factor, this shows that the
Ansatz (4.18) for $B$ produces the correct boundary
expressions and hence we postulate it as the homogeneous solution for
$B$. The expressions for $A$ and $C$ then follow from the ratios
(4.17). Absorbing the minus sign in
(4.19) into the pair of contours, i.e.,
either of the two contours we integrate in clockwise direction, we
postulate the following homogeneous solution
$$\displaystyle A(\lambda,\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{(\mu_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\lambda_{0}\!)}{16%
\pi^{2}(\lambda\!-\!\mu)(\lambda\!-\!\nu)}\sqrt{\!\frac{(\lambda\!-\!\mu)(%
\lambda\!-\!\nu)(\mu\!-\!\nu)}{(\lambda_{0}\!\!-\!\mu_{0}\!)(\lambda_{0}\!\!-%
\!\nu_{0}\!)(\mu_{0}\!\!-\!\nu_{0}\!)}}\times$$
$$\displaystyle\oint\limits_{C_{1}}$$
$$\displaystyle\frac{(z_{1}\!-\!\lambda)^{\frac{1}{2}}(z_{1}\!-\!\lambda_{0})^{%
\frac{1}{2}}\,{\mathrm{d}}z_{1}}{(z_{1}\!-\!\mu)^{\frac{1}{2}}(z_{1}\!-\!\nu)^%
{\frac{1}{2}}(z_{1}\!-\!\mu_{0})^{\frac{1}{2}}(z_{1}\!-\!\nu_{0})^{\frac{3}{2}%
}}\times$$
$$\displaystyle\oint\limits_{C_{2}}$$
$$\displaystyle\frac{(z_{2}\!-\!\lambda)^{\frac{1}{2}}(z_{2}\!-\!\nu_{0})^{\frac%
{1}{2}}\,{\mathrm{d}}z_{2}}{(z_{2}\!-\!\mu)^{\frac{1}{2}}(z_{2}\!-\!\nu)^{%
\frac{1}{2}}(z_{2}\!-\!\mu_{0})^{\frac{1}{2}}(z_{2}\!-\!\lambda_{0})^{\frac{3}%
{2}}},$$
(4.20)
$$\displaystyle B(\lambda,\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{(\mu_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\lambda_{0}\!)}{16%
\pi^{2}(\mu\!-\!\nu)(\mu\!-\!\lambda)}\sqrt{\!\frac{(\lambda\!-\!\mu)(\lambda%
\!-\!\nu)(\mu\!-\!\nu)}{(\lambda_{0}\!\!-\!\mu_{0}\!)(\lambda_{0}\!\!-\!\nu_{0%
}\!)(\mu_{0}\!\!-\!\nu_{0}\!)}}\times$$
$$\displaystyle\oint\limits_{C_{1}}$$
$$\displaystyle\frac{(z_{1}\!-\!\mu)^{\frac{1}{2}}(z_{1}\!-\!\lambda_{0})^{\frac%
{1}{2}}\,{\mathrm{d}}z_{1}}{(z_{1}\!-\!\nu)^{\frac{1}{2}}(z_{1}\!-\!\lambda)^{%
\frac{1}{2}}(z_{1}\!-\!\mu_{0})^{\frac{1}{2}}(z_{1}\!-\!\nu_{0})^{\frac{3}{2}}}\times$$
$$\displaystyle\oint\limits_{C_{2}}$$
$$\displaystyle\frac{(z_{2}\!-\!\mu)^{\frac{1}{2}}(z_{2}\!-\!\nu_{0})^{\frac{1}{%
2}}\,{\mathrm{d}}z_{2}}{(z_{2}\!-\!\nu)^{\frac{1}{2}}(z_{2}\!-\!\lambda)^{%
\frac{1}{2}}(z_{2}\!-\!\mu_{0})^{\frac{1}{2}}(z_{2}\!-\!\lambda_{0})^{\frac{3}%
{2}}},$$
(4.21)
$$\displaystyle C(\lambda,\mu,\nu)$$
$$\displaystyle=$$
$$\displaystyle\frac{(\mu_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\lambda_{0}\!)}{16%
\pi^{2}(\nu\!-\!\lambda)(\nu\!-\!\mu)}\sqrt{\!\frac{(\lambda\!-\!\mu)(\lambda%
\!-\!\nu)(\mu\!-\!\nu)}{(\lambda_{0}\!\!-\!\mu_{0}\!)(\lambda_{0}\!\!-\!\nu_{0%
}\!)(\mu_{0}\!\!-\!\nu_{0}\!)}}\times$$
$$\displaystyle\oint\limits_{C_{1}}$$
$$\displaystyle\frac{(z_{1}\!-\!\nu)^{\frac{1}{2}}(z_{1}\!-\!\lambda_{0})^{\frac%
{1}{2}}\,{\mathrm{d}}z_{1}}{(z_{1}\!-\!\lambda)^{\frac{1}{2}}(z_{1}\!-\!\mu)^{%
\frac{1}{2}}(z_{1}\!-\!\mu_{0})^{\frac{1}{2}}(z_{1}\!-\!\nu_{0})^{\frac{3}{2}}}\times$$
$$\displaystyle\oint\limits_{C_{2}}$$
$$\displaystyle\frac{(z_{2}\!-\!\nu)^{\frac{1}{2}}(z_{2}\!-\!\nu_{0})^{\frac{1}{%
2}}\,{\mathrm{d}}z_{2}}{(z_{2}\!-\!\lambda)^{\frac{1}{2}}(z_{2}\!-\!\mu)^{%
\frac{1}{2}}(z_{2}\!-\!\mu_{0})^{\frac{1}{2}}(z_{2}\!-\!\lambda_{0})^{\frac{3}%
{2}}}.$$
(4.22)
The integrands consist of multi-valued functions that all come in pairs
of the form $(z-\tau)^{\frac{1}{2}-m}(z-\tau_{0})^{\frac{1}{2}-n}$, for
integers $m$ and $n$, with $\tau$ equal to $\lambda$, $\mu$ or $\nu$.
Hence, completely analogous to our procedure in
§3.2.4, we can make the integrands
single-valued by specifying, in the complex $z_{1}$-plane and
$z_{2}$-plane, three cuts running between the three pairs
$(\lambda,\lambda_{0})$, $(\mu,\mu_{0})$, $(\nu,\nu_{0})$ of branch points,
that are enclosed by the simple contours. The integrands are now
analytic in the cut plane away from its cuts and behave again as
$z_{i}^{-2}$ at large distances, so that the integral over a circular
contour with radius going to infinity will be zero. Hence, connecting
the simple contours $C_{i}^{\lambda}$, $C_{i}^{\mu}$ and $C_{i}^{\nu}$ with this
circular contour shows that their cumulative contribution cancels
$$C_{i}^{\nu}+C_{i}^{\mu}+C_{i}^{\lambda}=0,\qquad i=1,2.$$
(4.23)
This relation will allow us to make a combination of contours, so that
the nine boundary expressions (4.2) can be
satisfied simultaneously
(§4.4.3). Before doing so, we first
establish whether, with the homogeneous solution for $A$ and $C$ given
by (4.4.1) and (4.4.1), respectively,
we indeed satisfy their corresponding boundary expressions separately.
4.4.2 Boundary expressions for $A$ and $C$
The boundary expressions and the homogeneous solution for $C$
follow from those for $A$ by taking $\lambda\leftrightarrow\nu$ and
$\lambda_{0}\leftrightarrow\nu_{0}$. Hence, once we have
checked the boundary expressions for $A$, those for $C$ can be checked
in a similar way.
Upon substitution of $\lambda=\lambda_{0}$ in the expression for
$A$ (4.4.1), the first integrand is proportional to
$z_{1}-\lambda_{0}$ and thus is analytic within the contour
$C_{1}^{\lambda}$. The contribution to the boundary expression
therefore needs to come from either $C_{1}^{\mu}$ or $C_{1}^{\nu}$. The
substitution
$$z_{1}-\lambda_{0}=\frac{\lambda_{0}\!-\!\nu}{\mu\!-\!\nu}(z_{1}\!-\!\mu)-\frac%
{\lambda_{0}\!-\!\mu}{\mu\!-\!\nu}(z_{1}\!-\!\nu),$$
(4.24)
splits the first integral into two complete elliptic integrals
$$\displaystyle\frac{\lambda_{0}\!-\!\nu}{\mu\!-\!\nu}\oint\limits_{C_{1}}\frac{%
(z_{1}\!-\!\mu)^{\frac{1}{2}}\,{\mathrm{d}}z_{1}}{(z_{1}\!-\!\nu)^{\frac{1}{2}%
}(z_{1}\!-\!\mu_{0})^{\frac{1}{2}}(z_{1}\!-\!\nu_{0})^{\frac{3}{2}}}\\
\displaystyle-\frac{\lambda_{0}\!-\!\mu}{\mu\!-\!\nu}\oint\limits_{C_{1}}\frac%
{(z_{1}\!-\!\nu)^{\frac{1}{2}}\,{\mathrm{d}}z_{1}}{(z_{1}\!-\!\mu)^{\frac{1}{2%
}}(z_{1}\!-\!\mu_{0})^{\frac{1}{2}}(z_{1}\!-\!\nu_{0})^{\frac{3}{2}}}.$$
(4.25)
Within the contour $C_{1}^{\mu}$, the integrals can be evaluated in terms
of $E^{\prime}(u)$ and $E(u)$ respectively. When $\lambda=\lambda_{0}$, the
second integral in (4.4.1) has a single pole
contribution from the contour $C_{2}^{\lambda}$. Together, the pair $-C_{1}^{\mu}C_{2}^{\lambda}$ exactly reproduces the boundary expression
$A(\lambda_{0},\mu,\nu)$ in (4.2).
When $\mu=\mu_{0}$, both integrands in the expression for $A$ have a
single pole within the contour $C_{i}^{\mu}$. However, the combination
$C_{1}^{\mu}C_{2}^{\mu}$ does not give the correct boundary expression. We
again split both integrals to obtain the required complete elliptic
integrals. In the first we substitute
$$z_{1}-\lambda_{0}=\frac{\lambda_{0}\!-\!\nu_{0}}{\mu_{0}\!-\!\nu_{0}}(z_{1}\!-%
\!\mu_{0})-\frac{\lambda_{0}\!-\!\mu_{0}}{\mu_{0}\!-\!\nu_{0}}(z_{1}\!-\!\nu_{%
0}).$$
(4.26)
For the contour $C_{1}^{\lambda}$, the first integral after the split can
be evaluated in terms of $E^{\prime}(v)$, but the second integral we
leave unchanged. For the integral in the $z_{2}$-plane we substitute
$$z_{2}-\nu_{0}=\frac{\lambda_{0}\!-\!\nu_{0}}{\lambda_{0}\!-\!\mu_{0}}(z_{2}\!-%
\!\mu_{0})-\frac{\mu_{0}\!-\!\nu_{0}}{\lambda_{0}\!-\!\mu_{0}}(z_{2}\!-\!%
\lambda_{0}).$$
(4.27)
We take $C_{2}^{\nu}$ as contour, and evaluate the first integral after
the split in terms of $E(v)$. We again leave the second integral
unchanged. Except for the contour choice, it is of the same form as
the integral we left unchanged in the $z_{1}$-plane.
To obtain the required boundary expression for $A$ at $\mu=\mu_{0}$, it
turns out that we have to add the contribution of three pairs of
contours, $C_{1}^{\lambda}C_{2}^{\mu}$, $C_{1}^{\mu}C_{2}^{\nu}$ and $C_{1}^{\mu}C_{2}^{\mu}$. With the above substitutions (4.26) and
(4.27), the first two pairs together provide the
required boundary expression, but in addition we have two similar
contour integrals
$$\frac{i/8\pi}{(\lambda_{0}\!\!-\!\nu_{0}\!)^{\frac{1}{2}}(\lambda\!-\!\nu)^{%
\frac{1}{2}}}\hskip-3.0pt\oint\limits_{C^{\tau}}\hskip-3.0pt\frac{(z\!-\!%
\lambda)^{\frac{1}{2}}\,{\mathrm{d}}z}{(z\!-\!\nu)^{\frac{1}{2}}\!(z\!-\!%
\lambda_{0}\!)^{\frac{1}{2}}\!(z\!-\!\nu_{0}\!)^{\frac{1}{2}}\!(z\!-\!\mu_{0}%
\!)},$$
(4.28)
with contours $C^{\lambda}$ and $C^{\nu}$, respectively. The third pair,
$C_{1}^{\mu}C_{2}^{\mu}$, involves the product of two single pole
contributions. The resulting expression
$$\frac{i/8\pi}{(\lambda_{0}\!\!-\!\nu_{0}\!)^{\frac{1}{2}}(\lambda\!-\!\nu)^{%
\frac{1}{2}}}\,\frac{2\pi i\,(\lambda\!-\!\mu_{0}\!)^{\frac{1}{2}}}{\!(\mu_{0}%
\!\!-\!\nu)^{\frac{1}{2}}\!(\lambda_{0}\!\!-\!\mu_{0}\!)^{\frac{1}{2}}\!(\mu_{%
0}\!\!-\!\nu_{0}\!)^{\frac{1}{2}}},$$
(4.29)
can be written in the same form as (4.28),
with contour $C^{\mu}$. As a result, we now have the same integral over
all three contours, so that from (4.23), the
cumulative result vanishes and we are left with the required boundary
expression.
The expression for $A$ at $\nu=\nu_{0}$ resembles the one for $B$ at the
same boundary. This is expected since their boundary expressions in
(4.2) are also very similar. The first
integral now has a contribution from a double pole in the contour
$C_{1}^{\nu}$. The second integral has no contribution from the contour
$C_{2}^{\nu}$. However, within $C_{2}^{\mu}$, the second integral can be
evaluated in terms of $E(w)$. We obtain the correct boundary
expression $A(\lambda,\mu,\nu_{0})$ by considering the pair $-C_{1}^{\nu}C_{2}^{\mu}$.
4.4.3 Combination of contours
In the previous paragraphs we have constructed a homogeneous solution
for $A$, $B$ and $C$, and we have shown that with this solution all
nine boundary expressions can be satisfied. For each boundary
expression separately, we have determined the required pair of
contours and also contours from which there is no contribution. Now we
have to find the right combination of all these contours to fit the
boundary expressions simultaneously.
We first summarise the required and non-contributing pairs of contours
per boundary expression
$$\displaystyle A(\lambda_{0},\mu,\nu)$$
$$\displaystyle:$$
$$\displaystyle-C_{1}^{\mu}C_{2}^{\lambda}\pm C_{1}^{\lambda}C_{2}^{\tau},$$
$$\displaystyle A(\lambda,\mu_{0},\nu)$$
$$\displaystyle:$$
$$\displaystyle+C_{1}^{\mu}C_{2}^{\nu}+C_{1}^{\lambda}C_{2}^{\mu}+C_{1}^{\mu}C_{%
2}^{\mu},$$
$$\displaystyle A(\lambda,\mu,\nu_{0})$$
$$\displaystyle:$$
$$\displaystyle-C_{1}^{\nu}C_{2}^{\mu}\pm C_{1}^{\tau}C_{2}^{\nu},$$
$$\displaystyle B(\lambda_{0},\mu,\nu)$$
$$\displaystyle:$$
$$\displaystyle-C_{1}^{\mu}C_{2}^{\lambda}\pm C_{1}^{\lambda}C_{2}^{\tau},$$
$$\displaystyle B(\lambda,\mu_{0},\nu)$$
$$\displaystyle:$$
$$\displaystyle\pm C_{1}^{\mu}C_{2}^{\tau}\pm C_{1}^{\tau}C_{2}^{\mu},$$
(4.30)
$$\displaystyle B(\lambda,\mu,\nu_{0})$$
$$\displaystyle:$$
$$\displaystyle-C_{1}^{\nu}C_{2}^{\mu}\pm C_{1}^{\tau}C_{2}^{\nu},$$
$$\displaystyle C(\lambda_{0},\mu,\nu)$$
$$\displaystyle:$$
$$\displaystyle-C_{1}^{\mu}C_{2}^{\lambda}\pm C_{1}^{\lambda}C_{2}^{\tau},$$
$$\displaystyle C(\lambda,\mu_{0},\nu)$$
$$\displaystyle:$$
$$\displaystyle+C_{1}^{\mu}C_{2}^{\nu}+C_{1}^{\lambda}C_{2}^{\mu}+C_{1}^{\mu}C_{%
2}^{\mu},$$
$$\displaystyle C(\lambda,\mu,\nu_{0})$$
$$\displaystyle:$$
$$\displaystyle-C_{1}^{\nu}C_{2}^{\mu}\pm C_{1}^{\tau}C_{2}^{\nu},$$
where $\tau$ can be $\lambda$, $\mu$ or $\nu$. At each boundary
separately, $\lambda=\lambda_{0}$, $\mu=\mu_{0}$ and $\nu=\nu_{0}$, the
allowed combination of contours matches between $A$, $B$ and $C$. This
leaves the question of how the combinations of contours at the
different boundaries are related to each other.
From (4.23), we know that in both the complex
$z_{1}$-plane and $z_{2}$-plane, the cumulative contribution of the three
simple contours cancels. As a consequence, each of the following three
combinations of integration contours
$$C_{1}^{\mu}C_{2}^{\mu}=-\,C_{1}^{\mu}\,(\,C_{2}^{\lambda}+C_{2}^{\nu}\,)=-\,(%
\,C_{1}^{\lambda}+C_{1}^{\nu}\,)\,C_{2}^{\mu},$$
(4.31)
will give the same result. Similarly, we can add to each combination the
pairs $C_{1}^{\lambda}C_{2}^{\mu}$ and $C_{1}^{\mu}C_{2}^{\nu}$, to obtain
$$C_{1}^{\mu}C_{2}^{\nu}\!+\!C_{1}^{\lambda}C_{2}^{\mu}\!+\!C_{1}^{\mu}C_{2}^{%
\mu}\!=\!C_{1}^{\lambda}C_{2}^{\mu}\!-\!C_{1}^{\mu}C_{2}^{\lambda}\!=\!C_{1}^{%
\mu}C_{2}^{\nu}\!-\!C_{1}^{\nu}C_{2}^{\mu}.$$
(4.32)
The first combination of contour pairs matches the allowed combination
of contours for $\mu=\mu_{0}$ in (4.4.3), and the second and third
match those for the boundaries $\lambda=\lambda_{0}$ and
$\nu=\nu_{0}$. This completes the proof that the expressions
(4.4.1)–(4.4.1) for $A$, $B$ and $C$
solve the homogeneous equations (4.2) and
satisfy the nine boundary expressions (4.2)
simultaneously when the integration contour is any of the three
combinations (4.32).
We shall see below that the first of these combinations is preferred
in numerical evaluations.
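The identities (4.31) and (4.32) rest only on the linear relation
(4.23). As an illustrative check (a sketch not contained in the
original derivation), one can verify them symbolically by eliminating
$C_{i}^{\nu}=-(C_{i}^{\lambda}+C_{i}^{\mu})$ in each plane and comparing the
coefficients of the remaining basis pairs:

```python
from itertools import product

def pair(a, b):
    # contour pair C1^a C2^b expressed over the basis {lam, mu} x {lam, mu},
    # after eliminating C^nu = -(C^lam + C^mu) in each plane, cf. (4.23)
    basis = {"lam": {"lam": 1}, "mu": {"mu": 1},
             "nu": {"lam": -1, "mu": -1}}
    out = {}
    for (k1, c1), (k2, c2) in product(basis[a].items(), basis[b].items()):
        out[(k1, k2)] = out.get((k1, k2), 0) + c1 * c2
    return out

def add(*terms):
    out = {}
    for term in terms:
        for key, val in term.items():
            out[key] = out.get(key, 0) + val
    return {key: val for key, val in out.items() if val != 0}

def neg(term):
    return {key: -val for key, val in term.items()}

# the three combinations of contour pairs in (4.32)
combo1 = add(pair("mu", "nu"), pair("lam", "mu"), pair("mu", "mu"))
combo2 = add(pair("lam", "mu"), neg(pair("mu", "lam")))
combo3 = add(pair("mu", "nu"), neg(pair("nu", "mu")))
print(combo1 == combo2 == combo3)  # True
```

All three combinations reduce to $C_{1}^{\lambda}C_{2}^{\mu}-C_{1}^{\mu}C_{2}^{\lambda}$ in the reduced basis, in agreement with (4.32).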
4.5 Evaluation of the homogeneous solutions
We write the complex contour integrals in the homogeneous solutions
$A$, $B$ and $C$ (4.4.1–4.4.1) as
real integrals.
The resulting complete hyperelliptic integrals are expressed as single
quadratures, which can be evaluated numerically in a straightforward
way.
We also express the complete elliptic integrals in the two-dimensional
homogeneous solutions $F$, $G$, $H$ and $I$ (4.2) in this
way to facilitate their numerical evaluation.
4.5.1 From complex to real integrals
To transform the complex contour integrals in
(4.4.1)–(4.4.1) in real integrals we
wrap the contours $C^{\lambda}$, $C^{\mu}$ and $C^{\nu}$ around the
corresponding pair of branch points
(Fig. 6).
The integrands consist of terms $z-\tau$ and $z-\tau_{0}$, all with
powers larger than $-1$, except $z_{1}-\nu_{0}$ and $z_{2}-\lambda_{0}$, both
of which occur to the power $-\frac{3}{2}$.
This means that for all simple contours $C_{i}^{\tau}$
$(\tau=\lambda,\mu,\nu;i=1,2)$, except for $C_{1}^{\nu}$ and
$C_{2}^{\lambda}$, the contribution from the arcs around the branch points
vanishes.
For these contours, we are then left with the parts parallel to the real
axis, so that we can rewrite the complex integrals as real integrals
with the branch points as integration limits.
The only combination of contours of the three given in
(4.32) that does not involve both $C_{1}^{\nu}$ and
$C_{2}^{\lambda}$, is
$$S\equiv C_{1}^{\mu}C_{2}^{\nu}+C_{1}^{\lambda}C_{2}^{\mu}+C_{1}^{\mu}C_{2}^{%
\mu}.$$
(4.33)
We have to be careful with the changes in phase when wrapping each of
the simple contours around the branch points.
One can verify that the phase changes per contour are the same for all
three homogeneous solutions $A$, $B$ and $C$, and also that the
contribution from the parts parallel to the real axis is equivalent.
This gives a factor 2 per contour and thus a factor 4 for the
combination of contour pairs in $S$.
In this way, we can transform the double complex contour integration
into the following combination of real integrals
$$\iint\limits_{S}\hskip-3.0pt{\mathrm{d}}z_{1}{\mathrm{d}}z_{2}=4(\int\limits_{%
\lambda}^{\lambda_{0}}\hskip-3.0pt{\mathrm{d}}t_{1}\!\int\limits_{\mu}^{\mu_{0%
}}\hskip-3.0pt{\mathrm{d}}t_{2}+\!\int\limits_{\mu}^{\mu_{0}}\hskip-3.0pt{%
\mathrm{d}}t_{1}\!\int\limits_{\nu}^{\nu_{0}}\hskip-3.0pt{\mathrm{d}}t_{2}-\!%
\int\limits_{\mu}^{\mu_{0}}\hskip-3.0pt{\mathrm{d}}t_{1}\!\int\limits_{\mu}^{%
\mu_{0}}\hskip-3.0pt{\mathrm{d}}t_{2}),$$
(4.34)
with $t_{i}$ the real part of $z_{i}$.
We apply this transformation to
(4.4.1)–(4.4.1), and we absorb the
factor of 4 left in the denominators into the integrals, so that we
can write
$$\displaystyle A(\!\lambda,\!\mu,\!\nu;\!\lambda_{0},\!\mu_{0},\!\nu_{0}\!)$$
$$\displaystyle=$$
$$\displaystyle\frac{(\mu_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\lambda_{0}\!)%
\Lambda}{\pi^{2}(\lambda\!-\!\mu)(\lambda\!-\!\nu)}\!\left(A_{1}A_{2}\!+\!A_{3%
}A_{4}\!-\!A_{2}A_{3}\!\right)\!,$$
$$\displaystyle B(\!\lambda,\!\mu,\!\nu;\!\lambda_{0},\!\mu_{0},\!\nu_{0}\!)$$
$$\displaystyle=$$
$$\displaystyle\frac{(\mu_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\lambda_{0}\!)%
\Lambda}{\pi^{2}(\mu\!-\!\nu)(\mu\!-\!\lambda)}\!\left(B_{1}B_{2}\!+\!B_{3}B_{%
4}\!-\!B_{2}B_{3}\!\right)\!,$$
$$\displaystyle C(\!\lambda,\!\mu,\!\nu;\!\lambda_{0},\!\mu_{0},\!\nu_{0}\!)$$
$$\displaystyle=$$
$$\displaystyle\frac{(\mu_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\lambda_{0}\!)%
\Lambda}{\pi^{2}(\nu\!-\!\lambda)(\nu\!-\!\mu)}\!\left(C_{1}C_{2}\!+\!C_{3}C_{%
4}\!-\!C_{2}C_{3}\!\right)\!,$$
(4.35)
where $A_{i}$, $B_{i}$ and $C_{i}$ ($i=1,2,3,4$) are complete hyperelliptic
integrals, for which we give expressions below, and
$$\Lambda^{2}=\frac{(\lambda\!-\!\mu)(\lambda\!-\!\nu)(\mu\!-\!\nu)}{(\lambda_{0%
}\!\!-\!\mu_{0}\!)(\lambda_{0}\!\!-\!\nu_{0}\!)(\mu_{0}\!\!-\!\nu_{0}\!)}.$$
(4.36)
The second set of arguments added to $A$, $B$ and $C$ makes explicit
the position $(\lambda_{0},\mu_{0},\nu_{0})$ of the source point which is
causing the stresses at the field point $(\lambda,\mu,\nu)$.
4.5.2 The complete hyperelliptic integrals
With the transformation described in the previous section the
expression for, e.g., the complete hyperelliptic integral $A_{2}$ is of
the form
$$A_{2}=\frac{1}{2}\int\limits_{\mu}^{\mu_{0}}\hskip-5.0pt\frac{{\mathrm{d}}t}{%
\lambda_{0}\!-\!t}\sqrt{\frac{(\lambda\!-\!t)(t\!-\!\nu_{0})}{(\mu_{0}\!-\!t)(%
t\!-\!\mu)(\lambda_{0}\!-\!t)(t\!-\!\nu)}}.$$
(4.37)
The integrand has two singularities, one at the lower integration limit
$t=\mu$ and one at the upper integration limit $t=\mu_{0}$.
The substitution $t=\mu+(\mu_{0}-\mu)\cos^{2}\theta$ removes
both singularities, since ${\mathrm{d}}t/\sqrt{(\mu_{0}\!-\!t)(t\!-\!\mu)}=-2\,{\mathrm{d}}\theta$.
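The effect of this substitution can be seen on the elementary integral
$\int_{\mu}^{\mu_{0}}{\mathrm{d}}t/\sqrt{(\mu_{0}\!-\!t)(t\!-\!\mu)}=\pi$: the
integrand of the direct form diverges at both limits, while the
substituted form is the constant $2$ on $[0,\pi/2]$. A minimal
numerical sketch (with hypothetical values for $\mu$ and $\mu_{0}$,
not tied to a particular model):

```python
import math

mu, mu0 = -1.9, -1.4  # hypothetical values with mu < mu0

def direct(n=200_000):
    # midpoint rule on the form with inverse-square-root singularities
    # at both endpoints; the integrable singularities slow convergence
    h = (mu0 - mu) / n
    return sum(h / math.sqrt((mu0 - t) * (t - mu))
               for t in (mu + (k + 0.5) * h for k in range(n)))

def substituted(n=50):
    # after t = mu + (mu0 - mu) cos^2(theta) the integrand is the
    # constant 2 on [0, pi/2], so even a coarse rule gives pi
    return sum(2.0 * (math.pi / 2) / n for _ in range(n))

print(substituted())   # pi to machine precision
print(direct())        # approaches pi, but only slowly
```

The substituted form is exact with a handful of points, while the
direct form still carries a visible error after hundreds of thousands
of points.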
All complete hyperelliptic integrals $A_{i}$, $B_{i}$ and $C_{i}$
($i=1,2,3,4$) in (4.5.1) are of the form
(4.37) and have at most two singularities at either
of the integration limits.
Hence, we can apply a similar substitution to remove the
singularities.
This results in the following expressions
$$\displaystyle A_{1}$$
$$\displaystyle\!=\!(\lambda_{0}\!-\!\lambda)^{2}\hskip-5.0pt\int\limits_{0}^{%
\pi/2}\!\!\frac{\sin^{2}\theta\cos^{2}\theta{\mathrm{d}}\theta}{x_{3}\Delta_{x%
}},$$
$$\displaystyle A_{2}$$
$$\displaystyle\!=\!\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!\frac{y_{1}y_{4}{%
\mathrm{d}}\theta}{y_{3}\Delta_{y}},$$
$$\displaystyle A_{4}$$
$$\displaystyle\!=\!(\nu_{0}\!-\!\nu)\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!%
\frac{z_{2}\sin^{2}\theta{\mathrm{d}}\theta}{z_{1}\Delta_{z}},$$
$$\displaystyle A_{3}$$
$$\displaystyle\!=\!\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!\frac{y_{3}y_{4}{%
\mathrm{d}}\theta}{y_{1}\Delta_{y}},$$
$$\displaystyle B_{1}$$
$$\displaystyle\!=\!(\lambda_{0}\!-\!\lambda)\hskip-5.0pt\int\limits_{0}^{\pi/2}%
\!\!\frac{x_{2}\sin^{2}\theta{\mathrm{d}}\theta}{x_{3}\Delta_{x}},$$
$$\displaystyle B_{2}$$
$$\displaystyle\!=\!(\mu_{0}\!-\!\mu)\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!%
\frac{y_{1}\cos^{2}\theta{\mathrm{d}}\theta}{y_{3}\Delta_{y}},$$
$$\displaystyle B_{4}$$
$$\displaystyle\!=\!(\nu_{0}\!-\!\nu)\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!%
\frac{z_{4}\sin^{2}\theta{\mathrm{d}}\theta}{z_{1}\Delta_{z}},$$
$$\displaystyle B_{3}$$
$$\displaystyle\!=\!(\mu_{0}\!-\!\mu)\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!%
\frac{y_{3}\cos^{2}\theta{\mathrm{d}}\theta}{y_{1}\Delta_{y}},$$
$$\displaystyle C_{1}$$
$$\displaystyle\!=\!(\lambda_{0}\!-\!\lambda)\hskip-5.0pt\int\limits_{0}^{\pi/2}%
\!\!\frac{x_{4}\sin^{2}\theta{\mathrm{d}}\theta}{x_{3}\Delta_{x}},$$
$$\displaystyle C_{2}$$
$$\displaystyle\!=\!\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!\frac{y_{1}y_{2}{%
\mathrm{d}}\theta}{y_{3}\Delta_{y}},$$
$$\displaystyle C_{4}$$
$$\displaystyle\!=\!(\nu_{0}\!-\!\nu)^{2}\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!%
\frac{\sin^{2}\theta\cos^{2}\theta{\mathrm{d}}\theta}{z_{1}\Delta_{z}},$$
$$\displaystyle C_{3}$$
$$\displaystyle\!=\!\hskip-5.0pt\int\limits_{0}^{\pi/2}\!\!\frac{y_{2}y_{3}{%
\mathrm{d}}\theta}{y_{1}\Delta_{y}},$$
(4.38)
where we have defined
$$\Delta_{x}^{2}=x_{1}x_{2}x_{3}x_{4},\quad\Delta_{y}^{2}=y_{1}y_{2}y_{3}y_{4},%
\quad\Delta_{z}^{2}=z_{1}z_{2}z_{3}z_{4},$$
(4.39)
and the factors $x_{i}$, $y_{i}$ and $z_{i}$ $(i=1,2,3,4)$ are given by
$$\displaystyle x_{1}$$
$$\displaystyle\!=\!(\lambda\!-\!\mu_{0})\!+\!(\lambda_{0}\!-\!\lambda)\cos^{2}\theta,$$
$$\displaystyle x_{2}$$
$$\displaystyle\!=\!(\lambda\!-\!\mu)\!+\!(\lambda_{0}\!-\!\lambda)\cos^{2}\theta,$$
$$\displaystyle x_{3}$$
$$\displaystyle\!=\!(\lambda\!-\!\nu_{0})\!+\!(\lambda_{0}\!-\!\lambda)\cos^{2}\theta,$$
$$\displaystyle x_{4}$$
$$\displaystyle\!=\!(\lambda\!-\!\nu)\!+\!(\lambda_{0}\!-\!\lambda)\cos^{2}\theta,$$
$$\displaystyle y_{1}$$
$$\displaystyle\!=\!(\mu\!-\!\nu_{0})\!+\!(\mu_{0}\!-\!\mu)\cos^{2}\theta,$$
$$\displaystyle y_{2}$$
$$\displaystyle\!=\!(\mu\!-\!\nu)\!+\!(\mu_{0}\!-\!\mu)\cos^{2}\theta,$$
$$\displaystyle y_{3}$$
$$\displaystyle\!=\!(\mu\!-\!\lambda_{0})\!+\!(\mu_{0}\!-\!\mu)\cos^{2}\theta,$$
$$\displaystyle y_{4}$$
$$\displaystyle\!=\!(\mu\!-\!\lambda)\!+\!(\mu_{0}\!-\!\mu)\cos^{2}\theta,$$
$$\displaystyle z_{1}$$
$$\displaystyle\!=\!(\nu\!-\!\lambda_{0})\!+\!(\nu_{0}\!-\!\nu)\cos^{2}\theta,$$
$$\displaystyle z_{2}$$
$$\displaystyle\!=\!(\nu\!-\!\lambda)\!+\!(\nu_{0}\!-\!\nu)\cos^{2}\theta,$$
$$\displaystyle z_{3}$$
$$\displaystyle\!=\!(\nu\!-\!\mu_{0})\!+\!(\nu_{0}\!-\!\nu)\cos^{2}\theta,$$
$$\displaystyle z_{4}$$
$$\displaystyle\!=\!(\nu\!-\!\mu)\!+\!(\nu_{0}\!-\!\nu)\cos^{2}\theta.$$
(4.40)
For each $i$ these factors follow from each other by cyclic
permutation of $\lambda\to\mu\to\nu\to\lambda$ and at the same
time $\lambda_{0}\to\mu_{0}\to\nu_{0}\to\lambda_{0}$.
Half of the factors – all $x_{i}$, $y_{1}$ and $y_{2}$ – are always
positive, whereas the other factors are always negative.
The latter implies that one has to be careful with the signs of the
factors under the square root when evaluating the single quadratures
numerically.
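As a concrete numerical check of these forms (an illustrative sketch
with hypothetical coordinate values satisfying $\nu\leq\nu_{0}\leq\mu\leq\mu_{0}\leq\lambda\leq\lambda_{0}$, not taken from a specific
model), one can compare $A_{2}$ evaluated from the real integral
(4.37) with its $\theta$-form, taking absolute values of the negative
factors $y_{3}$ and $y_{4}$ under the square root:

```python
import math

# hypothetical coordinate values with nu <= nu0 <= mu <= mu0 <= lam <= lam0
lam, mu, nu = 0.5, -1.9, -2.8
lam0, mu0, nu0 = 2.0, -1.4, -2.3

def A2_direct(n=200_000):
    # midpoint rule on eq. (4.37); the integrable 1/sqrt endpoint
    # singularities at t = mu and t = mu0 slow the convergence
    h = (mu0 - mu) / n
    total = 0.0
    for k in range(n):
        t = mu + (k + 0.5) * h
        total += 0.5 * h / (lam0 - t) * math.sqrt(
            (lam - t) * (t - nu0)
            / ((mu0 - t) * (t - mu) * (lam0 - t) * (t - nu)))
    return total

def A2_theta(n=2_000):
    # the same integral after t = mu + (mu0 - mu) cos^2(theta);
    # y3 and y4 are negative, so their absolute values are taken
    # under the square root and y4 / y3 = |y4| / |y3| is used
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        c2 = math.cos((k + 0.5) * h) ** 2
        y1 = (mu - nu0) + (mu0 - mu) * c2
        y2 = (mu - nu) + (mu0 - mu) * c2
        y3 = abs((mu - lam0) + (mu0 - mu) * c2)
        y4 = abs((mu - lam) + (mu0 - mu) * c2)
        total += h * y1 * y4 / (y3 * math.sqrt(y1 * y2 * y3 * y4))
    return total
```

Both evaluations agree closely, and the singularity-free $\theta$-form
reaches that agreement with two orders of magnitude fewer points.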
4.5.3 The complete elliptic integrals
The two-dimensional homogeneous solutions $F$, $G$, $H$ and $I$ are
given in (4.2) in terms of the Legendre complete elliptic
integrals $E(m)$ and $E^{\prime}(m)=[E(m)-K(m)]/(2m)$.
Numerical routines for $E(m)$ and $K(m)$ (e.g., Press et
al. 1999) generally require the argument
to satisfy $0\leq m<1$.
In the allowed range of the confocal ellipsoidal coordinates,
the arguments $u$ (4.10) and $w$
(3.16) can become larger than unity.
In these cases we can use transformations to express $E(m)$ and $K(m)$
in terms of $E(1/m)$ and $K(1/m)$ (e.g., Byrd & Friedman
1971).
We prefer, however, to write the complete elliptic integrals as single
quadratures similar to the above expressions for the hyperelliptic
integrals.
These quadratures can easily be evaluated numerically and apply to
the full range of the confocal ellipsoidal coordinates.
The resulting expressions for the two-dimensional homogeneous
solutions are
$$\displaystyle F(\lambda,\mu;\lambda_{0},\mu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{\pi}\sqrt{\!\frac{\lambda\!-\!\mu}{\lambda_{0}\!-\!\mu_{%
0}}}\int\limits_{0}^{\pi/2}\!\!\frac{x_{1}d\theta}{x_{2}\sqrt{x_{1}x_{2}}},$$
$$\displaystyle G(\lambda,\mu;\lambda_{0},\mu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{\pi}\sqrt{\!\frac{\lambda\!-\!\mu}{\lambda_{0}\!-\!\mu_{%
0}}}(\mu_{0}\!-\!\mu)\!\!\int\limits_{0}^{\pi/2}\!\!\frac{\sin^{2}\theta d%
\theta}{y_{4}\sqrt{y_{3}y_{4}}},$$
$$\displaystyle H(\mu,\nu;\mu_{0},\nu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{\pi}\sqrt{\!\frac{\mu\!-\!\nu}{\mu_{0}\!-\!\nu_{0}}}(\mu%
_{0}\!-\!\mu)\!\!\int\limits_{0}^{\pi/2}\!\!\frac{\sin^{2}\theta d\theta}{y_{2%
}\sqrt{y_{1}y_{2}}},$$
$$\displaystyle I(\mu,\nu;\mu_{0},\nu_{0})$$
$$\displaystyle=$$
$$\displaystyle\frac{1}{\pi}\sqrt{\!\frac{\mu\!-\!\nu}{\mu_{0}\!-\!\nu_{0}}}\int%
\limits_{0}^{\pi/2}\!\!\frac{z_{3}d\theta}{z_{4}\sqrt{z_{3}z_{4}}}.$$
(4.41)
Again we have added two arguments to make the position of the unit
source explicit.
We note that the homogeneous solutions
$A(\lambda,\mu;\lambda_{0},\mu_{0})$ and $B(\lambda,\mu;\lambda_{0},\mu_{0})$
for the disc case (3.41) are equivalent to
$F$ and $G$ respectively.
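The quadrature forms above can be validated against independent
evaluations of the Legendre integrals. The sketch below (illustrative,
not part of the original text) computes $K(m)$ and $E(m)$ by midpoint
quadrature of their defining integrals, cross-checks $K(m)$ against
the arithmetic-geometric mean, and verifies the relation
$E^{\prime}(m)=[E(m)-K(m)]/(2m)$ by a central finite difference:

```python
import math

def K(m, n=20_000):
    # K(m) = \int_0^{pi/2} dtheta / sqrt(1 - m sin^2 theta)
    h = (math.pi / 2) / n
    return sum(h / math.sqrt(1.0 - m * math.sin((k + 0.5) * h) ** 2)
               for k in range(n))

def E(m, n=20_000):
    # E(m) = \int_0^{pi/2} sqrt(1 - m sin^2 theta) dtheta
    h = (math.pi / 2) / n
    return sum(h * math.sqrt(1.0 - m * math.sin((k + 0.5) * h) ** 2)
               for k in range(n))

def K_agm(m):
    # independent evaluation via K(m) = pi / (2 agm(1, sqrt(1 - m)))
    a, b = 1.0, math.sqrt(1.0 - m)
    for _ in range(40):  # quadratic convergence; 40 steps is ample
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

m = 0.7
dE_dm = (E(m + 1e-6) - E(m - 1e-6)) / 2e-6   # central finite difference
print(abs(K(m) - K_agm(m)))                   # quadrature error, tiny
print(abs(dE_dm - (E(m) - K(m)) / (2.0 * m))) # checks the E'(m) relation
```

For arguments outside $[0,1)$ the Legendre forms cease to be real,
which is exactly why the single quadratures (4.41), valid over the
full coordinate range, are preferred in the text.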
4.6 General triaxial solution
We now construct the solution of the Jeans equations for triaxial
Stäckel models (4.2), by superposition of
singular solutions, which involve the homogeneous solutions derived
above.
We match the solution to the boundary conditions at $\mu=-\alpha$ and
$\nu=-\beta$, and check for convergence of the solution when $\lambda\to\infty$.
Next, we consider alternative boundary conditions and present the
triaxial solution for a general finite region.
We also show that the general solution yields the correct result in
the case of thin tube orbits and the triaxial Abel models of
Dejonghe & Laurent (1991).
Finally, we describe a numerical test of the triaxial solution for a
polytrope model.
4.6.1 Superposition of singular solutions
Substitution of the functions $A$, $B$, $C$ (4.5.1) and the
functions $F$, $G$, $H$, $I$ (4.5.3) in expression
(4.2), provides the three singular solutions
of the system of simplified Jeans equations, with the right-hand side
given by (4.1).
We denote these by $S_{2}^{\tau\tau}$ $(\tau=\lambda,\mu,\nu)$.
The singular solutions of the two similar simplified systems, with the
triplet of delta functions at the right-hand side of the
first and third equation, $S_{1}^{\tau\tau}$ and
$S_{3}^{\tau\tau}$ then follow from $S_{2}^{\tau\tau}$ by cyclic
permutation.
This gives
$$\displaystyle S_{1}^{\lambda\lambda}$$
$$\displaystyle=$$
$$\displaystyle B(\nu,\lambda,\mu;\nu_{0},\lambda_{0},\mu_{0})\!+\!G(\nu,\lambda%
;\nu_{0},\lambda_{0})\delta(\mu_{0}\!-\!\mu)$$
$$\displaystyle+H(\lambda,\mu;\lambda_{0},\mu_{0})\delta(\nu_{0}\!-\!\nu)\!-\!%
\delta(\mu_{0}\!-\!\mu)\delta(\nu_{0}\!-\!\nu),$$
$$\displaystyle S_{1}^{\mu\mu}$$
$$\displaystyle=$$
$$\displaystyle C(\nu,\lambda,\mu;\nu_{0},\lambda_{0},\mu_{0})\!+\!I(\lambda,\mu%
;\lambda_{0},\mu_{0})\delta(\nu_{0}\!-\!\nu)$$
$$\displaystyle S_{1}^{\nu\nu}$$
$$\displaystyle=$$
$$\displaystyle A(\nu,\lambda,\mu;\nu_{0},\lambda_{0},\mu_{0})\!+\!F(\nu,\lambda%
;\nu_{0},\lambda_{0})\delta(\mu_{0}\!-\!\mu),$$
(4.42a)
$$\displaystyle S_{2}^{\lambda\lambda}$$
$$\displaystyle=$$
$$\displaystyle A(\lambda,\mu,\nu;\lambda_{0},\mu_{0},\nu_{0})\!+\!F(\lambda,\mu%
;\lambda_{0},\mu_{0})\delta(\nu_{0}\!-\!\nu),$$
$$\displaystyle S_{2}^{\mu\mu}$$
$$\displaystyle=$$
$$\displaystyle B(\lambda,\mu,\nu;\lambda_{0},\mu_{0},\nu_{0})\!+\!G(\lambda,\mu%
;\lambda_{0},\mu_{0})\delta(\nu_{0}\!-\!\nu)$$
$$\displaystyle+H(\mu,\nu;\mu_{0},\nu_{0})\delta(\lambda_{0}\!-\!\lambda)\!-\!%
\delta(\nu_{0}\!-\!\nu)\delta(\lambda_{0}\!-\!\lambda),$$
$$\displaystyle S_{2}^{\nu\nu}$$
$$\displaystyle=$$
$$\displaystyle C(\lambda,\mu,\nu;\lambda_{0},\mu_{0},\nu_{0})\!+\!I(\mu,\nu;\mu%
_{0},\nu_{0})\delta(\lambda_{0}\!-\!\lambda)$$
(4.42b)
$$\displaystyle S_{3}^{\lambda\lambda}$$
$$\displaystyle=$$
$$\displaystyle C(\mu,\nu,\lambda;\mu_{0},\nu_{0},\lambda_{0})\!+\!I(\nu,\lambda%
;\nu_{0},\lambda_{0})\delta(\mu_{0}\!-\!\mu),$$
$$\displaystyle S_{3}^{\mu\mu}$$
$$\displaystyle=$$
$$\displaystyle A(\mu,\nu,\lambda;\mu_{0},\nu_{0},\lambda_{0})\!+\!F(\mu,\nu;\mu%
_{0},\nu_{0})\delta(\lambda_{0}\!-\!\lambda)$$
$$\displaystyle S_{3}^{\nu\nu}$$
$$\displaystyle=$$
$$\displaystyle B(\mu,\nu,\lambda;\mu_{0},\nu_{0},\lambda_{0})\!+\!G(\mu,\nu;\mu%
_{0},\nu_{0})\delta(\lambda_{0}\!-\!\lambda)$$
$$\displaystyle+H(\nu,\lambda;\nu_{0},\lambda_{0})\delta(\mu_{0}\!-\!\mu)\!-\!\delta(\lambda_{0}\!-\!\lambda)\delta(\mu_{0}\!-\!\mu).$$
(4.42c)
These singular solutions describe the contribution of a source point
in $(\lambda_{0},\mu_{0},\nu_{0})$ to $(\lambda,\mu,\nu)$. To find the
solution of the full equations (4.2), we multiply
the singular solutions (4.42),
(4.42) and (4.42) by
$g_{1}(\lambda_{0},\mu_{0},\nu_{0})$, $g_{2}(\lambda_{0},\mu_{0},\nu_{0})$ and
$g_{3}(\lambda_{0},\mu_{0},\nu_{0})$, respectively, so that the contribution
from the source point naturally depends on the local density and
potential (cf. eq. [4.3]).
Then, for each coordinate $\tau=\lambda,\mu,\nu$, we add the three
weighted singular solutions, and integrate over the volume $\Omega$,
defined as
$$\Omega\!=\!\left\{(\lambda_{0},\mu_{0},\nu_{0}\!)\!:\!\lambda\!\leq\!\lambda_{0}\!\!<\!\!\infty,\mu\!\leq\!\mu_{0}\!\!\leq\!\!-\alpha,\nu\!\leq\!\nu_{0}\!\!\leq\!\!-\beta\right\}\!,$$
(4.43)
which is the three-dimensional extension of the integration domain $D$
in Fig. 4.
The resulting solution solves the inhomogeneous Jeans equations
(4.2), but does not give the correct values at the
boundaries $\mu=-\alpha$ and $\nu=-\beta$.
They are found by multiplying the singular solutions
(4.42) evaluated at $\mu_{0}=-\alpha$, and, similarly,
the singular solutions (4.42) evaluated at
$\nu_{0}=-\beta$, by $-S_{\mu\mu}(\lambda_{0},-\alpha,\nu_{0})$ and
$-S_{\nu\nu}(\lambda_{0},\mu_{0},-\beta)$, respectively, and integrating
in $\Omega$ over the coordinates that are not fixed.
One can verify that this procedure represents the boundary values
correctly.
The final result for the general solution of the Jeans equations
(4.2) for triaxial Stäckel models is
$$\displaystyle S_{\tau\tau}(\lambda,$$
$$\displaystyle\mu$$
$$\displaystyle,\!\nu)\!=\!\!\!\int\limits_{\lambda}^{\infty}\hskip-5.0pt{%
\mathrm{d}}\lambda_{0}\!\!\!\int\limits_{\mu}^{-\alpha}\hskip-5.0pt{\mathrm{d}%
}\mu_{0}\!\!\!\int\limits_{\nu}^{-\beta}\hskip-5.0pt{\mathrm{d}}\nu_{0}\!\!%
\sum_{i=1}^{3}g_{i}(\lambda_{0}\!,\!\mu_{0}\!,\!\nu_{0})S_{i}^{\tau\tau}\!(%
\lambda,\!\mu,\!\nu;\!\lambda_{0}\!,\!\mu_{0}\!,\!\nu_{0})$$
(4.44)
$$\displaystyle-$$
$$\displaystyle\!\!\!\int\limits_{\nu}^{-\beta}\hskip-4.0pt{\mathrm{d}}\nu_{0}\!%
\!\!\int\limits_{\lambda}^{\infty}\hskip-4.0pt{\mathrm{d}}\lambda_{0}\,S_{\mu%
\mu}(\lambda_{0},\!-\alpha,\!\nu_{0})\,S_{2}^{\tau\tau}\!(\lambda,\!\mu,\!\nu;%
\!\lambda_{0},\!-\alpha,\!\nu_{0})$$
$$\displaystyle-$$
$$\displaystyle\!\!\!\int\limits_{\lambda}^{\infty}\hskip-4.0pt{\mathrm{d}}%
\lambda_{0}\!\!\!\int\limits_{\mu}^{-\alpha}\hskip-4.0pt{\mathrm{d}}\mu_{0}\,S%
_{\nu\nu}(\lambda_{0},\!\mu_{0},\!-\beta)\,S_{3}^{\tau\tau}\!(\lambda,\!\mu,\!%
\nu;\!\lambda_{0}\!,\!\mu_{0}\!,\!-\beta),$$
where $\tau=\lambda,\mu,\nu$.
This gives the stresses everywhere, once we have specified
$S_{\mu\mu}(\lambda,-\alpha,\nu)$ and
$S_{\nu\nu}(\lambda,\mu,-\beta)$.
At both boundaries $\mu=-\alpha$ and $\nu=-\beta$, the three stress
components are related by a set of two Jeans equations, i.e.,
(4.2) evaluated at $\mu=-\alpha$ and $\nu=-\beta$
respectively.
From §3, we know that the solutions of both of these
two-dimensional systems will involve a (boundary) function of one
variable.
We need this latter freedom to satisfy the continuity conditions
(2.17).
This means it is sufficient to specify any of the three stress
components at $\mu=-\alpha$ and $\nu=-\beta$.
4.6.2 Convergence of the general triaxial solution
As in §§3.1.4, 3.2.7 and
3.4 we suppose
$G(\tau)=\mathcal{O}(\tau^{\delta})$ when $\tau\to\infty$, with $\delta$
in the range $[-\frac{1}{2},0)$.
This implies that the potential $V_{S}$ (2.3)
is also $\mathcal{O}(\tau^{\delta})$.
We assume that the density $\rho$, which does not need to be the
density $\rho_{S}$ which generates $V_{S}$, is of the form $N(\mu,\nu)\lambda^{-s/2}$ when $\lambda\to\infty$.
In the special case where $\rho=\rho_{S}$, we have $s\leq 4$ except
possibly along the $z$-axis.
When $s=4$ the models remain flattened out to the largest radii, but
when $s<4$ the function $N(\mu,\nu)\to 1$ in the limit $\lambda\to\infty$ (de Zeeuw et al. 1986).
From the definition (4.3), we find that
$g_{1}(\lambda_{0},\mu_{0},\nu_{0})=\mathcal{O}(\lambda_{0}^{\delta-s/2})$
as $\lambda_{0}\to\infty$, while $g_{2}(\lambda_{0},\mu_{0},\nu_{0})$ and
$g_{3}(\lambda_{0},\mu_{0},\nu_{0})$ are larger and both
$\mathcal{O}(\lambda_{0}^{-s/2})$.
To investigate the behaviour of the singular solutions
(4.42) at large distance, we have to
carefully analyse the complete hyperelliptic (4.38)
and elliptic (4.5.3) integrals as $\lambda_{0}\to\infty$.
This is simplified by writing them in terms of Carlson’s $R$-functions (Carlson
1977).
We finally find for the singular solutions that
$S_{1}^{\tau\tau}=\mathcal{O}(1)$ when $\lambda_{0}\to\infty$, whereas
$S_{2}^{\tau\tau}$ and $S_{3}^{\tau\tau}$ are smaller and
$\mathcal{O}(\lambda_{0}^{-1})$, with $\tau=\lambda,\mu,\nu$.
This shows that for the volume integral in the triaxial solution
(4.44) to converge, we must have
$\delta-s/2+1<0$.
This is equivalent to the requirement $s>2\delta+2$ we obtained in §3.4 for the limiting cases of prolate and
oblate potentials and for the large radii limit with scale-free DF.
From the convergence of the remaining two double integrals in
(4.44), we find that the boundary stresses
$S_{\mu\mu}(\lambda,-\alpha,\nu)$ and
$S_{\nu\nu}(\lambda,\mu,-\beta)$ cannot exceed
$\mathcal{O}(1)$ when $\lambda\to\infty$.
The latter is in agreement with the large $\lambda$ behaviour of
$S_{\tau\tau}(\lambda,\mu,\nu)$ that follows from the volume
integral.
The singular solutions $S_{i}^{\lambda\lambda}$ ($i=1,2,3$) are
$\mathcal{O}(1)$ when $\lambda\to\infty$, larger than $S_{i}^{\mu\mu}$ and
$S_{i}^{\nu\nu}$, which are all $\mathcal{O}(\lambda^{-1})$.
Evaluating the volume integral at large distance then gives
$S_{\tau\tau}(\lambda,\mu,\nu)=\mathcal{O}(\lambda^{\delta-s/2+1})$,
i.e., not exceeding $\mathcal{O}(1)$ if the requirement $s>2\delta+2$
is satisfied.
We obtain the same behaviour and requirement from the energy equation
(2.10).
We conclude that for the general triaxial case, as well as for the
limiting cases with a three-dimensional shape, the stress components
$T_{\tau\tau}(\lambda,\mu,\nu)$ are
$\mathcal{O}(\lambda^{\delta-s/2})$ at large distance, with the
requirement that $s>2\delta+2$ for $-\frac{1}{2}\leq\delta<0$.
We obtained the same result for the stresses in the disc case, except
that then $s>2\delta+1$.
Both the three-dimensional and two-dimensional requirements are met
for many density distributions $\rho$ and potentials $V_{S}$ of
interest.
They do not break down until the isothermal limit $\delta\to 0$, with
$s=1$ (disc) and $s=2$ (three-dimensional), is reached.
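The convergence requirement can be made explicit with the closed form
of the radial tail integral $\int_{1}^{\Lambda}\lambda_{0}^{\delta-s/2}\,{\mathrm{d}}\lambda_{0}$, which stays bounded as $\Lambda\to\infty$
precisely when $\delta-s/2+1<0$, i.e., $s>2\delta+2$. A small sketch
with illustrative values (not taken from the text):

```python
def tail(s, delta=-0.25, Lam=1e6):
    # \int_1^Lam lam0**(delta - s/2) d lam0 in closed form; bounded as
    # Lam -> infinity iff delta - s/2 + 1 < 0, i.e. s > 2 * delta + 2
    p = delta - s / 2 + 1
    return (Lam ** p - 1.0) / p

# with delta = -0.25 the threshold is s = 2 * delta + 2 = 1.5
print(tail(2.0, Lam=1e6), tail(2.0, Lam=1e12))  # saturates near 4
print(tail(1.0, Lam=1e6), tail(1.0, Lam=1e12))  # keeps growing
```

With $\delta=-0.25$, the case $s=2$ (above the threshold) saturates as
the outer limit grows, while $s=1$ (below the threshold) diverges.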
4.6.3 Alternative boundary conditions
Our solution for the stress components at each point $(\lambda,\mu,\nu)$ in a triaxial model with a Stäckel potential consists of the
weighted contribution of all sources outwards of this point.
Accordingly, we have integrated with respect to $\lambda_{0}$, $\mu_{0}$
and $\nu_{0}$, with lower limits the coordinates of the chosen point and
upper limits $\infty$, $-\alpha$ and $-\beta$, respectively. To obtain
the correct expressions at the outer boundaries, the stresses must
vanish when $\lambda\to\infty$ and they have to be specified at
$\mu=-\alpha$ and $\nu=-\beta$.
The integration limits $\lambda$, $\mu$ and $\nu$ are fixed, but for
the other three limits we can, in principle, equally well choose
$-\alpha$, $-\beta$ and $-\gamma$ respectively. The latter choices
also imply the specification of the stress components at these
boundaries instead. Each of the eight possible combinations of these
limits corresponds to one of the octants into which the physical
region $-\gamma\leq\nu_{0}\leq-\beta\leq\mu_{0}\leq-\alpha\leq\lambda_{0}<\infty$ is split by the lines through the point $(\lambda,\mu,\nu)$.
By arguments similar to those given in §3.3,
one may show that in all octants the expressions (4.5.1)
for $A$, $B$, $C$, and (4.2) for $F$, $G$, $H$, $I$ are
equivalent.
Hence, again the only differences in the singular
solutions are due to possible changes in the sign of the
step-functions, but the changes in the integration limits cancel the
sign differences between the corresponding singular solutions.
However, as in §3.3 for the two-dimensional
case, it is not difficult to show that while switching the boundary
conditions $\mu$ and $\nu$ is indeed straightforward, the switch from
$\lambda\to\infty$ to $\lambda=-\alpha$ again leads to solutions which
generally have the incorrect radial fall-off, and hence are
non-physical.
4.6.4 Triaxial solution for a general finite region
If we denote non-fixed integration limits by $\lambda_{e}$, $\mu_{e}$ and
$\nu_{e}$ respectively, we can write the triaxial solution for a
general finite region as
$$\begin{aligned}
S_{\tau\tau}(\lambda,\mu,\nu)={}&\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\int\limits_{\nu}^{\nu_{e}}{\mathrm{d}}\nu_{0}\,\sum_{i=1}^{3}g_{i}(\lambda_{0},\mu_{0},\nu_{0})\,S_{i}^{\tau\tau}(\lambda,\mu,\nu;\lambda_{0},\mu_{0},\nu_{0})\\
&-\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\int\limits_{\nu}^{\nu_{e}}{\mathrm{d}}\nu_{0}\,S_{\lambda\lambda}(\lambda_{e},\mu_{0},\nu_{0})\,S_{1}^{\tau\tau}(\lambda,\mu,\nu;\lambda_{e},\mu_{0},\nu_{0})\\
&-\int\limits_{\nu}^{\nu_{e}}{\mathrm{d}}\nu_{0}\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\,S_{\mu\mu}(\lambda_{0},\mu_{e},\nu_{0})\,S_{2}^{\tau\tau}(\lambda,\mu,\nu;\lambda_{0},\mu_{e},\nu_{0})\\
&-\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\int\limits_{\mu}^{\mu_{e}}{\mathrm{d}}\mu_{0}\,S_{\nu\nu}(\lambda_{0},\mu_{0},\nu_{e})\,S_{3}^{\tau\tau}(\lambda,\mu,\nu;\lambda_{0},\mu_{0},\nu_{e}),
\end{aligned}$$
(4.45)
with, as usual, $\tau=\lambda,\mu,\nu$. The weight functions
$g_{i}$ ($i=1,2,3$) are defined in (4.3) and the
singular solutions $S_{i}^{\tau\tau}$ are given by
(4.42). The non-fixed integration limits are
chosen in the corresponding physical ranges, i.e.,
$\lambda_{e}\in[-\alpha,\infty]$, $\mu_{e}\in[-\beta,-\alpha]$ and
$\nu_{e}\in[-\gamma,-\beta]$, but $\lambda_{e}\neq-\alpha$ (see §4.6.3).
The solution requires the specification of
the stress components on the boundary surfaces $\lambda=\lambda_{e}$,
$\mu=\mu_{e}$ and $\nu=\nu_{e}$.
On each of these surfaces the three stress components are related by two
of the three Jeans equations (4.2) and the
continuity conditions (2.17).
Hence, once one of the stress components is prescribed on three
boundary surfaces, the solution (4.45) yields all
three stresses everywhere in the triaxial Stäckel galaxy.
The stresses on the remaining three boundary surfaces then follow as the
limits of the latter solution.
4.6.5 Physical solutions
Statler (1987) and HZ92 showed that many
different DFs are consistent with a triaxial density $\rho$ in the
potential $V_{S}$. Specifically, the boundary plane $\nu=-\beta$, i.e.,
the area outside the focal hyperbola in the ($x,z$)-plane
(Fig. 2), is only reached by inner (I) and outer
(O) long-axis tube orbits. A split between the contribution of both
orbit families to the density in this plane has to be chosen, upon
which the DF for both the I and O orbits is fixed in case only thin
tubes are populated, but many other possibilities exist when the full
set of I- and O-orbits is included. For each of these DFs, the density
provided by the I- and O-tubes can then in principle be found
throughout configuration space. In the area outside the focal ellipse
in the ($y,z$)-plane ($\mu=-\alpha$), only the O-tubes and S-tubes
contribute to the density. Subtracting the known density of the
O-orbits leaves the density to be provided by the S-tubes in this
plane, from which their DF can be determined. This is again unique
when only thin orbits are used, but is non-unique otherwise. The
density that remains after subtracting the I-, O-, and S-tube
densities from $\rho$ must be provided by the box (B) orbits. Their DF
is now fixed, and can be found by solving a system of linear
equations, starting from the outside ($\lambda\to\infty$).
The total DF is the sum of the DFs of the four orbit families, and is
hence highly non-unique. All these DFs give rise to a range of
stresses $T_{\lambda\lambda},T_{\mu\mu},T_{\nu\nu}$, and our
solution of the Jeans equations must be sufficiently general to
contain them as a subset. This is indeed the case, as we are allowed
to choose the stresses on the special surfaces $\nu=-\beta$ and
$\mu=-\alpha$. However, not all choices will correspond to physical
DFs. The requirement $T_{\tau\tau}\geq 0$ is necessary but not
sufficient for the associated DF to be non-negative everywhere.
4.6.6 The general solution for thin tube orbits
For each of the three tube families, in the case of infinitesimally thin
orbits, one of the three stress components vanishes everywhere (see
§2.5.6).
We are then left with two non-zero stress components, related to the density
and potential by the three reduced Jeans equations
(4.2).
This yields subsidiary conditions on the three right-hand-side terms
$g_{1}$, $g_{2}$ and $g_{3}$.
HZ92 solved for the two non-trivial stresses and showed that they can
be found by single quadratures (with integrands involving no worse
than complete elliptic integrals), once the corresponding stress had
been chosen at $\nu=-\beta$ (for I- and O-tubes) or at $\mu=-\alpha$
(for S-tubes).
By analogy with the reasoning for the thin tube orbits in the disc
case (§3.4.4), we can show that for each of the
three tube families in the case of thin orbits the general triaxial
solution (4.45) gives the stress components
correctly.
Consider, e.g., the thin I-tubes, for which $S_{\mu\mu}\equiv 0$.
Apply the latter to (4.45), substitute for
$g_{1}$, $g_{2}$ and $g_{3}$ the subsidiary conditions that follow from the
reduced Jeans equations (4.2) and substitute for
the singular solutions the expressions
(4.42).
After several partial integrations, we use that the homogeneous
solutions $A$, $B$ and $C$ solve a homogeneous system similar to
(4.2), but now with respect to the source point
coordinates $(\lambda_{0},\mu_{0},\nu_{0})$
$$\frac{\partial B(\nu,\lambda,\mu;\nu_{0},\lambda_{0},\mu_{0})}{\partial\lambda_{0}}=\frac{A(\lambda,\mu,\nu;\lambda_{0},\mu_{0},\nu_{0})}{2(\lambda_{0}-\mu_{0})}+\frac{C(\mu,\nu,\lambda;\mu_{0},\nu_{0},\lambda_{0})}{2(\lambda_{0}-\nu_{0})},$$
(4.46)
where other relations follow by cyclic permutation of $\lambda\to\mu\to\nu\to\lambda$ and $\lambda_{0}\to\mu_{0}\to\nu_{0}\to\lambda_{0}$.
Similar relations hold for the two-dimensional homogeneous solutions $F$, $G$,
$H$ and $I$; they follow from
$$\frac{\partial G(\mu,\lambda;\mu_{0},\lambda_{0})}{\partial\lambda_{0}}=\frac{F(\lambda,\mu;\lambda_{0},\mu_{0})}{2(\lambda_{0}-\mu_{0})},\qquad
\frac{\partial H(\mu,\nu;\mu_{0},\nu_{0})}{\partial\mu_{0}}=\frac{I(\nu,\mu;\nu_{0},\mu_{0})}{2(\mu_{0}-\nu_{0})}.$$
(4.47)
It indeed turns out that for $S_{\mu\mu}(\lambda,\mu,\nu)$ all terms
cancel on the right hand side of (4.45).
The terms that are left in the case of $S_{\lambda\lambda}$ and
$S_{\nu\nu}$ are just eq. (4.2a) integrated
with respect to $\lambda$ and eq. (4.2c)
integrated with respect to $\nu$, respectively, and using that
$S_{\mu\mu}\equiv 0$.
A similar analysis shows that the general triaxial solution also yields
the correct result for the thin O- and S-tubes, for which
$S_{\lambda\lambda}\equiv 0$ in both cases.
4.6.7 Triaxial Abel models
For a galaxy with a triaxial potential of Stäckel form, the DF is a
function of the three exact isolating integrals of motion,
$f(\mathbf{x},\mathbf{v})=f(E,I_{2},I_{3})$
(see also §2.2).
The expressions for $E$, $I_{2}$ and $I_{3}$ in terms of the phase-space
coordinates ($\mathbf{x},\mathbf{v}$) can be found in e.g. Z85.
We can thus write the velocity moments of the DF as a triple integral
over $E$, $I_{2}$ and $I_{3}$.
Assuming that the DF is a function of only one variable
$$S\equiv E+wI_{2}+uI_{3},$$
(4.48)
with $w$ and $u$ constants, Dejonghe & Laurent
(1991) show that the triple integration
simplifies to a one-dimensional Abel integration over $S$.
Even though a DF of this form can only describe a self-consistent
model in the spherical case (ellipsoidal hypothesis, see, e.g.,
Eddington 1915), the Jeans equations do
not require self-consistency.
The special Abel form results in a simple analytical relation between
the three stress components (Dejonghe & Laurent
1991, their eq. [5.6])
$$T_{\mu\mu}=T_{\lambda\lambda}\,a_{\mu\nu}/a_{\lambda\nu},\qquad T_{\nu\nu}=T_{\lambda\lambda}\,a_{\mu\nu}/a_{\mu\lambda},$$
(4.49)
with
$$a_{\sigma\tau}=(\gamma-\alpha)+(\sigma+\alpha)(\tau+\alpha)\,w-(\sigma+\gamma)(\tau+\gamma)\,u,$$
(4.50)
and $\sigma,\tau=\lambda,\mu,\nu$. With these relations we find that
$$\frac{T_{\lambda\lambda}-T_{\mu\mu}}{\lambda-\mu}=\frac{T_{\lambda\lambda}}{a_{\lambda\nu}}\frac{\partial a_{\lambda\nu}}{\partial\lambda},\qquad
\frac{T_{\lambda\lambda}-T_{\nu\nu}}{\lambda-\nu}=\frac{T_{\lambda\lambda}}{a_{\lambda\mu}}\frac{\partial a_{\lambda\mu}}{\partial\lambda}.$$
(4.51)
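The identities (4.51) are a direct consequence of the ratios (4.49) and the definition (4.50), since $a_{\lambda\tau}-a_{\mu\tau}=(\lambda-\mu)\,\partial a_{\lambda\tau}/\partial\lambda$. A minimal numerical spot-check of the algebra (all parameter values below are arbitrary illustrations, not taken from a physical model):

```python
# Spot-check of the identities (4.51), which follow from (4.49) and (4.50).
# All numbers below are arbitrary test values, not from a physical model.
alpha, gamma = -2.0, -0.5
w_c, u_c = 0.05, 0.02            # the constants of the Abel ansatz (4.48)

def a(sigma, tau):
    """a_{sigma tau} as defined in (4.50)."""
    return ((gamma - alpha) + (sigma + alpha)*(tau + alpha)*w_c
            - (sigma + gamma)*(tau + gamma)*u_c)

def da_dlam(tau):
    """d a_{lambda tau} / d lambda, read off from (4.50)."""
    return (tau + alpha)*w_c - (tau + gamma)*u_c

lam, mu, nu = 5.0, 1.5, 0.7      # a point in the physical coordinate range
T_ll = 2.4                       # arbitrary value of T_{lambda lambda}
T_mm = T_ll*a(mu, nu)/a(lam, nu)     # ratios (4.49)
T_nn = T_ll*a(mu, nu)/a(mu, lam)

lhs1 = (T_ll - T_mm)/(lam - mu)
rhs1 = T_ll/a(lam, nu)*da_dlam(nu)
lhs2 = (T_ll - T_nn)/(lam - nu)
rhs2 = T_ll/a(lam, mu)*da_dlam(mu)
print(abs(lhs1 - rhs1) < 1e-12, abs(lhs2 - rhs2) < 1e-12)
```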
The first Jeans equation (2.16a)
now becomes a first-order partial
differential equation for $T_{\lambda\lambda}$.
This equation can be solved in a straightforward way and provides an
elegant and simple expression for the radial stress component
$$T_{\lambda\lambda}(\lambda,\mu,\nu)=\sqrt{\frac{a_{\lambda_{e}\mu}\,a_{\lambda_{e}\nu}}{a_{\lambda\mu}\,a_{\lambda\nu}}}\;T_{\lambda\lambda}(\lambda_{e},\mu,\nu)
+\int\limits_{\lambda}^{\lambda_{e}}{\mathrm{d}}\lambda_{0}\,\sqrt{\frac{a_{\lambda_{0}\mu}\,a_{\lambda_{0}\nu}}{a_{\lambda\mu}\,a_{\lambda\nu}}}\;\rho\,\frac{\partial V_{S}}{\partial\lambda_{0}}.$$
(4.52)
The expressions for $T_{\mu\mu}$ and $T_{\nu\nu}$ follow by
application of the ratios (4.49).
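Differentiating (4.52) with respect to $\lambda$ and using the identities (4.51) shows that it satisfies a first-order equation of the form $\partial T_{\lambda\lambda}/\partial\lambda+(T_{\lambda\lambda}-T_{\mu\mu})/2(\lambda-\mu)+(T_{\lambda\lambda}-T_{\nu\nu})/2(\lambda-\nu)=-\rho\,\partial V_{S}/\partial\lambda$, as expected for the first Jeans equation. The sketch below checks this numerically; the constants and the source term $\rho\,\partial V_{S}/\partial\lambda$ are hypothetical placeholders, not a physical model:

```python
import math

# Arbitrary placeholder constants (not a physical model).
alpha, gamma = -2.0, -0.5
w_c, u_c = 0.05, 0.02
mu, nu = 1.5, 0.7                  # held fixed; we vary lambda only
lam_e, T_e = 6.0, 1.0              # boundary value T_ll at lambda = lam_e

def a(s, t):
    """a_{sigma tau} from (4.50)."""
    return ((gamma - alpha) + (s + alpha)*(t + alpha)*w_c
            - (s + gamma)*(t + gamma)*u_c)

def source(l0):
    """Placeholder for rho * dV_S/dlambda_0 (hypothetical smooth function)."""
    return math.exp(-0.3*l0)

def simpson(f, lo, hi, n=2000):
    dx = (hi - lo)/n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2)*f(lo + i*dx)
    return s*dx/3.0

def T_ll(lam):
    """Radial stress from (4.52)."""
    pref = math.sqrt(a(lam_e, mu)*a(lam_e, nu)/(a(lam, mu)*a(lam, nu)))
    integral = simpson(lambda l0: math.sqrt(a(l0, mu)*a(l0, nu)
                                            /(a(lam, mu)*a(lam, nu)))
                       * source(l0), lam, lam_e)
    return pref*T_e + integral

lam = 3.0
T = T_ll(lam)
T_mm = T*a(mu, nu)/a(lam, nu)      # ratios (4.49)
T_nn = T*a(mu, nu)/a(mu, lam)
eps = 1e-4
dT = (T_ll(lam + eps) - T_ll(lam - eps))/(2*eps)
residual = (dT + (T - T_mm)/(2*(lam - mu))
            + (T - T_nn)/(2*(lam - nu)) + source(lam))
print(abs(residual) < 1e-6)
```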
If we let the boundary value $\lambda_{e}\to\infty$, the first term on
the right-hand side of (4.52) vanishes.
The density $\rho$, which does not need to be the density $\rho_{S}$
which generates $V_{S}$, is of the Abel form as given in eq. (3.11) of
Dejonghe & Laurent (1991).
If we substitute this form in (4.52), we obtain, after
changing the order of integration and evaluating the integral with
respect to $\lambda$, again a single Abel integral that is equivalent
to the expression for $T_{\lambda\lambda}$ that follows from
eq. (3.10) of Dejonghe & Laurent (1991).
Using the relations (4.49) and the
corresponding subsidiary conditions for $g_{1}$, $g_{2}$ and $g_{3}$, it can
be shown that the general triaxial solution
(4.45) gives the stress components correctly.
4.6.8 Numerical test
We have numerically implemented the general triaxial solution
(4.45), and tested it on a polytrope dynamical
model, for which the DF depends only on energy $E$ as $f(E)\propto E^{n-3/2}$, with $n>\frac{1}{2}$.
Integration of this DF over velocity $v$, with $E=-V-\frac{1}{2}v^{2}$
for a potential $V\leq 0$, shows that the density $\rho\propto(-V)^{n}$ (e.g., Binney & Tremaine 1987,
p. 223).
This density is not consistent with the Stäckel potentials we use but,
as noted in §2.3,
the Jeans equations do not require self-consistency.
The first velocity moments and the mixed second moments
of the DF are all zero.
The remaining three second moments all equal $-V/(n+1)$, so that the
isotropic stress of the polytrope model is
$T_{\mathrm{pol}}\propto(-V)^{n+1}$.
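These polytrope relations follow from elementary velocity integrations, and are easy to check by direct quadrature. A minimal sketch (standard-library Python only; the normalisation of the DF is irrelevant and set to unity):

```python
import math

def moments(V, n, N=4000):
    """rho and <v_x^2> for f(E) ~ E^(n-3/2), with E = -V - v^2/2 and V < 0."""
    vmax = math.sqrt(-2.0*V)
    dv = vmax/N
    rho = num = 0.0
    for i in range(N):
        v = (i + 0.5)*dv                 # midpoint rule
        f = (-V - 0.5*v*v)**(n - 1.5)
        rho += 4.0*math.pi*f*v*v*dv      # density integral
        num += 4.0*math.pi*f*v**4*dv     # numerator of <v^2>
    return rho, num/(3.0*rho)            # isotropy: <v_x^2> = <v^2>/3

n = 2.0
rho1, s1 = moments(-1.0, n)
rho2, s2 = moments(-2.0, n)
print(abs(rho2/rho1 - 2.0**n) < 1e-3)    # rho proportional to (-V)^n
print(abs(s1 - 1.0/(n + 1.0)) < 1e-3)    # <v_x^2> = -V/(n+1) at V = -1
```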
We take the potential $V$ to be of Stäckel form $V_{S}$
(2.3), and consider two different choices
for $G(\tau)$ in (2.4).
The first is the simple form
$G(\tau)=-GM/(\sqrt{\tau}+\sqrt{-\alpha})$ that is related to
Hénon’s isochrone
(de Zeeuw & Pfenniger 1988).
The second is the form for the perfect ellipsoid, for which
$G(\tau)$ is given in Z85 in terms of complete elliptic integrals.
The partial derivatives of $V_{S}(\lambda,\mu,\nu)$, that appear in the
weights $g_{1}$, $g_{2}$ and $g_{3}$, can be obtained in terms of $G(\tau)$
and its derivative in a straightforward way by using the expressions
derived by de Zeeuw et al. (1986).
The calculation of the stresses is done in the following way.
We choose the polytrope index $n$, and fix the triaxial Stäckel
model by choosing $\alpha$, $\beta$ and $\gamma$.
This gives $T_{\mathrm{pol}}$.
Next, we obtain successively the stresses $T_{\lambda\lambda}$,
$T_{\mu\mu}$ and $T_{\nu\nu}$ from the general triaxial solution
(4.45) by numerical integration, where the
relation between $S_{\tau\tau}$ and $T_{\tau\tau}$
is given by (4.1).
We first fix the upper integration limits $\lambda_{e}$, $\mu_{e}$ and
$\nu_{e}$.
All integrands contain the singular solutions
(4.42), that involve the homogeneous
solutions $A$, $B$, $C$, $F$, $G$, $H$ and $I$, for which we
numerically evaluate the single quadratures (eq. [4.5.1],
[4.38] and [4.5.3]).
The weights $g_{1}$, $g_{2}$ and $g_{3}$ (4.3) involve
the polytrope density and Stäckel potential.
This leaves the boundary stresses in the integrands,
for which we use the polytrope stress $T_{\mathrm{pol}}$ that follows from the
choice of the DF, evaluated at the corresponding boundary surfaces.
We then evaluate the general solution away from these boundaries, and
compare it with the known result.
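The core of this procedure is a set of nested quadratures for the volume term of (4.45). The functions $g_{i}$ and $S_{i}^{\tau\tau}$ below are hypothetical placeholders (the real ones involve the homogeneous solutions of the preceding sections); they are chosen so that the triple integral has a known value, which serves only to check the structure of the quadrature loop:

```python
# Structure of the volume term of (4.45): a triple quadrature of
# sum_i g_i * S_i over the box [lam, lam_e] x [mu, mu_e] x [nu, nu_e].
# g and S below are hypothetical placeholders, chosen so that the exact
# value of the integral is known; they are NOT the physical integrands.

def g(i, l0, m0, n0):
    return (l0, m0, n0)[i - 1]

def S(i, l0, m0, n0):
    return 1.0

def volume_term(lam, mu, nu, lam_e, mu_e, nu_e, N=40):
    dl = (lam_e - lam)/N
    dm = (mu_e - mu)/N
    dn = (nu_e - nu)/N
    total = 0.0
    for i in range(N):                      # midpoint rule in lambda_0
        l0 = lam + (i + 0.5)*dl
        for j in range(N):                  # ... in mu_0
            m0 = mu + (j + 0.5)*dm
            for k in range(N):              # ... in nu_0
                n0 = nu + (k + 0.5)*dn
                total += sum(g(q, l0, m0, n0)*S(q, l0, m0, n0)
                             for q in (1, 2, 3))*dl*dm*dn
    return total

# With these placeholders the integrand is l0 + m0 + n0, so the integral
# equals the box volume times the sum of the midpoint coordinates.
val = volume_term(2.0, 1.0, 0.5, 4.0, 2.0, 1.0)
exact = (2.0*1.0*0.5)*(3.0 + 1.5 + 0.75)
print(abs(val - exact) < 1e-9)
```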
We carried out the numerical calculations for different choices of
$n$, $\alpha$, $\beta$ and $\gamma$ and at different field points
$(\lambda,\mu,\nu)$.
In each case the resulting stresses $T_{\lambda\lambda}$, $T_{\mu\mu}$
and $T_{\nu\nu}$, calculated independently, agreed with each other to
high precision and were equal to $T_{\mathrm{pol}}$.
This agreement provides a check on the accuracy of both our formulae
and their numerical implementation, and demonstrates the feasibility
of using our methods for computing triaxial stress distributions.
That will be the subject of a follow-up paper.
5 Discussion and conclusions
Eddington (1915) showed that the velocity
ellipsoid in a triaxial galaxy with a separable potential of Stäckel
form is everywhere aligned with the confocal ellipsoidal coordinate
system in which the equations of motion separate. Lynden-Bell
(1960) derived the three Jeans equations which
relate the three principal stresses to the potential and the
density. They constitute a highly-symmetric set of first-order partial
differential equations in the three confocal coordinates. Solutions
were found for the various two-dimensional limiting cases, but with
methods that do not carry over to the general case, which, as a
consequence, remained unsolved.
In this paper, we have introduced an alternative solution method,
using superposition of singular solutions.
We have shown that this approach not only provides an elegant
alternative to the standard Riemann–Green method for the
two-dimensional limits, but also, unlike the standard methods, can be
generalised to solve the three-dimensional system.
The resulting solutions contain complete (hyper)elliptic integrals
which can be evaluated in a straightforward way.
In the derivation, we have recovered (and in some cases corrected) all
previously known solutions for the various two-dimensional limiting
cases with more symmetry, as well as the two special solutions known
for the general case, and have also clarified the restrictions on the
boundary values.
We have numerically tested our solution on a polytrope model.
The general Jeans solution is not unique, but requires specification
of principal stresses at certain boundary surfaces, given a separable
triaxial potential, and a triaxial density distribution (not
necessarily the one that generates the potential).
We have shown that these boundary surfaces can be taken to be the
plane containing the long and the short axis of the galaxy, and, more
specifically, the part that is crossed by all three families of tube
orbits and the box orbits.
This is not unexpected, as HZ92 demonstrated that the phase-space
distribution functions of these triaxial systems are defined by
specifying the population of each of the three tube orbit families in
a principal plane.
Once the tube orbit populations have been defined in this way, the
population of the box orbits is fixed, as it must reproduce the
density not contributed by the tubes, and there is only one way to do
this.
While HZ92 chose to define the population of inner and outer long axis
tubes in a part of the $(x,z)$-plane, and the short axis tubes in a
part of the $(y,z)$-plane, it is in fact also possible to specify all
three of them in the appropriate parts of the $(x,z)$-plane, just as
is needed for the stresses.
The set of all Jeans solutions (4.45) contains
all the stresses that are associated with the physical distribution
functions $f\geq 0$, but, as in the case of spherical and
axisymmetric models, undoubtedly also contains solutions which are
unphysical, e.g., those associated with distribution functions that
are negative in some parts of phase space.
The many examples of the use of spherical and axisymmetric Jeans
models in the literature suggest nevertheless that the Jeans solutions
can be of significant use.
While triaxial models with a separable potential do not provide an
adequate description of the nuclei of galaxies with cusped luminosity
profiles and a massive central black hole, they do catch much of the
orbital structure at larger radii, and in some cases even provide a
good approximation of the galaxy potential.
The solutions for the mean streaming motions, i.e., the first velocity
moments of the distribution function, are quite helpful in
understanding the variety of observed velocity fields in giant
elliptical galaxies and constraining their intrinsic shapes
(e.g., Statler 1991,
1994b;
Arnold et al. 1994;
Statler, Dejonghe & Smecker-Hane 1999;
Statler 2001).
We expect that the projected velocity dispersion fields that can be
derived from our Jeans solutions will be similarly useful, and, in
particular, that they can be used to establish which combinations of
viewing directions and intrinsic axis ratios are firmly ruled out by
the observations.
As some of the projected properties of the Stäckel models can be
evaluated by analytic means (Franx 1988),
it is possible that this holds even for the intrinsic moments
considered here.
Work along these lines is in progress.
The solutions presented here constitute a significant step towards
completing the analytic description of the properties of the separable
triaxial models, whose history by now spans more than a century. It is
remarkable that the entire Jeans solution can be written down by means
of classical methods. This suggests that similar solutions can be
found for the higher dimensional analogues of
(2.16),
most likely involving hyperelliptic integrals of higher order.
It is also likely that the higher-order velocity moments for the
separable triaxial models can be found by similar analytic means, but
the effort required may become prohibitive.
acknowledgments
This paper owes much to Donald Lynden-Bell's enthusiasm and inspiration.
This work began during a sabbatical visit by CH to Leiden
Observatory in 1992, supported in part by a Bezoekersbeurs from NWO,
and also by NSF through grant DMS 9001404.
CH is currently supported by NSF through grant DMS 0104751.
This research was supported in part by the Netherlands Research School
for Astronomy NOVA.
The authors gratefully acknowledge stimulating discussions with Wyn
Evans during the initial phases of this work.
References
[1]
Abramowitz M., Stegun I. A., 1965, Handbook of Mathematical Functions.
New York, Dover Publications
[2]
Arnold R., 1995, MNRAS, 276, 293
[3]
Arnold R., de Zeeuw P. T., Hunter C., 1994, MNRAS, 271, 924
[4]
Bacon R., Simien F., Monnet G., 1983, A&A, 128, 405
[5]
Bak J., Statler T. S., 2000, AJ, 120, 110
[6]
Binney J., 1976, MNRAS, 177, 19
[7]
Binney J., 1978, MNRAS, 183, 501
[8]
Binney J., 1982, MNRAS, 201, 15
[9]
Binney J., Tremaine S., 1987, Galactic Dynamics.
Princeton, NJ, Princeton University Press
[10]
Bishop J. L., 1986, ApJ, 305, 14
[11]
Bishop J. L., 1987, ApJ, 322, 618
[12]
Byrd P. F., Friedman M. D., 1971, Handbook of Elliptic
Integrals for Engineers and Scientists.
Springer-Verlag, Berlin Heidelberg New York, 2nd revised ed.
[13]
Carlson B. C., 1977, Special Functions of Applied Mathematics.
Academic Press, New York San Francisco London
[14]
Chandrasekhar S., 1939, ApJ, 90, 1
[15]
Chandrasekhar S., 1940, ApJ, 92, 441
[16]
Conway J., 1973, Functions of one complex variable.
New York, Springer-Verlag
[17]
Copson E. T., 1975, Partial Differential Equations.
Cambridge, Cambridge University Press
[18]
de Zeeuw P. T., 1985a, MNRAS, 216, 273 [Z85]
[19]
de Zeeuw P. T., 1985b, MNRAS, 216, 599
[20]
de Zeeuw P. T., Hunter C., 1990, ApJ, 356, 365
[21]
de Zeeuw P. T., Hunter C., Schwarzschild M., 1987, ApJ, 317, 607
[22]
de Zeeuw P. T., Peletier R., Franx M., 1986, MNRAS, 221, 1001
[23]
de Zeeuw P. T., Pfenniger D., 1988, MNRAS, 235, 949
[24]
Dejonghe H., de Zeeuw P. T., 1988, ApJ, 333, 90
[25]
Dejonghe H., Laurent D., 1991, MNRAS, 252, 606
[26]
Eddington A. S., 1915, MNRAS, 76, 37
[27]
Evans N., 1990, Intern. J. Computer Math., 34, 105
[28]
Evans N. W., Carollo C. M., de Zeeuw P. T., 2000, MNRAS,
318, 1131
[29]
Evans N. W., de Zeeuw P. T., 1992, MNRAS, 257, 152
[30]
Evans N. W., Lynden-Bell D., 1989, MNRAS, 236, 801 [EL89]
[31]
Evans N. W., Lynden-Bell D., 1991, MNRAS, 251, 213
[32]
Franx M., 1988, MNRAS, 231, 285
[33]
Gebhardt K., Richstone D., Tremaine S., Lauer T. D., Bender R.,
Bower G., Dressler A., Faber S., Filippenko A. V., Green R.,
Grillmair C., Ho L. C., Kormendy J., Magorrian J., Pinkney
J., 2003, ApJ, in press
[34]
Gerhard O. E., 1993, MNRAS, 265, 213
[35]
Goldstein H., 1980, Classical Mechanics.
London, Addison-Wesley
[36]
Hunter C., de Zeeuw P. T., 1992, ApJ, 389, 79 [HZ92]
[37]
Hunter C., de Zeeuw P. T., Park C., Schwarzschild M., 1990, ApJ,
363, 367
[38]
Jeans J. H., 1915, MNRAS, 76, 70
[39]
Kuzmin G., 1973, in Proc. All-Union Conf., Dynamics of Galaxies
and Clusters, ed. T.B. Omarov (Alma Ata: Akad. Nauk Kazakhskoj
SSR), 71 (English transl. in IAU Symp. 127, Structure and Dynamics
of Elliptical Galaxies, ed. P.T. de Zeeuw [Dordrecht: Reidel], 553)
[40]
Lynden-Bell D., 1960, PhD thesis, Cambridge University
[41]
Lynden-Bell D., 1962a, MNRAS, 124, 1
[42]
Lynden-Bell D., 1962b, MNRAS, 124, 95
[43]
Mathieu, A., Dejonghe, H., 1999, MNRAS, 303, 455
[44]
Oberhettinger F., Badii L., 1973, Tables of Laplace Transforms.
New York, Springer-Verlag
[45]
Press W. H., Teukolsky S. A., Vettering W. T., Flannery B. P.,
1992, Numerical Recipes.
Cambridge Univ. Press, Cambridge
[46]
Qian E. E., de Zeeuw P. T., van der Marel R. P., Hunter C., 1995,
MNRAS, 274, 602
[47]
Rix H., de Zeeuw P. T., Cretton N., van der Marel R. P.,
Carollo C. M., 1997, ApJ, 488, 702
[48]
Robijn F. H. A., de Zeeuw P. T., 1996, MNRAS, 279, 673
[49]
Schwarzschild M., 1979, ApJ, 232, 236
[50]
Schwarzschild M., 1993, ApJ, 409, 563
[51]
Stäckel P., 1890, Math. Ann., 35, 91
[52]
Stäckel P., 1891, Über die Integration der Hamilton-Jacobischen
Differential gleichung mittelst Separation der Variabeln.
Habilitationsschrift, Halle
[53]
Statler T. S., 1987, ApJ, 321, 113
[54]
Statler T. S., 1991, AJ, 102, 882
[55]
Statler T. S., 1994a, ApJ, 425, 458
[56]
Statler T. S., 1994b, ApJ, 425, 500
[57]
Statler T. S., 2001, AJ, 121, 244
[58]
Statler T. S., Fry A. M., 1994, ApJ, 425, 481
[59]
Statler, T. S., Dejonghe, H., Smecker-Hane, T., 1999, AJ, 117, 126
[60]
Strauss W. A., 1992, Partial Differential Equations.
New York, John Wiley
[61]
van der Marel R. P., Cretton N., de Zeeuw P. T., Rix H., 1998,
ApJ, 493, 613
[62]
Verolme E. K., Cappellari M., Copin Y., van der Marel R. P.,
Bacon R., Bureau M., Davies R., Miller B., de Zeeuw P. T.,
2002, MNRAS, 335, 517
[63]
Weinacht J., 1924, Math. Ann., 91, 279
[64]
Zhao H. S., 1996, MNRAS, 283, 149
Appendix A Solving for the difference in stress
We compare our solution for the stress components $T_{\lambda\lambda}$
and $T_{\mu\mu}$ with the result derived by EL89. They combine the two
Jeans equations (2.25) into the single equation
$$\frac{\partial^{2}\Delta}{\partial\lambda\,\partial\mu}+\biggl(\frac{\partial}{\partial\mu}-\frac{\partial}{\partial\lambda}\biggr)\frac{\Delta}{2(\lambda-\mu)}=\frac{\partial\rho}{\partial\lambda}\frac{\partial V_{S}}{\partial\mu}-\frac{\partial\rho}{\partial\mu}\frac{\partial V_{S}}{\partial\lambda},$$
(A.1)
for the difference $\Delta\equiv T_{\lambda\lambda}-T_{\mu\mu}$ of
the two stress components. Eq. (A.1) is of the form
$$\mathcal{L}^{\star}\Delta=\frac{\partial\rho}{\partial\lambda}\frac{\partial V_{S}}{\partial\mu}-\frac{\partial\rho}{\partial\mu}\frac{\partial V_{S}}{\partial\lambda},$$
(A.2)
where $\mathcal{L}^{\star}$ is the adjoint operator defined in
eq. (3.6). As in §3.1,
eq. (A.1) can be solved via a Riemann–Green function.
A.1 The Green’s function
In order to obtain the Riemann–Green function $\mathcal{G}^{\star}$ for
the adjoint operator $\mathcal{L}^{\star}$, we use the reciprocity
relation (Copson 1975, §5.2) to relate it to the Riemann–Green function
$\mathcal{G}$, derived in §3.1.2
for $\mathcal{L}$. With $c_{1}=c_{2}=-\frac{1}{2}$ in this case, we get
$$\mathcal{G}^{\star}(\lambda,\mu;\lambda_{0},\mu_{0})=\mathcal{G}(\lambda_{0},\mu_{0};\lambda,\mu)=\left(\frac{\lambda_{0}-\mu_{0}}{\lambda-\mu}\right)^{\frac{1}{2}}{}_{2}F_{1}(-\tfrac{1}{2},\tfrac{3}{2};1;w),$$
(A.3)
where $w$ is as defined in (3.16). EL89 seek to solve
eq. (A.2) using a Green’s function $G$ which satisfies
the equation
$$\mathcal{L}^{\star}G=\delta(\lambda_{0}\!-\!\lambda)\delta(\mu_{0}\!-\!\mu).$$
(A.4)
That they impose
the same boundary conditions that we do is evident from their remark
that, if $\mathcal{L}^{\star}$ were the simpler operator
$\partial^{2}/\partial\lambda\partial\mu$, $G$ would be
$\mathcal{H}(\lambda_{0}\!-\!\lambda)\mathcal{H}(\mu_{0}\!-\!\mu)$. This
is the same result as would be obtained by the singular solution
method of §3.2, which, as we showed there,
is equivalent to the Riemann–Green analysis. Hence their $G$ should
match the $\mathcal{G}^{\star}$ of eq. (A.3). We
show in §A.3 that it does not.
A.2 Laplace transform
We use a Laplace transform to solve (A.4) because the
required solution is that to an initial value problem to which Laplace
transforms are naturally suited. The PDE is hyperbolic with the lines
$\lambda=\mathrm{const}$ and $\mu=\mathrm{const}$ as
characteristics, and its solution is non-zero only in the rectangle
bounded by the characteristics $\lambda=\lambda_{0}$ and $\mu=\mu_{0}$,
and the physical boundaries $\lambda=-\alpha$ and $\mu=-\beta$
(Fig. 7). We introduce new
coordinates
$$\xi=(\lambda\!-\!\mu)/\sqrt{2},\qquad\eta=-(\lambda\!+\!\mu)/\sqrt{2},$$
(A.5)
so that eq. (A.4) simplifies to
$$\mathcal{L}^{\star}G\equiv\frac{\partial^{2}G}{\partial\eta^{2}}-\frac{\partial^{2}G}{\partial\xi^{2}}-\frac{\partial}{\partial\xi}\left(\frac{G}{\xi}\right)=2\,\delta(\xi-\xi_{0})\,\delta(\eta-\eta_{0}),$$
(A.6)
where $\xi_{0}=(\lambda_{0}\!-\!\mu_{0})/\sqrt{2}$ and $\eta_{0}=-(\lambda_{0}\!+\!\mu_{0})/\sqrt{2}$ are the coordinates of the source point.
The factor of 2 arises from the transformation of the derivatives; the
product of the delta functions in (A.4) transforms into
that of (A.6) because the Jacobian of the
transformation (A.5) is unity. The reason for our
choice of $\eta$ is that $G\equiv 0$ for $\eta<\eta_{0}$, that is
$\lambda+\mu>\lambda_{0}+\mu_{0}$. Hence $\eta$ is a time-like variable
which increases in the direction in which the non-zero part of the
solution propagates. We take a Laplace transform in $\tilde{\eta}=\eta-\eta_{0}$, and transform $G(\xi,\eta)$ to
$$\hat{G}(\xi,p)=\int\limits_{0}^{\infty}e^{-p\tilde{\eta}}\,G(\xi,\tilde{\eta})\,{\mathrm{d}}\tilde{\eta}.$$
(A.7)
There are two equally valid ways of taking proper account of the
$\delta(\eta-\eta_{0})$ in taking the Laplace transform of equation
(A.6). One can either treat it as
$\delta(\tilde{\eta}-0+)$, in which case it has a Laplace transform of
1, or one can treat it as $\delta(\tilde{\eta}-0-)$, in which case it
contributes a unit initial value to $\partial G/\partial\eta$ which
must be included in the Laplace transform of $\partial^{2}G/\partial\eta^{2}$ (Strauss 1992). Either way leads to a transformed
equation for $\hat{G}(\xi,p)$ of
$$p^{2}\hat{G}-\frac{{\mathrm{d}}^{2}\hat{G}}{{\mathrm{d}}\xi^{2}}-\frac{{\mathrm{d}}}{{\mathrm{d}}\xi}\left(\frac{\hat{G}}{\xi}\right)=2\,\delta(\xi-\xi_{0}).$$
(A.8)
The homogeneous part of eq. (A.8) is the modified
Bessel equation of order one in the variable $p\xi$. Two
independent solutions are the modified Bessel functions $I_{1}$ and
$K_{1}$. The former vanishes at $\xi=0$ and the latter decays
exponentially as $\xi\to\infty$. We need $\hat{G}$ to decay
exponentially as $\xi\to\infty$ because $G(\xi,\eta)$ vanishes
for $\tilde{\eta}<\xi-\xi_{0}$, and hence its Laplace transform
$\hat{G}$ is exponentially small for large $\xi$. We also need $\hat{G}$ to vanish at $\xi=0$ where $\lambda=\mu$. The focus at which
$\lambda=\mu=-\alpha$ is the only physically relevant point at
which $\xi=0$. It lies on a boundary of the solution region in the
$\lambda_{0}\to-\alpha$ limit (Fig. 7).
The focus is a point at which the difference $\Delta$ between the
stresses vanishes, and hence $G$ and $\hat{G}$ should vanish there.
The delta function in eq. (A.8) requires that
$\hat{G}$ be continuous at $\xi=\xi_{0}$ and that ${\mathrm{d}}\hat{G}/{\mathrm{d}}\xi$
decrease discontinuously by 2 as $\xi$ increases through
$\xi=\xi_{0}$. Combining all these requirements, we obtain the result
$$\hat{G}(\xi,p)=\begin{cases}2\xi_{0}\,K_{1}(p\xi)\,I_{1}(p\xi_{0}),&\xi_{0}\leq\xi<\infty,\\ 2\xi_{0}\,K_{1}(p\xi_{0})\,I_{1}(p\xi),&0\leq\xi\leq\xi_{0}.\end{cases}$$
(A.9)
We use the Wronskian relation $I_{1}(x)K_{1}^{\prime}(x)-I_{1}^{\prime}(x)K_{1}(x)=-1/x$
(eq. [9.6.15] of Abramowitz & Stegun 1965)
in calculating the prefactor of the products
of modified Bessel functions. The inversion of this transform is
obtained from formula (13.39) of Oberhettinger & Badii
(1973) which gives
$$G(\xi,\tilde{\eta})=\begin{cases}\sqrt{\dfrac{\xi_{0}}{\xi}}\;{}_{2}F_{1}(-\tfrac{1}{2},\tfrac{3}{2};1;w),&|\xi_{0}-\xi|\leq\tilde{\eta}\leq\xi_{0}+\xi,\\[4pt] 0,&-\infty<\tilde{\eta}<|\xi_{0}-\xi|,\end{cases}$$
(A.10)
where (cf. eq. [3.16])
$$w\equiv\frac{\tilde{\eta}^{2}-(\xi_{0}-\xi)^{2}}{4\xi_{0}\xi}=\frac{(\lambda_{0}-\lambda)(\mu_{0}-\mu)}{(\lambda_{0}-\mu_{0})(\lambda-\mu)}.$$
(A.11)
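The equality of the two expressions for $w$ in (A.11) is an algebraic consequence of the coordinate transformation (A.5); a quick numerical check with arbitrary sample points:

```python
import math

# Check that the two forms of w in (A.11) agree, using the
# transformation (A.5). Sample values are arbitrary (lam > mu, lam0 > mu0).
s2 = math.sqrt(2.0)
for (lam, mu, lam0, mu0) in [(3.0, 1.0, 5.0, 2.0), (4.5, -0.5, 7.0, 0.5)]:
    xi,  eta  = (lam - mu)/s2,  -(lam + mu)/s2
    xi0, eta0 = (lam0 - mu0)/s2, -(lam0 + mu0)/s2
    eta_t = eta - eta0
    w_xy = (eta_t**2 - (xi0 - xi)**2)/(4*xi0*xi)
    w_lm = (lam0 - lam)*(mu0 - mu)/((lam0 - mu0)*(lam - mu))
    assert abs(w_xy - w_lm) < 1e-12
print("ok")
```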
The second case of eq. (A.10) shows that $G$ does
indeed vanish outside the shaded sector $\lambda<\lambda_{0}$,
$\mu<\mu_{0}$. The first case shows that it agrees with the adjoint
Riemann–Green function $\mathcal{G}^{\star}$ of
(A.3) which was derived from the analysis of
§3.1.
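The Wronskian relation that fixes the prefactor in (A.9) can be checked with standard-library Python, using the power series for $I_{1}$ and the integral representation $K_{1}(x)=\int_{0}^{\infty}e^{-x\cosh t}\cosh t\,{\mathrm{d}}t$; the same relation guarantees the required jump of $-2$ in ${\mathrm{d}}\hat{G}/{\mathrm{d}}\xi$ across $\xi=\xi_{0}$. A sketch:

```python
import math

# Check the Wronskian I1(x)K1'(x) - I1'(x)K1(x) = -1/x used to fix the
# prefactor in (A.9). I1 via its power series, K1 via its integral
# representation; derivatives by central differences.

def I1(x):
    term = x/2.0
    total = term
    for k in range(1, 30):
        term *= (x*x/4.0)/(k*(k + 1))
        total += term
    return total

def K1(x, n=4000, tmax=20.0):
    dt = tmax/n
    s = 0.0
    for i in range(n):
        t = (i + 0.5)*dt                      # midpoint rule
        s += math.exp(-x*math.cosh(t))*math.cosh(t)*dt
    return s

def wronskian(x, h=1e-5):
    dK = (K1(x + h) - K1(x - h))/(2*h)
    dI = (I1(x + h) - I1(x - h))/(2*h)
    return I1(x)*dK - dI*K1(x)

for x in (0.5, 1.0, 2.0):
    assert abs(wronskian(x) + 1.0/x) < 1e-4   # equals -1/x
print("ok")
```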
A.3 Comparison with EL89
EL89 use variables $s=-\eta$ and $t=\xi$, whereas we avoided
using $t$ for the non-time-like variable. They consider the Fourier
transform
$$\bar{G}(\xi,k)=\int\limits_{-\infty}^{\infty}e^{-ik\tilde{\eta}}\,G(\xi,\tilde{\eta})\,{\mathrm{d}}\tilde{\eta}.$$
(A.12)
Because $G\equiv 0$ for $\tilde{\eta}\leq 0$, we can rewrite our
Laplace transform as their Fourier transform. Setting $p=-ik$ gives
$\bar{G}(\xi,k)=i\hat{G}(\xi,-ik)$, and using the formulas
$I_{1}(x)=-J_{1}(ix)$ and $K_{1}(x)=\frac{1}{2}\pi iH_{1}^{(1)}(ix)$,
eq. (A.9) yields
$$\bar{G}(\xi,k)=\begin{cases}\pi i\xi_{0}\,H_{1}^{(1)}(k\xi)\,J_{1}(k\xi_{0}),&\xi_{0}\leq\xi<\infty,\\ \pi i\xi_{0}\,H_{1}^{(1)}(k\xi_{0})\,J_{1}(k\xi),&0\leq\xi\leq\xi_{0}.\end{cases}$$
(A.13)
This formula differs from the solution for the Fourier transform given
in eq. (70) of EL89. The major difference is that their solution has
Hankel functions of the second kind $H_{1}^{(2)}(kt)=H_{1}^{(2)}(k\xi)$
where ours has $J_{1}$ Bessel functions. Consequently their solution
has an unphysical singularity at $t=\xi=0$, and so, in our opinion, is
incorrect. Our solution, which was devised to avoid that singularity,
gives a result which matches that derived by Riemann’s method in
§3.1.
A.4 The solution for $\Delta$
The solution for $\Delta$ using the adjoint Riemann–Green function is
given by eq. (3.14) with $\mathcal{G}$ replaced by
$\mathcal{G}^{\star}$ and the sign of $c_{2}$ changed for the adjoint case
(Copson 1975).
The hypergeometric function of eq. (A.3) for
$\mathcal{G}^{\star}$ is expressible in terms of complete elliptic
integrals as
$${}_{2}F_{1}\!\left(-\tfrac{1}{2},\tfrac{3}{2};1;w\right)=\frac{2}{\pi}\left[E(w)+2wE^{\prime}(w)\right].$$
(A.14)
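Interpreting $E^{\prime}(w)$ as ${\mathrm{d}}E/{\mathrm{d}}w$ and using ${\mathrm{d}}E/{\mathrm{d}}w=(E(w)-K(w))/(2w)$, the right-hand side of (A.14) reduces to $(2/\pi)[2E(w)-K(w)]$, which can be checked numerically (a sketch using SciPy's conventions, in which the elliptic integrals take the parameter $m=w$):

```python
import numpy as np
from scipy.special import hyp2f1, ellipe, ellipk

w = np.linspace(0.05, 0.95, 19)
# dE/dw = (E(w) - K(w)) / (2w); SciPy's ellipe/ellipk take the parameter m = w
dE = (ellipe(w) - ellipk(w)) / (2.0 * w)
lhs = hyp2f1(-0.5, 1.5, 1.0, w)
rhs = (2.0 / np.pi) * (ellipe(w) + 2.0 * w * dE)  # = (2/pi)[2E(w) - K(w)]
assert np.allclose(lhs, rhs)
```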
Hence, the solution for the difference $\Delta$ between the two
principal stresses is given by
$$\Delta(\lambda,\mu)=\frac{2}{\pi(\lambda-\mu)^{\frac{1}{2}}}\Biggl\{\int_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\int_{\mu}^{-\alpha}{\mathrm{d}}\mu_{0}\,\Bigl[E(w)+2wE^{\prime}(w)\Bigr](\lambda_{0}-\mu_{0})^{\frac{1}{2}}\left(\frac{\partial\rho}{\partial\lambda_{0}}\frac{\partial V_{S}}{\partial\mu_{0}}-\frac{\partial\rho}{\partial\mu_{0}}\frac{\partial V_{S}}{\partial\lambda_{0}}\right)-\int_{\lambda}^{\infty}{\mathrm{d}}\lambda_{0}\,\Bigl[E(w)+2wE^{\prime}(w)\Bigr]_{\mu_{0}=-\alpha}\frac{{\mathrm{d}}}{{\mathrm{d}}\lambda_{0}}\Bigl[(\lambda_{0}+\alpha)^{\frac{1}{2}}\Delta(\lambda_{0},-\alpha)\Bigr]\Biggr\}.$$
(A.15)
The determined reader can verify, after some manipulation, that this
expression is equivalent to the difference between the separate
solutions (3.21a) and
(3.21b), derived in §3.1.
Note added in manuscript
We agree with the amendment to our method of solution for $\Delta$
given in Appendix A.4.
Our Green’s function, while solving the differential equation, had the
wrong boundary conditions.
N.W. Evans & D. Lynden-Bell
High-Resolution Angle Tracking for
Mobile Wideband Millimeter-Wave Systems with Antenna Array Calibration
Dalin Zhu,
Junil Choi,
Qian Cheng,
Weimin Xiao,
and Robert W. Heath Jr.
Dalin Zhu and Robert W. Heath Jr. are with the Department
of Electrical and Computer Engineering, The University of Texas at Austin, Austin,
TX, 78712 USA, e-mail: {dalin.zhu, rheath}@utexas.edu.
Junil Choi is with the Department of Electrical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk 37673 Korea, e-mail: junil@postech.ac.kr.
Qian Cheng and Weimin Xiao are with the Wireless Research and Standards Department, Huawei R&D USA, Rolling Meadows, IL, 60008 USA, e-mail: {q.cheng, weimin.xiao}@huawei.com
This work was supported in part by a gift from Huawei Technologies, in part by the National Science Foundation under Grant No. 1702800, and in part by the Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korean Government (MSIT) (No.2018(2016-0-00123), Development of Integer-Forcing MIMO Transceivers for 5G and Beyond Mobile Communication Systems).
Abstract
Millimeter-wave (mmWave) systems use directional beams to support high-rate data communications. Small misalignment between the transmit and receive beams (e.g., due to the mobility) can result in significant drop of the received signal quality especially in line-of-sight communication channels. In this paper, we propose and evaluate high-resolution angle tracking strategies for wideband mmWave systems with mobility. We custom design pairs of auxiliary beams as the tracking beams, and use them to capture the angle variations, towards which the steering directions of the data beams are adjusted. Different from conventional beam tracking designs, the proposed framework neither depends on the angle variation model nor requires an on-grid assumption. For practical implementation of the proposed methods, we examine the impact of the array calibration errors on the auxiliary beam pair design. Numerical results reveal that by employing the proposed methods, good angle tracking performance can be achieved under various antenna array configurations, channel models, and mobility conditions.
I Introduction
The small array form factor at millimeter-wave (mmWave) frequencies enables the use of large antenna arrays to generate highly directional beams. This allows array gain for improved received signal power and also reduces mean interference levels [1]-[5]. For the most benefits from beamforming, accurate channel direction information such as the channel’s angle-of-departures (AoDs) and angle-of-arrivals (AoAs) is required at both the base station (BS) and user equipment (UE) sides. Further, due to the UE’s mobility, slight misalignments of the transmit and receive beams with the channel’s AoDs and AoAs may result in significant performance loss at mmWave frequencies in line-of-sight (LOS) communication channels. Hence, accurate beam or angle tracking designs are required to better capture the channel variations and enable reliable mmWave communications in fast-varying environments.
Grid-of-beams based beam training is the de facto approach for configuring transmit and receive beams; variations are used in IEEE 802.11ad systems [6, 7] and will be used in 5G [8]. Beam tracking approaches that support grid-of-beams have been developed in [6, 7, 9, 10], but the performance depends on the grid resolution, leading to high complexity, tracking overhead, and access delay. In [11, 12], a priori-aided angle tracking strategies were proposed. By combining the temporal variation law of the AoD and AoA of the LOS path with the sparse structure of the mmWave channels, the channels obtained during the previous time-slots are used to predict the support (the index set of non-zero elements in a sparse vector) of the channel. The time-varying parameters corresponding to the support of the channel are then tracked for the subsequent time-slots. To track the non-LOS (NLOS) paths, the classical Kalman filter can be employed by first eliminating the influence of the LOS path [13]. In [14], the idea of the Kalman filter was likewise exploited when designing the angle tracking and abrupt change detection algorithms. In [15], the extended Kalman filter was used to track the channel’s AoDs and AoAs by only using the measurement of a single beam pair. The angle tracking algorithms developed in [11]-[15], however, depend on specific modeling of the geometric relationship between the BS and UE and the angle variations.
In this paper, we develop high-resolution angle tracking algorithms through the auxiliary beam pair design for mobile wideband mmWave systems under the analog architecture. In the employed analog architecture, the BS uses a small number of radio frequency (RF) chains to drive a large number of antenna elements, and forms the tracking beams in the analog domain. We propose and analyze new angle tracking procedures, where the basic principles follow those in [16, 17] with moderate modifications based on the employed array configurations and pilot signal structures. In our previous work [16, 17], we exploited the idea of auxiliary beam pair design to estimate both the narrowband and wideband mmWave channels with and without dual-polarization. The proposed approaches, however, were only applied to the angle estimation, and not specifically designed for the angle tracking. Further, in this paper, we custom design two array calibration strategies for the employed analog architecture, and characterize the impact of the array calibration errors on the proposed methods. We summarize the main contributions of the paper as follows:
•
We provide detailed design procedures of the proposed auxiliary beam pair-assisted angle tracking approaches in wideband mmWave systems. We propose several angle tracking design options and differentiate them in terms of tracking triggering device, feedback information, and information required at the UE side.
•
We develop and evaluate direct and differential feedback strategies for the proposed angle tracking designs in frequency-division duplexing systems. By judiciously exploiting the structure of auxiliary beam pair, the differential feedback strategy can significantly reduce the feedback overhead.
•
We custom design two receive combining based array calibration methods for the employed analog architecture, in which all the antenna elements are driven by a small number of RF chains. The proposed two methods are different in terms of the probing strategies of the calibration reference signals.
•
We characterize the impact of the radiation pattern impairments on our proposed methods. We first exhibit that relatively large phase and amplitude errors would contaminate the angle tracking performances of the proposed algorithms, resulting in increased tracking error probability and reduced spectral efficiency. By using the proposed array calibration methods to compensate for the radiation pattern impairments, we show that the proposed angle tracking strategies work well even with residual calibration errors.
We organize the rest of this paper as follows. In Section II, we first describe the employed system and wideband channel models; we then illustrate the frame structure and conventional grid-of-beams based beam tracking design for mmWave systems. In Section III, we explain detailed design principles and procedures of the proposed high-resolution angle tracking strategies. In Section IV, we discuss the developed array calibration methods and their impact on the proposed angle tracking designs. In Section V, we present numerical results to validate the effectiveness of the proposed techniques. Finally, we draw our conclusions in Section VI.
Notations: $\bm{A}$ is a matrix; $\bm{a}$ is a vector; $a$ is a scalar; $|a|$ is the magnitude of the complex number
$a$; $(\cdot)^{\mathrm{T}}$ and $(\cdot)^{*}$ denote transpose and conjugate transpose; $\bm{I}_{N}$ is the $N\times N$ identity matrix; $\bm{1}_{M\times N}$ represents the $M\times N$ matrix whose entries are all ones; $\mathcal{N}_{\mathrm{c}}(\bm{a},\bm{A})$ is a complex Gaussian vector with mean $\bm{a}$ and covariance $\bm{A}$; $\mathbb{E}[\cdot]$ is used to denote expectation; $\otimes$ is the Kronecker product; $\mathrm{sign}(\cdot)$ extracts the sign of a real number; $\mathrm{diag}(\cdot)$ is the diagonalization operation; and $\mathrm{vec}(\cdot)$ is the matrix vectorization operation.
II System Model and Conventional Beam Tracking Design
In this section, we first present the employed system model including the transceiver architecture, antenna array configurations, and wideband mmWave channel model. We then illustrate the conventional grid-of-beams based beam tracking design along with an introduction to the frame structure.
II-A Transceiver architecture, antenna array configurations, and received signal model
We consider a precoded MIMO-OFDM system with $N$ subcarriers and a hybrid precoding transceiver structure as shown in Figs. 1(a) and (b). A BS equipped with $N_{\mathrm{tot}}$ transmit antennas and $N_{\mathrm{RF}}$ RF chains transmits $N_{\mathrm{S}}$ data streams to a UE equipped with $M_{\mathrm{tot}}$ receive antennas and $M_{\mathrm{RF}}$ RF chains. As can be seen from Fig. 1, in a shared-array architecture, all antenna elements are jointly controlled by all RF chains sharing the same network of phase shifters. Further, we assume that a uniform planar array (UPA) is adopted at the BS, and a uniform linear array (ULA) is employed at the UE. The proposed methods are custom designed for uniform arrays, but can be extended to other array geometries by reconfiguring the beamforming vectors. The proposed methods are suited for both co-polarized and cross-polarized arrays [17], though we focus on co-polarized array setup in this paper.
Based on the employed transceiver architecture, we now develop the baseband received signal model for our system after beamforming and combining. Let $\bm{s}[k]$ denote an $N_{\mathrm{S}}\times 1$ baseband transmit symbol vector such that $\mathbb{E}\left[\bm{s}[k]\bm{s}^{*}[k]\right]=\bm{I}_{N_{\mathrm{S}}}$ and $k=0,\cdots,N-1$. The data symbol vector $\bm{s}[k]$ is first precoded using an $N_{\mathrm{RF}}\times N_{\mathrm{S}}$ digital baseband precoding matrix $\bm{F}_{\mathrm{BB}}[k]$ on the $k$-th subcarrier, resulting in $\bm{d}[k]=\left[d_{1}[k],\cdots,d_{N_{\mathrm{RF}}}[k]\right]^{\mathrm{T}}=\bm{F}_{\mathrm{BB}}[k]\bm{s}[k]$. In this paper, we set $N_{\mathrm{RF}}=N_{\mathrm{S}}$ and $\bm{F}_{\mathrm{BB}}[k]=\bm{I}_{N_{\mathrm{S}}}$ because the channel tracking is conducted in the analog domain. Note that a similar analog-only assumption applies to the UE side as well. The transmit symbols are then transformed to the time-domain via $N_{\mathrm{RF}}$ $N$-point IFFTs, generating the discrete-time signal samples $x_{n_{\mathrm{R}}}[n]=\sum_{k=0}^{N-1}d_{n_{\mathrm{R}}}[k]e^{\mathrm{j}\frac{2\pi k}{N}n}$, where $n_{\mathrm{R}}=1,\cdots,N_{\mathrm{RF}}$. Before applying an $N_{\mathrm{tot}}\times N_{\mathrm{RF}}$ wideband analog precoding matrix $\bm{F}_{\mathrm{RF}}$, a cyclic prefix (CP) of length $D$ is added to the data symbol blocks such that $D$ is greater than or equal to the maximum delay spread of the multi-path channel. Denote $\bm{x}[n_{\mathrm{c}}]=\left[x_{1}[n_{\mathrm{c}}],\cdots,x_{N_{\mathrm{RF}}}[n_{\mathrm{c}}]\right]^{\mathrm{T}}$, where $n_{\mathrm{c}}=N-D,\cdots,N-1,0,\cdots,N-1$ due to the insertion of the CP. We can then express the discrete-time transmit signal model as $\bm{x}_{\mathrm{cp}}[n_{\mathrm{c}}]=\bm{F}_{\mathrm{RF}}\bm{x}[n_{\mathrm{c}}]$. To maintain the total transmit power constraint, $\left[\left[\bm{F}_{\mathrm{RF}}\right]_{:,n_{\mathrm{R}}}\left[\bm{F}_{\mathrm{RF}}\right]_{:,n_{\mathrm{R}}}^{*}\right]_{i,i}=\frac{1}{N_{\mathrm{tot}}}$ is satisfied for $i=1,\cdots,N_{\mathrm{tot}}$.
At the UE side, after combining with an $M_{\mathrm{tot}}\times M_{\mathrm{RF}}$ analog combining matrix $\bm{W}_{\mathrm{RF}}$, the CP is removed. The received data symbols are then transformed from the time-domain to the frequency-domain via $M_{\mathrm{RF}}$, $N$-point FFTs. Denote the frequency-domain $M_{\mathrm{tot}}\times N_{\mathrm{tot}}$ channel matrix by $\bm{H}[k]$. We can then express the discrete-time received signal as
$$\bm{y}[k]=\bm{W}_{\mathrm{RF}}^{*}\bm{H}[k]\bm{F}_{\mathrm{RF}}\bm{d}[k]+\bm{W}_{\mathrm{RF}}^{*}\bm{n}[k].$$
(1)
The noise vector $\bm{n}\sim\mathcal{N}_{\mathrm{c}}(\bm{0}_{M_{\mathrm{tot}}},\sigma^{2}\bm{I}_{M_{\mathrm{tot}}})$ and $\sigma^{2}=1/\gamma$, where $\gamma$ represents the target signal-to-noise ratio (SNR) before the transmit beamforming.
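The per-subcarrier model (1) and the phase-shifter power constraint can be illustrated with a short NumPy sketch (the dimensions and random precoders below are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N_tot, M_tot, N_RF, M_RF = 16, 8, 2, 2  # illustrative dimensions

# Unit-modulus (phase-shifter) analog precoder and combiner
F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_tot, N_RF))) / np.sqrt(N_tot)
W_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (M_tot, M_RF))) / np.sqrt(M_tot)

# Per-column power constraint: [F_{:,n} F_{:,n}^*]_{ii} = 1/N_tot
outer = np.outer(F_RF[:, 0], F_RF[:, 0].conj())
assert np.allclose(np.diag(outer).real, 1.0 / N_tot)

# One subcarrier of eq. (1): y[k] = W* H[k] F d[k] + W* n[k]
H = (rng.standard_normal((M_tot, N_tot)) + 1j * rng.standard_normal((M_tot, N_tot))) / np.sqrt(2)
d = rng.standard_normal(N_RF) + 1j * rng.standard_normal(N_RF)
n = rng.standard_normal(M_tot) + 1j * rng.standard_normal(M_tot)
y = W_RF.conj().T @ H @ F_RF @ d + W_RF.conj().T @ n
assert y.shape == (M_RF,)
```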
II-B Wideband channel model
We employ a spatial geometric channel model to characterize the angular sparsity and frequency selectivity of the wideband mmWave channel. Spatial geometric channel models have been adopted in Long-Term Evolution (LTE) systems for various deployment scenarios [18]. In Section V, we use practical channel parameters obtained via measurements to evaluate the proposed methods, though we employ the spatial geometric channel model here to analytically explain the core idea. We assume that the channel has $N_{\mathrm{r}}$ paths, where each path $r$ has azimuth and elevation AoDs $\phi_{r}$, $\mu_{r}$, and AoA $\varphi_{r}$. Let $p(\tau)$ denote the combined effect of filtering and pulse-shaping for $T_{\mathrm{s}}$-spaced signaling at $\tau$ seconds. We express the time-domain delay-$d$ MIMO channel matrix as
$$\bm{H}[d]=\sum_{r=1}^{N_{\mathrm{r}}}g_{r}\,p\left(dT_{\mathrm{s}}-\tau_{r}\right)\bm{a}_{\mathrm{r}}(\varphi_{r})\bm{a}_{\mathrm{t}}^{*}(\mu_{r},\phi_{r}),$$
(2)
where $g_{r}$ represents the complex path gain of path-$r$, $\bm{a}_{\mathrm{r}}(\cdot)\in\mathbb{C}^{M_{\mathrm{tot}}\times 1}$ and $\bm{a}_{\mathrm{t}}(\cdot,\cdot)\in\mathbb{C}^{N_{\mathrm{tot}}\times 1}$ correspond to the receive and transmit array response vectors. The channel frequency response matrix on subcarrier $k$ is the Fourier transform of $\bm{H}[d]$ such that
$$\bm{H}[k]=\sum_{r=1}^{N_{\mathrm{r}}}g_{r}\,\rho_{\tau_{r}}[k]\,\bm{a}_{\mathrm{r}}(\varphi_{r})\bm{a}_{\mathrm{t}}^{*}(\mu_{r},\phi_{r}),$$
(3)
where $\rho_{\tau_{r}}[k]=\sum_{d=0}^{D-1}p\left(dT_{\mathrm{s}}-\tau_{r}\right)e^{-\mathrm{j}\frac{2\pi kd}{N}}$ is the Fourier transform of the delayed sampled filter $p(\tau)$ [19, 20].
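The relation between the delay-tap model (2) and the frequency response (3) is a per-subcarrier DFT of the taps; a small NumPy sketch with an assumed sinc pulse shape (our illustrative choice) confirms the equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
M_tot, N_tot, N, D, Nr = 4, 8, 64, 16, 3  # illustrative sizes
Ts = 1.0
p = lambda tau: np.sinc(tau / Ts)  # assumed pulse shape, for illustration only

g = rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)
tau = rng.uniform(0, 5 * Ts, Nr)
a_r = [np.exp(1j * rng.uniform(0, 2 * np.pi) * np.arange(M_tot)) for _ in range(Nr)]
a_t = [np.exp(1j * rng.uniform(0, 2 * np.pi) * np.arange(N_tot)) for _ in range(Nr)]

# Delay-d taps, eq. (2)
H_d = [sum(g[r] * p(d * Ts - tau[r]) * np.outer(a_r[r], a_t[r].conj())
           for r in range(Nr)) for d in range(D)]

# Frequency response via eq. (3), using rho_{tau_r}[k]
k = 3
rho = [sum(p(d * Ts - tau[r]) * np.exp(-2j * np.pi * k * d / N) for d in range(D))
       for r in range(Nr)]
H_k = sum(g[r] * rho[r] * np.outer(a_r[r], a_t[r].conj()) for r in range(Nr))

# ...which must match the DFT of the delay taps on subcarrier k
H_k_dft = sum(H_d[d] * np.exp(-2j * np.pi * k * d / N) for d in range(D))
assert np.allclose(H_k, H_k_dft)
```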
Assuming that the UPA employed by the BS lies in the $\mathrm{xy}$-plane with $N_{\mathrm{x}}$ and $N_{\mathrm{y}}$ elements on the $\mathrm{x}$ and $\mathrm{y}$ axes, the transmit array response vector is
$$\bm{a}_{\mathrm{t}}(\mu_{r},\phi_{r})=\frac{1}{\sqrt{N_{\mathrm{tot}}}}\Big[1,e^{\mathrm{j}\frac{2\pi}{\lambda}d_{\mathrm{tx}}\sin(\mu_{r})\cos(\phi_{r})},\cdots,e^{\mathrm{j}\frac{2\pi}{\lambda}\left(N_{\mathrm{x}}-1\right)d_{\mathrm{tx}}\sin(\mu_{r})\cos(\phi_{r})},e^{\mathrm{j}\frac{2\pi}{\lambda}d_{\mathrm{ty}}\sin(\mu_{r})\sin(\phi_{r})},\cdots,e^{\mathrm{j}\frac{2\pi}{\lambda}\left(\left(N_{\mathrm{x}}-1\right)d_{\mathrm{tx}}\sin(\mu_{r})\cos(\phi_{r})+\left(N_{\mathrm{y}}-1\right)d_{\mathrm{ty}}\sin(\mu_{r})\sin(\phi_{r})\right)}\Big]^{\mathrm{T}},$$
(4)
where $N_{\mathrm{tot}}=N_{\mathrm{x}}N_{\mathrm{y}}$, $\lambda$ represents the wavelength corresponding to the operating carrier frequency, $d_{\mathrm{tx}}$ and $d_{\mathrm{ty}}$ are the inter-element distances of the transmit antenna elements on the $\mathrm{x}$ and $\mathrm{y}$ axes. Denote by $\theta_{r}=\frac{2\pi}{\lambda}d_{\mathrm{tx}}\sin(\mu_{r})\cos(\phi_{r})$ and $\psi_{r}=\frac{2\pi}{\lambda}d_{\mathrm{ty}}\sin(\mu_{r})\sin(\phi_{r})$, which can be interpreted as the elevation and azimuth transmit spatial frequencies for path-$r$. We further define two vectors $\bm{a}_{\mathrm{tx}}(\theta_{r})\in\mathbb{C}^{N_{\mathrm{x}}\times 1}$ and $\bm{a}_{\mathrm{ty}}(\psi_{r})\in\mathbb{C}^{N_{\mathrm{y}}\times 1}$ as
$$\bm{a}_{\mathrm{tx}}(\theta_{r})=\frac{1}{\sqrt{N_{\mathrm{x}}}}\left[1,e^{\mathrm{j}\theta_{r}},\cdots,e^{\mathrm{j}\left(N_{\mathrm{x}}-1\right)\theta_{r}}\right]^{\mathrm{T}},\quad\bm{a}_{\mathrm{ty}}(\psi_{r})=\frac{1}{\sqrt{N_{\mathrm{y}}}}\left[1,e^{\mathrm{j}\psi_{r}},\cdots,e^{\mathrm{j}\left(N_{\mathrm{y}}-1\right)\psi_{r}}\right]^{\mathrm{T}},$$
(5)
which can be viewed as the transmit array response vectors in the elevation and azimuth domains. We therefore have $\bm{a}_{\mathrm{t}}(\theta_{r},\psi_{r})=\bm{a}_{\mathrm{tx}}(\theta_{r})\otimes\bm{a}_{\mathrm{ty}}(\psi_{r})$ [17]. With this decomposition, we are able to separately track the channel’s azimuth and elevation angle information.
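The Kronecker factorization can be checked directly; the sketch below builds the UPA phases $n_{\mathrm{x}}\theta_{r}+n_{\mathrm{y}}\psi_{r}$ and flattens them with $n_{\mathrm{x}}$ as the outer index (one possible vectorization convention, chosen here for illustration):

```python
import numpy as np

Nx, Ny = 4, 3
theta, psi = 0.7, -1.1  # example spatial frequencies (not from the paper)

a_tx = np.exp(1j * theta * np.arange(Nx)) / np.sqrt(Nx)
a_ty = np.exp(1j * psi * np.arange(Ny)) / np.sqrt(Ny)

# UPA element (nx, ny) carries phase nx*theta + ny*psi; flattening with nx
# as the outer index makes the Kronecker order a_tx (x) a_ty apply
a_t = np.array([np.exp(1j * (nx * theta + ny * psi))
                for nx in range(Nx) for ny in range(Ny)]) / np.sqrt(Nx * Ny)

assert np.allclose(a_t, np.kron(a_tx, a_ty))
assert np.isclose(np.linalg.norm(a_t), 1.0)  # unit-norm steering vector
```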
Since the ULA is employed by the UE, the receive array response vector is
$$\bm{a}_{\mathrm{r}}(\varphi_{r})=\frac{1}{\sqrt{M_{\mathrm{tot}}}}\big[1,e^{\mathrm{j}\frac{2\pi}{\lambda}d_{\mathrm{r}}\sin(\varphi_{r})},\cdots,e^{\mathrm{j}\frac{2\pi}{\lambda}d_{\mathrm{r}}\left(M_{\mathrm{tot}}-1\right)\sin(\varphi_{r})}\big]^{\mathrm{T}},$$
(6)
where $d_{\mathrm{r}}$ denotes the inter-element distance between the receive antenna elements. Let $\nu_{r}=\frac{2\pi}{\lambda}d_{\mathrm{r}}\sin(\varphi_{r})$ denote the receive spatial frequency for path-$r$. We can rewrite the receive array response vector for the UE as $\bm{a}_{\mathrm{r}}(\nu_{r})=\frac{1}{\sqrt{M_{\mathrm{tot}}}}\left[1,e^{\mathrm{j}\nu_{r}},\cdots,e^{\mathrm{j}\left(M_{\mathrm{tot}}-1\right)\nu_{r}}\right]^{\mathrm{T}}$.
II-C Frame structure and conventional beam tracking procedures
In Fig. 2(a), we provide a potential frame structure. The time frame consists of three main components: channel estimation (ChEst), dedicated data channel (DDC), and dedicated tracking channel (DTC). The ChEst, DDC and DTC are composed of various numbers of time-slots. Here, we define the time-slot as the basic time unit, which is equivalent to, say, one OFDM symbol duration. We assume a total of $T$ time-slots for one DTC. In the DDC, directional narrow beams are probed by the BS for high-rate data communications, while in the DTC, relatively wide beams are used to track the channel variations. In this paper, the beams in the DTC and the DDC are multiplexed in the time-domain as shown in Fig. 2(a). A transition period (TP) may exist between the DTC and the DDC. Similar to the zero prefix/postfix design for OFDM [21], the TP is set as a zero region. As beams probed in the DDC and the DTC may have different beamwidths, the antenna array can be reconfigured during the TP. The TP may also handle the tracking requests and responses between the BS and the UE. Further, the beam tracking in the DTC can be conducted in either a periodic or an aperiodic manner as shown in Fig. 2(a). Based on the employed frame structure, we now illustrate the conventional grid-of-beams based beam tracking procedures for mmWave systems.
To reduce the computational complexity and tracking overhead, the beams in the DTC are formed surrounding the beam in the DDC in the angular domain. For simplicity, we first categorize all beams into three types, which are (i) the anchor beam (the beam in the DDC), (ii) the supporting beams (a predefined number of beams in the DTC that are closely surrounding the anchor beam), and (iii) the backup beams (the beams in the DTC other than the supporting beams). For a given DTC, the received signal strengths of the supporting and backup beams are measured by the UE and fed back to the BS, which are then compared with a predefined threshold. If the received signal strengths of the supporting beams are greater than the given threshold, the current anchor beam is continuously used for data communications until the next DTC is triggered. Otherwise, the backup beams that yield larger received signal strengths above the given threshold are considered, and the beam training is executed within the probing range of the selected backup beams to update the steering direction of the anchor beam. If the received signal strengths of all the supporting and backup beams are below the given threshold, the complete beam training process as in the channel estimation phase [7, 22] will be conducted.
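The three-tier decision logic described above can be sketched as follows (a simplified illustration; the triggering rule, here a max-over-threshold test, and all names are our own choices):

```python
def track_decision(supporting, backup, threshold):
    """Sketch of the grid-of-beams tracking logic described above.

    supporting/backup: received signal strengths fed back by the UE.
    The max-over-threshold triggering rule is our simplification."""
    if max(supporting) > threshold:
        return "keep-anchor"                  # anchor beam still good
    strong = [i for i, s in enumerate(backup) if s > threshold]
    if strong:
        return ("local-retrain", strong)      # retrain within these backup beams
    return "full-beam-training"               # fall back to full training [7, 22]

assert track_decision([0.9, 0.8], [0.3, 0.2], 0.5) == "keep-anchor"
assert track_decision([0.2, 0.1], [0.7, 0.2], 0.5) == ("local-retrain", [0])
assert track_decision([0.2, 0.1], [0.3, 0.2], 0.5) == "full-beam-training"
```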
A conventional beam tracking design may incur a high tracking error probability due to the use of relatively wide beams and lack of quantization resolution [12, 23]. Further, to update the steering direction of the anchor beam, an exhaustive search over all candidate anchor beams of interest is executed, which yields relatively high computational complexity and access delay. Hence, new beam or angle tracking methods with high tracking resolution and low implementation complexity are needed to enable reliable mmWave communications in fast-varying environments.
III Proposed Angle Tracking Designs for Mobile Wideband mmWave Systems
In this section, we first illustrate the employed beam-specific pilot signal structure for the proposed tracking algorithms. Based on the employed shared-array architecture in Fig. 1, we then explain the design principles of the proposed high-resolution angle tracking approaches assuming the beam-specific pilot signal structure. Further, we present the detailed design procedures for the proposed algorithms along with the discussion of various feedback strategies. Unless otherwise specified, we explain the proposed angle tracking strategies in the azimuth domain assuming given elevation AoDs and AoAs. Note that the proposed algorithms can be directly extended to the tracking of the elevation directions.
III-A Design principles of proposed angle tracking approaches
The design focus of the proposed angle tracking approaches is to first obtain high-resolution angle estimates, and then track the angle variations via the custom designed tracking beams. We employ the same frame structure as in Fig. 2(a) in the proposed methods, where the tracking beams are probed during the DTC. In this part, we provide an overview of the auxiliary beam pair-assisted high-resolution angle estimation design for wideband mmWave systems. For simplicity, we focus on the estimation of the azimuth AoDs at the receiver.
Each auxiliary beam pair comprises two successively probed analog beams in the angular domain. Pairs of custom designed analog transmit and receive beams are probed to cover the given angular ranges. In this paper, the two analog beams in the same auxiliary beam pair are formed simultaneously by the BS, and are differentiated by the beam-specific pilot signals at the UE side. In Fig. 3(a), we provide one conceptual example of the transmit auxiliary beam pair formed in the azimuth domain. As can be seen from Fig. 3(a), to form an azimuth transmit auxiliary beam pair, the two analog beamforming vectors targeted at the directions of $\eta_{\mathrm{az}}-\delta_{\mathrm{y}}$ and $\eta_{\mathrm{az}}+\delta_{\mathrm{y}}$ in the azimuth domain are $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})$ and $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})$, where $\delta_{\mathrm{y}}=\frac{2\ell_{\mathrm{y}}\pi}{N_{\mathrm{y}}}$ with $\ell_{\mathrm{y}}=1,\cdots,\frac{N_{\mathrm{y}}}{4}$ and $\eta_{\mathrm{el}}$ corresponds to a given elevation direction. Now, we illustrate the employed pilot signal structure.
Due to their constant amplitude and robustness to frequency selectivity, Zadoff–Chu (ZC)-type sequences are used in this paper as the pilot signals for tracking. Denoting the sequence length by $N_{\mathrm{ZC}}$, the employed ZC sequence with root index $i_{z}$ is
$$s_{i_{z}}[m]=\exp\left(-\mathrm{j}\frac{\pi m(m+1)i_{z}}{N_{\mathrm{ZC}}}\right),$$
(7)
where $m=0,\cdots,N_{\mathrm{ZC}}-1$. Here, we let $N_{\mathrm{ZC}}=N$ (i.e., the total number of employed subcarriers) and $z\in\{0,1\}$ such that $i_{0}$ and $i_{1}$ correspond to the two beams in the same auxiliary beam pair. By cross-correlating the two ZC sequences at zero lag, we obtain [24]
$$\sum_{k=0}^{N-1}s_{i_{0}}[k]s^{*}_{i_{1}}[k]=\begin{cases}1,&\text{if }i_{0}=i_{1},\\
\beta_{i_{0},i_{1}},&\text{otherwise}.\end{cases}$$
(8)
Here, $\beta_{i_{0},i_{1}}$ is a constant with small magnitude $\left|\beta_{i_{0},i_{1}}\right|\approx 0$. In this paper, we assume $\beta_{i_{0},i_{1}}=0$, i.e., the two sequences are orthogonal in the code-domain. By leveraging this code-domain orthogonality, the two simultaneously probed beams in the same auxiliary beam pair can be differentiated by the UE without interference.
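A short NumPy sketch of the ZC sequence of (7) and its correlation properties (the length $N_{\mathrm{ZC}}=61$ and the roots are illustrative choices; for prime lengths the normalized zero-lag cross-correlation magnitude is about $1/\sqrt{N_{\mathrm{ZC}}}$, i.e., small but nonzero):

```python
import numpy as np

N_zc = 61                      # illustrative prime sequence length
m = np.arange(N_zc)

def zc(root):
    # eq. (7): s_{i_z}[m] = exp(-j*pi*m*(m+1)*i_z / N_ZC)
    return np.exp(-1j * np.pi * m * (m + 1) * root / N_zc)

s0, s1 = zc(1), zc(2)

assert np.allclose(np.abs(s0), 1.0)                            # constant amplitude
assert np.isclose(np.abs(np.sum(s0 * s0.conj())) / N_zc, 1.0)  # zero-lag autocorrelation

# normalized zero-lag cross-correlation is small (about 1/sqrt(N_zc) here)
beta = np.sum(s0 * s1.conj()) / N_zc
assert np.abs(beta) < 0.2
```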
Based on the employed pilot signal structure, we now explain the design principles of the auxiliary beam pair-assisted angle acquisition. Assume $M_{\mathrm{RF}}=1$ and a given analog receive beam, say, $\bm{a}_{\mathrm{r}}(\vartheta)$. According to the employed array configurations and the pilot signal structure, we can then rewrite (1) in the absence of noise as
$$y[k]=\bm{a}^{*}_{\mathrm{r}}(\vartheta)\sum_{r=1}^{N_{\mathrm{r}}}g_{r}\rho_{\tau_{r}}[k]\bm{a}_{\mathrm{r}}(\nu_{r})\bm{a}_{\mathrm{t}}^{*}(\theta_{r},\psi_{r})\left[\begin{array}{cc}\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})&\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})\end{array}\right]\left[\begin{array}{c}s_{i_{0}}[k]\\
s_{i_{1}}[k]\end{array}\right].$$
(9)
Our design focus here is to estimate the azimuth transmit spatial frequency $\psi_{r^{\star}}$ for path-$r^{\star}$ with $r^{\star}\in\left\{1,\cdots,N_{\mathrm{r}}\right\}$. We first assume that $\psi_{r^{\star}}$ falls into the probing range of the auxiliary beam pair such that $\psi_{r^{\star}}\in\left(\eta_{\mathrm{az}}-\delta_{\mathrm{y}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}}\right)$. This is possible by first selecting the $N_{\mathrm{r}}$ beams with the largest received powers, and then $N_{\mathrm{r}}$ auxiliary beam pairs according to [16, Lemma 2] such that each spatial frequency can be covered by the corresponding auxiliary beam pair with high probability. We can then rewrite (9) as
$$y[k]=\bm{a}^{*}_{\mathrm{r}}(\vartheta)g_{r^{\star}}\rho_{\tau_{r^{\star}}}[k]\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\left[\begin{array}{cc}\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})&\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})\end{array}\right]\left[\begin{array}{c}s_{i_{0}}[k]\\
s_{i_{1}}[k]\end{array}\right]+\underbrace{\bm{a}^{*}_{\mathrm{r}}(\vartheta)\sum_{\begin{subarray}{c}r^{\prime}=1,\\
r^{\prime}\neq r^{\star}\end{subarray}}^{N_{\mathrm{r}}}g_{r^{\prime}}\rho_{\tau_{r^{\prime}}}[k]\bm{a}_{\mathrm{r}}(\nu_{r^{\prime}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\prime}},\psi_{r^{\prime}})\left[\begin{array}{cc}\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})&\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})\end{array}\right]\left[\begin{array}{c}s_{i_{0}}[k]\\
s_{i_{1}}[k]\end{array}\right]}_{\textrm{multi-path interference}}.$$
(10)
Because of the angular sparsity of the mmWave channels [17], we assume that other paths’ spatial frequencies are not covered by the auxiliary beam pair, i.e., $\psi_{r^{\prime}}\notin\left(\eta_{\mathrm{az}}-\delta_{\mathrm{y}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}}\right)$ with $r^{\prime}\in\left\{1,\cdots,N_{\mathrm{r}}\right\}$ and $r^{\prime}\neq r^{\star}$. Along with the assumption of the high-power regime (e.g., $N_{\mathrm{y}}\rightarrow\infty$), we ignore the multi-path interference and rewrite (10) as
$$y[k]=g_{r^{\star}}\rho_{\tau_{r^{\star}}}[k]\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\left[\begin{array}{cc}\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})&\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})\end{array}\right]\left[\begin{array}{c}s_{i_{0}}[k]\\
s_{i_{1}}[k]\end{array}\right].$$
(11)
Note that we can extend the algorithm to separately estimate multiple paths in parallel.
Assuming perfect time-frequency synchronization, the UE employs locally stored reference beam-specific sequences to correlate the received signal samples. By using the reference ZC sequence with the sequence root index $i_{0}$, we can first obtain
$$\Lambda^{\Delta}_{\mathrm{az}}=\sum_{k=0}^{N-1}s^{*}_{i_{0}}[k]y[k]=\sum_{k=0}^{N-1}g_{r^{\star}}\rho_{\tau_{r^{\star}}}[k]\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})s^{*}_{i_{0}}[k]s_{i_{0}}[k]+\sum_{k=0}^{N-1}g_{r^{\star}}\rho_{\tau_{r^{\star}}}[k]\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})s^{*}_{i_{0}}[k]s_{i_{1}}[k].$$
(13)
We assume flat channels here such that $\bar{\rho}_{\tau_{r^{\star}}}=\rho_{\tau_{r^{\star}}}[0]=\cdots=\rho_{\tau_{r^{\star}}}[N-1]$ for better illustration of the design principles. The proposed design approach can still achieve promising angle estimation/tracking performance in wideband channels (verified in Section V-B) since the correlation properties of the ZC-type sequences are robust to the frequency selectivity (e.g., up to $8.6$ MHz continuous bandwidth in LTE [25, 26]). We can then rewrite (13) as
$$\displaystyle\Lambda^{\Delta}_{\mathrm{az}}=g_{r^{\star}}\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})\bar{\rho}_{\tau_{r^{\star}}}\sum_{k=0}^{N-1}s^{*}_{i_{0}}[k]s_{i_{0}}[k]+g_{r^{\star}}\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})\bar{\rho}_{\tau_{r^{\star}}}\sum_{k=0}^{N-1}s^{*}_{i_{0}}[k]s_{i_{1}}[k]$$
(14)
$$\displaystyle\overset{(a)}{=}g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}}),$$
(15)
where step $(a)$ follows from the employed beam-specific pilot signal structure in (8). We then compute the corresponding received signal strength as
$$\displaystyle\chi^{\Delta}_{\mathrm{az}}=\left(\Lambda^{\Delta}_{\mathrm{az}}\right)^{*}\Lambda^{\Delta}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\bm{a}^{*}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})\bm{a}_{\mathrm{t}}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}}).$$
(17)
Similarly, using the ZC sequence with the root index $i_{1}$ to correlate the received signal samples, we obtain
$$\displaystyle\Lambda^{\Sigma}_{\mathrm{az}}=\sum_{k=0}^{N-1}s^{*}_{i_{1}}[k]y[k]=g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}}).$$
(18)
We can calculate the corresponding received signal strength as
$$\displaystyle\chi^{\Sigma}_{\mathrm{az}}=\left(\Lambda^{\Sigma}_{\mathrm{az}}\right)^{*}\Lambda^{\Sigma}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\bm{a}^{*}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})\bm{a}_{\mathrm{t}}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}}).$$
(20)
We can further express $\chi^{\Delta}_{\mathrm{az}}$ and $\chi^{\Sigma}_{\mathrm{az}}$ as
$$\displaystyle\chi^{\Delta}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\frac{\sin^{2}\left(\frac{N_{\mathrm{x}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})}{2}\right)}{\sin^{2}\left(\frac{\theta_{r^{\star}}-\eta_{\mathrm{el}}}{2}\right)}\frac{\sin^{2}\left(\frac{N_{\mathrm{y}}(\psi_{r^{\star}}-\eta_{\mathrm{az}})}{2}\right)}{\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}+\delta_{\mathrm{y}}}{2}\right)},$$
(21)
$$\displaystyle\chi^{\Sigma}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\frac{\sin^{2}\left(\frac{N_{\mathrm{x}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})}{2}\right)}{\sin^{2}\left(\frac{\theta_{r^{\star}}-\eta_{\mathrm{el}}}{2}\right)}\frac{\sin^{2}\left(\frac{N_{\mathrm{y}}(\psi_{r^{\star}}-\eta_{\mathrm{az}})}{2}\right)}{\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}-\delta_{\mathrm{y}}}{2}\right)},$$
(22)
where (21) and (22) are obtained via $\left|\sum_{\bar{m}=1}^{M}e^{-\mathrm{j}(\bar{m}-1)\bar{x}}\right|^{2}=\frac{\sin^{2}\left(\frac{M\bar{x}}{2}\right)}{\sin^{2}\left(\frac{\bar{x}}{2}\right)}$, together with the auxiliary beam pair design choice that $N_{\mathrm{y}}\delta_{\mathrm{y}}$ is an integer multiple of $2\pi$ (e.g., $N_{\mathrm{y}}=16$ and $\delta_{\mathrm{y}}=\pi/8$), which gives $\sin^{2}\left(\frac{N_{\mathrm{y}}(\psi_{r^{\star}}-\eta_{\mathrm{az}}\pm\delta_{\mathrm{y}})}{2}\right)=\sin^{2}\left(\frac{N_{\mathrm{y}}(\psi_{r^{\star}}-\eta_{\mathrm{az}})}{2}\right)$.
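The array-factor identity can be spot-checked numerically; the sketch below (with illustrative values $M=16$, $\bar{x}=0.3$) compares the squared magnitude of the geometric sum with its closed form:

```python
import numpy as np

def lhs(M, x):
    # |sum_{m=1}^{M} e^{-j(m-1)x}|^2, the squared array-factor magnitude
    return np.abs(np.sum(np.exp(-1j * np.arange(M) * x))) ** 2

def rhs(M, x):
    # sin^2(Mx/2) / sin^2(x/2), the closed form used in (21) and (22)
    return np.sin(M * x / 2) ** 2 / np.sin(x / 2) ** 2

# The two sides agree to floating-point precision for any x not a
# multiple of 2*pi (where the closed form needs the limit M^2).
```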
We define the ratio metric
$$\displaystyle\zeta_{\mathrm{az}}=\frac{\chi^{\Delta}_{\mathrm{az}}-\chi^{\Sigma}_{\mathrm{az}}}{\chi^{\Delta}_{\mathrm{az}}+\chi^{\Sigma}_{\mathrm{az}}}=\frac{\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}-\delta_{\mathrm{y}}}{2}\right)-\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}+\delta_{\mathrm{y}}}{2}\right)}{\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}-\delta_{\mathrm{y}}}{2}\right)+\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}+\delta_{\mathrm{y}}}{2}\right)}=-\frac{\sin\left(\psi_{r^{\star}}-\eta_{\mathrm{az}}\right)\sin(\delta_{\mathrm{y}})}{1-\cos\left(\psi_{r^{\star}}-\eta_{\mathrm{az}}\right)\cos(\delta_{\mathrm{y}})}.$$
(23)
According to [16, Lemma 1], if $|\psi_{r^{\star}}-\eta_{\mathrm{az}}|<\delta_{\mathrm{y}}$, i.e., the azimuth transmit spatial frequency $\psi_{r^{\star}}$ lies within $\left(\eta_{\mathrm{az}}-\delta_{\mathrm{y}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}}\right)$, then $\zeta_{\mathrm{az}}$ is a monotonically decreasing, and hence invertible, function of $\psi_{r^{\star}}-\eta_{\mathrm{az}}$. Via the inverse function, we can therefore derive the estimate of $\psi_{r^{\star}}$ as
$$\hat{\psi}_{r^{\star}}=\eta_{\mathrm{az}}-\arcsin\left(\frac{\zeta_{\mathrm{az}}\sin(\delta_{\mathrm{y}})-\zeta_{\mathrm{az}}\sqrt{1-\zeta_{\mathrm{az}}^{2}}\sin(\delta_{\mathrm{y}})\cos(\delta_{\mathrm{y}})}{\sin^{2}(\delta_{\mathrm{y}})+\zeta_{\mathrm{az}}^{2}\cos^{2}(\delta_{\mathrm{y}})}\right).$$
(24)
If $\zeta_{\mathrm{az}}$ is perfect, i.e., not impaired by noise or other types of interference, we can perfectly recover the azimuth transmit spatial frequency of path-$r^{\star}$, i.e., $\hat{\psi}_{r^{\star}}=\psi_{r^{\star}}$.
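The forward and inverse mappings (23) and (24) can be verified with a short round-trip sketch (the boresight, offset, and spatial-frequency values below are illustrative):

```python
import numpy as np

def ratio_metric(psi, eta_az, delta_y):
    # Eq. (23): noiseless ratio metric of the azimuth auxiliary beam pair
    d = psi - eta_az
    return -np.sin(d) * np.sin(delta_y) / (1 - np.cos(d) * np.cos(delta_y))

def invert_ratio_metric(zeta, eta_az, delta_y):
    # Eq. (24): closed-form inversion recovering the spatial frequency
    s, c = np.sin(delta_y), np.cos(delta_y)
    num = zeta * s - zeta * np.sqrt(1 - zeta ** 2) * s * c
    return eta_az - np.arcsin(num / (s ** 2 + zeta ** 2 * c ** 2))

# Assumed values: boresight 0.5 rad, offset pi/8, and a true spatial
# frequency inside (eta_az - delta_y, eta_az + delta_y)
eta_az, delta_y, psi = 0.5, np.pi / 8, 0.6
zeta = ratio_metric(psi, eta_az, delta_y)
psi_hat = invert_ratio_metric(zeta, eta_az, delta_y)  # recovers psi
```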
In Section III-B, we restrict attention to tracking path-$r^{\star}$’s azimuth AoD. To better reveal the temporal evolution, we use $\psi_{r^{\star},t}$ instead of $\psi_{r^{\star}}$ to denote path-$r^{\star}$’s azimuth transmit spatial frequency at a given time-slot $t\in\left\{0,\cdots,T-1\right\}$ in the DTC.
III-B Design procedures of proposed angle tracking approaches
Leveraging the high-resolution angle estimates, we exploit the auxiliary beam pair design in forming tracking beams in the DTC. In Fig. 3(b), we present one conceptual example of applying the auxiliary beam pair based approach in angle tracking. In this example, one transmit auxiliary beam pair (e.g., $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})$ and $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})$ in Fig. 3(a)) is probed during the DTC. The boresight angle of the auxiliary beam pair (e.g., $\eta_{\mathrm{az}}$ in Fig. 3(a)) is identical to the steering direction of the corresponding anchor beam in the DDC. In the following, we first illustrate the general framework of the proposed angle tracking designs.
In Fig. 4, we provide the relationship between the UE’s moving trajectory and the tracking beams in the DTC. At time-slot $0$, the anchor beam in the DDC with the azimuth boresight angle $\eta_{\mathrm{az},0}$ steers towards the UE of interest. One azimuth transmit auxiliary beam pair is formed as the tracking beams in the DTC. For a given elevation direction $\eta_{\mathrm{el},0}$, the corresponding two beams probe towards $\eta_{\mathrm{az},0}-\delta_{\mathrm{y}}$ and $\eta_{\mathrm{az},0}+\delta_{\mathrm{y}}$ with the boresight angle $\eta_{\mathrm{az},0}$ in the azimuth domain. As can be seen from the conceptual example shown in Fig. 4, at time-slots $1,\cdots,T-1$, the UE of interest moves away from the original azimuth position $\psi_{r^{\star},0}$ (or $\eta_{\mathrm{az},0}$) to $\psi_{r^{\star},1},\cdots,\psi_{r^{\star},T-1}$. Note that as long as $\psi_{r^{\star},1},\cdots,\psi_{r^{\star},T-1}$ are in the probing range of the tracking beams, they are expected to be accurately tracked according to the design principles of the auxiliary beam pair.
In the proposed methods, either the BS or the UE can trigger the angle tracking process; we refer to these as the BS-driven and UE-driven angle tracking methods, respectively. For both strategies, either a periodic or an aperiodic DTC design can be adopted. Further, for the proposed BS-driven angle tracking, no prior knowledge of the auxiliary beam pair setup is required at the UE side. In the following, we present the detailed design procedures of the proposed methods and illustrate the employed direct and differential feedback strategies.
1. BS/UE-driven angle tracking design with direct ratio metric feedback. We start by illustrating the BS-driven angle tracking strategy using the direct ratio metric feedback. For a given time-slot $t\in\left\{0,\cdots,T-1\right\}$ in the DTC, the corresponding ratio metric $\zeta_{\mathrm{az},t}$ is calculated by the UE according to (23) using the probed azimuth transmit auxiliary beam pair. First, assume that the BS triggers the feedback of the derived ratio metric. For instance, considering a given DTC, if the BS requires the ratio metric feedback at time-slot $T-1$, $\zeta_{\mathrm{az},T-1}$ is then quantized and sent back to the BS. In this case, time-slot $T-1$ is the last time-slot of a given DTC. Note that in practice, the BS may require the ratio metric feedback for multiple time-slots within the same DTC to track the fast-varying channels. It is therefore essential for the UE to keep computing the ratio metric for every time-slot in the DTC. Upon receiving the ratio metric feedback from the UE at time-slot $t$, the BS retrieves the corresponding angle estimate according to (24). Denoting the azimuth angle estimate at time-slot $t$ by $\hat{\psi}_{r^{\star},t}$, we have
$$\hat{\psi}_{r^{\star},t}=\eta_{\mathrm{az},0}-\arcsin\left(\frac{\zeta_{\mathrm{az},t}\sin(\delta_{\mathrm{y}})-\zeta_{\mathrm{az},t}\sqrt{1-\zeta_{\mathrm{az},t}^{2}}\sin(\delta_{\mathrm{y}})\cos(\delta_{\mathrm{y}})}{\sin^{2}(\delta_{\mathrm{y}})+\zeta_{\mathrm{az},t}^{2}\cos^{2}(\delta_{\mathrm{y}})}\right).$$
(25)
The angle difference $\Delta\psi_{r^{\star},t}=\left|\psi_{r^{\star},0}-\hat{\psi}_{r^{\star},t}\right|$ is then calculated by the BS and compared with a predefined threshold $\varsigma_{\mathrm{az}}$. If $\Delta\psi_{r^{\star},t}\geq\varsigma_{\mathrm{az}}$, the azimuth steering direction of the anchor beam in the DDC is then updated from $\eta_{\mathrm{az},0}$ to $\eta_{\mathrm{az},t}=\hat{\psi}_{r^{\star},t}$. Otherwise, the azimuth steering direction of the anchor beam in the DDC is kept unchanged from time-slot $0$, i.e., $\eta_{\mathrm{az},t}=\eta_{\mathrm{az},0}$.
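The threshold test above amounts to a simple BS-side update rule; a minimal sketch, with an assumed threshold value, is:

```python
import numpy as np

def update_anchor_direction(eta_az_0, psi_hat_t, threshold):
    # Re-steer the anchor beam only if the estimated azimuth spatial
    # frequency has drifted beyond the predefined threshold
    if abs(eta_az_0 - psi_hat_t) >= threshold:
        return psi_hat_t      # update: eta_az,t = psi_hat_{r*,t}
    return eta_az_0           # keep:   eta_az,t = eta_az,0

# Illustrative values: with an assumed threshold, a small drift keeps
# the anchor beam while a large drift triggers re-steering.
threshold = np.deg2rad(10)
assert update_anchor_direction(0.5, 0.52, threshold) == 0.5
assert update_anchor_direction(0.5, 0.9, threshold) == 0.9
```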
Different from the BS-driven strategy, the angle tracking process in the UE-driven method is triggered at the UE side. Here, the direct ratio metric feedback is still applied, but the feedback process is configured by the UE according to the received signal strength corresponding to the anchor beam in the DDC. We explain the design procedures of the UE-driven angle tracking approach with the direct ratio metric feedback as follows:
•
For a given time-slot $t\in\left\{0,\cdots,T-1\right\}$ in the DTC, the auxiliary beam pair with the boresight angle identical to the steering direction of the anchor beam in the DDC is probed by the BS.
•
The ratio metric $\zeta_{\mathrm{az},t}$ corresponding to the probed azimuth auxiliary beam pair is computed by the UE. Further, the received signal strength $\gamma_{t}$ of the anchor beam in the DDC is calculated by the UE. By comparing with the received signal strength $\gamma_{0}$ obtained at time-slot $0$, the received signal strength difference $\Delta\gamma_{t}=\left|\gamma_{t}-\gamma_{0}\right|$ is derived by the UE.
•
The received signal strength difference $\Delta\gamma_{t}$ is compared with a predefined threshold $\varrho_{\mathrm{az}}$ such that if $\Delta\gamma_{t}\geq\varrho_{\mathrm{az}}$, the ratio metric $\zeta_{\mathrm{az},t}$ is quantized by the UE and sent back to the BS to trigger the anchor beam adjustment in the azimuth domain. Otherwise, the above process proceeds to time-slot $t+1$.
•
Upon receiving $\zeta_{\mathrm{az},t}$, the BS estimates the channel’s azimuth transmit spatial frequency $\hat{\psi}_{r^{\star},t}$ for time-slot $t$. The azimuth steering direction of the anchor beam in the DDC is then updated by the BS as $\eta_{\mathrm{az},t}=\hat{\psi}_{r^{\star},t}$.
Note that in the proposed UE-driven angle tracking with the direct ratio metric feedback, no prior knowledge of the auxiliary beam pair setup is required at the UE side; only the received signal strength of the anchor beam is used as the triggering metric.
In the following, we illustrate the proposed angle tracking method with differential feedback from the perspective of the UE-driven design. The corresponding BS-driven differential feedback strategy can be derived similarly with moderate modifications to the tracking procedures.
2. UE-driven angle tracking design with differential ratio metric feedback. To reduce the feedback overhead, we propose a differential ratio metric feedback option in this part. According to the derivation in (23), the ratio metric is distributed within $[-1,1]$. Further, the sign of the ratio metric implies the relative position of the angle to be estimated with respect to the boresight of the corresponding auxiliary beam pair. Consider the conceptual example shown in Fig. 4. For time-slot $t$, $\mathrm{sign}(\zeta_{\mathrm{az},t})=1$ indicates that $\psi_{r^{\star},t}$ falls on the left of the boresight of the corresponding auxiliary beam pair in the azimuth domain such that $\psi_{r^{\star},t}\in(\eta_{\mathrm{az},0}-\delta_{\mathrm{y}},\eta_{\mathrm{az},0})$. Similarly, $\psi_{r^{\star},t}\in(\eta_{\mathrm{az},0},\eta_{\mathrm{az},0}+\delta_{\mathrm{y}})$ results in $\mathrm{sign}(\zeta_{\mathrm{az},t})=-1$, implying that $\psi_{r^{\star},t}$ falls on the right of the boresight of the corresponding azimuth auxiliary beam pair.
At time-slot $t$, the UE derives the ratio metric $\zeta_{\mathrm{az},t}$ and calculates $\Delta\zeta_{\mathrm{az},t}=\left|\zeta_{\mathrm{az},t}-\zeta_{\mathrm{az},0}\right|$ and $\mathrm{sign}(\zeta_{\mathrm{az},t})$ for the azimuth domain. With the knowledge of the auxiliary beam pair setup, i.e., the boresight angle $\eta_{\mathrm{az},0}=\psi_{r^{\star},0}$, the boresight angle difference $\delta_{\mathrm{y}}$, and the corresponding beamwidth, the angle difference $\Delta\psi_{r^{\star},t}$ can be computed by the UE by exploiting the monotonic and symmetric properties of the ratio metric [17]. The angle difference is then compared with a predefined threshold $\varsigma_{\mathrm{az}}$ for the azimuth domain such that if $\Delta\psi_{r^{\star},t}\geq\varsigma_{\mathrm{az}}$, the UE quantizes $\Delta\zeta_{\mathrm{az},t}$ and sends it back to the BS along with $\mathrm{sign}(\zeta_{\mathrm{az},t})$ to trigger the anchor beam adjustment. Note that in contrast to the direct ratio metric quantization, the differential ratio metric quantization reduces the feedback overhead by half, at the cost of one extra bit indicating the sign. Upon receiving the feedback information, the BS can determine $\Delta\psi_{r^{\star},t}$ using $\Delta\zeta_{\mathrm{az},t}$. The azimuth steering direction of the anchor beam in the DDC can therefore be updated as $\eta_{\mathrm{az},t}=\hat{\psi}_{r^{\star},t}=\psi_{r^{\star},0}+\mathrm{sign}(\zeta_{\mathrm{az},t})\Delta\psi_{r^{\star},t}$.
Remark: Similar to the direct and differential ratio metric feedback methods, direct and differential angle feedback strategies can also be supported for the angle tracking designs, as long as the necessary auxiliary beam pair setup information is available at the UE side.
IV Impact of Radiation Pattern Impairments
Because of manufacturing inaccuracies, a variety of impairments such as geometrical and electrical tolerances cause non-uniform amplitude and phase characteristics across the individual antenna elements [27]. This results in phase and amplitude errors in the radiation patterns [28]. In this paper, we first define the following three terms:
•
Ideal radiation pattern: the radiation pattern is not impaired by any impairments such as phase and amplitude errors, mutual coupling, or imperfect matching.
•
Impaired radiation pattern: the radiation pattern is impaired only by phase and amplitude errors, but not by other impairments such as mutual coupling or imperfect matching.
•
Calibrated radiation pattern: the phase and amplitude errors are compensated by certain array calibration methods; after the calibration, residual phase and amplitude errors may still exist depending on many factors, e.g., the calibration SNR or distribution of the impairments.
The angle tracking performance of the proposed auxiliary beam pair-assisted designs is subject to the radiation pattern impairments, which are neglected in the derivation of the ratio metric in (23). If the radiation patterns of the beams in the auxiliary beam pair are impaired by phase and amplitude errors, the monotonic and symmetric properties of the ratio metric may not hold, which, in turn, results in large angle tracking errors. In the following, we first illustrate the impact of the radiation pattern impairments on the proposed angle tracking designs. To calibrate the antenna array with the analog architecture, we custom-design and evaluate two calibration methods. We then examine the impact of the residual calibration errors on the proposed angle tracking approaches.
IV-A Impact of phase and amplitude errors on proposed methods
Neglecting mutual coupling and matching effects, and denoting the phase and amplitude error matrices by $\bm{P}$ and $\bm{A}$, we have $\bm{P}=\mathrm{diag}\left(\left[e^{\mathrm{j}p_{0}},e^{\mathrm{j}p_{1}},\cdots,e^{\mathrm{j}p_{N_{\mathrm{tot}}-1}}\right]^{\mathrm{T}}\right)$ and $\bm{A}=\mathrm{diag}\left(\left[a_{0},a_{1},\cdots,a_{N_{\mathrm{tot}}-1}\right]^{\mathrm{T}}\right)$, where $p_{i}$ and $a_{i}$ correspond to the phase and amplitude errors on the $i$-th antenna element with $i=0,\cdots,N_{\mathrm{tot}}-1$. Due to the UPA structure, we can decompose $\bm{P}$ and $\bm{A}$ as $\bm{P}=\bm{P}_{\mathrm{el}}\otimes\bm{P}_{\mathrm{az}}$ and $\bm{A}=\bm{A}_{\mathrm{el}}\otimes\bm{A}_{\mathrm{az}}$, where $\bm{P}_{\mathrm{el}}=\mathrm{diag}\left(\left[e^{\mathrm{j}p_{\mathrm{el},0}},e^{\mathrm{j}p_{\mathrm{el},1}},\cdots,e^{\mathrm{j}p_{\mathrm{el},N_{\mathrm{x}}-1}}\right]^{\mathrm{T}}\right)$ and $\bm{A}_{\mathrm{el}}=\mathrm{diag}\left(\left[a_{\mathrm{el},0},a_{\mathrm{el},1},\cdots,a_{\mathrm{el},N_{\mathrm{x}}-1}\right]^{\mathrm{T}}\right)$ correspond to the elevation domain, and $\bm{P}_{\mathrm{az}}=\mathrm{diag}\left(\left[e^{\mathrm{j}p_{\mathrm{az},0}},e^{\mathrm{j}p_{\mathrm{az},1}},\cdots,e^{\mathrm{j}p_{\mathrm{az},N_{\mathrm{y}}-1}}\right]^{\mathrm{T}}\right)$ and $\bm{A}_{\mathrm{az}}=\mathrm{diag}\left(\left[a_{\mathrm{az},0},a_{\mathrm{az},1},\cdots,a_{\mathrm{az},N_{\mathrm{y}}-1}\right]^{\mathrm{T}}\right)$ are for the azimuth domain. In this paper, we model $p_{\mathrm{el},i_{\mathrm{el}}}$, $a_{\mathrm{el},i_{\mathrm{el}}}$ with $i_{\mathrm{el}}=0,\cdots,N_{\mathrm{x}}-1$ and $p_{\mathrm{az},i_{\mathrm{az}}}$, $a_{\mathrm{az},i_{\mathrm{az}}}$ with $i_{\mathrm{az}}=0,\cdots,N_{\mathrm{y}}-1$ as Gaussian distributed random variables with zero mean and certain variances.
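The Kronecker factorization of the error matrices can be sketched as follows (array sizes and variances are illustrative, and the errors are drawn zero-mean as in the model above); by the mixed-product property, the combined error matrix factorizes as $\bm{C}=\bm{A}\bm{P}=(\bm{A}_{\mathrm{el}}\bm{P}_{\mathrm{el}})\otimes(\bm{A}_{\mathrm{az}}\bm{P}_{\mathrm{az}})$:

```python
import numpy as np

rng = np.random.default_rng(0)
N_x, N_y = 4, 4            # UPA dimensions (assumed values)
var_p, var_a = 0.5, 0.5    # error variances, as used for Fig. 5(b)

# Zero-mean Gaussian phase/amplitude errors per elevation/azimuth element
p_el = rng.normal(0, np.sqrt(var_p), N_x)
p_az = rng.normal(0, np.sqrt(var_p), N_y)
a_el = rng.normal(0, np.sqrt(var_a), N_x)
a_az = rng.normal(0, np.sqrt(var_a), N_y)

P_el, P_az = np.diag(np.exp(1j * p_el)), np.diag(np.exp(1j * p_az))
A_el, A_az = np.diag(a_el), np.diag(a_az)

# UPA structure: P = P_el kron P_az, A = A_el kron A_az, and the
# mixed-product property gives C = A P = (A_el P_el) kron (A_az P_az)
P, A = np.kron(P_el, P_az), np.kron(A_el, A_az)
C = A @ P
assert np.allclose(C, np.kron(A_el @ P_el, A_az @ P_az))
```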
We employ the example shown in Fig. 3(a) to illustrate the impact of the phase and amplitude errors on the auxiliary beam pair design. Denote $\bm{C}=\bm{A}\bm{P}$ and neglect the radiation pattern impairments at the UE side. Using the transmit analog beam $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})$ and the receive analog beam $\bm{a}_{\mathrm{r}}(\vartheta)$, we compute the corresponding noiseless received signal strength as
$$\displaystyle\chi^{\Delta}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\bm{a}^{*}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})\bm{C}^{*}\bm{a}_{\mathrm{t}}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{C}\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})$$
$$\displaystyle=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\left|\sum_{i_{\mathrm{el}}=0}^{N_{\mathrm{x}}-1}a_{\mathrm{el},i_{\mathrm{el}}}e^{-\mathrm{j}\left[i_{\mathrm{el}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})-p_{\mathrm{el},i_{\mathrm{el}}}\right]}\right|^{2}\left|\sum_{i_{\mathrm{az}}=0}^{N_{\mathrm{y}}-1}a_{\mathrm{az},i_{\mathrm{az}}}e^{-\mathrm{j}\left[i_{\mathrm{az}}(\psi_{r^{\star}}-\eta_{\mathrm{az}}+\delta_{\mathrm{y}})-p_{\mathrm{az},i_{\mathrm{az}}}\right]}\right|^{2}.$$
(27)
Similarly, we can derive the received signal strength with respect to the transmit and receive beam pair $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})$ and $\bm{a}_{\mathrm{r}}(\vartheta)$ as
$$\displaystyle\chi^{\Sigma}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\left|\sum_{i_{\mathrm{el}}=0}^{N_{\mathrm{x}}-1}a_{\mathrm{el},i_{\mathrm{el}}}e^{-\mathrm{j}\left[i_{\mathrm{el}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})-p_{\mathrm{el},i_{\mathrm{el}}}\right]}\right|^{2}\left|\sum_{i_{\mathrm{az}}=0}^{N_{\mathrm{y}}-1}a_{\mathrm{az},i_{\mathrm{az}}}e^{-\mathrm{j}\left[i_{\mathrm{az}}(\psi_{r^{\star}}-\eta_{\mathrm{az}}-\delta_{\mathrm{y}})-p_{\mathrm{az},i_{\mathrm{az}}}\right]}\right|^{2}.$$
(28)
Due to the phase and amplitude errors, the received signal strengths $\chi^{\Delta}_{\mathrm{az}}$ and $\chi^{\Sigma}_{\mathrm{az}}$ in (27) and (28) cannot be expressed in the same forms as (21) and (22). The corresponding ratio metric calculated via $\zeta_{\mathrm{az}}=\frac{\chi^{\Delta}_{\mathrm{az}}-\chi^{\Sigma}_{\mathrm{az}}}{\chi^{\Delta}_{\mathrm{az}}+\chi^{\Sigma}_{\mathrm{az}}}$ is therefore no longer a strictly monotonic function of the angle to be estimated. Directly inverting the ratio metric according to (24) can then incur large angle estimation errors, which, in turn, degrade the angle tracking performance of the proposed methods.
In Figs. 5(a) and (b), we plot the ratio metrics versus the angle to be estimated, assuming the ideal radiation pattern and impaired radiation patterns with $N_{\mathrm{y}}=16$ and $\delta_{\mathrm{y}}=\pi/8$. It is observed from Fig. 5(b) that with phase and amplitude error variances of $0.5$, the ratio metrics obtained via different impairment realizations are neither monotonic functions of the angle to be estimated nor symmetric with respect to the origin. These observations are consistent with our analysis. Practical implementation of the proposed angle tracking designs therefore requires array calibration to compensate for the phase and amplitude errors.
Conventional array calibration methods such as those in [29] cannot be directly applied to the array setup shown in Fig. 1. This is because in the employed array architecture, all antenna elements are driven by a limited number of RF chains, so only $N_{\mathrm{RF}}$-dimensional measurements are accessible to calibrate all $N_{\mathrm{tot}}$ antenna elements. In this paper, we develop and evaluate two off-line array calibration methods for the employed array configuration, assuming simple LOS channels and a single-carrier setup.
IV-B Receive combining based array calibration with single calibration source
In this method, we assume that a single calibration source transmitting the calibration reference signal (RS) is located at broadside of the BS antenna array, such that the calibration RS impinges on the array at $0$ degrees in both the azimuth and elevation domains. At the BS, a set of receive combining vectors is formed in a time-division multiplexing (TDM) manner, probing towards $N_{\mathrm{tot}}$ different angular directions across the azimuth and elevation domains. The external calibration source can be placed close to the BS antenna array, and the channel between them is LOS. We can therefore express the signals received across all $N_{\mathrm{tot}}$ receive probings as
$$\displaystyle y_{0}=\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},0},\eta_{\mathrm{az},0})\bm{C}\bm{a}_{\mathrm{t}}(\theta,\psi)x+\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},0},\eta_{\mathrm{az},0})\bm{n}_{0}$$
$$\displaystyle\vdots$$
$$\displaystyle y_{N_{\mathrm{tot}}-1}=\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},N_{\mathrm{x}}-1},\eta_{\mathrm{az},N_{\mathrm{y}}-1})\bm{C}\bm{a}_{\mathrm{t}}(\theta,\psi)x+\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},N_{\mathrm{x}}-1},\eta_{\mathrm{az},N_{\mathrm{y}}-1})\bm{n}_{N_{\mathrm{tot}}-1},$$
(30)
where $x$ represents the calibration RS, $\theta=\psi=0$, $\eta_{\mathrm{el},i_{\mathrm{el}}}$ and $\eta_{\mathrm{az},i_{\mathrm{az}}}$ ($i_{\mathrm{el}}=0,\cdots,N_{\mathrm{x}}-1$ and $i_{\mathrm{az}}=0,\cdots,N_{\mathrm{y}}-1$) are the receive steering directions in the elevation and azimuth domains, and $\bm{n}_{i}$ ($i=0,\cdots,N_{\mathrm{tot}}-1$) is the corresponding noise vector. In this paper, we assume the calibration RS $x=1$, though any other symbol known a priori can be used. By concatenating all the received signal samples $y_{0},\cdots,y_{N_{\mathrm{tot}}-1}$, we therefore have
$$\bm{y}=\left[\begin{array}{c}y_{0}\\
\vdots\\
y_{N_{\mathrm{tot}}-1}\end{array}\right]=\bm{A}_{\mathrm{t}}\bm{C}\bm{1}_{N_{\mathrm{tot}}\times 1}+\bm{A}_{\mathrm{t}}\bm{n},$$
(31)
with
$$\displaystyle\bm{A}_{\mathrm{t}}=\left[\begin{array}{c}\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},0},\eta_{\mathrm{az},0})\\
\vdots\\
\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},N_{\mathrm{x}}-1},\eta_{\mathrm{az},N_{\mathrm{y}}-1})\end{array}\right]\in\mathbb{C}^{N_{\mathrm{tot}}\times N_{\mathrm{tot}}},\hskip 8.535827pt\bm{n}=\left[\begin{array}{c}\bm{n}_{0}\\
\vdots\\
\bm{n}_{N_{\mathrm{tot}}-1}\end{array}\right]\in\mathbb{C}^{N_{\mathrm{tot}}\times 1}.$$
(32)
According to (31), the phase and amplitude errors matrix can be estimated as
$$\hat{\bm{C}}=\mathrm{diag}\left\{\bm{A}_{\mathrm{t}}^{-1}\bm{y}\right\},$$
(33)
and the calibration matrix is determined as $\bm{K}=\hat{\bm{C}}^{-1}$. Note that with distinct receive steering directions and the DFT-type receive steering vector structure, the square matrix $\bm{A}_{\mathrm{t}}$ is invertible.
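A minimal noiseless simulation of the single-source procedure (with assumed DFT-spaced probing directions and small multiplicative errors, chosen for illustration) confirms that (33) recovers the diagonal of $\bm{C}$ exactly:

```python
import numpy as np

N_x, N_y = 4, 4                 # UPA dimensions (assumed values)
N_tot = N_x * N_y
rng = np.random.default_rng(1)

def steer(n, x):
    # One-dimensional steering vector with entries e^{-j i x}, i = 0..n-1
    return np.exp(-1j * np.arange(n) * x)

# N_tot receive probings at DFT-spaced elevation/azimuth directions,
# stacked as the rows a_t^*(eta_el, eta_az) of A_t in (32)
A_t = np.array([np.conj(np.kron(steer(N_x, 2 * np.pi * i / N_x),
                                steer(N_y, 2 * np.pi * j / N_y)))
                for i in range(N_x) for j in range(N_y)])

# Unknown diagonal of the error matrix C to be estimated
c = (1 + 0.1 * rng.normal(size=N_tot)) * np.exp(1j * 0.1 * rng.normal(size=N_tot))

# Broadside source (theta = psi = 0): a_t(0, 0) is all ones, RS x = 1,
# so the noiseless stacked measurements follow (31): y = A_t C 1
y = A_t @ (c * np.ones(N_tot))

# Eq. (33): invert the probing matrix to recover the diagonal of C
c_hat = np.linalg.solve(A_t, y)
```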
IV-C Receive combining based array calibration with distributed calibration sources
In this method, a total of $N_{\mathrm{RS}}$ distributed calibration sources transmit incoherent calibration RSs to the BS antenna array. Different from the single calibration source case, a total of $N_{\mathrm{RF}}$ receive beams are simultaneously probed by the BS to receive the calibration RSs in a TDM round-robin fashion. Calibrating all the antenna elements therefore requires $N_{\mathrm{RS}}=N_{\mathrm{tot}}/N_{\mathrm{RF}}$ calibration sources. We can then express the received signal model as
$$\bm{Y}=\underline{\bm{A}}_{\mathrm{t}}\bm{C}\bm{B}_{\mathrm{t}}\bm{I}_{N_{\mathrm{RS}}}x+\underline{\bm{A}}_{\mathrm{t}}\bm{N},$$
(34)
where $\bm{Y}\in\mathbb{C}^{N_{\mathrm{RF}}\times N_{\mathrm{RS}}}$,
$$\displaystyle\underline{\bm{A}}_{\mathrm{t}}=\left[\begin{array}{c}\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},0},\eta_{\mathrm{az},0})\\
\vdots\\
\bm{a}_{\mathrm{t}}^{*}(\eta_{\mathrm{el},N_{\mathrm{RF}}-1},\eta_{\mathrm{az},N_{\mathrm{RF}}-1})\end{array}\right],\hskip 8.535827pt\bm{B}_{\mathrm{t}}=\left[\bm{a}_{\mathrm{t}}(\theta_{0},\psi_{0}),\hskip 5.690551pt\cdots,\hskip 5.690551pt\bm{a}_{\mathrm{t}}(\theta_{N_{\mathrm{RS}}-1},\psi_{N_{\mathrm{RS}}-1})\right],$$
(35)
and $\bm{N}$ represents the $N_{\mathrm{tot}}\times N_{\mathrm{RS}}$ noise matrix. Note that because the calibration is conducted off-line, the receive steering matrix $\underline{\bm{A}}_{\mathrm{t}}$ and the array response matrix $\bm{B}_{\mathrm{t}}$ are known a priori and can be used to determine $\bm{V}_{\mathrm{t}}=\bm{B}^{\mathrm{T}}_{\mathrm{t}}\otimes\underline{\bm{A}}_{\mathrm{t}}$. Assuming $x=1$, the phase and amplitude error matrix can then be estimated as
$$\hat{\bm{C}}=\mathrm{diag}\left(\left(\bm{V}^{*}_{\mathrm{t}}\bm{V}_{\mathrm{t}}\right)^{-1}\bm{V}^{*}_{\mathrm{t}}\mathrm{vec}(\bm{Y})\right).$$
(36)
We can therefore calculate the calibration matrix $\bm{K}$ as $\bm{K}=\hat{\bm{C}}^{-1}$.
In Figs. 6(a) and (b), we evaluate the impact of the residual calibration errors on the azimuth radiation patterns for the proposed calibration methods. We set the calibration SNR to $0$ dB. As can be seen from Figs. 6(a) and (b), the calibrated radiation patterns closely match the ideal radiation patterns in the azimuth domain, such that the main lobe and side lobes can be clearly differentiated. Note that increasing the calibration SNR further improves the calibration performance.
After the array calibration, the residual amplitude and phase errors are small and approximately identical across all antenna elements; denote them by $\bar{a}$ and $\bar{p}$, respectively. Using the transmit analog beam $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})$ and the receive analog beam $\bm{a}_{\mathrm{r}}(\vartheta)$, we can obtain the noiseless received signal strength as
$$\displaystyle\chi^{\Delta}_{\mathrm{az}}=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\left|\bm{a}_{\mathrm{t}}^{*}(\theta_{r^{\star}},\psi_{r^{\star}})\bm{C}\bm{K}\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}-\delta_{\mathrm{y}})\right|^{2}$$
(38)
$$\displaystyle\approx\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\left|\bar{a}e^{\mathrm{j}\bar{p}}\right|^{2}\left|\sum_{i_{\mathrm{el}}=0}^{N_{\mathrm{x}}-1}e^{-\mathrm{j}i_{\mathrm{el}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})}\right|^{2}\left|\sum_{i_{\mathrm{az}}=0}^{N_{\mathrm{y}}-1}e^{-\mathrm{j}i_{\mathrm{az}}(\psi_{r^{\star}}-\eta_{\mathrm{az}}+\delta_{\mathrm{y}})}\right|^{2}$$
$$\displaystyle=\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\left|\bar{a}e^{\mathrm{j}\bar{p}}\right|^{2}\frac{\sin^{2}\left(\frac{N_{\mathrm{x}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})}{2}\right)}{\sin^{2}\left(\frac{\theta_{r^{\star}}-\eta_{\mathrm{el}}}{2}\right)}\frac{\sin^{2}\left(\frac{N_{\mathrm{y}}(\psi_{r^{\star}}-\eta_{\mathrm{az}})}{2}\right)}{\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}+\delta_{\mathrm{y}}}{2}\right)}.$$
(39)
Similarly, after the array calibration, we can compute the received signal strength for the transmit/receive beam pair $\bm{a}_{\mathrm{t}}(\eta_{\mathrm{el}},\eta_{\mathrm{az}}+\delta_{\mathrm{y}})$ and $\bm{a}_{\mathrm{r}}(\vartheta)$ as
$$\displaystyle\chi^{\Sigma}_{\mathrm{az}}\approx\left|g_{r^{\star}}\bar{\rho}_{\tau_{r^{\star}}}\right|^{2}\left|\bm{a}^{*}_{\mathrm{r}}(\vartheta)\bm{a}_{\mathrm{r}}(\nu_{r^{\star}})\right|^{2}\left|\bar{a}e^{\mathrm{j}\bar{p}}\right|^{2}\frac{\sin^{2}\left(\frac{N_{\mathrm{x}}(\theta_{r^{\star}}-\eta_{\mathrm{el}})}{2}\right)}{\sin^{2}\left(\frac{\theta_{r^{\star}}-\eta_{\mathrm{el}}}{2}\right)}\frac{\sin^{2}\left(\frac{N_{\mathrm{y}}(\psi_{r^{\star}}-\eta_{\mathrm{az}}-\delta_{\mathrm{y}})}{2}\right)}{\sin^{2}\left(\frac{\psi_{r^{\star}}-\eta_{\mathrm{az}}-\delta_{\mathrm{y}}}{2}\right)}.$$
(40)
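The closed forms in (39) and (40) rest on the geometric-sum (Dirichlet-kernel) identity $\left|\sum_{i=0}^{N-1}e^{-\mathrm{j}ix}\right|^{2}=\sin^{2}(Nx/2)/\sin^{2}(x/2)$. A quick numerical check of this identity (the array size and phase offset below are arbitrary example values, not settings from the paper):

```python
import numpy as np

def beam_gain_sum(N, x):
    # |sum_{i=0}^{N-1} e^{-j*i*x}|^2 evaluated directly from the sum
    i = np.arange(N)
    return np.abs(np.sum(np.exp(-1j * i * x))) ** 2

def beam_gain_closed(N, x):
    # Closed-form Dirichlet-kernel expression sin^2(Nx/2)/sin^2(x/2),
    # valid for x not a multiple of 2*pi
    return np.sin(N * x / 2) ** 2 / np.sin(x / 2) ** 2

N = 8        # antennas along one axis (example value)
x = 0.37     # phase offset in radians (example value)
assert np.isclose(beam_gain_sum(N, x), beam_gain_closed(N, x))
```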
The corresponding ratio metric, $\zeta_{\mathrm{az}}=\frac{\chi^{\Delta}_{\mathrm{az}}-\chi^{\Sigma}_{\mathrm{az}}}{\chi^{\Delta}_{\mathrm{az}}+\chi^{\Sigma}_{\mathrm{az}}}$, therefore exhibits the same form as (23), implying that the channel directional information can be retrieved by inverting the ratio metric. In Figs. 6(c) and (d), we plot the ratio metrics obtained after the array calibration with respect to the angle to be estimated in the azimuth domain. We evaluate both the single calibration source and distributed calibration sources based methods. Comparing Figs. 6(c) and (d) with Fig. 5(a) shows that the monotonic and symmetric properties of the ratio metric hold for most angle values.
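Since the common factors (path gain, receive-side gain, residual calibration errors) cancel in the ratio, $\zeta_{\mathrm{az}}$ depends only on the azimuth offset $\psi_{r^{\star}}-\eta_{\mathrm{az}}$. A minimal numerical sketch of this cancellation and of the monotonicity near boresight (the array size and beam offset are hypothetical example values):

```python
import numpy as np

def chi(N_y, u, delta, sign):
    # Azimuth Dirichlet-kernel gain for the beam steered at eta_az -/+ delta_y;
    # the common factors (path gain, receive gain, residual errors) are omitted
    # because they cancel in the ratio metric. Here u = psi - eta_az.
    x = u + sign * delta
    return np.sin(N_y * x / 2) ** 2 / np.sin(x / 2) ** 2

def ratio_metric(N_y, u, delta):
    c_delta = chi(N_y, u, delta, +1)   # "Delta" beam, steered at eta_az - delta_y
    c_sigma = chi(N_y, u, delta, -1)   # "Sigma" beam, steered at eta_az + delta_y
    return (c_delta - c_sigma) / (c_delta + c_sigma)

N_y, delta = 8, np.pi / 8              # example array size and beam offset
u = np.linspace(-0.3, 0.3, 101)
z = ratio_metric(N_y, u, delta)
# Within the main-lobe region the metric is strictly monotonic in u
assert np.all(np.diff(z) < 0)
```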
V Numerical Results
In this section, we evaluate the proposed BS-driven angle tracking design with the direct ratio metric feedback and the periodic DTC. Note that the different angle tracking strategies developed in Section III exhibit similar tracking performances, though they differ in the tracking triggering metric, the feedback information, and the information available at the UE side. We evaluate the proposed angle tracking method assuming an ideal radiation pattern, an impaired radiation pattern with phase and amplitude errors, and a calibrated radiation pattern. For simplicity, we obtain the calibrated radiation pattern via the proposed single calibration source based strategy. We set the angle difference threshold for triggering the beam adjustment to $10^{\circ}$. Since the ratio metric is non-uniformly distributed over the interval $[-1,1]$ [16], we employ Lloyd’s algorithm [30] to optimize the codebook for quantizing the ratio metric.
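Lloyd's algorithm alternates nearest-codeword partitioning with centroid updates. The sketch below trains a scalar codebook on synthetic ratio-metric samples; the clipped-Gaussian training distribution is only a stand-in for the actual ratio-metric statistics, which the paper does not specify in closed form:

```python
import numpy as np

def lloyd_codebook(samples, n_levels, iters=50):
    # Classic Lloyd iteration: partition samples by nearest codeword,
    # then move each codeword to the centroid of its cell.
    codebook = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))
    for _ in range(iters):
        cells = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(cells == k):
                codebook[k] = samples[cells == k].mean()
    return np.sort(codebook)

# Ratio-metric samples live in [-1, 1] and are non-uniformly distributed;
# a clipped Gaussian mimics that here (synthetic data).
rng = np.random.default_rng(0)
samples = np.clip(rng.normal(0.0, 0.5, 10000), -1.0, 1.0)
cb = lloyd_codebook(samples, n_levels=8)
assert cb.shape == (8,) and np.all((cb >= -1) & (cb <= 1))
```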
V-A Narrowband single-path channels with single-carrier
In this part, we provide numerical results for narrowband single-path channels with single-carrier modulation. We consider a single UE and the two angular motion models shown in Figs. 7(a) and (b), which specify the moving trajectory of the UE. In the first model (angular motion model I), a ULA is employed at the BS, while in the second model (angular motion model II), a UPA is employed at the BS so that the tracking beams can be probed in both the elevation and azimuth domains. In both cases, a ULA is assumed at the UE side. Note that angular motion models I and II are designed to characterize the angle variations in terms of the moving trajectory; in Section V-B, we employ statistical temporal evolution tools to model practical channel variations. Other simulation assumptions and parameters are listed in Table I. We drop the path index here due to the single-path assumption.
Note that the angle variations $\Delta\psi$ and $\Delta\theta$ are obtained from the UE’s azimuth and elevation velocities $v_{\mathrm{az}}$ and $v_{\mathrm{el}}$, the BS-UE distance $d$, and the symbol duration. We further randomize the angle variations by incorporating a Gaussian random variable $w$ with zero mean and unit variance, as in Table I. In the simulations, we set $T=1$; that is, each DTC comprises one time-slot (symbol), during which one auxiliary beam pair is formed. The two beams of the auxiliary beam pair are probed simultaneously and differentiated by the UE via the beam-specific pilot signal design. We then define the tracking overhead as $\rho=1/T_{\mathrm{d}}$. For instance, $T_{\mathrm{d}}=1000$ incurs less tracking overhead than $T_{\mathrm{d}}=10$, as the corresponding overheads are $\rho=0.1\%$ and $\rho=10\%$, respectively. We assume angular motion model I for Figs. 8 and 9, and angular motion model II for Fig. 10.
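As a concrete illustration of this bookkeeping (all numeric values below are hypothetical, not the paper's Table I settings), the per-slot angle change follows from the tangential velocity, the BS-UE distance, and the slot duration, and the overhead from the tracking period:

```python
import numpy as np

def per_slot_azimuth_change_deg(v_az, d, T_sym):
    # Small-angle approximation: the azimuth angle changes by roughly the
    # tangential displacement divided by the BS-UE distance in one slot.
    return np.degrees(v_az * T_sym / d)

def tracking_overhead_pct(T_d):
    # One auxiliary-beam-pair slot per tracking period of T_d slots (T = 1).
    return 100.0 / T_d

# Hypothetical values for illustration only.
dpsi = per_slot_azimuth_change_deg(v_az=20.0, d=100.0, T_sym=3.7e-6)
# Randomization with a zero-mean, unit-variance Gaussian w, as in the text:
rng = np.random.default_rng(1)
dpsi_random = dpsi * (1.0 + rng.standard_normal())

print(f"overheads: {tracking_overhead_pct(10):g}% vs {tracking_overhead_pct(1000):g}%")
```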
In Fig. 8, we provide snapshots of the angle tracking results over time for $\rho=10\%$ and $0.05\%$ in the proposed design. For comparison, we also provide the actual angle variations and the case without angle tracking, assuming an ideal radiation pattern. As can be seen from Fig. 8(a), the proposed auxiliary beam pair-assisted angle tracking design accurately tracks the angle variations under a relatively high tracking overhead of $10\%$. Reducing the tracking overhead to $0.05\%$ coarsens the tracking resolution, which in turn degrades the angle tracking performance, as shown in Fig. 8(b). Under both tracking overhead assumptions, however, the trend of the angle variations is well captured by the proposed angle tracking design.
In Figs. 9(a) and (b), we evaluate the proposed angle tracking design assuming both impaired and calibrated radiation patterns with $1\%$ and $0.1\%$ tracking overheads. We set the variances of the phase and amplitude errors to $0.5$. Due to the random phase and amplitude errors, the angle tracking performance of the proposed approach with the impaired radiation pattern deteriorates. Even with a relatively high tracking overhead (e.g., $1\%$ in Fig. 9(a)), the tracked angles differ markedly from the actual ones for all the channel realizations. By compensating for the phase and amplitude errors via the proposed calibration method, the angle tracking performance is significantly improved. For $1\%$ tracking overhead, the angle tracking performance of the proposed method with the calibrated radiation pattern closely matches the actual angle variations for all the channel realizations.
We now evaluate the two-dimensional angle tracking performance of the proposed approach using the calibrated radiation pattern. A total of $N_{\mathrm{tot}}=32$ antenna elements are equipped at the BS side with the UPA placed in the $\mathrm{xy}$-plane; we set $N_{\mathrm{x}}=4$ and $N_{\mathrm{y}}=8$. In Fig. 10(a), we plot the cumulative distribution functions (CDFs) of the beamforming gains obtained from the anchor beam in the DDC. With the calibrated radiation pattern, the proposed method shows close performance relative to the perfect case under various tracking overheads. In Fig. 10(b), the spectral efficiency performance is evaluated using the anchor beam in the DDC. Specifically, denoting $h_{\mathrm{eff}}=g\bm{a}_{\mathrm{r}}^{*}(\nu)\bm{a}_{\mathrm{r}}(\nu)\bm{a}_{\mathrm{t}}^{*}(\theta,\psi)\bm{a}_{\mathrm{t}}(\hat{\theta},\hat{\psi})$ for single-path channels, we compute the spectral efficiency metric as $C=\mathbb{E}\left[\log_{2}\left(1+\gamma h_{\mathrm{eff}}^{*}h_{\mathrm{eff}}\right)\right]$. Similar to Fig. 10(a), the spectral efficiency obtained by the proposed method with different tracking overheads is close to the perfect case. In Figs. 10(a) and (b), we also evaluate the grid-of-beams based beam tracking design under various tracking overheads. For a fair comparison, we employ the same number of tracking beams as in the auxiliary beam pair based angle tracking design. As can be seen from Figs. 10(a) and (b), the proposed algorithm achieves superior beamforming gain and spectral efficiency relative to the grid-of-beams based beam tracking strategy.
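A minimal sketch of this spectral-efficiency evaluation for a single-path channel. It uses normalized 1-D (ULA) steering vectors and hypothetical angles and SNR; the paper's transmit array is a UPA, and a 1-D array is used here only to keep the example short:

```python
import numpy as np

def ula_response(N, angle):
    # Normalized steering vector with spatial frequency `angle` (radians),
    # following the e^{-j*i*angle} convention of the earlier sum expressions.
    return np.exp(-1j * np.arange(N) * angle) / np.sqrt(N)

rng = np.random.default_rng(2)
N_t, N_r = 16, 4
g = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)  # path gain
psi, psi_hat = 0.4, 0.45   # true vs. tracked transmit spatial frequency
nu = -0.2                  # receive spatial frequency (assumed perfectly known)

# Effective channel with the tracked transmit beam and an aligned receive beam
h_eff = (g
         * (ula_response(N_r, nu).conj() @ ula_response(N_r, nu))
         * (ula_response(N_t, psi).conj() @ ula_response(N_t, psi_hat)))

gamma = 10.0               # SNR (example value)
C = np.log2(1 + gamma * np.abs(h_eff) ** 2)
```

With the normalized steering vectors, a perfectly tracked beam ($\hat{\psi}=\psi$) attains the upper bound $\log_2(1+\gamma|g|^2)$, and any tracking error reduces $C$ below it.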
V-B Wideband multi-path channels with OFDM
The temporal evolution of mmWave channels is not well characterized in current wideband mmWave channel models [32]. In this part of the simulation, we therefore first implement temporally correlated mmWave channels by combining (i) the NYUSIM open-source platform developed in [33] and (ii) the statistical temporal evolution model used in [14, 34].
For the NYUSIM open-source platform, we consider the urban micro-cellular (UMi) scenario with NLOS components at the $28$ GHz carrier frequency. We evaluate a $125$ MHz RF bandwidth with $N=512$ subcarriers; the corresponding CP length is $D=64$. The employed ZC-type sequences occupy the central $63$ subcarriers with the root indices $i_{0}=25$ and $i_{1}=34$. We set the subcarrier spacing and symbol duration to $270$ kHz and $3.7$ $\mu$s, following the numerology provided in [1]. Detailed channel modeling parameters are given in [31, Table III]. Further, our design focus here is to track the strongest path’s AoD using the proposed approach.
Before proceeding with the temporal channel evolution model, we first rewrite the time-domain channel matrix in (2) in a more compact form. For time-slot $t$, denoting $\bm{\varphi}_{t}=\left[\varphi_{1,t},\varphi_{2,t},\cdots,\varphi_{N_{\mathrm{r}},t}\right]^{\mathrm{T}}$, $\bm{\mu}_{t}=\left[\mu_{1,t},\mu_{2,t},\cdots,\mu_{N_{\mathrm{r}},t}\right]^{\mathrm{T}}$, and $\bm{\phi}_{t}=\left[\phi_{1,t},\phi_{2,t},\cdots,\phi_{N_{\mathrm{r}},t}\right]^{\mathrm{T}}$, we have
$$\bm{H}_{t}[d]=\bm{A}_{\mathrm{R}}(\bm{\varphi}_{t})\bm{G}_{t}[d]\bm{A}^{*}_{\mathrm{T}}(\bm{\mu}_{t},\bm{\phi}_{t}),$$
(41)
where $\bm{A}_{\mathrm{R}}(\bm{\varphi}_{t})$ and $\bm{A}_{\mathrm{T}}(\bm{\mu}_{t},\bm{\phi}_{t})$ represent the array response matrices for the receiver and transmitter such that
$$\displaystyle\bm{A}_{\mathrm{R}}(\bm{\varphi}_{t})=\left[\bm{a}_{\mathrm{r}}(\varphi_{1,t})\;\;\bm{a}_{\mathrm{r}}(\varphi_{2,t})\;\;\cdots\;\;\bm{a}_{\mathrm{r}}(\varphi_{N_{\mathrm{r}},t})\right]$$
(42)
$$\displaystyle\bm{A}_{\mathrm{T}}(\bm{\mu}_{t},\bm{\phi}_{t})=\left[\bm{a}_{\mathrm{t}}(\mu_{1,t},\phi_{1,t})\;\;\bm{a}_{\mathrm{t}}(\mu_{2,t},\phi_{2,t})\;\;\cdots\;\;\bm{a}_{\mathrm{t}}(\mu_{N_{\mathrm{r}},t},\phi_{N_{\mathrm{r}},t})\right],$$
(43)
and $\bm{G}_{t}[d]=\mathrm{diag}\left(\left[g_{1}p\left(dT_{\mathrm{s}}-\tau_{1}\right),\cdots,g_{N_{\mathrm{r}}}p\left(dT_{\mathrm{s}}-\tau_{N_{\mathrm{r}}}\right)\right]^{\mathrm{T}}\right)$. We model the temporal evolution of the path gains as a first-order Gauss-Markov process [14]
$$\bm{G}_{t+1}[d]=\rho_{\mathrm{D}}\bm{G}_{t}[d]+\sqrt{1-\rho_{\mathrm{D}}^{2}}\bm{B}_{t+1},$$
(44)
where $\rho_{\mathrm{D}}=J_{0}\left(2\pi f_{\mathrm{D}}T_{\mathrm{s}}\right)$ and $\bm{B}_{t+1}$ is a diagonal matrix whose diagonal entries are distributed according to $\mathcal{N}_{c}(0,1)$. Here, $J_{0}(\cdot)$ denotes the zeroth-order Bessel function of the first kind and $f_{\mathrm{D}}$ is the maximum Doppler frequency. The elevation and azimuth AoDs vary according to [34]
$$\bm{\mu}_{t+1}=\bm{\mu}_{t}+\Delta\bm{\mu}_{t+1},\qquad\bm{\phi}_{t+1}=\bm{\phi}_{t}+\Delta\bm{\phi}_{t+1},$$
(45)
where $\Delta\bm{\mu}_{t+1}$ and $\Delta\bm{\phi}_{t+1}$ are distributed according to $\mathcal{N}_{c}(\bm{0}_{N_{\mathrm{r}}},\sigma^{2}_{\mu}\bm{I}_{N_{\mathrm{r}}})$ and $\mathcal{N}_{c}(\bm{0}_{N_{\mathrm{r}}},\sigma^{2}_{\phi}\bm{I}_{N_{\mathrm{r}}})$. We first determine the initial path gains, path delays, azimuth/elevation AoDs, and AoAs through one simulation run of the NYUSIM open-source platform. We then obtain the channels for the subsequent time-slots by combining these initial channel results with the temporal evolution model in (44) and (45).
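The evolution model in (44) and (45) can be simulated directly. The sketch below uses synthetic initial conditions in place of a NYUSIM run, and hypothetical Doppler and angle-increment parameters:

```python
import numpy as np
from scipy.special import j0   # zeroth-order Bessel function of the first kind

rng = np.random.default_rng(3)
N_paths = 4
f_D, T_s = 1.3e3, 3.7e-6                  # Doppler (Hz) and symbol duration (s)
rho_D = j0(2 * np.pi * f_D * T_s)          # temporal correlation coefficient
sigma_mu = sigma_phi = np.pi / 180         # angle-increment std. dev. (1 degree)

# Initial per-path complex gains and AoDs (synthetic; NYUSIM would supply these)
g = (rng.standard_normal(N_paths) + 1j * rng.standard_normal(N_paths)) / np.sqrt(2)
mu = rng.uniform(-np.pi / 2, np.pi / 2, N_paths)      # elevation AoDs
phi = rng.uniform(-np.pi / 2, np.pi / 2, N_paths)     # azimuth AoDs

for t in range(100):
    # Eq. (44): first-order Gauss-Markov evolution of the path gains
    w = (rng.standard_normal(N_paths) + 1j * rng.standard_normal(N_paths)) / np.sqrt(2)
    g = rho_D * g + np.sqrt(1 - rho_D ** 2) * w
    # Eq. (45): Gaussian random-walk evolution of the AoDs
    mu = mu + sigma_mu * rng.standard_normal(N_paths)
    phi = phi + sigma_phi * rng.standard_normal(N_paths)

assert 0 < rho_D < 1
```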
In Fig. 11, we plot the beamforming gains against the employed OFDM symbols for $\rho=1\%$ and $0.1\%$ tracking overheads. We set $f_{\mathrm{D}}=1.3$ kHz and $\sigma^{2}_{\mu}=\sigma^{2}_{\phi}=(\pi/180)^{2}$, which characterize relatively fast movement and angle variation [14, 34]. In addition to the actual angle variations, we evaluate the proposed angle tracking and grid-of-beams based beam tracking designs with calibrated radiation patterns. Consistent with the results in Section V-A, the proposed algorithm tracks nearly as well as the perfect case and outperforms the existing beam tracking approach across the considered system setups.
VI Conclusions
In this paper, we developed and evaluated several new angle tracking approaches for mobile wideband mmWave systems with antenna array calibration. The proposed methods differ in the tracking triggering metric, the feedback information, and the auxiliary beam pair setup required at the UE; these differences allow them to be adopted in different deployment scenarios. We presented the detailed design procedures of the proposed methods and showed that they achieve high-resolution angle tracking. The proposed methods neither depend on a particular angle variation model nor require an on-grid assumption. Since they are sensitive to radiation pattern impairments, we showed by numerical examples that, with appropriate array calibration, the angle variations can still be successfully tracked under various angle variation models.
References
[1]
Z. Pi and F. Khan,
“An introduction to millimeter-wave mobile broadband systems,”
IEEE Commun. Mag., vol. 49, no. 6, pp. 101–107, Jun. 2011.
[2]
R. W. Heath Jr., N. Gonzalez-Prelcic, S. Rangan, W. Roh, and
A. Sayeed,
“An overview of signal processing techniques for millimeter wave
MIMO systems,”
IEEE J. Sel. Top. Signal Process., vol. 10, no. 3, pp.
436–453, Feb. 2016.
[3]
Z. Pi, J. Choi, and R. W. Heath Jr.,
“Millimeter-wave Gbps broadband evolution towards
5G: fixed access and backhaul,”
IEEE Commun. Mag., vol. 54, no. 4, pp. 138–144, Apr. 2016.
[4]
F. Boccardi, R. W. Heath Jr., A. Lozano, T. L. Marzetta, and
P. Popovski,
“Five disruptive technology directions for 5G,”
IEEE Commun. Mag., vol. 52, no. 2, pp. 74–80, Feb. 2014.
[5]
T. S. Rappaport, R. W. Heath Jr., R. C. Daniels, and J. N.
Murdock,
Millimeter wave wireless communications,
Prentice Hall, 2014.
[6]
“Wireless LAN Medium Access
Control (MAC) and Physical Layer
(PHY) Specifications - Amendment 4:
Enhancements for Very High Throughput in
the 60 GHz Band,” IEEE
P802.11ad/D9.0.
[7]
J. Wang, Z. Lan, C. Pyo, T. Baykas, C. Sum, M. Rahman, J. Gao, R. Funada,
F. Kojima, H. Harada, and S. Kato,
“Beam codebook based beamforming protocol for multi-Gbps
millimeter-wave WPAN systems,”
IEEE J. Sel. Areas Commun., vol. 27, no. 8, pp. 1390–1399,
Oct. 2009.
[8]
“Overview of NR initial access,” 3GPP TSG
RAN WG1 meeting #87, R1-1611272, Nov. 2016.
[9]
J. Palacios, D. D. Donno, and J. Widmer,
“Tracking mm-Wave channel dynamics: fast beam training
strategies under mobility,”
in Proc. of IEEE Conf. on Computer Commun. (INFOCOM 2017), Oct.
2017.
[10]
J. Bae, S.-H. Lim, J.-H. Yoo, and J.-W. Choi,
“New beam tracking technique for millimeter wave-band
communications,”
arXiv preprint arXiv:1702.00276, Feb. 2017.
[11]
L. Dai and X. Gao,
“Priori-aided channel tracking for millimeter-wave beamspace massive
MIMO systems,”
in Proc. of IEEE URSI Asia-Pacific Radio Science Conference
(URSI AP-RASC), Aug. 2016.
[12]
X. Gao, L. Dai, T. Xie, X. Dai, and Z. Wang,
“Fast channel tracking for Terahertz beamspace massive
MIMO systems,”
IEEE Trans. Veh. Technol., vol. 66, no. 7, pp. 5689–5696, Oct.
2016.
[13]
Y. Zhou, P. C. Yip, and H. Leung,
“Tracking the direction-of-arrival of multiple moving targets by
passive arrays: asymptotic performance analysis,”
IEEE Trans. Signal Process., vol. 47, no. 10, pp. 2644–2654,
Oct. 1999.
[14]
C. Zhang, D. Guo, and P. Fan,
“Mobile millimeter wave channel acquisition, tracking, and abrupt
change detection,”
arXiv preprint arXiv:1610.09626, Oct. 2016.
[15]
V. Va, H. Vikalo, and R. W. Heath Jr.,
“Beam tracking for mobile millimeter wave communication systems,”
in Proc. of IEEE Global Conf. on Signal and Information
Process., Dec. 2016.
[16]
D. Zhu, J. Choi, and R. W. Heath Jr.,
“Auxiliary beam pair enabled AoD and
AoA estimation in closed-loop large-scale mmWave
MIMO system,”
IEEE Trans. Wireless Commun., vol. 16, no. 7, pp. 4770–4785,
Jul. 2017.
[17]
D. Zhu, J. Choi, and R. W. Heath Jr.,
“Two-dimensional AoD and AoA
acquisition for wideband millimeter-wave systems with dual-polarized
MIMO,”
IEEE Trans. Wireless Commun., vol. 16, no. 12, pp. 7890–7905,
Dec. 2017.
[18]
“Technical Report Group RAN:
Spatial Channel Model For
Multiple Input Multiple Output
(MIMO) Simulations, v13.0.0,” 3GPP, Jan. 2016.
[Online]. Available: http://www.3gpp.org/DynaReport/25996.htm.
[19]
A. Alkhateeb and R. W. Heath Jr.,
“Frequency selective hybrid precoding for limited feedback
millimeter wave systems,”
IEEE Trans. Commun., vol. 64, no. 5, pp. 1801–1818, May 2016.
[20]
K. Venugopal, A. Alkhateeb, N. Gonzalez-Prelcic, and R. W.
Heath Jr.,
“Channel estimation for hybrid architecture based wideband
millimeter wave systems,”
IEEE J. Sel. Areas Commun., to appear, arXiv preprint
arXiv:1611.03046, 2017.
[21]
S. Venkatesan and R. A. Valenzuela,
“OFDM for 5G: cyclic prefix versus zero postfix,
and filtering versus windowing,”
in Proc. of IEEE Intern. Conf. Commun., Jul. 2016.
[22]
J. Singh and S. Ramakrishna,
“On the feasibility of codebook-based beamforming in millimeter wave
systems with multiple antenna arrays,”
IEEE Trans. Wireless Commun., vol. 14, no. 5, pp. 2670–2683,
May 2015.
[23]
M. Xiao, S. Mumtaz, Y. Huang, L. Dai, Y. Li, M. Matthaiou, G. Karagiannidis,
E. Bjornson, K. Yang, C.-L. I, and A. Ghosh,
“Millimeter wave communications for future mobile networks,”
IEEE J. Sel. Areas Commun., vol. 35, no. 9, pp. 1909–1935,
Jun. 2017.
[24]
B. M. Popovic,
“Generalized chirp-like polyphase sequences with optimum correlation
properties,”
IEEE Trans. Inf. Theory, vol. 38, no. 4, pp. 1406–1409, Jul.
1992.
[25]
“Technical Specification Group RAN:
Evolved Universal Terrestrial Radio
Access (E-UTRA); Physical Channels and
Modulation,” 3GPP, Dec. 2011. [Online]. Available:
http://www.3gpp.org/ftp/Specs/html-info/36211.htm.
[26]
K. Manolakis, D. M. Estevez, V. Jungnickel, W. Xu, and C. Drewes,
“A closed concept for synchronization and cell search in
3GPP LTE systems,”
in Proc. of IEEE Wireless Commun. Netw. Conf., Apr. 2009, pp.
1–6.
[27]
G. Sommerkorn, D. Hampicke, R. Klukas, A. Richter, A. Schneider, and R. Thoma,
“Reduction of DoA estimation errors caused by
antenna array imperfections,”
in Proc. of IEEE European Microwave Conf., Oct. 1999, pp.
287–290.
[28]
B. Ng and C. See,
“Sensor-array calibration using a maximum-likelihood approach,”
IEEE Trans. Antennas Propag., vol. 44, no. 6, pp. 827–835,
Jun. 1996.
[29]
K. Sakaguchi, K. Kuroda, J.-I. Takada, and K. Araki,
“Comprehensive calibration for MIMO system,”
in Proc. of IEEE WPMC 2002, Oct. 2002.
[30]
S. Lloyd,
“Least squares quantization in PCM,”
IEEE Trans. Inf. Theory, vol. 28, no. 2, pp. 129–137, Mar.
1982.
[31]
M. K. Samimi and T. S. Rappaport,
“3-D millimeter-wave statistical channel model for
5G wireless system design,”
IEEE Trans. Microw. Theory Techn., vol. 64, no. 7, pp. 2207–2225, Jul. 2016.
[32]
T. S. Rappaport, S. Sun, and M. Shafi,
“Investigation and comparison of 3GPP and NYUSIM
channel models for 5G wireless communications,”
arXiv preprint arXiv:1707.00291, Jul. 2017.
[33]
“NYUSIM channel simulator,”
http://wireless.engineering.nyu.edu/5g-millimeter-wave-channel-modeling-software/.
[34]
J. He, T. Kim, H. Ghauch, K. Liu, and G. Wang,
“Millimeter wave MIMO channel tracking systems,”
in Proc. of IEEE Global Telecomm. Conf., Dec. 2014, pp. 1–5. |
First Stars VIII – Enrichment of the neutron-capture elements in the early Galaxy
††thanks: Based on observations made with the ESO Very Large Telescope at Paranal Observatory, Chile (program ID 165.N-0276(A); P.I.: R. Cayrel).
P. François${}^{1,8}$, E. Depagne${}^{1,10}$, V. Hill${}^{1}$, M. Spite${}^{1}$, F. Spite${}^{1}$, B. Plez${}^{2}$, T. C. Beers${}^{3}$, J. Andersen${}^{5,9}$, G. James${}^{1,8}$, B. Barbuy${}^{4}$, R. Cayrel${}^{1}$, P. Bonifacio${}^{1,6}$, P. Molaro${}^{6}$, B. Nordström${}^{5}$, and F. Primas${}^{7}$
1 GEPI, Observatoire de Paris-Meudon, CNRS, Univ. de Paris Diderot, Place Jules Janssen, F-92190 Meudon, France
2 GRAAL, Université de Montpellier II, F-34095 Montpellier Cedex 05, France
3 Dept. of Physics & Astronomy, CSCE: Center for the Study of Cosmic Evolution, and JINA: Joint Institute for Nuclear Astrophysics, Michigan State University, E. Lansing, MI 48824, USA
4 IAG, Universidade de São Paulo, Departamento de Astronomia, CP 3386, 01060-970 São Paulo, Brazil
5 The Niels Bohr Institute, Astronomy Group, Juliane Maries Vej 30, DK-2100 Copenhagen, Denmark
6 Istituto Nazionale di Astrofisica - Osservatorio Astronomico di Trieste, Via G.B. Tiepolo 11, I-34131 Trieste, Italy
7 European Southern Observatory (ESO), Karl-Schwarzschild-Str. 2, D-85749 Garching b. München, Germany
8 European Southern Observatory (ESO), Alonso de Cordova 3107, Vitacura, Casilla 19001, Santiago 19, Chile
9 Nordic Optical Telescope, Apartado 474, ES-38700 Santa Cruz de La Palma, Spain
10 Las Cumbres Observatory, Santa Barbara, California, USA
Patrick.Francois@obspm.fr
(Received 24 April 2007 / Accepted 18 September 2007)
Key Words.:
Stars: abundances – Stars: Population II – Galaxy: abundances –
Galaxy: halo – Nucleosynthesis
††offprints: P. François
Abstract
Context:Extremely metal-poor (EMP) stars in the halo of the Galaxy are sensitive probes
of the production of the first heavy elements and the efficiency of mixing in
the early interstellar medium. The heaviest measurable elements in such stars are our main guides to understanding the nature and astrophysical site(s) of
early neutron-capture nucleosynthesis.
Aims:Our aim is to measure accurate, homogeneous neutron-capture element
abundances for the sample of 32 EMP giant stars studied earlier
in this series, including 22 stars with [Fe/H] $<-$3.0.
Methods:Based on high-resolution, high S/N spectra from the ESO VLT/UVES, 1D, LTE model
atmospheres, and synthetic spectrum fits, we determine abundances or upper limits for the 16 elements Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Ho, Er, Tm, and Yb in all stars.
Results:As found earlier, [Sr/Fe], [Y/Fe], [Zr/Fe] and [Ba/Fe] are below Solar in the EMP stars, with very large scatter. However, we find a tight anti-correlation
of [Sr/Ba], [Y/Ba], and [Zr/Ba] with [Ba/H] for $-4.5<$ [Ba/H] $<-2.5$, also
when subtracting the contribution of the main $r$-process as measured by [Ba/H].
Spectra of even higher S/N ratio are needed to confirm and extend these results below [Fe/H] $\simeq-3.5$. The huge, well-characterised scatter of the
[n-capture/Fe] ratios in our EMP stars is in stark contrast to the negligible dispersion in the [$\alpha$/Fe] and [Fe-peak/Fe] ratios for the same stars found in Paper V.
Conclusions:These results demonstrate that a second (“weak” or LEPP) $r$-process dominates the production of the lighter neutron-capture elements for [Ba/H] $<-2.5$. The
combination of very consistent [$\alpha$/Fe] and erratic [n-capture/Fe] ratios indicates that inhomogeneous models for the early evolution of the halo are needed. Our accurate data provide strong constraints on future models of the production and mixing of the heavy elements in the early Galaxy.
1 Introduction
In cold dark matter models for hierarchical galaxy formation, the very first generation of metal-free (Population III) stars are thought to be born in
sub-galactic fragments of mass $M>5\times 10^{5}\,M_{\odot}$ (Fuller & Couchman 2000; Yoshida et al. 2003; Madau et al. 2004). Recent models of primordial star formation
(Abel et al. 2000; Bromm 2005) suggest that these stars were very massive ($M>100M_{\odot}$), although substantial uncertainties remain.
It is likely that none
of these stars survives in the Galaxy today. However, this first generation
of stars left imprints of its nucleosynthetic history in the elemental
abundance patterns of the most metal-poor lower-mass stars that we can observe
at present. Detailed chemical analyses of the most metal-poor stars can
therefore provide insight into the synthesis of the first heavy elements and how efficiently they were mixed and incorporated into later stellar generations, i.e., how large spiral galaxies such as our own were first assembled.
In Paper V of this series (Cayrel et al. 2004), we confirmed the existence of relatively uniform $\alpha$-element overabundances in 32 very metal-poor
halo giants down to [Fe/H]$\simeq-4.2$, as expected for material enriched by massive progenitors. The very small dispersion in [$\alpha$/Fe] showed that previous findings of significant scatter in [$\alpha$/Fe] and [Fe-peak/Fe] at low metallicity were due to problems in the data and/or analyses (low S/N, uncertain stellar atmospheric parameters, combinations of data using different line lists, different analysis techniques, etc.). The results of Paper V
thus suggested that mixing of the ISM in the early Galaxy was quite efficient.
In contrast, the neutron-capture elements have been found to behave very differently (Molaro & Bonifacio 1990; Norris et al. 1993; Primas et al. 1994). For example, the [Ba/Fe] and [Sr/Fe] ratios are found to be generally below solar for stars with [Fe/H]
$<-2.5$ (McWilliam et al. 1995; Ryan et al. 1996; McWilliam 1998), but the trends with metallicity differ from one element to another. Moreover, several [n-capture/Fe] ratios exhibit a
large spread at low metallicity (McWilliam et al. 1995; Ryan et al. 1996; McWilliam 1998), as confirmed recently by Christlieb et al. (2004) and Barklem et al. (2005) from a large sample of very
metal-poor stars.
The detailed abundance ratios between the neutron-capture elements are our best diagnostics of the processes that synthesised these elements in the earliest stars. The detailed characteristics of the dispersion of these ratios around the mean relations (amplitude, change with metallicity, etc.) are also our most important diagnostics of the efficiency of mixing in the early ISM.
As in Paper V, we therefore want to determine, from high-quality spectra analysed in a consistent manner, the precise abundance relations between the main groups of neutron-capture elements seen in the most metal-poor stars and quantify the scatter around these mean relations. For this, we select the same sample of very metal-poor halo giants as discussed earlier in the “First Stars” project, using the same spectra, atmospheric parameters, and analysis techniques as before.
Throughout this paper we will use the designations Very Metal-Poor (VMP), Extremely Metal-Poor (EMP), and Ultra Metal-Poor (UMP) for stars with metallicities $-3<$[Fe/H]$<-2$, $-4<$[Fe/H]$<-3$, and [Fe/H]$<-4$,
respectively (Beers & Christlieb 2005).
We will not discuss the Carbon-Enhanced Metal-Poor (CEMP) stars, many of which exhibit peculiar abundances and may be binary systems (Lucatello et al. 2004, and in preparation).
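For reference, the metallicity designations above can be encoded as a small helper; the assignment of the exact boundary values is our own convention, since the text only gives open intervals:

```python
def metallicity_class(fe_h):
    """Beers & Christlieb (2005) naming as used in this paper:
    UMP: [Fe/H] <= -4, EMP: -4 < [Fe/H] <= -3, VMP: -3 < [Fe/H] <= -2.
    (Boundary assignment at exact values is an assumption.)"""
    if fe_h <= -4:
        return "UMP"
    if fe_h <= -3:
        return "EMP"
    if fe_h <= -2:
        return "VMP"
    return "more metal-rich"

assert metallicity_class(-4.2) == "UMP"   # the most metal-poor star of Paper V
assert metallicity_class(-3.5) == "EMP"
assert metallicity_class(-2.5) == "VMP"
```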
2 Observations
The observations were performed during several observing runs
in 1999 and 2000 at the VLT-Kueyen telescope with the high-resolution
spectrograph UVES (Dekker et al. 2000). Details of these observations and the
spectrograph settings were given in Paper V, which also provided abundances
of the lighter elements for the same sample of stars as studied here.
The spectra were reduced using the UVES package within MIDAS, which performs
bias and inter-order background subtraction (object and flat-field), optimal extraction of the object (above sky, rejecting cosmic-ray hits), division by
a flat-field frame extracted with the same weighted profile as
the object, wavelength calibration, rebinning to a constant wavelength step, and
merging of all overlapping orders. The spectra were then added and normalized to
unity in the continuum.
Because UVES is so efficient in the near UV, we achieve typical S/N ratios per
resolution element of 50 or more at 350 nm. Thus, the weak lines from the heavy elements become measurable even in the EMP stars of our sample; most previous studies were based on spectra of substantially lower quality.
3 Abundance analysis
As described in Paper V, a classical LTE analysis of our spectra was carried
out, using OSMARCS model atmospheres (Gustafsson et al. 1975; Plez et al. 1992; Edvardsson et al. 1993; Asplund et al. 1997; Gustafsson et al. 2003).
Abundances were determined with a current version of the turbospectrum code (Alvarez & Plez 1998), which treats scattering in detail. Solar abundances were adopted from Grevesse & Sauval (2000).
Line detection and equivalent-width measurement were first carried out with the line list of the appendix of Paper I (Hill et al. 2002) and the automatic code fitline, which is based on genetic algorithms. As most of the lines are weak and located in crowded spectral regions, this turned out to be less than optimal, so we decided to determine the abundances by fitting synthetic spectra to all visible lines (and therefore do not list individual measured equivalent widths here).
It soon became clear that establishing upper limits for the abundances of many
of the heavy elements could also be useful, even when no line from the strongest predicted transition could be detected. These upper limits were computed by comparing the synthetic and observed spectra, and changing the abundance until the computed strength of the line was of the same order as the noise in the observed spectrum.
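This upper-limit procedure can be sketched as a simple search: raise the abundance until the predicted line strength first exceeds the noise. The toy line-depth model below is purely illustrative and stands in for a real spectrum-synthesis code:

```python
import numpy as np

def abundance_upper_limit(line_depth_fn, noise_rms, ab_grid):
    # Increase the abundance until the predicted line depth first rises
    # above the noise level; report that abundance as the upper limit.
    for ab in ab_grid:
        if line_depth_fn(ab) > noise_rms:
            return ab
    return None

# Toy curve of growth (illustrative only): for weak lines the depth grows
# linearly with the linear abundance, i.e. exponentially with log abundance.
def toy_depth(log_ab, k=0.02):
    return k * 10 ** (log_ab - 1.0)

noise_rms = 0.02          # ~1/(S/N) for S/N = 50 per resolution element
ab_grid = np.arange(0.0, 3.0, 0.05)
limit = abundance_upper_limit(toy_depth, noise_rms, ab_grid)
assert limit is not None and 1.0 < limit < 1.1
```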
All the measured abundances and upper limits are given in Tables 3–5 and shown in detail in Figs. 6–9.
3.1 Atmospheric parameters
The procedures employed to derive T${}_{\rm eff}$, log $g$, and micro-turbulent velocity estimates $v_{t}$ for our stars were described in detail in Paper V
(Sect. 3). In summary, T${}_{\rm eff}$ is derived from broadband photometry,
using the Alonso et al. (1999) calibration. The surface gravity is set by
requiring that the Fe and Ti abundances derived from neutral and singly ionised
transitions be identical. Micro-turbulent velocities are derived by eliminating
the trend in abundance of the Fe I lines as a function of equivalent width.
Table 1 lists the atmospheric parameters adopted from Paper V.
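The microturbulence criterion (no trend of Fe I abundance with equivalent width) can be sketched numerically as follows; the abundance model is a synthetic stand-in for the actual line-by-line analysis:

```python
import numpy as np

def vt_by_zero_slope(ew, abundance_fn, vt_grid):
    # Pick the microturbulence for which the fitted slope of line-by-line
    # abundance versus equivalent width is closest to zero.
    slopes = []
    for vt in vt_grid:
        ab = abundance_fn(ew, vt)
        slopes.append(np.polyfit(ew, ab, 1)[0])
    return vt_grid[np.argmin(np.abs(slopes))]

# Toy abundance model (purely illustrative): strong lines are over-corrected
# when v_t is too small and under-corrected when it is too large, so the
# slope changes sign at the "true" v_t of 1.8 km/s.
def toy_abundance(ew, vt, vt_true=1.8):
    return 7.5 + (vt_true - vt) * 0.01 * ew

ew = np.linspace(10.0, 120.0, 40)        # equivalent widths in mA (synthetic)
vt_grid = np.arange(1.0, 3.01, 0.1)      # trial microturbulence values, km/s
assert np.isclose(vt_by_zero_slope(ew, toy_abundance, vt_grid), 1.8)
```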
3.2 Line list
For all of the stars in our sample, we adopt the [Fe/H] abundances derived in
Paper V, which are based on a large number of lines (60–150 Fe I lines and
4–18 Fe II lines). The line list used to determine the heavy-element abundances is taken from Paper I, but updated with recent determinations of oscillator strengths and hyperfine structure corrections (Den Hartog et al. 2003; Lawler et al. 2004)
for several of the elements.
The solar abundances from Grevesse & Sauval (2000) have not been corrected for the changes introduced by these corrections, as they are small and only affect some of the transitions in each element.
3.3 Error budget
Table 2 lists the computed errors in the heavy-element abundance ratios due to typical uncertainties in the stellar parameters. These errors were
estimated by varying T${}_{\rm eff}$, log $g$, and $v_{t}$ in the model atmosphere of BS 17569-049 by the amounts indicated; other stars of the sample yield
similar results. As will be seen, errors in the basic parameters largely cancel out in the abundance ratios between elements in similar stages of ionization and with similar excitation potentials.
The global error of an element abundance [A/H], including errors in fitting of the synthetic line profile to the observed spectra, is of the order of
0.20$-$0.25 dex, depending on the species under consideration. The typical
line-to-line scatter (standard deviation) for a given element is 0.05$-$0.15
dex.
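The global error quoted above combines the fitting error and the stellar-parameter sensitivities roughly in quadrature, assuming the contributions are independent. A minimal sketch; the numerical values in the comment are placeholders for illustration, not the actual entries of Table 2:

```python
import math

def total_error(sigma_fit, sigma_teff, sigma_logg, sigma_vt):
    """Combine independent abundance-error contributions in quadrature."""
    return math.sqrt(sigma_fit ** 2 + sigma_teff ** 2
                     + sigma_logg ** 2 + sigma_vt ** 2)

# With a 0.10 dex fitting error and illustrative parameter terms of
# 0.15, 0.05, and 0.05 dex, the total is about 0.19 dex, of the same
# order as the 0.20-0.25 dex quoted above.
```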
4 Abundances of the neutron-capture elements
4.1 The light neutron-capture elements Sr, Y, and Zr
In the Solar System, the abundances of Sr, Y, and Zr are dominated by
$s$-process production (Arlandini et al. 1999). A small fraction of these elements can
be produced by the weak $s$-process (Prantzos et al. 1990), but this process is not
expected to be efficient at the low metallicities observed in our sample.
Fig. 1 shows the abundance ratios [Sr/Fe], [Y/Fe], and [Zr/Fe] as functions of [Fe/H], as determined here and by Honda et al. (2004), together with data selected from earlier papers (Ryan et al. 1991; Norris et al. 2001; Gratton et al. 1987; Gilroy et al. 1988; Gratton et al. 1988; Gratton & Sneden 1991; Edvardsson et al. 1993; Gratton & Sneden 1994; McWilliam et al. 1995; Carney et al. 1997; Nissen & Schuster 1997; McWilliam 1998; Stephens 1999; Burris et al. 2000; Fulbright 2000; Carretta et al. 2002; Johnson & Bolte 2002). Only results based on
high-resolution, high-S/N spectroscopy are shown here; thus we do not include the recent lower-S/N data by Barklem et al. (2005).
Fig. 1 shows a rather similar behaviour for these three elements, i.e. [X/Fe] $\simeq$0 for stars with [Fe/H] above $\simeq-3.0$. Below this
metallicity, all the abundance ratios drop below the solar values. In other words, the progressive enrichment in these elements only reaches the solar ratio at about [Fe/H] = $-3.0$.
4.1.1 Strontium
Strontium is a key element for probing the early chemical evolution of the
Galaxy, because its resonance lines are strong and can be measured even in
stars with metallicities as low as [Fe/H] = $-4.0$. For most of our stars,
only the resonance lines at 4077.719 Å and 4215.519 Å are visible in our spectra.
We adopt the $gf$ values from Sneden et al. (1996) and confirm the large
underabundance of Sr in EMP stars reported e.g. by Honda et al. (2004). It has long been realized that the [Sr/Fe] ratio exhibits very high dispersion
for stars with [Fe/H] $\leq-2.8$ (McWilliam et al. 1995; Ryan et al. 1996), and we confirm this as well. As typical errors in the [Sr/Fe] ratio are no more than a few tenths of a dex at worst, this large spread (over 2 dex) cannot be attributed to observational errors; see, e.g., Ryan et al. (1996).
4.1.2 Yttrium
The Y lines are somewhat weaker than those of Sr in this temperature range and are not readily detected in our most metal-poor stars. However, nine lines of similar strength (354.9 nm, 360.07 nm, 361.10 nm, 377.43 nm, 378.86 nm, 381.83 nm, 383.29 nm, 395.03 nm, and 439.80 nm) can be measured in stars with [Fe/H]$>-3.5$ when the temperature is low enough, and then yield rather robust abundance determinations for Y.
The middle panel of Fig. 1 shows [Y/Fe] as a function of [Fe/H]. The overall trend is similar to that found for Sr, i.e., a solar ratio down to [Fe/H] $\simeq-3.0$, and lower values of increasing dispersion at even lower metallicities. Unlike Sr, which displays a relatively high dispersion at all metallicities, the weaker and sharper lines of Y yield a very small dispersion in its abundance at intermediate or higher metallicities. The similarity we stress here is that the dispersions in both [Sr/H] and [Y/H] increase by at least a factor of 2 below [Fe/H] $\simeq-3.0$.
4.1.3 Zirconium
Zr is similar to Y in line strength, and we can measure 5-10 lines in stars with [Fe/H] $>-3.50$ (see Fig. 1). We find a similar pattern for [Zr/Fe] as for Sr and Y, with a slightly lower average underabundance, large dispersion below [Fe/H] $\simeq-3.0$, and somewhat smaller scatter at intermediate and higher metallicities.
4.1.4 The [Y/Sr] ratio
If two elements are formed by the same process, their ratio should not vary
with metallicity, and the dispersion around the mean value should yield a good estimate of the errors on the abundance determinations.
Fig. 2 shows the ratio [Y/Sr] as a function of [Fe/H] for our
data, along with those by Burris et al. (2000), Johnson & Bolte (2002), and Honda et al. (2004).
We confirm that [Y/Sr] is constant with rather low scatter around the mean
value: [$<$Y/Sr$>$] = $-$0.2$\pm$0.2 (s.d.). This dispersion is fully
accounted for by the observational errors, indicating that any cosmic scatter
in this ratio is very small.
However, a plot of [Y/Sr] as a function of [Sr/H] from our data and those by Johnson & Bolte (2002) (Fig. 5) appears to show an anticorrelation between [Y/Sr] and [Sr/H], a result that needs confirmation, as some of the data points are only upper limits. This is not seen in the data set of Honda et al. (2004), as the range of Sr abundances in their sample is fairly small.
4.1.5 General features of the first neutron-capture peak elements
The first-peak elements are known to have a more complex origin than the heavier neutron-capture elements like Ba or Eu, which are only produced by the “main” components of the $r$- and $s$-processes. In solar-type material, Sr, Y and Zr are formed in the “main” $s$-process, but at lower metallicity the “weak” $s$-process (Busso et al. 1999) also contributes. In our EMP stars, we expect a pure $r$-process origin for the neutron-capture elements, and we wish to explore the nature of those processes in more detail.
Fig. 3 shows the average [$<$Sr+Y+Zr$>$/Fe] ratio for our stars and from the recent literature as a function of [Fe/H]. Only stars with data for all three elements are included, which limits the sample towards the lowest metallicities. We find a clear increase in the dispersion of this ratio with decreasing metallicity. Note also that the two most metal-poor stars ($\rm[Fe/H]\simeq-3.5$) in Fig. 3 lie nearly one dex below the solar value, reflecting the strong deficiency of all three elements in the most metal-poor stars.
Because Fe and the neutron-capture elements form under quite different conditions, it may be more informative to study their abundances as functions of another heavy element. The strong resonance lines of Ba can be measured in stars down to almost [Fe/H] $=-4.0$, so we select Ba as our alternative reference element. Fig. 4 shows the mean [$<$Sr+Y+Zr$>$/Ba] ratio as a function of [Ba/H]. We find a striking, tight anti-correlation, especially for stars below [Ba/H] $\simeq-2.5$, which may indicate that another nucleosynthesis process produces the light neutron-capture elements preferentially at low metallicity. We discuss this point more fully in Sect. 5.2.
4.2 The second neutron-capture peak elements ($56\leq Z\leq 72$)
This range in atomic mass includes the well-studied elements Ba, Eu, and La.
Ba and Eu played a key role in understanding early nucleosynthesis, when Truran (1981) first suggested that the [Ba/Eu] vs. [Fe/H] observations of Spite & Spite (1978) could be naturally understood if both of these neutron-capture
elements were synthesised by the $r$-process in massive stars during early Galactic evolution (85% of the Ba in the Solar System is due to the
$s$-process).
Due to the high UV efficiency of UVES, we have been able to determine abundances or upper limits in many of our stars for several other heavy neutron-capture elements (Ce, Pr, Nd, Sm, Gd, Dy, Ho, Er, and Tm). The results are shown in Figs. 10–13 as functions of [Fe/H], together with those by Johnson & Bolte (2002), Honda et al. (2004), and data selected from the earlier literature. These data enable us to discuss the nature of early $r$-process nucleosynthesis in considerable detail.
As noted above, Ba is a particularly interesting element, in part because
the resonance lines are strong enough to be measured in all but two of our
stars and permit us to explore mean trends and scatter amongst the
neutron-capture elements down to [Fe/H] = $-4.2$; see Fig. 10. All
our abundance results for Ba have been derived assuming the isotopic composition corresponding to the $r$-process (McWilliam 1998).
In the metallicity range $-$2.5 to $-$3.0, we confirm the very large dispersion in [Ba/Fe] at given [Fe/H] noted by several previous authors, increasing towards the lowest metallicities. Our study adds a significant number of stars below [Fe/H] = $-3.0$. Although the number of stars in this range remains small, Fig. 10 suggests that [Ba/Fe] continues to decline to a mean value of [Ba/Fe] = $-2.0$ to $-1.0$, with a declining scatter as well. This might indicate that the nucleosynthesis processes involved undergo significant changes below [Fe/H] = $-3.2$.
The greatest scatter in [Ba/Fe] (a factor of 1000) occurs in the metallicity range $-3.2\leq{\rm[Fe/H]}\leq-2.8$. Thus, if the Ba and Fe in these stars were created by the same class of progenitor objects, their yields would have to vary by a similarly large factor, whatever model of chemical evolution for the early Galaxy one adopts. The yield of Ba could be extremely metallicity-dependent or, perhaps more likely, the early production of Ba was not correlated with that of Fe, the two elements being produced in different astrophysical sites, as suggested by Wanajo et al. (2001, and references therein).
As Fig. 10 shows, we do not observe a single star with a [Ba/Fe] ratio above solar below [Fe/H] $\simeq-3.2$; however, we do note that Barklem et al. (2005) do detect at least a few stars with [Ba/Fe] above solar at metallicities down to [Fe/H] $\simeq-3.4$.
The metallicity interval showing the largest scatter in [Ba/Fe] ($-3.2\leq{\rm[Fe/H]}\leq-2.8$) is also where the extremely $r$-process-enhanced metal-poor stars are found; i.e. those with [r-element/Fe] $>$ +1.0, referred to as r-II stars by Beers & Christlieb (2005). CS 22892-052, CS 31082-001, the eight new r-II stars found by Barklem et al. (2005), and the most recent discovery HE 1523-0909 (Frebel et al. 2007), all fall in this range. It is interesting that both CS 22892-052 and CS 31082-001 fit into the same region of Fig. 10 as the “normal”
(non-r-II) stars (albeit at the very upper limit), so these extreme r-II stars are not exceptional as far as the [Ba/Fe] ratio is concerned.
Like Ba, both La and Ce are primarily due to the $s$-process at solar metallicity.
For La and Ce, we can determine abundances for stars with [Fe/H] $>-3.0$, but only upper limits for the more metal-poor stars. It is interesting, however, that we find the same increase of the scatter with declining metallicity in the range $-3.2<$ [Fe/H] $<-2.0$ for La and Ce as for Ba. As this is also seen in the data from Honda et al. (2004) and earlier literature (see Fig. 10), there is little doubt as to its reality.
Figure 11 shows our results for [Pr/Fe], [Nd/Fe], and [Sm/Fe], which in Solar-system material are formed by the $s$- and $r$-process in roughly equal proportions. For Pr, the only earlier data are from Honda et al. (2004). We confirm the high [Pr/Fe] ratios found by these authors down to [Fe/H] $\simeq-3.0$. Our upper limits show that a rather large scatter in [Pr/Fe] exists down to [Fe/H] $\simeq-3.0$; for Nd and Sm, the scatter clearly increases with declining metallicity until its maximum at [Fe/H] $\simeq-3.0$. Note that we have Nd measurements for three stars with [Fe/H] $<-3.2$.
Eu, Gd, and Dy are elements that are produced primarily by the $r$-process, also in Solar-system material (93%, 84%, and 87%, respectively, according to Arlandini et al. (1999)). Figure 12 shows that they behave similarly to the other elements of the second neutron-capture peak and display increasing overabundances with declining metallicity, accompanied by increasing scatter. Once again, it appears that the scatter is at maximum at [Fe/H] $\simeq-3.0$, as found by Honda et al. (2004).
Note that for [Eu/Fe], low values ($\leq 0.0$) are found only below [Fe/H] $\simeq-3.0$. Barklem et al. (2005) did find stars with high [Eu/Fe] ($>$ 0.5)
at metallicities lower than [Fe/H] = $-3.0$, but from a much larger sample of stars than ours. This indicates that stars with high [Eu/Fe] ratios are quite rare at very low metallicity, so that dedicated surveys are needed to uncover additional examples. For Gd, we do measure high [Gd/Fe] values in two stars
(CS 22172-002 and CS 22189-009) below [Fe/H] = $-$3.4.
Finally, Figure 13 shows our results for Ho, Er, and Tm, also produced almost exclusively in the $r$-process. Very few previous results exist for these three elements, which we find to be generally overabundant, as also reported by Honda et al. (2004) for Er and Tm. Once more, the large scatter in the element ratios appears maximal at [Fe/H] = $-3.0$. Our few results for Yb (not plotted) follow the same general trend.
5 Discussion
Our accurate, detailed, and homogeneous abundance data for the neutron-capture elements in a large sample of VMP and EMP stars enable us to address two important questions regarding the first stages of heavy-element enrichment in the Galaxy: (i) the nucleosynthesis process(es) that formed the first heavy elements, and (ii) the efficiency with which the newly synthesised elements were incorporated into the next generation(s) of stars, including those that have survived until today. We discuss each of these in turn in the following.
5.1 Diagnostics of the $r$-process(es) in EMP stars
We begin by repeating the classical Truran (1981) test of the relative weight of the $r$- and $s$-process as a function of metallicity. Ba and La are produced mostly by the “main” $s$-process in Solar-metallicity stars (92% and 83%, respectively, according to Arlandini et al. 1999), but in EMP stars they should be due to the $r$-process. Fig. 14 shows the [Eu/Ba] and [Eu/La] ratios as a function of [Fe/H] for our stars, along with earlier data. The dashed lines in both panels indicate the Solar-system $r$-process abundance ratios (Arlandini et al. 1999).
Our [Eu/Ba] ratios do cluster around the Solar-system $r$-process value at low metallicity, but a substantial scatter remains. Some of this may be due to the Ba data because of the broad hyperfine structure of the Ba lines: If the mix of Ba isotopes in the star is different from that assumed in the synthetic spectrum, the fit to the observed spectrum may be less stable than for single-component lines. Indeed, the [Eu/La] ratios exhibit substantially smaller dispersion at all metallicities, demonstrating that the scatter in [Eu/Ba] is essentially due to the Ba, not the Eu abundances. Together, the two panels of Fig. 14 confirm that the neutron-capture elements in EMP stars were produced predominantly or exclusively by the
$r$-process.
Given the large scatter of the [n-capture element/Fe] ratios as functions of [Fe/H] (Figs. 1 and 10 – 13), we proceed to compare elemental abundances within the neutron-capture group itself in the following. As noted earlier, we choose Ba as the reference element because data are available for nearly all our stars.
Fig. 15 shows the [Sr/Ba], [Y/Ba], and [Zr/Ba] ratios vs. [Ba/H] as determined by us and previous authors. We find a tight anti-correlation of
[X/Ba] with [Ba/H] for all three elements, at least down to [Ba/H] = $-4.5$.
We emphasize that most stars in our sample are not enriched in $r$-process elements, but note that the two extreme r-II stars CS 22892-052 and CS 31082-001 do in fact follow the same relation as the “normal” stars. In particular, we find no stars that are both Sr-poor and Ba-rich, as suspected already by Honda et al. (2004); however, such cases are found among the C-enhanced metal-poor (CEMP) stars (Sivarani et al. 2006).
Our most Ba-poor stars, below [Ba/H] $\simeq-4.5$, seem to depart from the correlation and show roughly Solar values for [Sr/Ba] and [Y/Ba], although we note that Honda et al. (2004) do find a couple of high [Sr/Ba] ratios in this region. This might indicate that the additional production channel for Sr, Y, and Ba discussed below may not operate in the very first stellar generations. However, the sample is very small (these are among our most metal-poor stars, with [Fe/H] $<-3.2$), and more reliable measurements of Sr, Y, and Zr in stars with low [Ba/H] will be needed for a definitive conclusion.
5.2 Synthesis of the first-peak elements
The diagrams discussed above amply demonstrate that not all the neutron-capture elements in metal-poor stars were produced by a single $r$-process, as discussed by Travaglio et al. (2004, and references therein); an additional process must contribute preferentially to the production of the first-peak elements in VMP/EMP stars, previously called the “weak” $r$-process; we will discuss below the aptness of this term.
Travaglio et al. (2004) explored the issue by following the Galactic enrichment of Sr, Y, and Zr using homogeneous chemical evolution models. They confirmed that a process of primary nature ($r$-process) is required to explain the observed abundance trends, argued that massive stars were the likely sites as these elements occur at very low metallicity, and coined the term “Lighter-Element Primary Process” (LEPP) for it. However, regardless of nomenclature, the actual process, site, or progenitor stars have not been identified.
Cescutti et al. (2005) came to similar conclusions, based on the behavior of Ba and Eu. They confirmed the need for a primary source to explain the behaviour of [Ba/Fe] vs. [Fe/H] and suggested that the primary production of Eu and Ba is associated with stars in the mass range 10-30 M${}_{\odot}$. Ishimaru et al. (2004) computed the evolution of [Eu/Fe], using inhomogeneous chemical evolution models with induced star formation, and concluded that the observations implied that the
low-mass range of supernovae was the dominant source of Eu.
The observations shown in Fig. 15 clearly cannot be explained by a single
$r$-process. The trends suggest the existence of three different regimes: (i) [Ba/H] $\geq-2.5$, where all ratios are close to Solar; (ii) $-4.5\leq$ [Ba/H] $\leq-2.5$, where Sr, Y, and Zr become increasingly overabundant relative to Ba at lower metallicities; and (iii) [Ba/H] $\leq-4.5$, where the abundance ratios seem to drop back to Solar. The latter transition corresponds to [Fe/H] $\simeq-3$, i.e. the metallicity range in which all the highly $r$-process-enhanced metal-poor stars have been found so far – the r-II stars as defined by Beers & Christlieb (2005).
It appears from these plots, and from the great uniformity of the $r$-process element patterns in the r-II stars observed so far, that the main $r$-process dominates the total abundance pattern of the heavy elements once they have been enriched beyond the level of [Ba/H] $\geq-2.5$. At levels up to 2 dex below this threshold, another process contributes increasingly to the production of the first-peak elements Sr, Y, and Zr. We want to clarify the properties of this process as independently of the main $r$-process as possible.
To do so, we have computed the mean residuals of Sr, Y, and Zr in each of our stars from the Solar-system $r$-process abundance pattern of Arlandini et al. (1999) as shown in Figs. 6–9. Thus, these abundance residuals should represent the pure production of the unknown process, free of interference from the main $r$-process.
The result, shown in Fig. 16, demonstrates that, far from being “weak”, the LEPP is responsible for 90-95% of the total abundance of these elements at [Ba/H] $\simeq-4.3$, where the [$<$Sr,Y,Zr$>$/Ba] ratio may split into two branches, as suggested on theoretical grounds by Ishimaru & Wanajo (1999) and Ishimaru & Wanajo (2000).
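The 90-95% figure follows from a simple scaling argument: if the main $r$-process contribution to Sr, Y, or Zr tracks Ba with the Solar-system $r$-process [X/Ba] ratio, the fraction of the observed abundance not due to the main $r$-process is $1-10^{-\Delta}$, where $\Delta$ is the measured [X/Ba] excess over that ratio. A minimal sketch of this scaling (our own illustration; the function name is ours):

```python
def lepp_fraction(x_ba_excess):
    """Fraction of an element NOT produced by the main r-process,
    given its [X/Ba] excess (in dex) over the Solar-system r-process
    ratio, assuming the main r-process contribution tracks Ba."""
    return 1.0 - 10.0 ** (-x_ba_excess)

# An excess of +1.0 dex implies 90% from the additional process;
# +1.3 dex implies ~95%, bracketing the 90-95% quoted above.
```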
One would surmise that qualitative differences in neutron exposure or the nature of the available seed nuclei in the most extreme metal-poor stars could cause such differences. E.g., Qian & Wasserburg (2007) propose that the first-peak elements (Sr, Y, Zr) are formed by charged-particle reactions in the so-called $\alpha$-process (Woosley & Hoffmann 1992) in all supernovae, while heavy $r$-process elements would form only in low-mass SNe with O-Ne-Mg cores and iron only in high-mass SNe. The correlation shown in Fig. 16 would appear difficult to reconcile with this otherwise interesting scenario.
As an alternative, a new nucleosynthesis process (the $\nu p$-process) has been proposed very recently by Froehlich et al. (2006). This process should occur in
core-collapse supernovae and would allow for the nucleosynthesis of nuclei with mass number A $>$ 64.
5.3 Heavy-element enrichment in the early Galaxy
The scatter in the observed abundance ratios is an indication of the efficiency of mixing in the ISM in the era before the formation of the oldest stars we can observe today. The results so far are contradictory.
In Paper V, we demonstrated that the [$\alpha$/Fe] ratios in the EMP giant stars of our sample exhibit very little scatter beyond the observational uncertainty.
The great uniformity in the [$\alpha$/Fe] ratios of metal-poor stars has recently been demonstrated in more limited samples of turnoff (dwarf) stars also by Cohen et al. (2004), Arnone et al. (2005), and Spite et al. (2005) and will be further discussed in the next paper of this series (Bonifacio et al., in preparation).
These results are clearly inconsistent with current inhomogeneous chemical evolution models, which predict a scatter of order 1 dex for such elements (Argast et al. 2002).
As emphasized by Argast et al. (2002), the initial scatter of a given element ratio,
[X/Fe], is determined by the adopted nucleosynthesis yields. The details of the
chemical evolution model will then determine how fast a homogeneous ISM is
achieved through mixing of the enriched regions. The results of Paper V
indicated that, in order to reproduce the observed low scatter in [$\alpha$/Fe],
the galactic chemical evolution model must employ yields of [$\alpha$/Fe] with little or no dependence on the mass of the progenitor. In homogeneous chemical evolution models (François et al. 2004), instantaneous mixing is assumed and more variation in the yield can be allowed, because it is integrated over the different stellar masses as the galaxy evolves.
As the number of EMP stars with high-resolution, high S/N spectroscopy has increased, our ability to quantify the trends and scatter about such
trends for individual elements has improved dramatically as well. In such studies, it is particularly important to use data sets that are reduced
and analysed in as homogeneous a manner as possible, so as to minimise the
influence of spurious “observer” scatter on the behaviours that one seeks to
understand. It was a key goal of our project to produce such data sets.
Thus, Table 6 presents estimates of the observed scatter
of the elemental ratios reported here, following the order of the figures presenting the information, but based exclusively on the stars
analysed by ourselves. The first two columns in the table present the
ordinates and abscissae corresponding to each of the figures. The
number of stars considered in each range of abscissae listed in the
table appears in the third column, while the range in the parameter under
discussion is listed in the fourth column.
In order to obtain robust estimates of scatter, we must first de-trend
the distributions of the observed ratios. This is accomplished by
determination of robust locally weighted regression lines (loess lines), as described by Cleveland (1979, 1994). Such lines
have been used before in similar scatter analyses (see, e.g., Ryan et
al. 1996; Carretta et al. 2002). The scatter about these lines is
then estimated by application of the biweight estimator of scale,
$S_{BI}$, described by Beers, Flynn, & Gebhardt (1990); for a normal
distribution, this scale matches the dispersion.
The first entry in the last column of Table 6 lists this
estimate. The quantities in parentheses in this column are the
$1-\sigma$ confidence intervals on this estimate of scatter, obtained
from analysis of 1000 bootstrap resamplings of the data in each of the
given ranges. In this listing, CL represents the lower interval on
the value of the scatter, while UL represents the upper interval.
These errors are useful for assessing the significance of the
difference between the scales of the data from one range to another.
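The scale estimate described above can be sketched as follows. This is our own illustrative implementation of the Beers, Flynn & Gebhardt (1990) biweight formula together with the bootstrap confidence intervals; the function names are ours, the tuning constant $c=9$ is the conventional choice, and the loess de-trending step is omitted:

```python
import numpy as np

def biweight_scale(x, c=9.0):
    """Biweight estimator of scale S_BI (Beers, Flynn & Gebhardt 1990).

    For normally distributed data, S_BI matches the standard deviation."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))          # median absolute deviation
    u = (x - m) / (c * mad)
    w = np.abs(u) < 1.0                     # only points with |u| < 1 contribute
    num = np.sum((x[w] - m) ** 2 * (1.0 - u[w] ** 2) ** 4)
    den = np.sum((1.0 - u[w] ** 2) * (1.0 - 5.0 * u[w] ** 2))
    return np.sqrt(len(x) * num) / np.abs(den)

def bootstrap_ci(x, n_boot=1000, level=0.68, rng=None):
    """1-sigma confidence interval on S_BI from bootstrap resampling."""
    rng = np.random.default_rng(rng)
    stats = np.sort([biweight_scale(rng.choice(x, size=len(x)))
                     for _ in range(n_boot)])
    lo = stats[int((0.5 - level / 2.0) * n_boot)]
    hi = stats[int((0.5 + level / 2.0) * n_boot)]
    return lo, hi
```

On Gaussian data $S_{BI}$ closely tracks the standard deviation, which is why it can be read directly as a dispersion in dex.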
[Sr/Fe], [Y/Fe], and [Zr/Fe] show a similar increase of the scatter as
the metallicity decreases, with a more pronounced effect for Sr. The mean
ratio [$<$Sr+Y+Zr$>$/Fe] shows the same behaviour with a lower amplitude.
Large scatter is also seen in Ba and La, but its variation as a function of metallicity differs from that of the lighter elements. The dispersion found for Ba seems independent of metallicity, whereas the scatter of La appears much smaller for the most metal-poor stars. Ce, Pr, and Nd show much smaller scatter again, in particular Ce, for which we measure a biweight scale of only 0.038 dex for the whole sample. Pr and Nd behave like La, with smaller scatter for the most
metal-poor stars.
Eu shows a rather high scatter, decreasing as the metallicity decreases. In contrast, Gd, Dy and Er follow the same behaviour as Sr, i.e. an increase in scatter as the metallicity decreases.
If we now consider the ratios [Eu/Ba] and [Eu/La] as a function of [Fe/H],
the scatter is smaller by almost an order of magnitude, confirming the common origin of these elements. It is also noteworthy that the scatter is even smaller in the most metal-poor bin.
5.4 Abundance scatter and inhomogeneous models of galactic
chemical evolution
The apparently contradictory abundance results for the $\alpha$- and various
neutron-capture elements in VMP and EMP stars might be reconciled if the sites of significant $r$-process production were diverse and (some of them) rare.
We caution that r-II stars are rare: Barklem et al. (2005) estimate that they constitute roughly 5% of the giants with [Fe/H] $<-2.0$. The lower probability of finding them at metallicities below [Fe/H] $\simeq-3.2$ may introduce an artificial decrease of the observed scatter.
The highly $r$-enriched (r-II) stars have all been found in a very narrow range around [Fe/H] = $-2.9$ (Barklem et al. 2005). Do we see the onset of a new process at this metallicity? Does this metallicity correspond to the typical metallicity of the building blocks of the halo, originating from systems of similar size (i.e. about the same metallicity) but with different chemical histories (IMF, fraction of peculiar supernovae), leading to a spread of [n-capture/Fe] while keeping an $r$-process signature? Did the stars with [Fe/H] $<-3.5$ form out of matter polluted by massive Pop III stars, which could mean that they are pre-galactic?
It is interesting to note how difficult it has been to find true UMP stars,
i.e. stars with [Fe/H] $<-4$; in fact, only three are currently known
(Christlieb et al. 2002; Frebel et al. 2005; Norris et al. 2007). In a standard closed-box model (François et al. 1990), we
would expect to have found several more, if the IMF did not change substantially over time; however, the preferred scheme for the halo formation is an open model, where infall is invoked to explain this “UMP desert” (Chiappini et al. 1997).
In the context of an inhomogeneous model of chemical evolution of the Galaxy
(Argast et al. 2002), simulations show that the density of stars at [Fe/H] = $-3.0$
and [Fe/H] = $-4.0$ is of the same order (Argast et al. 2002, see their Fig. 7). As a
consequence, the paucity of UMP stars would require rather fine tuning of
the mixing of supernova ejecta into the ISM. However, Karlsson (2006) has
suggested that, alternatively, the absence of UMP stars could be explained with a galactic chemical evolution model where star formation was low or delayed
for a period after the formation and demise of the first generation of stars,
due to heating of the ISM by their supernova explosions.
Another possibility is that stars in the inner and outer regions of the halo
of the Milky Way may have rather different metallicity distribution functions (MDFs). From a kinematic analysis of a local sample of stars from the Sloan
Digital Sky Survey, Carollo et al. (2007) argue that just such a dichotomy exists,
with the MDF of the inner-halo stars peaking around [Fe/H] = $-1.6$, that of
the outer halo around [Fe/H] = $-2.2$.
The magnitude-limited objective-prism surveys that have identified the most
metal-poor halo stars to date may thus have been dominated by inner-halo
objects. If so, simple models of Galactic chemical evolution that match
the MDFs derived from such surveys may not provide adequate explanations for
the formation of the Milky Way halo, nor for the detailed chemical composition
of its most primitive stars.
6 Conclusions
This paper has presented accurate, homogeneous abundance determinations for 16
neutron-capture elements in a sample of 32 VMP and EMP giant stars, for which
abundances of the lighter elements have been determined earlier (Paper V).
Our data confirm and refine the general results of earlier studies of the neutron-capture elements in EMP stars, and extend them to lower metallicities.
In particular, the sample of stars below [Fe/H] = $-2.8$ is increased
significantly.
Our data show the [n-capture/Fe] ratios, and their scatter around the mean
value, to reach a maximum around [Fe/H] $\simeq-3.0$. Below [Fe/H]
$\simeq-3.2$, we do not find stars with large overabundances of
neutron-capture elements relative to the solar ratio. We note, however,
that the large “snapshot” sample of Barklem et al. (2005) does identify at
least a few stars below [Fe/H] = $-3.0$ with high [Sr/Fe], [Zr/Fe], or
[Eu/Fe], so a larger sample of accurate data may be needed for a firm
conclusion.
Adopting Ba as a reference element in the abundance ratios reveals very tight anti-correlations of the [Sr/Ba], [Y/Ba], and [Zr/Ba] ratios with [Ba/H] from [Ba/H] $\simeq-1.5$ down to [Ba/H] $\simeq-4.5$. These results confirm the need for a second neutron-capture process for the synthesis of the first-peak elements, called the “weak” $r$-process (Busso et al. 1999; Qian & Wasserburg 2000; Wanajo et al. 2001), the LEPP process (Travaglio et al. 2004), the CPR process (Qian & Wasserburg 2007), or even an entirely new nucleosynthesis mechanism in massive, metal-poor stars ($\nu$p-process; Froehlich et al. 2006). By subtracting the contributions of the main
$r$-process, we show that this mechanism is responsible for 90-95% of the
amounts of Sr, Y, and Zr in stars with [Ba/H] $>-4.5$. Below this value,
the [Sr/Ba], [Y/Ba], and [Zr/Ba] ratios seem to return to the solar ratio, although the number of stars in this range is small.
As found earlier (Ryan et al. 1996; McWilliam 1998; Honda et al. 2004), the [n-capture/Fe] ratios exhibit a much larger dispersion than can be attributed to observational errors, although the scatter in the [$\alpha$/Fe] and [Fe-peak/Fe] ratios as functions of [Fe/H] is very small. We discuss the implications of these apparently contradictory results for the efficiency of mixing of the primitive ISM in terms of homogeneous vs. inhomogeneous models of galactic chemical evolution.
Acknowledgements.
We thank the ESO staff for assistance during all the runs of our Large
Programme. R.C., P.F., V.H., B.P., F.S. & M.S. thank the PNPS and the PNG for
their support. PB and PM acknowledge support from the MIUR/PRIN 2004025729_002
and PB from EU contract MEXT-CT-2004-014265 (CIFIST). T.C.B. acknowledges
partial funding for this work from grants AST 00-98508, AST 00-98549, and AST
04-06784 as well as from grant PHY 02-16783: Physics Frontiers Center/Joint
Institute for Nuclear Astrophysics (JINA), all awarded by the U.S. National Science Foundation. BN and JA thank the Carlsberg Foundation and the Swedish and Danish Natural Science Research Councils for partial financial support of this research.
References
Abel et al. (2000)
Abel, T., Bryan, G.L., Norman, M.L., 2000, ApJ 540, 39
Alonso et al. (1998)
Alonso, A., Arribas, S., Martínez-Roger, C., 1998 A&AS 131, 209
Alonso et al. (1999)
Alonso, A., Arribas, S., Martínez-Roger, C. 1999, A&AS 140, 261
Alvarez & Plez (1998)
Alvarez R., Plez B., 1998 A&A 330, 1109
Arlandini et al. (1999)
Arlandini, C., Käppeler, F., Wisshak, K., Gallino, R., Lugaro, M., Busso, M., Straniero, O.
1999, ApJ 525, 886
Argast et al. (2002)
Argast, D., Samland, M., Thielemann, F.-K., Gerhard, O. E. 2002 A&A 388, 842
Arnone et al. (2005)
Arnone, E., Ryan, S.G., Argast, D., Norris, J. E., Beers, T. C. 2005, A&A 430, 507
Asplund et al. (1997)
Asplund, M., Gustafsson, B., Kiselman, D., Eriksson, K. 1997 A&A 318, 521
Barklem et al. (2005)
Barklem, P.S., Christlieb, N., Beers, T.C., Hill, V., Bessell, M.S., Holmberg, J., Marsteller, B., Rossi, S., Zickgraf, F.-J., Reimers, D. 2005 A&A 439, 129
Beers et al. (1985)
Beers, T.C., Preston, G.W., & Shectman, S.A. 1985, AJ 90, 2089
Beers et al. (1990)
Beers, T.C., Flynn, C., Gebhardt, K. 1990, AJ 100, 32
Beers & Christlieb (2005)
Beers, T.C., Christlieb, N. 2005 ARA&A 43, 531
Bromm (2005)
Bromm V. 2005 in From Lithium to Uranium, IAU Symp. 228 V. Hill, P. François, F. Primas eds., p. 121
Burstein & Heiles (1982)
Burstein, D., Heiles, C. 1982, AJ 87, 1165
Burris et al. (2000)
Burris, D.L., Pilachowski, C.A., Armandroff, T.E., Sneden, C., Cowan, J.J., Roe, H. 2000, ApJ 544, 302
Busso et al. (1999)
Busso, M., Gallino, R., Wasserburg, G. J. 1999 ARA&A 37,239
Carney et al. (1997)
Carney, B.W., Wright, J.S., Sneden, C., Laird, J.B., Aguilar, L.A., & Latham, D.W. 1997, AJ 114, 363
Carollo et al. (2007)
Carollo, D., Beers, T.C., Lee, Y.S., et al., 2007, Nature (submitted) astro-ph/0706.3005v2
Carretta et al. (2002)
Carretta, E., Gratton, R., Cohen, J. G., Beers, T. C., Christlieb, N. 2002 AJ 124, 481
Cayrel et al. (2004)
Cayrel, R., Depagne, E., Spite, M., et al. 2004, A&A 416, 1117 (Paper V)
Cescutti et al. (2005)
Cescutti, G., François, P., Matteucci, F., Cayrel, R., Spite, M. 2005 A&A 448, 557
Charbonneau (1995)
Charbonneau, P. 1995 ApJS 101, 309
Chiappini et al. (1997)
Chiappini, C., Matteucci, F., Gratton, R. 1997, ApJ 477, 765
Christlieb et al. (2002)
Christlieb, N., Bessell, M., Beers, T.C., et al., 2002, Nature 419, 904
Christlieb et al. (2004)
Christlieb, N., Beers, T.C., Barklem, P.S., et al. 2004
A&A 428, 1027
Cleveland (1979)
Cleveland, W.S. 1979, J. Am. Stat. Assoc., 74, 829
Cleveland (1984)
Cleveland, W.S. 1984
The Elements of Graphing Data (rev. ed., Summit, NJ: Hobart)
Cohen et al. (2004)
Cohen, J.G., Christlieb, N., McWilliam, A., et al. 2004, ApJ 612, 1107
Dekker et al. (2000)
Dekker, H., D’Odorico, S., Kaufer, A., Delabre, B., Kotzlowski, 2000, in “Optical and IR Telescope Instrumentation and Detectors”, Masanori Iye and Alan F. Moorwood (Eds.), Proc. SPIE Vol. 4008, p. 534
Den Hartog et al. (2003)
Den Hartog, E.A., Lawler, J.E., Sneden, C., Cowan, J.J., 2003 ApJS 148, 543
Edvardsson et al. (1993)
Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D.L., Nissen, P.E., Tomkin, J. 1993 A&A 275, 101
François et al. (1990)
François, P., Vangioni-Flam, E., Audouze, J. 1990 ApJ 361, 487
François et al. (2004)
François, P., Matteucci, F., Cayrel, R., Spite, M., Spite, F., Chiappini, C. 2004 A&A 421, 613
Frebel et al. (2005)
Frebel, A., Aoki, W., Christlieb, N., et al., 2005, Nature 434, 871
Frebel et al. (2007)
Frebel, A., Christlieb, N., Norris, J.E., Thom, C., Beers, T.C., Rhee, J. 2007,
ApJ 660, L117
Froehlich et al. (2006)
Fröhlich, C., Martinez-Pinedo, G., Liebendörfer, M., Thielemann, F.-K., Bravo, E., Hix, W.R., Langanke, K., Zinner, N.T.
2006 PhysRevLett 96, 142502
Fulbright (2000)
Fulbright, J.P. 2000, AJ 120, 1841
Fuller & Couchman (2000)
Fuller, T. M., Couchman, H. M. P. 2000, ApJ 544, 6
Gilroy et al. (1988)
Gilroy, K.K., Sneden, C., Pilachowski, C.A., & Cowan, J.J. 1988, ApJ 327, 298
Goriely & Arnould (1997)
Goriely, S., Arnould, M. 1997 A&A 322, 29
Gratton (1989)
Gratton, R.G. 1989, A&A 208, 171
Gratton & Sneden (1987)
Gratton, R.G. & Sneden, C. 1987, A&A 178, 179
Gratton & Sneden (1988)
Gratton, R.G. & Sneden, C. 1988, A&A 204, 193
Gratton & Sneden (1991)
Gratton, R.G. & Sneden, C. 1991, A&A 241, 501
Gratton & Sneden (1994)
Gratton, R.G. & Sneden, C. 1994, A&A 287, 927
Grevesse & Sauval (2000)
Grevesse, N. & Sauval, A.J. 2000, Origin of Elements in the Solar System. Edited by O. Manuel. p.261
Gustafsson et al. (1975)
Gustafsson, B., Bell, R.A., Eriksson, K., Nordlund Å., 1975, A&A 42, 407
Gustafsson et al. (2003)
Gustafsson, B., Edvardsson, B., Eriksson, K., Graae-Jørgensen, U.,
Mizuno-Wiedner, M., Plez, B., 2003, in Stellar Atmosphere Modeling,
ed. I. Hubeny, D. Mihalas, K. Werner, ASP Conf. Series 288, 331.
Hill et al. (2002)
Hill, V., Plez, B., Cayrel, R., et al. 2002 A&A 387, 560 (Paper I)
Honda et al. (2004)
Honda, S., Aoki, W., Ando, H., Izumiura, H., Kajino, T., Kambe, E., Kawanomoto, S., Noguchi, K., Okita, K., Sadakane, K., Sato, B., Takada-Hidai, M., Takeda, Y., Watanabe, E., Beers, T. C., Norris, J. E., Ryan, S. G., 2004, ApJS 152, 113
Johnson & Bolte (2002)
Johnson, J. A., Bolte, M. 2002 ApJ 579, 616
Karlsson (2006)
Karlsson, T. 2006 ApJ 641, L41
Ishimaru & Wanajo (1999)
Ishimaru, Y. & Wanajo, S. 1999 ApJ 511, L33
Ishimaru & Wanajo (2000)
Ishimaru, Y. & Wanajo, S. 2000 The First Stars A. Weiss, T. Abel and V. Hill eds. Springer, p. 189
Ishimaru et al. (2004)
Ishimaru, Y., Wanajo, S., Aoki, W., Ryan, S. G. 2004 ApJ 600, 47
Lawler et al. (2004)
Lawler, J.E., Sneden, C., Cowan, J.J. 2004 ApJ 604, 850
Lucatello et al. (2005)
Lucatello, S., Tsangarides, S., Beers, T. C., Carretta, E., Gratton, R. G., Ryan, S. G. 2005 ApJ 625, 825
Madau et al. (2004)
Madau, P., Rees, M. J., Volonteri, M., Haardt, F., Oh, S. P. 2004 ApJ 604, 484
McWilliam et al. (1995)
McWilliam, A., Preston, G. W., Sneden, C., Searle, L. 1995, AJ 109, 2757
McWilliam (1998)
McWilliam, A. 1998 AJ 115, 1640
Molaro & Bonifacio (1990)
Molaro, P. & Bonifacio, P. 1990, A&A 236, L5
Nissen & Schuster (1997)
Nissen, P.E. & Schuster, W.J. 1997, A&A 326, 751
Norris et al. (1993)
Norris, J.E., Peterson, R.C., Beers, T.C. 1993 ApJ 415, 797
Norris et al. (2001)
Norris, J.E., Ryan, S.G., Beers, T.C. 2001 ApJ 561, 1034
Norris et al. (2007)
Norris, J., et al., 2007, ApJ, submitted
Plez et al. (1992)
Plez, B., Brett, J.M., Nordlund, Å. 1992, A&A 256, 551
Prantzos et al. (1990)
Prantzos, N., Hashimoto, M., Nomoto, K., 1990, A&A 234, 211
Primas et al. (1994)
Primas, F., Molaro, P., Castelli, F. 1994 A&A 290, 885
Qian & Wasserburg (2000)
Qian Y.Z., Wasserburg G. J. 2000, Phys. Rep., 333, 77
Qian & Wasserburg (2007)
Qian Y.Z., Wasserburg G. J. 2007, Phys. Rep., 444, 237
Ryan et al. (1991)
Ryan, S.G., Norris, J.E., & Bessell, M.S. 1991, AJ 102, 303
Ryan et al. (1996)
Ryan, S. G., Norris, J. E., Beers, T. C 1996, ApJ 471, 254
Schlegel et al. (1998)
Schlegel, D.J., Finkbeiner, D.P., Davis, M. 1998, ApJ 500, 525
Sivarani et al. (2006)
Sivarani, T., Beers, T.C., Bonifacio, P., et al.
2006, A&A, 459, 125 (Paper X)
Sneden et al. (1996)
Sneden, C., McWilliam, A, Preston, G. W., Cowan, J. J., Burris, D. L., Armosky, B. J. 1996 ApJ 467, 819
Spite (1967)
Spite, M., 1967 Ann. Astrophys. 30, 211
Spite & Spite (1978)
Spite, M., Spite, F. 1978 A&A 67, 23
Spite et al. (2005)
Spite, M., Bonifacio, P., Cayrel, R., et al. 2005, in IAU Symposium 228, Eds. V. Hill, P. François & F. Primas, Cambridge Univ. Press, p. 185
Stephens (1999)
Stephens, A. 1999, AJ 117, 1771
Travaglio et al. (2004)
Travaglio C., Gallino R., Arnone E., Cowan J., Jordan F., Sneden C. 2004 ApJ 601, 864
Truran (1981)
Truran, J. W. 1981 A&A 97, 391
Wanajo et al. (2001)
Wanajo, S., Kajino, T., Mathews, G. J., Otsuki, K. 2001 ApJ 554, 578
Woosley & Hoffmann (1992)
Woosley, S.E., Hoffmann, R.D. 1992 ApJ 395, 202
Yoshida et al. (2003)
Yoshida, N., Sokasian, A., Hernquist, L., Springel, V. 2003 ApJ 598, 73 |
Abstract
We use the effective field theory of dark energy to explore the space of modified gravity models which are capable of driving the present cosmic acceleration.
We identify five universal functions of cosmic time, which are enough to
describe a wide range of theories containing a single scalar degree of freedom in addition to the metric. The first function (the effective equation of state) uniquely controls the expansion history of the universe. The remaining four functions appear in the linear cosmological perturbation equations, but only three of them regulate the growth history of large scale structures. We propose a specific
parameterization of such functions in terms of characteristic coefficients that serve as coordinates in the space of modified gravity theories and can be effectively constrained by the next generation of cosmological experiments.
We address in full generality the problem of the soundness of the theory against ghost-like and gradient instabilities and show how the space of non-pathological models shrinks when
a more negative equation of state parameter is considered. This analysis allows us to locate a large class of stable theories that violate the null energy condition (i.e. super-acceleration models) and to
recover, as particular subsets, various models considered so far.
Finally, under the assumption that the true underlying cosmological model is the $\Lambda$ Cold Dark Matter scenario, and relying on the figure of merit with which
relevant observables of the model will be constrained by future experiments, we demonstrate that the space spanned by stable gravitational theories that will not be statistically rejected by data is actually much smaller than
the space enclosed by the likelihood contours.
Phenomenology of dark energy:
exploring the space of theories with future redshift surveys
Federico Piazza${}^{\rm a}$, Heinrich Steigerwald${}^{\rm b}$ and Christian Marinoni${}^{\rm b,c}$
${}^{\rm a}$ APC, (CNRS-Université Paris 7), 10 rue Alice Domon et Léonie Duquet, 75205 Paris, France
PCCP, 10 rue Alice Domon et Léonie Duquet, 75205 Paris, France
${}^{\rm b}$ Aix Marseille Université, CNRS, CPT, UMR 7332, 13288 Marseille, France.
Université de Toulon, CNRS, CPT, UMR 7332, 83957 La Garde, France.
${}^{\rm c}$ Institut Universitaire de France, 103, bd. Saint-Michel, F-75005 Paris, France
Contents
1 Introduction
2 The effective field theory of dark energy
2.1 The EFT action
2.1.1 Background sector
2.1.2 Perturbation sector
2.2 Theoretical viability and phenomenological constraints
2.2.1 Stability and speed of sound
2.2.2 Initial conditions, BBN constraints and screening
3 Resolving the background degeneracy
3.1 Fiducial background model
3.2 Theories reproducing the same expansion history
3.3 Dimensionless EFT couplings
4 Exploring the space of theories
4.1 The case for a constant $\bar{w}$
4.2 The parameterization of the couplings
4.3 Stability in theory space
5 Forecasts
5.1 The growth index
5.2 $\Lambda$CDM vs. modified gravity: comparing the growth rates
6 Conclusions
1 Introduction
Understanding the present accelerating phase of the universe in terms of fundamental physics is an outstanding challenge of modern cosmology.
Alternatives to the $\Lambda$CDM paradigm, the so-called dark energy models, date back at least to the discovery [1, 2] of the acceleration itself. In the simplest quintessence models [3, 4, 5, 6, 7], the required negative pressure is produced by a scalar field rolling down its potential. Dark energy models well beyond minimally coupled quintessence have also been explored. Proposals range from coupled quintessence [8, 9, 10, 11, 12, 13, 14, 15] to extra-dimensional mechanisms [16, 17], from higher curvature terms in the Lagrangian (such as in $F(R)$ [18, 19, 20, 22, 21] or $F(G)$ [23] theories) and torsion terms [24] to models of non-local [25, 26, 27] and massive [28, 29, 30] gravity, or departures from the geometrical description of general relativity [31, 32] (see [34, 33, 35] for reviews).
In the presence of a modification of gravity,
effects of dark energy are expected not only at the background level—for instance, on the Hubble rate $H(t)$—but also at the level of cosmological perturbations—for instance, in the
growth rate $f(t)$ of large scale structures.111The linear growth rate function is $f(t)=\frac{d\ln\delta}{d\ln a(t)}$, where $\delta$ represents the fractional overdensity of matter.
Current observational programs, which constrain $f(t)$ with a $15\%$ precision in the redshift range up to $z\sim 1.3$ [36, 37, 38, 39, 40, 41, 42, 43, 44], already provide interesting evidence for ruling out the most extreme proposals of modified gravity. Future surveys such as DES [45], LSST [46], BigBoss [47], and EUCLID [48, 49] are thus eagerly anticipated, as they will eventually attain the precision needed to challenge even the finest deviations from standard gravity predictions on large cosmological scales.
However, empirical precision is not the only fundamental requirement that a measuring protocol must satisfy in cosmology [50].
Since measurements are indeed estimations of parameters using a theory, testing the soundness of the theoretical framework that links
physical observables to cosmological parameters is also of critical importance. For example,
direct and independent measurements of the dark energy equation of state parameter [51, 52, 53, 54, 55, 56, 57] on the one hand and of $f(t)$ on the other
[36, 37, 38, 39, 40, 41, 42, 43, 44] inevitably lose track of
the specific mechanisms responsible for the possible deviations from general relativity and, ultimately, of the underlying theory. The orthogonal strategy—i.e. assessing, on an individual basis,
the observational viability of specific, more or less physically motivated models—is far from economical, and prevents making model-independent statements about unknown regions of theory space.
An intermediate approach is preferable: developing a consistent formalism that can incorporate all the possible gravitational laws generated by
adding a single scalar degree of freedom to Einstein’s equations.
Such a phenomenological approach should efficiently keep track of both the background behavior (i.e. $H(t)$) and the dynamics on smaller scales (i.e. the cosmological perturbations) responsible, for example, for the growth rate $f(t)$. Several strategies have been proposed along this direction. In this paper we will make use of the effective field theory (EFT) of Dark Energy proposed in [58] and further developed in [59, 60, 61], which extends to late-time cosmology the formalism of the EFT of inflation [62, 63] (see also [64] for the treatment of a more restricted class of models and [65] for a review). Other notable strategies include the parameterized post-Friedmannian formalism [66, 67, 68], the covariant EFT approach in its various versions [69, 70, 71], and the imperfect fluid approach [72].
Instead of parametrizing a specific theory in terms of variables and leaving to observations the task of fixing their amplitudes,
the EFT of dark energy makes it possible to parametrize theories themselves in terms of structural functions of time. The advantage is that one can interpret observations directly in the phase space of theories and not within the framework of a single paradigm. The price to pay is that, instead of fixing numbers, observations should have enough power to fix continuous functions of time.
The apparent intractability of the problem
can be finessed by phenomenological modeling, i.e. by compressing the information contained in the structural functions into a finite set of coefficients. The tricky step is to engineer
a parametrization which can be effectively constrained by observations, yet is flexible and universal enough to allow exploring most of the phase space of stable theories, i.e. models that
do not suffer from ghost instabilities.
In this paper we present a specific way to address this challenge and show how the EFT of dark energy can efficiently confront the observations.
The goal is to show what constraints on theoretical scenarios of modified gravity a precise measurement of $H(t)$ and $f(t)$ can provide.
To this purpose we exploit the growth index formalism developed in [73] which provides a flexible parameterization of the
linear growth rate $f(t)$ and a straightforward mapping of observational constraints from the space of cosmological observables into the phase space of all possible theories.
The paper is organized as follows. In Sec. 2 we review the EFT of dark energy. Our starting point is an action that depends on six structural functions of cosmic time and is general enough to contain all scalar-tensor theories with equations of motion up to second order in derivatives. We quote the background evolution equations as well as the expressions of the characteristic linear perturbation theory observables: the effective Newton constant $G_{\rm eff}$ and the gravitational slip parameter $\gamma_{\rm sl}$. We also discuss what constraints theoretical consistency, as well as gravitational tests on various scales,
put on these structural functions. Only three of them appear in both the background and perturbation sectors of the theory, the remaining ones uniquely characterizing the perturbation equations. However, a residual degeneracy affects the formalism, in that different combinations of the three functions governing the background can reproduce identical expansion histories. In Sec. 3 we show how we tackle this issue: by a simple “back-engineering” procedure, we re-express the three background functions in terms of an effective equation of state parameter $\bar{w}$ [96, 97], which rules the expansion history, and a coupling function that only governs the perturbation sector.
Although the EFT formalism is now reduced to five functions of time, in order to confront observations we still need to implement a parameterization scheme. In Sec. 4 we propose a specific parameterization that is by no means unique, but presents a sufficient degree of generality and phenomenological merit.
Our formalism allows a neat treatment of the stability issue for dark energy theories (Sec. 4.3) and makes it possible to address in full generality the issue of super-acceleration within a consistent, non-pathological theory.
Finally, in Sec. 5 we present the constraints that current measurements of a particular observable of the perturbation sector, the growth rate of cosmic structures,
put on the phase space of gravitational theories and forecast those that a future mission such as EUCLID will provide. Conclusions are drawn in Sec. 6.
2 The effective field theory of dark energy
The formalism at the basis of this paper (see [65] for a review) was first used in Ref. [74] and then applied to Inflation in Refs. [62, 63]. The idea at the basis of the “effective field theory of Inflation” is to consider cosmological solutions as states with spontaneously broken time-translations, and cosmological perturbations as the associated Nambu-Goldstone excitations. This allows a systematic and unambiguous expansion of the inflationary Lagrangian in operators containing an increasing number of cosmological perturbations. The formalism was then extended to quintessence [64] and to the most general class of single scalar field dark energy models in [58] (see also [59]). Later relevant developments include [60, 61].
2.1 The EFT action
Our starting point here is the following action222With a slight change of notation with respect to the above cited works on the EFT of dark energy, here we define our structural functions by pulling out the bare Planck mass squared from the Lagrangian. It is straightforward to compile a dictionary between our notation and that of, e.g., Ref. [60]:
$\displaystyle M^{2}(t)$
$\displaystyle=M_{*}^{2}f(t)\,,\qquad\qquad\lambda=\frac{\Lambda}{M_{*}^{2}f(t)}\,,\ \qquad\qquad{\cal C}=\frac{c}{M_{*}^{2}f(t)}\,,$
(1a)
$\displaystyle\mu^{2}_{2}$
$\displaystyle=\frac{M_{2}^{4}}{M_{*}^{2}f(t)}\,,\ \quad\qquad\mu_{3}=\frac{m_{3}^{3}}{M_{*}^{2}f(t)}\,,\,\qquad\qquad\epsilon_{4}=\frac{2m_{4}^{2}}{M_{*}^{2}f(t)}\,.$
(1b)
The scale of the new coefficients is set by appropriate powers of the Hubble parameter
(e.g. ${\cal C}\sim H^{2}$, $\mu_{3}\sim H$, $\epsilon_{4}\sim 1$). Note also that, in the notations of [60], we have set $m_{4}^{2}=\tilde{m}_{4}^{2}$.
$$\begin{split}\displaystyle S\ =&\displaystyle\ S_{m}[g_{\mu\nu},\Psi_{i}]\ +\ \int\!d^{4}x\,\sqrt{-g}\,\frac{M^{2}(t)}{2}\,\Big{[}R\,-\,2\lambda(t)\,-\,2{\cal C}(t)g^{00}\Big{.}\\
&\displaystyle\left.+\,\mu_{2}^{2}(t)(\delta g^{00})^{2}\,-\,\mu_{3}(t)\,\delta K\delta g^{00}+\,\epsilon_{4}(t)\left(\delta K^{\mu}_{\ \nu}\,\delta K^{\nu}_{\ \mu}-\delta K^{2}+\frac{{}^{(3)}\!R\,\delta g^{00}}{2}\right)\right]\;.
\end{split}$$
(2)
Those who are not already familiar with the formalism may not find the above expression particularly illuminating. While referring to the already cited literature for more details, we summarize a few main features below. The pragmatic reader may also skip directly to the formulas relating the structural functions $M^{2}$, $\lambda$, ${\cal C}$, $\mu^{2}_{2}$, $\mu_{3}$, $\epsilon_{4}$ to the
background and perturbation quantities, in Secs. 2.1.1 and 2.1.2 respectively.
1.
The action is specifically tailored for cosmology and written directly in unitary gauge: the time coordinate $t$ is fixed to be proportional to the scalar field, while the three space coordinates $x^{i}$ are left undetermined. This explains the presence of non-covariant terms such as the perturbation of the lapse component of the metric, $\delta g^{00}\equiv g^{00}+1$, the perturbation of the extrinsic curvature on the $t=const.$ hypersurfaces $\delta K_{\mu\nu}$ and its trace $\delta K$, and the three-dimensional Ricci scalar ${}^{(3)}\!R$ calculated on such a hypersurface. This choice of gauge also explains the “disappearance” of the scalar field: its dynamics is entirely encoded in the metric’s degrees of freedom.
2.
The displayed operators reproduce [60] the entire class of Horndeski [75] or generalized Galileon [76] theories, which are the most general scalar tensor theories not giving rise to derivatives beyond second order in the equations of motion. A large class of dark energy models can be recast in this form (see Table 1).
The structural functions $M^{2}$, $\lambda$, ${\cal C}$, $\mu^{2}_{2}$, $\mu_{3}$, $\epsilon_{4}$ are universal, in the sense that they are unaffected by field redefinitions. As shown in [58, 60], action (2) can always be recast into covariant form.
3.
Violations of the weak equivalence principle are assumed to be negligible or at least irrelevant for the problem at hand. Thus, the action is written in the Jordan Frame, i.e. in terms of the metric to which matter fields (baryons and dark matter, contained inside $S_{m}$) are minimally coupled. This is also the metric of most direct physical interpretation [85].
The main advantage of the above gauge choice is
a neat separation between the terms contributing to the background evolution and those affecting only the perturbations. All terms in the second line of (2) are quadratic in the perturbations and hence do not interfere with the background evolution. The latter is determined uniquely by the three time-dependent functions $M^{2}(t)$, ${\cal C}(t)$ and $\lambda(t)$. This is a general result that is demonstrated to hold [63, 58] for arbitrarily complicated covariant dark energy Lagrangians, as long as they contain only up to one additional scalar degree of freedom.
The relevant equations for the evolution of background and perturbed cosmological observables are summarized in the next subsections.
2.1.1 Background sector
In the EFT formalism the background evolution is governed only by the three functions $M^{2}(t)$, ${\cal C}(t)$ and $\lambda(t)$ appearing in the first line of (2). This applies to all dark energy theories—no matter how complicated—with up to one scalar degree of freedom. This non-trivial result has been proved in [58].
Since the matter fields are essentially constituted by non-relativistic species, we adopt the perfect fluid approximation and set $p_{m}\simeq 0$. In a flat universe, the background Einstein equations derived from (2) read
$$\displaystyle{\cal C}$$
$$\displaystyle=\ \frac{1}{2}(H\mu-\dot{\mu}-\mu^{2})+\frac{1}{2M^{2}}(\rho_{D}+p_{D})\;,$$
(3)
$$\displaystyle\lambda$$
$$\displaystyle=\ \frac{1}{2}(5H\mu+\dot{\mu}+\mu^{2})+\frac{1}{2M^{2}}(\rho_{D}-p_{D})\;.$$
(4)
where $H=\dot{a}(t)/a(t)$ is the Hubble expansion rate and we have defined the non-minimal coupling function333Our coupling $\mu$ corresponds to $\dot{f}/f$ in the notations of [58, 60].
$$\mu\ \equiv\ \frac{d\log M^{2}(t)}{dt}\,.$$
(5)
The dark energy density $\rho_{D}$ and pressure $p_{D}$ are defined by the relations
$$\displaystyle H^{2}$$
$$\displaystyle=\ \frac{1}{3M^{2}(t)}(\rho_{m}+\rho_{D})\;,$$
(6)
$$\displaystyle\dot{H}$$
$$\displaystyle=\ -\frac{1}{2M^{2}(t)}(\rho_{m}+\rho_{D}+p_{D})\;.$$
(7)
Since we are working in the Jordan frame, non-relativistic matter always scales as $\rho_{m}\propto a^{-3}$ by energy-momentum conservation. By contrast, because of the coupling to gravity, the energy-momentum tensor of dark energy is not uniquely defined. From the above relations we can derive the modified conservation equation
$$\dot{\rho}_{D}+3H(\rho_{D}+p_{D})=3\mu M^{2}H^{2}\,.$$
(8)
From the above set of equations it is apparent that in the limit $\mu=0$ ($M^{2}$ constant), the background evolution (i.e. $H(t)$) completely determines the remaining structural functions ${\cal C}(t)$ and $\lambda(t)$. Such a limit was specifically considered in Ref. [64]. However, in general, we need one more input in order to completely determine the background sector. For instance, we can define the equation of state of dark energy $w$ as
$$p_{D}(t)\,=\,w(t)\rho_{D}(t)\,.\qquad$$
(9)
Then, if the functions $w(t)$ and $\mu(t)$ are known, a measurement of the constant $H_{0}$ is enough to close the system and determine the values of ${\cal C}$ and $\lambda$ uniquely.
While the background functions completely determine the expansion history $H(t)$, the converse is not true. Indeed, from Eq. (6) we see that different choices of $M(t)$ and $\rho_{D}(t)$ can give the same $H(t)$ (see also the detailed dynamical analysis of Ref. [86]). In Sec. 3 we outline a strategy to remove this degeneracy: it can be broken by looking at specific observables in the perturbation sector of the theory, which is also affected by the background functions.
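To make the closure of the background sector concrete, here is a minimal pure-Python sketch (ours, not code from the paper) that integrates the modified conservation equation (8), rewritten in $N=\ln a$, together with the Friedmann constraint (6), for a constant equation of state $w$ and a constant coupling $\mu$. Units $H_{0}=1$ and $M^{2}(t_{0})=1$ are assumed, and the function name `integrate_background` is our own.

```python
import math

def integrate_background(w=-1.0, mu=0.0, om_m=0.3, n_steps=2000, lna_max=3.0):
    """Integrate the background from a=1 forward over lna_max e-folds.

    State y = (rho_D, M2), with N = ln a:
      d rho_D / dN = -3 (1 + w) rho_D + 3 mu M2 H      [eq. (8) divided by H]
      d M2    / dN = mu M2 / H                          [mu = d ln M^2 / dt]
    and H^2 = (rho_m + rho_D) / (3 M2), rho_m = 3 om_m exp(-3N),
    in units H0 = 1, M2(t0) = 1.  Returns (a_list, rho_D_list).
    """
    h = lna_max / n_steps

    def deriv(N, y):
        rho_D, M2 = y
        rho_m = 3.0 * om_m * math.exp(-3.0 * N)
        H = math.sqrt((rho_m + rho_D) / (3.0 * M2))
        return (-3.0 * (1.0 + w) * rho_D + 3.0 * mu * M2 * H,
                mu * M2 / H)

    N = 0.0
    y = (3.0 * (1.0 - om_m), 1.0)          # rho_D today fixed by H(t0) = 1
    a_out, rho_out = [1.0], [y[0]]
    for _ in range(n_steps):               # classic RK4 step
        k1 = deriv(N, y)
        k2 = deriv(N + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k1)))
        k3 = deriv(N + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k2)))
        k4 = deriv(N + h,   tuple(yi + h*ki for yi, ki in zip(y, k3)))
        y = tuple(yi + h/6*(c1 + 2*c2 + 2*c3 + c4)
                  for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4))
        N += h
        a_out.append(math.exp(N))
        rho_out.append(y[0])
    return a_out, rho_out
```

In the minimally coupled limit $\mu=0$ the integration reproduces the familiar scaling $\rho_{D}\propto a^{-3(1+w)}$, while a non-zero $\mu$ sources $\rho_{D}$ through the right-hand side of eq. (8).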
2.1.2 Perturbation sector
The evolution of the large-scale, inhomogeneous distribution of matter in the Universe can be computed to a good approximation by using linear cosmological
perturbation theory. The perturbed sector of the EFT formalism involves all operators of action (2) except $\lambda(t)$.
In the standard gravitational paradigm, and assuming the quasi-static approximation (see e.g. [87] for a thorough discussion)
the evolution of density perturbations $\delta$ is given by
$$\ddot{\delta}+2H\dot{\delta}-4\pi G_{N}\rho_{m}\delta=0\,.$$
(10)
This is still true in more general scenarios, at least for those scales in which the quasi-static approximation applies, as long as the Newton gravitational constant $G_{N}$ is replaced by a more complicated function
of both time $t$ and comoving Fourier scale $k$ [60],
$$G_{\rm eff}\ =\ \frac{1}{8\pi M^{2}(1+\epsilon_{4})^{2}}\ \frac{2{\cal C}+\mathring{\mu}_{3}-2\dot{H}\epsilon_{4}+2H\mathring{\epsilon}_{4}+2(\mu+\mathring{\epsilon}_{4})^{2}+\ Y_{\rm IR}}{\ 2{\cal C}+\mathring{\mu}_{3}-2\dot{H}\epsilon_{4}+2H\mathring{\epsilon}_{4}+2\dfrac{(\mu+\mathring{\epsilon}_{4})(\mu-\mu_{3})}{1+\epsilon_{4}}-\dfrac{(\mu-\mu_{3})^{2}}{2(1+\epsilon_{4})^{2}}+\ Y_{\rm IR}\ }\ ,$$
(11)
where we have defined
$$\displaystyle\mathring{\mu}_{3}$$
$$\displaystyle\equiv\ \dot{\mu}_{3}+\mu\mu_{3}+H\mu_{3},$$
(12)
$$\displaystyle\mathring{\epsilon}_{4}$$
$$\displaystyle\equiv\ \dot{\epsilon}_{4}+\mu\epsilon_{4}+H\epsilon_{4}\,,$$
(13)
$$\displaystyle Y_{\rm IR}$$
$$\displaystyle\equiv\ 3\left(\frac{a}{k}\right)^{2}\,\left[2\dot{H}{\cal C}-\dot{H}\mathring{\mu}_{3}+\ddot{H}(\mu-\mu_{3})-2H\dot{H}\mu_{3}-2H^{2}(\mu^{2}+\dot{\mu})\right].$$
(14)
An observational probe that is particularly sensitive to the specific form of $G_{\rm eff}$ is redshift-space distortions. Interestingly,
one could also exploit weak lensing data and constrain an additional observable, the gravitational slip parameter $\gamma_{\rm sl}\equiv\Psi/\Phi$: the ratio of the two gravitational potentials
defined by the perturbed metric in Newtonian gauge
$$ds^{2}=-(1+2\Phi)dt^{2}+a^{2}(t)(1-2\Psi)\delta_{ij}dx^{i}dx^{j}\,.$$
(15)
Although the forecast constraints are less stringent, this observable adds independent lines of evidence and can resolve residual degeneracies in the perturbation sector.
Again, by specializing the general formulas of Ref. [60] to action (2), we find
$$1-\gamma_{\rm sl}\ =\ \frac{(\mu+\mathring{\epsilon}_{4})(\mu+\mu_{3}+2\mathring{\epsilon}_{4})-\epsilon_{4}(2{\cal C}+\mathring{\mu}_{3}-2\dot{H}\epsilon_{4}+2H\mathring{\epsilon}_{4})+\epsilon_{4}\cdot Y_{\rm IR}}{2{\cal C}+\mathring{\mu}_{3}-2\dot{H}\epsilon_{4}+2H\mathring{\epsilon}_{4}+2(\mu+\mathring{\epsilon}_{4})^{2}+Y_{\rm IR}}\,.$$
(16)
The above defined quantities, $G_{\rm eff}$ and $\gamma_{\rm sl}$, only depend on the three non-minimal coupling functions $\mu$, $\mu_{3}$ and $\epsilon_{4}$, which can be taken as appropriate coordinates in the parameter space of modified gravity theories. The coupling $\mu_{2}^{2}$ does not appear in eqs. (11) and (16) but plays a role in the stability and speed of sound of dark energy, to be discussed in Sec. 2.2.1 below. Moreover, our formalism gives a relatively compact expression for the infra-red, scale-dependent term $Y_{\rm IR}$, eq. (14). Interestingly, such a scale dependence can in principle be constrained by future data [88, 89, 90]. We will further discuss the order of magnitude of the couplings and the consequences for the scale dependence of $G_{\rm eff}$ in Secs. 3.3 and 5.1, respectively.
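As an illustration of how the growth rate is extracted in practice, the following sketch (ours, not code from the paper) integrates eq. (10) in the limit $G_{\rm eff}\to G_{N}$ of flat $\Lambda$CDM, rewritten in $N=\ln a$ as $\delta''+(2+H'/H)\,\delta'=\tfrac{3}{2}\Omega_{m}(a)\,\delta$ with $H'/H=-\tfrac{3}{2}\Omega_{m}(a)$; it starts deep in matter domination on the growing mode $\delta\propto a$ and returns $f=\delta'/\delta$ today. The function name `growth_rate_lcdm` is our own.

```python
import math

def growth_rate_lcdm(om_m=0.3, a_init=1e-3, n_steps=4000):
    """Growth rate f = d ln(delta) / d ln(a) at a = 1 for flat LambdaCDM."""
    def rhs(N, y):
        d, dp = y
        a = math.exp(N)
        om = om_m / (om_m + (1.0 - om_m) * a**3)   # Omega_m(a)
        # delta'' = -(2 + H'/H) delta' + (3/2) Omega_m delta, H'/H = -(3/2) Omega_m
        return (dp, -(2.0 - 1.5 * om) * dp + 1.5 * om * d)

    N = math.log(a_init)
    h = -N / n_steps
    y = (a_init, a_init)            # growing mode in matter era: delta ~ a
    for _ in range(n_steps):        # classic RK4 step
        k1 = rhs(N, y)
        k2 = rhs(N + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = rhs(N + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = rhs(N + h,   (y[0] + h*k3[0],  y[1] + h*k3[1]))
        y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        N += h
    return y[1] / y[0]              # f = delta' / delta at a = 1
```

For $\Omega_{m}=0.3$ the result lies close to the well-known approximation $f\simeq\Omega_{m}^{0.55}$; replacing $G_{N}$ by the full $G_{\rm eff}$ of eq. (11) would promote the source term to a time- and scale-dependent coefficient.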
2.2 Theoretical viability and phenomenological constraints
We conclude the presentation of the essential features of the EFT formalism by
discussing some general conditions that modified gravity models must
satisfy if they are to be viable.
With the action written in the standard form (2) one can study the linear dynamics of the propagating scalar degree of freedom once the system has been diagonalized [58, 60].
A theory is said to be sound if such a degree of freedom has neither ghost nor gradient instabilities. Besides the stability criteria, in this section we also comment
on the relation between the reduced Planck mass $M_{\rm Pl}=1/\sqrt{8\pi G_{N}}$ and the present value of the EFT parameter $M^{2}(t)$, and on the constraints imposed by nucleosynthesis
at very early times.
2.2.1 Stability and speed of sound
Stability conditions can be analyzed by isolating the scalar propagating degree of freedom contained in the theory and by writing its Lagrangian.
This is done explicitly in [60] by working directly in unitary gauge and using the ADM formalism.
Equivalently, by a change of coordinates we can make the scalar field’s fluctuations reappear explicitly in the theory. By performing a time diffeomorphism on the action (2),
$$t\rightarrow t+\pi(x)$$
(17)
the spacetime dependent parameter $\pi(x)$ becomes the scalar field fluctuation. The system is then governed by $\pi$ and by the (scalar) metric fluctuations, which can be taken as the gravitational potentials $\Psi$ and $\Phi$ in the Newtonian gauge (15). The quadratic Lagrangian contains at most one derivative per field. At highest order in derivatives, the presence of the non-minimal coupling functions
$\mu$, $\mu_{3}$, $\epsilon_{4}$ produces a mixing between $\pi$ and the gravitational potentials. However, the system can be diagonalized with field redefinitions [58]. Then we are left with the truly propagating degree of freedom $\pi$, decoupled from gravity and governed by the quadratic Lagrangian
$$S_{\pi}=\int\,a^{3}M^{2}\left[A\left(\mu,\mu_{2}^{2},\mu_{3},\epsilon_{4}\right)\ \dot{\pi}^{2}\ -\ B\left(\mu,\mu_{3},\epsilon_{4}\right)\frac{(\vec{\nabla}\pi)^{2}}{a^{2}}\right]\,+\ \dots\,,$$
(18)
where the ellipsis stands for terms of lower order in derivatives, and the coefficients $A$ and $B$ read, explicitly,
$$A\ =\ ({\cal C}+2\mu_{2}^{2})(1+\epsilon_{4})+\frac{3}{4}(\mu-\mu_{3})^{2}\,,$$
(19a)
$$B\ =\ ({\cal C}+\frac{\mathring{\mu}_{3}}{2}-\dot{H}\epsilon_{4}+H\mathring{\epsilon}_{4})(1+\epsilon_{4})-(\mu-\mu_{3})\left(\frac{\mu-\mu_{3}}{4(1+\epsilon_{4})}-\mu-\mathring{\epsilon}_{4}\right)\,.$$
(19b)
Note that the function $\mu_{2}$ does not enter $G_{\rm eff}$ (11) nor the gravitational slip $\gamma_{\rm sl}$ (16), but does appear in the equation of propagation for $\pi$ and is therefore relevant for the stability of the theory.
The expressions $A$ and $B$ must be separately positive. The positivity of $A$ guarantees that there are no ghosts and therefore, ultimately, the soundness of the theory itself (see e.g. the discussion in Ref. [91]).
The positivity of $B$, on the other hand, enforces the gradient stability condition. A gradient instability is less severe than a ghost, and the condition could be relaxed by considering operators containing higher space derivatives, i.e. terms
which become important at some high momentum scale $k_{\rm grad}$.
If such operators appear with the right sign, the exponential growth related to the wrong sign of $B$ could be limited to the infra-red modes of momenta $k<k_{\rm grad}$.
For a thorough discussion of this issue, and the rather tight observational constraints related to it, we refer the reader to Sec. 2.2 of Ref. [64].
Here, for definiteness, we limit ourselves to the operators displayed in (2), which do not contain higher derivatives,
and thus we will not invoke such a mechanism. In summary, we will simply require both stability conditions
$$\displaystyle A\left(\mu,\mu_{2}^{2},\mu_{3},\epsilon_{4}\right)$$
$$\displaystyle>\ 0\qquad\qquad\text{no-ghost condition}$$
(20a)
$$\displaystyle B\left(\mu,\mu_{3},\epsilon_{4}\right)$$
$$\displaystyle\geq\ 0\qquad\qquad\text{gradient-stability condition}$$
(20b)
to be independently satisfied. Note also that the expression $B$ is proportional to the denominator of $G_{\rm eff}$, eq. (11). Therefore, requiring $B>0$ also protects against a possible pathological behavior of $G_{\rm eff}$.
From (18), the propagation speed $c_{s}$ of dark energy (its “speed of sound”) can be read straightforwardly,
$$c_{s}^{2}\ =\ \frac{B}{A}\,.$$
(21)
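As a minimal numerical sketch of these stability checks (the function names are ours, and the ring-derivative terms of eq. (19b) are treated as plain numerical inputs rather than computed from a specific model):

```python
def A_coeff(C, mu, mu2sq, mu3, eps4):
    """No-ghost (kinetic) coefficient A of eq. (19a)."""
    return (C + 2.0 * mu2sq) * (1.0 + eps4) + 0.75 * (mu - mu3) ** 2

def B_coeff(C, mu, mu3, eps4, mu3_ring, eps4_ring, H, Hdot):
    """Gradient coefficient B of eq. (19b).  The ring-derivative terms
    (mu3_ring, eps4_ring) are taken as given numerical inputs here."""
    return ((C + 0.5 * mu3_ring - Hdot * eps4 + H * eps4_ring) * (1.0 + eps4)
            - (mu - mu3) * ((mu - mu3) / (4.0 * (1.0 + eps4)) - mu - eps4_ring))

def sound_speed_sq(A, B):
    """Speed of sound squared, eq. (21); physical only if A > 0 and B >= 0."""
    return B / A
```

In the minimally coupled limit (all couplings and ring-derivatives set to zero) both coefficients reduce to ${\cal C}$ and $c_{s}^{2}=1$.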
Conditions (20) make $c_{s}^{2}$ a positive number, as expected, but they are not enough to guarantee that the propagation speed be less than the speed of light, i.e. that $c_{s}\leq 1$. It has been debated whether or not one should tolerate super-luminal propagation in a low-energy effective theory. Signals traveling faster than light lead to well-known puzzles and paradoxes, such as the existence of boosted reference frames with respect to which such signals arrive before leaving. These macroscopic difficulties might be accompanied by others, more subtle and “microscopic”:
theories with superluminal propagation have been argued not to admit a consistent ultraviolet completion [92]. In our formalism, large values of the structural function $\mu_{2}^{2}$ automatically guarantee subluminal propagation, as is apparent from eqs. (20) and (21). Otherwise, we will generally keep an open-minded attitude toward superluminality in the present paper.
2.2.2 Initial conditions, BBN constraints and screening
In modified gravity, the attractive interaction between two gravitating bodies is given by the modified Poisson equation for the gravitational potential $\Phi$
$$-\frac{k^{2}}{a^{2}}\Phi=4\pi G_{\rm eff}(t,k)\rho_{m}\delta_{m}\,,$$
(22)
where $G_{\rm eff}$ can be calculated in the quasi static limit by solving the equations for the metric and for the scalar degree of freedom [60], and
is explicitly given in eq. (11). Schematically, it reads
$$G_{\rm eff}(t)\ =\ \frac{1}{8\pi M^{2}(t)}\left[1\,+\,F(\mu,\mu_{3},\epsilon_{4})\right]\,,$$
(23)
where the function $F$ of the couplings and their derivatives vanishes when $\mu$, $\mu_{3}$, $\epsilon_{4}$ go to zero.
It is important to note that the term $F$ inside the square brackets is contributed by the linear dynamics of the long-range propagating field. Indeed, the scalar is not directly coupled to matter in the Jordan frame metric (the one used here),
but is bound to follow the shape of the Newtonian potential due to its mixing with gravity (see e.g. [58]). The Einstein frame picture, when available, is even more intuitive, because there the scalar directly couples to matter and mediates a fifth force.
All such linear effects are constrained to about one part in a thousand by solar system tests. Such a severe bound on the non-minimal couplings $\mu$, $\mu_{3}$, $\epsilon_{4}$ would make them completely irrelevant on cosmological scales. Therefore, we need to assume a non-linear screening mechanism [93, 94] that suppresses the propagation of the scalar degree of freedom in dense environments or in the vicinity of astrophysical sources. Here, and in what follows, we will be cavalier about this issue and simply assume that such a mechanism is at work, produced by appropriate higher order (non-linear) terms that are not displayed in action (2) and/or by the non-linear contributions of the displayed quadratic operators expanded at higher order. This allows us to contemplate, and constrain with cosmological observations, non-minimal couplings $\mu$, $\mu_{3}$, $\epsilon_{4}$ that are a priori of order one on cosmic scales (see also the discussion at the end of Sec. 2.1.2).
The above considerations suggest that the linear effects contained in $F$ in eq. (23) must be extremely close to zero in the solar system due to screening effects.
But since this is where we measure $G_{N}$, we conclude that
$$G_{N}\ \simeq\ \frac{1}{8\pi M^{2}(t_{0})}$$
(24)
in about one part in a thousand. (A more precise estimate would evaluate $M$ at the “local” value of the scalar field, say $\phi_{\rm solar-system}$, which could differ from its cosmological value $\phi_{0}\sim t_{0}$. Such a refinement inevitably involves other details of the theory, such as the precise structure of the operators that are cubic and of higher order in the perturbations, and is thus beyond the scope of this paper.) Of course, this is not enough precision for solar system tests, but it is enough for cosmological observations of large scale structures, and thus for setting our initial conditions at the present time.
In summary, we assume the following integral relation between $M^{2}(t)$ and $\mu(t)$,
$$M^{2}(t)\ =\ M_{Pl}^{2}\ e^{\int_{t_{0}}^{t}\mu(t^{\prime})dt^{\prime}}\,.$$
(25)
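A minimal numerical sketch of the integral relation (25) (the function name is ours; the trapezoidal rule is assumed adequate for smooth $\mu$):

```python
import math

def M2_from_mu(mu, t0, t, n=2000, M_Pl2=1.0):
    """M^2(t) from eq. (25): M_Pl^2 * exp( integral_{t0}^{t} mu(t') dt' ),
    with the integral evaluated by the trapezoidal rule."""
    if t == t0:
        return M_Pl2
    h = (t - t0) / n
    s = 0.5 * (mu(t0) + mu(t)) + sum(mu(t0 + i * h) for i in range(1, n))
    return M_Pl2 * math.exp(s * h)
```

For a constant coupling $\mu=c$ the trapezoidal rule is exact and the result reduces to $M_{\rm Pl}^{2}e^{c(t-t_{0})}$.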
Finally, we should mention that there are limits on the possible excursion of the Newton constant from primordial nucleosynthesis to the present time. Following e.g. Ref. [95], we will require that $M^{2}(t)$ at early times remain within 10% of its value today, say
$$\frac{|M^{2}(z>10)-M^{2}_{\rm Pl}|}{M^{2}_{\rm Pl}}\ \lesssim\ \frac{1}{10}\,.$$
(26)
3 Resolving the background degeneracy
A promising starting point to constrain modified gravity models is the neat separation between background and perturbation quantities offered by the EFT formalism.
However, as we will show below, such a split is not complete yet. Additional analysis is needed if we are to gain insights from data on viable modified gravity models.
As noted, the expansion history $H(t)$ depends only on the functions $M^{2}(t)$, ${\cal C}(t)$ and $\lambda(t)$. More precisely, as shown in Sec. 2.1.1,
these three functions are not independent: two of them suffice to fully determine $H(t)$.
However, the converse is not true, in the sense that a given expansion history $H(t)$ does not fully specify $M^{2}(t)$, ${\cal C}(t)$ and $\lambda(t)$.
This is related to the fact that a cosmic acceleration mechanism can be provided either by the energy momentum tensor of dark energy, in virtue of its negative pressure, or by a strong non-minimal coupling to gravity, the
so-called “self-acceleration” mechanism. These two limiting cases bracket a degeneracy in the background sector, which is apparent by looking directly at the modified Friedmann equation
(6): the dark energy density $\rho_{D}(t)$ and the function $M^{2}(t)$ can compensate each other, so that different choices of these functions can produce an identical expansion history $H(t)$.
Such a degeneracy can be resolved using growth history data, because a time variation of $M^{2}(t)$ is always accompanied by a modification of the perturbed sector of the theory. Explicitly, a time variation of $M^{2}(t)$ switches on the non-minimal coupling $\mu$ defined in eq. (5), which affects the evolution of density inhomogeneities through Eq. (11). Note that all the other non-minimal couplings (such as $\mu_{3}$ and $\epsilon_{4}$) affect only the perturbed sector, and can therefore be fixed independently of the information on the cosmic expansion history.
This is not the case for $\mu$, which cannot be varied arbitrarily without feeding back into the background evolution through (5). In this sense, the separation between background and perturbations of the EFT formalism is not yet complete.
One of the goals of this paper is to show how to efficiently treat the expansion history $H(t)$ and the non-minimal coupling $\mu(t)$ as independent quantities and thus finally achieve a complete separation between background and perturbations. In brief, the strategy is to describe the expansion history with the effective equation of state parameter $\bar{w}(t)$ of a minimally coupled dark energy model. Then, upon fixing the non-minimal coupling $\mu$ using growth history data, we can reconstruct $M^{2}(t)$ and $\rho_{D}$ of the full theory.
3.1 Fiducial background model
Following standard prescriptions in phenomenological studies of dark energy, we model the expansion history of the universe by means of an effective equation of state parameter $\bar{w}(t)$ [96, 97],
$$\bar{p}_{D}(t)\ =\ \bar{w}(t)\,\bar{\rho}_{D}(t)\,.$$
(27)
In the above, $\bar{w}(t)$ is the effective equation of state parameter of a
minimally coupled dark energy model with pressure and energy density $\bar{p}_{D}(t)$ and $\bar{\rho}_{D}(t)$ respectively.
Note that $\bar{w}(t)$ is not directly related to the parameters of any fundamental theory; it is just a fitting degree of freedom of the Friedmann model describing the observed scaling of $\dot{a}/a$. The resulting best fitting model for the expansion history, hereafter called fiducial background model,
is characterized by the following fraction of non-relativistic matter energy density,
$$\bar{\Omega}_{m}(y)\ =\ \frac{\bar{\Omega}_{m}^{0}}{\bar{\Omega}_{m}^{0}+(1-\bar{\Omega}_{m}^{0})\,e^{\,-3\!\int_{0}^{y}\,\bar{w}(y^{\prime})dy^{\prime}}}\,,$$
(28)
where $\bar{\Omega}_{m}^{0}$ and $\bar{w}(y)$ are the standard Friedmann fitting parameters routinely constrained via cosmological experiments.
Note that we have defined
$$y\ \equiv\ \log\frac{a(t)}{a_{0}}\ =\ \log\frac{1}{1+z}\,.$$
(29)
The next step is to find the class of EFT models that reproduce the same expansion rate $H(t)$.
3.2 Theories reproducing the same expansion history
We want to constrain the combined scalings of the coupling $\mu(t)$ and equation of state parameter $w(t)$ (see eq. (9)) which reproduce the fiducial model (27).
For this purpose, let us start by defining the matter density parameter $\Omega_{m}$ as
$$\Omega_{m}(y)\,=\,\frac{\rho_{m}(y)}{\rho_{m}(y)+\rho_{D}(y)}\,.$$
(30)
Since the fiducial background model is minimally coupled, we require that the expansion rate $H$ of each EFT representative
be identical to
$$H^{2}\ =\ \frac{1}{3M_{\rm Pl}^{2}}(\bar{\rho}_{m}+\bar{\rho}_{D})\,$$
(31)
that is to the expansion rate of a Friedmann model augmented by a minimally coupled dark energy component. (We are here assuming that
observations are well described in terms of this effective model.)
Note, also, that one can allow the fiducial background to have a matter energy density $\bar{\rho}_{m}$ different from the physical one, $\rho_{m}$. However, since they both scale as $a^{-3}$, they can differ
at most by a constant factor, $\rho_{m}=\kappa\bar{\rho}_{m}$.
By imposing the equality between the right hand sides of (6) and (31) we obtain
$$\kappa\,\bar{\Omega}_{m}(y)\,M_{\rm Pl}^{2}\ =\ \Omega_{m}(y)\,M^{2}(y)\,.$$
(32)
This identity links the functional space of theories, described via the functions $w(t)$ and $M^{2}(t)$, to a given expansion history, whose information is
condensed in the discrete parameter $\bar{\Omega}_{m}^{0}$ and the function $\bar{w}(t)$.
By differentiating the above and after some straightforward algebra, we obtain the relation
$$\bar{w}(1-\bar{\Omega}_{m})=w(1-\Omega_{m}),$$
(33)
that can be used to express $\Omega_{m}$ in terms of $w$.
By (32) and (33) we get
$$M^{2}(y)\ =\ \kappa\,M^{2}_{\rm Pl}\,\frac{\bar{\Omega}_{m}w}{w-\bar{w}(1-\bar{\Omega}_{m})}\,,$$
(34)
and, by exploiting the initial condition $M(t_{0})=M_{\rm Pl}$ (cf. eq. (24)),
we deduce that the parameter $\kappa$ regulates, at the same time, the ratio between $\Omega_{m}^{0}$ and $\bar{\Omega}_{m}^{0}$ and that between $M^{2}$ and $M_{\rm Pl}^{2}$ during matter domination:
$$\kappa\ =\ \frac{\Omega_{m}^{0}}{\bar{\Omega}_{m}^{0}}\,,\qquad\qquad M^{2}(t\rightarrow 0)\ =\ \kappa\,M^{2}_{\rm Pl}\,.$$
(35)
Because the limits from nucleosynthesis constrain the total displacement of $M^{2}$ from the Planck value to about 10% (26), we simply set $\kappa=1$ (and thus $\Omega_{m}^{0}=\bar{\Omega}_{m}^{0}$) from now on.
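A minimal numerical sketch of the reconstruction (34) (the function name and defaults are ours). Note that the minimally coupled limit $w=\bar{w}$ returns $M^{2}=\kappa M_{\rm Pl}^{2}$ at every epoch, as it should:

```python
def M2_of_y(w, wbar, Omby, kappa=1.0, M_Pl2=1.0):
    """Reconstructed Planck-mass function, eq. (34):
    M^2 = kappa * M_Pl^2 * Omega_m_bar * w / (w - wbar*(1 - Omega_m_bar)),
    with Omby the fiducial matter fraction at the epoch of interest."""
    return kappa * M_Pl2 * Omby * w / (w - wbar * (1.0 - Omby))
```
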
By using (33) we can now directly calculate the derivative of $M^{2}(t)$, and thus the coupling $\mu$. We obtain
$$\mu\ =\ \frac{H(1-\bar{\Omega}_{m})}{w-\bar{w}+\bar{w}\,\bar{\Omega}_{m}}\left[3\bar{w}\left(w-\bar{w}\right)+\frac{d\bar{w}}{dy}-\frac{\bar{w}}{w}\,\frac{dw}{dy}\right]\,,$$
(36)
with initial condition $w(0)=\bar{w}(0)$.
Note that if $\mu=0$ one recovers the fiducial background model, $w=\bar{w}$, i.e. dark energy is correctly modeled in terms of a minimally
coupled (“quintessence”-like) scalar degree of freedom.
To summarize, if an EFT theory reproduces the expansion history given in terms of the phenomenological Friedmann parameters $\bar{\Omega}_{m}(t)$ and $\bar{w}(t)$, then the background functions
$w(y)$ and $\mu(y)$ characterizing that theory need to satisfy the constraint (36). Note that the background sector of EFT models is now completely specified since
the additional coefficients ${\cal C}$ and $\lambda$ are automatically computed once $w$ and $\mu$ are known by means of eqs. (3) and (4).
3.3 Dimensionless EFT couplings
At this stage it is worth making a few general considerations on the size of the
non-minimal couplings $\mu$, $\mu_{2}^{2}$, $\mu_{3}$, $\epsilon_{4}$. Their dimensions are naturally set by the appropriate powers of the Hubble parameter $H$, as suggested by inspecting the action (2).
Clearly, dimensional analysis alone cannot set the amplitude of the couplings. Consider, for example, $\mu$ defined in eq. (5). If $\mu\ll H$, the value of $M^{2}(t)$ barely changes on cosmic time scales. As a consequence (see Eq. (6)) cosmic acceleration is generated by a negative pressure component, and not by a genuine self-acceleration/modified gravity effect.
Incidentally, this situation has a most notable example in $f(R)$ gravity [19]. It is now understood [98] that the observational limits on $\mu$ for $f(R)$ theories are rather strict—of order $\mu\lesssim 10^{-3}H$. This is because the chameleon mechanism [99], on which $f(R)$ theories rely, is scarcely efficient at screening the unobserved effects of modified gravity from the solar system environment.
Such a small value of $\mu$, although perfectly legitimate, relegates the scalar field to the role of a spectator on the largest cosmological scales, and a “standard” cosmological constant term is still required to drive cosmic acceleration.
In this paper, on the contrary, we focus on those scenarios in which the scalar field and the modified gravity mechanism do play the main role in the present cosmic acceleration. We will thus consider, a priori, non-minimal couplings of order one in Hubble units, with no particular hierarchy among them. (This assumption, in general, also has consequences on the scale dependence of $G_{\rm eff}$ and $\gamma_{\rm sl}$; we comment on this later in Sec. 5.1.) This also means that, as already noted in Sec. 2.2.2, we need to assume a screening mechanism other than, and more powerful than, the chameleon, in order to make such large couplings compatible with the physics of the solar system.
Finally, we comment on the time dependence of the couplings. Although the formalism is general enough to allow for any time dependence,
it is natural to assume that modified gravity effects become important only at late epochs, when dark energy dominates. This is, in particular, a characteristic feature of
explicit scalar tensor models of gravity. Indeed, when remapping the EFT Lagrangian back into the usual covariant form, the EFT couplings typically contain one or more time derivatives of a suitably normalized scalar field $\phi$ (see e.g. App. C of Ref. [60]). As expected, these functions become subdominant at early times, during matter domination, when the energy density of $\phi$ is negligible, and thus do not spoil the successful predictions of the standard model of cosmology at early epochs.
The above assumptions translate into the following definitions of the dimensionless coupling functions $\eta$-$\eta_{i}$ operating in the perturbation sector,
$$\displaystyle\mu$$
$$\displaystyle=\ \eta\,H\,(1-\bar{\Omega}_{m})\,,$$
(37a)
$$\displaystyle\mu_{2}^{2}$$
$$\displaystyle=\ \eta_{2}\,H^{2}\,(1-\bar{\Omega}_{m})\,,$$
(37b)
$$\displaystyle\mu_{3}$$
$$\displaystyle=\ \eta_{3}\,H\,(1-\bar{\Omega}_{m})\,,$$
(37c)
$$\displaystyle\epsilon_{4}$$
$$\displaystyle=\ \eta_{4}\,(1-\bar{\Omega}_{m})\,.$$
(37d)
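In code, the dictionary below (a sketch; the function name is ours) makes the structure of eqs. (37a-d) explicit: every coupling is the dimensionless $\eta$ times the appropriate power of $H$ and the trigger factor $1-\bar{\Omega}_{m}$, which vanishes during matter domination:

```python
def couplings_from_eta(H, x, eta, eta2, eta3, eta4):
    """Dimensionful couplings from the dimensionless ones, eqs. (37a-d).
    x is the fiducial matter fraction Omega_m_bar; the factor (1 - x)
    switches modified gravity off during matter domination (x -> 1)."""
    s = 1.0 - x
    return {"mu": eta * H * s,
            "mu2_sq": eta2 * H ** 2 * s,
            "mu3": eta3 * H * s,
            "eps4": eta4 * s}
```
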
Note that all relevant background quantities can be calculated once $\bar{w}$ and $\eta$ are assigned. For instance, let us consider ${\cal C}$, the crucial term for assessing the stability of the modified gravity theories (see next section). From $\bar{w}$ and $\eta$ we can calculate $w$ by solving the following equation [that descends straightforwardly from (36)]
$$\eta\ =\ \frac{1}{w-\bar{w}+\bar{w}\,\bar{\Omega}_{m}}\left[3\bar{w}\left(w-\bar{w}\right)+\frac{d\bar{w}}{dy}-\frac{\bar{w}}{w}\,\frac{dw}{dy}\right]\,,$$
(38)
together with the initial condition $w(0)=\bar{w}(0)$. Then the amplitude of ${\cal C}$ follows from equation (3) [see also eq. (44) below].
In Table 2 we summarize the independent dimensionless functions of our formalism (central column) and relate them to the original coefficients in action (2) (left column) and to their observational effects (right column).
4 Exploring the space of theories
In order to explore the space of theories and confront predictions with observations, we need to choose a particular form for the free functions $\bar{w}(y)$ and $\eta(y)$-$\eta_{i}(y)$. We start by discussing the former, which we set to a constant from now on, although different choices are possible given the generality of the formalism presented in Sec. 3. We then discuss a convenient ansatz for the function $\eta$, which produces closed exact expressions for all the relevant quantities of the formalism. As for the remaining couplings $\eta_{i}$, we explore the simplest ansatz: the constant one. The main goal is to determine the region of stability of
modified gravity theories in the $\eta_{i}$ space for different values of $\bar{w}$.
4.1 The case for a constant $\bar{w}$
A large body of observations currently suggests that a single fitting degree of freedom, i.e. a constant $\bar{w}$,
is more than enough to describe the cosmic expansion rate $H$ with high precision.
More elaborate extensions of the Friedmann paradigm, such as those obtained by allowing an explicit time dependence of the effective equation of state $\bar{w}(t)$,
do not provide higher resolution insights into the expansion history of the universe. As a matter of fact, we have verified that
the minimum $\chi^{2}_{\nu}$ value obtained from the analysis of the Hubble diagram of Supernovae Ia collected in the Union 2 data set, a state of the art
compilation of SNIa data in the redshift range $0<z<1.4$ [101], is nearly identical $(\chi^{2}_{\nu,min}\sim 562$, for $\nu\sim 580$ degrees of freedom) whether
the data are fitted assuming a constant $w$ or a time evolving model of the type $w_{0}+w_{a}(1-a)$. Therefore, from now on, we will set
$$\bar{w}(t)\ =\ \bar{w}\,.$$
(39)
Clearly, new and more precise observations will likely impose the adoption of a more refined fitting scheme. Notwithstanding, all our arguments
can be generalized in a straightforward way should Bayesian evidence point toward the necessity of parameterizing
$\bar{w}$ with more than one parameter. The choice (39) obviously contains the $\Lambda$CDM background behavior as a limit, and easily allows one to consider small deviations from it, which is one of our main targets.
The fractional energy density of non-relativistic matter of the fiducial background model, $\bar{\Omega}_{m}$, proves a useful cosmological “clock” for late time cosmology because it naturally triggers dark energy related effects.
In fact, $\bar{\Omega}_{m}$ interpolates smoothly between the matter domination epoch, when it evaluates to $1$, and the dark energy domination epoch, when it evaluates to $\sim 1/3$. Since we will often use this variable in the following,
we simply label it as $x$:
$$x\ \equiv\ \bar{\Omega}_{m}(\bar{w}={\rm const.})\ =\ \frac{\Omega_{m}^{0}}{\Omega_{m}^{0}+(1-\Omega_{m}^{0})\,e^{-3\bar{w}y}}\,.$$
(40)
Derivatives with respect to $y$ and with respect to $x$ are related by
$$\frac{d}{dy}\ =\ 3\bar{w}x(1-x)\,\frac{d\,\,}{dx}\,.$$
(41)
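The “clock” variable and the chain rule (41) can be sketched as follows (the function name is ours); a numerical derivative confirms $dx/dy=3\bar{w}x(1-x)$, which is the content of eq. (41):

```python
import math

def x_clock(y, Om0, wbar):
    """Fiducial matter fraction for constant wbar, eq. (40); y = ln(a/a0)."""
    return Om0 / (Om0 + (1.0 - Om0) * math.exp(-3.0 * wbar * y))
```
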
Also, from now on, derivatives with respect to $x$ will be indicated with a prime symbol ${}^{\prime}$. Note that,
as a consequence of (39), and by using $x$ as a “time” variable, eq. (38) reduces to
$$\eta(x)\ =\ \frac{3\bar{w}}{w(x)-\bar{w}+\bar{w}x}\left[w(x)-\bar{w}-\bar{w}x\,(1-x)\frac{w^{\prime}(x)}{w(x)}\right]\,.$$
(42)
4.2 The parameterization of the couplings
Here we discuss how we concretely parameterize the—so far, completely general—time dependent couplings $\eta,\eta_{2},\eta_{3}$ and $\eta_{4}$ in terms of a suitable number of real coefficients.
Most of this Section is devoted to the “Brans-Dicke” or “background” sector of the theory, the one obtained by setting the coefficients of all quadratic operators, $\eta_{2}$, $\eta_{3}$ and $\eta_{4}$, to zero. In this limited sector, the issue of stability is easily addressed. If we set to zero the higher order couplings in eq. (19), the terms $A$ and $B$ become identical, and the stability conditions (20a) and (20b) coincide. The stability criterion boils down to
$${\cal C}+\frac{3\mu^{2}}{4}\ >\ 0\,.$$
(43)
By using eqs. (3), (30) and (37a) we obtain
$${\cal C}=\frac{H^{2}(1-x)}{2}\left[3\bar{w}\,\frac{1+w}{w}+(5+3\bar{w}+3\bar{w}x)\frac{\eta}{2}-(1-x)\eta^{2}-3\bar{w}x(1-x)\eta^{\prime}\right]\,,$$
(44)
where $w$ is a solution of (42) with the initial condition $w(x=\Omega_{m}^{0})=\bar{w}$. Note that, in the minimally coupled case ($\eta=0$, $w=\bar{w}$), the only non-vanishing term inside the square brackets is the first one, and ${\cal C}$ reduces to $(\rho_{D}+p_{D})/(2M_{\rm Pl}^{2})$ [64].
Analogously, we can calculate the $\lambda$ background function,
$$\lambda=\frac{H^{2}(1-x)}{2}\left[3\bar{w}\,\frac{1-w}{w}+(7-3\bar{w}-3\bar{w}x)\frac{\eta}{2}+(1-x)\eta^{2}+3\bar{w}x(1-x)\eta^{\prime}\right]\,.$$
(45)
Thanks to the specific recasting of the non-minimal coupling $\mu$ [cf. eq. (37a)], an overall factor of $H^{2}(1-x)$ can be collected from the stability condition (43), which then becomes
$$3\bar{w}\,\frac{1+w}{w}+(5+3\bar{w}+3\bar{w}x)\frac{\eta}{2}+\frac{1}{2}(1-x)\eta^{2}-3\bar{w}x(1-x)\eta^{\prime}\ >\ 0\,.$$
(46)
(Strictly speaking, in the absence of any higher derivative operator, fifth-force experiments would already confine the size of the remaining non-minimal coupling $\eta$ to completely irrelevant and uninteresting values: if no higher derivative operators are present, one can only rely on the chameleon mechanism [99] to screen the unobserved modified gravity effects, and such a mechanism is known [98] not to be powerful enough to produce an appreciable amount of self-acceleration while remaining compatible with solar system tests. Nonetheless, here and in the following we contemplate order one values of the $\eta$ parameter even in the background sector alone, which constitutes the skeleton of our formalism; the tacit assumption is that, if the remaining $\eta_{i}$ are absent, higher derivative cubic operators beyond those displayed in action (2) will produce sufficient “Vainshtein” [93, 94] screening.)
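As an illustrative sketch (the function name and numerical values are ours, not from the paper), the left-hand side of the background stability condition (46) can be evaluated directly:

```python
def background_stable_lhs(x, w, eta, eta_prime, wbar):
    """Left-hand side of the background stability condition (46);
    the theory is stable (in this sector) when the result is positive."""
    return (3.0 * wbar * (1.0 + w) / w
            + (5.0 + 3.0 * wbar + 3.0 * wbar * x) * eta / 2.0
            + 0.5 * (1.0 - x) * eta ** 2
            - 3.0 * wbar * x * (1.0 - x) * eta_prime)
```

In the minimally coupled case ($\eta=0$, $w=\bar{w}$) the expression reduces to $3(1+\bar{w})$, which is negative for $\bar{w}<-1$: a minimally coupled scalar cannot stably super-accelerate, anticipating the discussion of stability in theory space below.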
To proceed further and extract information from growth history data, we need to supply a specific parametric form for the non-minimal coupling parameter $\eta(x)$.
This is a critical step, since it involves the incorporation of model dependent assumptions into the EFT formalism.
Ideally, the chosen model should have a minimal impact on our exploration strategy, that is, it should not severely restrict
the general space of theories to which the EFT formalism gives access. In this sense, a most generic and uninformative ansatz is of the type
$$\eta=Ax^{-1}+B+Cx+Dx^{2}\,.$$
(47)
In the absence of any theoretical prior, there are only a couple of practical desiderata that help in tuning the coefficients of (47). It would be useful
1.
to have an exact solution of (42) for $w$ that can be written in closed form.
2.
to have condition (46) satisfied for some values of our parameters even when $\bar{w}$ drops below $-1$, i.e. to maximize the region of the parameter space which represents
a well-behaved theory.
We find that the above requirements are met by specializing (47) to the two-parameter expression
$$\eta(x)\ =\,(\beta-\alpha)\frac{x_{0}}{x}\,+\,[\alpha-\beta(2+x_{0})]\,x\,+\,2\beta\,x^{2}\,,$$
(48)
where we have defined $x_{0}\equiv\Omega_{m}^{0}$.
Such an ansatz allows a rather simple closed expression for the functions $w$,
$$w\ =\ \bar{w}\ \frac{1-x}{1-x\,\exp\left[\frac{(\alpha-\beta+\beta x)(x-x_{0})(1-x)}{3\bar{w}x}\right]}\,,$$
(49)
and $\Omega_{m}$
$$\Omega_{m}=x\exp\left[\frac{\left(\alpha-\beta+\beta x\right)\left(1-x\right)\left(x-x_{0}\right)}{3\bar{w}x}\right]\,,$$
(50)
as required by point 1 above.
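As a quick numerical cross-check of the closed forms (48)-(50) (a Python sketch; function names are ours), one can verify the initial condition $w(x_{0})=\bar{w}$ and the background identity (33), $\bar{w}(1-\bar{\Omega}_{m})=w(1-\Omega_{m})$ with $\bar{\Omega}_{m}=x$, which these expressions satisfy exactly:

```python
import math

def _E(x, alpha, beta, x0, wbar):
    """Common exponential factor of eqs. (49)-(50)."""
    return math.exp((alpha - beta + beta * x) * (x - x0) * (1.0 - x)
                    / (3.0 * wbar * x))

def w_closed(x, alpha, beta, x0, wbar):
    """Closed-form equation of state, eq. (49)."""
    return wbar * (1.0 - x) / (1.0 - x * _E(x, alpha, beta, x0, wbar))

def Omega_m_closed(x, alpha, beta, x0, wbar):
    """Closed-form matter fraction, eq. (50)."""
    return x * _E(x, alpha, beta, x0, wbar)
```
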
The issue of stability for the background sector (point 2) is discussed thoroughly in the next subsection.
Finally, we can now complete the parameterization scheme by exploiting the advantages provided by the EFT formalism in the perturbation sector. Since the functions $\eta_{2}(x)$, $\eta_{3}(x)$ and $\eta_{4}(x)$ do not affect the dynamics of the background, they are essentially unconstrained and they can be chosen arbitrarily. We will thus make the simplest ansatz, the constant one: $\eta_{2}(x)=\eta_{2}$, $\eta_{3}(x)=\eta_{3}$ and $\eta_{4}(x)=\eta_{4}$.
4.3 Stability in theory space
When dealing directly with a covariant Lagrangian, one has to make sure that the background solution of interest is stable under small fluctuations. This analysis cannot be performed without explicitly calculating the
evolution equations for the background. By contrast, the EFT formalism allows one to bypass this lengthy calculation by reducing the issue to solving the algebraic inequalities (20) containing the non-minimal couplings $\eta$-$\eta_{i}$, their time derivatives and the effective equation of state $\bar{w}$.
The quantities $A$ and $B$ defined in (19), which need to be separately positive, can be straightforwardly calculated in terms of our set of parameters $\bar{w},\alpha,\beta,\eta_{2},\eta_{3},\eta_{4}$ using eqs. (37), (44), (48), and (49). We do not quote them explicitly because their expressions are rather involved.
The Brans-Dicke sector of the theory ($\eta_{2}=\eta_{3}=\eta_{4}=0$) is spanned in our approach by the two parameters $\alpha$ and $\beta$. However, when we turn to other directions and switch on also $\eta_{3}$ and/or $\eta_{4}$, it looks more economical to leave $\alpha$ alone as a measure of the Brans-Dicke coupling, effectively setting $\beta=0$. The results in terms of the stability of the theories are illustrated in Figures 1 and 2. The stability regions are derived by imposing conditions (20) at all cosmic epochs. The difference between the two Figures is the assumed value of $\eta_{2}$. Such a parameter can play a relevant role in the no-ghost condition (20a). We thus consider two limiting cases. In Figure 1 we assume a negligible value for $\eta_{2}$, whereas in Figure 2 we consider the opposite limit, and help the stability by turning on a large value of $\eta_{2}$. Effectively, Figure 2 shows the gradient stability regions only.
A universal—and rather expected—feature emerging from our plots is that the region of stability shrinks when $\bar{w}$ decreases.
As a by-product, we can address in full generality the following theoretical problem: under which conditions is it possible to have stable violations of the null energy condition (NEC), and thus an effective equation of state $\bar{w}<-1$? Naively, one might think of achieving this with a minimally coupled scalar field, just by flipping the sign of its kinetic term [102].
However, this turns out to be catastrophic, since such a theory would inevitably develop ghost instabilities [91]. The fact that a minimally coupled scalar cannot produce a super-accelerating equation of state is clearly visible in Figures 1 and 2. When $\bar{w}<-1$, the region of stability always excludes the origin, which means that some non-minimal coupling needs to be switched on in the presence of super-acceleration.
Indeed, the upper-left panels of Figures 1 and 2 show that the Brans-Dicke theory allows for a stable phase of super-acceleration, a fact first noted in [103]. The more negative the equation of state becomes, the greater the values of the non-minimal coupling $\alpha$ that one has to adopt in order
to make the theory stable across all cosmic epochs. Super-acceleration seems particularly challenging for Brans-Dicke theories in the absence of other non-minimal couplings, because
the lower limit on acceptable $\alpha$ values ($\alpha_{min}$) is strongly sensitive to the value of $\bar{w}$: a small decrement of $\bar{w}$
translates into a large increase of $\alpha_{min}$.
Also when the higher order operators $\eta_{3}$ and $\eta_{4}$ are switched on (see panels b), c) and d) of Figure 1), a large value of $\alpha$ is generically required in order to make a theory stable. On the contrary, the parameter $\alpha$ can be interpreted as a small perturbative parameter in the super-acceleration regime if, together with $\eta_{3}$ and $\eta_{4}$, we also switch on (and set to an extremely large value) $\eta_{2}$. Indeed, by inspecting the last three panels of Figure
2, we conclude that choosing a large value of $\eta_{2}$ is enough to enter the regime where $\bar{w}<-1$ with relatively little effort.
In the limiting case of a large value of $\eta_{2}$, it is enough to switch on $\eta_{3}$ and/or $\eta_{4}$ individually to stabilize the super-acceleration phase. Consider, for instance, panel b) of Figure 2. Even with the Brans-Dicke coupling $\alpha$ set to zero, stable theories can be found along the positive $\eta_{3}$-direction. In essence, this is the specific mechanism of super-acceleration studied in [62, 64] with the EFT formalism and in [80] in terms of a specific model. The present work generalizes those
findings to a great extent by enlarging the dimensionality of theory space. For instance, we find that also the “coordinate” $\eta_{4}$ can locate stable super-acceleration theories (Figure 2, panel c)), which might eventually be discriminated because of their different phenomenology (i.e. because of their effects on the growth rate).
5 Forecasts
Suppose that the next generation of large-scale surveys measures
the linear growth rate of matter perturbations $f(t)$ and finds it compatible with that of a $\Lambda$CDM model to a precision of, say, $1\%$, the nominal precision quoted by [49].
How will this observation constrain the space of alternative models for the propagation of gravity on large cosmic scales? Equivalently, which
stable theories of modified gravity, if any, will still be compatible with both background and perturbed-sector data (on linear scales)?
5.1 The growth index
We address this issue by computing the growth index of linear density perturbations.
Briefly, we assume that the quasi-static regime applies and that the linear density perturbation of matter, $\delta$, evolves as
$$\displaystyle\ddot{\delta}+2H\dot{\delta}-4\pi G_{\rm eff}\rho_{m}\delta=0.$$
(51)
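As a quick sanity check of eq. (51), the following sketch integrates it in e-folds $N=\ln a$ for an Einstein-de Sitter toy background with $G_{\rm eff}=G_{N}$, where the equation reduces to $\delta''+\delta'/2=3\delta/2$; the RK4 integrator and variable names are ours, for illustration only:

```python
# Sanity check of eq. (51) in a matter-dominated (Einstein-de Sitter) toy
# background with G_eff = G_N. In e-folds N = ln(a), eq. (51) becomes
#   delta'' + (1/2) delta' = (3/2) delta,
# whose growing mode is delta ~ a, i.e. f = dln(delta)/dln(a) -> 1.
# (RK4 integrator and variable names are ours, for illustration only.)

def rhs(state):
    delta, ddelta = state                      # (delta, delta') at a given N
    return (ddelta, 1.5 * delta - 0.5 * ddelta)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, h = (1.0, 0.0), 1e-3                    # arbitrary initial conditions
for _ in range(10000):                         # integrate over 10 e-folds
    state = rk4_step(state, h)

f = state[1] / state[0]                        # growth rate f = delta'/delta
assert abs(f - 1.0) < 1e-3                     # growing mode dominates
```

After a few e-folds the decaying mode ($\delta\propto a^{-3/2}$) is negligible and the growth rate settles at $f=1$, the matter-domination limit used as the early-time boundary condition below.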
The EFT of dark energy makes characteristic predictions about the general form of the effective gravitational constant (see section 2.1.2 and eq. (11)). However,
the scale dependence of $G_{\rm eff}$, which is encoded in the infrared corrective term $Y_{\rm IR}$ appearing both in the numerator and in the denominator of (11) and (16), deserves
a specific comment. Our formalism allows a relatively compact explicit expression for $Y_{\rm IR}$, eq. (14). Such a term becomes important at large distances—how large depending crucially on the size of the non-minimal couplings. As discussed at the beginning of Sec. 3.3, if the modified gravity mechanism plays a role in the cosmic acceleration, we generally expect the couplings to be of order Hubble to the appropriate power.
In this case, one can see by inspection that the infrared corrections become important only at wavelengths as large as the Hubble scale itself, and thus fall outside the domain of validity of the quasi-static approximation. In other words, if the mechanism of modification of gravity is the one directly responsible for the acceleration of the Universe, the $Y_{\rm IR}$ term is generally irrelevant in the expressions (11) and (16) and the scale dependence effectively drops out of such observables, leaving just an overall dependence on the time variable. Therefore, we will effectively ignore the $Y_{\rm IR}$ term in the present analysis. (There are specific limits in which the $k$-dependence is restored, but they represent a relatively narrow region of our parameter space.)
The case of $f(R)$ theories provides a notable example: if ${\cal C}$ is strictly zero, $\mu\sim 10^{-3}H$ and all other couplings are null, then, by direct inspection of (11), the $Y_{\rm IR}$ term starts becoming important well within the Hubble scale. We also remark that evolving the full dynamics of perturbations in linear regime, without implementing any quasi-static approximation, might eventually be required in view of the accuracy and
scale range of upcoming surveys [81].
The differential equation (51) translates into the following first-order equation for the linear growth rate $f(t)=\frac{d\ln\delta}{d\ln a}$
$$f^{\prime}(x)+f(x)^{2}+\left[2+\frac{H^{\prime}(x)}{H(x)}\right]=\frac{3x}{2}\,\frac{G_{\rm eff}(x)}{G_{N}}\,.$$
(52)
Note that the independent variable is $x$, the fractional non-relativistic matter energy density of the fiducial background model, defined in eq. (40); a prime denotes differentiation with respect to it. On the RHS of the above equation, $G_{\rm eff}$ is time dependent, while the Newton constant $G_{N}$ is related to the present value of $M(t)$ through eq. (24).
It is standard practice to parameterize $f$ by raising to some power $\gamma$ the fractional matter density, which, consistently with our strategy so far, we take to be that of the fiducial background model, $x$. As shown in [73], if $\gamma$ itself is expanded in powers of the logarithm of $x$, the approximation becomes very precise for those models whose growth does not deviate significantly from that of $\Lambda$CDM. Taking just the first two terms of such an expansion, we write
$$f(x)=x^{\gamma_{0}+\gamma_{1}\ln(x)}\,.$$
(53)
This parameterization allows one to compress the information contained in the function
$f(t)$ into the two parameters $\gamma_{0}$ and $\gamma_{1}$, the so-called growth indices. Once the functions $H(t)$ and $G_{\rm eff}$ are specified in any given cosmological model, the coefficients $\gamma_{i}$ can be computed straightforwardly and quickly using the prescriptions of [73].
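As an illustration of eq. (53), a minimal sketch (the function and variable names are ours) evaluating $f$ at the fiducial $x_{0}=0.3$ of Sec. 5, with the GR leading-order value $\gamma_{0}=6/11$ and, for simplicity, $\gamma_{1}=0$:

```python
import math

# Growth-rate parameterization of eq. (53): f(x) = x^(gamma0 + gamma1 ln x),
# with x the fractional matter density of the fiducial background model.
def growth_rate(x, gamma0, gamma1=0.0):
    return x ** (gamma0 + gamma1 * math.log(x))

# Illustration with the GR/LCDM leading-order value gamma0 = 6/11 and,
# for simplicity, gamma1 = 0 (gamma1 is small but nonzero in general):
f_today = growth_rate(0.3, 6.0 / 11.0)      # x0 = 0.3, the fiducial of Sec. 5
assert abs(f_today - 0.519) < 1e-3          # f ~ 0.52 today
assert growth_rate(1.0, 6.0 / 11.0) == 1.0  # deep matter domination: f -> 1
```

The $x\to 1$ limit recovers $f=1$, the matter-domination boundary condition, for any $(\gamma_{0},\gamma_{1})$, which is what makes this two-parameter compression of $f(t)$ well behaved at early times.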
Any deviation of the measured values of these coefficients $\gamma_{i}$ from the standard GR values would be a smoking gun that gravitation possesses additional degrees of freedom.
We simulate future Euclid measurements of the growth rate $f(t)$ as explained in [73]. Essentially, we adopt the figure of merit
quoted in [49] and assume as fiducial a $\bar{w}$CDM model with parameters $x_{0}=0.3$ and $\bar{w}=-1$.
We then perform a maximum likelihood analysis of data with the model given in eq. (53).
The resulting likelihood contours for the growth indices $\gamma_{0}$ and $\gamma_{1}$ are shown in Figure 3.
In the same figure we also show the EFT predictions for the amplitudes of $\gamma_{0}$ and $\gamma_{1}$ that are compatible with both the expected data
and the requirements of theoretical stability.
We stress that, given our background-perturbation separation strategy, any stable EFT model (parameterized in terms of $\alpha,\beta,\eta_{3}$ and $\eta_{4}$)
lying within the likelihood contours of the growth rate data is also a model which identically reproduces the expansion rate of the fiducial $\bar{w}$CDM model.
Figure 3 deserves a few comments.
•
It appears that Brans-Dicke-like models offer the maximum coverage of the likelihood surface, with the parameters $\alpha$ and $\beta$ varying roughly
in the ranges $[0,0.2]$ and $[0,0.15]$, respectively. By contrast, the space of viable theories parameterized by $\eta_{3}$ and $\eta_{4}$ is much more
constrained, as can be seen in panel d).
•
By suitably choosing the EFT parameters in each panel, the growth rate $f$ can be made larger or smaller than the growth rate in the fiducial model.
In most of the stable models which fit the data, however, the linear growth of matter fluctuations is suppressed with respect to the fiducial model.
For example, in Brans-Dicke-like models, in which an increase of $\alpha$ ($\beta$) produces a decrease (increase) of $\gamma_{0}$ ($\gamma_{1}$), stable models
may generate a present-day growth rate $f$ that is up to $12\%$ smaller or $5\%$ higher than that of the fiducial model.
•
Our parameterization suggests that stable models of modified gravity cannot extend into regions where the zeroth-order growth index $\gamma_{0}$ is larger than the $\Lambda$CDM
value. In other words, the stability condition sets an upper limit on the growth index $\gamma_{0}$ a theory can have.
In principle, this might be an artifact of the specific parameterization adopted. In Sec. 5.2 we show that it is indeed a universal feature of viable modified gravity models.
Overall, Figure 3 illustrates a central result of this paper, i.e. the space spanned by stable gravitational theories that are not statistically rejected by data is actually much smaller than
the space enclosed by the likelihood contours. By this, we demonstrate the importance of analyzing data with a general EFT-like formalism: the figure of merit in the $\gamma_{0}-\gamma_{1}$ plane
is naturally boosted, not only by increased observational capabilities, but also by enhanced theoretical understanding.
Given that we show in Sec. 5.2 that the region $\gamma_{0}>\gamma_{0,\Lambda CDM}$ cannot be described by any stable theory with the same background expansion as the $\Lambda$CDM
model, the gross features of the theoretically unbiased confidence region can be safely considered independent of the adopted parameterization scheme.
The finer details of the bounds imposed by theory, however, might be parameter-dependent. The extent to which this affects our conclusions will be explored in a forthcoming paper.
While Figure 3 is more observer-friendly, in the sense that it projects theoretical results about the amplitude of the EFT parameters directly onto the
plane of cosmological observables (i.e. $\gamma_{0}$ and $\gamma_{1}$), Figure 4 is, in the same spirit, more theorist-friendly, since it projects the
growth index likelihood contours directly onto the phase space of modified gravity theories. From this plot one can straightforwardly read off the
set of modified gravity theories that agree with the results of the simulated experiment. For example, while no positive $\beta$ value can be paired with a positive
$\alpha$ parameter, $\eta_{3}$ and $\eta_{4}$ behave differently: once paired with $\alpha$, they fit the data only if their values are negative.
Finally, in Figure 5 we show the EFT parameters that are compatible with current data on the growth rate of structures.
To this end we analyze the measurements of [36, 37, 38, 39, 40, 41, 42, 43]
using the prescriptions detailed in [73]. Statistical degeneracies affect the parameter pairs $\alpha$-$\eta_{3}$ and $\eta_{3}$-$\eta_{4}$. While in the former case the degeneracy is
resolved by imposing stability conditions, in the latter it can be overcome only by increasing the quality and quantity of astronomical data, as shown in Figure 4.
5.2 $\Lambda$CDM vs. modified gravity: comparing the growth rates
Since gravity—even when modified—is an attractive force on small scales, one could naively expect the growth of structures to be enhanced
whenever a non-minimal coupling is switched on. However, the complexity of the system does not permit such a universal statement.
Indeed, as noted when commenting on Figure 3, there is a relevant stable region of parameter space in which the overall growth of structures today, $f(t_{0})$, is smaller than that produced by $\Lambda$CDM.
Still, there is a sense in which “larger couplings” imply “more growth”. Of the growth indices introduced in the last section, $\gamma_{0}$ is the one relevant at very early times, i.e. at the onset of dark energy domination. While the total growth function today, $f(t_{0})$, is the result of a possibly complicated time evolution, $\gamma_{0}$ is only sensitive to the “initial kick” given by the new component—the accelerating mechanism at the time when it starts to be effective. Our plots in Figure 3 suggest that $\Lambda$CDM always corresponds to the highest allowed value of $\gamma_{0}$—and therefore to the lowest “initial tendency” to structure growth—among all the models with the same effective equation of state $\bar{w}=-1$.
Indeed, the areas of stability always lie to the left of the point representing $\Lambda$CDM. This, however, could be an artifact of our specific parameterization as well.
Here we show that, irrespective of the adopted parameterization scheme, the $\Lambda$CDM model always maximizes $\gamma_{0}$. This follows straightforwardly from one of the most notable properties
of the proposed formalism, namely the possibility of directly comparing the growth rates of disparate dark energy theories that share the same effective equation of state $\bar{w}$.
Let us first consider the Brans-Dicke sector of the theory. Models with a given effective equation of state $\bar{w}=const.$ have a specific relation between $w(x)$ and $\eta(x)$, which is given in eq. (42).
Here, at variance with Sec. 4, we choose to define the Brans-Dicke model by means of $w(x)$, and to calculate $\eta(x)$ accordingly. Specifically, we define
$$\ell(x)\ =\ w(x)\,-\,\bar{w}\,,$$
(54)
and note that the parameter $\gamma_{0}$ depends only on the value of $\ell$ at $x=1$, i.e. $\ell_{1}\equiv\ell(1)$. For $\bar{w}=-1$ we find
$$\gamma_{0}\ =\ \frac{6}{11}\,\frac{1-2\ell_{1}}{1-\ell_{1}}\ =\ \frac{6}{11}\,-\,\frac{6\ell_{1}}{11}\,+\,{\cal O}(\ell_{1}^{2})\,.$$
(55)
We can see that $\gamma_{0}$ is a decreasing function of $\ell_{1}$, which reaches the $\Lambda$CDM value $\frac{6}{11}$ at $\ell_{1}=0$.
On the other hand, the no-ghost and gradient-stability conditions (20) coincide in the Brans-Dicke case, and can be expanded in powers of $(1-x)$ at early times. The leading term is linear in $(1-x)$. By requiring its positivity we obtain the stability condition
$$\bar{w}\,\frac{2+2\bar{w}+7\ell_{1}+6\bar{w}\ell_{1}}{\bar{w}+\ell_{1}}\ \geq\ 0\,.$$
(56)
For $\bar{w}=-1$ the above condition reduces to $0<\ell_{1}<1$. So, as long as $w$ does not stray too far from $\bar{w}$ at $x=1$, the stability condition does imply $\gamma_{0}\leq\frac{6}{11}$.
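The algebra behind eqs. (55) and (56) can be checked numerically; the following sketch (variable names are ours, not the paper's notation) verifies that $\gamma_{0}(\ell_{1})$ decreases monotonically from the $\Lambda$CDM value $6/11$ and that condition (56), for $\bar{w}=-1$, holds exactly on the window $0<\ell_{1}<1$:

```python
# Numerical check of eqs. (55)-(56) in the Brans-Dicke sector
# (variable names l1, wbar are ours, not the paper's notation).

def gamma0(l1):
    """Leading-order growth index, eq. (55), for wbar = -1."""
    return (6.0 / 11.0) * (1.0 - 2.0 * l1) / (1.0 - l1)

def stability_lhs(l1, wbar=-1.0):
    """Left-hand side of the stability condition, eq. (56)."""
    return wbar * (2 + 2 * wbar + 7 * l1 + 6 * wbar * l1) / (wbar + l1)

grid = [i / 100.0 for i in range(1, 100)]      # the window 0 < l1 < 1

# gamma0 equals the LCDM value 6/11 at l1 = 0 and decreases with l1 ...
assert abs(gamma0(0.0) - 6.0 / 11.0) < 1e-12
assert all(gamma0(a) > gamma0(b) for a, b in zip(grid, grid[1:]))
# ... so every stable Brans-Dicke model satisfies gamma0 <= 6/11:
assert all(gamma0(l1) <= 6.0 / 11.0 for l1 in grid)

# For wbar = -1 the condition (56) holds exactly on 0 < l1 < 1:
assert all(stability_lhs(l1) >= 0 for l1 in grid)
assert stability_lhs(1.5) < 0                  # violated outside the window
```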
If the other coupling functions, $\eta_{3}(x)$ and $\eta_{4}(x)$, are also switched on, the expression for $\gamma_{0}$ includes more terms, but it still depends on the values of the couplings at $x=1$, that is, on the amplitudes $\eta_{3}(1)$ and $\eta_{4}(1)$. By expanding the stability conditions around $x=1$ we find that the no-ghost condition (20a) is identical to (56), while the gradient-stability condition (20b) becomes more involved, even if it still depends only on the values $\eta_{3}(1)$ and $\eta_{4}(1)$ (and not on their derivatives). Thus the stability problem again reduces to an algebraic inequality between a finite number of parameters, namely the values of $w$, $\eta_{3}$ and $\eta_{4}$ at $x=1$. We have verified numerically that the stability conditions also in this case imply
$$\gamma_{0}\ \leq\ \frac{6}{11}\,.$$
(57)
In summary, for a fixed effective equation of state parameter $\bar{w}=-1$, the $\Lambda$CDM model maximizes the allowed values of the leading order growth index $\gamma_{0}$.
6 Conclusions
The effective field theory of dark energy provides a framework for parameterizing
possible departures from the standard gravitational paradigm on large scales and, at the same time, for interpreting possible non-null detections in terms of fundamental gravitational proposals.
Its most appealing features are twofold: a) the effectiveness with which a general class of gravitational models, obtained by adding a single degree of freedom to general relativity,
can be unified and their predictions systematically compared to data, and b) the straightforward identification and classification of the operators controlling the evolution of the background and of the (linearly) perturbed sector of the universe.
In this paper we have shown that the six universal couplings entering the EFT Lagrangian ($M^{2},\lambda,{\cal C},\mu_{2},\mu_{3}$ and $\epsilon_{4}$, eq. (2)) can be re-expressed
in terms of five dimensionless functions ($\bar{w}$ the effective dark energy equation of state of a Friedmann model of the universe,
the non-minimal “Brans-Dicke” coupling $\eta$, and the higher-order couplings $\eta_{2},\eta_{3}$ and $\eta_{4}$, see Table 2) such that
operators responsible for the expansion and growth histories are distinct and independent. Indeed, while the effective equation of state parameter $\bar{w}$ depends only on the expansion rate of the universe, the four remaining functions are only active in the perturbation sector. Among them, only three ($\eta,\eta_{3}$ and $\eta_{4}$) have a direct influence on the growth rate of cosmological structures at the linear level, and may be responsible
for non-standard structure formation processes.
As a convenient—but by no means unique—choice, we propose to parameterize these functions in terms of a set of six coefficients (see Table 3, right column). We show that this parameterization scheme is general enough to cover the most interesting deviations from standard gravity, and flexible enough to
encompass most theories without pathologies, that is, free from ghost or gradient instabilities. We then use these coefficients as “coordinates” to locate potentially viable gravitational theories
and to test for a non-minimal coupling of the dark sector to gravity (in the Jordan frame). We show that the volume spanned by non-pathological theories is progressively reduced as
the dark energy equation of state parameter decreases towards negative values. Such a “theory-space behavior”, although somewhat expected, has to our knowledge never been quantified before, but it can be easily tracked in our formalism because $\bar{w}$ and the non-minimal couplings $\eta$-$\eta_{i}$ are treated as independent quantities.
Specifically, no minimally coupled scalar field can generate a super-accelerated expansion (i.e. $\bar{w}<-1$). However, in the presence of a sufficiently large $\eta_{2}$, even a small value of the $\eta_{4}$ parameter with all other couplings set to zero is able to stabilize theories with a strongly negative $\bar{w}$ (Figure 2, bottom-right panel). The parameter $\eta_{4}$ is typical [60] of the higher-order Galilean Lagrangians (the “${\cal L}_{4}$” and “${\cal L}_{5}$” terms) [83] and their generalized versions [75, 76]. We also expect it in Gauss-Bonnet $f(G)$-type theories [23].
Besides being instrumental in searching for explicitly covariant models that comply with stability constraints, the formalism developed in this paper also serves as a guide for
interpreting empirical results about the amplitude of relevant cosmological observables. Future surveys of the LSS are expected to constrain, with unprecedented precision, both geometrical (smooth)
and dynamical (perturbed) observables of the cosmological model. In particular the Euclid survey is expected to test the large scale limit of Einstein
gravity by measuring the growth index $\gamma$ to a 1-sigma precision of $<0.02$.
We rely on this predicted figure of merit to forecast how the parameter space of viable gravitational alternatives will shrink under data pressure.
We show that likelihood contours for the parameters $\gamma_{0}-\gamma_{1}$ have a purely formal, phenomenological nature.
Since the growth index is a model-dependent quantity, the statistical limits on its amplitude, if not properly interpreted using a general gravitational formalism such as the EFT,
overestimate the true range of theoretically allowed values. Indeed, we have demonstrated that only a fraction of the statistically allowed region is also physically viable,
i.e. it is spanned by stable theories of gravity. In particular we have found (Sec. 5.2) that once the cosmic expansion rate is fixed to $\bar{w}=-1$,
no viable theory can generate a leading order growth index $\gamma_{0}$ which is larger than that of $\Lambda$CDM.
Most of our conclusions rest on the specific parameterization that we have adopted in order to turn the phenomenological exploration of an additional gravitational degree of freedom
into a tractable problem. A point that needs to be further investigated is thus the degree of generality guaranteed by such a parameterization scheme.
Despite the proof that, irrespective of the adopted parameterization, $\gamma_{0}$ for $\bar{w}=-1$ is never larger than that predicted in a
$\Lambda$CDM model, one is still left with the issue of investigating whether our specific parameterization offers the maximal possible coverage of the empirical likelihood
in the $\gamma_{0}$-$\gamma_{1}$ plane.
Another line of investigation concerns the application of the EFT formalism to interpret perturbation observables other than the growth rate. For example, it would be interesting to
work out which theoretical constraints fundamental physics imposes on the amplitude of the gravitational slip parameters.
Acknowledgments
We acknowledge useful discussions with Luca Amendola, Julien Bel, Paolo Creminelli, Noemi Frusciante, Jerome Gleyzes, Luigi Guzzo, Justin Khoury, David Langlois, Marco Raveri, Alessandra Silvestri, Enrico Trincherini and Filippo Vernizzi.
CM is grateful for support from specific project funding of the Institut Universitaire de France and of the Labex OCEVU.
References
[1]
S. Perlmutter et al. [Supernova Cosmology Project Collaboration],
“Measurements of Omega and Lambda from 42 high redshift supernovae,”
Astrophys. J. 517, 565 (1999)
[astro-ph/9812133].
[2]
A. G. Riess et al. [Supernova Search Team Collaboration],
“Observational evidence from supernovae for an accelerating universe and a cosmological constant,”
Astron. J. 116, 1009 (1998)
[astro-ph/9805201].
[3]
C. Wetterich,
“Cosmology and the Fate of Dilatation Symmetry,”
Nucl. Phys. B 302, 668 (1988)
[4]
V. Sahni and A. A. Starobinsky,
“The Case for a positive cosmological Lambda term,”
Int. J. Mod. Phys. D 9, 373 (2000)
[astro-ph/9904398].
[5]
R. R. Caldwell, R. Dave and P. J. Steinhardt,
“Cosmological Imprint of an Energy Component with General Equation-of-State,”
Phys. Rev. Lett. 80 (1998) 1582
[arXiv:astro-ph/9708069].
[6]
P. Binetruy,
“Cosmological constant versus quintessence,”
Int. J. Theor. Phys. 39, 1859 (2000)
[hep-ph/0005037].
[7]
P. J. E. Peebles and B. Ratra,
“The Cosmological constant and dark energy”
Rev. Mod. Phys. 75, 559 (2003)
[astro-ph/0207347].
[8]
C. Brans and R. H. Dicke,
“Mach’s Principle And A Relativistic Theory Of Gravitation,”
Phys. Rev. 124 (1961) 925.
[9]
L. Amendola,
“Coupled quintessence,”
Phys. Rev. D 62, 043511 (2000)
[astro-ph/9908023].
[10]
J. P. Uzan,
“Cosmological scaling solutions of non-minimally coupled scalar fields,”
Phys. Rev. D 59 (1999) 123510 [arXiv:gr-qc/9903004].
[11]
F. Perrotta, C. Baccigalupi and S. Matarrese,
“Extended quintessence,”
Phys. Rev. D 61 (2000) 023507 [arXiv:astro-ph/9906066].
[12]
A. Riazuelo and J. P. Uzan,
“Cosmological observations in scalar-tensor quintessence,”
Phys. Rev. D 66 (2002) 023525 [arXiv:astro-ph/0107386].
[13]
M. Gasperini, F. Piazza and G. Veneziano,
“Quintessence as a runaway dilaton,”
Phys. Rev. D 65, 023508 (2002)
[gr-qc/0108016].
[14]
L. Perivolaropoulos,
“Equation of state of oscillating Brans-Dicke scalar and extra dimensions,”
Phys. Rev. D 67 (2003) 123516 [arXiv:hep-ph/0301237].
[15]
V. Pettorino and C. Baccigalupi,
“Coupled and Extended Quintessence: theoretical differences and structure formation,”
Phys. Rev. D 77, 103003 (2008)
[arXiv:0802.1086 [astro-ph]].
[16]
G. R. Dvali, G. Gabadadze and M. Porrati,
“4-D gravity on a brane in 5-D Minkowski space,”
Phys. Lett. B 485, 208 (2000)
[hep-th/0005016].
[17]
C. Deffayet, G. R. Dvali and G. Gabadadze,
“Accelerated universe from gravity leaking to extra dimensions,”
Phys. Rev. D 65 (2002) 044023 [arXiv:astro-ph/0105068].
[18]
A. A. Starobinsky,
“A New Type of Isotropic Cosmological Models Without Singularity,”
Phys. Lett. B 91, 99 (1980).
[19]
A. De Felice and S. Tsujikawa,
“f(R) theories,”
Living Rev. Rel. 13, 3 (2010)
[arXiv:1002.4928 [gr-qc]].
[20]
T. P. Sotiriou and V. Faraoni,
“f(R) Theories Of Gravity,”
Rev. Mod. Phys. 82, 451 (2010)
[arXiv:0805.1726 [gr-qc]].
[21]
S. Capozziello, S. Carloni and A. Troisi,
“Quintessence without scalar fields,”
Recent Res. Dev. Astron. Astrophys. 1, 625 (2003)
[astro-ph/0303041].
[22]
L. Amendola, R. Gannouji, D. Polarski and S. Tsujikawa,
“Conditions for the cosmological viability of f(R) dark energy models,”
Phys. Rev. D 75, 083504 (2007)
[23]
S. ’i. Nojiri and S. D. Odintsov,
“Modified Gauss-Bonnet theory as gravitational alternative for dark energy,”
Phys. Lett. B 631, 1 (2005)
[hep-th/0508049].
[24]
B. Li, T. P. Sotiriou and J. D. Barrow,
Phys. Rev. D 83 104017 (2011)
[25]
N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze,
“Non-local modification of gravity and the cosmological constant problem,”
arXiv:hep-th/0209227.
[26]
G. Dvali, S. Hofmann and J. Khoury,
“Degravitation of the cosmological constant and graviton width,”
Phys. Rev. D 76, 084006 (2007)
[hep-th/0703027 [HEP-TH]].
[27]
S. Deser and R. P. Woodard,
“Nonlocal Cosmology,”
Phys. Rev. Lett. 99, 111301 (2007)
[arXiv:0706.2151 [astro-ph]].
[28]
C. de Rham, G. Gabadadze and A. J. Tolley,
“Resummation of Massive Gravity,”
Phys. Rev. Lett. 106, 231101 (2011)
[arXiv:1011.1232 [hep-th]].
[29]
S. F. Hassan and R. A. Rosen,
“Resolving the Ghost Problem in non-Linear Massive Gravity,”
Phys. Rev. Lett. 108, 041101 (2012)
[arXiv:1106.3344 [hep-th]].
[30]
D. Comelli, F. Nesti and L. Pilo,
“Cosmology in General Massive Gravity Theories,”
arXiv:1307.8329 [hep-th].
[31]
F. Piazza,
“The IR-Completion of Gravity: What happens at Hubble Scales?,”
New J. Phys. 11, 113050 (2009)
[arXiv:0907.0765 [hep-th]].
[32]
F. Piazza,
“Infrared-modified Universe,”
arXiv:1204.4099 [gr-qc].
[33]
L. Amendola and S. Tsujikawa,
“Dark Energy: Theory and Observations,”
Cambridge U. P. (2011) 506 p
[34]
E. J. Copeland, M. Sami and S. Tsujikawa,
“Dynamics of dark energy,”
Int. J. Mod. Phys. D 15, 1753 (2006)
[hep-th/0603057].
[35]
T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis,
“Modified Gravity and Cosmology,”
Phys. Rept. 513, 1 (2012)
[arXiv:1106.2476 [astro-ph.CO]].
[36]
L. Guzzo, M. Pierleoni, B. Meneux, E. Branchini, O. L. Fevre, C. Marinoni, B. Garilli and J. Blaizot et al.,
“A test of the nature of cosmic acceleration using galaxy redshift distortions,”
Nature 451, 541 (2008)
[arXiv:0802.1944 [astro-ph]].
[37]
Song, Y.-S., & Percival, W. J. 2009, JCAP, 10, 4
[38]
Davis, M., Nusser, A., Masters, K. L., et al. 2011, MNRAS, 413, 2906
[39]
Blake, C., Glazebrook, K., Davis, T. M., et al. 2011,
MNRAS, 418, 1725
[40]
Reid, B. A., Samushia, L., White, M., et al. 2012, arXiv:1203.6641
[41]
Samushia, L., Percival, W. J., & Raccanelli, A. 2012, MNRAS, 420, 2102
[42]
F. Beutler, C. Blake, M. Colless, D. H. Jones, L. Staveley-Smith, G. B. Poole, L. Campbell and Q. Parker et al.,
“The 6dF Galaxy Survey: z $\approx$ 0 measurement of the growth rate and $\sigma_{8}$,”
arXiv:1204.4725 [astro-ph.CO].
[43]
Turnbull, S. J., Hudson, M. J., Feldman, H. A., et al. 2012, MNRAS,
420, 447
[44]
S. de la Torre, L. Guzzo, J. A. Peacock, E. Branchini, A. Iovino, B. R. Granett, U. Abbas and C. Adami et al.,
“The VIMOS Public Extragalactic Redshift Survey (VIPERS). Galaxy clustering and redshift-space distortions at z=0.8 in the first data release,”
arXiv:1303.2622 [astro-ph.CO].
[45]
Dark Energy Survey (DES) http://www.darkenergysurvey.org
[46]
Large Synoptic Survey Telescope (LSST) http://www.lsst.org/lsst
[47]
D. J. Schlegel, C. Bebek, H. Heetderks, S. Ho, M. Lampton, M. Levi, N. Mostek and N. Padmanabhan et al.,
“BigBOSS: The Ground-Based Stage IV Dark Energy Experiment,”
arXiv:0904.0468 [astro-ph.CO].
[48]
http://www.euclid-ec.org/
[49]
R. Laureijs, J. Amiaux, S. Arduini, J. -L. Augueres, J. Brinchmann, R. Cole, M. Cropper and C. Dabin et al.,
“Euclid Definition Study Report,”
arXiv:1110.3193 [astro-ph.CO].
[50]
P. J. E. Peebles,
“From precision cosmology to accurate cosmology,”
astro-ph/0208037.
[51]
P. Astier et al. A&A, 447, 31 (2006)
[52]
C. Marinoni, C. & A. Buzzi,
Nature, 468, 539 (2010)
[53]
E. Komatsu, K. M. Smith, J. Dunkley, et al.
ApJS, 192, 18 (2011)
[54]
J. Bel and C. Marinoni,
“Determination of the abundance of cosmic matter via the cell count moments of the galaxy distribution,”
A&A in press, arXiv:1210.2365 [astro-ph.CO].
[55]
A. G. Sanchez, C. G. Scóccola, A. J Ross, et al.
“The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological implications of the large-scale two-point correlation function,”
arXiv:1203.6616 [astro-ph.CO].
MNRAS, 425, 415 (2012)
[56]
L. Anderson, E. Aubourg, S. Bailey, et al.
“The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: Baryon Acoustic Oscillations in the Data Release 9 Spectroscopic Galaxy Sample,”
Mon. Not. Roy. Astron. Soc. 427, no. 4, 3435 (2013)
[arXiv:1203.6594 [astro-ph.CO]].
[57]
P. A. R. Ade et al. [Planck Collaboration],
“Planck 2013 results. XVI. Cosmological parameters,”
arXiv:1303.5076 [astro-ph.CO].
[58]
G. Gubitosi, F. Piazza and F. Vernizzi,
“The Effective Field Theory of Dark Energy,”
JCAP 1302, 032 (2013)
[arXiv:1210.0201 [hep-th]].
[59]
J. K. Bloomfield, E. E. Flanagan, M. Park and S. Watson,
“Dark Energy or Modified Gravity? An Effective Field Theory Approach,”
arXiv:1211.7054 [astro-ph.CO].
[60]
J. Gleyzes, D. Langlois, F. Piazza and F. Vernizzi,
“Essential Building Blocks of Dark Energy,”
arXiv:1304.4840 [hep-th].
[61]
J. Bloomfield,
“A Simplified Approach to General Scalar-Tensor Theories,”
arXiv:1304.6712 [astro-ph.CO].
[62]
P. Creminelli, M. A. Luty, A. Nicolis and L. Senatore,
“Starting the Universe: Stable Violation of the Null Energy Condition and Non-standard Cosmologies,”
JHEP 0612, 080 (2006)
[hep-th/0606090].
[63]
C. Cheung, P. Creminelli, A. L. Fitzpatrick, J. Kaplan and L. Senatore,
“The Effective Field Theory of Inflation,”
JHEP 0803, 014 (2008)
[arXiv:0709.0293 [hep-th]].
[64]
P. Creminelli, G. D’Amico, J. Norena and F. Vernizzi,
“The Effective Theory of Quintessence: the $w<-1$ Side Unveiled,”
JCAP 0902, 018 (2009)
[arXiv:0811.0827 [astro-ph]].
[65]
F. Piazza and F. Vernizzi,
“Effective Field Theory of Cosmological Perturbations,”
arXiv:1307.4350 [hep-th].
[66]
T. Baker, P. G. Ferreira, C. Skordis and J. Zuntz,
“Towards a fully consistent parameterization of modified gravity,”
Phys. Rev. D 84, 124018 (2011)
[arXiv:1107.0491 [astro-ph.CO]].
[67]
T. Baker, P. G. Ferreira and C. Skordis,
“The Parameterized Post-Friedmann Framework for Theories of Modified Gravity: Concepts, Formalism and Examples,”
Phys. Rev. D 87, 024015 (2013)
[arXiv:1209.2117 [astro-ph.CO]].
[68]
T. Baker, P. G. Ferreira and C. Skordis,
“The Fast Route to Modified Gravitational Growth,”
arXiv:1310.1086 [astro-ph.CO].
[69]
M. Park, K. M. Zurek and S. Watson,
“A Unified Approach to Cosmic Acceleration,”
Phys. Rev. D 81, 124008 (2010)
[arXiv:1003.1722 [hep-th]].
[70]
J. K. Bloomfield and E. E. Flanagan,
“A Class of Effective Field Theory Models of Cosmic Acceleration,”
arXiv:1112.0303 [gr-qc].
[71]
R. A. Battye and J. A. Pearson,
“Effective action approach to cosmological perturbations in dark energy and modified gravity,”
A Note on Symmetric properties of the multiple $q$-Euler zeta functions
and higher-order $q$-Euler polynomials
Dae San Kim and Taekyun Kim
Abstract.
Recently, T. Kim introduced the higher-order $q$-Euler polynomials and
multiple $q$-Euler zeta functions ([key-8, key-7]). In
this paper, we investigate some symmetric properties of the multiple
$q$-Euler zeta functions and derive various identities concerning
the higher-order $q$-Euler polynomials from these symmetric properties.
1. Introduction
$\,$
For $q\in\mathbb{C}$ with $\left|q\right|<1$, the $q$-number
is defined by $\left[x\right]_{q}=\frac{1-q^{x}}{1-q}$. Note that
${\displaystyle\lim_{q\rightarrow 1}\left[x\right]_{q}}=x$. As is
well known, the Euler polynomials of order $r\in\mathbb{N}$
are defined by the generating function
(1)
$$\left(\frac{2}{e^{t}+1}\right)^{r}e^{xt}=\underset{r\textrm{-times}}{%
\underbrace{\left(\frac{2}{e^{t}+1}\right)\times\cdots\times\left(\frac{2}{e^{%
t}+1}\right)}}e^{xt}=\sum_{n=0}^{\infty}E_{n}^{\left(r\right)}\left(x\right)%
\frac{t^{n}}{n!}.$$
When $x=0$, $E_{n}^{\left(r\right)}=E_{n}^{\left(r\right)}\left(0\right)$
are called the Euler numbers of order $r$ (see [1-13]).
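The classical coefficients generated by (1) can be computed exactly. The sketch below (our own illustration, not part of the paper) builds the ordinary Euler numbers $E_{n}=E_{n}(0)$ from the recursion $E_{n}=0^{n}-\frac{1}{2}\sum_{k<n}\binom{n}{k}E_{k}$, which follows from multiplying (1) (with $r=1$, $x=0$) by $\left(e^{t}+1\right)/2$, then obtains the order-$r$ numbers as an $r$-fold binomial convolution and expands $E_{n}^{\left(r\right)}\left(x\right)=\sum_{k}\binom{n}{k}E_{k}^{\left(r\right)}x^{n-k}$:

```python
from fractions import Fraction
from math import comb

def euler_numbers(nmax):
    """Ordinary Euler numbers E_n = E_n(0), i.e. definition (1) with r = 1,
    from the recursion E_n = 0^n - (1/2) * sum_{k<n} C(n,k) E_k."""
    e = [Fraction(1)]
    for n in range(1, nmax + 1):
        e.append(-sum(comb(n, k) * e[k] for k in range(n)) / 2)
    return e

def higher_order_euler_poly(n, r):
    """Coefficient list [c_0, ..., c_n] (in x) of E_n^{(r)}(x).  The order-r
    Euler numbers are the r-fold binomial convolution of the E_k, and
    E_n^{(r)}(x) = sum_k C(n,k) E_k^{(r)} x^{n-k} (the classical q -> 1
    limit of identity (3) below)."""
    e = euler_numbers(n)
    er = [Fraction(1)] + [Fraction(0)] * n        # order-0 numbers: 1, 0, 0, ...
    for _ in range(r):                            # convolve r times with e
        er = [sum(comb(m, k) * er[k] * e[m - k] for k in range(m + 1))
              for m in range(n + 1)]
    coeffs = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        coeffs[n - k] += comb(n, k) * er[k]
    return coeffs

# E_2^{(2)}(x) = x^2 - 2x + 1/2
print(higher_order_euler_poly(2, 2))
```

For instance, the script prints $E_{2}^{\left(2\right)}\left(x\right)=x^{2}-2x+\frac{1}{2}$ as the coefficient list $\left[\frac{1}{2},\,-2,\,1\right]$.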
In [key-7], T. Kim considered the $q$-extension of the higher-order
Euler polynomials, which is given by the generating function
(2)
$$\displaystyle F_{q}^{\left(r\right)}\left(t,x\right)$$
$$\displaystyle=\left[2\right]_{q}^{r}\sum_{m_{1},\cdots,m_{r}=0}^{\infty}\left(%
-q\right)^{m_{1}+\cdots+m_{r}}e^{\left[m_{1}+\cdots+m_{r}+x\right]_{q}t}$$
$$\displaystyle=\sum_{n=0}^{\infty}E_{n,q}^{\left(r\right)}\left(x\right)\frac{t%
^{n}}{n!}.$$
Note that ${\displaystyle\lim_{q\rightarrow 1}}F_{q}^{\left(r\right)}\left(t,x\right)=%
\left(\frac{2}{e^{t}+1}\right)^{r}e^{xt}={\displaystyle\sum_{n=0}^{\infty}E_{n%
}^{\left(r\right)}\left(x\right)\frac{t^{n}}{n!}}$.
When $x=0$, $E_{n,q}^{\left(r\right)}=E_{n,q}^{\left(r\right)}\left(0\right)$
are called the $q$-Euler numbers of order $r\in\mathbb{N}$.
In [key-11], Rim et al. studied the properties of these $q$-Euler
polynomials.
From (2), we note that
(3)
$$\displaystyle E_{n,q}^{\left(r\right)}\left(x\right)$$
$$\displaystyle=\sum_{l=0}^{n}\dbinom{n}{l}q^{lx}E_{l,q}^{\left(r\right)}\left[x%
\right]_{q}^{n-l}$$
$$\displaystyle=\left(q^{x}E_{q}^{\left(r\right)}+\left[x\right]_{q}\right)^{n},$$
with the usual convention about replacing $\left(E_{q}^{\left(r\right)}\right)^{n}$
by $E_{n,q}^{\left(r\right)}$.
In [key-7], T. Kim considered the multiple $q$-Euler zeta function,
which interpolates the higher-order $q$-Euler polynomials at negative
integers, as follows:
(4)
$$\displaystyle\zeta_{q,r}\left(s,x\right)$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q}^{r}\sum_{m_{1},\cdots,m_{r}=0}^{\infty}\frac{%
\left(-q\right)^{m_{1}+\cdots+m_{r}}}{\left[m_{1}+\cdots+m_{r}+x\right]_{q}^{s}}$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q}^{r}\sum_{m=0}^{\infty}\dbinom{m+r-1}{m}_{q}%
\left(-q\right)^{m}\frac{1}{\left[m+x\right]_{q}^{s}},$$
where $s\in\mathbb{C}$ and $x\in\mathbb{R}$ with $x\neq 0,\,-1,\,-2,\,-3,\,\cdots$.
By using the Cauchy residue theorem and Laurent series, we note that
(5)
$$\zeta_{q,r}\left(-n,x\right)=E_{n,q}^{\left(r\right)}\left(x\right),\quad\textrm{where }n\in\mathbb{Z}_{\geq 0}.$$
Recently, D. S. Kim et al. ([key-5]) introduced some interesting
and important symmetric identities of the $q$-Euler polynomials,
derived from the symmetric properties of the $q$-Euler zeta function.
Indeed, their identities form part of an answer to an open question
on the symmetric identities of Carlitz-type $q$-Euler polynomials
posed in [key-6].
In order to generalize the identities of D. S. Kim et al.
([key-5]), we consider symmetric properties of the multiple
$q$-Euler zeta function. From these symmetric properties, we derive
identities of symmetry for the higher-order $q$-Euler polynomials.
2. Some identities of higher-order $q$-Euler polynomials
$\,$
For $a,\,b\in\mathbb{N}$ with $a\equiv 1$ (mod $2$) and $b\equiv 1$
(mod 2), we observe that
(6)
$$\displaystyle\frac{1}{\left[2\right]_{q^{a}}^{r}}\zeta_{q^{a},r}\left(s,\,bx+%
\frac{b}{a}\left(j_{1}+\cdots+j_{r}\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{n_{1},\cdots,\,n_{r}=0}^{\infty}\frac{\left(-1\right)^{n_{1%
}+\cdots+n_{r}}q^{a\left(n_{1}+\cdots+n_{r}\right)}}{\left[n_{1}+\cdots+n_{r}+%
bx+\frac{b}{a}\left(j_{1}+\cdots+j_{r}\right)\right]_{q^{a}}^{s}}$$
$$\displaystyle=$$
$$\displaystyle\left[a\right]_{q}^{s}\sum_{n_{1},\cdots,\,n_{r}=0}^{\infty}\frac%
{\left(-1\right)^{n_{1}+\cdots+n_{r}}q^{a\left(n_{1}+\cdots+n_{r}\right)}}{%
\left[a\left(n_{1}+\cdots+n_{r}\right)+abx+b\left(j_{1}+\cdots+j_{r}\right)%
\right]_{q}^{s}}$$
$$\displaystyle=$$
$$\displaystyle\left[a\right]_{q}^{s}\sum_{n_{1},\cdots,\,n_{r}=0}^{\infty}\sum_%
{i_{1},\cdots,i_{r}=0}^{b-1}\frac{\left(-1\right)^{{\displaystyle{\textstyle%
\sum_{l=1}^{r}\left(i_{l}+bn_{l}\right)}}}q^{a{\displaystyle{\textstyle\sum_{l%
=1}^{r}\left(i_{l}+bn_{l}\right)}}}}{\left[ab{\displaystyle{\textstyle\sum_{l=%
1}^{r}\left(x+n_{l}\right)}+b{\textstyle\sum_{l=1}^{r}j_{l}}+a{\textstyle\sum_%
{l=1}^{r}i_{l}}}\right]_{q}^{s}}.$$
From (6), we note that
(7)
$$\displaystyle\frac{\left[b\right]_{q}^{s}}{\left[2\right]_{q^{a}}^{r}}\sum_{j_%
{1},\cdots,\,j_{r}=0}^{a-1}\left(-1\right)^{{\textstyle{\displaystyle{%
\textstyle\sum_{l=1}^{r}j_{l}}}}}q^{b{\displaystyle{\textstyle\sum_{l=1}^{r}j_%
{l}}}}\zeta_{q^{a},r}\left(s,\,bx+\frac{b}{a}\left(j_{1}+\cdots+j_{r}\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\left[a\right]_{q}^{s}\left[b\right]_{q}^{s}\sum_{j_{1},\cdots,\,%
j_{r}=0}^{a-1}\sum_{i_{1},\cdots,\,i_{r}=0}^{b-1}$$
$$\displaystyle\times\sum_{n_{1},\cdots,\,n_{r}=0}^{\infty}\frac{\left(-1\right)%
^{{\displaystyle{\textstyle\sum_{l=1}^{r}\left(i_{l}+j_{l}+n_{l}\right)}}}q^{{%
\displaystyle{\textstyle\sum_{l=1}^{r}\left(bj_{l}+ai_{l}+abn_{l}\right)}}}}{%
\left[ab{\displaystyle{\textstyle\sum_{l=1}^{r}\left(x+n_{l}\right)+b{%
\displaystyle{\textstyle\sum_{l=1}^{r}j_{l}}+a{\displaystyle{\textstyle\sum_{l%
=1}^{r}i_{l}}}}}}\right]_{q}^{s}}.$$
By the same method as (7), we get
(8)
$$\displaystyle\frac{\left[a\right]_{q}^{s}}{\left[2\right]_{q^{b}}^{r}}\sum_{j_{1},\cdots,\,j_{r}=0}^{b-1}\left(-1\right)^{\sum_{l=1}^{r}j_{l}}q^{a\sum_{l=1}^{r}j_{l}}\zeta_{q^{b},r}\left(s,\,ax+\frac{a}{b}\left(j_{1}+\cdots+j_{r}\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\left[a\right]_{q}^{s}\left[b\right]_{q}^{s}\sum_{j_{1},\cdots,\,%
j_{r}=0}^{b-1}\sum_{i_{1},\cdots,\,i_{r}=0}^{a-1}$$
$$\displaystyle\times\sum_{n_{1},\cdots,\,n_{r}=0}^{\infty}\frac{\left(-1\right)%
^{{\displaystyle{\textstyle\sum_{l=1}^{r}\left(i_{l}+j_{l}+n_{l}\right)}}}q^{{%
\displaystyle{\textstyle\sum_{l=1}^{r}\left(bi_{l}+aj_{l}+abn_{l}\right)}}}}{%
\left[ab{\displaystyle{\textstyle\sum_{l=1}^{r}\left(x+n_{l}\right)}+a{%
\textstyle\sum_{l=1}^{r}j_{l}}+b{\displaystyle{\textstyle\sum_{l=1}^{r}i_{l}}}%
}\right]_{q}^{s}}.$$
Therefore, by (7) and (8), we obtain the following
theorem.
Theorem 1.
For $a,\,b\in\mathbb{N}$ with $a\equiv 1$ $\textnormal{(mod }2\mathnormal{)}$
and $b\equiv 1$ $\textnormal{(mod }2\mathnormal{)}$, we have
$$\displaystyle\left[2\right]_{q^{b}}^{r}\left[b\right]_{q}^{s}\sum_{j_{1},%
\cdots,j_{r}=0}^{a-1}\left(-1\right)^{{\displaystyle{\textstyle\sum_{l=1}^{r}j%
_{l}}}}q^{b{\displaystyle{\textstyle\sum_{l=1}^{r}j_{l}}}}\zeta_{q^{a},r}\left%
(s,bx+\frac{b}{a}\left(j_{1}+\cdots+j_{r}\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q^{a}}^{r}\left[a\right]_{q}^{s}\sum_{j_{1},%
\cdots,j_{r}=0}^{b-1}\left(-1\right)^{{\displaystyle{\textstyle\sum_{l=1}^{r}j%
_{l}}}}q^{a{\displaystyle{\textstyle\sum_{l=1}^{r}j_{l}}}}\zeta_{q^{b},r}\left%
(s,ax+\frac{a}{b}\left(j_{1}+\cdots+j_{r}\right)\right).$$
$\,$
From (5) and Theorem 1, we obtain the following
theorem.
Theorem 2.
For $n\geq 0$ and $a,\,b\in\mathbb{N}$ with $a\equiv 1$ $\textnormal{(mod }2\mathnormal{)}$
and $b\equiv 1$ $\textnormal{(mod }2\mathnormal{)}$, we have
$$\displaystyle\left[2\right]_{q^{b}}^{r}\left[a\right]_{q}^{n}\sum_{j_{1},%
\cdots,j_{r}=0}^{a-1}\left(-1\right)^{{\displaystyle{\textstyle\sum_{l=1}^{r}j%
_{l}}}}q^{b{\textstyle{\displaystyle{\textstyle\sum_{l=1}^{r}j_{l}}}}}E_{n,q^{%
a}}^{\left(r\right)}\left(bx+\frac{b}{a}\left(j_{1}+\cdots+j_{r}\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q^{a}}^{r}\left[b\right]_{q}^{n}\sum_{j_{1},%
\cdots,j_{r}=0}^{b-1}\left(-1\right)^{{\displaystyle{\textstyle\sum_{l=1}^{r}j%
_{l}}}}q^{a{\displaystyle{\textstyle\sum_{l=1}^{r}j_{l}}}}E_{n,q^{b}}^{\left(r%
\right)}\left(ax+\frac{a}{b}\left(j_{1}+\cdots+j_{r}\right)\right).$$
$\,$
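Theorem 2 can be checked numerically. For $r=1$, relation (5) reads $E_{n,q}\left(x\right)=\left[2\right]_{q}\sum_{m\geq 0}\left(-q\right)^{m}\left[m+x\right]_{q}^{n}$, a series that converges geometrically for $\left|q\right|<1$, so both sides of the identity can be evaluated by truncation. The following Python sketch is our own sanity check, not part of the paper; the truncation length and the sample values of $n$, $q$, $a$, $b$, $x$ are arbitrary choices (with $a$, $b$ odd, as the theorem requires):

```python
def qnum(x, q):
    """q-number [x]_q = (1 - q^x) / (1 - q)."""
    return (1.0 - q**x) / (1.0 - q)

def E(n, q, x, terms=400):
    """r = 1 q-Euler polynomial E_{n,q}(x) from the convergent series
    [2]_q * sum_{m>=0} (-q)^m [m+x]_q^n, valid for |q| < 1."""
    return qnum(2, q) * sum((-q)**m * qnum(m + x, q)**n for m in range(terms))

def side(n, q, a, b, x):
    """One side of Theorem 2 with r = 1; the other side is side(n, q, b, a, x)."""
    return qnum(2, q**b) * qnum(a, q)**n * sum(
        (-1)**j * q**(b * j) * E(n, q**a, b * x + b * j / a) for j in range(a))

n, q, a, b, x = 3, 0.4, 3, 5, 0.7                # a, b odd, as required
print(side(n, q, a, b, x), side(n, q, b, a, x))  # the two values agree
```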
By (3), we easily get
(9)
$$E_{n,q}^{\left(r\right)}\left(x+y\right)=\sum_{i=0}^{n}\dbinom{n}{i}q^{xi}E_{i%
,q}^{\left(r\right)}\left(y\right)\left[x\right]_{q}^{n-i}.$$
Thus, from (9), we have
(10)
$$\displaystyle\sum_{j_{1},\cdots,j_{r}=0}^{a-1}\left(-1\right)^{{\displaystyle{%
\textstyle\sum_{l=1}^{r}j_{l}}}}q^{b{\displaystyle{\textstyle\sum_{l=1}^{r}j_{%
l}}}}E_{n,q^{a}}^{\left(r\right)}\left(bx+\frac{b}{a}\left(j_{1}+\cdots+j_{r}%
\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\sum_{j_{1},\cdots,j_{r}=0}^{a-1}\left(-1\right)^{{\displaystyle{%
\textstyle\sum_{l=1}^{r}j_{l}}}}q^{b{\displaystyle{\textstyle\sum_{l=1}^{r}j_{%
l}}}}\sum_{i=0}^{n}\dbinom{n}{i}q^{ia\left(\frac{b}{a}{\displaystyle{%
\textstyle\sum_{l=1}^{r}j_{l}}}\right)}E_{i,q^{a}}^{\left(r\right)}\left(bx%
\right)\left[\frac{b}{a}\sum_{l=1}^{r}j_{l}\right]_{q^{a}}^{n-i}$$
$$\displaystyle=$$
$$\displaystyle\sum_{j_{1},\cdots,j_{r}=0}^{a-1}\left(-1\right)^{{\displaystyle{%
\textstyle\sum_{l=1}^{r}j_{l}}}}q^{b{\displaystyle{\textstyle\sum_{l=1}^{r}j_{%
l}}}}\sum_{i=0}^{n}\dbinom{n}{i}q^{\left(n-i\right)b{\textstyle{\displaystyle{%
\textstyle\sum_{l=1}^{r}j_{l}}}}}E_{n-i,q^{a}}^{\left(r\right)}\left(bx\right)%
\left[\frac{b}{a}\sum_{l=1}^{r}j_{l}\right]_{q^{a}}^{i}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i=0}^{n}\dbinom{n}{i}\left(\frac{\left[b\right]_{q}}{\left[%
a\right]_{q}}\right)^{i}E_{n-i,q^{a}}^{\left(r\right)}\left(bx\right)$$
$$\displaystyle\times\sum_{j_{1},\cdots,j_{r}=0}^{a-1}\left(-1\right)^{{%
\displaystyle{\textstyle\sum_{l=1}^{r}j_{l}}}}q^{b{\displaystyle{\textstyle%
\sum_{l=1}^{r}\left(n-i+1\right)j_{l}}}}\left[j_{1}+\cdots+j_{r}\right]_{q^{b}%
}^{i}$$
$$\displaystyle=$$
$$\displaystyle\sum_{i=0}^{n}\dbinom{n}{i}\left(\frac{\left[b\right]_{q}}{\left[%
a\right]_{q}}\right)^{i}E_{n-i,q^{a}}^{\left(r\right)}\left(bx\right)S_{n,i,q^%
{b}}^{\left(r\right)}\left(a\right),$$
where
(11)
$$S_{n,i,q^{b}}^{\left(r\right)}\left(a\right)=\sum_{j_{1},\cdots,\,j_{r}=0}^{a-%
1}\left(-1\right)^{\sum_{l=1}^{r}j_{l}}q^{\sum_{l=1}^{r}\left(n-i+1\right)j_{l%
}}\left[j_{1}+\cdots+j_{r}\right]_{q}^{i}.$$
From (10) and (11), we note that
(12)
$$\displaystyle\left[2\right]_{q^{b}}^{r}\left[a\right]_{q}^{n}\sum_{j_{1},%
\cdots,j_{r}=0}^{a-1}\left(-1\right)^{\sum_{l=1}^{r}j_{l}}q^{b\sum_{l=1}^{r}j_%
{l}}E_{n,q^{a}}^{\left(r\right)}\left(bx+\frac{b}{a}\left(j_{1}+\cdots+j_{r}%
\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q^{b}}^{r}\sum_{i=0}^{n}\dbinom{n}{i}\left[a%
\right]_{q}^{n-i}\left[b\right]_{q}^{i}E_{n-i,q^{a}}^{\left(r\right)}\left(bx%
\right)S_{n,i,q^{b}}^{\left(r\right)}\left(a\right).$$
By the same method as (12), we get
(13)
$$\displaystyle\left[2\right]_{q^{a}}^{r}\left[b\right]_{q}^{n}\sum_{j_{1},%
\cdots,j_{r}=0}^{b-1}\left(-1\right)^{\sum_{l=1}^{r}j_{l}}q^{a\sum_{l=1}^{r}j_%
{l}}E_{n,q^{b}}^{\left(r\right)}\left(ax+\frac{a}{b}\left(j_{1}+\cdots+j_{r}%
\right)\right)$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q^{a}}^{r}\sum_{i=0}^{n}\dbinom{n}{i}\left[b%
\right]_{q}^{n-i}\left[a\right]_{q}^{i}E_{n-i,q^{b}}^{\left(r\right)}\left(ax%
\right)S_{n,i,q^{a}}^{\left(r\right)}\left(b\right).$$
Therefore, by (12) and (13), we obtain the following
theorem.
Theorem 3.
For $n\geq 0$ and $a,\,b\in\mathbb{N}$ with $a\equiv 1$ $\textnormal{(mod }2\mathnormal{)}$
and $b\equiv 1$ $\textnormal{(mod }2\mathnormal{)}$, we have
$$\displaystyle\left[2\right]_{q^{b}}^{r}\sum_{i=0}^{n}\dbinom{n}{i}\left[a%
\right]_{q}^{n-i}\left[b\right]_{q}^{i}E_{n-i,q^{a}}^{\left(r\right)}\left(bx%
\right)S_{n,i,q^{b}}^{\left(r\right)}\left(a\right)$$
$$\displaystyle=$$
$$\displaystyle\left[2\right]_{q^{a}}^{r}\sum_{i=0}^{n}\dbinom{n}{i}\left[b%
\right]_{q}^{n-i}\left[a\right]_{q}^{i}E_{n-i,q^{b}}^{\left(r\right)}\left(ax%
\right)S_{n,i,q^{a}}^{\left(r\right)}\left(b\right).$$
$\,$
It is not difficult to show that
(14)
$$\displaystyle e^{\left[x\right]_{q}u}\sum_{m_{1},\cdots,m_{r}=0}^{\infty}q^{m_%
{1}+\cdots+m_{r}}\left(-1\right)^{m_{1}+\cdots+m_{r}}e^{\left[y+m_{1}+\cdots+m%
_{r}\right]_{q}q^{x}\left(u+v\right)}$$
$$\displaystyle=$$
$$\displaystyle e^{-\left[x\right]_{q}v}\sum_{m_{1},\cdots,m_{r}=0}^{\infty}q^{m%
_{1}+\cdots+m_{r}}\left(-1\right)^{m_{1}+\cdots+m_{r}}e^{\left[x+y+m_{1}+%
\cdots+m_{r}\right]_{q}\left(u+v\right)}.$$
By (2) and (14), we get
(15)
$$\displaystyle\sum_{k=0}^{m}\dbinom{m}{k}q^{\left(k+n\right)x}E_{k+n,q}^{\left(%
r\right)}\left(y\right)\left[x\right]_{q}^{m-k}$$
$$\displaystyle=$$
$$\displaystyle\sum_{k=0}^{n}\dbinom{n}{k}E_{m+k,q}^{\left(r\right)}\left(x+y%
\right)q^{\left(n-k\right)x}\left[-x\right]_{q}^{n-k},$$
where $m,n\geq 0$.
Thus, by (15), we see that
(16)
$$\displaystyle\sum_{k=0}^{m}\dbinom{m}{k}q^{kx}E_{k+n,q}^{\left(r\right)}\left(%
y\right)\left[x\right]_{q}^{m-k}$$
$$\displaystyle=$$
$$\displaystyle\sum_{k=0}^{n}\dbinom{n}{k}q^{-kx}E_{m+k,q}^{\left(r\right)}\left%
(x+y\right)\left[-x\right]_{q}^{n-k},$$
where $m,\,n\geq 0$.
ACKNOWLEDGEMENTS. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MOE)
(No. 2012R1A1A2003786).
$\,$
Department of Mathematics, Sogang University, Seoul
121-742, Republic of Korea
E-mail address : dskim@sogang.ac.kr
$\,$
Department of Mathematics, Kwangwoon University, Seoul
139-701, Republic of Korea
E-mail address : tkkim@kw.ac.kr |
Isospin dependent global neutron-nucleus optical model potential
Xiao-Hua Li${}^{1,2}$ (li_xiaohua@sjtu.edu.cn), Lie-Wen Chen${}^{1,3}$ (lwchen@sjtu.edu.cn)
1. Department of Physics, Shanghai Jiao Tong University, Shanghai 200240, China
2. School of Nuclear Science and Technology, University of South
China, Hengyang, Hunan 421001, China
3. Center of Theoretical Nuclear Physics, National Laboratory of
Heavy Ion Accelerator, Lanzhou 730000, China
(December 6, 2020)
Abstract
In this paper, we construct a new phenomenological isospin dependent
global neutron-nucleus optical model potential. Based on the
existing experimental data of elastic scattering angular
distributions for neutron as projectile, we obtain a set of the
isospin dependent global neutron-nucleus optical model potential
parameters, which can basically reproduce the experimental data for
target nuclei from ${}^{24}$Mg to ${}^{242}$Pu with incident energies up
to 200 MeV.
I Introduction
The optical model (OM) is of fundamental importance in many aspects
of nuclear physics PGYONG199841 . It is the basis and starting
point for many nuclear model calculations and also is one of the
most important theoretical approaches in nuclear data evaluations
and analyses. The optical model potential (OMP) parameters are the
key to reproducing the experimental data, such as reaction cross
sections, elastic scattering angular distributions, and so on.
Over the past years, a number of excellent local and global
optical potentials for nucleons have been
proposed NPA7132312003 PR18211901969 PRT201571991 .
Koning and Delaroche NPA7132312003 constructed a set of
global phenomenological nucleon-nucleus optical model potential
parameters (KD OMP), which can reproduce very well the
experimental data for targets from ${}^{24}$Mg to
${}^{209}$Bi with incident energies from 1 keV to 200 MeV; Weppner
et al PRC800346082009 obtained a set of isospin
dependent global nucleon-nucleus optical model potential
parameters (WP OMP) for target nuclei from carbon to
nickel and projectile energies from 30 to 160 MeV; Han
et al PRC810246162010 also obtained a new set of
global phenomenological optical model potential parameters for
nucleon-actinide reactions with energies up to 300 MeV. In the
nucleon optical model potential, the isospin degree of freedom may
play an important role in describing the
experimental data more accurately NP356761962 AOP1242081980 .
Information on the isospin dependence of the nucleon optical model
potential has been shown to be very useful for understanding the
nuclear symmetry
energy PRB1347221964 ; PLB421631972 ; NPA86512011 ; PRC820546072010
which encodes the energy related to the neutron-proton asymmetry
in the equation of state of isospin asymmetric nuclear matter and
is a key quantity for many issues in nuclear physics and
astrophysics (See, e.g., Ref. PR4641132008 ). On the other
hand, the systematics of neutron scattering cross
sections on various nuclei for neutron energies up to several
hundred MeV is an interesting and important topic because of the
concept of an accelerator-driven subcritical (ADS) system, in which
neutrons are produced by bombarding a heavy-element target with a
high-energy proton beam (typically above 1.0 GeV at a current of
about 10 mA) and which serves the dual purpose of energy
multiplication and waste incineration (see, e.g.,
Ref. nuclth0409005 ). Therefore, constructing a more
accurate neutron-nucleus optical model potential is of crucial
importance. The motivation of the present paper is to construct a
new isospin dependent neutron-nucleus optical model potential
that reproduces the experimental data for a wider range of
target nuclei than previous potentials.
This paper is arranged as follows. In Sec. II, we provide a
description of the optical model and the form of the isospin
dependent neutron-nucleus optical potential. Section III presents
the results, and Section IV is devoted to the discussion. Finally, a
summary is given in Sec. V.
II OPTICAL MODEL AND THE FORM OF THE ISOSPIN
DEPENDENT NEUTRON-NUCLEUS OPTICAL POTENTIAL
The phenomenological OMP for neutron-nucleus reaction $V(r,E)$ is
usually defined as follows:
$$V(r,E)=-V_{v}\,f_{r}(r)-i\,W_{v}\,f_{v}(r)+i\,4\,a_{s}\,W_{s}\,\frac{df_{s}(r)%
}{dr}+\lambda\!\!\!{{}^{-}}_{\pi}^{2}\,\frac{V_{so}+iW_{so}}{r}\,\frac{df_{so}%
(r)}{dr}\,2\vec{S}\cdot\vec{l},$$
(1)
where $V_{v}$ and $V_{so}$ are the depths of the real parts of the central
potential and spin-orbit potential, respectively; $W_{v}$, $W_{s}$ and
$W_{so}$ are the depths of the imaginary parts of the volume absorption
potential, surface absorption potential and spin-orbit potential,
respectively. The $f_{i}$ ($i=r,v,s,so$) are the standard Woods-Saxon
shape form factors.
In this work, according to the Lane model NP356761962 , we add
isospin dependent terms to $V_{v}$, $W_{v}$ and $W_{s}$, which can be
parameterized as:
$$\displaystyle V_{v}=V_{0}+V_{1}\,E+V_{2}\,E^{2}+(V_{3}+V_{3L}\,E)\,(N-Z)/A,$$
(2)
$$\displaystyle W_{s}=W_{s0}+W_{s1}\,E+(W_{s2}+W_{s2L}\,E)\,(N-Z)/A$$
(3)
$$\displaystyle W_{v}=W_{v0}+W_{v1}\,E+W_{v2}\,E^{2}+(W_{v3}+W_{v3L}\,E)\,(N-Z)/A$$
(4)
The shape form factors $f_{i}$ can be expressed as
$$\displaystyle f_{i}(r)=[1+\exp((r-r_{i}\,A^{1/3})/a_{i})]^{-1}\qquad\textrm{with}\quad i=r,v,s,so$$
(5)
where
$$\displaystyle r_{i}=r_{i0}+r_{i1}\,A^{-1/3}\qquad\textrm{with}\quad i=r,v,s,so$$
(6)
$$\displaystyle a_{i}=a_{i0}+a_{i1}\,A^{1/3}\qquad\textrm{with}\quad i=r,v,s,so$$
(7)
In the above equations, $A=Z+N$, with $Z$ and $N$ being the numbers
of protons and neutrons of the target nucleus, respectively; $E$ is
the incident neutron energy in the laboratory frame;
$\lambda\!\!\!{{}^{-}}_{\pi}^{2}$ is the squared reduced Compton wavelength of the pion,
for which we use the customary value $\lambda\!\!\!{{}^{-}}_{\pi}^{2}=2.0$ fm${{}^{2}}$.
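For illustration, the Woods-Saxon geometry of Eqs. (5)-(7) can be coded directly. The following Python sketch is our own illustration, not part of the APMN code, and the sample values for the radius and diffuseness parameters are placeholders rather than the fitted parameters of Sec. III:

```python
import math

def form_factor(r, A, r0, r1, a0, a1):
    """Woods-Saxon shape form factor of Eq. (5), with the radius and
    diffuseness parameters made mass dependent as in Eqs. (6)-(7)."""
    R = (r0 + r1 * A**(-1.0/3.0)) * A**(1.0/3.0)   # nuclear radius r_i * A^{1/3}
    a = a0 + a1 * A**(1.0/3.0)                     # diffuseness a_i
    return 1.0 / (1.0 + math.exp((r - R) / a))

# At r = R the form factor is exactly 1/2, for any parameter choice.
A = 56                              # e.g. a mass-56 target (placeholder)
R = 1.17 * A**(1.0/3.0)
print(form_factor(R, A, 1.17, 0.0, 0.6, 0.01))  # -> 0.5
```

The factor is near 1 well inside the nucleus and falls to zero over a surface region of thickness set by the diffuseness $a_{i}$.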
APMN NuclSciEng1411782002 is a code that automatically
searches for the set of optical potential parameters with the smallest $\chi^{2}$ in the energy region $E\leq 300$ MeV by means of an improved
steepest descent algorithm SDalgorithm , and it is suitable for
non-fissile medium-heavy nuclei with light projectiles such as
neutrons, protons, deuterons, tritons, ${}^{3}$He, and $\alpha$ particles. The
optical potential in APMN NuclSciEng1411782002 has
been modified based on the standard BG form PR18211901969 ,
i.e., Woods-Saxon form for the real part potential $V_{v}$ and the
imaginary part potential of volume absorption $W_{v}$; derivative
Woods-Saxon form for the imaginary part potential of surface
absorption $W_{s}$; and Thomas form for the spin-orbit coupling
potentials $V_{so}$ and $W_{so}$. It should be noted that all the
radius and diffuseness parameters in the standard BG optical
potential form are constant, not varying with the mass of target
nuclei. In the present work, they are modified as functions of the
mass of target nuclei according to our former
work NPA7891032007 . We modify the APMN code
according to the present form of the isospin dependent global
neutron-nucleus optical model potential, so that in total 32
adjustable parameters are involved in the code APMN NuclSciEng1411782002 .
In the code APMN NuclSciEng1411782002 , the
compound nucleus elastic scattering is calculated with the
Hauser-Feshbach statistical theory with the Lane-Lynn width fluctuation
correction Lane-Lynn1957 (WHF), which is designed for
medium-heavy target nuclei. For these nuclei, the spacings between
levels are usually small, so the concepts of continuous levels and
level density can properly be used to describe the higher
levels, i.e., those whose excitation energies are higher than the
binding energy of the emitted particle in the compound nucleus. In the code
APMN, the Hauser-Feshbach theory assumes that after the
compound nucleus emits one of the six particles (n, p, d, t,
$\alpha$ and ${}^{3}$He) or a $\gamma$ photon, all discrete levels of
the residual nucleus de-excite only through emission of $\gamma$
photons; no further particle emission is permitted. For
medium-heavy target nuclei, when the incident energy increases to
about 5–7 MeV, the cross sections of the compound nucleus elastic
scattering usually drop to very small values in comparison
with the shape elastic scattering, so there is no need to
consider pre-equilibrium particle emission.
III RESULTS
Our theoretical calculations are carried out within the
non-relativistic framework, and relativistic kinematic corrections
have been neglected because they are usually very small when the
projectile energy satisfies $E\leq 200$ MeV (See, e.g.,
Ref. PRC730546052006 ). In the present work, we choose the
existing experimental data of neutron elastic scattering angular
distributions with the incident energy region from $0.134$ to $225$
MeV for the $45$ target nuclei shown in Table I as the data base,
for searching for global neutron optical potential parameters. These
data shown in Table I have also been used in the work of Koning and
Delaroche NPA7132312003 . In this work, all of the experimental
data used are taken from EXFOR (web address:
http://www.nndc.bnl.gov/). As for the data error, we take the values
given in EXFOR if they are available (we note here that more than
$90\%$ of the data considered in the present work have data errors in
EXFOR); in the case that the data errors are not provided in EXFOR,
we take them as $10\%$ of the corresponding experimental data, which
roughly corresponds to the mean value of the available experimental
data errors.
We use the global neutron optical model potential parameters of
Becchetti and Greenlees PR18211901969 as the starting point. The
value zero has been used as the initial value for the parameters
that are newly added in the code APMN.
Through calculations with the APMN code, we obtain a new set
of isospin dependent global neutron-nucleus optical model potential
parameters, which can be expressed as follows:
$$\displaystyle V_{v}=54.983-0.3278E+0.00031E^{2}-(18.495-0.219E)(N-Z)/A~{}(%
\textrm{MeV})$$
(8)
$$\displaystyle W_{s}=11.846-0.182E-(16.66-0.0141E)(N-Z)/A~{}(\textrm{MeV})$$
(9)
$$\displaystyle W_{v}=-2.5028+0.2144E-0.00126E^{2}-(0.000248-0.2139E)(N-Z)/A~{}(%
\textrm{MeV})$$
(10)
$$\displaystyle a_{r}=0.696-0.00064A^{1/3}~{}(\textrm{fm}),\quad a_{s}=0.563-0.0%
137A^{1/3}~{}(\textrm{fm})$$
(11)
$$\displaystyle a_{v}=0.912+0.0539A^{1/3}~{}(\textrm{fm}),\quad a_{so}=0.677+0.0%
203A^{1/3}~{}(\textrm{fm})$$
(12)
$$\displaystyle r_{r}=1.173-0.002A^{-1/3}~{}(\textrm{fm}),\quad r_{s}=1.278-0.01%
4A^{-1/3}~{}(\textrm{fm})$$
(13)
$$\displaystyle r_{v}=1.266+0.02A^{-1/3}~{}(\textrm{fm}),\quad r_{so}=0.828+0.01%
A^{-1/3}~{}(\textrm{fm})$$
(14)
$$\displaystyle V_{so}=8.797~{}(\textrm{MeV}),\quad W_{so}=0.019~{}(\textrm{MeV})$$
(15)
where the unit of the incident neutron energy $E$ is MeV.
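For convenience, the energy and isospin dependence of the fitted depths, Eqs. (8)-(10), can be evaluated directly. The following Python sketch is our own illustration (the choice of target, n + ${}^{56}$Fe at 14.1 MeV, is arbitrary):

```python
def depths(E, N, Z):
    """Potential depths (MeV) of Eqs. (8)-(10) for incident neutron
    energy E (MeV, lab frame) and a target with N neutrons and Z protons."""
    A = N + Z
    eps = (N - Z) / A                     # isospin asymmetry (N - Z)/A
    Vv = 54.983 - 0.3278*E + 0.00031*E**2 - (18.495 - 0.219*E)*eps
    Ws = 11.846 - 0.182*E - (16.66 - 0.0141*E)*eps
    Wv = -2.5028 + 0.2144*E - 0.00126*E**2 - (0.000248 - 0.2139*E)*eps
    return Vv, Ws, Wv

# Depths for n + 56Fe (N = 30, Z = 26) at E = 14.1 MeV:
Vv, Ws, Wv = depths(14.1, 30, 26)
print(round(Vv, 2), round(Ws, 2), round(Wv, 2))
```

Note how the volume absorption $W_{v}$ is small at low energies while the surface absorption $W_{s}$ dominates, as is typical for phenomenological OMPs.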
With above optical model potential parameters, we calculate the
angular distributions of elastic scattering for many nuclei with
neutron as projectile. Some of the calculated results and
experimental data of elastic scattering angular distributions are
shown in Fig. 1 to Fig. 12 where the corresponding results from KD
OMP are also included for comparison.
IV DISCUSSION
The $\chi^{2}$ represents the deviation of the calculated values
from the experimental data, and in this work it is defined as
follows:
$$\chi^{2}=\frac{1}{N}\sum\limits_{n=1}^{N}\chi_{n}^{2},$$
(16)
with
$$\displaystyle\chi_{n}^{2}=\frac{1}{N_{n,el}}\sum\limits_{i=1}^{N_{n,el}}\frac{%
1}{N_{n,i}}\sum\limits_{j=1}^{N_{n,i}}(\frac{\sigma_{el}^{th}(i,j)-\sigma_{el}%
^{exp}(i,j)}{\Delta\sigma_{el}^{exp}(i,j)})^{2},$$
(17)
where $\chi_{n}^{2}$ is for a single nucleus, and $n$ is the nucleus
sequence number. $\chi^{2}$ is the average value over the $N$ nuclei,
with $N$ denoting the number of nuclei included in the global
parameter search; its value is $45$ in the present work.
$\sigma_{el}^{th}(i,j)$ and $\sigma_{el}^{exp}(i,j)$ are the
theoretical and experimental differential cross sections at the
$j$-th angle with the $i$-th incidence energy, respectively. $\Delta\sigma_{el}^{exp}(i,j)$ is the corresponding experimental data
error. $N_{n,i}$ is the number of angles for the $n$-th nucleus and
the $i$-th incident energy. $N_{n,el}$ is the number of incident
energy points of the elastic scattering angular distribution for the
$n$-th nucleus.
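The nested averages in Eqs. (16)-(17) can be sketched as follows (our own illustration; the data layout, per-nucleus lists of per-energy lists of (theory, experiment, error) tuples, is an assumed convention, not the APMN input format):

```python
def chi2_nucleus(energies):
    """chi_n^2 of Eq. (17) for one nucleus: the average over incident
    energies of the average squared normalized residual over angles.
    `energies` is a list (one entry per incident energy) of lists of
    (sigma_th, sigma_exp, d_sigma_exp) tuples, one per angle."""
    per_energy = [
        sum(((th - ex) / err)**2 for th, ex, err in angles) / len(angles)
        for angles in energies
    ]
    return sum(per_energy) / len(per_energy)

def chi2_global(nuclei):
    """chi^2 of Eq. (16): the average of chi_n^2 over the N nuclei."""
    return sum(chi2_nucleus(nuc) for nuc in nuclei) / len(nuclei)

# One nucleus, one energy, two angles with residual/error ratios 1 and 3:
demo = [[[(11.0, 10.0, 1.0), (13.0, 10.0, 1.0)]]]
print(chi2_global(demo))  # (1 + 9) / 2 = 5.0
```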
By minimizing the average $\chi^{2}$ value for the $45$ nuclei
in Table I with the modified code APMN, we find an optimal
set of global neutron potential parameters, which is given in Eqs.
(8)$-$(15). With the parameters obtained above, we get the average
value of $\chi^{2}=32.27$ for the $45$ nuclei. Using the parameters of
Koning and Delaroche NPA7132312003 , we obtain the average
value of $\chi^{2}=30.11$ for the same $45$ nuclei. Therefore, our
parameter set has almost the same good global quality as that of
Koning and Delaroche for the global neutron potential.
We use both our optical model potential parameters and those of
Koning et al to calculate the $\chi_{n}^{2}$ of each single nucleus for
the $45$ nuclei in Table I. In addition, in order to assess the
predictive power, we also calculate the $\chi_{n}^{2}$ for the other $58$
nuclei listed in Table II, where the incident energy region and
references are also given. The calculated results for all the $103$
nuclei in Table I and Table II are shown in Table III, where our
results are denoted by $\chi_{n1}^{2}$ and those of Koning et
al by $\chi_{n2}^{2}$, respectively.
From Table III, we can see that the value of $\chi_{n1}^{2}$ is
close to that of $\chi_{n2}^{2}$ for the nuclei in Table I; the
value of $\chi_{n1}^{2}$ is much smaller than that of $\chi_{n2}^{2}$ for
the nuclides Os, Pt, Th, U, and Pu; and the value of $\chi_{n1}^{2}$ is
also close to that of $\chi_{n2}^{2}$ for the other nuclei. This
means that our new set of isospin dependent global
neutron-nucleus optical potential parameters reproduces the
experimental data as well as that of Koning et al for
neutron as projectile with targets ranging
from ${}^{24}$Mg to ${}^{209}$Bi, while our results are better than
those of Koning et al for the actinides. We would like to
point out that the number of parameters of our optical model
potential is significantly smaller than that of Koning et
al.
Some of the elastic scattering angular distributions obtained with
our global optical potential parameters and with those of Koning
et al as well as the corresponding experimental data are
plotted in Figs. 1 to 12. The solid lines are the results calculated
with our parameters, the dashed lines are the results with the
parameters of Koning et al, and the points represent the
experimental data. The same symbols are used in all figures. The
experimental data and the corresponding theoretical calculation
results in all figures are in the center of mass (C.M.) system. From
these figures, we can see clearly that our theoretical calculations
reproduce the experimental data as well as those of
Koning et al for targets ranging from ${}^{24}$Mg to
${}^{209}$Bi, except for some energy points of a few nuclei.
From Fig. 1 and Fig. 2, it is seen that neither our theoretical
calculations nor those of Koning et al can reproduce well
the experimental data for some energy points of the targets
${}^{40}$Ca and ${}^{48}$Ca. This is a well-known
problem PRC3825891988 PRC5811181998 for ${}^{40}$Ca. It
may be due to the fact that both ${}^{40}$Ca and ${}^{48}$Ca are doubly
magic nuclei, for which shell effect corrections may be important.
However, both our work and that of Koning et al aim at
constructing global spherical optical model potentials, so the shell
effects are not included in either OMP. In addition, the
effects of giant resonances have been neglected in both theoretical
calculations, and including them could improve the
agreement PRC243691981 .
From Figs. 3-8, one can see obvious deviations between the
experimental data and the calculations
with both our OMP parameters and those of Koning et al for
the nuclei Ba and W. This may be due to the fact that Ba and W
are strongly deformed, so that an effective spherical mean field may
no longer provide a fully adequate description of the
neutron-nucleus many-body problem [2]. Both
OMPs are formulated in a spherical framework, and the effects of
deformation are not considered.
For the actinides, such as Th, U, and Pu, it is seen from Figs.
9-12 that our theoretical results exhibit significantly better
agreement with the experimental data than those of Koning et
al.
V SUMMARY
A new set of isospin-dependent global neutron-nucleus optical
potential parameters has been obtained from the existing
experimental data on neutron elastic scattering angular
distributions, using the modified code APMN [15]. The elastic scattering
angular distributions calculated with the new optical model potential
parameters have been shown to be in good agreement with the
corresponding experimental data for many nuclei from ${}^{24}$Mg to
${}^{242}$Pu in the energy region up to 200 MeV. In particular, our
new global optical model potential parameters give a
significantly improved description of neutron elastic scattering
angular distributions for the actinides, such as Th, U, and Pu,
compared with the existing global optical model potentials in the
literature. Our new global optical model potential can be used to
calculate neutron elastic scattering for different target nuclei,
including those for which experimental data are so far
unavailable.
In the present work, polarization of the projectile is not
considered. Polarized neutron beams play an important
role in nuclear reaction and nuclear structure studies, as well as
in many fundamental issues of particle physics. We plan to investigate
the effect of neutron polarization in future work.
Acknowledgements.
The authors would like to thank Professor Chong-Hai Cai for useful
discussions. This work was supported in part by the NNSF of China
under Grant Nos. 10975097 and 11047157, Shanghai Rising-Star
Program under Grant No. 11QH1401100, and the National Basic
Research Program of China (973 Program) under Contract No.
2007CB815004.
References
(1)
P. G. Yong, RIPL Handbook, Vol. 41, 1998,
http://www-nds.iaea.org/ripl/, Chapter 4: Optical Model
Parameters.
(2)
A. J. Koning, J. P. Delaroche, Nucl. Phys.
A 713 (2003) 231.
(3)
F. D. Becchetti, G. W. Greenless, Phys.
Rev. 182 (1969) 1190.
(4)
R. L. Varner, W. J. Thompson, T. L. Mcabee, E. J. Ludwig, T. B. Clegg, Phys. Rep. 201 (1991) 57.
(5)
S. P. Weppner, R. B. Penney, G. W. Diffendale, G.
Vittorini, Phys. Rev. C 80 (2009) 034608.
(6)
Y. L. Han, Y. L. Xu, H. Y. Liang, H. R. Guo, Q. B. Shen, Phys. Rev. C 81 (2010) 024616.
(7)
A. M. Lane, Nucl. Phys. 35 (1962) 676.
(8)
M. M. Giannini, G. Ricco, A. Zucchiatti,
Ann. Phys. (N.Y.) 124 (1980) 208.
(9)
K. A. Brueckner, J. Dabrowski, Phys. Rev. 134 (1964) B722.
(10)
J. Dabrowski, P. Haensel, Phys. Lett. B 42 (1972) 163
(11)
C. Xu, B. A. Li, L. W. Chen, C. M. Ko, Nucl. Phys. A 865 (2011)
1.
(12)
C. Xu, B. A. Li, L. W. Chen, Phys. Rev. C
82 (2010) 054607.
(13)
B. A. Li, L.W. Chen, C. M. Ko, Phys. Rep. 464 (2008) 113.
(14)
S. S. V. Surya Narayan, Rajesh S. Gowda, and S.
Ganesan, arXiv:nucl-th/0409005.
(15)
Q. B. Shen, Nucl. Sci. Eng. 141 (2002) 78.
(16)
B. Alder, S. Fernbach, M. Rotenberg, Methods in Computational
Physics, Vol. 6 Academic Press, New York/London, 1966, p. 1.
(17)
X. H. Li, C. T. Liang, C. H. Cai, Nucl.
Phys. A 789 (2007) 103.
(18)
A. M. Lane, J. E. Lynn, Proc. Phys. Soc. 24, (1957) 557.
(19)
H. X. An, C. H. Cai, Phys. Rev. C 73 (2006) 054605.
(20)
C. H. Johnson, C. Mahaux, Phys. Rev. C 38
(1988) 2589.
(21)
E. Bauge, J. P. Delaroche, M. Girod, Phys.
Rev. C 58 (1998) 1118.
(22)
M. Pignanelli, H. V. von Geramb, R. Deleo,
Phys. Rev. C 24(1981) 369.
(23)
I. A. Korzh, V. A. Mishchenko, M. V. Pasechnik, N. M. Pravdivyi, I. E. Sanzhur, I. A.
Tockii, Ukr. Fiz. Zh. 13 (1968) 1781.
(24)
M. B. Fedorov and T. I. Jakovenko,
Ukr. Fiz. Zh. 15 (1970) 1905.
(25)
T. Schweitzer, D. Seeliger, S. Unholzer,
Kernenergie, 20 (1977) 174.
(26)
D. T. Stewart, W. M. Currie, J. Martin, P. W.
Martin, Nuclear Structure Conference, Vol. 1, Antwerp, 1965, p.
509.
(27)
A. Virdis,
Conference on Nuclear Data for Science and Technology, Antwerp,
1982, p. 769.
(28)
I. A. Korzh, N. T. Skljar,
Ukr. Fiz. Zh. 8 (1963) 1389.
(29)
I. A. Korzh, S. Kopytin, M. V. Pasechnik, N. M. Pravdivyj, N. T. Skljar, I. A. Totskiy,
J. Nucl. Energ. 19 (1965) 141.
(30)
J. H. Towle, W. B. Gilboy,
Nucl. Phys. 39 (1962) 300.
(31)
D. J. Bredin,
Phys. Rev. 135 (1964) B412.
(32)
B. Holmqvist, T. Wiedling, V. Benzi, L. Zuffi,
Nucl. Phys. A 150 (1970) 105.
(33)
D. Winterhalter,
Nucl. Phys. 43 (1963) 339.
(34)
R. L. Becker, W. G. Guindon, G. J. Smith,
Nucl. Phys. 89 (1966) 154.
(35)
K. Tsukada, S. Tanaka, M. Maruyama, Y. Tomita,
Conference on Reactor Physics Seminar, Vol. 1, Vienna, 1961, p.
75.
(36)
S. Tanaka, K. Tsukada, M. Maruyama, Y. Tomita,
Conference On Nuclear Data for Reactors, Vol. 2, Helsinki, 1970,
p. 317.
(37)
W. E. Kinney, F. G. Perey,
Oak Ridge National Laboratory Report No. ORNL-4516, 1970.
(38)
J. Martin, D. T. Stewart, W. M. Currie,
Nucl. Phys. A 113 (1968) 564.
(39)
X. Wang, Y. Wang, D. Wang, J. Rapaport,
Nucl. Phys. A 465 (1987) 483.
(40)
J. D. Brandenberger, A. Mittler, M. T. McEllistrem,
Nucl. Phys. A 196 (1972) 65.
(41)
G. Boerker, R. Boettger, H. J. Brede, H. Klein, W. Mannhart, R. L. Siebert,
Conference on Nuclear Data for Science and Technology, Mito, 1988,
p. 193.
(42)
C. S. Whisnant, J. H. Dave, C. R. Gould,
Phys. Rev. C 30 (1984) 1435.
(43)
P. H. Stelson, R. L. Robinson, H. J. Kim, J. Rapaport, G. R. Satchler,
Nucl. Phys. 68 (1965) 97.
(44)
L. F. Hansen, F. S. Dietrich, B. A. Pohl, C. H. Poppe, C. Wong,
Phys. Rev. C 31 (1985) 111.
(45)
M. M. Nagadi, C. R. Howell, W. Tornow, G. J. Weisel, M. A. Al-Ohali,
R. T. Braun, H. R. Setze, Zemin Chen, R. L. Walter, J. P.
Delaroche, P. Romain, Phys. Rev. C 68 (2003) 044610.
(46)
J. S. Petler, M. S. Islam, R. W. Finlay, F. S. Dietrich,
Phys. Rev. C 32 (1985) 673.
(47)
N. Olsson, B. Trostell, E. Ramstr$\ddot{o}$m, B. Holmqvist, F. S. Dietrich,
Nucl. Phys. A 472 (1987) 237.
(48)
T. P. Stuart, J. D. Anderson, C. Wong,
Phys. Rev. 125 (1962) 276.
(49)
J. Roturier,
Comptes Rendus Serie B-Physique, 262 (1966) 1736.
(50)
S. Kliczewski, Z. Lewandowski,
Nucl. Phys. A 304 (1978) 269.
(51)
M. Babaa, M. Onoa, N. Yabutaa, T. Kikutia, N. Hirakawaa,
Radiation Effects and Defects in Solids: Incorporating Plasma
Science and Plasma Technology 92 (1986) 223.
(52)
J. H$\ddot{o}$hn, H. Pose, D. Seeliger, R. Reif,
Nucl. Phys. A 134 (1969) 289.
(53)
Y. Yamanouti, S. Tanaka,
NEANDC Report No. NEANDC(J)-51/U, 1977, p. 13.
(54)
R. Alarcon, J. Rapaport,
Nucl. Phys. A 458 (1986) 502.
(55)
B. Strohmaier, M. Uhl,
International Conference on Neutron Physics and Nuclear Data,
Harwell, 1978, p. 1184.
(56)
G. C. Bonazzola, E. Chiavassa,
Nucl. Phys. 68 (1965) 369.
(57)
T. B. Shope, M. F. Steuer, R. M. Wood, M. P. Etten,
Nucl. Phys. A 260 (1976) 95.
(58)
S. Harrar,
Nuclear Reaction Mechanisms Conference, Padua, 1962, p. 849.
(59)
M. Conjeaud, B. Fernandez, S. Harar, J. Picard, G. Souch$\grave{e}$re,
Nucl. Phys. 62 (1965) 225.
(60)
Y. Yamanouti,
Nucl. Phys. A 283 (1977) 23.
(61)
D. Abramson, A. Arnaud, J. C. Bluet, G. Filippi, C. Lavelaine, C. Le Rigoleur,
Report No. EANDC(E)-149, 1971.
(62)
W. Tornow, E. Woye, G. Mack, C. E. Floyd, K. Murphy, P. P. Guss, S. A. Wender,
R. C. Byrd, R. L. Walter, T. B. Clegg, H. Leeb,
Nucl. Phys. A 385 (1982) 373.
(63)
W. J. McDonald, J. M. Robson,
Nucl. Phys. 59 (1964) 321.
(64)
G. M. Honor$\acute{e}$, W. Tornow, C. R. Howell, R. S. Pedroni, R. C. Byrd,
R. L. Walter, J. P. Delaroche, Phys. Rev. C 33 (1986) 1129.
(65)
J. H. Osborne, F. P. Brady, J. L. Romero, J. L. Ullmann, D. S. Sorenson, A. Ling, N. S. P. King, R. C. Haight,
J. Rapaport, R. W. Finlay, E. Bauge, J. P. Delaroche, A. J.
Koning, Phys. Rev. C 70 (2004) 054613.
(66)
A. B. Smith, P. T. Guenther,
J. Phys. G 19 (1993) 655.
(67)
J. H. Towle,
Nucl. Phys. A 117 (1968) 657.
(68)
M. Abdel Harith, Th. Schweitzer, D. Seeliger, S. Unholzer,
Zentralinst. f. Kern Forschung Rossendorf Report, No. 315,
Germany, 1976, p. 12.
(69)
A. V. Polyakov, G. N. Lovchikova, V. A. Vinogradov, B. V. Zhuravlev, O. A. Salnikov, S. E. Sukhikh,
Vop. At. Nauki i Tekhn. Ser. Yad. Konst. 4 (1987) 31.
(70)
A. V. Polyakov, G. N. Lovchikova, V. A. Vinogradov, B. V. Zhuravlev, O. A. Salnikov, S. E. Sukhikh,
Vop. At. Nauki i Tekhn. Ser. Yad. Konst. 3 (1986) 21.
(71)
D. Schmidt, W. Mannhart, Z. C. Wei,
Conference on Nuclear Data for Science and Technology, Trieste,
Vol. 1, 1997, p. 407.
(72)
M. Baba, M. Ishikawa, T. Kikuchi, H. Wakabayashi, N. Yabuta, N. Hirakawa,
Conference on Nuclear Data for Science and Technology, Mito, 1988,
p. 209.
(73)
I. A. Korzh, V. A. Mishchenko, E. N. Mozhzhukhin, A. A. Golubova,
N. M. Pravdivyj, I. E. Sanzhur, M. V. Pasechnik, Ukr. Fiz. Zh. 22
(1977) 866.
(74)
M. V. Pasechnik, M. B. Fedorov, T. I. Jakovenko, I. E. Kashuba, V. A. Korzh,
Ukr. Fiz. Zh. 11 (1969) 1874.
(75)
I. A. Korzh, V. A. Mischenko, E. N. Mozhzhukhin, N. M. Pravdivyj,
Yad. Fiz. 35 (1982) 1097.
(76)
W. E. Kinney, F. G. Perey,
Oak Ridge National Laboratory Report No. ORNL-4806, 1974.
(77)
Y. Yamanoutt, M. Sugimoto, M. Mizumoto, Y. Watanabe, K. Hasegawa,
NEANDC Report No. NEANDC(J)-155, 1990, p. 20.
(78)
J. C. Ferrer, J. D. Carlson, J. Rapaport,
Nucl. Phys. A 275 (1977) 325.
(79)
P. T. Guenther, D. L. Smith, A. B. Smith, J. F. Whalen,
Ann. Nucl. Eng. 13 (1986) 601.
(80)
I. A. Korzh, V. A. Mishchenko, E. N. Mozhzhukhin, N. M. Pravdivij, I. E. Sanzhur,
Ukr. Fiz. Zhu. 22 (1977) 87.
(81)
P. Boschung, J. T. Lindow, E. F. Shrader,
Nucl. Phys. A 161 (1971) 593.
(82)
W. E. Kinney, F. G. Perey,
Oak Ridge National Laboratory Report No. ORNL-4907, 1974.
(83)
S. M. El-Kadi, C. E. Nelson, F. O. Purser, R. L. Walter, A. Beyerle, C. R. Gould, L. W. Seagondollar
Nucl. Phys. A 390 (1982) 509.
(84)
S. Mellema, R. W. Finlay, F. S. Dietrich, F. Petrovich,
Phys. Rev. C 28 (1983) 2267.
(85)
A. I. Tutubalin, A. P. Kljucharev, V. P. Bozhko, V. Ja. Golovnja, G. P. Dolja, A. S. Kachan, N. A. Shljakhov,
Conference on Neutron Physics, Kiev, Vol. 3, 1973, p. 62.
(86)
R. S. Pedroni, C. R. Howell, G. M. Honor$\acute{e}$, H. G. Pfutzner, R. C. Byrd, R. L. Walter, J. P. Delaroche,
Phys. Rev. C 38 (1988) 2052.
(87)
V. M. Morozov, Ju. G. Zubov, N. S. Lebedeva,
Yaderno-Fizicheskie Issledovaniya Reports No. 14, 1972, p. 8.
(88)
W. L. Rodgers, E. F. Shrader, J. T. Lindow,
Chicago Operations Office A. E. C. Contract Report, No. 1573,
1967, p. 2.
(89)
K. Hata, S. Shirato, Y. Ando,
NEANDC Report No. NEANDC(J)-155, 1990, p. 95.
(90)
M. Walt, H. H. Barschall,
Phys. Rev. 93 (1954) 1062.
(91)
P. T. Guenther, P. A. Moldauer, A. B. Smith, J. F. Whalen,
Nucl. Sci. Eng. 54 (1974) 273.
(92)
A. B. Smith, P. T. Guenther, R. D. Lawson,
Nucl. Phys. A 483 (1988) 50.
(93)
Claude St. Pierre, M. K. Machwe, Paul Lorrain,
Phys. Rev. 115 (1959) 999.
(94)
M. M. Nagadi, M. Al-Ohali, G. Weisel, R. Setze, C. R. Howell, W. Tornow, R. L. Walter, J. Lambert,
Triangle Universities Nuclear Laboratory Annual Report No. 30, 1991.
(95)
S. T. Lam, W. K. Dawson, S. A. Elbakr, H. W. Fielding, P. W. Green, R. L. Helmer, I. J. van Heerden,
A. H. Hussein, S. P. Kwan, G. C. Neilson, T. Otsubo, D. M.
Sheppard, H. S. Sherif, J. Soukup, Phys. Rev. C 32 (1985) 76.
(96)
Carl Budtz-Jørgensen, Peter T. Guenther, Alan B. Smith, James F. Whalen,
Z. Phys. A 306 (1982) 265.
(97)
A. B. Smith, P. T. Guenther, J. F. Whalen, S. Chiba,
J. Phys. G 18 (1992) 629.
(98)
I. A. Korzh, V. P. Lunev, V. A. Mishchenko, E. N. Mozhzhukhin, M. V. Pasechnik, N. M. Pravdivyy,
Yad. Fiz. 31 (1980) 13.
(99)
P. P. Guss, R. C. Byrd, C. E. Floyd, C. R. Howell, K. Murphy, G. Tungate,
R. S. Pedroni, R. L. Walter, J. P. Delaroche, T. B. Clegg, Nucl.
Phys. A 438 (1985) 187.
(100)
Y. Yamanoutt, J. Rapaport, S. M. Grimes, V. Kulkarni, R. W. Finlay, D. Bainum, P. Grabmayr, G. Randers-pehrson,
Brookhaven National Laboratory Reports, No. 51245, 1, 1980, p.
375.
(101)
A. Smith, P. Guenther, D. Smith, J. Whalen,
Nucl. Sci. Eng. 72 (1979) 293.
(102)
W. E. Kinney, F. G. Perey, Oak Ridge National Laboratory Report No. ORNL-4807, 1974.
(103)
F. G. Perey, C. O. LeRigoleur, W. E. Kinney,
Oak Ridge National Laboratory Report No. ORNL-4523, 1970.
(104)
W. E. Kinney, F. G. Perey,
Oak Ridge National Laboratory Report No. ORNL-4908, 1974.
(105)
M. A. Etemad,
Aktiebolaget Atomenergi, Stockholm/Studsvik Report No. 482, 1973.
(106)
B. Holmqvist, T. Wiedling,
Aktiebolaget Atomenergi, Stockholm/Studsvik Report No. 485, 1974.
(107)
R. M. Musaelyan, V. I. Popov, V. M. Skorkin,
International Conference on Neutron Physics, Kiev, Vol. 3, 1987,
p. 213.
(108)
E. S. Konobeevskij, Ju. G. Kudenko, M. V. Mordovskiy, V. I. Popov, V. M. Skorkin,
Izv. Ross. Akad. Nauk, Ser. Fiz. 48 (1984) 389.
(109)
I. A. Korzh, V. A. Mishchenko, N. N. Pravdivyj,
All-Union Conference on Neutron Physics, Kiev, Vol. 3, 1983, p.
167.
(110)
G. V. Gorlov, N. S. Lebedeva, V. M. Morozov,
Dokl. Akad. Nauk 158 (1964) 574.
(111)
R. G. Kurup, R. W. Finlay, J. Rapaport, J. P. Delaroche,
Nucl. Phys. A 420 (1984) 237.
(112)
D. E. Bainum, R. W. Finlay, J. Rapaport, M. H. Hadizadeh, J. D. Carlson, J. R. Comfort,
Nucl. Phys. A 311 (1978) 492.
(113)
S. A. Cox, E. E. Dowling Cox,
Argonne National Laboratory Report No. ANL-7935, 1972.
(114)
N. A. Bostrom, I. L. Morgan, J. T. Prudhomme, P. L. Okhuysen, O. M. Hudson Jr,
Wright Air Devel. Centre Report, USA, WADC-TN-59-107, 1959.
(115)
A. B. Smith, P. T. Guenther, J. F. Whalen,
Nucl. Phys. A 415 (1984) 1.
(116)
R. D. Lawson, P. T. Guenther, A. B. Smith,
Phys. Rev. C 34 (1986) 1599.
(117)
R. M. Wilenzick, K. K. Seth, P. R. Bevington, H. W. Lewis,
Nucl. Phys. 62 (1965) 511.
(118)
F. G. Perey, W. E. Kinney,
Oak Ridge National Laboratory Report No. ORNL-4552, 1970.
(119)
S. Mellema, J. S. Petler, R. W. Finlay, F. S. Dietrich, J. A. Carr, F. Petrovich,
Phys. Rev. C 36 (1987) 577.
(120)
G. M. Honor$\acute{e}$, R. S. Pedroni, C. R. Howell, H. G. Pf$\ddot{u}$tzner, R. C. Byrd, G. Tungate, R. L. Walter,
Phys. Rev. C 34 (1986) 825.
(121)
F. D. McDaniel, J. D. Brandenberger, G. P. Glasgow, H. G. Leighton,
Phys. Rev. C 10 (1974) 1087.
(122)
P. Guenther, A. Smith, J. Whalen,
Phys. Rev. C 12 (1975) 1797.
(123)
R. W. Stooksberry, J. H. Anderson, M. Goldsmith,
Phys. Rev. C 13 (1976) 1061.
(124)
F. D. Mc Daniel, J. D. Brandenberger, G. P. Glasgow, M. T. Mc Ellistrem, J. L. Weil,
University of Kentucky Annual Report, No. 74/77, 1977, p. 34.
(125)
R. D. Wilson,
PhD thesis, University of Virginia, 1973.
(126)
S. Tanaka, Y. Yamanouti,
NEANDC Report No. NEANDC(J)-51/U, 1977, p. 11.
(127)
Y. Wang, J. Rapaport,
Nucl. Phys. A 517 (1990) 301.
(128)
S. A. Cox,
Argonne National Laboratory Report No. ANL-7210, 1966, p. 3.
(129)
D. Reitmann, C. A. Engelbrecht, A. B. Smith,
Nucl. Phys. 48 (1963) 593.
(130)
A. B. Smith, P. T. Guenther, J. F. Whalen,
Z. Phys. A 264 (1973) 379.
(131)
R. E. Coles,
A. W. R. E. Aldermaston Report, No. AWRE-O-66/71, 1971.
(132)
A. B. Smith, P. T. Guenther, J. F. Whalen,
Argonne National Laboratory Report No. ANL-70, 1982.
(133)
A. B. Smith, P. Guenther, R. D. Lawson,
Argonne National Laboratory Report No. ANL-91, 1985.
(134)
M. A. Etemad,
AE-Report No. AE-482, 1973.
(135)
R. S. Pedroni, R. C. Byrd, G. M. Honor$\acute{e}$, C. R. Howell, R. L. Walter,
Phys. Rev. C 43 (1991) 2336.
(136)
M. Adel-Fawzy, H. Foertsch, S. Mittag, D. Schmidt, D. Seeliger, T. Streil,
Kernenergie, 24 (1981) 107.
(137)
E. G. Christodoulou, N. C. Tsirliganis, G. F. Knoll,
Nucl. Sci. Eng. 132 (1999) 273.
(138)
A. Takahashi, Y. Sasaki, H. Sugimoto,
INDC-Report No. INDC(JAP)-118/L, 1989.
(139)
J. H. Cao, Y. S. Dai, D. R. Wan, X. C. Liang, S. M.
Wang, INDC-Report No. INDC(CPR)-011/GI, 1988.
(140)
W. Finlay,
Private communication, 1991.
(141)
P. Lambropoulos, P. Guenther, A. Smith, J. Whalen,
Nucl. Phys. A 201 (1973) 1.
(142)
A. B. Smith, P. Guenther, J. Whalen,
Nucl. Phys. A 244 (1975) 213.
(143)
I. A. Korzh, V. P. Lunev, V. A. Mishchenko, E. N. Mozhzhukhin,
N. M. Pravdivyj, E. Sh. Sukhovitskiy, Vop. At. Nauki i Tekhn. Ser.
Yad. Konst. 50 (1983) 40.
(144)
M. T. McEllistrem, J. D. Bradenberger, K. Sinram, G. P. Glasgow, K. C. Chung,
Phys. Rev. C 9 (1974) 670.
(145)
J. Rapaport, T. S. Cheema, D. E. Bainum, R. W. Finlay, J. D. Carlson,
Nucl. Phys. A 313 (1979) 1.
(146)
S. Tanaka, Y. Yamanouti,
International Conference on the Interactions of Neutrons with Nuclei,
Lowell, 1976, p. 1328.
(147)
A. B. Smith, P. T. Guenther,
J. Phys. G 20 (1994) 795.
(148)
A. Smith, P. Guenther, G. Winkler, J. Whalen,
Nucl. Phys. A 332 (1979) 297.
(149)
R. M. Musaelyan, V. D. Ovdienko, N. T. Sklyar, V. M. Skorkin, G. A. Smetanin, I. V. Surkova,
M. B. Fedorov, T. I. Yakovenko, Yad. Fiz. 50 (1990) 1531.
(150)
P. P. Guss, R. C. Byrd, C. R. Howell, R. S. Pedroni, G. Tungate,
R. L. Walter, J. P. Delaroche, Phys. Rev. C 39 (1989) 405.
(151)
J. Rapaport, Mohammed Mirzaa, H. Hadizadeh, D. E. Bainum, R. W. Finlay,
Nucl. Phys. A 341 (1980) 56.
(152)
G. V. Gorlov, N. S. Lebedeva, V. M. Morozov,
Dokl. Akad. Nauk 158 (1964) 574.
(153)
S. Chiba, Y. Yamanouti, M. Sugimoto, M. Mizumoto, Y. Furuta, M. Hyakutake, S. Iwasaki,
J. Nucl. Sci. Technol. 25 (1988) 511.
(154)
S. Tanaka, Y. Tomita, Y. Yamanouti, K. Ideno,
Nuclear Structure Conference, Budapest, 1972, p. 148.
(155)
S. A. Cox, E. E. Dowling Cox,
Argonne National Laboratory Report No. ANL-7935, 1972.
(156)
R. B. Galloway, A. Waheed,
Nucl. Phys. A 318 (1979) 173.
(157)
Amena Begum, R. B. Galloway, F. K. McNeil-Watson,
Nucl. Phys. A 332 (1979) 349.
(158)
W.B. Gilboy, J.H. Towle,
Nucl. Phys. 42 (1963) 86.
(159)
R. Singh, H. -H. Knitter,
Z. Phys. A 272 (1975) 47.
(160)
S. Tanaka, Y. Tomita, K. Ideno, S. Kikuchi,
Nucl. Phys. A 179 (1972) 513.
(161)
S. G. Buccino, C. E. Hollandsworth, P. R. Bevington,
Z. Phys. A 196 (1966) 103.
(162)
D. L. Bernard, G. Lenz, J. D. Reber,
Nuclear Cross-Sections Technology Conference, Washington, Vol. 2,
1968, p. 755.
(163)
D. F. Coope, S. N. Tripathi, M. C. Schell, M. T. McEllistrem,
Bull. Am. Phys. Soc. 24 (1979) 854.
(164)
G. Haouat, J. Lachkar, Ch. Lagrange, M. T. McEllistrem,
Y. Patin, R. E. Shamu, J. Sigaud, Phys. Rev. C 20 (1979) 78.
(165)
D. F. Coope, S. N. Tripathi, M. C. Schell, J. L. Weil, M. T. McEllistrem,
Phys. Rev. C 16 (1977) 2223.
(166)
Ch. Lagrange, R. E. Shamu, T. Burrows, G. P. Glasgow, G. Hardie, F. D. McDaniel,
Phys. Lett. B 58 (1975) 293.
(167)
Namik K. Aras, William B. Walters,
Phys. Rev. C 15 (1977) 927.
(168)
L. L. Litvinskiy, Said-Sabbagkh, Ya. A. Zhigalov, V. G. Krivenko, Ya. V.
Pugach, Vop. At. Nauki i Tekhn. Ser. Yad. Konst. 1994 (1994) 15.
(169)
R. B. Day,
Private communication, 1965.
(170)
F. T. Kuchnir, A. J. Elwyn, J. E. Monahan, A. Langsdorf, Jr., F. P. Mooring,
Phys. Rev. 176 (1968) 1405.
(171)
S. A. Cox,
Argonne National Laboratory Report No. ANL-7910, 1972, p. 20.
(172)
M. Walt, J. R. Beyster,
Phys. Rev. 98 (1955) 677.
(173)
A. B. Smith,
Nucl. Sci. Eng. 155 (2007) 74.
(174)
B. Holmqvist, T. Wiedling,
AE-Report AE-485, 1974.
(175)
N. Olsson, B. Holmqvist, E. Ramstr$\ddot{o}$m,
Nucl. Phys. A 385 (1982) 285.
(176)
Sally F. Hicks, J. M. Hanly, S. E. Hicks, G. R. Shen, M. T. McEllistrem,
Phys. Rev. C 49 (1994) 103.
(177)
C. D. Zafiratos, T. A. Oliphant, J. S. Levin, L. Cranberg,
Phys. Rev. Lett. 14 (1965) 913.
(178)
G. E. Belovitskij, L. N. Kolesnikova, I. M. Frank,
Yad. Fiz. 15 (1972) 662.
(179)
J. L. Fowler,
Phys. Rev. 147 (1966) 870.
(180)
V. M. Morozov, Ju. G. Zubov, N. S. Lebedeva,
Yaderno-Fizicheskie Issledovaniya Reports, No. YFI-14, 1972, p. 8.
(181)
G. Haouat, J. Sigaud, J. Lachkar, Ch. Lagrange, B. Duchemin, Y. Patin,
Nucl. Sci. Eng. 81 (1982) 491.
(182)
J. R. M. Annand, R. W. Finlay, P. S. Dietrich,
Nucl. Phys. A 443 (1985) 249.
(183)
M. L. Roberts, P. D. Felsher, G. J. Weisel, Zemin Chen, C. R. Howell,
W. Tornow, R. L. Walter, D. J. Horen, Phys. Rev. C 44 (1991) 2006.
(184)
W. E. Kinney, F. G. Perey,
Oak Ridge National Laboratory Report No. ORNL-4909, 1974.
(185)
J. Rapaport, T. S. Cheema, D. E. Bainum, R. W. Finlay, J. D. Carlson,
Nucl. Phys. A 296 (1978) 95.
(186)
J. P. Delaroche, C. E. Floyd, P. P. Guss, R. C. Byrd, K. Murphy,
G. Tungate, R. L. Walter, Phys. Rev. C 28 (1983) 1410.
(187)
C. E. Floyd Jr,
PhD thesis, Duke University, 1981.
(188)
R. W. Finlay, J. R. M. Annand, T. S. Cheema, J. Rapaport, F. S. Dietrich,
Phys. Rev. C 30 (1984) 796.
(189)
R. P. DeVito,
Dissertation Abstracts B 40 (1980) 5724.
(190)
P. T. Guenther, A. B. Smith, J. F. Whalen,
Nucl. Sci. Eng. 75 (1980) 69.
(191)
R. D. Lawson, P. T. Guenther, A. B. Smith,
Phys. Rev. C 36 (1987) 1298.
(192)
R. K. Das, R. W. Finlay,
Phys. Rev. C 42 (1990) 1013.
(193)
A. H. Hussein, J. M. Cameron, S. T. Lam, G. C. Neilson, J. Soukup,
Phys. Rev. C 15 (1977) 233.
(194)
Cecil I. Hudson, Jr., W. Scott Walker, S. Berko,
Phys. Rev. 128 (1962) 1271.
(195)
M. Tohyama,
Nucl. Phys. A 401 (1983) 237.
(196)
S. F. Hicks,
PhD thesis, University of Kentucky, 1988.
(197)
Sally F. Hicks, S. E. Hicks, G. R. Shen, M. T. McEllistrem,
Phys. Rev. C 41 (1990) 2560.
(198)
M. B. Fedorov, T. I. Jakovenko,
Ukr. Fiz. Zh. 19 (1974) 152.
(199)
J. Lachkar, M. T. McEllistrem, G. Haouat, Y. Patin, J. Sigaud, F. Coçu,
Phys. Rev. C 14 (1976) 933.
(200)
S. P. Simakov, G. N. Lovchikova, O. A. Sa$l^{{}^{\prime}}$nikov, A. M. Trufanov, G. V. Kote$l^{{}^{\prime}}$nikova, N. N. Shchadin,
At. Energ. 51 (1981) 244.
(201)
V. M. Morozov, Ju. G. Zubov, N. S. Lebedeva,
Neutron Physics Conference, Kiev, Vol. 1, 1972, p. 267.
(202)
Yu. M. Burmistrov, T. E. Grigo$r^{{}^{\prime}}$ev, E. D. Molodtsov, R. M. Musaelyan, V. I. Popov,
S. I. Potashev, V. M. Skorkin, Kratkie Soobshcheniya po Fizike,
Issue. 6, 1987, p.12.
(203)
B. Strohmaier, M. Uhl, W. K. Matthes,
Nucl. Sci. Eng. 65 (1978) 368.
(204)
Gang Chen, Min Li, J. L. Weil, M. T. McEllistrem,
Phys. Rev. C 63 (2001) 014606.
(205)
A. B. Smith,
Ann. Nucl. Eng. 32 (2005) 1926.
(206)
A. B. Smith,
Argonne National Laboratory Report No. ANL-6727, 1963.
(207)
S. P. Simakov, G. N. Lovchikova, O. A. Sa$l^{{}^{\prime}}$nikov,
G. V. Kote$l^{{}^{\prime}}$nikova, A. M. Trufanov, Vop. At. Nauki i Tekhn.
Ser. Yad. Konst. 1982 (1982) 17.
(208)
R. E. Benenson, K. Rimawi, E. H. Sexton, B. Center,
Nucl. Phys. A 212 (1973) 147.
(209)
P. T. Guenther, A. B. Smith, J. F.
Whalen, Phys. Rev. C 26 (1982) 2433.
(210)
J. R. M. Annand, R. W. Finlay,
Nucl. Phys. A 442 (1985) 234.
(211)
T. Hicks,
PhD thesis, University of Kentucky, 1987.
(212)
S. E. Hicks, Z. Cao, M. C. Mirzaa, J. L. Weil, J. M. Hanly, J. Sa, M. T. McEllistrem,
Phys. Rev. C 40 (1989) 2509.
(213)
M. C. Mirzaa, J. P. Delaroche, J. L. Weil, J. Hanly, M. T. McEllistrem, S. W. Yates,
Phys. Rev. C 32 (1985) 1488.
(214)
S. E. Hicks, J. P. Delaroche, M. C. Mirzaa, J. Hanly, M. T. McEllistrem,
Phys. Rev. C 36 (1987) 73.
(215)
P. T. Guenther, D. G. Havel, A. B. Smith,
Nucl. Sci. Eng. 65 (1978) 174.
(216)
Y. Tomita, S. Tanaka, M. Maruyama,
EANDC Report No. EANDC(J)-30, 1973, p. 6.
(217)
Y. Fujita, T. Ohsawa, R. M. Brugger, D. M. Alger, W. H. Miller,
J. Nucl. Sci. Technol. 20 (1983) 983.
(218)
G. C. Goswami, J. J. Egan, G. H. R. Kegel, A. Mittler, E. Sheldon,
Nucl. Sci. Eng. 100 (1988) 48.
(219)
R. Batchelor, W. B. Gilboy, J. H. Towle,
Nucl. Phys. 65 (1965) 236.
(220)
V. I. Popov, Soviet Progress in Neutron Physics,
Consultants Bureau, New York, 1963, p.224.
(221)
A. B. Smith, S. Chiba,
Ann. Nucl. Eng. 23 (1996) 459.
(222)
L. F. Hansen, B. A. Pohl, C. Wong, R. C. Haight, Ch. Lagrange
Phys. Rev. C 34 (1986) 2075.
(223)
R. C. Allen, R. B. Walton, R. B. Perkins, R. A. Olson, R. F. Taschek,
Phys. Rev. 104 (1956) 731.
(224)
H. -H. Knitter, M. M. Islam, M. Coppola,
Z. Phys. A 257 (1972) 108.
(225)
L. Cranberg,
Los Alamos Scientific Laboratory Report No. LA-2177, 1959.
(226)
R. Batchelor, K. Wyld,
A. W. R. E. Aldermaston Report No. AWRE-O-55/69, 1969.
(227)
A. V. Murzin, V. P. Vertebnyy, A. L. Kirilyuk, V. A. Libman, L. L. Litvinskiy, G. M. Novoselov,
V. F. Razbudey, C. V. Sidorov, N. A. Trofimova, At. Energ. 62
(1987) 192.
(228)
L. E. Beghian, G. H. R. Kegel, T. V. Marcella, B. K. Barnes,
G. P. Couchell, J. J. Egan, A. Mittler, D. J. Pullen, W. A.
Schier, Nucl. Sci. Eng. 69 (1979) 191.
(229)
H. H. Knitter, M. Coppola, N. Ahmed, B. Jay,
Z. Phys. A 244 (1971) 358.
(230)
G. C. Goswami,
PhD thesis, University of Lowell, 1986.
(231)
J. R. M. Annand, R. B. Galloway,
J. Phys. G 11 (1985) 1341.
(232)
W. P. Bucher, C. E. Hollandsworth,
Phys. Rev. Lett. 35 (1975) 1419.
(233)
B. J. Qi, H. Q. Tang, Z. Y. Zhou, J. Sa, Z. J. Ke, Q. C. Sui, H. H. Xiu,
Conference on Nuclear Data for Science and Technology, Gatlinburg,
Vol. 2, 1994, p.901.
(234)
H. Q. Qi, Q. K. Chen, Y. T. Chen, H. B. Chen, Z. P. Chen, J. K. Deng,
Z. M. Chen, H. G. Tang, B. J. Qi, Chin. J. Nucl. Phys. 13 (1991)
343.
(235)
H. Q. Qi, H. B. Chen, Y. T. Chen, Q. K. Chen, Z. P. Chen, Z. M.
Chen, INDC Report No. EANDC(J)-030/L, 1993, p. 1.
(236)
B. Ya. Guzhovskiy,
At. Energ. 11 (1961) 395.
(237)
P. E. Cavanagh, C. F. Coleman, D. A. Boyce, G. A. Gard,
A. G. Hardacre, J. F. Turner, A. E. R. E. Harwell Report No.
AERE-R-5972, 1969.
(238)
M. Coppola, H. -H. Knitter,
Z. Phys. A 232 (1970) 286.
(239)
A. B. Smith, P. Lambropoulos, J. F. Whalen,
Nucl. Sci. Eng. 47 (1972) 19.
(240)
D. M. Drake, M. Drosg, P. Lisowski, L. Veeser,
Los Alamos Scientific Laboratory Report No. LA-7855, 1979. |
Neutrinos from the Sun:
experimental results confronted with solar models
V. Castellani${}^{(1)}$
S. Degl’Innocenti${}^{(2,3)}$
G. Fiorentini${}^{(2,3)}$
M. Lissia${}^{(4)}$
and B. Ricci${}^{(3,5)}$
${}^{(1)}$Dipartimento di Fisica dell’Università di Pisa, I-56100 Pisa,
Osservatorio Astronomico di Collurania, I-64100 Teramo, and Università
dell’Aquila, I-67100 L’Aquila
${}^{(2)}$Dipartimento di Fisica dell’Università di Ferrara, I-44100 Ferrara
${}^{(3)}$Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara,
I-44100 Ferrara
${}^{(4)}$Dipartimento di Fisica dell’Università di Cagliari, I-09100
Cagliari,
and Istituto Nazionale di Fisica Nucleare, Sezione
di Cagliari, I-09100 Cagliari
${}^{(5)}$Scuola di Dottorato dell’Università di Padova, I-35100 Padova.
(November 26, 2020)
Abstract
For standard neutrinos,
recent solar neutrino results together with the assumption of a
nuclearly powered Sun imply severe constraints on the individual
components of the total neutrino flux:
$\Phi_{\text{Be}}\leq 0.7\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$,
$\Phi_{\text{CNO}}\leq 0.6\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$, and
$64\times 10^{9}\text{cm}^{-2}\text{s}^{-1}\leq\Phi_{pp+pep}\leq 65\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$
(at $1\sigma$ level). The bound on $\Phi_{\text{Be}}$
is in strong disagreement with the
standard solar model (SSM) prediction
$\Phi_{\text{Be}}^{\text{SSM}}\approx 5\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$.
We study a
large variety of non-standard solar models with low inner
temperature, finding that the temperature profiles $T(m)$ follow the
homology relationship: $T(m)=kT^{\text{SSM}}(m)$, so that they are specified
just by the central temperature $T_{c}$. There is no value of $T_{c}$ which can
account for all the available experimental results.
Even if we only consider the Gallium and Kamiokande
results, they remain incompatible.
Lowering the cross section for $p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$
is not a remedy. The shift of
the nuclear fusion chain towards the $pp$-I termination could be
induced by a hypothetical low-energy resonance in the
${}^{3}\text{He}+{}^{3}\text{He}$
reaction. This mechanism gives a somewhat better, but still poor,
fit to the combined experimental data. We
also discuss what can be learnt from new generation experiments,
planned for the detection of monochromatic solar neutrinos, about
the properties of neutrinos and of the Sun.
PACS: 96.60.Kx
Preprint: INFNFE-3-94, INFNCA-TH-94-10
Submitted to Physical Review D
I Introduction
The aim of this paper is to examine whether there is still room for an
astrophysics and/or nuclear physics solution of the solar neutrino
problem, in the light of the most recent results of the Gallium
experiments [1, 2].
We shall demonstrate that these results, when combined with the
information arising from the Chlorine [3] and Kamiokande [4]
experiments and – most important – with the assumption of a
nuclearly powered Sun, severely constrain the individual
components of the solar neutrino flux, under the hypothesis of
standard (zero mass, no mixing, no magnetic moment …) neutrinos.
The arguments leading to these constraints, already outlined
in a previous paper [5],
are essentially independent of solar models.
The basic assumption concerning the Sun is that the present total
neutrino flux can be derived from the presently observed value of
the solar constant. We remark that these constraints have become much more
stringent after the recent
reports from Gallex and Sage [1, 2].
For standard neutrinos, these
results provide evidence that the nuclear energy production chain,
see Fig. 1, is extremely shifted towards the $pp$-I termination and,
as a consequence, the fluxes of $\nu_{\text{Be}}$ and $\nu_{\text{CNO}}$
are strongly reduced with
respect to the predictions of standard solar models.
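The luminosity argument behind this constraint can be made explicit with a rough order-of-magnitude sketch; the value of the solar constant, the $Q$-value of the hydrogen-burning chain, and the mean neutrino energy used below are standard inputs assumed here for illustration, not numbers taken from this paper.

```python
# Rough luminosity constraint on the total solar neutrino flux:
# each completed hydrogen-burning chain (4p -> He4) releases Q ~ 26.73 MeV
# and two neutrinos; photons deliver Q minus the neutrino energies.
S = 8.5e11      # solar constant, ~1.36 kW/m^2, in MeV cm^-2 s^-1 (assumed)
Q = 26.73       # MeV per completed chain
E_nu = 0.3      # assumed mean energy (MeV) carried off by each neutrino

phi_total = 2.0 * S / (Q - 2.0 * E_nu)
print(f"total neutrino flux ~ {phi_total:.2e} cm^-2 s^-1")  # ~6.5e10
```

The result is close to the $\Phi_{pp+pep}$ range quoted in the abstract, which is expected: for standard neutrinos, a nuclearly powered Sun ties the total flux directly to the observed luminosity.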
The situation is the following: i) we can now compare theory and
experiment at the level of individual fluxes; ii) the solar neutrino
problem, i.e. the discrepancy between experimental results and
standard solar models, now also affects the production of ${}^{7}\text{Be}$ nuclei,
and not only the rare ${}^{8}\text{B}$ neutrinos.
Next, we ask ourselves whether the solar neutrino problem is
restricted to standard solar models. In this spirit, we analyze several
non-standard solar models with an enhanced $pp$-I termination. The
main inputs of any solar model are listed in Table 1.
We are aware of
just two ways of enhancing the $pp$-I termination by acting on these
inputs:
i)
adjusting the parameters which affect the inner solar temperature,
so as to build low inner-temperature solar models,
ii)
adjusting the ${}^{3}\text{He}$ nuclear cross sections.
We note that the $p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$
cross section does not influence the $pp$-I branch.
As a relevant and common feature of all the low-inner-temperature models, we
find a homology relation for the temperature profiles,
$T(m)=kT^{\text{SSM}}(m)$, where $k$ depends on
the input parameters,
but it is independent of the mass coordinate $m$ in the inner radiative
zone (at least for $m=M/M_{0}<0.97$), and SSM refers here and in the
following to standard solar models. In other words, our numerical
experiments disclose that a variation of the solar temperature in the
centre implies a definite variation in the entire inner radiative zone.
A consequence of this finding is that the different components of
the neutrino flux depend basically only on the central temperature,
and are almost independent of how that temperature is achieved.
This in turn implies that, when performing a $\chi^{2}$ analysis of the
experimental data compared to the prediction of non-standard solar
models, it is sufficient to parameterize these non-standard solar
models by the central temperature. In other words, varying
independently all the solar model parameters that influence the
temperature does not yield a better fit than just varying the central
temperature.
It is well known that it is not possible to get a good temperature fit
due to the “discrepancy” between the Kamiokande and Chlorine
results [6, 7],
but the following questions are, nonetheless, interesting:
i)
how much does the fit improve if one excludes one of the
experimental results?
ii)
does this fit improve if one lowers the
$p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$ cross section,
as suggested from the analysis of recent data on the Coulomb
dissociation of ${}^{8}\text{B}$ [8, 9]?
Another way to shift the nuclear fusion chain towards the $pp$-I
termination without altering the inner solar temperature
can be found in the realm of nuclear physics. In the light
of the new neutrino results, we discuss whether a hypothetical low
energy resonance in the ${}^{3}\text{He}+{}^{3}\text{He}$ reaction,
firstly advocated by
Fowler [10], analyzed in Ref. [6],
and presently investigated
experimentally at LNGS [11], can reconcile theory and experiments.
Several new-generation experiments are being planned for the
detection of monochromatic solar neutrinos produced in electron
capture (${}^{7}\text{Be}+e^{-}\to{}^{7}\text{Li}+\nu$)
and in the $pep$ ($p+e^{-}+p\to d+\nu$)
reactions [12, 13, 14].
Furthermore, Bahcall [15, 16]
pointed out that thermal effects on
monochromatic neutrino lines can be used to infer inner solar
temperatures. In relation with the foregoing analysis, we discuss
what can be learnt from such future measurements about the
properties of neutrinos and of the Sun.
Concerning the organization of the paper, the solar-model-independent
constraints on neutrino fluxes are presented in Sec. II
and compared with the results of standard solar models in Sec. III.
Section IV is devoted to the analysis of non-standard solar models
with lower temperature, which are compared with experimental data
in Sec. V.
In Sec. VI we discuss the chances of a low energy resonance
in the ${}^{3}\text{He}+{}^{3}\text{He}$ channel, and in
Sec. VII we remark on the relevance of
future detection of the $pep$ and ${}^{7}\text{Be}$ neutrinos. Our conclusions are
summarized in the final Section.
II (Almost) solar-model-independent constraints on
neutrino fluxes
In this section we briefly update the constraints on neutrino fluxes
derived in Ref. [5], in the light of the recent reports from
Gallex and Sage [1, 2].
While we refer to Ref. [5] for details, we recall here the main
points.
i)
For standard neutrinos and under the assumption of a nuclearly
powered Sun, the components $\Phi_{i}$ of the total neutrino flux arriving
onto the Earth are constrained by the equation of energy production
$$K=\sum_{i}\left(\frac{Q}{2}-\langle E\rangle_{i}\right)\,\Phi_{i}\quad,$$
(1)
where $K$ is the solar constant, $Q$ is the energy released in the fusion
reaction $4p+2e^{-}\to\alpha+2\nu$
and $\langle E\rangle_{i}$ is the average neutrino energy of the $i$th flux.
In practice the relevant terms in Eq. (1) are just those
corresponding to $\Phi_{pp+pep}$,
$\Phi_{\text{Be}}$, and
$\Phi_{\text{CNO}}$.
ii)
In order to calculate $\langle E\rangle_{i}$,
we take the ratio $\xi\equiv\Phi_{pep}/\Phi_{pp+pep}$ from
the SSM ($\xi=2.38\times 10^{-3}$) and, similarly, the ratio
$\eta\equiv\Phi_{\text{N}}/\Phi_{\text{CNO}}=0.54$.
Results are almost insensitive to these choices [5].
iii)
The signal $S_{X}$ of the $X$ experiment is represented as
$$S_{X}=\sum_{i}X_{i}\Phi_{i}\quad,$$
(2)
where the weighting factors $X_{i}$ are cross sections for the $\nu$ detection
reaction averaged over the (emission) spectrum of the $i$-th
component of the neutrino flux (note that the $X_{i}$ are ordered
according to the neutrino energy), and are shown in Table 2.
iv)
We use the following experimental results, where systematic and
statistical errors have been added in quadrature. For the Gallium value,
we use the weighted average of the Gallex [1]
and Sage [2] results
$$S_{\text{Ga}}=(78\pm 10)\text{ SNU}\quad.$$
(3)
For the Chlorine experiment we use the average of the 1970-1992
runs [3]
$$S_{\text{Cl}}=(2.32\pm 0.26)\text{ SNU}\quad.$$
(4a)
The Kamiokande result reads
$$S_{\text{B}}^{\text{Ka}}=(2.9\pm 0.42)\times 10^{6}\text{cm}^{-2}\text{s}^{-1}\quad.$$
(4b)
v)
We take the Boron flux $\Phi_{\text{B}}$,
which enters in Eq. (2), from
experiment. However, we can use either the Kamiokande result or
the Chlorine result (it is well known [6, 7] that a choice
between the two
experiments is needed, otherwise one is forced to an unphysical value
$\Phi_{\text{Be}}\leq 0$).
We have thus four unknowns
$\Phi_{pp+pep}$, $\Phi_{\text{Be}}$, $\Phi_{\text{CNO}}$, and
$\Phi_{\text{B}}$,
which are
constrained by the three equations (1), (3),
and, alternatively, (4a) or (4b).
By exploiting the ordering properties of the $X_{i}$, as discussed in
Ref. [5],
and by using the new experimental results, one derives severe constraints,
for standard neutrinos.
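The origin of the bound $\Phi_{pp+pep}\leq 65\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$ quoted in Eqs. (5) can be checked with a few lines of arithmetic: if all neutrinos come from the $pp$-I termination, Eq. (1) has a single term and fixes $\Phi_{pp+pep}$ directly. In the sketch below the solar constant and the average $pp$-neutrino energy are standard textbook numbers, not values taken from our tables:

```python
# Upper bound on Phi_{pp+pep} from the luminosity constraint, Eq. (1):
#   K = sum_i (Q/2 - <E>_i) Phi_i .
# If all neutrinos come from the pp-I termination, only one term survives.

K_MEV = 8.5e11     # solar constant, MeV cm^-2 s^-1 (~1.36 kW m^-2); standard value
Q_MEV = 26.73      # energy release of 4p + 2e- -> alpha + 2nu, in MeV
E_PP_MEV = 0.265   # average pp-neutrino energy, MeV; standard value

phi_pp_max = K_MEV / (Q_MEV / 2.0 - E_PP_MEV)
print(f"Phi_pp+pep <= {phi_pp_max:.2e} cm^-2 s^-1")  # close to 65e9, as in the text
```

This also makes point v) of the list transparent: with one dominant term, a 10% experimental accuracy on the Gallium signal pins the $pp$ flux down much more tightly.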
As an example, by taking $\Phi_{\text{B}}$
from Kamiokande, for each assumption about $\Phi_{pp+pep}$ one has the
minimum signal in Gallex if all other neutrinos are from Beryllium and
the maximum signal if all other neutrinos are from CNO.
By using similar procedures one finds
the bounds depicted in Figs. 2, 3,
and 4. By
conservatively using the Chlorine result to determine the Boron flux
(this choice is the least restrictive on the fluxes), we find the
following bounds on the fluxes, in units of
$10^{9}\text{cm}^{-2}\text{s}^{-1}$,
$$\begin{aligned}
64\leq\Phi_{pp+pep}&\leq 65\quad\quad\text{at }1\,\sigma\\
\Phi_{\text{Be}}&\leq 0.7\\
\Phi_{\text{CNO}}&\leq 0.6\quad,
\end{aligned}$$
(5a)
and
$$\begin{aligned}
61\leq\Phi_{pp+pep}&\leq 65\quad\quad\text{at }3\,\sigma\\
\Phi_{\text{Be}}&\leq 4.2\\
\Phi_{\text{CNO}}&\leq 3.6\quad.
\end{aligned}$$
(5b)
In summary, the Gallium result together with the luminosity
constraint implies that almost all neutrinos, if standard, come from
the $pp$-I termination. The bounds of Eqs. (5)
are very strict since
even a small flux of neutrinos other (and more energetic) than the $pp$
neutrinos gives
an appreciable contribution to the Gallium signal. This is why an
experimental result with 10% accuracy can fix the
$\Phi_{pp+pep}$ at the level of about 2%.
We note that the bounds have become much more stringent than
those reported in Ref. [5], because both the central value and the
error of the Gallium result have decreased, so that now the
experimental result is even closer to the minimal signal which is
obtained when all neutrinos come from the $pp$-I termination
($\Phi_{pp+pep}=65\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$).
Concerning the assumptions leading to Eqs. (5),
we remark that the main hypothesis is that the present Sun is nuclearly
powered, see Eq. (1),
whereas the values chosen for $\xi$ and $\eta$ are unessential (see
again Ref. [5]).
III Standard solar models and experimental data
The relevance of the bounds derived in the previous section can be
best illustrated by comparing them with the results of standard solar
model computations. For a few representative calculations we
present the main input parameters of these
models in Table 3, and the resulting neutrino
fluxes in Table 4.
Let us remark that we can now compare not only the total signals
predicted by the theory and measured by experiments, but also
several individual fluxes, as shown in Table 4.
In particular, we find that the upper limit for $\Phi_{\text{Be}}$,
implied by the experiments at the $1\sigma$ level, is 7 times smaller than
$\Phi_{\text{Be}}^{\text{SSM}}$, whereas,
at the same level of accuracy, the suppression of
$\Phi_{\text{B}}$ is about a
factor of two with respect to the SSM (in Table 4
the experimental upper
bound on $\Phi_{\text{B}}$ is obtained from the less constraining result,
i.e. the Kamiokande value). A suppression of $\Phi_{\text{Be}}$
stronger than that of $\Phi_{\text{B}}$ was
already implied by the comparison between the Kamiokande and
Chlorine results, whereas here we derive it using essentially only the
Gallium experiments.
In addition, we recall that the theoretical calculation for
$\Phi_{\text{B}}$ is the
most questionable of the flux calculations, due to the well known
uncertainties. In our opinion, the discrepancy between theory and
experiment for the ${}^{7}{\text{Be}}$ flux
is much more serious than the one for the ${}^{8}\text{B}$ flux.
In other words, it seems to us that the solar neutrino problem is now
at the level of the branching between the $pp$-I and $pp$-II
terminations.
In order to reconcile the theoretical and experimental determinations
of $\Phi_{\text{Be}}$, one needs the ratio between the two rates for the
${}^{3}\text{He}+{}^{4}\text{He}$ and the ${}^{3}\text{He}+{}^{3}\text{He}$ reactions,
$$R=\frac{\langle\lambda_{34}\rangle}{\langle\lambda_{33}\rangle}\quad,$$
(6)
is drastically altered from $R^{\text{SSM}}=0.16$ to something about
$R=0.02$
(here and in the following, $\lambda_{ij}$ is the rate for the
collision between
nuclei with mass number $i$ and $j$, $m_{ij}$ being the reduced mass).
The investigation of non-standard solar models where $R$ is strongly
reduced will be the subject of the next sections. It is worth
remarking however that a reduction of $\Phi_{\text{Be}}$ to bring it in
the experimentally acceptable range
generally implies also a comparable, or even larger, reduction of
$\Phi_{\text{B}}$,
which then becomes too small with respect to the experimental
value.
IV Non-standard solar models with low central temperature
Clearly the $pp$ chain can be shifted towards the $pp$-I termination by
lowering the inner temperature $T$, since the tunnelling probability is
more reduced for the heavier nuclei:
$$\log\left(\frac{\langle\lambda_{34}\rangle}{\langle\lambda_{33}\rangle}\right)\propto\frac{m_{33}^{1/3}-m_{34}^{1/3}}{(KT)^{1/3}}\quad.$$
(7)
In order to reduce the inner temperatures one may attempt several
manipulations [5]:
i)
reduce the metal fraction Z/X,
ii)
reduce (by an overall multiplicative factor) the opacity tables,
iii)
increase the astrophysical factor $S_{pp}$ of the
$p+p\to d+e+\nu$ reaction,
iv)
reduce the Sun age.
Clearly i) and ii) work in the direction of getting a more transparent
Sun, which implies a lower temperature gradient, a larger energy
production region and
consequently smaller inner temperatures. When $S_{pp}$ is increased
nuclear fusion gets easier, and the fixed luminosity is obtained with a
reduced temperature. A younger Sun is another way to get a
Sun cooler in its interior, since the central H-abundance is increased
and, again, nuclear fusion gets easier.
On the other hand, we remark that variations of the other
astrophysical $S$-factors, $S_{33}$, $S_{34}$ and/or $S_{17}$,
affect very weakly the
inner solar temperature. This is physically clear, since the energy
production mechanism is untouched [6].
We have computed several solar models by varying the parameters
well beyond the uncertainties of the standard solar model
(see Table 5),
i.e. we have really built non-standard solar models.
An important feature of all these models is the homology of the inner
temperature profiles
$$T(m)=kT^{\text{SSM}}(m)\quad,$$
(8)
where $m=M/M_{0}$ is a mass coordinate, and the factor $k$ depends on the
parameter which is varied but does not depend on $m$.
We have verified that Eq. (8) holds with an accuracy better than 1%
in all the internal radiative zone ($M/M_{0}<0.97$ or
$R/R_{0}<0.7$) for all the
models we consider, except for huge (and quite unrealistic)
variations of the solar age, see Fig. 5 and Table 5.
It is worth
remarking that $T(m)/T^{\text{SSM}}(m)$ is constant through a region where
$T(m)$ changes by a factor of five, see Fig. 6.
By looking at the numerical results one finds, as expected, that, as long
as the Sun
age is kept fixed, the models have similar distributions of ${}^{4}$He and of
the energy production per unit mass, which, as is well known, is strongly
related with temperature and ${}^{4}$He density.
On the other hand, when the Sun age is varied, the ${}^{4}$He content
also changes strongly, and the homology relation for the temperature
fades away. The important point is that for each model the
temperature profile is essentially specified
by a scale factor, which can be taken as the central temperature $T_{c}$.
On these grounds one derives general predictions for the behaviour
of the neutrino fluxes $\Phi_{i}$. They are crucially dependent (through the
Gamow factors) on the values of the temperature in the production
regions $T_{i}$, and, as usual, can be locally approximated by power laws:
$$\Phi_{i}=c_{i}\,T_{i}^{\beta_{i}}\quad.$$
(9)
The homology relationship implies
$T_{i}=(T_{c}/T_{c}^{\text{SSM}})T_{i}^{\text{SSM}}$ and,
consequently,
$$\Phi_{i}=\Phi_{i}^{\text{SSM}}\,\left(\frac{T_{c}}{T_{c}^{\text{SSM}}}\right)^{\beta_{i}}\quad.$$
(10)
This means that each flux is mainly determined by the central
temperature, almost independently of the way the temperature
variation was obtained, an occurrence which is
clearly confirmed by Fig. 7 for the components of the neutrino
flux which give the main contributions
($\Phi_{pp}$, $\Phi_{\text{Be}}$, and $\Phi_{\text{B}}$)
to the experimental signals.
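As an illustration of the rescaling of Eq. (10), the one-parameter family of low-temperature models can be coded in a few lines; the $\beta_{i}$ and $\Phi^{\text{SSM}}_{i}$ values below are placeholders of the typical magnitude for these fluxes, not the entries of Tables 4 and 6:

```python
# Flux rescaling of Eq. (10): Phi_i = Phi_i^SSM * (T_c / T_c^SSM)**beta_i .
# The beta_i and Phi^SSM values are illustrative placeholders only.
BETA = {"pp": -0.7, "Be": 10.0, "B": 20.0}
PHI_SSM = {"pp": 60.0, "Be": 4.9, "B": 5.2e-3}  # units of 1e9 cm^-2 s^-1

def fluxes(t_ratio):
    """Fluxes of a homologous model with T_c = t_ratio * T_c^SSM."""
    return {i: PHI_SSM[i] * t_ratio ** BETA[i] for i in PHI_SSM}

cool = fluxes(0.94)  # the ~6% central-temperature reduction of Sec. V
for i in ("pp", "Be", "B"):
    print(i, cool[i] / PHI_SSM[i])
# Boron drops far more than Beryllium, while the pp flux slightly increases.
```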
The situation is shown in more detail in
Table 6, where we present
the numerically calculated values of the $\beta_{i}$ coefficients. One sees that
$\beta_{pp}$, $\beta_{\text{Be}}$, and $\beta_{\text{B}}$
are approximately independent of the parameter
which is varied. This is not true for
$\Phi_{\text{N}}$, $\Phi_{\text{O}}$, and $\Phi_{pep}$.
Actually,
when writing Eq. (9) we neglected the flux dependence on the
densities of the parent nuclei which generate solar neutrinos. These
densities can change when some of the input parameters are
varied. For example, $\Phi_{\text{N}}$ and $\Phi_{\text{O}}$
are very sensitive to variations of
$Z/X$, since in this case, in addition to the temperature variation, the
change of metallicity also influences the effectiveness of the CN cycle.
However, this effect is negligible when estimating total experimental
signals.
Analytical approximations to the
numerical values of the $\beta_{i}$ can be found
by considering the dependence on temperature of the
Gamow factors for the relevant nuclear reactions [17].
We would
like to comment here just on the temperature dependence of the
ratio $\Phi_{\text{B}}/\Phi_{\text{Be}}$:
$$\frac{\Phi_{\text{B}}}{\Phi_{\text{Be}}}=\frac{n_{p}\langle\sigma v\rangle_{17}}{n_{e}\langle\sigma v_{e}\rangle_{\text{capt}}}\propto\frac{n_{p}}{n_{e}}\,\frac{T^{\gamma_{17}}}{T^{\gamma_{\text{capt}}}}\quad,$$
(11)
where $\gamma_{\text{capt}}=-1/2$, and $\gamma_{17}=-2/3+E_{17}/KT$
($E_{17}$ is
the Gamow peak for the
$p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$
reaction) [18]. Assuming $n_{p}/n_{e}$ to be constant,
and evaluating $E_{17}/KT$ at $T_{c}^{\text{SSM}}$, we get
$$\frac{\Phi_{\text{B}}}{\Phi_{\text{Be}}}\propto T_{c}^{13.5}\quad.$$
(12)
This value is in good agreement with the one reported in Table 6
for a $S_{pp}$ variation; the agreement is less good with the values
obtained by varying the other parameters (in this case $n_{p}/n_{e}$ is
clearly not conserved).
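The estimate of Eq. (12) is easy to reproduce numerically. The sketch below uses the textbook Gamow-peak formula $E_{0}\simeq 1.22\,(Z_{1}^{2}Z_{2}^{2}\mu T_{6}^{2})^{1/3}$ keV, which is an assumption of this check, not a formula used elsewhere in the paper:

```python
# Local exponent of Phi_B/Phi_Be ~ T_c^beta, from Eqs. (11)-(12):
#   beta = gamma_17 - gamma_capt = (-2/3 + E_17/KT) - (-1/2).
T6 = 15.6                     # central temperature in units of 1e6 K (SSM-like)
Z1, Z2 = 1, 4                 # charge numbers of p and 7Be
mu = 1.0 * 7.0 / (1.0 + 7.0)  # reduced mass number of the p-7Be pair
E17_keV = 1.22 * (Z1**2 * Z2**2 * mu * T6**2) ** (1.0 / 3.0)  # Gamow peak
kT_keV = 8.617e-8 * T6 * 1e6  # Boltzmann constant (keV/K) times T_c
beta = -2.0 / 3.0 + E17_keV / kT_keV + 0.5
print(f"E_17 ~ {E17_keV:.1f} keV, beta ~ {beta:.1f}")  # beta ~ 13.5
```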
Therefore, as long as the temperature profile is unchanged, lowering the
temperature immediately implies that Boron neutrinos are suppressed
much more strongly than Beryllium neutrinos, since the penetrability
factor for the
$p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$
reaction is diminished.
V The central solar temperature and the experimental results.
From the argument just presented, it is clear that a central
temperature reduction cannot work; nevertheless, let us perform a
$\chi^{2}(T_{c})$ analysis to see quantitatively what happens. We define:
$$\chi^{2}(T_{c})=\sum_{XY}(S^{\text{ex}}_{X}-S^{\text{th}}_{X})V^{-1}_{XY}(S^{\text{ex}}_{Y}-S^{\text{th}}_{Y})\quad,$$
(13)
where the symbols have the following meaning.
i)
The experimental signals $S^{\text{ex}}_{X}$ ($X=$ Gallium, Chlorine and
Kamiokande) are the ones reported in Eqs. (3) and (4).
ii)
The theoretical signals $S^{\text{th}}_{X}(T_{c})$ are calculated according
to the formula
$$S^{\text{th}}_{X}=\sum_{i\neq pp}X_{i}\,\Phi_{i}^{\text{SSM}}\left(\frac{T_{c}}{T_{c}^{\text{SSM}}}\right)^{\beta_{i}}+X_{pp}\,\Phi_{pp}\quad,$$
(14)
where we take the $\beta$ coefficients corresponding to the $S_{pp}$
variations
(second column of Table 6), and we use the
CDF94 standard solar
model results, see Table 4. Note, in particular, that
$\Phi_{\text{B}}^{\text{SSM}}$ has been
calculated by using $S_{17}=22.4$ eV barn. In order to achieve a better
accuracy, $\Phi_{pp}$ is calculated directly through Eq. (1).
iii)
The error matrix $V_{XY}$ takes into account both the experimental
and the theoretical uncertainties. The theoretical uncertainties are
due to the neutrino cross sections $X_{i}$, and to the solar model
parameters that are not related to the free parameter $T_{c}$, i.e.
$S_{33}$, $S_{34}$, and $S_{17}$.
The diagonal entries, $V_{XX}$, are the sum of the experimental
variance $\sigma_{X}^{2}$, plus the squares of the errors due to the
cross sections $\sum_{i}(\Delta_{X}^{i})^{2}$ ($\Delta_{X}^{i}$ is the error of the
detection cross section for the $X$ experiment averaged over the $i$-th flux),
plus the squares of the
errors due to the input parameters $S_{33}$, $S_{34}$, and $S_{17}$, i.e.
$\sum_{P}(\Delta_{X}^{P})^{2}$ ($P=S_{33},S_{34},S_{17}$).
The off-diagonal entries have contributions only from these last errors:
$V_{XY}=\sum_{P}\Delta_{X}^{P}\Delta_{Y}^{P}$. The errors $\Delta$
are calculated by linear propagation. Therefore, if we call
$\delta_{X}^{i}$ the error on the cross section $X_{i}$, then
$\Delta_{X}^{i}=\Phi_{i}^{\text{SSM}}\left(\frac{T_{c}}{T_{c}^{\text{SSM}}}\right)^{\beta_{i}}\delta_{X}^{i}$,
while, if $\delta^{P}$ is the error on the
parameter $P$, then
$\Delta_{X}^{P}=\left(\partial S^{\text{th}}_{X}/\partial P\right)\delta^{P}$.
The partial derivatives of the neutrino fluxes with respect
to these parameters are estimated by using power laws which we
have determined from numerical experiments, and which are very
similar to those of Table 7.2 in Ref. [19].
The values we use for the
uncertainties of the SSM parameters, $\delta^{P}$, are given in
Table 1,
while the errors on the cross sections, $\delta^{i}_{X}$,
can be found in Table 2. The
use of the error matrix is necessary to prevent an apparently good
fit from being achieved in an unphysical way: e.g., we cannot use the
uncertainty of the Boron flux to strongly reduce its contribution to
the Davis experiment and, at the same time, have a smaller
reduction in the Kamiokande experiment.
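The construction of $V_{XY}$ described in point iii) can be summarized in a short script; all numerical entries below are illustrative placeholders, since the actual errors come from Tables 1 and 2:

```python
import numpy as np

# Error matrix of point iii) and the chi^2 of Eq. (13).
# Rows/columns ordered as (Cl, Ga, Ka); all numbers are placeholders.
r = np.array([0.5, 8.0, 0.4])             # residuals S_ex - S_th
sigma_exp = np.array([0.26, 10.0, 0.42])  # experimental errors
# Correlated theory errors Delta_X^P, one column per parameter P
# (schematically P = S_33, S_34, S_17):
Delta = np.array([[0.05, 0.08, 0.10],
                  [0.50, 0.80, 0.30],
                  [0.00, 0.05, 0.15]])

# V_XX = sigma_X^2 + sum_P (Delta_X^P)^2 ;  V_XY = sum_P Delta_X^P Delta_Y^P
V = np.diag(sigma_exp**2) + Delta @ Delta.T
chi2 = r @ np.linalg.solve(V, r)
chi2_uncorr = np.sum(r**2 / np.diag(V))  # off-diagonal correlations dropped
print(chi2, chi2_uncorr)
```

Comparing `chi2` with `chi2_uncorr` shows the role of the off-diagonal terms, which is the effect discussed in point v) below.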
The results shown in Fig. 8(a) deserve a few comments.
i)
The best fit to the three experimental signals yields
$\chi^{2}_{\text{min}}$[Cl+Ga+Ka]=
18.5, which, for two degrees of freedom, is excluded at the 99.99%
level (here we have treated
systematic and statistical errors on an equal footing);
we thus confirm the results of Ref. [20].
This is partly due to
the well known “inconsistency” between Kamiokande and Chlorine.
ii)
Even if we only consider Gallium and Kamiokande, the fit is still
poor, yielding $\chi^{2}_{\text{min}}$[Ga+Ka]= 11,
which for one degree of freedom is
excluded at the 99.9% level. The reason is that if one tries to reduce
$\Phi_{\text{Be}}$ in accordance with Gallium data,
then $\Phi_{\text{B}}$ becomes too small in
comparison with the Kamiokande result. On the other hand, if one
considers just Gallium and Chlorine results the situation is better
($\chi^{2}_{\text{min}}$[Cl+Ga]= 5, which has a 2.5% probability),
due to the fact that
the smaller Boron (and Beryllium) signal implied by the Chlorine
experiment can be more easily adjusted to the Gallium result.
iii)
From the above discussion it is clear that if one lowers the
$p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$
cross section, the situation gets even worse, see Fig. 8(b).
In other words, a reduction of $S_{17}$ does not solve the solar
neutrino problem.
iv)
Considering the Chlorine data corresponding (approximately) to
the same data taking period as the other experiments
($S_{\text{Cl}}^{88-92}=2.76\pm 0.31\text{ SNU}$ [3]) the
situation is only slightly changed:
$\chi^{2}_{\text{min}}$[Cl+Ga+Ka]= 15,
which, for two degrees of freedom, is
excluded at the 99.94% level; $\chi^{2}_{\text{min}}$[Ga+Ka]=
11, which for one degree of
freedom is excluded at the 99.9% level; and
$\chi^{2}_{\text{min}}$[Cl+Ga]=6, which
has a 2.4% probability.
v)
For the uncertainties of Table 1,
the effect of the error correlation is not large:
for instance, if we use
uncorrelated errors
$\chi^{2}_{\text{min}}$[Cl+Ga+Ka]= 16
instead of 18.5.
The real importance of error correlation
becomes evident if we try to resolve the
discrepancy by increasing the errors. For example, doubling the
uncertainties reduces the uncorrelated
$\chi^{2}_{\text{min}}$ to 14, while the
correlated one practically does not change.
vi)
The situation does not change significantly
when considering models where one of the other parameters (opacity tables,
Z/X, age) is varied instead of $S_{pp}$,
as shown in Fig. 9. Slightly better fits are
obtained by varying Z/X or the age than $S_{pp}$ or the opacities,
but the resulting
$\chi^{2}_{\text{min}}$[Cl+Ga+Ka]= 16.5
is still excluded at the 99.97% level.
vii)
If one insists on a low temperature solution, the
best fit is for $T_{c}/T_{c}^{\text{SSM}}\approx 0.94$,
i.e. $T_{c}=1.46\times 10^{7}\text{ K}$. The price to pay
for this 6% temperature reduction is very high in terms of the input
parameters which are being varied, see Table 1.
Huge variations of
the parameters are required, and, furthermore, in many cases the
values used are at the border of what can be tolerated by our stellar
evolution code: for example, we are not able to produce a Sun with
$T_{c}/T_{c}^{\text{SSM}}<0.94$ by lowering the opacity or the age.
VI A low energy resonance in the
${}^{3}\text{He}+{}^{3}\text{He}$
channel?
As mentioned in the introduction, the other way to enhance the $pp$-I
termination is to play with the ${}^{3}\text{He}$ nuclear cross sections.
As shown in Ref. [6], if the astrophysical S-factors
are varied by a constant (throughout the star) amount, the fluxes scale as
$$\Phi_{i}=\Phi_{i}^{\text{SSM}}\,\theta$$
(15a)
where
$$\theta=\frac{S_{34}}{S_{34}^{\text{SSM}}}\sqrt{\frac{S_{33}^{\text{SSM}}}{S_{33}}}\quad\quad\text{ and }\quad i=\text{B, Be}\quad.$$
(15b)
Numerical experiments confirm the approximate validity of
Eqs. (15),
giving
$\Phi_{\text{B},\text{Be}}=\Phi_{\text{B},\text{Be}}^{\text{SSM}}\,\theta^{0.9}$.
Note that the changes of $\Phi_{\text{B}}$ and
$\Phi_{\text{Be}}$ are proportional.
For variations of $S_{33}$ and $S_{34}$ the solar temperature is essentially
unaffected, and, consequently, all the fluxes other than B and Be are also
unaffected. Only the $pp+pep$
neutrino flux slightly changes, in order to fulfill the
luminosity condition, Eq. (1), i.e.
$$\Phi_{pp+pep}=\Phi_{pp+pep}^{\text{SSM}}+\Phi_{\text{Be}}^{\text{SSM}}-\Phi_{\text{Be}}\quad.$$
(16)
In order to reduce the Beryllium flux by a factor – say – three with
respect to the SSM value, $S_{33}$ ($S_{34}$) has to be nine times (one third)
the value used in the standard solar model calculations. Clearly, what
matters are the values of the astrophysical factors at the energies
relevant in the Sun, i.e. at the position of the Gamow peak for the
He + He reactions near the solar center, $E_{\text{G}}\approx 20\text{ keV}$.
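The factor-of-nine and factor-of-three statements above follow directly from Eq. (15b), as a two-line check shows:

```python
import math

def theta(s34_ratio, s33_ratio):
    """theta of Eq. (15b): (S34/S34^SSM) * sqrt(S33^SSM/S33)."""
    return s34_ratio * math.sqrt(1.0 / s33_ratio)

# A Beryllium-flux reduction by ~3 needs theta ~ 1/3 (up to the theta^0.9
# correction found numerically):
print(theta(1.0, 9.0))        # S33 nine times its SSM value -> 1/3
print(theta(1.0 / 3.0, 1.0))  # S34 one third of its SSM value -> 1/3
```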
We recall that the astrophysical factors used in the calculations are
obtained by extrapolating experimental data taken at higher energies
(see Ref. [18] for a review). Thus a very low energy resonance in
the ${}^{3}\text{He}+{}^{3}\text{He}$ reactions could be effective in reducing
$\Phi_{\text{Be}}$ and $\Phi_{\text{B}}$, and
could have escaped experimental detection. This possibility, first
advanced in Ref. [10], cannot be completely dismissed
(see the discussion
in Refs. [6, 18])
and it is presently being investigated in the underground
nuclear physics experiment LUNA at Laboratori Nazionali del Gran
Sasso [11].
For a resonance at energy $E_{r}$ and with strength $\omega\gamma$, Eqs. (15) become:
$$\Phi_{i}=\Phi_{i}^{\text{SSM}}\,\sqrt{\frac{1}{1+x_{i}}}\quad\quad i=\text{B, Be}\quad,$$
(17a)
where
$$x_{i}=\frac{\omega\gamma}{W}\exp[3A(KT_{i})^{-1/3}-E_{r}/(KT_{i})]\quad,$$
(17b)
and $T_{i}$ are the temperatures at the peak of the $\nu_{\text{Be}}$
and $\nu_{\text{B}}$ production
($T_{\text{Be}}=1.45\times 10^{7}\text{ K}$,
$T_{\text{B}}=1.5\times 10^{7}\text{ K}$), $K$ is the Boltzmann constant,
and the other constants, defined in
Ref. [6], are $W=20.4\text{ keV}$ and $A=1.804\text{ MeV}^{1/3}$.
Let us remark that the resonance can work differently in different
regions of the Sun, depending on the kinetic energies of the
colliding particles. A low energy resonance is more efficient in the
outer zone of energy production, and consequently $\Phi_{\text{Be}}$ can be
suppressed more than $\Phi_{\text{B}}$.
The opposite occurs for higher energy
resonances, the turning point being $E_{r}\approx E_{\text{G}}$, see
Ref. [6] for details.
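The stronger suppression of $\Phi_{\text{Be}}$ for a low energy resonance can be verified directly from Eqs. (17); the resonance strength $\omega\gamma$ used below is an arbitrary illustrative value, not a fit result:

```python
import math

K_KEV_PER_K = 8.617e-8      # Boltzmann constant in keV/K
W_KEV = 20.4                # W from Ref. [6], keV
A_KEV13 = 1.804 * 10.0      # A = 1.804 MeV^(1/3) = 18.04 keV^(1/3)
T_PEAK = {"Be": 1.45e7, "B": 1.5e7}  # production-peak temperatures, K

def suppression(E_r_keV, omega_gamma_keV, T_K):
    """Phi_i / Phi_i^SSM = 1/sqrt(1 + x_i), Eqs. (17a)-(17b)."""
    kT = K_KEV_PER_K * T_K
    x = (omega_gamma_keV / W_KEV) * math.exp(3.0 * A_KEV13 * kT ** (-1.0 / 3.0)
                                             - E_r_keV / kT)
    return 1.0 / math.sqrt(1.0 + x)

s_be = suppression(0.0, 1e-20, T_PEAK["Be"])  # omega*gamma is illustrative
s_b = suppression(0.0, 1e-20, T_PEAK["B"])
print(s_be, s_b)  # s_be < s_b: Beryllium is suppressed more than Boron
```

For $E_{r}=0$ the exponential is larger at the cooler Beryllium production peak, which is the effect described in the preceding paragraph.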
We have performed a $\chi^{2}$ analysis as a function of the resonance
strength $\omega\gamma$ for several values of the resonance energy
$E_{r}$, with a
procedure quite similar to that used in the previous section.
The errors on the calculated signals arise from the neutrino
interaction cross sections, from $S_{17}$, and from all those quantities
which influence the estimated central temperature of the Sun ($S_{pp}$,
Z/X, opacity and age), but not from $S_{33}$ and $S_{34}$, which influence the
fluxes according to Eqs. (15)
and correspond to our free parameter.
Again, the derivatives of the neutrino fluxes with respect to these
parameters, necessary to calculate the error matrix by linear
propagation, are estimated by using power laws very similar to
those of Table 7.2 in Ref. [19].
The uncertainties we use are shown in Tables 1 and 2.
We note that uncertainties on
the absorption cross sections, the metallicity Z/X and the opacity are
the most important for estimating the errors on the signal.
For the opacity we followed
Ref. [21] and took
“the characteristic difference between the solar
interior opacity calculated with Livermore and with Los Alamos
opacity code”, which may or may not be a fair estimate of the
uncertainty, but we could not find a better prescription. However,
as we shall see, the
correlation among the errors is such that $\chi^{2}_{\text{min}}$
does not change even
if we double the uncertainties on Z/X and on the opacity.
The results are presented in Fig. 10. The situation looks slightly
better than in the low temperature models since the
$\Phi_{\text{Be}}$ reduction
does not imply an even stronger $\Phi_{\text{B}}$
reduction. However, the best $\chi^{2}_{\text{min}}=14$,
obtained for $E_{r}=0$, is still excluded at the 99.9% level.
The $\chi^{2}_{\text{min}}$ slightly increases with $E_{r}$
because of the tuning of the
Beryllium/Boron suppression.
The best fit strength as a function of $E_{r}$ is shown in Fig. 11,
together
with the existing experimental upper bound. We expect that the LUNA
experiment, presently running at LNGS [11], will have a sensitivity
about a factor of 100 better than previous experiments,
mainly due to the cosmic-ray shielding
in the underground laboratory, so that the search should be able
to detect/exclude such a resonance down to extremely low values of
$E_{r}$.
The use of the properly correlated errors on the fluxes is even more
important when studying the effect of the hypothetical resonance
than when we changed the temperature. The $\chi^{2}_{\text{min}}$
would be 10
instead of 14, had we used uncorrelated errors. Moreover, doubling
the errors would yield a $\chi^{2}_{\text{min}}$
of almost 6, while the correlated one remains 14.
The intuitive explanation
of how the uncorrelated fit works is the following.
The Chlorine and Kamiokande results require different suppressions of the
neutrino fluxes. The fit finds the best compromise between the two
experiments by adjusting the resonance strength.
Then, the uncertainty on the temperature is used
to further deplete
$\Phi_{\text{Be}}$ and, at the same time, to increase $\Phi_{\text{B}}$,
which is clearly unphysical. The correlated fit correctly uses the
uncertainty on the temperature either to increase or to decrease both fluxes
at the same time: either
option is useless, once we get the best compromise for the common
reduction of the two fluxes, no matter how much we are allowed to change the
temperature.
Combining the two mechanisms, i.e. a resonance in a low
temperature model, does not work either, since again, once the best compromise
suppression of the ${}^{7}\text{Be}$ and ${}^{8}\text{B}$ fluxes is achieved by one of
the two mechanisms, the other cannot do much more.
VII The detection of $pep$ and
${}^{7}$Be neutrinos
New generation experiments are being planned for the detection of
monochromatic solar neutrinos produced in electron capture
(${}^{7}\text{Be}+e^{-}\to{}^{7}\text{Li}+\nu$)
and in the $pep$ ($p+e^{-}+p\to d+\nu$)
reactions [12, 13, 14].
Furthermore, Bahcall [15, 16]
pointed out that, from the measurement of the average energy difference
between neutrinos emitted in solar and in laboratory decays, one can infer
the temperature of the production zone.
In this section we discuss what can be learnt from such future
measurements about the properties of neutrinos and of the Sun.
Concerning the intensity of the ${}^{7}\text{Be}$ line,
we recall the bounds of Eqs. (5):
at $1\sigma$ ($3\sigma$)
the neutrino flux has to be smaller than
$0.7\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$
($4.0\times 10^{9}\text{cm}^{-2}\text{s}^{-1}$),
otherwise neutrinos are non-standard. We recall however
that a low
$\Phi_{\text{Be}}$ is also typical of the MSW solution, see Fig. 12.
The $pep$ neutrinos are a good indicator of $\Phi_{pp}$, since the ratio
$\Phi_{pep}/\Phi_{pp}$ is rather stable.
In Fig. 12 we see that standard neutrinos
correspond to $\Phi_{pep}$ in the range $(1\div 2)\times 10^{8}\text{cm}^{-2}\text{s}^{-1}$, whereas the MSW
solution requires $\Phi_{pep}\leq 3\times 10^{7}\text{cm}^{-2}\text{s}^{-1}$.
Thus, a measurement of the
$pep$-line intensity will be crucial for deciding about neutrino
properties.
The possibility of measuring inner solar temperatures through thermal
effects
on monochromatic neutrino lines looks to us extremely
fascinating (although remote). In this respect the homology
relationship, Eq. (8),
is particularly interesting, see Fig. 13.
If homology holds, a measurement of the solar temperature in the –
say – ${}^{7}\text{Be}$ production zone gives the value of $T_{c}$.
On the other hand,
the homology relation itself is testable – in principle – by comparing
the temperatures at two different places, as can be done by looking
at the shapes of both the
$\nu_{\text{Be}}$ and $\nu_{pep}$ lines. We remark that this
would be a test of the mechanism for energy transport through the
inner Sun.
VIII Conclusions
i)
If neutrinos are standard, the present solar neutrino experiments
already impose severe constraints on the individual components of
the total neutrino flux. These constraints, at the $1\sigma$ level, are:
$$\begin{aligned}
\Phi_{\text{Be}}&\leq 0.7\times 10^{9}\,\text{cm}^{-2}\text{s}^{-1}\\
\Phi_{\text{CNO}}&\leq 0.6\times 10^{9}\,\text{cm}^{-2}\text{s}^{-1}\\
64\times 10^{9}\,\text{cm}^{-2}\text{s}^{-1}\leq\Phi_{pp+pep}&\leq 65\times 10^{9}\,\text{cm}^{-2}\text{s}^{-1}
\end{aligned}$$
(18)
The constraint on Beryllium neutrinos is in strong disagreement with
the results of any standard solar model calculation, see Table 4.
The solar neutrino problem is now at the Beryllium production level: the
experimental data demand a strong shift towards the $pp$-I
termination, and the problem is not restricted anymore to the rare
$pp$-III (${}^{8}$B) termination.
ii)
Solar models with low inner temperatures
show temperature profiles $T(m)$
homologous to that of the Standard Solar Model: $T(m)=kT^{\text{SSM}}(m)$.
As a consequence, the main components of the neutrino flux depend
essentially on the central solar temperature $T_{c}$ (see
Table 5), and
the experimental signals can be parameterized in terms of $T_{c}$. As
already known, there is no value of $T_{c}$ which can account for all the
available experimental results
($\chi^{2}_{\text{min}}(T_{c})\approx 16$). In addition, we find
that the fit is poor even considering just Gallium and Kamiokande
results ($\chi^{2}_{\text{min}}(T_{c})\approx 11$).
Furthermore, lowering the cross section for
$p+{}^{7}\text{Be}\to\gamma+{}^{8}\text{B}$ makes things worse.
iii)
Alternatively, the shift of the nuclear fusion chain towards the $pp$-I
termination could be induced by a hypothetical low energy
resonance in the ${}^{3}\text{He}+{}^{3}\text{He}$ reaction.
This mechanism gives a somewhat
better but still poor fit to the combined experimental data
($\chi^{2}_{\text{min}}(T_{c})\approx 14$).
Its possible relevance to the solar neutrino problem
will be elucidated by an underground nuclear physics experiment
presently being performed at LNGS.
iv)
Concerning future experiments, the measurement of the ${}^{7}\text{Be}$ and,
particularly, of the $pep$-line intensities will be crucial for
discriminating non-standard solar models from non-standard
neutrinos, in relation to the bounds in Eq. (18).
Furthermore, the
homology relation itself can be tested, in principle, in experiments
aimed at the measurement of inner solar temperatures by looking at
thermal effects on
the $pep$ and Be line shapes.
This would provide
a clear test about the mechanism of energy transport in the solar
interior.
In conclusion, we feel that recent Gallium results, taken at their face
value, strongly point towards non-standard neutrinos. Of course we
are anxiously waiting for the calibration of Gallex and Sage, and for
future experiments.
Acknowledgements. One of us (G. F.) acknowledges useful discussions with V. Berezinsky.
Flaw effects on square and kagome artificial spin ice
M. Di Pietro Martínez
mdipietro@ifimar-conicet.gob.ar
R. C. Buceta
rbuceta@mdp.edu.ar
Abstract
In this work, we study the effect of nanoislands with design flaws in bilayer-square and kagome arrays of artificial spin ice.
We have introduced disorder as random fluctuations in the length of the magnetic islands using two kinds of distributions: Gaussian and uniform.
For artificial square ice, as the system behaviour depends on its geometrical parameters, we focus on studying the system in the proximity of the Ice-like configuration where nearest neighbour and next nearest neighbour interactions between islands are approximately equal.
We show how length fluctuations of the nanoislands affect the antiferromagnetic and (locally) ferromagnetic ordering by inducing the system, in the case of weak disorder, to return to the Ice-like configuration where antiferro- and ferromagnetic vertices are equally likely. Moreover, in the case of strong disorder, ferromagnetic vertices prevail regardless of whether the mean length corresponds to an antiferromagnetic ordering or not. Additionally, for strong disorder we have found that excitations do not completely vanish in the ground state.
We also study kagome arrays, inducing a similar crossover between vertex types, and show how disorder can lead to a steady mixed state where both types of monopoles are present.
Keywords: Frustration, geometry, dipolar interaction, disorder, spin ice, degrees of freedom.
1 Introduction
Artificial spin ices are counterparts to natural spin ice materials such as rare earth pyrochlores [1]. Both are magnetic systems with frustrated interactions that show unusual collective behaviours and distinctive complex patterns [2, 3].
Magnetic ions in pyrochlore compounds are located in the corners of tetrahedra
and the local interaction energy is minimized when the moments order according to Pauling’s ice rule: two spins pointing inwards and two outwards of each tetrahedron [4].
This rule was originally proposed for the proton orderings in (cubic) water ice but a perfect mapping with spin ice materials such as the one mentioned above was found later (see Figure 1 (a) and (b)).
Geometric frustration of pyrochlore materials enables highly degenerate ground states of the spin ice manifested as a large residual entropy even at very low temperatures [5].
Artificial spin ice (ASI) consists of a lithographically manufactured two-dimensional array of ferromagnetic nanoislands with a strong shape anisotropy resulting in single-domains that behave like giant Ising spins.
Magnetic force microscope images show that the magnetic moment of each island is indeed aligned along its long axis [6], which confirms the dominance of shape anisotropy over the low magnetocrystalline anisotropy of Permalloy, one of the materials used in the manufacture of ASI.
The appropriate choice of material, geometry and array topology leads to different characteristic collective behaviours.
The islands of the square ASI are placed in the links of a square lattice, as we show in Figure 2 (a).
The four spins set around a given vertex have $16$ possible configurations, classified into four topological types $\mathrm{T}_{n}$, with $n=1,\dots,4$ (see Figure 2 (b)). Note that $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ satisfy the ice rules meeting $\nabla\cdot\mathbf{S}=0$, while $\mathrm{T}_{3}$ and $\mathrm{T}_{4}$ do not.
If the dipoles are oriented randomly and not interacting, $3/8$ of the vertex population corresponds to $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$. Wang et al. [6] observed that, in lattices with fixed island size, this holds for large lattice spacing, while for smaller lattice spacing $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ population increases approximately to $7/10$, which shows the preponderance of the Ice-type spin configuration.
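The vertex bookkeeping above can be checked with a short enumeration. As a minimal sketch (the sign convention is ours: $\sigma=+1$ when an island's moment points into the vertex, islands ordered N, E, S, W), classifying each of the $16$ configurations by its net charge reproduces the multiplicities behind the random-state fractions:

```python
from itertools import product

def vertex_type(sigma):
    """sigma = (N, E, S, W); +1 points into the vertex, -1 points out."""
    q = sum(sigma)
    if abs(q) == 4:
        return 4               # T4: all-in or all-out
    if abs(q) == 2:
        return 3               # T3: three-in-one-out or one-in-three-out
    n, e, s, w = sigma
    # ice rule (q = 0): T1 when the two 'in' islands face each other
    return 1 if n == s else 2

counts = {1: 0, 2: 0, 3: 0, 4: 0}
for sigma in product((+1, -1), repeat=4):
    counts[vertex_type(sigma)] += 1
print(counts)   # {1: 2, 2: 4, 3: 8, 4: 2}: random fractions 1/8, 1/4, 1/2, 1/8
```

The counts 2, 4, 8 and 2 out of 16 give exactly the non-interacting fractions quoted in the text, in particular the $3/8$ of ice-rule ($\mathrm{T}_{1}$ plus $\mathrm{T}_{2}$) vertices.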
The easiest way to create an excitation is to flip a single spin turning a $\mathrm{T}_{1}$ or $\mathrm{T}_{2}$ vertex into $\mathrm{T}_{3}$.
By doing this, sources or defects are being created, making the flow nonzero, or locally $\nabla\cdot\mathbf{S}=\pm 2$.
Thus, the magnetic defects interact as Coulombian charges, which are called monopoles [7, 8].
However, in two-dimensional square ASI, monopoles do not appear as effective low-energy configurations, as they do in three-dimensional materials [9]. When the ice rule is not met, a mobile monopole-antimonopole pair is created; the two defects can move separately by flipping spins until the so-called ‘Dirac string’ between them bumps into another defect or closes into a loop. The creation of this pair of mobile defects is known as fractionalization of the excitation [10].
The square-ice model (or six-vertex model [11]) fulfills the condition $J_{1}=J_{2}$ where $J_{1}$ and $J_{2}$ are the nearest-neighbour and next-nearest-neighbour energy, respectively, as we show in Figure 2 (a).
Then, the energies of vertices $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ (namely $E_{1}$ and $E_{2}$ respectively) are degenerate and the system orders according only to Ice-like configurations.
When $J_{1}\neq J_{2}$, the F-model is obtained instead, which consists of spin configurations $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ with $E_{2}>E_{1}$. In square ASI, at low temperatures, the configurations of the F-model are expected. The inequivalence of the properties of the vertices $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ in square ASI causes it to be weakly frustrated, like the F-model, when $J_{1}\neq J_{2}$ [12]. In contrast, in three-dimensional spin ice the interactions between pairs of spins in a vertex are equivalent. In square ASI this can also be achieved by slightly changing the geometry of the array.
In order to recover the equivalence of vertex interactions, Möller and Moessner [13] proposed a bilayer square ASI made of two sublattices spaced a distance $h$, each composed of unidirectional magnetic islands, designed such that when $h=0$ the two-dimensional square ASI with its properties is recovered. By adjusting the gap distance $h$, it is possible to establish degenerate energy states for all vertices that obey the ice rule; the centers of the islands form a tetrahedron (see Figure 1 (c)) and the ordering disappears, allowing monopoles to move freely. For this system, the authors showed [14] that the monopoles are excitations with two types of Coulombian interactions: a three-dimensional magnetic one and a two-dimensional entropic one with logarithmic behaviour. Considering point-like dipoles for the degenerate state (taking $J_{1}=J_{2}$), they obtained [13] that the gap parameter is $h_{\mathrm{ice}}\approx 0.419\,a$, where $a$ is the lattice spacing. For elongated dipoles, in contrast, the ice gap parameter decreases as the length $d$ increases (in particular, $h_{\mathrm{ice}}=0$ when $d=a$). Shortly afterward, Mol et al. [15] studied the ground states and excitations as a function of $h$. They showed that there is anisotropy in the tension of the ‘Dirac string’ and that the quantity of magnetic charge depends on the direction of the monopoles’ movement, showing an abrupt change when monopoles are separated along the major axis of the islands. Additionally, for point-like dipoles, these authors found that the ground state changes at $h^{\prime}_{\mathrm{ice}}\approx 0.444\,a$, attributing this slight deviation to one of the ice-rule configurations not having the ground-state energy, i.e. the system is not in a completely degenerate state.
Since the pioneering work of Wang et al. [6] on square ASI [16, 17], other two-dimensional lattices with regular geometries have been studied, such as triangular [15, 18, 19, 17, 20, 21, 14, 22], honeycomb or kagome [23, 24, 25], brick-work [26] and pentagonal or shakti [27, 28].
Particularly, square and kagome ASI complement each other. The kagome ASI consists of an array of islands on the links of a honeycomb lattice.
While square ASI has an even coordination number, kagome ASI has an odd coordination number, which implies that its $2^{3}$ possible vertices are all monopole like.
Also, as the interaction between each pair of islands set around a given vertex is equivalent, there is no need for a gap distance like in square ice.
In this paper, we have studied the influence of design flaws of the nanoislands on the thermodynamic properties and dynamics of the system, for a bilayer-square and kagome ASI. Here, we have considered random fluctuations in the length $d$ of the magnetic islands for two prototypical cases: Gaussian and uniform distributions.
In the square ASI case, we have focused on studying the system at points of the parameter space $(d,h)$ in a neighbourhood of the point $(d,h_{\mathrm{ice}}(d))$ where the condition $J_{1}=J_{2}$ is met. We show how length fluctuations of nanoislands affect the antiferromagnetic and ferromagnetic ordering, i.e. $J_{2}\lesssim J_{1}$ and $J_{2}\gtrsim J_{1}$, respectively, at low temperatures where the populations of $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ vertices prevail. Similarly, we have studied the effects of design flaws on kagome ASI. In Section 2 we give the theoretical background of the needle model and our simulation details. In Section 3, we show the results achieved for the vertex population, the specific heat and the acceptance as a function of temperature, and we also provide a comprehensive study of the observed effects.
Finally, in the Conclusions section, we evaluate the outcomes obtained, showing the importance of the presence of the design flaws in the thermodynamic properties of the vertices that satisfy the ice rule.
2 The needles model
We have considered the island moments (or spins) ${\bf S}_{\kappa}=\mu\hat{S_{\kappa}}$ as breadthless needles of finite length, each of which is a dipole formed by two effective charges $\pm q_{\kappa}=\mu/d_{\kappa}$ located at ${\bf r}_{\kappa}^{\pm}$ and separated by a distance $d_{\kappa}$ (see Figure 2 (c)). Thus, the potential created by a needle-dipole at a point $\bf r$ is simply
$$\Phi_{\kappa}({\bf r})=\frac{q_{\kappa}}{4\pi\epsilon_{0}}\left(\frac{1}{|{\bf r}-{\bf r}_{\kappa}^{+}|}-\frac{1}{|{\bf r}-{\bf r}_{\kappa}^{-}|}\right)\;,$$
(1)
and the interaction energy between two dipole-like spins ${\bf S}_{\alpha}$ and ${\bf S}_{\beta}$ is
$$\mathcal{U}_{\alpha\beta}=q_{\beta}\bigl[\Phi_{\alpha}({\bf r}_{\beta}^{+})-\Phi_{\alpha}({\bf r}_{\beta}^{-})\bigr]\;.$$
Taking into account that ${\bf r}_{\kappa}^{\pm}={\bf r}_{\kappa}\pm\frac{1}{2}\,d_{\kappa}\,\hat{S}_{\kappa}$ (with $\kappa=\alpha,\beta$) and ${\bf r}_{\alpha\beta}={\bf r}_{\beta}-{\bf r}_{\alpha}$, the equation above can be rewritten as
$$\mathcal{U}_{\alpha\beta}=\frac{\mathcal{D}}{d_{\alpha}d_{\beta}}\left(\frac{1}{|{\bf r}_{\alpha\beta}+\frac{1}{2}(d_{\beta}\hat{S}_{\beta}-d_{\alpha}\hat{S}_{\alpha})|}+\frac{1}{|{\bf r}_{\alpha\beta}-\frac{1}{2}(d_{\beta}\hat{S}_{\beta}-d_{\alpha}\hat{S}_{\alpha})|}-\frac{1}{|{\bf r}_{\alpha\beta}+\frac{1}{2}(d_{\beta}\hat{S}_{\beta}+d_{\alpha}\hat{S}_{\alpha})|}-\frac{1}{|{\bf r}_{\alpha\beta}-\frac{1}{2}(d_{\beta}\hat{S}_{\beta}+d_{\alpha}\hat{S}_{\alpha})|}\right)\;,$$
(2)
where $\mathcal{D}=\mu_{0}\mu^{2}/4\pi$. As can be shown, the expression for ${\mathcal{U}}_{\alpha\beta}$ reduces to the point-dipole energy when $d_{\kappa}\rightarrow 0$ ($\kappa=\alpha,\beta$).
Note that, since ${\bf r_{\alpha\beta}}={\bf r_{\alpha\beta}}(a,h)$, the interaction energy depends on the fixed geometric parameters $a$, $h$ and $d_{\kappa}$ of the lattice, and so does the system’s behaviour. Recall that in the case of kagome ASI, $h=0$.
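As a numerical sanity check of Equation 2, one can verify the point-dipole limit explicitly. A minimal sketch in units $\mathcal{D}=a=1$ (the function names are ours): for the next-nearest-neighbour geometry with parallel spins one lattice constant apart, the needle energy converges to the point-dipole value $-2\mathcal{D}$ as the lengths shrink.

```python
import numpy as np

def needle_energy(r_ab, S_a, S_b, d_a, d_b, D=1.0):
    """Interaction energy of two needle dipoles, Eq. (2), in units D = a = 1."""
    u = 0.5 * (d_b * S_b - d_a * S_a)
    v = 0.5 * (d_b * S_b + d_a * S_a)
    return (D / (d_a * d_b)) * (
        1.0 / np.linalg.norm(r_ab + u) + 1.0 / np.linalg.norm(r_ab - u)
        - 1.0 / np.linalg.norm(r_ab + v) - 1.0 / np.linalg.norm(r_ab - v))

def point_dipole_energy(r_ab, S_a, S_b, D=1.0):
    """Standard point-dipole interaction, the d -> 0 limit of Eq. (2)."""
    r = np.linalg.norm(r_ab)
    rhat = r_ab / r
    return (D / r**3) * (S_a @ S_b - 3.0 * (S_a @ rhat) * (S_b @ rhat))

# parallel spins separated by one lattice constant along their common axis
r_ab = np.array([0.0, 1.0, 0.0])
S = np.array([0.0, 1.0, 0.0])
for d in (0.5, 0.1, 0.001):
    print(d, needle_energy(r_ab, S, S, d, d), point_dipole_energy(r_ab, S, S))
```

For this geometry the needle result is $-2\mathcal{D}/(1-d^{2})$, which tends to the point value $-2\mathcal{D}$ from below.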
We have assumed that the length of the nanoislands is $d_{\kappa}=d+\eta_{\kappa}\,$, where $\eta_{\kappa}$ is the length fluctuation with mean value $\langle\eta_{\kappa}\rangle=0$ and correlations $\langle\eta_{\alpha}\,\eta_{\beta}\rangle=C_{\alpha\beta}\,$. If the length fluctuations are independent random variables the cross-correlation is zero and $C_{\alpha\beta}=C\,\delta_{\alpha\beta}\,$.
Taking into account that $\mathcal{U}_{\alpha\beta}=\mathcal{U}_{\beta\alpha}\,$, the system total energy $E=\frac{1}{2}\sum_{\alpha\beta}\mathcal{U}_{\alpha\beta}$ can be expressed as
$$E=\frac{1}{2}\sum_{\alpha\neq f}\sum_{\beta\neq f}\mathcal{U}_{\alpha\beta}+\sum_{\alpha\neq f}\mathcal{U}_{\alpha f}\;,$$
(3)
where $f$ is the index of the spin to flip and wherein self-interactions are excluded, i.e. $\alpha\neq\beta$. The interaction energy after the spin ${\bf S}_{f}$ is flipped verifies
$$\mathcal{U}^{\prime}_{\alpha\beta}=\left\{\begin{array}{rl}\mathcal{U}_{\alpha\beta}&\mathrm{if}\;\;\alpha\neq f\;\mathrm{and}\;\beta\neq f\\ -\mathcal{U}_{\alpha\beta}&\mathrm{if}\;\;\alpha\neq f\;\mathrm{and}\;\beta=f\;.\end{array}\right.$$
(4)
Therefore, the change of the system’s total energy $\Delta E_{f}=E^{\prime}-E$, where $E^{\prime}$ is the total energy with ${\bf S}_{f}$ flipped, becomes
$$\Delta E_{f}=-2\sum_{\alpha\neq f}\mathcal{U}_{\alpha f}.$$
(5)
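Equations 4 and 5 make single spin-flips cheap to implement: flipping spin $f$ just changes the sign of row and column $f$ of the pair-energy matrix, and $\Delta E_{f}$ is a single column sum. A minimal Metropolis sweep exploiting this (a sketch with names of our own; the demo uses random symmetric couplings as a stand-in for Equation 2):

```python
import numpy as np

def metropolis_sweep(U, spins, t, rng):
    """One Metropolis sweep of single spin-flips at dimensionless temperature t.
    U[a, b] holds the current pair energies (symmetric, zero diagonal); by
    Eq. (4), flipping spin f changes the sign of row and column f, and by
    Eq. (5) the energy change is dE = -2 * sum_a U[a, f]."""
    N = len(spins)
    accepted = 0
    for _ in range(N):
        f = rng.integers(N)
        dE = -2.0 * U[:, f].sum()          # diagonal is zero: no self term
        if dE <= 0.0 or rng.random() < np.exp(-dE / t):
            spins[f] = -spins[f]
            U[f, :] = -U[f, :]
            U[:, f] = -U[:, f]
            accepted += 1
    return accepted / N

# demo with random symmetric couplings standing in for Eq. (2)
rng = np.random.default_rng(0)
N = 50
u = rng.normal(scale=0.1, size=(N, N))
U = np.triu(u, 1) + np.triu(u, 1).T        # symmetric, zero diagonal
spins = np.ones(N, dtype=int)
rate = metropolis_sweep(U, spins, t=1.0, rng=rng)
print(rate)
```

Updating the matrix in place keeps each flip at $O(N)$ cost instead of recomputing the full double sum of Equation 3.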
For square ice, in order to find the point $(d,h_{\mathrm{ice}}(d))$ where the condition $J_{1}=J_{2}$ is met, we have calculated the interaction energy between a pair of dipoles in a vertex. The energy $J_{1}^{\{\alpha,\beta\}}$ due to the interaction between nearest-neighbour dipoles $\{\alpha,\beta\}$ is obtained taking ${\bf r}_{\alpha\beta}=\bigl{(}\frac{a}{2},\frac{a}{2},h\bigr{)}$ , ${\bf S}_{\alpha}=(0,1,0)$ and ${\bf S}_{\beta}=(1,0,0)$ in Equation 2. Likewise, the energy $J_{2}^{\{\alpha,\beta\}}$ due to the interaction between next-nearest-neighbour dipoles $\{\alpha,\beta\}$ is obtained taking ${\bf r}_{\alpha\beta}=(0,a,0)$ and ${\bf S}_{\alpha}={\bf S}_{\beta}=(0,1,0)$ in Equation 2.
In our model, as the lengths are considered to fluctuate between islands, the vertex energies $J_{n}^{\{\alpha,\beta\}}$ ($n=1,2$) are also different, and the ice rule is a local property that is approximately fulfilled. To order zero in the length fluctuations, i.e. $J_{n}=J_{n}^{\{\alpha,\beta\}}\rfloor_{\eta_{\alpha}=\eta_{\beta}=0}$, the condition $J_{1}=J_{2}$ is fulfilled over the entire lattice.
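With the two geometries above, the condition $J_{1}=J_{2}$ can be solved for $h$ numerically. As a sketch in units $a=\mathcal{D}=1$ (the names needle_energy, J1, J2 and h_ice are ours), a bisection with nearly point-like needles reproduces $h_{\mathrm{ice}}\approx 0.419\,a$ of Ref. [13]:

```python
import numpy as np

def needle_energy(r_ab, S_a, S_b, d_a, d_b, D=1.0):
    """Needle-dipole interaction energy, Eq. (2), in units D = a = 1."""
    u = 0.5 * (d_b * S_b - d_a * S_a)
    v = 0.5 * (d_b * S_b + d_a * S_a)
    return (D / (d_a * d_b)) * (
        1.0 / np.linalg.norm(r_ab + u) + 1.0 / np.linalg.norm(r_ab - u)
        - 1.0 / np.linalg.norm(r_ab + v) - 1.0 / np.linalg.norm(r_ab - v))

def J1(h, d, a=1.0):
    """Nearest neighbours: r_ab = (a/2, a/2, h), perpendicular spins."""
    return needle_energy(np.array([a / 2, a / 2, h]),
                         np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]), d, d)

def J2(d, a=1.0):
    """Next-nearest neighbours: r_ab = (0, a, 0), parallel spins."""
    return needle_energy(np.array([0.0, a, 0.0]),
                         np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0]), d, d)

def h_ice(d, lo=0.0, hi=1.0, tol=1e-10):
    """Bisect J1(h) - J2 = 0; J1 grows with h while J2 is h-independent."""
    g = lambda h: J1(h, d) - J2(d)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(h_ice(1e-3))   # ~0.419 for nearly point-like dipoles, as in Ref. [13]
```

For point dipoles the same condition reduces to $3a^{2}/4r^{5}=2/a^{3}$ with $r^{2}=a^{2}/2+h^{2}$, which gives $h\approx 0.4188\,a$ analytically.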
We have performed Monte Carlo simulations with the Metropolis algorithm using Equations 2 and 5 over a square ASI of size $N=900$ spins, with periodic boundary conditions and a cut-off of $14a$ for the dipolar sum, and over a kagome ASI of $N=675$ spins, with a cut-off of $15a$.
In order to avoid the characteristic low-temperature freezing we also used the loop-move method [29, 30].
3 Results
3.1 Fluctuations in Square Ice
Having studied a system consisting of nanoislands with equal length and obtained a behaviour that is in agreement with previous results [14, 31], let us examine now what happens if they do not have the same length.
Each flawed island is fixed with a length $d_{\kappa}=d+\eta_{\kappa}\,$, where the length fluctuation $\eta_{\kappa}$ is a random number chosen according to the probability density $P(\eta_{\kappa})$ and the constant $d$ is the mean length. For weak disorder, $P(\eta_{\kappa})$ is a Gaussian with standard deviation $\sigma$ and zero mean; for strong disorder we use a uniform distribution, so as to avoid $d_{\kappa}>a$, with $\eta_{\kappa}$ uniformly selected from the interval $(-\Delta,\Delta)$.
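The two disorder prototypes can be sampled directly. A sketch (all lengths in units of $a$; the parameter names sigma and delta are ours, with delta chosen so that $d_{\kappa}<a$ holds as required):

```python
import numpy as np

rng = np.random.default_rng(1)

def island_lengths(N, d, kind, sigma=0.01, delta=0.1, a=1.0):
    """Flawed island lengths d_k = d + eta_k, with <eta_k> = 0."""
    if kind == "gaussian":                 # weak disorder
        eta = rng.normal(0.0, sigma * a, size=N)
    elif kind == "uniform":                # strong disorder, bounded support
        eta = rng.uniform(-delta * a, delta * a, size=N)
    else:
        raise ValueError(kind)
    return d + eta

weak = island_lengths(900, 0.702, "gaussian", sigma=0.01)
strong = island_lengths(900, 0.702, "uniform", delta=0.1)
print(weak.mean(), strong.min(), strong.max())   # mean ~ d; strong stays below a
```

Since the $\eta_{\kappa}$ are drawn independently, the cross-correlation vanishes and $C_{\alpha\beta}=C\,\delta_{\alpha\beta}$ as stated above.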
In Figure 3, we show the effect of different intensities of disorder.
We have studied the system by choosing the mean length $d=\langle d_{\kappa}\rangle$ and the gap distance $h$ between sublattices such that $J_{1}=J_{2}$ is verified. In order to study what happens nearby this condition, the geometrical parameters were set at $h=0.205\,a$ and $d_{-}=0.702\,a$ or $d_{+}=0.704\,a$.
In the absence of flaws, the former mean length yields an antiferromagnetic ground state with prevalence of $\mathrm{T}_{1}$ vertices, while the latter corresponds to a (locally) ferromagnetic ground state with predominance of $\mathrm{T}_{2}$ vertices.
In Figure 3, where plots (a) and (c) correspond to the vertex population for each configuration type and plots (b) and (d) show the corresponding specific heat, we present the simulation results for the system of nanoislands with the same mean length and several intensities of length fluctuation with Gaussian and uniform distributions.
Vertex population plots show that, independently of the design flaws, the high-temperature equilibrium state corresponds to a random state where each vertex type has a population related to its number of possible configurations, i.e. $\mathrm{T}_{1}$: $1/8$, $\mathrm{T}_{2}$: $1/4$, $\mathrm{T}_{3}$: $1/2$ and $\mathrm{T}_{4}$: $1/8$ (see Figure 2 (b)). Also, both plots show a transition at dimensionless temperature $t=a^{3}k_{B}T/\mathcal{D}=t_{1}^{*}\approx 4$ which is not affected by existing flaws in the system. Below this transition ($t<t_{1}^{*}$) the population of $\mathrm{T}_{3}$ vertices, which correspond to excitations, falls until it vanishes completely. The specific heat plots display a peak at this first transition, which confirms that the system behaviour is not affected by the flaws. In systems without length flaws or with weak length fluctuations, a second peak appears in the specific heat at lower temperature, $t=t_{2}^{*}\approx 0.2$. This peak reveals a second transition, whose behaviour is known in systems without flaws, accounting for meaningful variations in the populations of $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ vertices. For systems without flaws, below this second transition ($t<t_{2}^{*}$) the population of $\mathrm{T}_{2}$ or $\mathrm{T}_{1}$ vertices increases to $100\%$ as the temperature decreases (see Figure 3 (a) and (c), respectively).
When $d=d_{-}$ (see Figure 3 (a)) a crossover of $\mathrm{T}_{1}$-$\mathrm{T}_{2}$ populations is observed at $t=t_{2}^{*}$.
For $\sigma=0.01$, we have found that even though the $\mathrm{T}_{1}$-$\mathrm{T}_{2}$ crossover has not disappeared, in the ground state $\mathrm{T}_{1}$ population has increased at the expense of $\mathrm{T}_{2}$ population and the specific heat maximum has been significantly reduced. Similarly in Figure 3 (c), $\mathrm{T}_{2}$ population has increased at the expense of $\mathrm{T}_{1}$ population.
Then, we have established that in the case of weak fluctuations, which correspond to $\sigma\lessapprox 0.05$, for $t<t_{2}^{*}$ the $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ populations start to redistribute until the Ice-like configuration is restored. That is to say, each $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ configuration becomes equally likely, so $1/3$ of the vertex population is $\mathrm{T}_{1}$ and $2/3$ is $\mathrm{T}_{2}$ (see Figure 2 (b)). Furthermore, the second peak of the specific heat flattens.
However, by continuing to increase $\sigma$, a different behaviour appears. Regardless of whether the system is set as ferro- or antiferromagnetic, in the ground state $\mathrm{T}_{2}$ vertices prevail with a population larger than $2/3$.
Moreover, in the case of uniformly distributed fluctuations, $\mathrm{T}_{2}$ population reaches $77\pm 2\,\%$, while $\mathrm{T}_{1}$ population stays at $22\pm 2\,\%$.
However, this system’s behaviour corresponds neither to the random configuration nor to the Ice-like configuration. So, what leads to this effect? Why does strong disorder encourage the prevalence of $\mathrm{T}_{2}$ vertices?
These questions have led us to study if there is a relation between the type of vertex and the length of the islands set around such vertex and how strongly related they are as a function of the temperature and the fluctuation intensity.
Thus, we have defined the quantity $f(n,v)$ as the fraction of $\mathrm{T}_{n}$ vertices that have $v$ of their four islands in favour ($v=0,\dots,4$). That is to say, $\mathrm{T}_{1}$ vertices are favoured by islands with a length smaller than the mean value $d$, while $\mathrm{T}_{2}$ vertices are favoured by islands with a length greater than $d$. So, for example, $f(1,0)$ corresponds to the fraction of $\mathrm{T}_{1}$ vertices for which none of the four islands has a length smaller than $d$. By definition, $f(n,v)$ satisfies $\sum_{v}f(n,v)=1$.
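A sketch of how $f(n,v)$ can be accumulated from a list of vertex types and per-vertex counts of islands in favour (function and variable names are ours). When types and favours are independent, with favours binomially distributed, the fractions approach $\binom{4}{v}/16$, the random high-temperature values:

```python
import numpy as np

def f_fraction(vertex_types, favours):
    """f(n, v): fraction of T_n vertices with v of their 4 islands 'in favour'.
    vertex_types[i] in {1, ..., 4}; favours[i] in {0, ..., 4}."""
    f = np.zeros((5, 5))
    for n, v in zip(vertex_types, favours):
        f[n, v] += 1.0
    sums = f.sum(axis=1, keepdims=True)
    sums[sums == 0.0] = 1.0            # leave unpopulated types as zero rows
    return f / sums                    # each populated row sums to 1

# random-state check: favours ~ Binomial(4, 1/2), independent of the type
rng = np.random.default_rng(2)
M = 200000
types = rng.integers(1, 5, size=M)
fav = rng.binomial(4, 0.5, size=M)
f = f_fraction(types, fav)
print(np.round(f[1], 3))   # ~ [0.062, 0.25, 0.375, 0.25, 0.062]
```

In the simulation, the interesting signal is the departure of $f(n,v)$ from these binomial values as the temperature is lowered.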
In Figure 4, we show the results achieved for different $\sigma>0$.
High temperature behaviour corresponds to a random disposition of the vertices; therefore the fractions are given by the number of possible configurations: $f(n,0)=1/16$, $f(n,1)=1/4$, $f(n,2)=3/8$, $f(n,3)=1/4$ and $f(n,4)=1/16$. However, as the temperature is lowered the behaviour changes.
For $\sigma=0.01$, the fraction $f(n,0)$ decreases to zero as none of the islands favours $\mathrm{T}_{n}$ vertices. In addition, $f(n,1)$ decreases while $f(n,2)$ remains constant.
In contrast, if $v=3$ or $v=4$ the fraction of vertices rises. For this intensity of length fluctuation the results are the same for both $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$, but for $\sigma\geq 0.05$ the curves split into two different branches according to the type of vertex, showing an incipient asymmetry in the nature of the system. Also, the spacing between branches broadens by increasing the fluctuation intensity.
While $\mathrm{T}_{1}$ vertices appear more likely if $v>2$, $\mathrm{T}_{2}$ vertices are more permissive and also appear for $v=2$. Even when there is only one island in favour, a bigger fraction of $\mathrm{T}_{2}$ vertices accept this situation than $\mathrm{T}_{1}$ vertices.
Considering these results, we can say that a strong spatial correlation, stressed by the fluctuations, exists: $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ vertices arrange themselves strongly induced by the local interaction with the islands around the vertex in question.
On one hand, weak disorder has a global effect, as the system displays a mean-field behaviour, which is why the Ice-like configuration is the ground state. On the other hand, strong fluctuations make the vertex type a direct result of short-range interactions, a local effect.
However, where does this asymmetry come from? We compare now the energies $E_{1}$ and $E_{2}$ contained in a $\mathrm{T}_{1}$ or a $\mathrm{T}_{2}$ vertex, respectively. Figure 5 (b) shows these energies for a vertex surrounded by four islands of the same length $d$. As discussed previously, when $d$ is smaller than the $E_{1}$-$E_{2}$ crossover length $d^{*}$ the ground state corresponds to the antiferromagnetic state, and when $d$ is greater it corresponds to the ferromagnetic state. However, if one of the four islands has a different length, the energy contained in the vertex changes. In Figure 5 (a) one of the islands has a length $d_{4}=0.9$, and as a result $d^{*}$ moves leftwards widening the ferromagnetic zone. This implies that the other three islands should be even smaller than they would be in the (b) case to maintain a $\mathrm{T}_{1}$ vertex. Similarly, in Figure 5 (c), one of the lengths is fixed in $d_{4}=0.5$ and $d^{*}$ moves rightwards widening the antiferromagnetic zone.
However, for a given fluctuation intensity the crossover behaviour is asymmetric and the greater the intensity is, the more evident this effect becomes.
Thus, the energy of the system allows the prevalence of $\mathrm{T}_{2}$ vertices in the ground state when strong flaws are introduced, which, combined with the strong spatial correlation that reduces the interaction to short range, results in the observed effect.
Additionally, we have also found a remaining presence of $\mathrm{T}_{3}$ vertices at any temperature. While for $\sigma<0.05$ the low-temperature $\mathrm{T}_{3}$ population sticks at $0\pm 0\%$, for $\sigma=0.1$ we have found a $\mathrm{T}_{3}$ population of $0.3\pm 0.2\,\%$. For the uniform distribution, $0.7\pm 0.3\,\%$ of excitations, driven by the strong fluctuations, persists even in the ground state.
In order to determine whether this effect is a system freezing issue or not,
we studied the ratio of acceptance of spin-flips and loop-moves. In Figure 6, we show the acceptance for different fluctuation intensities.
The swapping of vertices $\mathrm{T}_{1}$ and $\mathrm{T}_{2}$ introduces an intermediate step with vertex $\mathrm{T}_{3}$, i.e. the creation of an excitation. Thus, the spin-flip method freezes at $t\approx t_{1}^{*}\,$.
The loop-moves are not accepted at first, as the population of $\mathrm{T}_{3}$ vertices is still too large. Then, the loop-moves develop until the ground state is reached and there is no need to continue changing the vertex type. While the spin-flip curves are not significantly affected by the fluctuation intensity, the loop-move curve flattens and moves rightward as the intensity increases.
At low temperature, the $\mathrm{T}_{3}$ population is about $0.7\%$ in the uniform-fluctuation case, an amount too small to justify the reduction of the loop-move acceptance from $40\%$ to $0.1\%$. We also show the acceptance rate, using a uniform distribution of fluctuations, after different numbers of loop-move steps. Note that the curves are stable even when the number of steps is increased by an order of magnitude. Thus, the observed behaviour originates in the physics of the simulated system rather than in a freezing artifact.
3.2 Fluctuations in Kagome Ice
Similarly, in kagome ice we have studied the thermal behaviour of the vertex population for different length-fluctuation intensities.
In this case, $J_{1}=J_{2}=J$ as the distance between each possible pair of islands set around a given vertex is the same. Thus, there is no need to include the gap distance $h$.
Nevertheless, by studying the energy of this system, it can be observed that
depending on the values of $J$ and $d$ the system presents two different ground states.
A three legged vertex can have $2^{3}$ different configurations, which in turn can be topologically separated in two types of vertices depending on the net charge $q$. Namely, $\mathrm{T}_{1}$ are the vertices with $|q|=1$ while $\mathrm{T}_{3}$ are the ones with $|q|=3$.
Due to the odd coordination number, and unlike square ice, this kind of system always has monopoles present.
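This can be checked by enumerating the $2^{3}$ configurations; a minimal sketch (our convention: $\sigma=+1$ when a spin points into the three-legged vertex):

```python
from itertools import product

# the net charge q = sum(sigma) of three Ising spins is always odd,
# so every kagome vertex carries a monopole charge |q| = 1 or |q| = 3
counts = {1: 0, 3: 0}
for sigma in product((+1, -1), repeat=3):
    counts[abs(sum(sigma))] += 1
print(counts)   # {1: 6, 3: 2}: random-state probabilities 3/4 (T1) and 1/4 (T3)
```

The multiplicities 6 and 2 out of 8 give the $3/4$ and $1/4$ random-state probabilities of $\mathrm{T}_{1}$ and $\mathrm{T}_{3}$ vertices quoted below for the paramagnetic phase.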
For $J=0$ it has been demonstrated in previous studies [14] that the system passes through four different phases as the temperature is lowered. First, a high temperature behaviour that corresponds to a paramagnetic phase (P) where both $\mathrm{T}_{1}$ and $\mathrm{T}_{3}$ vertices are present with probability $3/4$ and $1/4$, respectively.
Then, there is a first transition into a $\mathrm{T}_{1}$ gas called K$1$, where $q=\pm 1$. Later on, the system rearranges into a charge-ordered phase K$2$, where $q=+1$ and $q=-1$ vertices are positioned in a NaCl-like structure. Finally, the system transitions into the final ordered state, where the $\mathrm{T}_{1}$ vertices also organize magnetically.
Also, a model of “dumbbells” [22], which means $d=0$, was studied for $J>0$ and $J<0$, where the authors show how varying $J$ affects the three peaks corresponding to the transitions described above.
Here, we have extended these results to the complete parameter space $(J,d)$ and we have used it as a tool to induce a similar situation to the one studied for square ice.
Thus, we have fixed the parameters $J=-6\,\mathcal{D}$ and $d_{-}=0.792\,a$ or $d_{+}=0.794\,a$, so as to obtain a condition close to $E_{1}=E_{3}$.
In Figure 7 (a) and (c) we show the thermal behaviour of the vertex population for $d=d_{\pm}$ which corresponds to $E_{3}\gtrless E_{1}$, respectively.
In either case, high temperature leads to the paramagnetic (P) state. By decreasing the temperature, the first transition, into the $\mathrm{T}_{1}$ gas, takes place;
for this set of parameters, however, it appears only as a shoulder in the vertex-population curve, as it overlaps with the second transition into the K$2$ structure, and so for $d=d_{-}$ the $\mathrm{T}_{1}$ population does not fully reach $100\%$.
In the absence of fluctuations, if $d=d_{-}$ the K$2$ arrangement and the ground state (GS) are composed of $\mathrm{T}_{3}$ vertices, while if $d=d_{+}$ they are composed of $\mathrm{T}_{1}$ vertices.
By increasing the intensity of length fluctuations, we have found that the ground state starts to change.
For $\sigma=0.02$, neither the $\mathrm{T}_{1}$ nor the $\mathrm{T}_{3}$ population reaches $100\%$.
For $\sigma\gtrsim 0.05$, the low-temperature behaviour turns into a $40\%$ $\mathrm{T}_{1}$ - $60\%$ $\mathrm{T}_{3}$ mixed state.
The error in the final population is lower than for $\sigma=0.02$, showing greater stability.
The P $\rightarrow$ K$1$ transition shoulder moves rightward until the transition disappears completely, turning into a P $\rightarrow$ mixed-state transition. This can also be concluded from the examination of the specific heat.
Figures 7 (b) and (d) exhibit the corresponding specific-heat curves, where we show how the K$1\rightarrow$ K$2$ transition peak disappears when fluctuations reach $\sigma\gtrsim 0.02$, while the peak corresponding to the P $\rightarrow$ K$1$ transition transforms into the P $\rightarrow$ mixed-state one, shifting rightward and decreasing in height as the fluctuation intensity increases.
In particular, for $d=d_{+}$ the K$2\rightarrow$ GS transition peak disappears from the specific heat.
To sum up, sufficiently strong length fluctuations remove K$1$, K$2$ and the ordered state, replacing them with the $40$-$60\%$ mixed state in both the $d=d_{-}$ and $d=d_{+}$ cases.
Once again, this final state induced by length fluctuations does not correspond to a high-temperature random state but to a different behaviour driven by asymmetry.
We have performed the same analysis as for square ice, calculating the fraction $f(n,v)$ of vertices with $v$ islands in favour.
In Figure 8, we show $f(n,v)$ as a function of temperature $t$. The curves reveal the asymmetry, intensified by length fluctuations, that appears to explain the difference between the $\mathrm{T}_{1}$ and $\mathrm{T}_{3}$ populations.
Figure 8 (b) shows clearly that at the onset of mixed-state phase, characterized by intensity of length fluctuation $\sigma\approx 0.05$ at low temperatures, the vertex fractions verify the crossover properties $f(1,1)=f(3,3)$ and $f(1,3)=f(3,1)$, and $f(1,v)=f(3,v)$ for $v=0,2$.
4 Conclusions and Perspectives
In summary, we have contributed to the question of how disorder impacts frustrated magnetic systems by analyzing, in particular, the effect of nanoislands with design flaws on the thermodynamical properties of the bilayer square and kagome ASI systems.
In this work, each island was considered as a dipolar needle and for square ASI the two layers were separated by a distance $h$.
So, as the system’s energy depends only on the geometrical parameters of the array, we have proposed selecting each island’s length as $d_{\kappa}=d+\eta_{\kappa}$, thereby introducing random fluctuations $\eta_{\kappa}$ in the nanoislands’ length according to a given probability distribution.
In the square ASI, the mean length $d$ and the parameter $h$ were chosen so that the nearest neighbour $J_{1}$ and the next nearest neighbour $J_{2}$ interactions were approximately equal.
For $h=0.205\,a$ and $d=0.702\,a$ ($J_{1}\lesssim J_{2}$) or $d=0.704\,a$ ($J_{1}\gtrsim J_{2}$),
a Gaussian distribution with standard deviation $\sigma$ has been used up to $\sigma=0.1$; beyond that, in order to prevent island overlap, we have used a uniform distribution on $(-\Delta,\Delta)$.
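The disorder model just described can be sketched as follows; the function and parameter names are illustrative and not taken from our simulation code, and lengths are expressed in units of the lattice parameter $a$, as in the text:

```python
import random

def island_lengths(d, n, sigma=None, delta=None, seed=None):
    """Draw n island lengths d_k = d + eta_k (lengths in units of a).

    eta_k ~ N(0, sigma) for Gaussian disorder, or eta_k uniform on
    (-delta, delta) for stronger disorder (used to avoid island overlap).
    """
    rng = random.Random(seed)
    if sigma is not None:
        etas = [rng.gauss(0.0, sigma) for _ in range(n)]
    elif delta is not None:
        etas = [rng.uniform(-delta, delta) for _ in range(n)]
    else:
        etas = [0.0] * n  # no disorder: all islands share the mean length
    return [d + eta for eta in etas]

# Gaussian disorder around the mean length d = 0.704 a with sigma = 0.02
lengths = island_lengths(d=0.704, n=500, sigma=0.02, seed=7)
```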
We have established $\sigma\lesssim 0.05$ as the weak-disorder range, where the high-temperature behaviour remains unchanged, while at low temperature the length fluctuations restore the Ice-like configuration.
Furthermore, considering that finding the exact point in the parameter space $(d,h_{ice})$ where $J_{1}=J_{2}$ is difficult, small flaws turn out to stabilize this state.
In contrast, we have shown that strong fluctuations correspond to $\sigma\gtrsim 0.05$. In this regime, we have found that ferromagnetic vertices prevail regardless of whether the mean value of $d$ corresponds to an antiferromagnetic ordering or not.
By analyzing the fraction $f(n,v)$ of vertices $\mathrm{T}_{n}$ with $v$ of the four islands favouring them, together with the energy contained in a $\mathrm{T}_{1}$ and a $\mathrm{T}_{2}$ vertex, we have found an asymmetry in the system driven by strong fluctuations, from which the observed results emerge.
Additionally, in the case of uniformly distributed fluctuations, we have found excitations present even at zero temperature.
Complementarily, we have studied kagome ice, inducing a situation similar to that of the square ice system by choosing parameters from the space $(J,d)$ such that $E_{1}\approx E_{3}$. At low temperatures and length-fluctuation intensities above $\sigma\approx 0.05$, the distinctive phases are removed and replaced by a $60\%$ $\mathrm{T}_{1}$ - $40\%$ $\mathrm{T}_{3}$ mixed ground state. Below this, for $\sigma\gtrsim 0.02$, the K$1\to$K$2$ transition peak (gas to ordered phase) disappears and the P$\to$K$1$ transition peak (paramagnetic to gas phase) transforms into a P$\to$mixed-state one. In particular, for mean length $d=d_{+}$, corresponding to energies $E_{1}<E_{3}$, the K$2\to$GS (ground state) transition peak vanishes. Below $\sigma\approx 0.02$, the four phases and the transitions between them, as described for systems without length fluctuations, are recovered.
Despite being two different systems with different coordination numbers, the effect of flawed islands is similar in both. In both cases a mixed state, rather than a random state or an Ice-like state, can be achieved as a result of an intrinsic asymmetry in the dipolar energy, which is intensified by the presence of flaws.
What would the interactions between these two kinds of monopoles be in kagome ASI? Could the population of excitations in square ice be increased at zero temperature? It would be interesting to study these issues in the future, taking these results as a starting point.
Acknowledgments
This work was partially supported by Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina, PIP 2014/16 N${}^{\circ}$ 112-201301-00629. RCB thanks C Rabini for her suggestions on the final manuscript.
References
[1]
Harris MJ, Bramwell ST, McMorrow DF, Zeiske T, Godfrey KW (1997) Geometrical
frustration in the ferromagnetic pyrochlore
$\mathrm{Ho}_{2}\mathrm{Ti}_{2}\mathrm{O}_{7}$.
Phys Rev Lett 79: 2554.
[2]
Nisoli C, Moessner R, Schiffer P (2013) Colloquium: Artificial spin ice:
Designing and imaging magnetic frustration.
Rev Mod Phys 85: 1473.
[3]
Bramwell ST, Gingras MJP, Holdsworth PCW (2013) Spin ice.
In: Diep HT, editor, Frustrated Spin Systems, World Publishing Co.
2nd edition.
[4]
Pauling L (1935) The structure and entropy of ice and of other crystals with
some randomness of atomic arrangement.
J Am Chem Soc 57: 2680.
[5]
Ramirez AP, Hayashi A, Cava RJ, Siddharthan RB, Shastry S (1999) Zero-point
entropy in ‘spin ice’.
Nature 399: 333.
[6]
Wang RF, Nisoli C, Freitas RS, J Li WM, Cooley BJ, et al. (2006) Artificial
âspin iceâ in a geometrically frustrated lattice of nanoscale
ferromagnetic islands.
Nature (London) 439: 303.
[7]
Bramwell ST, Giblin SR, Calder S, Aldus R, Prabhakaran D, et al. (2009)
Measurement of the charge and current of magnetic monopoles in spin ice.
Nature 461: 956.
[8]
Giblin SR, Bramwell ST, Holdsworth PCW, Prabhakaran D, Terry I (2011) Creation
and measurement of long-lived magnetic monopole currents in spin ice.
Nature Phys 7: 956.
[9]
Mól LA, Silva RL, Silva RC, Pereira AR, Moura-Melo WA, et al. (2009) Magnetic
monopole and string excitations in two-dimensional spin ice.
J Appl Phys 106: 063913.
[10]
Castelnovo C, Moessner R, Sondhi SL (2012) Spin ice, fractionalization, and
topological order.
Annu Rev Condens Matter Phys 3: 35.
[11]
Lieb EH (1967) Residual entropy of square ice.
Phys Rev 162: 162.
[12]
Möller G (2006) Dynamically reduced spaces in condensed matter physics:
Quantum Hall bilayers, dimensional reduction, and magnetic spin systems.
Ph.D. thesis, Université Paris Sud - Paris XI.
HAL Id: tel-00121765.
[13]
Möller G, Moessner R (2006) Artificial square ice and related dipolar
nanoarrays.
Phys Rev Lett 96: 237202.
[14]
Möller G, Moessner R (2009) Magnetic multipole analysis of kagome and
artificial spin-ice dipolar arrays.
Phys Rev B 80: 140409(R).
[15]
Mól LAS, Pereira AR, Moura-Melo WA (2012) Extending spin ice concepts to
another geometry: The artificial triangular spin ice.
Phys Rev B 85: 184410.
[16]
Nisoli C, Wang R, Li J, McConville WF, Lammert PE, et al. (2007) Ground state
lost but degeneracy found: The effective thermodynamics of artificial spin
ice.
Phys Rev Lett 98: 217203.
[17]
Nisoli C, Li J, Ke X, Garand D, Schiffer P, et al. (2010) Effective temperature
in an interacting vertex system: Theory and experiment on artificial spin
ice.
Phys Rev Lett 105: 047205.
[18]
Zhang S, Li J, Bartell J, Ke X, Nisoli C, et al. (2011) Ignoring your
neighbors: Moment correlations dominated by indirect or distant interactions
in an ordered nanomagnet array.
Phys Rev Lett 107: 117204.
[19]
Rodrigues JH, Mól LAS, Moura-Melo WA, Pereira AR (2013) Efficient
demagnetization protocol for the artificial triangular spin ice.
Appl Phys Lett 103: 092403.
[20]
Arnalds UB, Farhan A, Chopdekar RV, Kapaklis V, Balan A, et al. (2012)
Thermalized ground state of artificial kagome spin ice building blocks.
Appl Phys Lett 101: 112404.
[21]
Schumann A, Sothmann B, Szary P, Zabel H (2010) Charge ordering of magnetic
dipoles in artificial honeycomb patterns.
Appl Phys Lett 97: 022509.
[22]
Chern GW, Mellado P, Tchernyshyov O (2011) Two-stage ordering of spins in
dipolar spin ice on the kagome lattice.
Phys Rev Lett 106: 207202.
[23]
Tanaka M, Saitoh E, Miyajima H, Yamaoka T, Iye Y (2006) Magnetic interactions
in a ferromagnetic honeycomb nanoscale network.
Phys Rev B 73: 052411.
[24]
Qi Y, Brintlinger T, Cumings J (2008) Direct observation of the ice rule in an
artificial kagome spin ice.
Phys Rev B 77: 094418.
[25]
Zhang S, Gilbert I, Nisoli C, Chern GW, Erickson MJ, et al. (2013) Crystallites
of magnetic charges in artificial spin ice.
Nature 500: 553.
[26]
Li J, Ke X, Zhang S, Garand D, Nisoli C, et al. (2010) Comparing artificial
frustrated magnets by tuning the symmetry of nanoscale permalloy arrays.
Phys Rev B 81: 092406.
[27]
Chern GW, Morrison MJ, Nisoli C (2013) Degeneracy and criticality from emergent
frustration in artificial spin ice.
Phys Rev Lett 111: 177201.
[28]
Chern GW, Mellado P (2016) Magnetic monopole polarons in artificial spin ices.
EPL 114: 37004.
[29]
Barkema GT, Newman MEJ (1998) Monte Carlo simulation of ice models.
Phys Rev E 57: 1155.
[30]
Melko RG, Gingras MJP (2004) Monte Carlo studies of the dipolar spin ice model.
Journal of Physics: Condensed Matter 16: R1277.
[31]
Thonig D, Reißaus S, Mertig I, Henk J (2014) Thermal string excitations in
artificial spin-ice square dipolar arrays.
Journal of Physics: Condensed Matter 26: 266006. |
Generating French virtual commuting network at municipality level
Maxime Lenormand
1 Cemagref, LISC, 24 avenue des Landais, 63172 Aubière, France
(maxime.lenormand, sylvie.huet)@cemagref.fr
Sylvie Huet
1 Cemagref, LISC, 24 avenue des Landais, 63172 Aubière, France
(maxime.lenormand, sylvie.huet)@cemagref.fr
Floriana Gargiulo
2 INED, 133 boulevard Davout, 75020 Paris, France
floriana.gargiulo@gmail.com
Abstract
We aim to generate virtual commuting networks in French rural regions in order to study the dynamics of their municipalities. Since we have to model small commuting flows between municipalities with a few hundred or a few thousand inhabitants, we opt for the stochastic model presented by Gargiulo et al. (2012). It reproduces the various possible complete networks using an iterative process that stochastically chooses a workplace in the region for each commuter living in a municipality of the region. The choice is made considering the job offers in each municipality of the region and the distance to all the possible destinations. This paper presents how to adapt and implement this model to generate commuting networks between municipalities for French regions. We address three different questions: How can a reliable virtual commuting network be generated for a region that is highly dependent on other regions for the satisfaction of its residents’ demand for employment? Which deterrence function is most suitable? How can the model be calibrated when detailed data are not available? We answer by proposing an extended geographical job-search base for commuters living in the municipalities, we compare two different deterrence functions, and we show that the parameter is a constant for networks linking French municipalities.
1 Introduction
Some rural areas in Europe have an increasing population while others continue to suffer from depopulation (Johansson and Rauhut, 2007; Champetier, 2000). This is what the European project PRIMA (PRototypical policy Impacts on Multifunctional Activities in rural municipalities, EU 7th Framework Research Programme, 2008-2011; https://prima.cemagref.fr/the-project) has been trying to understand by studying the dynamics of a commuting network of virtual rural municipalities through microsimulation (Huet et al., 2012). In this framework, as in many studies based on microsimulation or agent-based simulation, we need generation models capable of building reliable virtual commuting networks. Indeed, new economic theories assume that local positive dynamics can be explained by implicit geographical money transfers made by commuters or retired people (see for example Davezies (2009)). It is thus necessary to have virtual commuting networks of individuals for the microsimulation approaches increasingly used (Birkin and Wu, 2012; Parker et al., 2003; Bousquet and Page, 2004) to study economic and land-use dynamics.
We are interested in understanding the dynamics of French rural municipalities, 95% of which have at most 3000 inhabitants. This means most of the commuting flows we want to study are weak, with a spatial distribution largely determined by chance. That is why we opt for a stochastic model proposed recently by Gargiulo et al. (2012). Moreover, we want to consider the commuting network at different dates. Detailed data on flows between pairs of municipalities are available in France only for 1999. At the other dates, the only reliable data are aggregated: for each municipality, they describe how many people go to work outside it and how many come from outside to work in it, without detail about the various places of work or municipalities of residence. We therefore also choose the model of Gargiulo et al. (2012) for its ability to generate a population of individuals on a commuting network starting from such data. This model reproduces the complete network using an iterative process that stochastically chooses a workplace in the region for each commuter living in a municipality of the region. The choice is made considering the job offers in each municipality of the region and the distance to all the possible destinations. It differs from the classical generation models presented in Ortúzar and Willumsen (2011) in that it is a discrete choice model whose individual decision function is inspired by the gravity law model, which is not usually applied at the individual level (Haynes and Fotheringham, 1984; Ortúzar and Willumsen, 2011; Barthélemy, 2011). Moreover, the model ensures that for every municipality the virtual total numbers of commuters coming in and going out are the same as those in the data.
This paper presents how to adapt and implement this model to generate commuting networks between municipalities for French regions. This implementation has forced us to address three different questions: How can a reliable virtual commuting network be generated for a region highly dependent on other regions to satisfy the need for jobs of people living in its municipalities? Which deterrence function is most suitable? How can the model be calibrated when detailed data are not available?
A first problem we have to solve is that our French regions are not islands, as they are for example in De Montis et al. (2007, 2010). Indeed, some of the inhabitants, especially those living close to the border of the region, are likely to work in municipalities located outside the region of residence. This part, especially if it is significant, makes the generated network largely inaccurate if we assume that people living in the region also work in the region. One way to solve this problem is to generate the commuting network only for people living and working in the region. However, this means the modeller has to know the quantity and place of residence of individuals who live in the region but work outside it. Data giving this detail are very rare, and so is expertise that could provide this knowledge. We therefore address this issue by extending the geographical job-search base for commuters living in the municipalities to a sufficiently large number of municipalities located outside the region of residence. We compare the model without municipalities from outside (called without outside later in this paper) and the model with outside on 23 French regions, and draw conclusions about the quality of our solution.
The second problem relates to the form of the deterrence function, which governs the impact of distance on the choice of the place of work relative to the quantity of job offers. The initial work by Gargiulo et al. (2012) proposes a power law. However, Barthélemy (2011) notes that the form of the deterrence function varies a lot: sometimes it is inspired by an exponential function, as in Balcan et al. (2009), and sometimes by a power law, as in Viboud et al. (2006). To choose the most suitable deterrence function, we have compared the quality of the networks generated for 34 French regions with the exponential law on the one hand and the power law on the other. We show that we obtain better results with the exponential law.
The last, but not least, problem to solve is calibration. The generation model, like most currently used commuting-network generation models, has one parameter to calibrate. This parameter governs the impact of distance on the individual decision regarding the place of work, relative to the quantity of job offers. The only available distance we can use is the Euclidean distance. We have detailed data on the commuting network for the year 1999, which can be used for calibration, but this is not the case for earlier or more recent years. It may be possible to assume the parameter value does not change over time, but we know the transportation network can evolve greatly at the local level, reducing the time distance, while we cannot capture such a change with the Euclidean distance. The solution finally proved easy to manage: using 34 French regions, we show that every French region can be generated using a constant value of the parameter. We then assume that the parameter value is constant over time and space.
2 Material and methods
2.1 The French regions and the data from the French statistical office
A complete description of the regions from which the networks have been generated is provided in Table 4. These regions have been chosen for their diversity in terms of number of municipalities, number of commuters, and surface area. Some correspond to a region while others are closer to a county (called “département” in French).
The French statistical office ($INSEE$) collects information about where individuals live and where they work. From these collected data, the Maurice Halbwachs Center and $INSEE$ make the following data available to researchers:
•
for 1999, data on the number of individuals commuting from municipality $i$ to municipality $j$, for every pair of municipalities of a region;
•
for 1990 and 2006, the total number of commuters, the total number of job offers and the total number of resident workers for every municipality; these data allow us to compute, for each municipality, the number of commuters coming to work in it.
The Lambert coordinates of each municipality are easy to find on the internet. They allow us to compute the Euclidean distance between every pair of municipalities.
We start from these data sets for our implementation of the model presented in the next section.
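As a minimal illustration, the distance matrix used as model input can be computed from the municipality coordinates. This sketch assumes planar $(x,y)$ coordinates, as the Lambert projection provides; the names are ours, not the authors' code:

```python
import math

def distance_matrix(coords):
    """Euclidean distance between every pair of municipalities.

    coords: list of (x, y) Lambert coordinates, one per municipality.
    Returns D with D[i][j] = d_ij (and D[i][i] = 0).
    """
    n = len(coords)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        xi, yi = coords[i]
        for j in range(i + 1, n):
            xj, yj = coords[j]
            d = math.hypot(xi - xj, yi - yj)
            D[i][j] = D[j][i] = d  # the matrix is symmetric
    return D
```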
2.2 The model of Gargiulo et al. (2012)
Consider a region composed of $n$ municipalities. We can model the observed commuting network starting from the matrix $R\in\mathrm{M}_{n\times n}(\mathbb{N})$, where $R_{ij}$ is the number of commuters from municipality $i$ (in the region) to municipality $j$ (in the region). This matrix corresponds to the light grey origin-destination table presented in Table 1.
The inputs of the algorithm are:
•
$D=(d_{ij})_{1\leq i,j\leq n}$ the Euclidean distance matrix between municipalities
•
$I_{j}$ the number of in-commuters from the region to the municipality $j$ of the region, $1\leq j\leq n$ (i.e. the number of individuals living in the region in a municipality $i$ ($i\neq j$) and working in the municipality $j$).
•
$O_{i}$ the number of out-commuters from the municipality $i$ of the region to the region, $1\leq i\leq n$ (i.e. the number of individuals working in the region in a municipality $j$ ($j\neq i$) and living in the municipality $i$).
$I_{k}$ and $O_{k}$ can be respectively assimilated to the job offers for the employed of the region and the job demand of the employed of the region for the municipality $k$, $1\leq k\leq n$. The algorithm starts with:
$$I_{j}=\sum_{i=1}^{n}R_{ij}$$
(1)
and
$$O_{i}=\sum_{j=1}^{n}R_{ij}$$
(2)
The purpose of the model is to generate the light grey origin-destination subtable of the region described in Table 1. To do this, it generates the matrix $S\in\mathrm{M}_{n\times n}(\mathbb{N})$, where $S_{ij}$ is the number of commuters from municipality $i$ (in the region) to municipality $j$ (in the region). It is important to note that $S_{ij}=0$ if $i=j$.
The algorithm assigns to each individual a place of work, with a probability based on the distance from their place of residence to every possible place of work and the corresponding job offers. The number of in-commuters of municipality $j$ and the number of out-commuters of municipality $i$ decrease each time an individual living in $i$ is assigned municipality $j$ as workplace. The algorithm stops when all the out-commuters have a place of work. It is described in Algorithm 1 with $m=n$.
Gargiulo et al. (2012) use a deterrence function $f(d_{ij},\beta)$ with a power-law shape:
$$f(d_{ij},\beta)=d_{ij}^{-\beta}\quad 1\leq i,j\leq n\enspace.$$
(3)
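A minimal Python sketch of the assignment loop, under our reading of Algorithm 1; the names and structure are ours, and the original implementation may differ:

```python
import random

def generate_network(D, I, O, beta, seed=None):
    """Stochastic workplace assignment, a sketch of Gargiulo et al. (2012).

    D[i][j]: distance matrix; I[j]: job offers in municipality j;
    O[i]: out-commuters living in municipality i. Returns the flow matrix S.
    Deterrence is the power law f(d, beta) = d**(-beta) of Eq. (3).
    """
    rng = random.Random(seed)
    n, m = len(O), len(I)
    I = list(I)  # copy: job offers are consumed as commuters are placed
    S = [[0] * m for _ in range(n)]
    for i in range(n):
        for _ in range(O[i]):
            # workplace probability ~ remaining job offers times deterrence
            weights = [I[j] * D[i][j] ** (-beta) if j != i and I[j] > 0 else 0.0
                       for j in range(m)]
            j = rng.choices(range(m), weights=weights)[0]
            S[i][j] += 1
            I[j] -= 1  # one job offer in j is now taken
    return S
```

Swapping the power-law expression for `math.exp(-beta * D[i][j])` gives the exponential deterrence function of Eq. (7) discussed later.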
3 Statistical tools
This section presents the tools used to calibrate the model and to compare various implementation choices.
3.1 Calibration of the $\beta$ value.
We use the same method as Gargiulo et al. (2012) to calibrate the $\beta$ value: we calibrate $\beta$ so as to minimize the average Kolmogorov-Smirnov distance between the simulated commuting-distance distribution and the one built from the observed data.
For the basic model, we compute the commuting-distance distribution from the commuting distances of the individuals commuting within the region. For the model with the outside, we compute it from the commuting distances of the individuals commuting within the region and also to the outside.
As the model of Gargiulo et al. (2012) is stochastic, the final calibration value we consider is the average $\beta$ value over 10 replications of the generation process.
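The calibration loop can be sketched as follows; `generate` stands for any stochastic network generator such as Algorithm 1, and all names are illustrative rather than taken from the authors' code:

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Kolmogorov-Smirnov distance between two empirical distributions."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:  # the sup is attained at a jump point of either CDF
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

def commuting_distances(S, D):
    """Expand a flow matrix S into the list of individual commuting distances."""
    return [D[i][j] for i, row in enumerate(S)
            for j, flow in enumerate(row) for _ in range(flow)]

def calibrate_beta(generate, D, observed, betas, replications=10):
    """Pick the beta minimizing the average KS distance over replications.

    `generate` is any stochastic generator (beta, seed) -> flow matrix S;
    `observed` is the list of observed commuting distances.
    """
    def avg_ks(beta):
        return sum(ks_distance(commuting_distances(generate(beta, s), D),
                               observed)
                   for s in range(replications)) / replications
    return min(betas, key=avg_ks)
```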
3.2 An indicator to compare networks.
We need an indicator to compare the simulated commuting network with the observed one. Let $R\in\mathrm{M}_{n_{1}\times n_{2}}(\mathbb{N})$ be a commuting network, where $R_{ij}$ is the number of commuters from municipality $i$ to municipality $j$, and let $S\in\mathrm{M}_{n_{1}\times n_{2}}(\mathbb{N})$ be another commuting network over the same municipalities. We can compute the number of common commuters between $R$ and $S$ (Eq. 4) and the number of commuters in $R$ (Eq. 5):
$$NCC_{n_{1}\times n_{2}}(S,R)=\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}\min(S_{ij},R_{ij})\enspace$$
(4)
$$NC_{n_{1}\times n_{2}}(R)=\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}R_{ij}\enspace$$
(5)
From (Eq. 4) and (Eq. 5) we compute the Sørensen similarity index (Sørensen, 1948). This index makes sense here since it corresponds to the common part of commuters between $R$ and $S$; we therefore call it the common part of commuters ($CPC$) (Eq. 6):
$$CPC_{n_{1}\times n_{2}}(S,R)=\frac{2\,NCC_{n_{1}\times n_{2}}(S,R)}{NC_{n_{1}\times n_{2}}(R)+NC_{n_{1}\times n_{2}}(S)}\enspace$$
(6)
It has been chosen for its intuitive explanatory power: it is a similarity coefficient that gives the degree of likeness between two networks. It ranges from 0, when the two networks have no commuter flows in common, to 1, when all commuter flows are exactly identical in the two networks.
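Eqs. (4)-(6) translate directly into a few lines; the function name below is ours:

```python
def cpc(S, R):
    """Common part of commuters (Sorensen similarity) between two networks.

    CPC = 2 * sum_ij min(S_ij, R_ij) / (sum_ij S_ij + sum_ij R_ij),
    ranging from 0 (no common flows) to 1 (identical flow matrices).
    """
    ncc = sum(min(s, r) for s_row, r_row in zip(S, R)
              for s, r in zip(s_row, r_row))
    nc_s = sum(sum(row) for row in S)
    nc_r = sum(sum(row) for row in R)
    return 2.0 * ncc / (nc_s + nc_r)
```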
4 Generating French regions commuting network at municipality level
4.1 How to cope with regions that are not islands, or with the lack of detailed data?
A commuting network is defined by an origin-destination table (the light grey table in Table 2). At the regional level, this means we need to know, for each municipality of residence and each municipality of employment in the region, the flow of commuters going from one to the other. This kind of data is not always provided by statistical offices, and usually the datasets are aggregated: only the total numbers of out-commuters and in-commuters are available for each municipality (the dark grey row and column in Table 2). To apply the model and define the commuting network, unless we are dealing with a really isolated region (an island, for example, in which case the grey row and column in Table 2 would not exist), we would need a way to isolate from the total numbers of in- and out-commuters (the dark grey row and column in Table 2) the fraction strictly relating to the region (the light grey table in Table 2). This is actually not a simple task.
Moreover, even if we were able to isolate these parts, a problem would remain due to the border effect. Indeed, if we consider only the region, we risk making errors in the reconstruction of the network for municipalities close to the border of the region. The higher the proportion of individuals working outside the region, the larger the error.
To go further, we propose to change the inputs of the algorithm. Instead of considering only the regional municipalities as possible places of work, we also consider an outside of the region, representing the surroundings of the studied area. The following part describes how to take this outside into account in practice.
4.1.1 A job search base extended to the outside.
We implement the model, with and without an outside, to generate 23 French regions (see the 23 regions in Table 4). Their outside is composed of the set of municipalities of their neighbouring “départements”.
We consider the outside of the region to be composed of $m-n$ municipalities, where $n$ is the number of municipalities of the region. The inputs are the directly available aggregated data at the municipal level:
•
$D=(d_{ij})_{1\leq i\leq n\atop 1\leq j\leq m}$ the Euclidean distance matrix between the municipalities both in the same region and in the outside
•
$(I_{j})_{1\leq j\leq m}$ the total number of in-commuters of the municipality $j$ of the region and outside of it (i.e. the number of individuals working in the municipality $j$ of the region or the outside and living in another municipality).
•
$(O_{i})_{1\leq i\leq n}$ the total number of out-commuters of the municipality $i$ of the region only (i.e. the number of individuals living in the municipality $i$ of the region and working in another municipality).
The purpose of the algorithm with the introduction of the outside is to generate the origin-destination table (the light grey and grey subtables in Table 2). To do this, we use the procedure presented in Algorithm 1 to simulate Table 3. We can then easily obtain Table 2 by difference, using the total number of in-commuters $(I_{j})_{1\leq j\leq n}$, the total number of out-commuters $(O_{i})_{1\leq i\leq n}$ and the light grey table of Table 3.
We obtain a matricial representation of the origin-destination table presented in the light grey and grey subtables in Table 2: the simulated matrix $S\in\mathrm{M}_{(n+1)\times(n+1)}(\mathbb{N})$, where $S_{ij}$ is:
•
the number of commuters from the municipality $i$ (in the region) to the municipality $j$ (in the region) if $i,j\neq n+1$;
•
the number of commuters from the outside to the municipality $j$ (in the region) if $i=n+1$ and $j\neq n+1$;
•
the number of commuters from the municipality $i$ to the outside if $i\neq n+1$ and $j=n+1$.
4.1.2 Comparison of the two models: Assessing the impact of the outside.
We assess the impact of the outside by comparing the networks generated for 23 French regions with and without the outside. The generation is made at the municipality scale with a power-law deterrence function.
The inputs of the without-outside case are built from detailed data, while the inputs of the with-outside case are directly the aggregated data (the total municipal numbers of in- and out-commuters).
Both implementations are compared through their $CPC$ values for each region. We replicate the generation ten times for each region and compute our indicator on each replicate. In all the presented figures, the indicator is averaged over the 10 replications. The variation of the indicator over the replications is very low, at most $1.02\%$ of the average; consequently, it is not represented on the figures. Fig. 1 presents the common part of commuters $CPC_{n\times n}(S,R)$ between the simulated network $S$ and the observed network $R$ obtained with the regional job search base (squares) and with a job search base comprising the region and its outside (triangles). It is important to note that for the implementation without outside $S\in\mathrm{M}_{n\times n}(\mathbb{N})$, while for the implementation with outside $S\in\mathrm{M}_{(n+1)\times(n+1)}(\mathbb{N})$. In order to compare the two models we consider only the regional network (commuters from the region to the region). Indeed, in the without-outside case $NC_{n\times n}(S)=NC_{n\times n}(R)$, but this is not necessarily true for the with-outside case.
Fig. 1 shows that the two job search bases give results that are not really different. Thus, introducing the outside solves the problem linked to the lack of detailed data without degrading the quality of the simulated network. We have to keep in mind that the inputs of the with-outside case do not require detailed data, in contrast to the without-outside case.
4.2 Choosing a shape for the deterrence function
The second problem relates to the form of the deterrence function, which rules the impact of distance on the choice of the place of work relative to the quantity of job offers. The initial work of Gargiulo et al. (2012) proposes a power law. However, Barthélemy (2011) notes that the form of the deterrence function varies a lot across studies: it is sometimes exponential, as in Balcan et al. (2009), and sometimes a power law, as in Viboud et al. (2006). To choose the more suitable deterrence function, we compare the quality of the networks generated for 34 French regions with the with-outside model, on the one hand with the exponential law and on the other hand with the power law.
A deterrence function following an exponential law is introduced:
$$f(d_{ij},\beta)=e^{-\beta d_{ij}},\quad 1\leq i\leq n\mbox{ and }1\leq j\leq m\enspace.$$
(7)
To compare the two deterrence functions, we have generated the networks of 34 various French regions (see table 4 for details), replicating the generation ten times for each region. The networks were generated with a job search base for the algorithm considering the outside.
As an example, Fig. 2 shows that we obtain a better estimation of the Auvergne commuting distance distribution with the exponential law.
More systematically, Fig. 3 plots, for the two deterrence functions (exponential law and power law), the average over the replications of the common part of commuters $CPC_{(n+1)\times(n+1)}(S,R)$. It clearly shows that the average proportion of common commuters is always better with the exponential law, represented by the squares.
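To make the role of the deterrence function concrete, the following sketch computes the gravity-rule probability of choosing each workplace under the two laws of Eq. (7) and the power law. The helper `choice_probs`, the job counts, the distances in metres, and $\beta=C=1.94\cdot 10^{-4}$ are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def choice_probs(jobs, dist, beta, law="exp"):
    """Probability of choosing workplace j, proportional to s_j * f(d_ij)."""
    if law == "exp":
        w = jobs * np.exp(-beta * dist)       # exponential deterrence, Eq. (7)
    else:
        w = jobs * dist ** (-beta)            # power-law deterrence
    return w / w.sum()

jobs = np.array([100.0, 100.0, 100.0])        # equal job offers everywhere
dist = np.array([5e3, 20e3, 50e3])            # distances in metres (assumed unit)
p = choice_probs(jobs, dist, beta=1.94e-4)
print(np.round(p, 3))                          # nearby workplaces dominate
```

With equal job offers, the exponential law concentrates almost all probability on the nearest municipality at this $\beta$, which is the behaviour the calibration exploits.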
4.3 Calibrating the model for French regions
The last difficulty concerns the calibration process, which until now required detailed data to be accurate.
Fig. 4 shows the calibrated $\beta$ values for each of the $34$ French regions. These values vary weakly, from about $1.7\cdot 10^{-4}$ to $2.4\cdot 10^{-4}$, with an average value $C=1.94\cdot 10^{-4}$ corresponding to the dark line.
We therefore hypothesize that it is possible to calibrate the algorithm for the $34$ French regions directly, simply using a constant equal to $C$. To study the influence of this approximation on the common part of commuters, we have computed the $CPC$ with $C$ as the parameter value for the $34$ regions. We observe in Fig. 5 that the influence of the $\beta$ approximation on the $CPC$ is very weak. We note that the average $CPC$ obtained with $C$ is, for some regions, higher than the $CPC$ obtained with the region-specific calibrated $\beta$. This is possible because the common part of commuters is not the calibration criterion, so another $\beta$ value may give a better $CPC$.
We do not need to study the influence of the $\beta$ approximation on the calibration criterion. Indeed, from the studies made in Gargiulo et al. (2012), we know the $CPC$ and the calibration criterion are highly correlated: they have the same evolution in terms of $\beta$, and the $\beta$ value minimizing the Kolmogorov-Smirnov distance is very close to the one maximizing the $CPC$ (see figure 7 in Gargiulo et al. (2012), which illustrates this relation well). Then, as the $CPC$ values remain quasi-identical with $\beta=C$ or with the $\beta$ value obtained from the calibration process presented in 3.1, the quality of the approximation of the calibration criterion, i.e. of the commuting distance distribution, remains the same.
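The calibration criterion can be sketched as the Kolmogorov-Smirnov distance between the simulated and observed commuting-distance distributions. The synthetic exponential samples below are purely illustrative stand-ins for real distance data; only the values of $\beta$ echo the text:

```python
import numpy as np

def ks_distance(sim, obs):
    """Kolmogorov-Smirnov distance between two empirical distributions."""
    grid = np.sort(np.concatenate([sim, obs]))
    F_sim = np.searchsorted(np.sort(sim), grid, side="right") / len(sim)
    F_obs = np.searchsorted(np.sort(obs), grid, side="right") / len(obs)
    return np.abs(F_sim - F_obs).max()

rng = np.random.default_rng(1)
obs = rng.exponential(1 / 1.94e-4, size=1000)  # distances drawn with beta = C
sim = rng.exponential(1 / 2.4e-4, size=1000)   # a mis-calibrated beta
print(ks_distance(obs, obs))                   # identical samples give 0.0
print(ks_distance(sim, obs) > 0)               # mis-calibration is detected
```

Minimizing this distance over $\beta$ is what the calibration process of section 3.1 does; the point above is that $\beta=C$ already sits close to that minimum for every region.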
5 Discussion and conclusion
To study the dynamics of rural areas by microsimulation, we need virtual commuting networks linking individuals living in the municipalities of various French regions. As the studied scale is very fine, we have small flows and decided to opt for a stochastic generation algorithm. The one recently proposed by Gargiulo et al. (2012) is relevant for our problem. Starting from this model, we implement the commuting networks of 34 different French regions. The implementation work leads us to solve three practical problems.
The first problem we have to solve is that our French regions are not islands. Indeed, some of the inhabitants, especially those living close to the border of the region, are likely to work in municipalities located outside the region of residence. However, the classical approaches to generating commuting networks, including ours, consider only residents of the region working in the region, and the detailed data (or knowledge) allowing the modeller to remove people living in the region but working outside it are hard to obtain. We address this issue by extending the geographical job search base of commuters living in the municipalities to a sufficiently large number of municipalities located outside the region of residence. We compare the model without municipalities from outside and the model with outside on 23 French regions. Our solution proves relevant: it keeps the value of our quality indicator identical, does not require knowing about people who do not work in the region, and permits generating networks starting only from aggregated data.
The Gargiulo et al. (2012) model is based on the gravity law. The second problem thus relates to the deterrence function, which is modelled as a power law or an exponential law depending on the study. Moreover, as empirical studies comparing the generated networks to ”real” data are very rare (Barthélemy, 2011), the better shape is not known. To choose the more suitable one for our French regions, we have compared the quality of the networks generated for 34 regions, on the one hand with the exponential law and on the other hand with the power law. We obtained better results with the exponential law whatever the region, even though our 34 regions vary a lot in surface area, number of municipalities and number of commuters.
The last problem we solved relates to calibration. Applying the model with an extended job search base and an exponential deterrence function, we found that a constant parameter value equal to $1.94\cdot 10^{-4}$ generates the commuting network of any French administrative region well. However, we did not test this result for other countries having different types of administrative regions. The robustness of this result for commuting networks described at scales very different from the municipal one remains a question we want to address in the future.
References
Balcan et al. (2009)
Balcan, D., Colizza, V., Gonçalves, B., Hu, H., Ramasco, J. J., and
Vespignani, A. (2009).
Multiscale mobility networks and the spatial spreading of infectious
diseases.
Proceedings of the National Academy of Sciences of the United
States of America, 106(51):21484–21489.
Barabási and Albert (1999)
Barabási, A. and Albert, R. (1999).
Emergence of scaling in random networks.
Science, 286(5439):509–512.
Barrat et al. (2004)
Barrat, A., Barthélemy, M., Pastor-Satorras, R., and Vespignani, A. (2004).
The architecture of complex weighted networks.
Proceedings of the National Academy of Sciences of the United
States of America, 101(11):3747–3752.
Barrat et al. (2005)
Barrat, A., Barthélemy, M., and Vespignani, A. (2005).
The effects of spatial constraints on the evolution of weighted
complex networks.
Journal of Statistical Mechanics: Theory and Experiment,
(5):49–68.
Barthélemy (2011)
Barthélemy, M. (2011).
Spatial networks.
Physics Reports, 499:1–101.
Birkin and Wu (2012)
Birkin, M. and Wu, B. (2012).
A review of microsimulation and hybrid agent-based approaches.
In Heppenstall, A. J., Crooks, A. T., See, L. M., and Batty, M.,
editors, Agent-Based Models of Geographical Systems, pages 51–68.
Springer Netherlands.
Bousquet and Page (2004)
Bousquet, F. and Page, C. L. (2004).
Multi-agent simulations and ecosystem management: a review.
Ecological Modelling, 176(3–4):313–332.
Champetier (2000)
Champetier, Y. (2000).
The (re)population of rural areas.
Leader Magazine, Spring. Special Issue.
Davezies (2009)
Davezies, L. (2009).
L’économie locale ”résidentielle”.
Géographie Economie Société, 11(1):47–53.
De Montis et al. (2007)
De Montis, A., Barthélemy, M., Chessa, A., and Vespignani, A. (2007).
The structure of interurban traffic: A weighted network analysis.
Environment and Planning B: Planning and Design,
34(5):905–924.
De Montis et al. (2010)
De Montis, A., Chessa, A., Campagna, M., Caschili, S., and Deplano, G. (2010).
Modeling commuting systems through a complex network analysis: A
study of the Italian islands of Sardinia and Sicily.
The Journal of Transport and Land Use, 2(3):39–55.
Gargiulo et al. (2012)
Gargiulo, F., Lenormand, M., Huet, S., and Baqueiro Espinosa, O. (2012).
Commuting network: going to the bulk.
Journal of Artificial Societies and Social Simulation. 13 pages.
Gitlesen et al. (2010)
Gitlesen, J. P., Kleppe, G., Thorsen, I., and Ubøe, J. (2010).
An empirically based implementation and evaluation of a hierarchical
model for commuting flows.
Geographical Analysis, 42(3).
Haynes and Fotheringham (1984)
Haynes, K. E. and Fotheringham, A. S. (1984).
Gravity and spatial interaction models.
Sage Publications, Beverly Hills.
Hensen and Bongaerts (2009)
Hensen, M. and Bongaerts, D. (2009).
Delimitation and coherence of functional and administrative regions.
Regional Studies, 1:19–31.
Huet and Deffuant (2011)
Huet, S. and Deffuant, G. (2011).
Common framework for the microsimulation model in prima project.
Technical report, Cemagref LISC.
Huet et al. (2012)
Huet, S., Dumoulin, N., Deffuant, G., Gargiulo, F., Lenormand, M.,
Baqueiro Espinosa, O., and Ternès, S. (2012).
Micro-simulation model of municipality network in the auvergne case
study.
Technical report, PRIMA Project, IRSTEA(Cemagref) LISC.
Johansson and Rauhut (2007)
Johansson, M. and Rauhut, D. (2007).
The spatial effects of demographic trends and migration.
Technical report, European Observation Network for Territorial
Development and Cohesion (ESPON) - ESPON Project 1.1.4.
Konjar et al. (2010)
Konjar, M., Lisec, A., and Drobne, S. (2010).
Method for delineation of functional regions using data on commuters.
Guimarães, Portugal. 13th AGILE International Conference on
Geographic Information Science.
Lemercier and Rosental (2008)
Lemercier, C. and Rosental, P.-A. (2008).
Les migrations dans le nord de la France au XIXe siècle.
In Nouvelles approches, nouvelles techniques en analyse des
réseaux sociaux, Lille France.
Ortúzar and Willumsen (2011)
Ortúzar, J. and Willumsen, L. (2011).
Modelling Transport.
John Wiley and Sons Ltd, New York.
Parker et al. (2003)
Parker, D. C., Manson, S. M., Janssen, M. A., Hoffmann, M. J., and Deadman, P.
(2003).
Multi-agent systems for the simulation of land-use and land-cover
change: A review.
Annals of the Association of American Geographers,
93(2):314–337.
Pastor-Satorras and Vespignani (2004)
Pastor-Satorras, R. and Vespignani, A. (2004).
Evolution and Structure of the Internet: A Statistical Physics
Approach.
Cambridge University Press, New York, NY, USA.
Sørensen (1948)
Sørensen, T. (1948).
A method of establishing groups of equal amplitude in plant sociology
based on similarity of species and its application to analyses of the
vegetation on danish commons.
Biol. Skr., 5:1–34.
Stillwell and Duke-Williams (2007)
Stillwell, J. and Duke-Williams, O. (2007).
Understanding the 2001 UK census migration and commuting data: The
effect of small cell adjustment and problems of comparison with 1991.
Journal of the Royal Statistical Society, Series A: Statistics in
Society, 170(2):425–445.
Viboud et al. (2006)
Viboud, C., Bjørnstad, O. N., Smith, D. L., Simonsen, L., Miller, M. A., and
Grenfell, B. T. (2006).
Synchrony, waves, and spatial hierarchies in the spread of influenza.
Science, 312(5772):447–451. |
Quantum information processing:
The case of vanishing interaction energy
Miroljub Dugić${}^{1,*}$ and Milan M. Ćirković${}^{2,**}$
${}^{*}$Department of Physics, Faculty of Science
P. O. Box 60, 34000 Kragujevac, Serbia
${}^{**}$Astronomical Observatory, Volgina 7,
Belgrade, Serbia
${}^{1}$E-mail: dugic@knez.uis.kg.ac.yu
${}^{2}$E-mail: mcirkovic@aob.aob.bg.ac.yu
Abstract: We investigate the rate of operation of quantum
”black boxes” (”oracles”) and point out the possibility of
performing an operation by a quantum ”oracle” whose average energy
equals zero. This counterintuitive result not only presents a
generalization of the recent results of Margolus and Levitin, but
might also sharpen the conceptual distinction between the
”classical” and the ”quantum” information.
PACS: 03.67.Lx, 03.65.Ud, 03.65.Yz
1. Introduction: information-processing bounds
In the realm of computation, one of the central questions is ”what
limits do the laws of physics place on the power of computers?”
[1,2]. The question is of great relevance to a wide range of
scientific disciplines, from cosmology and nascent discipline of
physical eschatology [3-5] to biophysics and cognitive sciences
which study information processing in the conscious mind [6, 7].
One of the physical aspects of this question refers to the minimum
time needed for execution of the logical operations, i.e. to the
maximum rate of transformation of state of a physical system
implementing the operation. From the fundamental point of view,
this question tackles the yet-to-be-understood relation between
the energy (of a system implementing the computation) on the one,
and the concept of information, on the other side. Apart from
rather obvious practical interest stemming from the explosive
development of information technologies (expressed, for instance,
in the celebrated Moore’s law), this trend of merging physics and
information theory seems bound to offer new insights into the
traditional puzzles of physics. Specifically, answering the
question above might shed new light on the standard ”paradoxes”
of the quantum world [8-10].
Of special interest are the rates of the reversible
operations (i.e. of the reversible quantum state
transformations). To this end, the two bounds for the so-called
”orthogonal transformations (OT)” are known; by OT we mean a
transformation of a (initial) state $|\Psi_{i}\rangle$ to a
(final) state $|\Psi_{f}\rangle$, while $\langle\Psi_{i}|\Psi_{f}\rangle=0$. First, the minimum time needed for OT can be
characterized in terms of the spread in energy, $\Delta\hat{H}$,
of the system implementing the transformation [11-15]. However,
recently, Margolus and Levitin [16, 17] have extended this result
to show that a quantum system with average energy $\langle\hat{H}\rangle$ takes time at least $\tau=h/[4(\langle\hat{H}\rangle-E_{0})]$ to evolve to an orthogonal state, where $E_{0}$ is the
ground state energy. In a sense, the second bound is more
restrictive: a system with zero energy (i.e. in the ground
state) can never perform a computation. However, this implies
nothing about nonorthogonal evolution, which is still of
interest in quantum computation.
Actually, most of the efficient quantum algorithms [18-21] employ
the so-called quantum ”oracles” (quantum ”black boxes”) not
requiring orthogonality of the initial and the final states of the
composite quantum system ”input register + output register
($I+O$)” [22-24]. Rather, orthogonality of the final states of the
subsystems (e.g. of $O$) is required, thus emphasizing the
need for a new bound for the operation considered.
In this paper we show that the relative maximum of the rate of
operation of the quantum ”oracles” may occur at zero
average energy of interaction in the composite system $I+O$.
Actually, it appears that the rate of an operation cannot be
characterized in terms of the average energy of the composite
system as a whole. Rather, it can be characterized in terms of
the average energy of the interaction Hamiltonian. Interestingly
enough, the ground state energy $E_{0}$ plays no role here,
and the absolute value of the average energy of interaction
($|\langle\hat{H}_{\rm int}\rangle|$) plays the role
analogous to the role of the difference $\langle\hat{H}_{\rm int}\rangle-E_{0}$ in the considerations of OT. Physically, we
obtain: the lower the average energy, the higher the rate of
operation. This result is in obvious contradistinction with the
result of Margolus and Levitin [16, 17]—in terms of the
Margolus-Levitin theorem, our result would read: the lower the
difference $\langle\hat{H}_{\rm int}\rangle-E_{0}$, the higher the rate of (nonorthogonal) transformation. On the
other side, our result is not reducible to the previously obtained
bound characterized in terms of the spread in energy [11-15], thus
providing us with a new bound in the quantum information theory.
2. The quantum ”oracle” operation
We concern ourselves with the bounds characterizing the rate of
(or, equivalently, the minimum time needed for) the reversible transformations of a quantum system’s states.
Therefore, the bounds known for the irreversible transformations
are of no use here. Still, it is a plausible statement that the
information processing should be faster for a system with higher
(average) energy, even if—as it is the case in the classical
reversible information processing—the system does not dissipate
energy (e.g. [25]). This intuition of the classical information
theory is justified by the Margolus-Levitin bound [16, 17].
However, this bound refers to OT, and does not necessarily
apply to nonorthogonal evolution.
The typical nonorthogonal transformations in the quantum
computing theory are operations of the quantum ”oracles” employing ”quantum entanglement” [18-21, 23, 24]. Actually,
the operation considered is defined by the following state
transformation:
$$|\Psi_{i}\rangle_{IO}=\sum_{x}C_{x}|x\rangle_{I}\otimes|0\rangle_{O}\to|\Psi_{f}\rangle_{IO}=\sum_{x}C_{x}|x\rangle_{I}\otimes|f(x)\rangle_{O},$$
(1)
where $\{|x\rangle_{I}\}$ represents the
”computational basis” of the input register, while $|0\rangle_{O}$ represents an initial state of the output register; by
”$f$” we denote the oracle transformation.
The key point is that the transformation (1) does not
[18-21] require the orthogonality condition ${}_{IO}\langle\Psi_{i}|\Psi_{f}\rangle_{IO}=0$ to be fulfilled. Rather,
orthogonality for the subsystem’s states is required
[18-21]:
$${}_{O}\langle f(x)|f(x^{\prime})\rangle_{O}=0,\;x\neq x^{\prime}$$
(2)
for at least some pairs $(x,x^{\prime})$, which, in turn, is
neither a necessary nor a sufficient condition for the
orthogonality ${}_{IO}\langle\Psi_{i}|\Psi_{f}\rangle_{IO}=0$
to be fulfilled.
Physical implementation of the quantum oracles of the type Eq.
(1) is an open question of the quantum computation theory. However
(and in analogy with the quantum measurement and the decoherence
process [26-31]), it is well understood that the implementation
should rely on (at least indirect, or externally controlled) interaction in the system $I+O$ as presented by the following
equality:
$$|\Psi_{f}\rangle_{IO}\equiv\hat{U}(t)|\Psi_{i}\rangle_{IO}\equiv\hat{U}(t)\sum_{x}C_{x}|x\rangle_{I}\otimes|0\rangle_{O}=\sum_{x}C_{x}|x\rangle_{I}\otimes|f(x,t)\rangle_{O},$$
(3)
where $\hat{U}(t)$ represents the unitary operator of
evolution in time (Schrödinger equation) for the combined system
$I+O$; index $t$ represents an instant of time. Therefore, the
operation (1) requires the orthogonality:
$${}_{O}\langle f(x,t)|f(x^{\prime},t)\rangle_{O}=0,$$
(4)
which substitutes the equality (2).
So our task reads: using Eq. (4), we investigate
the minimum time needed to establish the entanglement
present on the r.h.s. of Eq. (1), i.e. of Eq. (3).
3. The optimal bound for the quantum oracle operation
In this Section we derive the optimal bound for the minimum time
needed for execution of the transformation (1), i.e. (3), as
distinguished by the expression (4). We consider the composite
system ”input register + output register ($I+O$)” defined by the
effective Hamiltonian:
$$\hat{H}=\hat{H}_{I}+\hat{H}_{O}+\hat{H}_{\rm int}$$
(5)
where the last term on the r.h.s. of (5) represents the
interaction Hamiltonian. For simplicity, we introduce the
following assumptions: (i) $\partial\hat{H}/\partial t=0$,
(ii) $[\hat{H}_{I},\hat{H}_{\rm int}]=0$, $[\hat{H}_{O},\hat{H}_{\rm int}]=0$, and (iii) $\hat{H}_{\rm int}=C\hat{A}_{I}\otimes\hat{B}_{O}$, where $\hat{A}_{I}$ and $\hat{B}_{O}$ represent unspecified
observables of the input and of the output register,
respectively, while the constant $C$ represents the coupling
constant. As elaborated in Appendix I, the simplifications
(i)-(iii) are not very restrictive. For instance, concerning point
(i)—widely used in the decoherence theory—one can naturally
relax this condition to account for the wide class of the time
dependent Hamiltonians; cf. Eq. (I.1) of Appendix I. Another way
of relaxing condition (i) is to assume sudden
switching of the interaction in the system on and off.
3.1 The bound derivation
Given the above simplifications (i)-(iii), the spectral form of the
unitary operator $\hat{U}(t)$ (cf. Eq. (3)) reads:
$$\hat{U}(t)=\sum_{x,i}\exp\{-\imath t(\epsilon_{x}+E_{i}+C\gamma_{xi})/\hbar\}\hat{P}_{Ix}\otimes\hat{\Pi}_{Oi}.$$
(6)
The quantities in Eq. (6) are defined by the following spectral
forms: $\hat{H}_{I}=\sum_{x}\epsilon_{x}\hat{P}_{Ix}$, $\hat{H}_{O}=\sum_{i}E_{i}\hat{\Pi}_{Oi}$, and $\hat{H}_{\rm int}=C\sum_{x,i}\gamma_{xi}\hat{P}_{Ix}\otimes\hat{\Pi}_{Oi}$; bearing in mind
that $\hat{A}_{I}=\sum_{x}a_{x}\hat{P}_{Ix}$ and $\hat{B}_{O}=\sum_{i}b_{i}\hat{\Pi}_{Oi}$, the eigenvalues $\gamma_{xi}=a_{x}b_{i}$.
From now on, we take the system’s zero of energy at the ground
state by the exchange $E_{xi}\to E_{xi}-{\bf E}_{\circ}$;
$E_{xi}\equiv\epsilon_{x}+E_{i}+C\gamma_{xi}$, ${\bf E}_{\circ}$ is the minimum energy of the composite system—which
Margolus and Levitin [16, 17], as well as Lloyd [1], have used.
Then one easily obtains for the output-register’s states:
$$|f(x,t)\rangle_{O}=\sum_{i}\exp\{-\imath t(\epsilon_{x}+E_{i}+C\gamma_{xi}-{\bf E}_{\circ})/\hbar\}\hat{\Pi}_{Oi}|0\rangle_{O}.$$
(7)
Substitution of Eq. (7) into Eq. (4) directly gives:
$$D_{xx^{\prime}}(t)\equiv{}_{O}\langle f(x,t)|f(x^{\prime},t)\rangle_{O}=\exp\{-\imath t(\epsilon_{x}-\epsilon_{x^{\prime}})/\hbar\}\times$$
$$\times\sum_{i}p_{i}\exp\{-\imath Ct(a_{x}-a_{x^{\prime}})b_{i}/\hbar\}=0,\quad\sum_{i}p_{i}=1,$$
(8)
where $p_{i}\equiv{}_{O}\langle 0|\hat{\Pi}_{Oi}|0\rangle_{O}$. The expression (8) represents the condition for
”orthogonal evolution” of subsystem’s ($O$’s) states bearing
explicit time dependence; the ground energy ${\bf E}_{\circ}$
does not appear in (8).
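Condition (8) can be checked in a minimal two-qubit toy model. The sketch below assumes $\hat{H}_{\rm int}=C\,\sigma_z\otimes\sigma_z$ and the initial product state $|+\rangle_{I}\otimes|+\rangle_{O}$ (so $p_{i}=1/2$); all parameter values are illustrative, not taken from the text:

```python
import numpy as np

hbar, C = 1.0, 1.0
sz = np.diag([1.0, -1.0])
H_int = C * np.kron(sz, sz)                  # H_int = C * A_I (x) B_O

def U(t):
    # H_int is diagonal here, so exp(-i t H_int / hbar) acts on the diagonal
    return np.diag(np.exp(-1j * t * np.diag(H_int) / hbar))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = U(np.pi * hbar / (4 * C)) @ np.kron(plus, plus)

# conditional output states |f(x,t)>_O for the input basis states x = 0, 1
f0, f1 = psi[:2] * np.sqrt(2), psi[2:] * np.sqrt(2)
print(abs(np.vdot(f0, f1)))                  # ~0: condition (8) is met
```

At $t=\pi\hbar/(4C)$ the overlap ${}_{O}\langle f(0,t)|f(1,t)\rangle_{O}=\cos(2Ct/\hbar)$ vanishes, so the entangling operation is complete in a finite time set entirely by the interaction.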
But this expression is already known from, e.g., the decoherence
theory [26-31]. Actually, one may write:
$$D_{xx^{\prime}}(t)=\exp\{-\imath t(\epsilon_{x}-\epsilon_{x^{\prime}})/\hbar\}z_{xx^{\prime}}(t),$$
(9)
where
$$z_{xx^{\prime}}(t)\equiv\sum_{i}p_{i}\exp\{-\imath Ct(a_{x}-a_{x^{\prime}})b_{i}/\hbar\}$$
(10)
represents the so-called ”correlation amplitude”,
which appears in the off-diagonal elements of the (sub)system’s
($O$’s) density matrix [26]:
$$\rho_{Oxx^{\prime}}(t)=C_{x}C_{x^{\prime}}^{\ast}z_{xx^{\prime}}(t).$$
So, we could make a direct application of the general results of
the decoherence theory. However, our aim is to estimate the
minimum time for which $D_{xx^{\prime}}(t)$ may approach zero, rather than
calling for the qualitative limit of the decoherence theory
[26]:
$$\lim_{t\to\infty}|z_{xx^{\prime}}(t)|=0,$$
(11)
or equivalently $\lim_{t\to\infty}z_{xx^{\prime}}(t)=0$.
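The decay expressed by Eq. (11) is easy to illustrate numerically. The sketch below evaluates the correlation amplitude of Eq. (10) for an illustrative spectrum $\{b_{i}\}$ and uniform populations $p_{i}$; every numerical value is an assumption made for the example:

```python
import numpy as np

hbar, C, da = 1.0, 1.0, 1.0                  # hbar, coupling, a_x - a_x'
rng = np.random.default_rng(0)
b = rng.uniform(-1.0, 1.0, size=200)         # eigenvalues b_i of B_O
p = np.full(200, 1 / 200)                    # populations p_i, sum_i p_i = 1

def z(t):
    # correlation amplitude of Eq. (10)
    return np.sum(p * np.exp(-1j * C * t * da * b / hbar))

print(abs(z(0.0)))                           # 1 at t = 0
print(abs(z(50.0)))                          # small: dephasing has set in
```

The many phases $Ct(a_{x}-a_{x'})b_{i}/\hbar$ spread out as $t$ grows, so $|z_{xx'}(t)|$ falls from unity toward the residual fluctuations of a finite spectrum.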
In order to obtain the more elaborate quantitative results,
we shall use the inequality $\cos x\geq 1-(2/\pi)(x+\sin x)$,
valid only for $x\geq 0$ [16, 17]. However, its use cannot
be straightforward here.
Namely, the exponent in the ”correlation amplitude” is
proportional to:
$$(a_{x}-a_{x^{\prime}})b_{i},$$
(12)
which need not be strictly positive. That is, for a
fixed term $a_{x}-a_{x^{\prime}}>0$, the expression (12) can be
either positive or negative, depending on the eigenvalues $b_{i}$.
For this reason, we will refer to the general case of eigenvalues
of the observable $\hat{B}_{O}$, $\{b_{i},-\beta_{j}\}$, where both
$b_{i},\beta_{j}>0,\forall{i,j}$.
In general, Eq. (10) reads:
$$z_{xx^{\prime}}(t)=z^{(1)}_{xx^{\prime}}(t)+z_{xx^{\prime}}^{(2)}(t),$$
(13a)
where
$$z_{xx^{\prime}}^{(1)}=\sum_{i}p_{i}\exp\{-\imath Ct(a_{x}-a_{x^{\prime}})b_{i}/\hbar\},$$
(13b)
$$z_{xx^{\prime}}^{(2)}=\sum_{j}p^{\prime}_{j}\exp\{\imath Ct(a_{x}-a_{x^{\prime}})\beta_{j}/\hbar\},$$
(13c)
while $\,\sum_{i}p_{i}+\sum_{j}p^{\prime}_{j}=1$. Since both
$(a_{x}-a_{x^{\prime}})b_{i}>0$, $(a_{x}-a_{x^{\prime}})\beta_{j}>0\;\;(\forall{i,j})$, one may apply the inequality mentioned above.
Relaxed equality (4)—or relaxed equality (11)—is equivalent to
${\rm Re}\,z_{xx^{\prime}}$ $\cong 0$ and ${\rm Im}\,z_{xx^{\prime}}\cong 0$. Now, from Eq. (13a-c) it directly follows:
$${\rm Re}\,z_{xx^{\prime}}=\sum_{i}p_{i}\cos[C(a_{x}-a_{x^{\prime}})b_{i}t/\hbar]$$
$$+\sum_{j}p^{\prime}_{j}\cos[C(a_{x}-a_{x^{\prime}})\beta_{j}t/\hbar],$$
(14)
which, after applying the above inequality gives:
$${\rm Re}\,z_{xx^{\prime}}>1-{4\over h}C(a_{x}-a_{x^{\prime}})(B_{1}+B_{2})t-{2\over\pi}{\rm Im}\,z_{xx^{\prime}}-$$
$$-{4\over\pi}\sum_{i}p_{i}\sin[C(a_{x}-a_{x^{\prime}})b_{i}t/\hbar],$$
(15)
where $B_{1}\equiv\sum_{i}p_{i}b_{i}$, and $B_{2}\equiv\sum_{j}p^{\prime}_{j}\beta_{j}$.
Since $|\sum_{i}p_{i}\sin[C(a_{x}-a_{x^{\prime}})b_{i}t/\hbar]|\leq\sum_{i}p_{i}\equiv\alpha<1,\;\forall{t}$, from Eqs. (11) and (15) it follows:
$$0\cong{\rm Re}\,z_{xx^{\prime}}+{2\over\pi}{\rm Im}\,z_{xx^{\prime}}>1-{4\alpha\over\pi}-{4\over h}C(a_{x}-a_{x^{\prime}})(B_{1}+B_{2})t.$$
(16)
From (16) it is obvious that the condition imposed by Eq. (4)
cannot be fulfilled in time intervals shorter than $\tau_{xx^{\prime}}$:
$$\tau_{xx^{\prime}}>{(1-4\alpha/\pi)h\over 4C(a_{x}-a_{x^{\prime}})(B_{1}+B_{2})}.$$
(17)
The expression is strictly positive for $\alpha<\pi/4$, which directly defines the optimal bound $\tau_{\rm ent}$ as:
$$\tau_{\rm ent}={\rm sup}\,\{\tau_{xx^{\prime}}\}.$$
(18)
The assumption $\alpha<\pi/4$ is not very restrictive.
Actually, above, we have supposed that neither $\sum_{i}p_{i}\cong 1$ nor $\sum_{j}p^{\prime}_{j}\cong 1$, and the former is automatically
satisfied by the condition $\alpha<\pi/4$.
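The bound of Eq. (17) is straightforward to evaluate. The sketch below uses illustrative values for $C$, $a_{x}-a_{x'}$, $\alpha$, $B_{1}$ and $B_{2}$ (all assumptions chosen for the example, not physical data):

```python
import math

h = 6.62607015e-34                 # Planck constant (J s)

def tau_bound(alpha, C, da, B1, B2):
    """Minimum-time bound of Eq. (17); positive only for alpha < pi/4."""
    return (1 - 4 * alpha / math.pi) * h / (4 * C * da * (B1 + B2))

tau = tau_bound(alpha=0.5, C=1e-3, da=1.0, B1=2.0, B2=2.0)
print(tau > 0)                                      # alpha = 0.5 < pi/4
# enlarging B1 + B2 (or C) lowers the bound, i.e. speeds up the operation
print(tau_bound(0.5, 1e-3, 1.0, 4.0, 4.0) < tau)
```

This makes the scaling explicit: the bound decreases inversely with both the coupling constant and the sum $B_{1}+B_{2}$, exactly the two handles discussed in section 3.2.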
3.2 Analysis of the results
The bound $\tau_{\rm ent}$ is obviously determined by the minimum
difference $a_{x}-a_{x^{\prime}}$. This difference is virtually
irrelevant (in the quantum computation models, it is typically of
the order of $\hbar$). So one may note that the bound in Eq. (18) may be operationally decreased by an increase in the
coupling constant $C$ and/or by the increase in the sum $B_{1}+B_{2}$. As to the former, for certain quantum ”hardware” [32, 33],
the coupling constant $C$ may be partially manipulated by
the experimenter. On the other side, similarly—as it directly
follows from the above definitions of $B_{1}$ and $B_{2}$—by the
choice of the initial state of the output register, one could
eventually increase the rate of the operation by increasing the
sum $B_{1}+B_{2}$.
Bearing in mind the obvious equality:
$$\langle\hat{H}_{\rm int}\rangle=C\langle\hat{A}_{I}\rangle\langle\hat{B}_{O}\rangle=C\langle\hat{A}_{I}\rangle(B_{1}-B_{2}),$$
(19)
one directly concludes that adding energy to the
composite system as a whole, does not necessarily increase the
rate of the operation considered. Rather, the rate of the
operation is determined by the absolute value of the average
energy of interaction, $|\langle\hat{H}_{\rm int}\rangle|$. For instance, if $B_{1}\neq 0$ while $B_{2}=0$ (or $B_{2}\neq 0,B_{1}=0$), from Eq. (19) it follows that the increase in
$B_{1}$ (or in $B_{2}$, and/or in the coupling constant $C$) coincides
(for $\langle\hat{A}_{I}\rangle\neq 0$) with the increase in
$|\langle\hat{H}_{\rm int}\rangle|$, as well as with the
decrease in the bound Eq. (18). This observation is in accordance
with the Margolus-Levitin bound [16, 17]: the increase in the
average energy (of interaction) gives rise to the increase in the
rate of the operation (still, without any restrictions posed by
the minimum energy of either the total, or the interaction
Hamiltonian). Therefore, the absolute value $|\langle\hat{H}_{\rm int}\rangle|$ plays, in our considerations, the
role analogous to the role of the difference $\langle\hat{H}_{\rm int}\rangle-E_{0}$ in the considerations of the
”orthogonal transformations”.
However, for the general initial state of the output register,
both $B_{1}\neq 0$ and $B_{2}\neq 0$. Then, e.g., for $B_{1}>B_{2}$:
$$B_{1}+B_{2}=B_{1}(1+\kappa)\leq 2B_{1},\;\kappa\leq 1,$$
(20)
which obviously determines the relative maximum
of the rate of the operation by the following equality:
$$B_{1}=B_{2},\;\kappa=1,$$
(21a)
which, in turn (for $\langle\hat{A}_{I}\rangle\neq 0)$,
is equivalent with:
$$\langle\hat{H}_{\rm int}\rangle=0.$$
(21b)
But this result is in obvious contradistinction with the
result of Margolus and Levitin [16, 17]. Actually, the expressions
(21a,b) imply that, apart from the concrete values of $B_{1}$ and
$B_{2}$, the relative maximum of the rate of the operation requires (mathematically: implies) the zero average energy
of interaction, $\langle\hat{H}_{\rm int}\rangle=0$—which
(as distinguished above) is analogous to the equality $\langle\hat{H}_{\rm int}\rangle-E_{0}=0$ for ”orthogonal
transformations.”
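Equations (19)-(21) can be checked numerically: for a fixed larger term $B_{1}$, enlarging $B_{2}$ up to $B_{2}=B_{1}$ lowers the bound of Eq. (17) while driving $\langle\hat{H}_{\rm int}\rangle=C\langle\hat{A}_{I}\rangle(B_{1}-B_{2})$ to zero. All numerical values below are illustrative assumptions:

```python
import math

h, C, da, alpha, A_mean = 1.0, 1.0, 1.0, 0.5, 1.0   # illustrative values

def tau(B1, B2):
    # the bound of Eq. (17) for fixed alpha, C, and a_x - a_x'
    return (1 - 4 * alpha / math.pi) * h / (4 * C * da * (B1 + B2))

def H_int_mean(B1, B2):
    return C * A_mean * (B1 - B2)        # Eq. (19)

B1 = 2.0
for B2 in (0.0, 1.0, 2.0):
    print(round(tau(B1, B2), 4), H_int_mean(B1, B2))
# the bound is smallest at B2 = B1, where <H_int> = 0 (Eqs. (21a,b))
```

The printout traces the counterintuitive conclusion directly: the fastest operation in this family coincides with vanishing average interaction energy.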
4. Discussion
Intuitively, the speed of change of a system’s state should be
directly proportional to the average energy of the system. This
intuition is directly justified for the quantum OT by the
Margolus-Levitin theorem [16, 17]. Naively, one would expect this
statement to be of relevance also for the nonorthogonal
evolution. Actually, in the course of the orthogonal evolution,
the system’s state ”passes” through a ”set” of nonorthogonal
states, thus making nonorthogonal evolution faster than the
orthogonal evolution itself. (At least this physical picture is
justified for ”realistic” interpretations of quantum mechanics,
like the dynamical-reduction or many-worlds theories.)
This intuition is obviously incorrect for the cases studied. In a
sense, the expressions (21) state the opposite: the lower the
difference $B_{1}-B_{2}$ (i.e. the lower the average energy of
interaction, $|\langle\hat{H}_{\rm int}\rangle|$), the
faster the operation considered. Therefore, our main result, Eq.
(21), is in obvious contradistinction with the conclusion drawn
from the Margolus-Levitin bound [16, 17]: the zero energy
quantum information processing is possible and, in the sense of
Eq. (21), is even preferable. From the operational
point of view, the bound $\tau_{\rm ent}$ can be decreased by
manipulating the interaction in the combined system $I+O$, as
well as by the proper local operations (e.g., the proper
state preparations increasing the sum $B_{1}+B_{2}$) performed on
the output register.
As can easily be shown, an increase in the sum $B_{1}+B_{2}$
coincides with an increase in the spread of $\hat{B}_{O}$, $\Delta\hat{B}_{O}$, i.e. with an increase in the spread $\Delta\hat{H}_{\rm int}$. This observation, however, cannot be interpreted as
suggesting that the bound in Eq. (17) reduces to the bound
characterized in terms of the spread in energy [11-15] (in the
case studied, $\Delta\hat{H}_{\rm int}$). Indeed, an increase in
the spread $\Delta\hat{H}_{\rm int}$ places no restriction on the average value $\langle\hat{H}_{\rm int}\rangle$. Therefore, albeit sharing a common
element with the previously obtained bound [11-15], the bound in
Eqs. (17) and (18) represents a new bound in quantum
information theory. (This bound is of interest also
for decoherence theory, but it does not provide the
magnitude of the "decoherence time" $\tau_{D}$. Actually, one may
write, in our notation, $\tau_{D}\propto(a_{x}-a_{x^{\prime}})^{-2}$, while, cf. Eq. (17), $\tau_{\rm ent}\propto(a_{x}-a_{x^{\prime}})^{-1}$, which indicates $\tau_{D}\gg\tau_{\rm ent}$. This relation is in accordance with the general
results of decoherence theory: entanglement formation
should precede the decoherence effect.)
From Eq. (17), one directly determines the absolute maximum of
the rate of the operation, i.e. the absolute minimum of the
r.h.s. of Eq. (17). Actually, for $\hat{B}_{O}$ bounded (which is
generally the case for quantum computation models), the
inequality $B_{\rm min}\leq B_{1}+B_{2}\leq B_{\rm max}$ determines
the absolute minimum of the r.h.s. of Eq. (17):
$${(1-4\alpha/\pi)h\over 4C(a_{x}-a_{x^{\prime}})B_{\rm max}},$$
(22)
where $B_{\rm max}$ ($B_{\rm min}$) is the maximum
(minimum) in the set $\{b_{i},\beta_{j}\}$. Interestingly enough,
the minimum of Eq. (22) is achievable also in the following special
case: if $B_{1}+B_{2}\equiv p_{1}b_{\rm max}+p^{\prime}_{1}\beta_{\rm max}$, while $p_{1}=p^{\prime}_{1}=1/2$ and $b_{\rm max}=\beta_{\rm max}\equiv B_{\rm max}$, one obtains, again, that $\langle\hat{H}_{\rm int}\rangle=0$; by $b_{\rm max}$ ($\beta_{\rm max}$) we
denote the maximum in the set $\{b_{i}\}$ ($\{\beta_{j}\}$).
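The compatibility of a vanishing average with a nonzero spread, on which the distinction from the spread-in-energy bounds [11-15] rests, can be checked directly in a toy example. The sketch below uses a hypothetical separable interaction $\hat{H}_{\rm int}=C\,\hat{A}_{I}\otimes\hat{B}_{O}$ with illustrative eigenvalues (none taken from the text):

```python
import numpy as np

# Toy check that <H_int> = 0 can coexist with a nonzero spread Delta H_int.
# Hypothetical separable interaction H_int = C * A_I (x) B_O; every
# numerical value below is illustrative, not taken from the paper.
C = 1.0
A_I = np.diag([2.0, 5.0])    # eigenvalues standing in for a_x, a_x'
B_O = np.diag([1.0, -1.0])   # eigenvalues b = +/- B_max, with B_max = 1
H_int = C * np.kron(A_I, B_O)

# Product state weighting both eigenvalues of B_O equally (p = 1/2 each),
# as in the special case discussed above.
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
psi = np.kron(plus, plus)

avg = float(psi @ H_int @ psi)                                  # <H_int>
spread = float(np.sqrt(psi @ H_int @ H_int @ psi - avg ** 2))   # Delta H_int

print(avg, spread)  # average 0.0, spread > 0
```

The zero average arises because the two eigenvalues of $\hat{B}_{O}$ enter with equal weight and opposite sign, while the spread stays finite.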
It cannot be overemphasized: zero (average) energy quantum
information processing is in principle possible. Moreover, the
condition $\langle\hat{H}_{\rm int}\rangle=0$ determines the
relative maximum of the rate of the operation considered. But this result
challenges our classical intuition, because it is commonly
believed that efficient information processing presumes an
"energy cost". In the classical domain, this was established in the
1960s by Brillouin [25], following the ground-breaking studies of
Szilard and others on the problem of Maxwell's demon. So, one may
wonder whether "saving energy" might ever allow efficient information
processing. Without ambition to give a definite answer
to this question, we want to stress: while the "energy cost"
in classical information processing (including the
quantum-mechanical "orthogonal evolution") is surely necessary,
this need not be the case with quantum information
processing, such as entanglement establishing. Actually,
"classical information" refers to orthogonal (mutually
distinguishable) states, and dealing exclusively with
orthogonal states is the basis of classical information
processing [23]. The nonorthogonal states (i.e. nonorthogonal
transformations) we are concerned with necessarily refer to
nonclassical information processing. So, without further
ado, we stress that Eq. (21) exhibits a peculiar aspect of
quantum information (here: of entanglement formation),
pointing to the necessity of its closer study.
The roles of the two registers ($I$ and $O$) are by definition
asymmetric, as obvious from Eqs. (1) and (3). This asymmetry is
apparent also in the bound given in Eq. (17), which is the reason
we do not discuss in detail the role of the average value $\langle\hat{A}_{I}\rangle$. In view of the considerations of Section
3, this discussion is straightforward and does not significantly
change the conclusions above.
Finally, applicability of the bound (17) for the general
purposes of the quantum computing theory is limited by the
defining expression Eq. (1). The bound in Eq. (17) is of no
use for the algorithms not employing quantum
entanglement. As such an example, we may consider Grover’s
algorithm [34], which does not employ quantum entanglement in its
oracle operation. As another example, we mention the so-called
"adiabatic quantum computation" (AQC) model [35, 36]. This new
computation model does not employ any "oracles" whatsoever.
Moreover, the AQC algorithms typically involve non-persistent
entanglement (as distinct from those in Eq. (1)) of states of
neighbor qubits (cf. Eq. (II.1) in Appendix II). Therefore, the
bound in Eq. (17) is of no direct use in AQC, and cannot be used
for analyzing this non-circuit model of quantum
computation (cf. Appendix II).
The work on application of Eq. (17) in optimizing entangling
circuits is in progress, and will be published elsewhere.
5. Conclusion
We show that zero average energy quantum information
processing is theoretically possible. Specifically, we show that
entanglement establishing in the course of operation of some
typical quantum oracles employed in quantum computation
algorithms singles out zero average energy of interaction
in the composite system "input register + output register". This
result challenges our classical intuition, from which a need
for an "energy cost" in information processing plausibly stems. To
this end, our result, which sets a new bound for nonorthogonal
evolution in quantum information processing, establishes a new
quantitative relation between the concept of information on the
one side and the physical concept of energy on the other,
a relation yet to be properly understood.
Literature:
[1] S. Lloyd, Nature 406, 1047 (2000).
[2] A. Galindo and M. A. Martin-Delgado, Rev. Mod. Phys. 74, 347 (2002).
[3] S. Lloyd, Phys. Rev. Lett. 88, 237901 (2002).
[4] F. J. Tipler, Int. J. Theor. Phys. 25, 617-661 (1986).
[5] F. C. Adams and G. Laughlin, Rev. Mod. Phys. 69, 337-372 (1997).
[6] C. H. Woo, Found. Phys. 11, 933 (1981).
[7] S. Hagan, S. R. Hameroff, and J. A. Tuszynski, Phys. Rev. E 65, 061901 (2002).
[8] B. d’Espagnat, "Conceptual Foundations of Quantum Mechanics" (Benjamin, Reading, MA, 1971).
[9] J. A. Wheeler and W. H. Zurek (eds.), "Quantum Theory and Measurement" (Princeton University Press, Princeton, 1982).
[10] Cvitanović et al. (eds.), "Quantum Chaos–Quantum Measurement" (Kluwer Academic Publishers, Dordrecht, 1992).
[11] S. Braunstein, C. Caves, and G. Milburn, Ann. Phys. 247, 135 (1996).
[12] L. Mandelstam and I. Tamm, J. Phys. (USSR) 9, 249 (1945).
[13] A. Peres, "Quantum Theory: Concepts and Methods" (Kluwer Academic Publishers, Hingham, MA, 1995).
[14] P. Pfeifer, Phys. Rev. Lett. 70, 3365 (1993).
[15] L. Vaidman, Am. J. Phys. 60, 182 (1992).
[16] N. Margolus and L. B. Levitin, in PhysComp96, T. Toffoli, M. Biafore, and J. Leao (eds.) (NECSI, Boston, 1996).
[17] N. Margolus and L. B. Levitin, Physica D 120, 188 (1998).
[18] D. R. Simon, SIAM J. Comput. 26, 1474 (1997).
[19] P. W. Shor, SIAM J. Comput. 26, 1484 (1997).
[20] P. W. Shor, "Introduction to Quantum Algorithms", e-print arXiv:quant-ph/0005003.
[21] M. Ohya and N. Masuda, Open Sys. & Information Dyn. 7, 33 (2000).
[22] A. M. Steane, Rep. Prog. Phys. 61, 117 (1998).
[23] M. Nielsen and I. Chuang, "Quantum Computation and Quantum Information" (Cambridge University Press, Cambridge, 2000).
[24] J. Preskill, in "Introduction to Quantum Computation and Information", H.-K. Lo, S. Popescu, and T. Spiller (eds.) (World Scientific, Singapore, 1998).
[25] L. Brillouin, "Science and Information Theory" (Academic Press, New York, 1962).
[26] W. H. Zurek, Phys. Rev. D 26, 1862 (1982).
[27] W. H. Zurek, Prog. Theor. Phys. 89, 281 (1993).
[28] D. Giulini, E. Joos, C. Kiefer, J. Kupsch, I.-O. Stamatescu, and H. D. Zeh, "Decoherence and the Appearance of a Classical World in Quantum Theory" (Springer, Berlin, 1996).
[29] M. Dugić, Physica Scripta 53, 9 (1996).
[30] M. Dugić, Physica Scripta 56, 560 (1997).
[31] W. H. Zurek, Phys. Today 48, 36 (1991).
[32] Fortschritte der Physik 48, Issue 9-11 (2000).
[33] Quantum Information & Computation 1, Special Issue (December 2001).
[34] L. Grover, Phys. Rev. Lett. 78, 325 (1997).
[35] E. Farhi et al., "Quantum Computation by Adiabatic Evolution", e-print arXiv:quant-ph/0001106.
[36] E. Farhi et al., Science 292, 472 (2001).
Appendix I
Relaxing the simplifications (i)-(iii) of Section 3 does not lead
to significant changes of our results. This can be seen by
employing the arguments of Dugić [29, 30], but for
completeness, we briefly outline the main points in this regard.
First, for a time-dependent Hamiltonian that is still a "nondemolition observable", $[\hat{H}(t),\hat{H}(t^{\prime})]=0$, the
Hamiltonian takes the spectral form [30]:
$$\hat{H}=\sum_{x,i}\gamma_{xi}(t)\hat{P}_{Ix}\otimes\hat{\Pi}_{Oi}.$$
(I.1)
This is a straightforward generalization of the cases
studied and covers a wide class of time-dependent Hamiltonians.
E.g., from (I.1) it easily follows that the term $\alpha_{xi}(t)=\int\limits_{0}^{t}\gamma_{xi}(t^{\prime})dt^{\prime}$ substitutes for the term
$\gamma_{xi}t$ in the exponent of expression (6). Needless
to say, this relaxes constraint (i) of Section 3 and makes
the link to realistic models of quantum "hardware".
To this end, it is worth emphasizing: in realistic models one
assumes actions performed on the qubits in order to design
the system dynamics. Interestingly enough, such actions usually
result in (effectively) time-independent model
Hamiltonians [23, 33]. Moreover, some time-dependent models allow
direct applicability of the above notion; e.g., for the
controlled Heisenberg interaction, $\hat{H}=J(t)\vec{S}_{1}\cdot\vec{S}_{2}$, the action reads: $\hat{U}(t)=\exp(-\imath K(t)\vec{S}_{1}\cdot\vec{S}_{2})$, where $K(t)\equiv\int\limits_{0}^{t}J(t^{\prime})dt^{\prime}$. As an illustration of the models not fitting (i), one
can consider NMR models which, in turn, are known to be of only
limited use in the large-scale quantum computation [23, 32, 33].
We conclude that the realistic models of the large-scale
quantum computation fit with the relaxed point (i) of our
considerations.
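The controlled-Heisenberg remark can be made concrete. The sketch below, with a purely hypothetical Gaussian pulse $J(t)$, integrates $K(t)=\int\limits_{0}^{t}J(t^{\prime})dt^{\prime}$ numerically and forms the action $\hat{U}=\exp(-\imath K\vec{S}_{1}\cdot\vec{S}_{2})$:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices; spin operators are S = sigma / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-qubit operator S1.S2, a 4x4 Hermitian matrix.
S1S2 = sum(np.kron(s / 2, s / 2) for s in (sx, sy, sz))

# Hypothetical coupling pulse J(t); the Gaussian shape is illustrative only.
t = np.linspace(0.0, 2.0, 2001)
Jt = np.exp(-(t - 1.0) ** 2)

# K = int_0^t J(t') dt', evaluated by the trapezoidal rule.
K = float(np.sum(0.5 * (Jt[1:] + Jt[:-1]) * np.diff(t)))

# The action U = exp(-i K S1.S2) is unitary even though H(t) is time
# dependent, because [H(t), H(t')] = 0 for all t, t'.
U = expm(-1j * K * S1S2)
assert np.allclose(U.conj().T @ U, np.eye(4))
```

The whole pulse history enters only through the single number $K(t)$, which is what makes the time-independent analysis directly applicable.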
Similarly, relaxing the exact compatibilities (cf. point (ii)
in Section 3) leads to approximate separability (i.e., in Eq.
(I.1) there appear terms of small norm), which does not change
the results concerning the "correlation amplitude" $z_{xx^{\prime}}(t)$
[26], and consequently those concerning $D_{xx^{\prime}}(t)$.
Finally, generalization of the form of the interaction
Hamiltonian (cf. point (iii) of Section 3) does not produce any
particular problems, as long as the Hamiltonian is of (at least
approximately) separable kind and is also a "nondemolition
observable". E.g., from the general form for $\hat{H}_{\rm int}$ [30], $\hat{H}_{\rm int}=\sum_{k}C_{k}\hat{A}_{Ik}\otimes\hat{B}_{Ok}$, one obtains the term $\sum_{k}C_{k}(a_{kx}-a_{kx^{\prime}})b_{ki}$ instead of the term of Eq. (12).
The results may change [30] if the Hamiltonian of
the composite system is not of the separable kind and/or is not a
"nondemolition observable"; for an example see Appendix II.
For completeness, we note: a composite-system observable is of
the separable kind if it can be diagonalized in a noncorrelated (tensor-product) orthonormal
basis of the Hilbert space of the composite system [30].
Appendix II
By "nonpersistent entanglement" we mean the states of a composite
system which can be written as:
$$|\Psi\rangle=\sum_{i}C_{it}|i_{t}\rangle|i_{t}\rangle,$$
(II.1)
i.e. states whose Schmidt (canonical) form is labeled
by an instant of time, $t$ (continuously varying with time). The
occurrence of such forms for AQC can be easily proved by the use
of the method developed in Ref. [30] applied to, e.g., Eq. (3.5)
of Ref. [35]. Needless to say, states of the form of Eq. (II.1) are
exactly what should be avoided in the situations described by Eq.
(1).
To this end, the problem addressed in Ref. [30] reads: "what
characteristics of the system Hamiltonian are required in order
to attain the persistent entanglement (cf. Eq. (1))?". The
answer is given by the points (i)-(iii) (but see Appendix I). In
other words, as long as conditions (i)-(iii) are fulfilled,
nonpersistent entanglement does not occur in the system. As a
corollary, having (i)-(iii) in mind, the nonpersistent
entanglement of AQC cannot be (at least not directly) addressed
within the present considerations.
Addressing Spatially Structured Interference in Causal Analysis Using Propensity Scores
Keith W. Zirkle
Marie-Abèle Bind
Jenise L. Swall
David C. Wheeler
Department of Biostatistics, Virginia Commonwealth University, Richmond, VA, USA
Department of Statistics, Harvard University, Cambridge, MA, USA
Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA, USA
Abstract
Environmental epidemiologists are increasingly interested in establishing causality between exposures and health outcomes. A popular model for causal inference is the Rubin Causal Model (RCM), which typically seeks to estimate the average difference in study units' potential outcomes. An important assumption under RCM is no interference; that is, the potential outcomes of one unit are not affected by the exposure status of other units. The no-interference assumption is violated if we expect spillover or diffusion of exposure effects based on units' proximity to other units, and several other causal estimands then arise. Air pollution epidemiology typically violates this assumption when we expect upwind events to affect downwind or nearby locations. This paper adapts causal assumptions from social network research to address interference and to allow estimation of both direct and spillover causal effects. We use propensity score-based methods to estimate these effects when considering the effects of the Environmental Protection Agency's 2005 nonattainment designations for particulate matter with aerodynamic diameter $<2.5\mu m$ (PM${}_{2.5}$) on lung cancer incidence using county-level data obtained from the Surveillance, Epidemiology, and End Results (SEER) Program. We compare these methods in a rigorous simulation study that considers spatially autocorrelated variables, interference, and missing confounders. We find that pruning and matching based on the propensity score produce the highest coverage probability of the true causal effects and a lower mean squared error. When applied to the research question, we found protective direct and spillover causal effects.
keywords:
Causal inference, interference, propensity scores, spillover effects, air pollution epidemiology, environmental exposure
Journal: Statistics in Medicine
Research reported in this publication was supported by the National Institute of Environmental Health Sciences of the National Institutes of Health under Award Number T32ES007334, by VCU Massey Cancer Center’s Cancer Prevention and Control 2018 Scholarship, by the Office of the Director, National Institute of Health under Award Number DP5OD021412, and by the John Harvard Distinguished Science Fellows Program, within the FAS Division of Science of Harvard University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
1 Introduction
Many public health studies aim to estimate causal relationships between some exposure or treatment and health-related outcomes; however, most epidemiologic studies are observational.
Observational data present a distinct, but well-studied, challenge to establishing causality. The ideal paradigm for causal analysis is randomization, which, on average, leads to similar treated and non-treated subjects, so that the treatment effect can be estimated without bias. In observational studies, the treatment or exposure is typically not randomized, and the study must be treated as a conditionally randomized experiment 1. Generally, the treatment assignment mechanism is characterized to emulate randomization and usually involves adjusting for multiple covariates known to affect the observed treatment assignment. In other words, the treatment assignment $Z$ should be independent of the potential outcomes $Y(Z)$ given covariates $X$. This is called ignorability and is expressed as
$$P(Z|Y(0),Y(1),X)=P(Z|X).$$
(1)
Rubin's Causal Model (RCM) is the most popular paradigm for defining causal effects 2. Several popular methods exist that utilize RCM, including balancing scores, outcome regression, and principal stratification. For every unit, the potential outcomes are considered, i.e. the unit's outcome under exposure to the treatment and the unit's outcome without exposure to the treatment. The difference in these potential outcomes is considered a causal effect, where all other things are held constant except for exposure to the treatment. RCM relies on several assumptions whose validity varies from study to study, including the stable unit treatment value assumption (SUTVA), first proposed by Cox in 1958 3, 4. SUTVA states that (i) there are not multiple forms of a treatment (called "consistency") and (ii) a unit's exposure or treatment does not affect the outcome of other units (called "no interference"). Until recently, causal inference methods either struggled with or introduced restrictive assumptions to handle data that may have interference.
We typically expect interference in studies when the outcome or exposure is related to other observations. Readily available examples of interference include infectious diseases 5, social networks 6, 7, educational programs 8, crime prevention strategies 9, and air pollution 10. This interference cannot be ignored without making erroneous inferences 11, 12. More often, the interference is more than a nuisance. Different causal estimands arise under interference, including a direct effect and an indirect (spillover) effect. This spillover effect between units or groups of units can be of research interest 13.
In the last decade, causal methods that handle interference have grown 8, 9, 10, 11.
Most methods make assumptions about the interference structure to allow for relevant estimands. These assumptions are not always appropriate for our application. Hong and Raudenbush proposed classifying unit-specific interference as low or high; they assumed potential outcomes are a function of a subject's treatment status and the interference status 8. Sobel proposed a partial interference assumption, where subjects are grouped into classes and no interference is assumed between subjects in different classes 12. Zigler et al. further extended Sobel's partial interference assumption into an assignment group interference assumption (AGIA), where locations within a regulated area do not affect locations in an unregulated area 10. They found AGIA may not hold if interference is expected to occur between exposed and control units. Random effects with spatial structure have also been used to account for interference 14, but may be insufficient for capturing spillover between subjects.
We motivate the methods presented in this paper by considering air pollution studies. Spatial interference exists in air pollution applications by air pollution's very nature. We expect upwind events to affect downwind locations and, thus, expect interference to be directional based on wind patterns. We expect to find a direct causal effect of air pollution (or air pollution regulation) in an area exposed directly to treatment (e.g. regulation) and may expect a spillover effect in downwind areas. Researchers are increasingly interested in these effects because they allow for more strategic initiatives and program implementations and may also provide stronger evidence for regulatory control 15.
In this paper, we propose methods for estimating air pollution interference by elucidating spatial relationships and adapting certain causal assumptions to a spatial setting where spillover of an exposure is expected. We characterize both a treatment assignment mechanism and an interference mechanism, and we adjust for confounders of both using propensity score modeling. Our assumptions differ from previous work in air pollution epidemiology, specifically Zigler et al., by allowing interference to occur between exposed and control units and also adjusting for confounding of the interference mechanism 10. To evaluate our approach, we compare the methods when applied to simulated datasets in which we introduce both spatially autocorrelated covariates and treatment assignment. We finally apply the methods to an air pollution dataset.
2 Methods
2.1 Notation and Causal Estimands
Consider a study population of $i=1,\ldots,N$ contiguous areal units at a certain time. Let $Y_{i}$ be a count outcome such as number of deaths per unit. Denote treatment $Z_{i}\in\{0,1\}$ where $Z_{i}=1$ if unit $i$ receives the treatment and $Z_{i}=0$ otherwise, and let $\mathbf{Z}$ denote the treatment allocation for all units. We illustrate these concepts in Figure 1 for a population of $N=4$ areas (panel A). In Figure 1, we shade treated areas and can express $\mathbf{Z}=\{0,1,0,1\}$ for the study area represented in panel A. Under no interference, each unit has two potential outcomes $Y_{i}(Z_{i})$ expressed as $[Y_{i}(1),Y_{i}(0)]$. We only ever observe one of these outcomes dependent on the observed value of $Z_{i}$. We define a causal effect as $Y_{i}(1)-Y_{i}(0)$. In Figure 1, this would be akin to finding the difference in outcomes for area $i=3$ in panels A and C.
With general interference, the potential outcomes become $Y_{i}(\mathbf{Z})=Y_{i}(Z_{i},\mathbf{Z}_{-i})$ where $Z_{i}$ is the individual treatment for unit $i$ and $\mathbf{Z}_{-i}$ is the treatment vector for all other units in the population. We may reasonably expect interference is limited to a spatial neighborhood structure. We denote this structure through an $N\times N$ network matrix $\mathbf{A}$ where element $A_{ij}>0$ if a relationship exists between units $i$ and $j$. We note that $A_{ij}$ may not equal $A_{ji}$ if the relationship is directional. For example, we expect air pollution interference to vary with or to be a function of wind direction so that pollution in unit $i$ blows downwind to unit $j$, but not vice versa.
Let $\mathbb{N}_{i}$ represent all units that may affect unit $i$, which we call the neighborhood of unit $i$. While units farther from unit $i$ may in principle affect it, allowing this is computationally intensive and challenging to interpret, so we restrict analysis to first-order (i.e. immediately adjacent) neighbors. Let $\mathbf{A}$ represent the network matrix for these neighbors, so that $A_{ij}>0$ if units $i$ and $j$ are first-order neighbors. Denote the size of the neighborhood of unit $i$ as $m_{i}=\sum_{j=1}^{N}A_{ij}=|\mathbb{N}_{i}|$.
In Figure 1, each area has three first-order neighbors: $\mathbb{N}_{i=1}$ contains areas 2, 3, and 4, $\mathbb{N}_{i=2}$ contains areas 1, 3, and 4, etc. We may express $\mathbf{A}$ as the $4\times 4$ matrix with ones off the diagonal and zeros on the diagonal, because every area is adjacent to every other area (and a unit is not its own neighbor).
For all the areas, $m_{i}=3$; that is, each area has three neighbors.
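As a sketch of this bookkeeping, the code below builds the network matrix consistent with the four-area example ($m_{i}=3$ for every unit) and tabulates each unit's treated neighbors under the panel-A allocation $\mathbf{Z}=\{0,1,0,1\}$:

```python
import numpy as np

# Network matrix A for the 4-area example of Figure 1: every area is a
# first-order neighbor of every other area, so A has ones off the
# diagonal and zeros on the diagonal.
N = 4
A = np.ones((N, N)) - np.eye(N)

# Neighborhood sizes m_i = sum_j A_ij.
m = A.sum(axis=1)
print(m)  # every area has 3 neighbors

# Treatment allocation Z = {0, 1, 0, 1}, as in panel A of Figure 1.
Z = np.array([0, 1, 0, 1])

# Number of treated first-order neighbors of each unit.
treated_neighbors = A @ Z
print(treated_neighbors)  # [2. 1. 2. 1.]
```

The vector `treated_neighbors` is the raw ingredient for the neighborhood function $f(\mathbf{Z}_{\mathbb{N}_{i}})$ introduced below.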
For interference, we are interested in the treatment allocation of $\mathbb{N}_{i}$, $\mathbf{Z}_{\mathbb{N}_{i}}=[z_{j}]_{j\in\mathbb{N}_{i}}\in\{0,1\}^{m_{i}}$. The set of treated units in neighborhood $\mathbb{N}_{i}$ is $\mathbb{N}_{i}^{\mathbf{z}}$, and we consider interference to be a function of $\mathbb{N}_{i}^{\mathbf{z}}$. We will call this a functional neighborhood interference assumption, which is similar to assumptions proposed by several others 9, 16, 8, 17, 18. We define our potential outcomes as
$$Y_{i}(Z_{i},f(\mathbf{Z}_{\mathbb{N}_{i}})).$$
(Definition 1)
In other words, a unit-level potential outcome depends both on the unit's own treatment assignment and on some function of the treatment assigned to its first-order neighbors. In Figure 1, we may write the potential outcome for area 3 in panel A as $Y(Z=0,f(\{0,1,1\}))$, where $\{0,1,1\}$ represents the treatment allocation of areas 1, 2, and 4 (the neighbors of area 3).
Until this point, we have assumed that interference occurs between units $i$ and $j$ based on proximity, e.g. neighborhood structure. We have also suggested that the relationship can be directional, e.g. based on wind direction in air pollution studies. From this, we argue that there exist both a treatment assignment mechanism and an interference mechanism, the latter characterized by the network $\mathbf{A}$. Defining an interference mechanism is not well established in the literature and will be contextual to the data and the expected "spillover" of a treatment or exposure. The spatial neighborhood structure, as well as the observed outcomes, may inform this network 6.
Using typical causal-inference estimand language, where effects are estimated either from randomized experiment data or under the causal assumptions necessary for observational data, we define a direct causal effect as:
$$DE_{i}(\mathbf{z})=Y_{i}(Z_{i}=1,f(\mathbf{Z}_{\mathbb{N}_{i}}=\mathbf{z}))-Y_{i}(Z_{i}=0,f(\mathbf{Z}_{\mathbb{N}_{i}}=\mathbf{z}))$$
(2)
where $\mathbf{z}$ represents an observed treatment vector. Kao termed this a "primary" treatment causal effect in social influence networks 6. In Figure 1, we would compare the outcomes for area 3 in panels A and C to estimate an individual direct causal effect. The average direct causal effect can be defined as:
$$\overline{DE}=\frac{1}{N}\sum_{i=1}^{N}DE_{i}(\mathbf{z}).$$
(3)
We define an indirect causal effect as:
$$IE_{i}(z)=Y_{i}(Z_{i}=z,f(\mathbf{Z}_{\mathbb{N}_{i}}=\mathbf{z}))-Y_{i}(Z_{i}=z,f(\mathbf{Z}_{\mathbb{N}_{i}}=\mathbf{0})).$$
(4)
This indirect effect is defined based on which $k$th-order neighbors are assumed to influence unit $i$. First-order neighbors are areas immediately adjacent to area $i$, second-order neighbors are areas adjacent to the first-order neighbors of area $i$, and this logic continues for $k$th-order neighbors. We assume the direct and indirect effects to be additive, so that $Z_{i}=z$ does not affect indirect effect estimation. We also note that this indirect effect is not comparable to the indirect effects estimated in mediation analysis. Kao termed these "peer influence" causal effects 6. In Figure 1, we can estimate the individual indirect effect by comparing the outcomes for area 3 between panels A and B or between panels C and D, where area 3's treatment is held constant in both these comparisons. We define the average total indirect effect of a unit's $k$th-order neighbors as:
$$\overline{IE}=\frac{1}{N}\sum_{i=1}^{N}Y_{i}(Z_{i}=z,f(\mathbf{Z}_{\mathbb{N}_{i}}=\mathbf{z}))-Y_{i}(Z_{i}=z,f(\mathbf{Z}_{\mathbb{N}_{i}}=\mathbf{0})).$$
(5)
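As an illustration of how Eqs. (2) and (4) operate, the sketch below evaluates both effects under a hypothetical linear potential-outcome function in which $f$ is the proportion of treated neighbors; the functional form and all coefficients are invented for illustration only:

```python
import numpy as np

# Hypothetical linear potential-outcome function, for illustration only:
# Y_i(Z_i, f) = b0 + tau * Z_i + gamma * f, where f(Z_N) is the
# proportion of treated first-order neighbors.
b0, tau, gamma = 1.0, 2.0, 0.5

def f(z_neighbors):
    return z_neighbors.mean()

def Y(z_i, f_val):
    return b0 + tau * z_i + gamma * f_val

z_nbrs = np.array([0, 1, 1])  # observed treatment of unit i's neighbors

# Direct effect (Eq. (2)): flip unit i's own treatment, hold neighbors fixed.
DE = Y(1, f(z_nbrs)) - Y(0, f(z_nbrs))

# Indirect effect (Eq. (4)): hold unit i fixed, set neighbors to all-control.
IE = Y(0, f(z_nbrs)) - Y(0, f(np.zeros(3)))

print(DE, IE)  # DE recovers tau; IE recovers gamma * f(z)
```

Under additivity, the direct effect does not depend on the neighbors' allocation and the indirect effect does not depend on the unit's own treatment, which is exactly the assumption stated above.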
If we consider an outcome $\mathbf{Y}$ that is the number of outcome events occurring in an area, e.g. number of deaths, number of cancer cases, etc., then we can model the individual potential outcomes for area $i$ as:
$$Y_{i}(Z_{i},f(\mathbf{Z}_{\mathbb{N}_{i}}))\sim\mbox{Poisson}(\theta_{i}E_{i})$$
(6)
where $\theta_{i}$ is the relative risk of event $Y$ happening in unit $i$ and $E_{i}$ is the expected number of events in unit $i$. We further model the relative risk as:
$$\log(\theta_{i})=\tau\cdot Z_{i}+\gamma\cdot f(\mathbf{Z}_{\mathbb{N}_{i}})+\sum^{P}_{p=1}\beta_{p}{X_{ip}}+\epsilon_{i}$$
(7)
where $\mathbf{X}$ represents $P$ confounders and $\epsilon_{i}\sim N(0,\sigma^{2})$ is an iid random effect. We note that $\tau=\overline{DE}$ and $\gamma=\overline{IE}$. We also assume no interaction between the direct and spillover effects.
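A minimal simulation of the outcome model in Eqs. (6)-(7) can make the parameterization concrete; all parameter values below are hypothetical, and a negative $\tau$ or $\gamma$ would correspond to a protective direct or spillover effect:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation of Eqs. (6)-(7); every parameter value is
# hypothetical, not taken from the data analysis.
N = 500
tau, gamma, beta = -0.2, -0.1, 0.3    # direct, spillover, confounder effects

Z = rng.integers(0, 2, size=N)        # treatment assignment
f_ZN = rng.uniform(0, 1, size=N)      # f(Z_N_i), e.g. proportion of treated neighbors
X = rng.normal(size=N)                # a single confounder
eps = rng.normal(scale=0.05, size=N)  # iid random effect
E = rng.uniform(50, 150, size=N)      # expected counts per unit

# log(theta_i) = tau * Z_i + gamma * f(Z_N_i) + beta * X_i + eps_i
log_theta = tau * Z + gamma * f_ZN + beta * X + eps
Y = rng.poisson(np.exp(log_theta) * E)  # Y_i ~ Poisson(theta_i * E_i)

# exp(tau) is the relative risk for treated vs. untreated units,
# holding the neighborhood exposure and confounders fixed.
print(np.exp(tau))  # ~0.82, i.e. a protective direct effect
```

Fitting a Poisson regression of `Y` on `Z`, `f_ZN`, and `X` with `log(E)` as an offset would recover $\tau$ and $\gamma$ as $\overline{DE}$ and $\overline{IE}$.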
2.2 Ignorability Under Interference
In RCM without interference, we typically assume $P(Z|Y(0),Y(1),X)=P(Z|X)$. In observational studies, it is often assumed that covariates $\mathbf{X}$ are sufficient to adjust for confounding in the treatment-outcome relationship. In other words, the potential outcomes are independent of the treatment assignment conditional on the covariates. With interference, we must assume that the potential outcomes $\mathbf{Y(z)}$ are independent of the treatment assignment $\mathbf{Z}$ given the covariates $\mathbf{X}$ and the influence network $\mathbf{A}$. That is,
$$\mathbf{Y(z)}\Perp\mathbf{Z}|\mathbf{X},\mathbf{A}.$$
(A1)
We make the additional assumption that the potential outcomes are independent of the influence network conditional on the covariates, or
$$\mathbf{Y(z)}\Perp\mathbf{A}|\mathbf{X}.$$
(A2)
Kao called these assumptions the unconfounded treatment assignment and network influence assumptions under network interference 6. We collectively call assumptions A1 and A2 ignorability under interference.
Practically, ignorability under interference entails defining both a treatment assignment mechanism and an interference mechanism. Covariates relevant to both mechanisms need to be identified. Characterizing a treatment assignment mechanism is well established in the literature 19, but the interference mechanism is a novel concept and should be viewed as "the vehicle through which exposures to peer treatments are delivered" 6. That is, the interference mechanism is how the treatment status of one unit affects the other units. The covariates necessary to meet unconfoundedness for both mechanisms will depend on the data and research question, and the relevant covariates may overlap. We partition the relevant covariates $\mathbf{X}$ as $[\mathbf{X}_{Z},\mathbf{X}_{A}]$ to reflect this.
Rubin first outlined the use of Bayesian inference for estimating causal effects 20. Under the Bayesian framework, potential outcomes are treated as random variables and can be partitioned as $[\mathbf{Y}_{obs},\mathbf{Y}_{mis}]$ for the observed and missing potential outcomes. Bayesian imputation is used to compute the posterior distribution for the missing potential outcomes and causal estimands of interest. Ignorability under interference simplifies modeling and inference by dropping $\mathbf{Z}$ and $\mathbf{A}$ from the posterior distribution.
2.3 Propensity Scores
While ignorability under interference allows us to estimate causal effects with interference, we must still address other challenges to estimating causal effects that occur in observational studies such as covariate imbalances. Because treatment is not randomized in an observational study, we expect treated units to display different characteristics than the control units. This covariate imbalance must be addressed in order to estimate causal effects. Propensity scores are typically estimated as predicted values from a logistic regression model where the outcome is a binary variable indicating which units receive treatment $\mathbf{Z}$. The logistic regression model should adjust for the covariates $\mathbf{X}$ relevant to the treatment assignment and interference mechanisms. Treated and control units are likely to be similar if they have the same propensity score value. By controlling for covariates relevant both to the treatment assignment and to the interference mechanism, we expect units to be similar both in the probability of receiving treatment and in the probability of spillover exposure based on those covariate values. We illustrate this by assuming that the treatment assignment $\mathbf{Z}$ is independent of the potential outcomes given the covariates $[\mathbf{X_{Z},X_{A}}]$ (A1), or
$P(Z=1|Y(Z),X_{A},X_{Z})=P(Z|X_{A},X_{Z})$,
and that the influence network $\mathbf{A}$ is independent of the potential outcomes given the covariates $\mathbf{X_{A}}$ (A2), or
$P(A|Y(Z),X_{A})=P(A|X_{A})$.
Then it follows that
$P(Z=1|Y(Z),X_{A},X_{Z},PS(X_{A},X_{Z}))=P(Z=1|PS(X_{A},X_{Z}))$
where $PS(X_{A},X_{Z})$ is the propensity score that accounts for the covariates $[\mathbf{X_{Z},X_{A}}]$. The proof extends from Imbens and Rubin 19 and uses iterated expectations:
$$\displaystyle P(Z=1|Y(Z),X_{A},X_{Z},PS(X_{A},X_{Z}))$$
$$\displaystyle=\mathbb{E}_{Z}(Z|Y(Z),X_{A},X_{Z},PS(X_{A},X_{Z}))$$
$$\displaystyle=\mathbb{E}[\mathbb{E}_{Z}\{Z|Y(Z),X_{A},X_{Z},PS(X_{A},X_{Z})\}|Y(Z),PS(X_{A},X_{Z})]$$
$$\displaystyle=\mathop{\mathbb{E}}[\mathbb{E}_{Z}\{Z|X_{A},X_{Z},PS(X_{A},X_{Z})\}|Y(Z),PS(X_{A},X_{Z})]$$
$$\displaystyle=\mathbb{E}[\mathbb{E}_{Z}\{Z|PS(X_{A},X_{Z})\}|Y(Z),PS(X_{A},X_{Z})]$$
$$\displaystyle=\mathbb{E}_{Z}\{Z|PS(X_{A},X_{Z})\}$$
$$\displaystyle=P(Z=1|PS(X_{A},X_{Z})).$$
In other words, given a vector of covariates $[\mathbf{X_{Z},X_{A}}]$ that ensures unconfoundedness, adjusting for the propensity score removes the biases associated with differences in those covariates because, conditional on the propensity score, the treatment assignment is independent of the covariates. Rosenbaum and Rubin showed that the difference in outcomes between treated and control units at the same propensity score value is an unbiased estimate of the average treatment effect21. The use of propensity scores to address confounding in spatial settings has been limited 22.
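As a concrete illustration of this step, the propensity score fit can be sketched as a logistic regression on simulated confounders. The sketch below is in Python with NumPy rather than the R tooling used elsewhere in the paper, and all data and coefficient values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: 500 units, two confounders driving treatment.
n = 500
X = rng.normal(size=(n, 2))
true_alpha = np.array([0.8, -0.5])              # assumed coefficients
Z = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_alpha))))

def fit_logistic(X, Z, iters=1000, lr=0.5):
    """Estimate logistic-regression coefficients by gradient ascent
    (a stand-in for the GLM fit used in the paper)."""
    Xd = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(Xd @ beta)))
        beta += lr * Xd.T @ (Z - p) / len(Z)    # average log-likelihood gradient
    return beta

beta_hat = fit_logistic(X, Z)
Xd = np.column_stack([np.ones(n), X])
ps = 1 / (1 + np.exp(-(Xd @ beta_hat)))         # estimated propensity scores
```

In practice one would use an established GLM routine; the hand-rolled gradient ascent here only keeps the sketch self-contained.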
Once propensity scores are estimated for a study population, we consider three methods to ensure balance between treated and control units: i.) pruning; ii.) grouping; and iii.) matching. Pruning involves omitting units where there is no propensity score overlap; in other words, the omitted units have no comparable counterparts in the comparison group. Grouping, or stratifying, involves identifying subgroups of treated and control units with similar propensity score values after pruning 21, 8. Matching pairs each treated unit with $k$ control units ($1:k$ matching) 23; e.g., matching each of $n$ treated units to two control units yields an analysis dataset of $3\times n$ units ($n$ treated units and $2\times n$ control units). In matching, both $k$ and a caliper need to be set; the caliper prevents matching of units whose propensity scores differ by more than a specified threshold. We expect that if the propensity score model includes the covariates $\mathbf{X}$ necessary for ignorability under interference, then we can account for pertinent differences between treated and control units and improve the credibility of the estimated causal effects. We compare propensity score methods to traditional outcome regression of the log relative risk. In outcome regression, we keep all units in the analysis dataset and control for $\mathbf{X}$ as confounders with linear adjustment terms. We argue that inference from data with limited overlap may extrapolate causal effects where certain confounders carry no observed information for certain units. Bayesian inference may handle some of this uncertainty, but propensity score methods reduce bias in the causal estimates. The tradeoff is that as we shrink the sample to reduce covariate imbalance between units, the causal estimates can only be interpreted within the resulting study population.
3 Simulation Studies
3.1 Data Generation
We used a simulation study to compare the effectiveness of propensity score methods to outcome regression when estimating causal effects. We performed the simulation under a variety of scenarios that we describe below. In general, we simulated data for $N=500$ areal units based on real counties at the geographic center of the contiguous United States (Figure 2). We conducted 100 simulations for each scenario. For each of the 100 simulated datasets, we specified nine covariates $\mathbf{X}=X_{1},\ldots,X_{9}$ for the 500 fixed units. For each covariate, we randomly generated a mixture distribution of treated and control units using package distr in R with overlap specified in Table 1 24, 25. We computed the empirical overlap between the treated and control distributions for each covariate by generating 1,000 values from each specified distribution using package overlapping. We report the average percentage overlap over the 1,000 generations in Table 1 26. We also incorporated spatial autocorrelation into each covariate by computing a simultaneous autoregressive (SAR) weights matrix using function invIrW in library spdep and multiplying each generated variable by the matrix with a specified spatial dependence parameter (Table 1) 27, 28.
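The SAR-style construction described above (the analogue of `invIrW` in spdep) can be sketched directly: build a row-standardized neighbor matrix $W$, invert $I-\rho W$, and multiply the independent draws by the result. The grid layout, $\rho$ value, and seed below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 10x10 grid of areal units with rook adjacency.
side = 10
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side:
                W[k, ni * side + nj] = 1.0
W = W / W.sum(axis=1, keepdims=True)        # row-standardize the weights

rho = 0.6                                   # assumed spatial dependence parameter
M = np.linalg.inv(np.eye(n) - rho * W)      # analogue of spdep's invIrW

x_iid = rng.normal(size=n)                  # independent draws for one covariate
x_spatial = M @ x_iid                       # spatially autocorrelated covariate
```

For a row-standardized $W$, each row of $(I-\rho W)^{-1}$ sums to $1/(1-\rho)$, which gives a quick sanity check on the construction.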
Five of the simulated covariates, $X_{1},\ldots,X_{5}$, served as confounders for the treatment assignment $\mathbf{Z}$. We represent the mixture distributions for the confounders as $\mathbf{X_{All}}$; the treated distributions for the confounders as $\mathbf{X_{T}}$; and the control distributions for the confounders as $\mathbf{X_{C}}$. We generated treatment $\mathbf{Z}$ as a binary variable where we modeled $\text{logit}(\Pr(Z_{i}=1))$. We also generated a population distribution for each unit as a negative binomial-distributed random variable with size 0.058 and mean 152,169, which we obtained from the applied dataset described in the next subsection.
3.2 Specifying Treatment and Outcome
In our simulations, we compared two treatment assignment models. First, we specified $\mathbf{Z}$ as a Bernoulli trial and modeled the logit of $\mathbf{Z}$ using the confounders’ mixture distributions $\mathbf{X_{All}}$, which we modeled as
$$\text{logit}(\Pr(Z_{i}=1))=\mathbf{X_{All,i}}\alpha+u_{i}-0.05\cdot e_{i}$$
(8)
where $\alpha=[0.0001,0.00075,0.005,-0.015,-0.001]$, $u_{i}\sim\mbox{Normal}(0,1)$ was multiplied by the SAR matrix with autocorrelation 0.9 to mimic unobserved spatial confounding in the treatment assignment, and the random effect $e_{i}\overset{iid}{\sim}\mbox{Normal}(-0.5,1)$. We centered the error at $-0.05$ to allow the treatment assignment model to produce datasets with approximately 40.8% of the 500 units receiving treatment in each simulation. As a second model, we specified $\mathbf{Z}$ using the covariate distributions of the treated units, $\mathbf{X_{T}}$, instead of $\mathbf{X_{All}}$, expressed
$$\text{logit}(\Pr(Z_{i}=1))=\mathbf{X_{T}}\alpha+u_{i}-0.5\cdot e_{i}.$$
(9)
where all other values remained the same as in equation 8. Using only the confounder distributions of the treated units to specify $\mathbf{Z}$ reflects real-life observational studies.
As we assumed the outcome $\mathbf{Y}$ to be Poisson-distributed, we modeled the log of the relative risk $\theta_{i}$ and assumed a population risk of 0.001 so that the expected count in each area was $E_{i}=0.001\times Population_{i}$. We specified the outcome with and without interference. Under no interference, we generated relative risks for each unit as
$$\log(\theta_{i})=\tau\cdot Z_{i}.$$
(10)
Under interference, we included a spillover effect $\gamma$ and specified the relative risk model as
$$\log(\theta_{i})=\tau\cdot Z_{i}+\gamma\cdot f(\mathbf{Z}_{-i})$$
(11)
where $f(\mathbf{Z}_{-i})$ was the proportion of neighbors receiving treatment $\mathbf{Z}$. In all instances, we specified the true direct causal effect as $\tau=3$ and the true spillover effect as $\gamma=2$ in simulations with interference.
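A minimal sketch of this data-generating model with interference (equation 11): the spillover exposure $f(\mathbf{Z}_{-i})$ is the proportion of a unit's neighbors receiving treatment, and counts are drawn as Poisson with offset $E_i$. The adjacency structure and expected counts below are synthetic placeholders, not the paper's county data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic areal structure: random symmetric adjacency, no self-neighbors.
n = 100
W = (rng.random((n, n)) < 0.05).astype(float)
W = np.triu(W, 1)
W = W + W.T

Z = rng.binomial(1, 0.4, size=n)             # binary treatment
E = rng.poisson(150, size=n) + 1.0           # expected counts (placeholder)

tau, gamma = 3.0, 2.0                        # true direct and spillover effects
deg = W.sum(axis=1)
# Proportion of neighbors treated; isolated units get zero exposure.
f = np.divide(W @ Z, deg, out=np.zeros(n), where=deg > 0)

log_theta = tau * Z + gamma * f              # equation 11
Y = rng.poisson(E * np.exp(log_theta))       # Poisson outcome with offset E
```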
3.3 Estimation
The aim of our simulation study was to evaluate the impact of several factors on estimating direct and spillover effects. In simulations with no interference, i.e. where we modeled the log relative risk of $\mathbf{Y}$ without interference (equation 10), we aimed to estimate only the direct effect, $\hat{\tau}$. In simulations with interference, we modeled the log relative risk of $\mathbf{Y}$ as in equation 11 and estimated both $\hat{\tau}$ and the spillover effect, $\hat{\gamma}$, except in cases where we sought to assess the effect of ignoring interference. We compared estimation across four methods. The first was standard outcome regression as previously described. The remaining three were the propensity score-based methods: pruning, stratification, and matching. For each propensity score-based method, we fit two models: (1) the propensity score model (PSM) and (2) the potential outcomes model for $\log(\hat{\theta}_{i})$. No PSM was used for standard outcome regression.
In order to achieve ignorability under interference, the relevant covariates for the treatment assignment mechanism consisted of the confounders $\mathbf{X}_{Z}=X_{1},\ldots,X_{5}$. The relevant information for the interference mechanism, $\mathbf{X}_{A}$, was simply first-order proximity. We included this information explicitly by estimating the spillover effect only in units that are first-order neighbors of treated units, matching how we specified interference in the true outcome (equation 11).
For the outcome regression method (which we label “Full” in the results, Tables 4 and 5), we modeled the potential outcomes for the entire sample ($N=500$) in a regression model that assumed a linear relationship between the covariates and log-linear relative risk of the outcome. For simulations that did not consider interference, we modeled the log relative risk as
$$\log(\hat{\theta}_{i})=\hat{\tau}\cdot Z_{i}+\sum^{P}_{p=1}\beta_{p}X_{ip}.$$
(12)
Alternatively, for simulations that aimed to capture interference, we modeled the log relative risk as
$$\log(\hat{\theta}_{i})=\hat{\tau}\cdot Z_{i}+\hat{\gamma}\cdot f(\mathbf{Z}_{-i})+\sum^{P}_{p=1}\beta_{p}X_{ip}.$$
(13)
We determined which covariates $\mathbf{X}_{p}$ to adjust for in the outcome regression model by assessing the balance for a particular covariate between the treated and control unit using an unequal variance Student’s $t$-test. We always adjusted based on the covariates’ mixture distribution because in real life we cannot readily separate the treated and control units’ covariate distributions.
For the pruning method (labeled “Pruned” in Tables 4 and 5), we estimated the propensity score for each unit via logistic regression as
$$\text{logit}(\Pr(Z_{i}=1))=\mathbf{X}\hat{\alpha}$$
(14)
where the $P$ covariates contained in $\mathbf{X}$ varied depending on the simulation setup (described in next section). We iteratively dropped units where there was no propensity score overlap and re-fit a PSM until all units were within the propensity score overlap. We used these $N_{pruned}$ units to model $\log(\hat{\theta}_{i})$ where we adjusted only for the covariates that were not balanced within the $N_{pruned}$ units using the mixture distributions.
For the stratified propensity score method (labeled “Stratified” in Tables 4 and 5), we used the same $N_{pruned}$ units and categorized the treated and control units into quintiles based on their propensity score. We adjusted for the strata in the potential outcomes model along with any covariates that were not balanced within the strata, i.e. we modeled
$$\log(\hat{\theta}_{i})=\hat{\tau}\cdot Z_{i}+\hat{\gamma}\cdot f(\mathbf{Z}_{-i})+\sum^{P}_{p=1}\hat{\beta}_{p}X_{ip}+\sum^{5}_{d=1}\hat{\upsilon}_{d},$$
(15)
where $\hat{\upsilon}_{d}$ is the linear adjustment for each propensity score stratum. To determine covariate selection, we calculated Student’s $t$-test for each covariate in each stratum and evaluated whether $\max(|t|)>1.96$, where $t$ is the observed test statistic; if so, we included that covariate in the potential outcomes model. We chose quintiles due to their frequent use in the literature 21, 29.
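The stratification and balance-check step might look like the following sketch: cut units into propensity score quintiles, compute a Welch $t$ statistic per stratum, and flag the covariate for adjustment if the largest absolute statistic exceeds 1.96. The data here are synthetic, with a covariate deliberately shifted in treated units:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic scores, treatment, and an imbalanced covariate.
n = 500
ps = rng.uniform(size=n)
Z = rng.binomial(1, 0.2 + 0.6 * ps)            # treatment related to the score
X1 = rng.normal(size=n) + 0.5 * Z              # covariate shifted in treated units

# Assign each unit to a propensity score quintile (0..4).
cuts = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
quintile = np.searchsorted(cuts, ps)

def welch_t(a, b):
    """Unequal-variance (Welch) t statistic for two samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t_by_stratum = [welch_t(X1[(quintile == d) & (Z == 1)],
                        X1[(quintile == d) & (Z == 0)]) for d in range(5)]
adjust_for_X1 = max(abs(t) for t in t_by_stratum) > 1.96
```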
In the matching method (labeled “Matched” in Tables 4 and 5), we matched treated and control units 1:1 with a caliper of 1.0 standard deviations of the propensity score. That is, we matched only units where the difference in propensity score values did not exceed more than one standard deviation of the PSM. Because not every unit can be matched this way, the sample size was reduced to $N_{matched}$.
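A greedy 1:1 nearest-neighbor match with a caliper of one standard deviation of the score, matching without replacement, can be sketched as follows (synthetic scores; the paper's actual matching procedure may differ in tie-breaking details):

```python
import numpy as np

def match_caliper(ps, Z, caliper_sd=1.0):
    """Greedy 1:1 nearest-neighbor matching on the propensity score,
    without replacement, discarding matches beyond the caliper."""
    cal = caliper_sd * ps.std(ddof=1)
    treated = np.flatnonzero(Z == 1)
    controls = list(np.flatnonzero(Z == 0))
    pairs = []
    for t in treated:
        if not controls:
            break
        d = np.abs(ps[controls] - ps[t])
        j = int(np.argmin(d))
        if d[j] <= cal:
            pairs.append((t, controls.pop(j)))   # each control used at most once
    return pairs

rng = np.random.default_rng(5)
ps = rng.uniform(size=100)
Z = rng.binomial(1, 0.4, size=100)
pairs = match_caliper(ps, Z, caliper_sd=1.0)
```

Treated units with no control inside the caliper are dropped, which is how the sample shrinks to $N_{matched}$.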
We estimated all models using Markov chain Monte Carlo (MCMC) in OpenBUGS and the R computing environment 30. For each model, we ran two chains for 20,000 iterations with a burn-in of 10,000 and thinned the remaining sample to every 15 iterations. We provided non-informative normal priors centered at 0 with variances equal to 1,000 for each parameter such that
$$\hat{\tau},\hat{\gamma},\hat{\beta}_{p},\hat{\upsilon}_{d}\sim\mbox{Normal}(0,1000).$$
(16)
We assessed the chains’ convergence using the Gelman-Rubin Diagnostic 31. If the chains did not converge, then we iteratively ran the chains for another 20,000 iterations with the same burn-in and thinning parameter until convergence was achieved. We assessed the mean posterior estimate for the direct and spillover effects and the variance of the distribution in order to calculate the mean squared error (MSE) and probability coverage for the estimates relative to the known true values. We considered the optimal method to be the one with highest probability coverage and minimal MSE.
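The evaluation metrics can be sketched from posterior draws: the MSE of the posterior means against the known truth, and the share of 95% credible intervals containing it. The draws below are synthetic stand-ins for MCMC output, not results from the paper's models:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for MCMC output: 100 simulations x 1,000 posterior draws.
true_tau = 3.0
post = rng.normal(true_tau, 0.1, size=(100, 1000))

means = post.mean(axis=1)                           # posterior mean per simulation
lo, hi = np.quantile(post, [0.025, 0.975], axis=1)  # 95% credible intervals
mse = np.mean((means - true_tau) ** 2)              # mean squared error
coverage = np.mean((lo <= true_tau) & (true_tau <= hi))
```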
3.4 Simulation Scenarios
Our simulations explored a variety of possible scenarios when estimating causal effects with and without interference (Table 2). In scenarios 1 and 3a, we explored how interference affects estimation of the direct and spillover effects by modeling the log relative risk of outcome $\mathbf{Y}$ without interference using equation 10. For all other scenarios, we incorporated interference by modeling $\mathbf{Y}$ as specified in equation 11. We further explored the impact of different treatment assignment models in scenarios 1, 2a, and 2b compared to all other scenarios. In scenarios 1, 2a, and 2b, we generated treatment assignment $\mathbf{Z}$ as in equation 8 where the confounders’ mixture distribution informed the treatment assignment generation. In the other scenarios, only the confounder distributions of the treated units contributed to $\mathbf{Z}$, as specified in equation 9. In all cases, the PSM relies on the confounders’ mixture distributions as in practice.
From this paradigm, scenario 1 may be described as a straightforward causal analysis with no interference, considered as a baseline case. Scenario 3a was a similar straightforward causal analysis with no interference, but illustrated the effects of the treatment assignment $\mathbf{Z}$ being specified as it truly is in nature, i.e. treatment assignment is informed only by the confounders’ treated distributions versus the observed mixtures.
Scenarios 2a and 2b described estimating the direct effect when there is interference and the analysis ignores it (2a) or accounts for it (2b). In both of these scenarios, we assumed $\mathbf{Z}$ was generated using the confounders’ mixture distributions. Scenario 3b was arguably the most straightforward and realistic scenario in practice: there was interference, the analysis assumed it, and the treatment assignment was informed only by the confounders’ treated distributions. Scenario 3a contrasted with 3b by having no interference and the analysis not assuming any.
We evaluated the use of spatial random effects in scenarios 4a and 4b by including such an effect in the potential outcomes model. We were interested in the random effects’ ability to discern a spillover effect when we did and did not specify a spillover effect explicitly in our potential outcomes model (equation 12 versus 13, or 4a versus 4b). In both scenarios, we specified the true outcome with interference. We specified the spatial random effect to have an intrinsic conditional autoregressive (CAR) prior 32, which conditions a variable on its neighbors’ values using a spatial adjacency matrix. The CAR random effect $\mathbf{\zeta}$ may be specified as:
$$\zeta_{i}|\zeta_{-i},\mathbf{W},\omega^{2}\sim N\Bigg{(}\frac{\sum^{N}_{j=1}w_{ij}\zeta_{j}}{\sum^{N}_{j=1}w_{ij}},\frac{\omega^{2}}{\sum^{N}_{j=1}w_{ij}}\Bigg{)}$$
(17)
where $\mathbf{W}$ is an $N\times N$ adjacency matrix with $w_{ij}=1$ if areas $i$ and $j$ are spatially contiguous, and $\omega^{2}$ is the variance of the spatial random effects $\mathbf{\zeta}$.
In scenario 5, we considered the addition of covariates beyond the true confounders $\mathbf{X}_{T}$. Specifically, we introduced $X_{6},\ldots,X_{9}$ to the outcome regression model of the log relative risk and to the PSMs for each method. In contrast, we considered the omission of true confounders from the analysis in scenarios 6a and 6b. In 6a, we omitted $X_{5}$ from the outcome regression and PSM, and in 6b, we omitted both $X_{4}$ and $X_{5}$.
3.5 Results
As expected, we found that the average sample size shrank as we moved from the full dataset to the pruned dataset to the matched dataset in our simulation study. For the full method, which used the entire simulated sample, the analysis dataset consistently comprised the 500 counties. However, the pruned and stratified datasets contained around 495 units, as shown in Table 3. The matched datasets showed the most drastic size reduction. The lowest sample sizes were observed in scenarios 6a and 6b ($\overline{N}_{Matched}=391.50$) and scenario 5 ($\overline{N}_{Matched}=391.90$). These scenarios modified the number of covariates in the PSM, which in turn may have made matching more difficult by reducing the propensity score overlap between treated and control units, either by using fewer covariates (scenarios 6a and 6b) or by incorporating irrelevant covariates (scenario 5). This contributed to more dissimilar treated and control units.
We found that the stratified and matched methods consistently outperformed the full and pruned methods when estimating direct and spillover effects simultaneously. In scenario 1, which contained no interference and only attempted to estimate a direct effect, we found the highest probability coverages in the pruned and matched methods (91% and 90%, respectively, Table 4); however, the matched method had the lowest MSE ($1.012\times 10^{-5}$, Table 5). That is, the credible interval for the direct effect $\hat{\tau}$ contained the true value of $\tau$ for 90 of the 100 simulations using the matched dataset. In comparison, the credible interval for the direct effect contained the true value for only 65 of the 100 simulations using the full dataset.
In scenario 2a, we introduced interference in the dataset but made no attempt to estimate a spillover effect. In turn, we entirely missed the true direct effect (0% probability coverage across all methods), though the MSE remained relatively low (Table 5). In scenario 2b, we estimated both the direct and spillover effects when interference was present. The full method had marginal probability coverage when estimating the direct effect (26%); the stratified and matched methods were most successful. The credible interval for the direct effect contained the true value of $\tau$ 94 times out of the 100 simulations for both methods (Table 4). Similarly, the credible interval for the spillover effect had 91% and 92% probability coverage for the stratified and matched methods, respectively.
In scenarios 3a onward, we found that covariate imbalance played a role in successful estimation of the direct and spillover effects. The full method continued to underperform the other methods even when there was no interference, with lower probability coverage for capturing the true direct effect and higher MSE (scenario 3a in Tables 4 and 5). When interference was introduced, the full method failed severely: the credible interval for $\hat{\tau}$ captured the true direct effect for only 33 of the 100 simulations in scenario 3b. The full method was slightly better at estimating the spillover effect; the credible intervals for $\hat{\gamma}$ contained the true spillover effect for 52 of the 100 simulations.
Spatial random effects, as demonstrated in scenarios 4a and 4b, did not improve estimation of the direct and spillover effects. When interference was present and a spillover effect was not included in the potential outcomes model (scenario 4a), the probability of capturing the true direct effect was 50% or lower for all methods (Table 4) and the MSE was five-fold higher compared to all other scenarios (Table 5). For the pruned method, the MSEs may be highest because we dropped observations but still needed to estimate coefficients for imbalanced covariates. Even when the spillover effect was included in the potential outcomes model (scenario 4b), comparing scenario 4b to 3b showed that the model with a spatial random effect did not perform as well as the model without one. In fact, the matched method with a spatial random effect captured the true direct effect 69 out of 100 times, compared to 91 out of 100 times for the matched method with no spatial random effect. The same held when estimating the spillover effect: 64% probability coverage compared to 94%.
Incorporating extraneous covariates into the PSM increased the probability coverage for the direct effect when estimating the direct and spillover effects in the full method (scenario 5 compared to scenario 3b). The stratified method had the highest probability coverage for estimating both effects, however (Table 4). We observed slightly higher MSE for estimating these effects when the PSM contained irrelevant confounders (Table 5).
The MSE increased even more in scenarios 6a and 6b as we omitted relevant confounders ($X_{5}$ in scenario 6a and both $X_{4}$ and $X_{5}$ in scenario 6b). However, we still found high probability coverage (above 84%) for capturing the true direct and spillover effects in every method except the full method in these scenarios.
4 Data Application
4.1 Methods
We applied the methods to publicly available data from the Surveillance, Epidemiology, and End Results (SEER) program and Environmental Protection Agency (EPA) to estimate the causal effects of air pollution regulation on lung cancer incidence. SEER is a cancer registry database covering approximately 26% of the U.S. population with data registries including California, New Mexico, Iowa, Kentucky, Louisiana, Connecticut, New Jersey, and Georgia along with several metropolitan areas. The database contains information on year and county of cancer diagnosis and patient demographics33.
In 1970, the Clean Air Act allowed the EPA to regulate air emissions and establish National Ambient Air Quality Standards (NAAQS) intended to protect public health and manage hazardous pollutant emissions34. In 1990, under the Clean Air Act Amendments (CAAA), the EPA began to regulate air quality emissions and designate areas in violation of the NAAQS as “nonattainment”, prompting these areas to take actions to improve air quality10. In 1997 the EPA updated its standards for ambient concentrations of particulate matter with aerodynamic diameter $<2.5\mu m$ (PM${}_{2.5}$). In 2005, the EPA designated certain counties as nonattainment or otherwise “attainment” (or “unclassifiable” if there was not enough data to classify35). The EPA made these designations based on nine factors including historic air quality, population density and urbanization, traffic and commuting patterns, meteorology, and geography 36. They used available data from 2001 onward.
Several of these regulated areas overlap with the study area of the SEER Program. Our analysis focused specifically on lung cancer cases in California, Georgia, and Kentucky (Figure 3) based on the ratio of nonattainment counties to all counties in each state. Multiple studies have established significant associations between air pollution exposure and lung cancer incidence 37, 38, 39. We evaluated lung cancer cases reported between 2005 and 2013. We aimed to estimate the causal effects of the 1997 (PM${}_{2.5}$) nonattainment designations on lung cancer incidence starting in 2005 following nonattainment designation. Based on previous research on particulate matter with aerodynamic diameter $<10$ $\mu m$ and ozone (O${}_{3}$)10, we expected interference and sought to estimate direct causal effects in the nonattainment counties and spillover causal effects in surrounding counties.
We imputed PM${}_{2.5}$ values from the Downscaler Model40. The Downscaler Model allowed for finer scale predictions rather than simply observed values from the EPA Air Quality System, because the Downscaler Model utilizes both observed measurements and the Community Multi-Scale Air Quality Model (CMAQ). Because the data are reported as a mean value with a standard error at the census tract level, we treated PM${}_{2.5}$ as a random Gaussian process with mean and standard error specified by the Downscaler Model. For each ZIP code, we generated ten values in each year and aggregated to the county level using the mean. The final analysis was performed using the county-level aggregated data.
We obtained additional data from multiple sources including nonattainment designations from the EPA Nonattainment Areas for Criteria Pollutants (“Green Book”)35, population demographics from the U.S. Census, meteorological variables from the Automated Surface Observing System (ASOS), elevation at county centroids from ESRI, and county-level smoking rates estimated from the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System 41. We considered lung cancer cases reported between 2002 and 2004 and ASOS climate metrics observed prior to 2005 to be baseline measurements.
We obtained ASOS weather data from 705 weather stations located across the three states in our study area in addition to the bordering states for edge correction. The stations collected data with varying frequency (typically more than once a day) and with no consistent coverage across the study area. For each year, we averaged observed measurements for wind speed (knots), wind direction (degrees), relative humidity, dew point and air temperatures (Fahrenheit), visibility (in miles), barometric pressure, and hourly precipitation (inches). We omitted observations where wind exceeded 50 knots, relative humidity was observed over 100%, and visibility was observed greater than 11 miles, believing such observations to be errors. To average circular wind direction, we converted wind speed and direction into U and V cosine direction vectors. For each variable, we interpolated observations at weather station sites across each state individually (California, Georgia, and Kentucky) using inverse distance weighting (IDW). We chose the optimal power and neighborhood size by comparing sums of squared residuals after 5-fold cross-validation. We also compared IDW to ordinary kriging and found that IDW had lower sum of squared residuals. We assigned each county the interpolated value at the county’s centroid. Finally, we reverted the U and V vectors to wind direction and speed (which we report in meters per second). This interpolation method was shown effective for directional data by Gumiaux 42.
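An IDW interpolation of this kind, with assumed power $p$ and neighborhood size $k$ (the paper tunes both by cross-validation), can be sketched as follows. The station locations and field values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

def idw(stations, values, targets, p=2.0, k=5):
    """Inverse distance weighting: predict each target location as a
    distance-weighted average of its k nearest station observations."""
    preds = np.empty(len(targets))
    for m, t in enumerate(targets):
        d = np.linalg.norm(stations - t, axis=1)
        nearest = np.argsort(d)[:k]
        dn = np.maximum(d[nearest], 1e-12)        # guard exact co-location
        w = 1.0 / dn ** p
        preds[m] = np.sum(w * values[nearest]) / np.sum(w)
    return preds

stations = rng.uniform(size=(50, 2))              # synthetic station coordinates
values = stations[:, 0] * 10                      # a smooth toy field
targets = rng.uniform(size=(20, 2))               # e.g. county centroids
pred = idw(stations, values, targets)
```

Because each prediction is a convex combination of observed values, IDW never extrapolates beyond the observed range, which is one reason it behaves well for sparse station networks.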
The full dataset contained information on the 337 counties in the three states, of which 46 were designated nonattainment. To build the PSM for each method, we used Student’s $t$-test for covariate selection and also considered practical knowledge of nonattainment designation and PM${}_{2.5}$ dispersion. While we made no inference from the PSM, we limited the number of covariates in the model to avoid overfitting and used mean values across the study period, i.e. we averaged values between 2002 and 2004 for pre-treatment periods and between 2005 and 2013 for post-treatment. The final PSM contained the covariates listed in Table 6. Of these, we identified confounders relevant to the treatment assignment to be a county’s ozone nonattainment designations made in 2004 based on 8-hour levels; mean PM${}_{2.5}$ between 2002 and 2004 and between 2005 and 2013, separately; mean temperature; percent urban housing units; mean relative humidity; elevation of the county centroid; population density in 2000; and the mean amount of time spent traveling to work by workers aged 16 or older (minutes). We also included dewpoint temperature in 2000 and 2013 as possible confounders based on imbalances found in exploratory analyses. For the interference mechanism, we identified mean wind speed between 2005 and 2013 and elevation as relevant covariates that could contribute to spillover. We chose these covariates to meet the ignorability under interference assumption. We believe the treatment (PM${}_{2.5}$ nonattainment designation) and the network influence mechanism to be unconfounded based on subject matter expertise, including the variables that determined nonattainment designations and the factors that influence downwind spillover.
In the full model, we used data from all 337 counties and modeled the log relative risk of lung cancer while adjusting for ozone nonattainment designation, mean PM${}_{2.5}$, mean temperature, mean relative humidity, population density, percentage urban housing, mean work travel time, dewpoint temperature in 2000 and 2013, smoking prevalence, and the mean number of lung cancer cases 2002-2004.
To prune the data, we fit a logistic regression PSM that contained all confounders listed in Table 6. Within the pruned dataset, we checked for imbalances in the covariates before modeling the log relative risk in a potential outcomes model. We used the pruned dataset additionally for the stratified potential outcomes model. We identified three subgroups based on the PSM, splitting the data at the 86th and 96.5th quantiles of the PSM. We chose these quantiles in order to have an equal number of nonattainment counties within each strata. The potential outcomes model contained two indicators to correspond to these three subgroups. We again checked for covariate balance within these strata and adjusted for the appropriate confounders. Finally, we created two matched datasets from the pruned dataset. In the first matched dataset, we matched 1:1 with a caliper of 0.25 standard deviation of the PSM. In the second dataset, we matched 1:1 with a caliper of one standard deviation. Once more, we controlled for relevant confounders in the potential outcomes model for both matched datasets.
For all of the potential outcomes models, we included a direct effect term, $\hat{\tau}$, corresponding to nonattainment designation and a spillover term, $\hat{\gamma}$, corresponding to the proportion of nonattainment counties surrounding a county. We also included a temporal term for each year with an exchangeable prior to capture changes in lung cancer risk across time not captured by confounders or nonattainment designation. We provided normal priors to all estimated parameters with gamma hyperpriors for the variance terms with shape and scale parameters of 0.1 and 0.1, respectively. For each model, we ran two MCMC chains for the same lengths, burn-in, and thinning as in the simulation studies until we achieved convergence. Overall, nonattainment designation was static, based on the 2005 designations. For the final reported models, we also assumed the spillover structure to be the proportion of adjacent nonattainment counties bordering a county.
We considered several spillover structures for this application. We initially assumed spillover would be directional based on wind patterns. We used the interpolated annual wind bearing for each county to identify first- and second-order neighboring counties of nonattainment counties whose centroids fell within 30${}^{\circ}$ of the wind bearing. We expanded this threshold to 45${}^{\circ}$ to compare whether this better captured spillover counties. We also evaluated two simpler spillover structures: an indicator for whether a county bordered a nonattainment county and the proportion of nonattainment counties bordering a county. We compared models’ deviance information criteria (DIC)43 to assess which spillover structure best fit the matched data. Models with lower DIC are considered better candidates for a dataset. In our evaluation, we also included models with spillover terms corresponding to a mixture of structures. We found that the proportion of nonattainment counties bordering a county as a spillover structure resulted in a model with the lowest DIC. We also found the weight for this structure to be near 1 in models that included this structure in weighted mixtures.
4.2 Results
Similar to the simulation study results, the sample size for the applied dataset shrank between methods. The pruned dataset contained 239 counties (Figure 4). We adjusted for ozone nonattainment designation, mean relative humidity, mean work travel time, elevation, and smoking prevalence in the pruned model along with a temporal term. For the stratified model, we used the same 239 counties and adjusted for mean relative humidity, dewpoint temperature in 2013, smoking prevalence, and ozone nonattainment designation in the potential outcomes model.
In the two matched models, we continued to control for mean PM${}_{2.5}$ 2002-2004, smoking prevalence, and ozone nonattainment designation. The first matched dataset (caliper equal to 0.25 standard deviation) contained 30 counties (Figure 5). The second matched dataset contained 36 counties due to the larger caliper (equal to 1 standard deviation) (Figure 6).
In general, we found a direct and spillover effect that trended toward protective across the models (near or below 1 on the relative risk scale, Table 7). We found a significant direct effect in the full and stratified models. According to the full model, counties with nonattainment designation have between 0.90 and 0.91 times the risk of lung cancer incidence as control counties after adjusting for relevant covariates and the proportion of nonattainment counties bordering a county. Similarly, the stratified model identified counties with nonattainment designation as having between 0.93 and 0.96 times the risk of lung cancer incidence compared to counties without nonattainment designation.
For the spillover effect, the risk trended toward protective except in the full model (posterior mean: 1.66, 95% CI: 1.61, 1.71). In the remaining models, these results suggest a decreased risk of lung cancer incidence in counties bordering counties with nonattainment designation. This decreased relative risk would only be fully realized if a county were entirely surrounded by nonattainment counties (only five counties in our dataset are); otherwise, this risk must be scaled by the proportion of a county’s neighbors that are designated nonattainment. The total causal effect is the sum of the direct and spillover effects on the log relative risk scale. We remind the reader that the true effects could be non-protective, and these findings misleading, if we have not addressed all confounders, i.e., if our ignorability under interference assumption is violated. More research is needed to validate these findings. Further research may also consider including second- or higher-order neighbor spillover effects. However, because counties are larger than the ZIP codes, census tracts, or block groups often used in spatial analyses, first-order neighbor counties may capture substantial portions of PM${}_{2.5}$’s range.
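How the direct and spillover terms combine for a single county can be sketched as follows. The relative risks here are placeholders, not our posterior estimates:

```python
import math

def county_relative_risk(direct_rr, spillover_rr, z, g):
    """Relative risk implied for a county with designation z (0 or 1)
    and proportion g of nonattainment neighbors. Contributions add on
    the log scale, and the spillover term is scaled by g."""
    return math.exp(z * math.log(direct_rr) + g * math.log(spillover_rr))

# Placeholder effect sizes (not our posterior estimates):
rr_full = county_relative_risk(0.90, 0.95, z=0, g=1.0)  # control county fully surrounded
rr_half = county_relative_risk(0.90, 0.95, z=0, g=0.5)  # control county half surrounded
print(round(rr_full, 3), round(rr_half, 3))
```

A control county entirely surrounded by nonattainment counties receives the full spillover relative risk; one half surrounded receives the square root of it on the multiplicative scale.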
Because the matched sample sizes were so small, we conducted an additional simulation study to assess how sample size may affect causal effect estimation. Using the template of scenario 3b, which we believed to best approximate the applied analysis, we sampled 30 units from the complete matched dataset in each of the 100 simulations. We found coverage probabilities of 88% and 93% and MSEs of $9.93\times 10^{-5}$ and 0.00019, respectively, for the direct and spillover effects. Based upon the results of this simulation study, the observed results in our application are credible.
5 Discussion
In this chapter, we have adapted new assumptions to spatial settings so that we may estimate causal effects in the presence of spatial interference. We have illustrated how these assumptions may be met when addressing causal questions in air pollution epidemiology and shown how to apply these assumptions to a specific dataset. We also proposed methods to accurately and precisely estimate direct and spillover causal effects in the presence of interference and have demonstrated the scenarios in which these methods may outperform each other.
In general, we argue that researchers should characterize both a treatment assignment and an interference mechanism when dealing with data under interference [6]. We also encourage using propensity scores to handle imbalances in confounders, which are inherent in observational studies [23, 29].
The use of propensity scores in spatial causal analysis has been limited. We know of only one other paper that utilizes propensity scores to address spatial confounding [22], and in that case, distance between study units was utilized within the propensity score model.
We have also shown that including spatial random effects in a potential outcomes model is inadequate for addressing spatial confounding. Even when the spillover is explicitly and correctly modeled, we found higher bias and variance in the estimates for direct and spillover effects, when the analysis captured the true parameter value at all. Spatial random effects have been used previously to address spatial interference [14], but our simulations show this may lead to misestimation of direct and spillover effects. Estimating multiple spatially structured terms, including a treatment effect, spillover effect, and spatial random effect, may be too computationally challenging, as shown in scenarios 4a and 4b. Similar results were previously demonstrated by Hodges and Reich when studying associations [44], but were not explored in a causal context. They found that including spatially-correlated error terms may contribute to collinearity and inflate error variances. Further research may explore how a spillover effect corresponding to an asymmetric, directional neighborhood matrix and a CAR random effect based on first-order adjacency interact when included together. We suspect, however, that CAR random effects are not so flexible that they could capture the structure of the indirect effect as we specified it in our simulations (i.e., the proportion of treated neighbors). We also did not address in this paper how well CAR random effects estimate spatial dependence in data generated using a SAR, though using a binary weights matrix as we did is more similar to a CAR structure than row-standardized weights.
We also found that incorporating covariates that are not necessarily confounders, as may happen when the treatment assignment mechanism is not fully understood, does not severely hinder estimation of causal effects (scenario 5). Future research may include a simulation where true confounders are included in the outcome model; we omitted this detail for simplicity’s sake, yet illustrated how confounders may be modeled for estimation purposes. We also demonstrated that successful estimation is still possible when known confounders are omitted as long as covariate imbalances are appropriately addressed (scenarios 6a and 6b). In summary, rigorous methods may compensate for missing confounders that theoretically violate ignorability under interference.
Further research may extend our simulation designs to covariates that are spatially correlated with one another. This is the first simulation design we know of to incorporate spatially autocorrelated covariates when assessing methods, but we also know that in reality covariates are often correlated with one another. This paper does not address how that may affect analysis. We also do not investigate how the size of an areal unit may affect results, though covariates such as population density and land mass may proxy for this information. We recognize that the size of an area, specifically the proportion of area bordering neighboring areas, may impact spillover effects and the accuracy of spillover effect estimation.
We experienced some of these challenges in the applied dataset. Overall, we found evidence of protective direct and spillover effects from nonattainment designation at the county level. This suggests that county-level actions to reduce PM${}_{2.5}$ had a causal effect on lung cancer incidence, and further policy and action should be considered either in these already designated areas or in other areas with elevated lung cancer risk. It is important to note that the true causal effects may not be protective if we have not addressed all confounders, i.e., violated our ignorability under interference assumption. More research needs to be conducted to validate these findings. We also recognize that lung cancer can have an extended latency, and we may see stronger evidence if this analysis considered exposure periods of a decade or longer [39]. We theorize that the spillover effect may be stronger than the direct effect because counties receiving spillover from the nonattainment policy may have a history of lower PM${}_{2.5}$ levels (if they are not nonattainment counties themselves) and are benefitting from county-level actions taken to reduce PM${}_{2.5}$. The non-significant direct effects may be attributed to the fact that PM${}_{2.5}$ levels decreased in both control and nonattainment counties at nearly identical rates across the study period (Figure 7). This fact remained true for all the study populations. We especially note the uptick in PM${}_{2.5}$ from 2012 to 2013, most notable in the matched datasets. Possible explanations include PM${}_{2.5}$-reduction actions tapering off with time and PM${}_{2.5}$ output sources relocating to other counties not being actively regulated.
We additionally recognize that we made a structural assumption about the spillover from nonattainment counties into other counties. While we explored other spillover structures, we remained surprised that the proportion of nonattainment counties surrounding a county explained the most variation in lung cancer risk. Spatial analysis in meteorology remains its own challenge. Further work may be done to better model wind direction and PM${}_{2.5}$ trajectories, which could enhance spillover estimation in this application [42].
In summary, the methods outlined in this paper motivate further research in air pollution epidemiology. We are especially hopeful to apply these methods to estimate effects of air pollution in China on the United States’ West Coast. However, we also believe these methods may be applied beyond air pollution studies to a broader class of observational studies with interference. Researchers increasingly ask questions that involve spatially-related units. While nuances and expert knowledge change from study to study, we believe assumptions and analytical approaches may cater to specific problems and answer causal questions where we can model the relationship between units and the effect of a treatment or an exposure on an outcome.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
6 References
(1)
M. A. Hernán, J. M. Robins, Estimating causal effects from epidemiologic
data, Journal of Epidemiology and Community Health 60 (7) (2006) 578–586.
(2)
D. Rubin, Estimating causal effects of treatments in randomized and
nonrandomized studies, Journal of Educational Psychology 66 (1974) 688–701.
(3)
D. Cox, Planning Experiments, New York, NY: Wiley, 1958.
(4)
D. B. Rubin, Bayesianly justifiable and relevant frequency calculations for the
applied statistician, The Annals of Statistics 12 (4) (1984) 1151–1172.
(5)
M. E. Halloran, C. J. Struchiner, Causal Inference in Infectious
Diseases, Epidemiology 6 (2) (1995) 142–151.
(6)
E. K. Kao, Causal Inference Under Network Interference: A Framework
for Experiments on Social Networks, Dissertation, Department of
Statistics, Harvard University.
(7)
P. Toulis, E. Kao, Estimation of Causal Peer Influence Effects,
Proceedings of the 30th International Conference on International Conference
on Machine Learning 28 (2013) 1489–1497.
(8)
G. Hong, S. W. Raudenbush, Evaluating Kindergarten Retention Policy,
Journal of the American Statistical Association 101 (475) (2006) 901–910.
(9)
N. Verbitsky-Savitz, S. Raudenbush, Causal Inference Under Interference
in Spatial Settings: A Case Study Evaluating Community
Policing Program in Chicago, Epidemiologic Methods 1 (1) (2012)
107–130.
(10)
C. Zigler, F. Dominici, Y. Wang, Estimating causal effects of air quality
regulations using principal stratification for spatially correlated
multivariate intermediate outcomes, Biostatistics 13 (2) (2012) 289–302.
(11)
E. J. T. Tchetgen, T. J. VanderWeele, On causal inference in the presence of
interference, Statistical Methods in Medical Research 21 (1) (2010) 55–75.
(12)
M. E. Sobel, What Do Randomized Studies of Housing Mobility
Demonstrate?: Causal Inference in the Face of Interference, Journal
of the American Statistical Association 101 (476) (2006) 1398–1407.
(13)
M. G. Hudgens, M. E. Halloran, Toward Causal Inference with Interference,
Journal of the American Statistical Association 103 (482) (2008) 832–842.
(14)
C. M. Zigler, C. Choirat, F. Dominici, Impact of National Ambient Air
Quality Standards nonattainment designations on particulate pollution and
health, Epidemiology (2017) 1–30.
(15)
C. M. Zigler, F. Dominici, Point: Clarifying Policy Evidence With
Potential-Outcomes Thinking—Beyond Exposure-Response
Estimation in Air Pollution Epidemiology, American Journal of
Epidemiology 180 (12).
(16)
D. L. Sussman, E. M. Airoldi, Elements of estimation theory for causal
effects in the presence of network interference, arXiv e-prints, arXiv:1702.03578.
(17)
S. Athey, D. Eckles, G. Imbens, Exact P-values for Network
Interference, arXiv e-prints, arXiv:1506.02084.
(18)
C. Manski, Identification of treatment response with social interactions,
Econometrics Journal 16 (1).
doi:10.1111/j.1368-423X.2012.00368.x.
(19)
G. Imbens, D. B. Rubin, Causal Inference for Statistics, Social, and Biomedical
Sciences: An Introduction, Cambridge University Press, 2015.
(20)
D. B. Rubin, Bayesian Inference for Causal Effects: The Role of
Randomization, The Annals of Statistics 6 (1978) 34–58.
(21)
P. R. Rosenbaum, D. B. Rubin, The Central Role of the Propensity Score
in Observational Studies for Causal Effects, Biometrika 70 (1983)
41–55.
(22)
G. Papadogeorgou, C. Choirat, C. M. Zigler, Adjusting for unmeasured spatial
confounding with distance adjusted propensity score matching, Biostatistics.
(23)
E. A. Stuart, Matching Methods for Causal Inference: A Review and a
Look Forward, Statistical Science: a Review Journal of the Institute of
Mathematical Statistics 25 (1) (2010) 1–21.
(24)
R Core Team, R: A Language and Environment
for Statistical Computing, R Foundation for Statistical Computing, Vienna,
Austria (2017).
URL https://www.R-project.org/
(25)
P. Ruckdeschel, M. Kohl, General
purpose convolution algorithm in S4 classes by means of fft, Journal of
Statistical Software 59 (4) (2014) 1–25.
URL http://www.jstatsoft.org/v59/i04/
(26)
M. Pastore, overlapping:
Estimation of Overlapping in Empirical Distributions, R package version
1.5.0 (2017).
URL https://CRAN.R-project.org/package=overlapping
(27)
A. D. Cliff, J. K. Ord, Spatial Processes, Pion, 1981.
(28)
R. Bivand, G. Piras, Comparing
implementations of estimation methods for spatial econometrics, Journal of
Statistical Software 63 (18) (2015) 1–36.
URL https://www.jstatsoft.org/v63/i18/
(29)
P. C. Austin, An Introduction to Propensity Score Methods for
Reducing Effects of Confounding in Observational Studies,
Multivariate Behavioral Research 46 (2011) 399–424.
(30)
D. Lunn, D. Spiegelhalter, A. Thomas, N. Best,
The bugs
project: Evolution, critique and future directions, Statistics in Medicine
28 (25) (2009) 3049–3067.
arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/sim.3680,
doi:10.1002/sim.3680.
URL https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.3680
(31)
A. Gelman, D. B. Rubin, Inference
from iterative simulation using multiple sequences, Statist. Sci. 7 (4)
(1992) 457–472.
doi:10.1214/ss/1177011136.
URL https://doi.org/10.1214/ss/1177011136
(32)
J. Besag, J. York, A. Mollié,
Bayesian image restoration, with
two applications in spatial statistics, Annals of the Institute of
Statistical Mathematics 43 (1) (1991) 1–20.
doi:10.1007/BF00116466.
URL https://doi.org/10.1007/BF00116466
(33)
E. K. Cahoon, R. M. Pfeiffer, D. C. Wheeler, J. Arhancet, S.-W. Lin, B. H.
Alexander, M. S. Linet, D. M. Freedman, Relationship between ambient
ultraviolet radiation and non-Hodgkin lymphoma subtypes: A U.S.
population-based study of racial and ethnic groups, International Journal of
Cancer 136 (2015) 432–441.
(34)
U. S. Environmental Protection Agency, Summary of the Clean Air Act,
https://www.epa.gov/laws-regulations/summary-clean-air-act.
(35)
U. S. Environmental Protection Agency, Green Book PM-2.5 (1997) Area
Information, https://www.epa.gov/green-book/green-book-pm-25-1997-area-information.
(36)
U. S. Environmental Protection Agency, Technical support for state and
tribal air quality fine particle (PM2.5) designations (2004) 5-1–5-3.
(37)
L. Gharibvand, D. Shavlik, M. Ghamsary, W. L. Beeson, S. Soret, R. Knutsen,
S. F. Knutsen, The association between ambient fine particulate air pollution
and lung cancer incidence: Results from the AHSMOG-2 study, Environmental
Health Perspectives 125 (3).
(38)
P. J. Villeneuve, M. Jerrett, D. Brenner, J. Su, H. Chen, J. R. McLaughlin,
Villeneuve et al. respond to “impact of air pollution on lung cancer”,
American Journal of Epidemiology 179 (4) (2014) 455–456.
(39)
J. E. Hart, Invited commentary: Epidemiologic studies of the impact of air
pollution on lung cancer, American Journal of Epidemiology 179 (4) (2014)
452–454.
(40)
D. Holland.
Downscaler
model for predicting daily air pollution [online] (March 2019) [cited March
22, 2019].
(41)
L. Dwyer-Lindgren, A. H. Mokdad, T. Srebotnjak, A. D. Flaxman, G. M. Hansen,
C. J. Murray, Cigarette smoking
prevalence in us counties: 1996-2012, Population Health Metrics 12 (1)
(2014) 5.
doi:10.1186/1478-7954-12-5.
URL https://doi.org/10.1186/1478-7954-12-5
(42)
C. Gumiaux, D. Gapais, J. Brun, Geostatistics applied to best-fit interpolation
of orientation data, Tectonophysics 376 (3) (2003) 241–259.
(43)
D. J. Spiegelhalter, N. G. Best, B. P. Carlin, A. Van Der Linde,
Bayesian
measures of model complexity and fit, Journal of the Royal Statistical
Society: Series B (Statistical Methodology) 64 (4) (2002) 583–639.
arXiv:https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/1467-9868.00353,
doi:10.1111/1467-9868.00353.
URL https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/1467-9868.00353
(44)
J. S. Hodges, B. J. Reich, Adding Spatially-Correlated Errors Can Mess Up the
Fixed Effect You Love, The American Statistician 64 (4) (2010) 325–334.
On the sizes of large subgraphs of the binomial random graph††thanks: The first author’s research is partially supported by NSF
Grant DMS-1500121 and DMS-1764123, Arnold O. Beckman Research Award (UIUC Campus Research Board RB 18132) and
the Langan Scholar Fund (UIUC). The second author’s research is supported by the grant 16-11-10014 of the Russian Science Foundation.
József Balogh111Department of Mathematical Sciences, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA,
and Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russian Federation.
Maksim Zhukovskii222Moscow Institute of Physics and Technology, laboratory of advanced combinatorics and network applications, 9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russian Federation; Adyghe State University, Caucasus mathematical center, ul. Pervomayskaya, 208, Maykop, Republic of Adygea, 385000, Russian Federation; The Russian Presidential Academy of National Economy and Public Administration, Prospect Vernadskogo, 84, bldg 2, Moscow, 119571, Russian Federation.
()
Abstract
In the paper, we answer the following two questions. Given $e(k)=p{k\choose 2}+O(k)$, what is the maximum $k$ such that $G(n,p)$ has an induced subgraph with $k$ vertices and $e(k)$ edges? Given $k>\varepsilon n$, what is the maximum $\mu$ such that a.a.s. the set of sizes of $k$-vertex subgraphs of $G(n,p)$ contains a full interval of length $\mu$? We prove that the value $\mathcal{X}_{n}$ from the first question is not concentrated on any finite set (in contrast to the case of small $e=e(k)$). Moreover, we prove that, for any $\omega_{n}\to\infty$, a.a.s. the size of the concentration set is smaller than $\omega_{n}\sqrt{n/\ln n}$, while, for an arbitrary constant $C>0$, a.a.s. it is bigger than $C\sqrt{n/\ln n}$. Our answer to the second question is: $\mu=\Theta\left(\sqrt{(n-k)n\ln{n\choose k}}\right)$.
1 Introduction
Consider a sequence $\mathcal{F}_{k}$ of sets of graphs on $k$ vertices (i.e., for every $k\in\mathbb{N}$, $\mathcal{F}_{k}$ is a set of graphs on $k$ vertices). Let $X_{n}$ be the maximum $k$ such that there exists $F\in\mathcal{F}_{k}$ and an induced subgraph $H$ in the binomial random graph $G(n,p)$ (see, e.g., [2, 4, 12, 20]) such that $H$ and $F$ are isomorphic. Below, we briefly discuss the main results on the asymptotic behaviour of $X_{n}$ (although we focus on constant $p$, we try to state all known results in the most general settings).
The first related result describes the asymptotic behaviour of the independence number (the maximum size of an independent set) and the clique number (the maximum size of a clique) of $G(n,p)$ [6, 16, 17]. It states that, for an arbitrary constant $p\in(0,1)$, there exists $f(n)$ such that asymptotically almost surely (a.a.s.) the clique number of $G(n,p)$ belongs to $\{f(n),f(n)+1\}$ (below, in such situations, we say that there is a 2-point concentration). By symmetry, the same is true for the independence number. For the latter parameter, the same techniques work when $p=p(n)$ is large enough ($p\geq n^{-\varepsilon}$ for a small enough constant $\varepsilon>0$). By symmetry, the same is true for the clique number, but for small enough $p$ (e.g., for $p\leq 1-n^{-\varepsilon}$). Certain improvements and generalizations of these results can be found in [14, 19].
Clearly, the above concentration results are special cases of the considered general problem. Indeed, $X_{n}$ is the independence number (the clique number), if each $\mathcal{F}_{k}$ contains only the empty (complete) graph.
A natural question to ask is, what about other ‘common’ graph sequences, such as paths, cycles, etc.? Let, for $k\in\mathbb{N}$, $\mathcal{F}_{k}=\{F_{k}\}$. In [7], 2-point concentration results are obtained for $F_{k}=P_{k}$ (simple path on $k$ vertices) and $F_{k}=C_{k}$ (simple cycle on $k$ vertices). Both results hold when $p\geq n^{-1/2}(\ln n)^{2}$.
Let us turn to larger graph families $\mathcal{F}_{k}$. The following families were considered by several researchers: trees, regular graphs, complete bipartite graphs and complete multipartite graphs. Unfortunately, for all these families, it is still unknown whether there is a 2-point concentration, or even an $m$-point concentration for some fixed number $m$. In 1983, Erdős and Palka [10] proved that, for trees (i.e., when $\mathcal{F}_{k}$ consists of all trees on $k$ vertices), $\frac{X_{n}}{\ln n}\stackrel{{\scriptstyle{\sf P}}}{{\to}}\frac{2}{\ln[1/(1-p)]}$ as $n\to\infty$ (hereinafter, $\stackrel{{\scriptstyle{\sf P}}}{{\to}}$ denotes convergence in probability). In 1987, Ruciński [21] obtained a similar law of large numbers type general result for a rather wide class of graph families $\mathcal{F}_{k}$. In particular, his result implies that if $\mathcal{F}_{k}$ are sets of $ck(1+o(1))$-regular graphs, then $\frac{X_{n}}{\ln n}\stackrel{{\scriptstyle{\sf P}}}{{\to}}\frac{2}{c\ln[1/p]+(1-c)\ln[1/(1-p)]}$ as $n\to\infty$. For several families of complete bipartite and multipartite graphs, similar results were obtained in [18, 21].
In [13], families of graphs having different edge conditions are considered. More formally, given a sequence $e=e(k)$, $\mathcal{F}_{k}=\mathcal{F}_{k}(e)$ is a set of all graphs on $k$ vertices having at most $e(k)$ edges. The main result of [13] states, in particular, the following. Let $n^{-1/3+\varepsilon}<p<1-\varepsilon$ for some $\varepsilon\in(0,1/3)$. Let $e=e(k)=o(\frac{pk\ln k}{\ln\ln k})$ be a sequence of non-negative integers. Then there is a function $f(n)$ such that a.a.s. $X_{n}\in\{f(n),f(n)+1\}$.
It is easy to show, using the so-called second moment method, that a similar result holds for families of graphs having exactly $e$ edges: if $0\leq e(k)=O(k)$ (surely, this bound can be improved, but there is no need to be very precise here) and $\mathcal{F}_{k}=\mathcal{F}_{k}(e)$ is the set of all graphs on $k$ vertices with exactly $e(k)$ edges, then there is a 2-point concentration of $X_{n}$. For the sake of convenience, let us denote this random variable $X_{n}$ by $\mathcal{X}_{n}(e)$.
One of the main goals of our study is to find a natural sequence of graph families such that, for the respective $X_{n}$, there is no sequence $f(n)$ and fixed number $m$ such that a.a.s. $X_{n}\in[f(n),f(n)+m]$ (in such cases, we say that $X_{n}$ is not tightly concentrated). In particular, we want to find a sequence $e=e(k)$ such that $\mathcal{X}_{n}(e)$ is not tightly concentrated. It is quite natural to check whether the ‘average’ number of edges $e(k)=p{k\choose 2}+O(1)$ is appropriate (since the number of edges is an integer, the right-hand side should be an integer as well; this is why $O(1)$ appears). In other words, how many vertices should we remove from the random graph to make the number of edges equal to the expected number? Is this number of vertices tightly concentrated? We give the following answer for both questions for a much wider class of functions $e(k)$.
Theorem 1
Let $e(k)={k\choose 2}p+O(k)$ be a sequence of non-negative integers.
(i)
There exists $t>0$ such that, for $c>t$ and $C>2c+t$, we have
$$0<{\mathrm{lim\,inf}}_{n\to\infty}{\sf P}\left(n-C\sqrt{\frac{n}{\ln n}}<\mathcal{X}_{n}(e)<n-c\sqrt{\frac{n}{\ln n}}\right)\leq$$
$${\mathrm{lim\,sup}}_{n\to\infty}{\sf P}\left(n-C\sqrt{\frac{n}{\ln n}}<\mathcal{X}_{n}(e)<n-c\sqrt{\frac{n}{\ln n}}\right)<1.$$
(ii)
Let, for a sequence $m_{k}=O(\sqrt{k/\ln k})$ of non-negative integers, the following smoothness condition hold: $\left|\left(e(k)-{k\choose 2}p\right)-\left(e(k-m_{k})-{k-m_{k}\choose 2}p\right)\right|=o(k)$. Then, for every $\varepsilon>0$, there exist $c,C$ such that
$${\mathrm{lim\,inf}}_{n\to\infty}{\sf P}\left(n-C\sqrt{\frac{n}{\ln n}}<\mathcal{X}_{n}(e)<n-c\sqrt{\frac{n}{\ln n}}\right)>1-\varepsilon.$$
Remark. The first part of Theorem 1 implies that $\mathcal{X}_{n}(e)$ is not tightly concentrated. Moreover, the size of the concentration set is $O\left(\sqrt{\frac{n}{\ln n}}\right)$, and this asymptotic bound is best possible. The smoothness condition in (ii) holds for all $e(k)={k\choose 2}p+o(k)$.
This result is closely related to a study of possible sizes (i.e., number of edges) of subgraphs of the random graph that was started by Alon and Kostochka [1]. Let us ask the following question. What is the maximum $\mu=\mu(k)$, $k\in\mathbb{N}$, such that a.a.s., for every $k$, the set of sizes of $k$-vertex induced subgraphs of $G(n,p)$ contains a full interval of length $\mu(k)$? In [1] it is proved that, for $k\leq 10^{-3}n$ and $p=1/2$, $\mu=\Omega(k^{3/2})$.
This result is motivated by the following conjecture of Erdős, Faudree and Sós (see [8, 9]): for every constant $c>0$, there exists a constant $b=b(c)>0$ so that if $G$ is a $c$-Ramsey graph on $n$ vertices, then the number of distinct pairs $(|V(H)|,|E(H)|)$, as $H$ ranges over all induced subgraphs of $G$, is at least $bn^{5/2}$ (an $n$-vertex graph is $c$-Ramsey if both its independence number and clique number are at most $c\ln n$; $V(H)$ and $E(H)$ denote the set of vertices and the set of edges of $H$, respectively). The result of [1] immediately implies that the conjecture is true for almost all graphs. Recently, the conjecture was proved by Kwan and Sudakov [15].
Extending results of [1], we get asymptotically close upper and lower bounds (that differ in a constant multiplicative factor) on $\mu$ for $k>\varepsilon n$.
Theorem 2
Let $\varepsilon>0$ be an arbitrary small constant, $m(k)=\sqrt{(n-k)n\ln{n\choose k}}$ for $k\geq\varepsilon n$ and $m(k)=k\sqrt{\ln{n\choose k}}$ for $k<\varepsilon n$.
(i)
There exists $q>0$ such that a.a.s., for every $k\in\{\lfloor\varepsilon n\rfloor,\ldots,n-1\}$, the set of sizes of induced $k$-vertex subgraphs of $G(n,p)$ contains a full interval of length at least $qm(k)$. Moreover, a.a.s., for every $k\in\{1,\ldots,\lfloor\varepsilon n\rfloor-1\}$, the set of sizes of induced $k$-vertex subgraphs of $G(n,p)$ contains a full interval of length at least $qk^{3/2}$.
(ii)
There exists $Q>0$ such that a.a.s., for every $k\in\{1,\ldots,n-1\}$, the set of sizes of induced $k$-vertex subgraphs of $G(n,p)$ does not contain any full interval of length at least $Qm(k)$.
Therefore, for $k\geq\varepsilon n$, $\mu=\Theta\left(\sqrt{(n-k)n\ln{n\choose k}}\right)$. For $k<\varepsilon n$, $\mu\in[qk^{3/2},Qk^{3/2}\ln(n/k)]$ for some constants $q,Q$. The latter lower bound ($\mu\geq qk^{3/2}$) follows immediately from the result of [1] since their proof works for arbitrary constant $p$. However, our result is of most interest when $k=n-o(n)$, where $m(k)$ becomes much smaller than $k^{3/2}$.
Notice that, for all $k<\frac{(2-\delta)}{\max\{\ln(1/p),\ln(1/(1-p))\}}\ln n$, the exact value of $\mu(k)$ is known: $\mu(k)={k\choose 2}+1$ since a.a.s., for every such $k$ and every graph $F$ on $k$ vertices, there is an induced subgraph in $G(n,p)$ isomorphic to $F$ (this is a simple exercise that can be solved using the second moment method; for $p=1/2$, it appears as exercise 1 in [2]).
2 Preliminaries
Given a graph $\Gamma$ and a set $U\subset V(\Gamma)$, we call the number of edges of $\Gamma$ with at least one endpoint in $U$ the degree of $U$ and denote it $\delta(U)$ (i.e., $\delta(U)=|\{\{u,v\}\in E(\Gamma):\,\text{either }u\in U,\text{ or }v\in U\}|$).
We also use notations $v(\Gamma)$ and $e(\Gamma)$ for the number of vertices and the number of edges in $\Gamma$ respectively; $\Delta[\Gamma]$ denotes the maximum degree of $\Gamma$.
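As a concrete illustration of these definitions, the following sketch (on an arbitrary toy graph) computes $\delta(U)$ and checks the crude bound $\delta(U)\leq|U|\cdot\Delta[\Gamma]$, which is the mechanism behind the bound $Y_{m}\leq mY_{1}$ used in Section 3.2:

```python
def set_degree(edges, U):
    """delta(U): the number of edges with at least one endpoint in U."""
    U = set(U)
    return sum(1 for u, v in edges if u in U or v in U)

def max_degree(edges):
    """Delta[Gamma]: the maximum vertex degree of the graph."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return max(deg.values())

# A 5-cycle with one chord {1, 3}.
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (1, 3)]
U = {1, 2}
print(set_degree(edges, U))  # edges touching 1 or 2: 4
# Each vertex of U contributes at most Delta edges, so:
assert set_degree(edges, U) <= len(U) * max_degree(edges)
```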
As usual, the vertex set of $G(n,p)$ is $\{1,\ldots,n\}$ and we denote it by $V_{n}$. We will use the following fact: a.a.s. the maximum degree of $G(n,p)$ is at most $pn+\sqrt{2p(1-p)n\ln n}$ [5].
Let $\Phi(x)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-t^{2}/2}dt$. Consider a binomial random variable $\xi$ with parameters $N$ and $p$. Then, by the de Moivre–Laplace theorem (see, e.g., [3] and [11]), for $h=o(N^{1/6})$,
$${\sf P}\left[\xi\leq Np+h\sqrt{Np(1-p)}\right]\sim\Phi(h),$$
(1)
$$\text{for integer }Np+h\sqrt{Np(1-p)},\quad{\sf P}\left[\xi=Np+h\sqrt{Np(1-p)}\right]\sim\frac{1}{\sqrt{2\pi Np(1-p)}}e^{-h^{2}/2}$$
(2)
as $N\to\infty$.
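Relation (1) can be verified numerically against the exact binomial distribution; a quick sanity check (not part of any proof), using log-space summation to avoid underflow:

```python
import math

def binom_cdf(N, p, m):
    """P[Bin(N, p) <= m], with the binomial pmf computed in log space."""
    lp, lq = math.log(p), math.log(1 - p)
    total = 0.0
    for k in range(m + 1):
        logpmf = (math.lgamma(N + 1) - math.lgamma(k + 1)
                  - math.lgamma(N - k + 1) + k * lp + (N - k) * lq)
        total += math.exp(logpmf)
    return total

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

N, p, h = 10_000, 0.3, 1.0
m = int(N * p + h * math.sqrt(N * p * (1 - p)))
print(binom_cdf(N, p, m), Phi(h))  # both approximately 0.84
```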
In our proofs, we repeatedly use the following relation:
$$1-\Phi(x)\sim\frac{1}{\sqrt{2\pi}x}e^{-x^{2}/2}\text{ as }x\to\infty$$
(3)
(see relation ($1^{\prime}$) in [3]).
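A quick numerical check of relation (3), using the complementary error function for a stable tail computation (an illustration only):

```python
import math

def mills_ratio(x):
    """(1 - Phi(x)) divided by its asymptotic approximation
    exp(-x^2/2) / (sqrt(2*pi) * x); tends to 1 as x -> infinity."""
    tail = 0.5 * math.erfc(x / math.sqrt(2.0))  # 1 - Phi(x), stable for large x
    approx = math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)
    return tail / approx

# The ratio approaches 1 as x grows (the relative error decays like 1/x^2).
print(mills_ratio(3.0), mills_ratio(10.0), mills_ratio(20.0))
```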
3 Proof of Theorem 1
Denote $f(k)=e(k)-{k\choose 2}p$. Below in the proof, we assume that $Q\in\mathbb{R}$ is such that $-Qk\leq f(k)\leq Qk$ for all $k\in\mathbb{N}$.
The proof is divided into three parts.
In Section 3.1, we consider several bounds on the number of edges in $G(n,p)$ that are true with positive asymptotical probabilities. That is, we consider two intervals $I_{n}^{1},I_{n}^{2}$ and a set $I_{n}(\varepsilon)$ such that the left bound $a_{2}+Q$ of $I_{n}^{2}$ is bigger than the right bound $b_{1}-Q$ of $I_{n}^{1}$, and the difference between them is bigger than $2Q$. All intervals are of sizes $O(n)$, the asymptotical probability that the number of edges is inside $I_{n}^{j}$, $j\in\{1,2\}$, is positive, and the probability of the same event but for $I_{n}(\varepsilon)$ is bigger than $1-\varepsilon$.
In Section 3.2, we obtain upper bounds on $\mathcal{X}_{n}(e)$. First, we assume that $e(G(n,p))\in I_{n}^{i}$ and obtain upper bounds $B_{i}=n-c_{i}\sqrt{n/\ln n}$. Second, we assume that $e(G(n,p))\in I_{n}(\varepsilon)$, and obtain an upper bound $B(\varepsilon)=n-c\sqrt{n/\ln n}$.
In Section 3.3, we obtain lower bounds on $\mathcal{X}_{n}(e)$. First, we assume that $e(G(n,p))\in I_{n}^{i}$ and obtain lower bounds $A_{i}=n-C_{i}\sqrt{n/\ln n}$. Second, we assume that $e(G(n,p))\in I_{n}(\varepsilon)$ and obtain a lower bound $A(\varepsilon)=n-C\sqrt{n/\ln n}$.
Combining the second and the third part, we obtain that, first, the lower bound $A_{2}$ is bigger than the upper bound $B_{1}$ whenever $a_{2}>2b_{1}$. This finishes the proof of Theorem 1.(i). Second, since both bounds $A(\varepsilon)$ and $B(\varepsilon)$ are true with asymptotical probabilities at least $1-\varepsilon$, we get Theorem 1.(ii).
3.1 Bounds on the number of edges
Fix real numbers $a_{1}<b_{1}<a_{2}<b_{2}$ such that $a_{1}>0$, $b_{1}>a_{1}+2Q$, $a_{2}>2b_{1}$, $b_{2}>a_{2}+2Q$. Consider the sets
$$I_{n}^{1}=\left(p{n\choose 2}+(a_{1}+Q)n,\,p{n\choose 2}+(b_{1}-Q)n\right),$$
$$I_{n}^{2}=\left(p{n\choose 2}+(a_{2}+Q)n,\,p{n\choose 2}+(b_{2}-Q)n\right).$$
Let $\gamma>0$ be such that, for $n$ large enough,
$$\min\left\{{\sf P}(e(G(n,p))\in I_{n}^{1}),\,{\sf P}(e(G(n,p))\in I_{n}^{2})%
\right\}>\gamma.$$
(4)
Such a $\gamma$ exists since $e(G(n,p))\sim\mathrm{Bin}\left({n\choose 2},p\right)$ (see Section 2).
Moreover, for every $\varepsilon>0$, consider $a=a(\varepsilon)$ and $b=b(\varepsilon)$ such that, for $n$ large enough,
$${\sf P}\left(e(G(n,p))\in I_{n}(\varepsilon)\right)>1-\varepsilon,\text{ where }$$
$$I_{n}(\varepsilon)=\left(p{n\choose 2}-(b-Q)n,p{n\choose 2}+(b-Q)n\right)%
\setminus\left[e(n)-an,e(n)+an\right].$$
(5)
3.2 Upper bounds on $\mathcal{X}_{n}(e)$
Consider a sequence of integers $m=m(n)\leq\frac{c}{\sqrt{2p(1-p)}}\sqrt{\frac{n}{\ln n}}$. Denote by $M=M(m)={m\choose 2}+m(n-m)$ the maximum possible degree of an $m$-set. Then, for a fixed $m$-set, the expected value of its degree equals $pM$. Consider the random variable
$$Y_{m}=\max_{U\in{V_{n}\choose m}}\delta(U).$$
Then $Y_{1}=\Delta[G(n,p)]$ is the maximum degree of $G(n,p)$. Since $Y_{1}<pn+\sqrt{2p(1-p)n\ln n}$ holds a.a.s. (see Section 2), we immediately get that, a.a.s.
$$Y_{m}\leq mY_{1}<mpn+m\sqrt{2p(1-p)n\ln n}=Mp+m\sqrt{2p(1-p)n\ln n}+o(n).$$
In particular, a.a.s. $Y_{m}<Mp+cn+o(n)$. Under the assumption that $e(G(n,p))>p{n\choose 2}+(a_{i}+Q)n$, we must remove at least $a_{i}n$ extra edges to obtain at most $Qn$ edges more than the average value. Thus, if $c<a_{i}$, a.a.s. we cannot reach the desired number of edges by removing an $m$-set. Therefore, for every $\delta>0$, from (4), we get that
$${\sf P}\left(\mathcal{X}_{n}(e)<n-\frac{a_{i}(1-\delta)}{\sqrt{2p(1-p)}}\sqrt{%
\frac{n}{\ln n}}\right)>\gamma$$
(6)
for all large enough $n$ and $i\in\{1,2\}$.
Since $|f(n)-f(n-m)|=o(n)$, in the same way, from (5), we get that
$${\sf P}\left(\mathcal{X}_{n}(e)<n-\frac{a(1-\delta)}{\sqrt{2p(1-p)}}\sqrt{%
\frac{n}{\ln n}}\right)>1-\varepsilon$$
(7)
for all large enough $n$.
3.3 Lower bounds on $\mathcal{X}_{n}(e)$
This part of the proof consists of five steps. The overall idea is to use a small set of vertices (extracted in Section 3.3.1) to make the number of edges exactly $e(k)$. This small set becomes helpful after the major part of the extra edges is destroyed. More precisely, having $(b+Q)n$ edges more than the average, we can easily destroy $bn$ extra edges by removing a set of $O(\sqrt{n/\ln n})$ vertices. We do that in Section 3.3.2. But this is far from what we need, since $f$ may differ a lot from its bound $Q$. In Section 3.3.3, we show how to reduce the number of extra edges to $O(\sqrt{n\ln n})$. We use the supplementary small set in Sections 3.3.4 and 3.3.5, where we reach the precise number of edges in two steps exploiting two equal parts of the set.
3.3.1 Extracting a supplementary part
Let $n_{0}=\left\lfloor\frac{\sqrt{n}}{\ln n}\right\rfloor$, $\tilde{n}=n-2n_{0}$. Consider the partition $V_{n}=\{1,\ldots,2n_{0}\}\sqcup\tilde{V}_{\tilde{n}}$, where $\tilde{V}_{\tilde{n}}=\{2n_{0}+1,\ldots,n\}$. Divide the supplementary set $\{1,\ldots,2n_{0}\}$ into two disjoint parts of equal sizes $V_{1}=\{1,\ldots,n_{0}\}$ and $V_{2}=\{n_{0}+1,\ldots,2n_{0}\}$. Denote by $G_{\tilde{n}}$ the subgraph of $G(n,p)$ induced by $\tilde{V}_{\tilde{n}}$.
For $A\in\mathbb{R}$, let $\zeta(A)$ be the number of vertices in $G_{\tilde{n}}$ with degrees greater than
$$\tilde{n}p+A\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}=np+A\sqrt{np(1-p)\ln n}+O(\sqrt%
{n}).$$
3.3.2 Estimating from above the number of vertices we need to remove
Fix $c>0$. Let us estimate the probability
$$g_{c}:={\sf P}\left[\zeta\left(\sqrt{\frac{1}{2}}\right)>c\sqrt{\frac{n}{\pi\ln n}}\right].$$
By (1) and (3), the expectation ${\sf E}\zeta$ of $\zeta:=\zeta\left(\sqrt{\frac{1}{2}}\right)$ is equal to
$$(1+o(1))n\int_{\sqrt{\ln n/2}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}dx\sim%
\sqrt{\frac{n^{3/2}}{\pi\ln n}},$$
and the variance is
$${\sf D}\zeta=(1+o(1))n(n-1)\left(\int_{\sqrt{\ln n/2}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}dx\right)^{2}+{\sf E}\zeta-({\sf E}\zeta)^{2}=o(({\sf E}\zeta)^{2}).$$
So, by Chebyshev’s inequality,
$$1-g_{c}\leq\frac{o(({\sf E}\zeta)^{2})}{({\sf E}\zeta)^{2}(1+o(1))}=o(1).$$
Therefore, a.a.s. there are more than $m_{c}:=c\sqrt{\frac{n}{\pi\ln n}}$ vertices having degrees bigger than $np+\sqrt{\frac{1}{2}np(1-p)\ln n}+O(\sqrt{n})$. So, a.a.s.
$$Y_{m_{c}}[G_{\tilde{n}}]>\left(np+\sqrt{\frac{1}{2}np(1-p)\ln n}+O(\sqrt{n})%
\right)m_{c}-(m_{c})^{2}=$$
$$M(m_{c})p+cn\sqrt{\frac{p(1-p)}{2\pi}}+o(n).$$
Roughly speaking, in order to remove $Cn$ extra edges, we need to remove at most $C\sqrt{\frac{2n}{p(1-p)\ln n}}$ vertices. We do that in the next section.
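The estimate ${\sf E}\zeta\sim\sqrt{n^{3/2}/(\pi\ln n)}$ can likewise be checked numerically. The sketch below (an illustration, not part of the argument) evaluates $n\,(1-\Phi(\sqrt{\ln n/2}))$, the normal approximation of ${\sf E}\zeta$, and compares it with the closed form; the ratio slowly tends to $1$.

```python
import math

def expected_zeta(n):
    """n * P(N(0,1) > sqrt(ln(n)/2)): normal approximation to E(zeta)."""
    x = math.sqrt(math.log(n) / 2.0)
    return n * 0.5 * math.erfc(x / math.sqrt(2.0))

def zeta_asymptotic(n):
    """The closed form sqrt(n^{3/2} / (pi * ln n)) from the text."""
    return math.sqrt(n ** 1.5 / (math.pi * math.log(n)))

for n in (10 ** 4, 10 ** 6, 10 ** 8):
    print(n, expected_zeta(n) / zeta_asymptotic(n))
```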
3.3.3 Removing a major part of extra edges
For $m\in\mathbb{N}$, set
$$\tilde{M}(m)=m(\tilde{n}-m)+{m\choose 2},\quad\tilde{\delta}(U)=\delta[G_{%
\tilde{n}}](U).$$
Moreover, let $E_{\tilde{n}}:=e(G_{\tilde{n}})-{\tilde{n}\choose 2}p$. From (4), ${\sf P}(E_{\tilde{n}}\in((a_{i}+Q)\tilde{n},\,(b_{i}-Q)\tilde{n}))>\gamma$ for $n$ large enough and $i\in\{1,2\}$.
Let us describe an algorithm for constructing a set of $m=O(\sqrt{n/\ln n})$ vertices $U\subset\tilde{V}_{\tilde{n}}$ such that $G_{\tilde{n}}|_{\tilde{V}_{\tilde{n}}\setminus U}$ has ${{\tilde{n}}-m\choose 2}p+f(\tilde{n}-m)+O(\sqrt{n\ln n})$ edges.
At step $1$, $U_{1}=\{v_{1}\}$ where $v_{1}$ has maximum degree in $G_{\tilde{n}}$. If
$$\tilde{\delta}(U_{1})>p\tilde{M}(1)+E_{\tilde{n}}-f(\tilde{n}-1),$$
then the algorithm terminates, and $U=U_{0}:=\varnothing.$
Assume that, at step $i\geq 1$, we have a set $U_{i}$ of $i$ vertices. If the algorithm has not terminated yet, then consider the set $U_{i+1}=U_{i}\cup\{v_{i+1}\}$ of $i+1$ vertices having maximum degrees in $G_{\tilde{n}}$. If
$$\tilde{\delta}(U_{i+1})>p\tilde{M}(i+1)+E_{\tilde{n}}-f(\tilde{n}-i-1),$$
then the algorithm terminates, and $U=U_{i}$.
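The greedy procedure above is straightforward to implement. The following sketch is an illustration only: it builds a small $G(\tilde{n},p)$ directly and, as an assumption, takes the target deviation $f\equiv 0$; it grows $U$ by vertices of maximum degree and stops exactly when the termination inequality first holds.

```python
import random
from itertools import combinations

def greedy_removal(n, p, f=lambda k: 0, seed=0):
    """Grow U by maximum-degree vertices; stop when the incident-edge
    count delta(U) first exceeds p*M(|U|) + E - f(n - |U|)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    edges = 0
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
            edges += 1
    E = edges - p * n * (n - 1) / 2      # deviation of e(G) from its mean
    order = sorted(range(n), key=lambda v: -len(adj[v]))
    U, delta = set(), 0                  # delta = number of edges meeting U
    for i, v in enumerate(order):
        # delta(U + v) = delta(U) + deg(v) - |N(v) ∩ U|
        new_delta = delta + len(adj[v]) - len(adj[v] & U)
        M = (i + 1) * (n - i - 1) + (i + 1) * i // 2
        if new_delta > p * M + E - f(n - i - 1):
            return U                     # terminate with U = U_i
        U.add(v)
        delta = new_delta
    return U
```

On termination, the graph induced on the remaining vertices has $e(G)-\delta(U)$ edges, which is the quantity the proof tracks.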
By results from Section 3.3.2, for $n$ large enough, with probability at least $\gamma$, the algorithm terminates in time $O(\sqrt{n/\ln n})$.
Let us prove that the algorithm gives a set of vertices $\tilde{V}_{\tilde{n}}\setminus U$ inducing a graph with the desired number of edges up to an $O(\sqrt{n\ln n})$ error.
Let $i=O(\sqrt{\frac{n}{\ln n}})$. Let us estimate from above
$$(\tilde{\delta}(U_{i+1})-p\tilde{M}(i+1))-(\tilde{\delta}(U_{i})-p\tilde{M}(i)%
)=\tilde{\delta}(U_{i+1})-\tilde{\delta}(U_{i})-p(\tilde{n}-i-1).$$
Obviously, it cannot be bigger than $\Delta[G_{\tilde{n}}]-p(\tilde{n}-i-1)$. But the latter is bigger than $2\sqrt{np(1-p)\ln n}+O(\sqrt{n})$ with probability $O(\frac{1}{n})$. Indeed, by (1) and (3),
$${\sf P}(\Delta[G_{\tilde{n}}]>\tilde{n}p+2\sqrt{\tilde{n}p(1-p)\ln\tilde{n}})\leq{\sf E}\zeta(2)\sim n\int_{2\sqrt{\ln\tilde{n}}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}dx=O\left(\frac{1}{n\sqrt{\ln n}}\right).$$
Therefore, for every $\delta>0$, for $n$ large enough, with probability at least $\gamma$, using the described algorithm, for some $m\leq(b_{i}+\delta)\sqrt{\frac{2}{p(1-p)}}\sqrt{\frac{n}{\ln n}}$, we can find an $m$-set $U\subset\tilde{V}_{\tilde{n}}$ such that the subgraph induced on the remaining set of vertices $\tilde{V}_{\tilde{n}}\setminus U$ has
$${\tilde{n}-m\choose 2}p+f(\tilde{n}-m)+f_{0}(n)\sqrt{np(1-p)\ln n}$$
edges, where $f_{0}(n)\in(3,6)$ (note that $f_{0}$ is random).
3.3.4 Exploiting the first part of the supplementary set
Here, we assume that the above algorithm constructs the desired set $U$ (this happens with probability at least $\gamma$) of size $m$, and all events below are conditioned on this event. Since all events below are defined by edges chosen independently of $G(n,p)|_{\tilde{V}_{\tilde{n}}}$, we are still working with independent Bernoulli random variables.
For $V\subset V_{1}$, denote $\delta_{U}(V)=\sum_{v\in V}\delta_{U}(v)$, where $\delta_{U}(v)$ is the number of neighbors of $v$ in $\tilde{V}_{\tilde{n}}\setminus U$. Here, we find a subset $V_{1}^{0}\subset V_{1}$ of constant size such that adding it back corrects the deviation from the desired number of edges up to $o(\sqrt{n})$. For doing this, we consider the following algorithm.
For a subset $U_{0}\subseteq V_{1}$ and a positive integer $h$, let $p_{h}(U_{0})$ (respectively, $\tilde{p}_{h}(U_{0})$) be the probability that all but at most $h-1$ (respectively, all) vertices of $U_{0}$ have more than $(\tilde{n}-m)p-\frac{1}{h}f_{0}\sqrt{np(1-p)\ln n}$ neighbors in $\tilde{V}_{\tilde{n}}\setminus U$. Set $\kappa:=|U_{0}|$. By (1) and (3),
$$p_{h}(U_{0})=\sum_{i=0}^{h-1}{\kappa\choose i}\left(n^{-\frac{f_{0}^{2}}{2h^{2%
}}+o(1)}\right)^{i}\left(1-n^{-\frac{f_{0}^{2}}{2h^{2}}+o(1)}\right)^{\kappa-i%
},\quad\tilde{p}_{h}(U_{0})=\left(1-n^{-\frac{f_{0}^{2}}{2h^{2}}+o(1)}\right)^%
{\kappa}.$$
Clearly, there exists a minimum $h\leq 7$ such that the probability $p_{h}(V_{1})$ approaches $0$. Fix such an $h$. Since $\frac{f_{0}^{2}}{2h^{2}}>\frac{9}{98}$, there exists $\beta\in(0,\frac{1}{2})$ such that $\tilde{p}_{h}(U_{0})\to 1$ for $U_{0}:=\{1,\ldots,\lfloor n^{\beta}\rfloor\}$ (e.g., any $\beta<9/98$ is appropriate). Fix such a $\beta$.
Start from $V_{1}^{1}=U_{0}$. Consider $h$ vertices $u_{1}^{1},\ldots,u_{h}^{1}$ of $V_{1}^{1}$ that have the minimum number of neighbors in $\tilde{V}_{\tilde{n}}\setminus U$. If $\delta_{U}(\{u_{1}^{1},\ldots,u_{h}^{1}\})\leq h(\tilde{n}-m)p-f_{0}\sqrt{np(1-p)\ln n}$, then the algorithm terminates, and $V_{1}^{0}=V_{1}^{1}$. Let, at step $\kappa\geq 1$, the set $V_{1}^{\kappa}$ be considered and the algorithm still work. Then, at step $\kappa+1$, consider $V_{1}^{\kappa+1}=V_{1}^{\kappa}\cup\{\lfloor n^{\beta}\rfloor+\kappa-1\}$ and choose $h$ vertices $u_{1}^{\kappa+1},\ldots,u_{h}^{\kappa+1}$ from it that have the minimum number of neighbors in $\tilde{V}_{\tilde{n}}\setminus U$. If $\delta_{U}(\{u_{1}^{\kappa+1},\ldots,u_{h}^{\kappa+1}\})\leq h(\tilde{n}-m)p-f_{0}\sqrt{np(1-p)\ln n}$, then the algorithm terminates, and $V_{1}^{0}=V_{1}^{\kappa+1}$.
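A compact way to state this scan over $V_{1}$: grow a prefix, and stop as soon as the $h$ smallest degrees into the remaining graph sum to at most the threshold. The sketch below is a simplification (as an assumption it starts the prefix from scratch rather than from $\lfloor n^{\beta}\rfloor$ vertices, and it takes the graph and the threshold as inputs).

```python
def first_correction(adj, V1, S, h, threshold):
    """Scan V1 in order; after each new vertex, test whether the h
    smallest degrees into S sum to at most `threshold`.  Returns the
    step count and the h degrees, or None if the scan never stops."""
    S = set(S)
    degs = []                     # degrees into S of the scanned prefix
    for kappa, v in enumerate(V1, start=1):
        degs.append(len(adj[v] & S))
        if kappa >= h and sum(sorted(degs)[:h]) <= threshold:
            return kappa, sorted(degs)[:h]
    return None
```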
Clearly, the probability that the algorithm terminates at the first step is at most $1-\tilde{p}_{h}(U_{0})\to 0$ as $n\to\infty$. It remains to prove that a.a.s., whatever the last $\kappa$ is, $\delta_{U}(\mathbf{u}_{\kappa-1})-\delta_{U}(\mathbf{u}_{\kappa})=o(\sqrt{n})$ where $\mathbf{u}_{\kappa}\subset V_{1}^{\kappa}$ is the set of $h$ vertices with minimum number of neighbors in $\tilde{V}_{\tilde{n}}\setminus U$.
Clearly, the sets $\mathbf{u}_{\kappa}$ and $\mathbf{u}_{\kappa-1}$ have at least $h-1$ vertices in the intersection. Let $u_{1},\ldots,u_{h+1}$ be the vertices of $V_{1}^{\kappa}$ having minimum number of neighbors in $\tilde{V}_{\tilde{n}}\setminus U$, and $\delta_{U}(u_{1})\leq\ldots\leq\delta_{U}(u_{h+1})$. Then, $\delta_{U}(\mathbf{u}_{\kappa-1})-\delta_{U}(\mathbf{u}_{\kappa})\leq\delta_{U%
}(u_{h+1})-\delta_{U}(u_{1})$.
Set $\tilde{\kappa}=\kappa+\lfloor n^{\beta}\rfloor-1$. Let $Z_{\kappa}(x)$ be the number of vertices in $V_{1}^{\kappa}$ having at most $(\tilde{n}-m)p-x\sqrt{(\tilde{n}-m)p(1-p)\ln\tilde{\kappa}}$ neighbors in $\tilde{V}_{\tilde{n}}\setminus U$. Clearly, by (1) and (3), ${\sf E}Z_{\kappa}(x)=\tilde{\kappa}P(x)$, ${\sf D}Z_{\kappa}(x)=\tilde{\kappa}P(x)(1-P(x))<{\sf E}Z_{\kappa}(x)$,
$$P(x)=\int_{-\infty}^{-x\sqrt{\ln\tilde{\kappa}}}\frac{1}{\sqrt{2\pi}}e^{-t^{2}%
/2}dt(1+o(1))=\frac{1}{\sqrt{2\pi\ln\tilde{\kappa}}x}\tilde{\kappa}^{-x^{2}/2}%
(1+o(1))=\frac{1}{\tilde{\kappa}}e^{\lambda}(1+o(1)),\text{ where}$$
$$\lambda=\ln\frac{\tilde{\kappa}^{1-x^{2}/2}}{\sqrt{2\pi\ln\tilde{\kappa}}x}.$$
First, let $\lambda=-\sqrt[4]{\ln n}$. Then $x=\sqrt{2}+\frac{\sqrt[4]{\ln n}}{\sqrt{2}\ln\tilde{\kappa}}(1+o(1))$ and ${\sf P}(Z_{\kappa}(x)\geq 1)\leq{\sf E}Z_{\kappa}(x)=e^{\lambda}=e^{-\sqrt[4]{%
\ln n}}$. Therefore,
$${\sf P}(\exists\kappa\in\{1,\ldots,n_{0}-\lfloor n^{\beta}\rfloor+1\}\quad Z_{%
\kappa}(x)\geq 1)\leq$$
$${\sf E}Z_{1}(x)+\sum_{\kappa=2}^{n_{0}-\lfloor n^{\beta}\rfloor+1}\frac{1}{%
\kappa+\lfloor n^{\beta}\rfloor-1}e^{\lambda}(1+o(1))$$
$$=\left(\frac{1}{2}-\beta\right)\ln ne^{-\sqrt[4]{\ln n}}(1+o(1)).$$
Then, for every $\kappa\in\{1,\ldots,n_{0}-\lfloor n^{\beta}\rfloor+1\}$, with probability $1-o\left(\frac{1}{\sqrt{n}}\right)$,
$$\delta_{U}(v_{\lfloor n^{\beta}\rfloor+\kappa-1})>(\tilde{n}-m)p-\sqrt{2p(1-p)(\tilde{n}-m)\ln\tilde{\kappa}}-\frac{\sqrt{np(1-p)\sqrt{\ln n}}}{\sqrt{2\ln\tilde{\kappa}}}(1+o(1)).$$
(8)
Second, let $\lambda=\sqrt[4]{\ln n}$. Then $x=\sqrt{2}-\frac{\sqrt[4]{\ln n}}{\sqrt{2}\ln\tilde{\kappa}}(1+o(1))$ and ${\sf E}Z_{\kappa}(x)=e^{\sqrt[4]{\ln n}}$. By the Chernoff inequality,
$${\sf P}(Z_{\kappa}(x)\leq h)\leq e^{-\frac{3}{8}e^{\sqrt[4]{\ln n}}(1+o(1))}=o\left(\frac{1}{\sqrt{n}}\right).$$
Then, for every $\kappa\in\{1,\ldots,n_{0}-\lfloor n^{\beta}\rfloor+1\}$, with probability $1-o\left(\frac{1}{\sqrt{n}}\right)$,
$$\delta_{U}(v_{\lfloor n^{\beta}\rfloor+\kappa-1})<(\tilde{n}-m)p-\sqrt{2p(1-p)(\tilde{n}-m)\ln\tilde{\kappa}}+\frac{\sqrt{np(1-p)\sqrt{\ln n}}}{\sqrt{2\ln\tilde{\kappa}}}(1+o(1)).$$
(9)
Finally, from (8), (9), we get that a.a.s., for every $\kappa$,
$$\delta_{U}(\mathbf{u}_{\kappa-1})-\delta_{U}(\mathbf{u}_{\kappa})=O\left(\frac{\sqrt{n}}{\sqrt[4]{\ln n}}\right)=o(\sqrt{n}).$$
Let the algorithm terminate at step $\kappa$. Define $\tilde{U}=(\tilde{V}_{\tilde{n}}\setminus U)\sqcup\{u_{1}^{\kappa},\ldots,u_{h}^{\kappa}\}$.
3.3.5 Exploiting the second part of the supplementary set
Here, we exploit the set $V_{2}$ and finish the construction of an induced graph with exactly $e(k)$ edges (where $k$ is the number of vertices of the final graph).
Let $0\leq\varphi=o(\sqrt{n})$. Below, we prove that a.a.s., for every non-negative $\gamma=\gamma(n)\leq\varphi$ such that $3(\tilde{n}-m+h+1)p-\gamma$ is an integer, there exist three vertices $w_{1},w_{2},w_{3}$ in $V_{2}$ such that the number of edges in $G(n,p)|_{\tilde{U}\cup\{w_{1},w_{2},w_{3}\}}$ incident to at least one of $w_{1},w_{2},w_{3}$ is exactly $3(\tilde{n}-m+h+1)p-\gamma$.
Let $V_{2}^{1},V_{2}^{2}$ be a partition of $V_{2}$ such that $||V_{2}^{1}|-|V_{2}^{2}||\leq 1$. We will find $w_{1}\in V_{2}^{1}$ and $w_{2},w_{3}\in V_{2}^{2}$.
Fix $\gamma$ as above and find an integer $\sigma=\sigma(\gamma)$ such that $\gamma\in[\sigma\sqrt[4]{n},(\sigma+1)\sqrt[4]{n})$. Let us estimate from above the probability that, for every vertex $w\in V_{2}^{1}$, its number of neighbors in $\tilde{U}$ is outside $D(\gamma):=[p(\tilde{n}-m+h)+\sigma\sqrt[4]{n},p(\tilde{n}-m+h)+(\sigma+1)\sqrt[4]{n})$. Let $W$ be the number of vertices in $V_{2}^{1}$ whose number of neighbors in $\tilde{U}$ lies in this interval. The probability that a fixed $w\in V_{2}^{1}$ has so many neighbors equals $\Theta\left(\frac{1}{\sqrt[4]{n}}\right)$. Since $W$ has a binomial distribution and ${\sf E}W=\Theta\left(\frac{\sqrt[4]{n}}{\ln n}\right)$, we get that ${\sf P}(W=0)\leq e^{-\Theta(\frac{\sqrt[4]{n}}{\ln n})}$. Therefore, a.a.s. for every $\gamma\leq\varphi$, there exists a vertex $w_{1}\in V_{2}^{1}$ such that its number of neighbors in $\tilde{U}$ belongs to $D(\gamma)$.
Above, we have found a vertex $w_{1}\in V_{2}^{1}$ whose number of neighbors in $\tilde{U}$ differs from $(\tilde{n}-m+h)p-\gamma$ by at most $\sqrt[4]{n}$. Let $d$ be this difference. It remains to find a pair of vertices $w_{2},w_{3}\in V_{2}^{2}$ having exactly $[2(\tilde{n}-m+h)+3]p-d$ edges between them or going to $\tilde{U}\cup\{w_{1}\}$. Given $d$, let $\tilde{W}$ be the number of such pairs. Since ${\sf E}\tilde{W}=\Theta\left(\frac{\sqrt{n}}{\ln^{2}n}\right)$ and ${\sf D}\tilde{W}=O\left(\frac{\sqrt{n}}{\ln^{3}n}\right)$, by Chebyshev’s inequality, ${\sf P}(\tilde{W}=0)=O\left(\frac{\ln n}{\sqrt{n}}\right)$. Then the probability of the existence of such a pair for every $d$ equals $1-O\left(\frac{\ln n}{\sqrt[4]{n}}\right)\to 1$, and this finishes the construction.
Indeed, the graph $G(n,p)|_{\tilde{U}\cup\{w_{1},w_{2},w_{3}\}}$ has
$$k:=\tilde{n}-m+h+3\geq n-(b_{i}+\delta)\sqrt{\frac{2}{p(1-p)}}\sqrt{\frac{n}{%
\ln n}}+O\left(\frac{\sqrt{n}}{\ln n}\right)$$
vertices and exactly $e(k)$ edges.
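The last matching step is a plain exhaustive search: among all pairs of pool vertices, find one whose total number of edges into the current graph (counting the possible edge inside the pair) hits a prescribed integer exactly. A self-contained sketch (a hypothetical helper, not taken from the paper):

```python
from itertools import combinations

def find_exact_pair(adj, pool, S, target):
    """Return a pair (w2, w3) from `pool` whose number of edges into S,
    plus the possible edge between w2 and w3, equals `target` exactly;
    None if no pair works."""
    S = set(S)
    deg_into_S = {w: len(adj[w] & S) for w in pool}
    for w2, w3 in combinations(pool, 2):
        cnt = deg_into_S[w2] + deg_into_S[w3] + (1 if w3 in adj[w2] else 0)
        if cnt == target:
            return w2, w3
    return None
```

The proof's point is that for every admissible target in the stated range such a pair exists a.a.s.; the brute-force scan above only illustrates the counting.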
Finally, we get that, for every $\delta>0$,
$${\mathrm{lim\,inf}}_{n\to\infty}{\sf P}\left(\mathcal{X}_{n}(e)>n-\frac{b_{i}(1+\delta)\sqrt{2}}{\sqrt{p(1-p)}}\sqrt{\frac{n}{\ln n}}\right)>\gamma$$
(10)
for both $i\in\{1,2\}$.
First, let $i=1$. Both (6) and (10) are obtained from (4) (i.e., both events are intersections of one common event having a probability bigger than $\gamma$ with events that hold a.a.s.). Therefore,
$${\mathrm{lim\,inf}}_{n\to\infty}{\sf P}\left(n-\frac{b_{1}(1+\delta)\sqrt{2}}{\sqrt{p(1-p)}}\sqrt{\frac{n}{\ln n}}<\mathcal{X}_{n}(e)<n-\frac{a_{1}(1-\delta)}{\sqrt{2p(1-p)}}\sqrt{\frac{n}{\ln n}}\right)>\gamma.$$
Second, let $i=2$. Since $a_{2}>2b_{1}$, from (6) and (10), we get that
$${\mathrm{lim\,sup}}_{n\to\infty}{\sf P}\left(n-\frac{a_{2}(1-\delta)}{\sqrt{2p(1-p)}}\sqrt{\frac{n}{\ln n}}\leq\mathcal{X}_{n}(e)\leq n-\frac{b_{1}(1+\delta)\sqrt{2}}{\sqrt{p(1-p)}}\sqrt{\frac{n}{\ln n}}\right)<1-2\gamma.$$
Putting $t=\frac{2Q\sqrt{2}}{\sqrt{p(1-p)}}$, we finish the proof of Theorem 1.(i).
In the same way, from (5), we get that
$${\mathrm{lim\,inf}}_{n\to\infty}{\sf P}\left(\mathcal{X}_{n}(e)>n-\frac{b(1+\delta)\sqrt{2}}{\sqrt{p(1-p)}}\sqrt{\frac{n}{\ln n}}\right)>1-\varepsilon.$$
Together with (7), this finishes the proof of Theorem 1.(ii).
4 Proof of Theorem 2
4.1 Proof of Theorem 2.(ii)
First, let $k\in\{\lfloor\varepsilon n\rfloor,\ldots,n-1\}$, $Q=3\sqrt{p}$ and $\mu=Qm(k)$.
Let $U$ be a $k$-vertex subset of $V_{n}$. Then, by the Chernoff inequality, the number of edges $e_{U}$ in $G(n,p)$ having at least one vertex outside $U$ does not belong to the interval
$$\mathcal{I}_{k}:=\left(p\left(k(n-k)+{{n-k}\choose 2}\right)-\frac{\mu}{2},p%
\left(k(n-k)+{{n-k}\choose 2}\right)+\frac{\mu}{2}\right)$$
with probability at most $2e^{-\frac{\mu^{2}}{8\left(pn(n-k)+\frac{\mu}{6}\right)}}=e^{-\frac{9}{8}\ln{n%
\choose k}(1+o(1))}$ since $m=\sqrt{n(n-k)\ln{n\choose k}}<(n-k)\sqrt{n\ln n}=o(n(n-k))$. The expected number of $k$-vertex sets $U$ having so many edges is at most
$${n\choose k}2e^{-\frac{\mu^{2}}{8\left(pn(n-k)+\frac{\mu}{6}\right)}}=e^{-\frac{1}{8}\ln{n\choose k}(1+o(1))}.$$
Therefore, the probability that there exist $k\geq\lfloor\varepsilon n\rfloor$ and a $k$-vertex subset $U$ of $V_{n}$ such that $e_{U}\notin\mathcal{I}_{k}$ is at most
$$\sum_{k=\lfloor\varepsilon n\rfloor}^{n-1}e^{-\frac{1}{8}\ln{n\choose k}(1+o(1%
))}\leq\sum_{\ell=1}^{8}n^{-\ell/8+o(1)}+n(1-\varepsilon)n^{-9/8+o(1)}\to 0%
\text{ as }n\to\infty.$$
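The Chernoff-type bound used here is the Bernstein form ${\sf P}(|\xi-Np|\geq t)\leq 2e^{-t^{2}/(2(Np(1-p)+t/3))}$; with $t=\mu/2$ and the variance bounded by $pn(n-k)$, the exponent becomes $\mu^{2}/(8(pn(n-k)+\mu/6))$ as above. The bound can be verified directly on small binomials; the sketch below is a numerical illustration only.

```python
import math

def binom_two_sided_tail(N, p, t):
    """Exact P(|Bin(N, p) - N*p| >= t), by summing the pmf."""
    mu = N * p
    return sum(math.comb(N, k) * p ** k * (1 - p) ** (N - k)
               for k in range(N + 1) if abs(k - mu) >= t)

def bernstein_bound(N, p, t):
    """Bernstein bound 2 * exp(-t^2 / (2 * (N*p*(1-p) + t/3)))."""
    return 2.0 * math.exp(-t * t / (2.0 * (N * p * (1 - p) + t / 3.0)))

N, p = 400, 0.3
for t in (20, 40, 60):
    print(t, binom_two_sided_tail(N, p, t), bernstein_bound(N, p, t))
```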
Second, let $k\in\left\{\left\lceil\frac{1}{p}\ln n\right\rceil,\ldots,\lfloor\varepsilon n%
\rfloor-1\right\}$, $Q=3\sqrt{p}$ and $\mu=Qm(k)$ as well.
Let $U$ be a $k$-vertex subset of $V_{n}$. Then, by the Chernoff inequality, the number of edges $\tilde{e}_{U}$ in the induced subgraph $G(n,p)|_{U}$ does not belong to the interval
$$\mathcal{J}_{k}:=\left(p{k\choose 2}-\frac{\mu}{2},p{k\choose 2}+\frac{\mu}{2}\right)$$
with probability at most $2e^{-\frac{\mu^{2}}{8\left(pk^{2}+\frac{\mu}{6}\right)}}=O\left(e^{-\frac{9}{8}\ln{n\choose k}(1+o(1))}\right)$ since $\frac{pk^{2}}{2}\geq\frac{\sqrt{p}k\sqrt{k\ln n}}{2}>\frac{\mu}{6}$. The expected number of $k$-vertex sets $U$ having so many edges is at most $e^{-\frac{1}{8}\ln{n\choose k}(1+o(1))}$. Therefore, the probability that there exist $k\in\left\{\left\lceil\frac{1}{p}\ln n\right\rceil,\ldots,\lfloor\varepsilon n\rfloor-1\right\}$ and a $k$-vertex subset $U$ of $V_{n}$ such that $\tilde{e}_{U}\notin\mathcal{J}_{k}$ is at most
$$\sum_{k=\left\lceil\ln n/p\right\rceil}^{\lfloor\varepsilon n\rfloor-1}e^{-\frac{1}{8}\ln{n\choose k}(1+o(1))}=e^{-\frac{1}{8}\ln{n\choose\lceil\ln n/p\rceil}(1+o(1))}\to 0\text{ as }n\to\infty.$$
Finally, for $k\in\left\{1,\ldots,\left\lceil\frac{1}{p}\ln n\right\rceil-1\right\}$ set $Q=\frac{1}{4\sqrt{p}}$. Then, the number of edges of a $k$-vertex graph should belong to the interval $\{0,1,\ldots,{k\choose 2}\}$ of length smaller than $\frac{k^{2}}{2}\leq\frac{1}{4}k\sqrt{k\frac{1}{p}\ln\frac{n}{k}}<\frac{1}{4\sqrt{p}}k\sqrt{\ln{n\choose k}}$ for $n$ large enough. The latter expression equals $Qm(k)$, and this finishes the proof.
4.2 Proof of Theorem 2.(i)
Let us recall that, for $\varepsilon>0$ small enough, the case $k<\varepsilon n$ was already considered in [1]. Fix such an $\varepsilon<\frac{1}{4}$. Since, for every $\varepsilon^{*}\in(\varepsilon,1)$ and every $k\in[\varepsilon n,\varepsilon^{*}n]$, $m(k)=\Theta\left(\sqrt{n(n-k)\ln{n\choose k}}\right)$, we may assume that $m(k)$ is exactly $\sqrt{n(n-k)\ln{n\choose k}}$ for all $k\geq\varepsilon n$.
Here, we consider three cases separately: 1) $k<n-n^{1/4}\ln^{2}n$, 2) $n-n^{1/4}\ln^{2}n\leq k\leq n-2$ and 3) $k=n-1$.
4.2.1 $\varepsilon n\leq k<n-n^{1/4}\ln^{2}n$
Let $q=\frac{\varepsilon\sqrt{\varepsilon p(1-p)}}{23}$.
Divide the set $\{1,\ldots,n-k-14\}$ into three ‘almost equal’ parts $V_{1},V_{2},V_{3}$ (such that $||V_{i}|-|V_{j}||\leq 1$ for $i,j\in\{1,2,3\}$). Set $\tilde{n}=k+14$ and let $V^{*}_{\tilde{n}}=\{n-\tilde{n}+1,\ldots,n\}$. Let $G_{\tilde{n}}$ be the induced subgraph of $G(n,p)$ on $V^{*}_{\tilde{n}}$.
We start with two technical statements.
Claim 1
A.a.s., for every integer $k\in[\varepsilon n,n-n^{1/4}\ln^{2}n)$, in $G_{\tilde{n}}$ there are more than $\frac{\varepsilon(n-k)}{15}$ vertices having degrees greater than $\zeta_{k}(1/2)$, where
$$\zeta_{k}(x)=\tilde{n}p+\sqrt{\tilde{n}p(1-p)}\left(\sqrt{x\ln(n/(n-k))}-1%
\right).$$
Proof. Fix $k$ and let $Y_{k}$ be the number of vertices in $G_{\tilde{n}}$ having degrees greater than $\zeta_{k}(2)$. Then, by (1) and (3), for $n$ large enough,
$${\sf E}Y_{k}\sim\tilde{n}\frac{1}{\sqrt{2\pi}\sqrt{2\ln(n/(n-k))}}e^{-\left(%
\sqrt{2\ln(n/(n-k))}-1\right)^{2}/2}\geq\frac{\varepsilon(n-k)}{\sqrt{2\pi e}}$$
and
$${\sf D}Y_{k}={\sf E}Y_{k}(Y_{k}-1)+{\sf E}Y_{k}-({\sf E}Y_{k})^{2}<$$
$$\tilde{n}(\tilde{n}-1)\biggl{(}{\sf P}(\mathrm{deg}(n)>\zeta_{k},\mathrm{deg}(%
n-1)>\zeta_{k})-[{\sf P}(\mathrm{deg}(n)>\zeta_{k})]^{2}\biggr{)}+{\sf E}Y_{k}.$$
Conditioning on the events $n\sim n-1$ (here, $\sim$ denotes the adjacency relation) and $n\nsim n-1$, we get
$${\sf P}(\mathrm{deg}(n)>\zeta_{k},\mathrm{deg}(n-1)>\zeta_{k})=p[{\sf P}(%
\mathrm{deg}_{G_{\tilde{n}-1}}(n)>\zeta_{k}-1)]^{2}+(1-p)[{\sf P}(\mathrm{deg}%
_{G_{\tilde{n}-1}}(n)>\zeta_{k})]^{2}=$$
$$[{\sf P}(\mathrm{deg}_{G_{\tilde{n}-1}}(n)>\zeta_{k})]^{2}+p\biggl{(}2{\sf P}(%
\mathrm{deg}_{G_{\tilde{n}-1}}(n)>\zeta_{k}){\sf P}(\mathrm{deg}_{G_{\tilde{n}%
-1}}(n)=\zeta_{k})+[{\sf P}(\mathrm{deg}_{G_{\tilde{n}-1}}(n)=\zeta_{k})]^{2}%
\biggr{)}$$
and
$${\sf P}(\mathrm{deg}(n)>\zeta_{k})=p{\sf P}(\mathrm{deg}_{G_{\tilde{n}-1}}(n)>%
\zeta_{k}-1)+(1-p){\sf P}(\mathrm{deg}_{G_{\tilde{n}-1}}(n)>\zeta_{k})=$$
$${\sf P}(\mathrm{deg}_{G_{\tilde{n}-1}}(n)>\zeta_{k})+p{\sf P}(\mathrm{deg}_{G_%
{\tilde{n}-1}}(n)=\zeta_{k}).$$
By (2),
$${\sf P}(\mathrm{deg}_{G_{\tilde{n}-1}}(n)=\zeta_{k})\sim\frac{1}{\sqrt{2\pi%
\tilde{n}p(1-p)}}e^{-\left(\sqrt{2\ln(n/(n-k))}-1\right)^{2}/2}.$$
Therefore,
$${\sf D}Y_{k}<\tilde{n}(\tilde{n}-1)p(1-p)[{\sf P}(\mathrm{deg}_{G_{\tilde{n}-1%
}}(n)=\zeta_{k})]^{2}+{\sf E}Y_{k}=O({\sf E}Y_{k}).$$
By Chebyshev’s inequality,
$${\sf P}\left(Y_{k}<\frac{\varepsilon(n-k)}{5}\right)=O\left(\frac{{\sf E}Y_{k}%
}{({\sf E}Y_{k}-\varepsilon(n-k)/5)^{2}}\right)=O\left(\frac{1}{n-k}\right).$$
Let $k^{*}\in\{k+1,\ldots,2k\}$ and $k^{*}<n-n^{1/4}\ln^{2}n$. Denote $\tilde{n}^{*}=k^{*}+14$. Consider the partition $V^{*}_{\tilde{n}^{*}}=\{n-\tilde{n}^{*}+1,\ldots,n-\tilde{n}\}\sqcup V^{*}_{%
\tilde{n}}$. Notice that $\frac{n}{n-k}>\frac{1}{2}\frac{n}{n-k^{*}}$. Then,
$$\zeta_{k^{*}}(1/2)-\tilde{n}^{*}p<\sqrt{\tilde{n}p(1-p)}\left(\sqrt{\ln(n/(n-k%
^{*}))}-1\right)<\zeta_{k}(2)-\tilde{n}p.$$
Let $v_{1},\ldots,v_{\nu}$ be the vertices of $G_{\tilde{n}}$ having degrees greater than $\zeta_{k}(2)$. Every vertex $v_{i}$ has at least $p(\tilde{n}^{*}-\tilde{n})$ neighbors among $n-\tilde{n}^{*}+1,\ldots,n-\tilde{n}$ with probability $1/2+o(1)$. Then, by the Chernoff inequality, with probability at most $e^{-\frac{\nu}{48}(1+o(1))}$, the number of vertices $v_{i}$ having so many neighbors is less than $\frac{1}{3}\nu$.
So, under the condition $\{\nu>\frac{\varepsilon(n-k)}{5}\}$, with a probability at least $1-e^{-\Theta(n-k)}$, we get $Y_{k^{*}}\geq\frac{\varepsilon(n-k)}{15}>\frac{\varepsilon(n-k^{*})}{15}$.
Summing up, we have proved that, for every $k\in[\varepsilon n,\frac{1}{2}(n-n^{1/4}\ln^{2}n))$, with a probability $1-O(1/(n-k))$, for every $k^{*}\in\{k,k+1,\ldots,2k\}$, $Y_{k^{*}}\geq\frac{\varepsilon(n-k^{*})}{15}$. Therefore, with a probability $1-O(\frac{1}{n^{1/4}\ln n})$, the latter inequality holds for all $k^{*}\in[\varepsilon n,(n-n^{1/4}\ln^{2}n))$. $\Box$
Claim 2
A.a.s., for every integer $k\in[\varepsilon n,n-n^{1/4}\ln^{2}n)$, in $G_{\tilde{n}}$ there are no vertices having degrees at least $\tilde{n}p+\sqrt{6\tilde{n}p(1-p)\ln\tilde{n}}$.
Proof. Fix $k$ and let $Z_{k}$ be the number of vertices in $G_{\tilde{n}}$ having degrees at least $\tilde{n}p+\sqrt{6\tilde{n}p(1-p)\ln\tilde{n}}$. Then, by (1) and (3), for $n$ large enough,
$${\sf E}Z_{k}\sim\tilde{n}\frac{1}{\sqrt{12\pi\ln\tilde{n}}}e^{-3\ln\tilde{n}}<%
\frac{1}{\tilde{n}^{2}}.$$
Then, the desired property holds with a probability at least $1-\sum_{k\in[\varepsilon n,n-n^{1/4}\ln^{2}n)}\frac{1}{k^{2}}=1-O(n^{-1/4}\ln^%
{-2}n)$. $\Box$
Finding every $O(\sqrt{n\ln n})$-subgraph in the interval
Let us describe an algorithm for finding $\tau\in\mathbb{N}$ and constructing sequences of subsets $U_{1}\subset\ldots\subset U_{\tau}$ in $V^{*}_{\tilde{n}}$ and $\tilde{U}_{1}\subset\ldots\subset\tilde{U}_{\tau}$ in $V_{1}$ such that a.a.s.
$$e\left(G(n,p)|_{V^{*}_{\tilde{n}}\cup\tilde{U}_{\tau}\setminus U_{\tau}}\right)\leq e\left(G_{\tilde{n}}\right)-qm$$
(11)
and, for every $i\in\{1,\ldots,\tau\}$, $|U_{i}|=|\tilde{U}_{i}|=i$,
$$-(\sqrt{2}+\sqrt{6})\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}<e\left(G(n,p)|_{V^{*}_{\tilde{n}}\cup\tilde{U}_{i}\setminus U_{i}}\right)-e\left(G(n,p)|_{V^{*}_{\tilde{n}}\cup\tilde{U}_{i-1}\setminus U_{i-1}}\right)<$$
$$-\sqrt{\tilde{n}p(1-p)}\left(\sqrt{\frac{1}{2}\ln(n/(n-k))}-1\right),$$
(12)
where $U_{0}=\tilde{U}_{0}=\varnothing$.
It would mean that, up to a $(\sqrt{2}+\sqrt{6})\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}$-error, every value from
$$\left(e(G_{\tilde{n}})-qm,\,e(G_{\tilde{n}})\right)$$
(13)
is attainable as the number of edges in an induced $\tilde{n}$-vertex subgraph of $G(n,p)$.
Note that, having sequences of sets $U_{1}\subset U_{2}\subset\ldots$ and $\tilde{U}_{1}\subset\tilde{U}_{2}\subset\ldots$ with $|U_{i}|=|\tilde{U}_{i}|=i$ satisfying (12), the inequality (11) becomes true once
$$\tau\geq\frac{qm}{\sqrt{\tilde{n}p(1-p)}\left(\sqrt{\frac{1}{2}\ln(n/(n-k))}-1%
\right)}.$$
(14)
Below, we show that our algorithm works at least $\frac{\varepsilon(n-k)}{15}$ steps, and this immediately implies the inequality (14): for $n$ large enough,
$$\frac{\varepsilon(n-k)}{15}>\frac{\varepsilon\sqrt{\varepsilon p(1-p)/2}\sqrt{%
n(n-k)\ln\left[\left(\frac{n}{n-k}\right)^{n-k}\right]}}{16\sqrt{\varepsilon np%
(1-p)}\left(\sqrt{\frac{1}{2}\ln(n/(n-k))}-1\right)}\geq\frac{qm}{\sqrt{\tilde%
{n}p(1-p)}\left(\sqrt{\frac{1}{2}\ln(n/(n-k))}-1\right)}.$$
At step $1$, $U_{1}=\{v_{1}\}$, where $v_{1}$ is a vertex having maximum degree in $G_{\tilde{n}}$. Consider the set $\mathcal{A}_{1}\subset V_{1}$ of vertices having at most $(\tilde{n}-1)p$ and at least
$$(\tilde{n}-1)p-\sqrt{2\tilde{n}p(1-p)\ln\tilde{n}}$$
(15)
edges going to $V^{*}_{\tilde{n}}\setminus U_{1}$. Let $\tilde{v}_{1}\in\mathcal{A}_{1}$ (if $\mathcal{A}_{1}$ is non-empty; otherwise, the algorithm terminates), and $\tilde{U}_{1}=\{\tilde{v}_{1}\}$.
Since a vertex from $V_{1}$ has at most $(\tilde{n}-1)p$ and at least (15) neighbors in $V^{*}_{\tilde{n}}\setminus U_{1}$ with probability $1/2+o(1)$ (see Section 2), the set $\mathcal{A}_{1}$ is non-empty with probability at least $1-(1/2+o(1))^{|V_{1}|}$.
Assume that, at step $1\leq i<\frac{\varepsilon(n-k)}{15}$, we have constructed the target sets $U_{i},\tilde{U}_{i}$ having $i$ vertices.
At step $i+1$, take a set $U_{i+1}=U_{i}\cup\{v_{i+1}\}$ of $i+1$ vertices having maximum degrees in $G_{\tilde{n}}$. Consider the set $\mathcal{A}_{i+1}\subset V_{1}\setminus\tilde{U}_{i}$ of vertices having at most $(\tilde{n}-1)p$ and at least (15) edges going to $(V^{*}_{\tilde{n}}\cup\tilde{U}_{i})\setminus U_{i+1}$. Let $\tilde{v}_{i+1}\in\mathcal{A}_{i+1}$ (if $\mathcal{A}_{i+1}$ is non-empty; otherwise, the algorithm terminates), and $\tilde{U}_{i+1}=\tilde{U}_{i}\cup\{\tilde{v}_{i+1}\}$.
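The swap step is easy to simulate on a toy instance. The sketch below is an illustration only: the window $[(\tilde{n}-1)p-\sqrt{2\tilde{n}p(1-p)\ln\tilde{n}},(\tilde{n}-1)p]$ follows the text, while the sizes and everything else are assumptions. It removes max-degree core vertices one at a time, replaces each by the first admissible pool vertex, and records the core edge count after each swap.

```python
import math
import random
from itertools import combinations

def swap_sketch(n_core, n_pool, p, steps, seed=0):
    """Repeatedly swap a maximum-degree core vertex for a pool vertex
    whose degree into the current core lies in the admissible window;
    return the core edge counts after each swap."""
    rng = random.Random(seed)
    n = n_core + n_pool
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    core = set(range(n_pool, n))         # plays the role of V*_{n~}
    pool = list(range(n_pool))           # plays the role of V_1
    by_degree = sorted(range(n_pool, n), key=lambda v: -len(adj[v] & core))
    lo = (n_core - 1) * p - math.sqrt(2 * n_core * p * (1 - p) * math.log(n_core))
    counts = [sum(len(adj[v] & core) for v in core) // 2]
    for v in by_degree[:steps]:
        core.discard(v)
        w = next((u for u in pool if u not in core
                  and lo <= len(adj[u] & core) <= (n_core - 1) * p), None)
        if w is None:                    # the set A_i is empty: stop
            break
        core.add(w)
        counts.append(sum(len(adj[x] & core) for x in core) // 2)
    return counts
```

Each swap keeps the vertex count fixed while the edge count drifts downward, which is exactly the mechanism behind (12).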
Let us prove that, with high probability, the set $\mathcal{A}_{i+1}$ is non-empty. Given an $(\tilde{n}-1)$-set, the probability that an outside vertex has at most $(\tilde{n}-1)p$ and at least (15) neighbors in this set equals $1/2+o(1)$ (see Section 2). By the Chernoff inequality, the probability that there exists an $i$-set $\tilde{U}$ in $V_{1}$ such that every vertex in $V_{1}\setminus\tilde{U}$ has either more than $(\tilde{n}-1)p$ or fewer than (15) neighbors in $(V^{*}_{\tilde{n}}\cup\tilde{U})\setminus U_{i+1}$ is at most
$${|V_{1}|\choose i}e^{-\frac{|V_{1}|-i}{4}(1+o(1))}\leq e^{|V_{1}|\left(\frac{i%
}{|V_{1}|}\ln\frac{|V_{1}|}{i}+\frac{5i}{4|V_{1}|}-\frac{1}{4}\right)(1+o(1))}%
<e^{-\frac{|V_{1}|}{54}}$$
since the function $-x\ln x+\frac{5}{4}x$ increases in $(0,1)$, the inequality $\frac{i}{|V_{1}|}\leq\frac{1}{18}$ holds (since $i<\frac{\varepsilon(n-k)}{15}$, $|V_{1}|\geq\frac{n-k-16}{3}$ and $\varepsilon<\frac{1}{4}$) and $\ln 18-\frac{13}{4}<-\frac{1}{3}$.
Summing up, with probability at least $1-e^{-\Omega(n)}$, for every $k\in[\varepsilon n,n-n^{1/4}\ln^{2}n)$, the described algorithm works at least $\lceil\frac{\varepsilon(n-k)}{15}\rceil$ steps. By Claims 1, 2, a.a.s. for every $k$ in the range, it gives the desired sets.
Fix $i\in\{1,\ldots,\lceil\frac{\varepsilon(n-k)}{15}\rceil\}$ and consider the algorithm output $V^{*}_{\tilde{n}}[i]:=V^{*}_{\tilde{n}}\cup\tilde{U}_{i}\setminus U_{i}$. Notice that this set still has $k+14$ vertices.
Finding all the remaining subgraphs
Now, let us prove that a.a.s., for any real $f_{0}\in(3,7)$, we may find a set of $h\leq 10$ vertices in $V_{2}$ having $ph\tilde{n}-f_{0}\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}+o(\sqrt{n})$ neighbors in $V^{*}_{\tilde{n}}[i]$. This would mean that, up to an $o(\sqrt{n})$-error, every value from (13) is attainable (since $\sqrt{6}+\sqrt{2}<4$).
For a subset $U_{0}\subseteq V_{2}$ and a positive integer $h$, let
•
$p_{h}(U_{0})$ be the probability that all but at most $h-1$ vertices of $U_{0}$ have more than $\tilde{n}p-\frac{7}{h}\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}$ neighbors in $V^{*}_{\tilde{n}}[i]$,
•
$\tilde{p}_{h}(U_{0})$ be the probability that all vertices of $U_{0}$ have more than $\tilde{n}p-\frac{3}{h}\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}$ neighbors in $V^{*}_{\tilde{n}}[i]$.
For $\kappa:=|U_{0}|$, by (1) and (3),
$$p_{h}(U_{0})=\sum_{\ell=0}^{h-1}{\kappa\choose\ell}\left(\tilde{n}^{-\frac{49}%
{2h^{2}}+o(1)}\right)^{\ell}\left(1-\tilde{n}^{-\frac{49}{2h^{2}}+o(1)}\right)%
^{\kappa-\ell},\quad\tilde{p}_{h}(U_{0})=\left(1-\tilde{n}^{-\frac{9}{2h^{2}}+%
o(1)}\right)^{\kappa}.$$
Clearly, there exists a minimum $h\leq 10$ such that the probability $p_{h}(V_{2})$ approaches $0$. Fix such an $h$. Let us assume, without loss of generality, that $V_{2}=\{1,\ldots,\lfloor(n-k-14)/3\rfloor\}$.
Set $\beta=10^{-2}$. Clearly, $\tilde{p}_{10}(U_{0})\to 1$ (and, therefore, the same is true for $\tilde{p}_{h}(U_{0})$ for all $h\leq 10$) for $U_{0}:=\{1,\ldots,\lfloor n^{\beta}\rfloor\}$.
The algorithm constructing the desired set of $h$ vertices is described in Section 3.3.4: we start from $V_{2}^{1}=U_{0}$; at every step $j\geq 1$, we find a set $U_{j}\subset V_{2}^{j}$ of $h$ vertices having the minimum number of neighbors in $V^{*}_{\tilde{n}}[i]$ and, if these vertices have more than $h\tilde{n}p-7\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}$ neighbors in $V^{*}_{\tilde{n}}[i]$ (we denote this number by $\delta_{V^{*}_{\tilde{n}}[i]}(U_{j})$), we add one more vertex to $V_{2}^{j}$ and move to step $j+1$. In Section 3.3.4, we have proved that there exists a constant $a$ such that, for $|V_{2}|=\lfloor\frac{\sqrt{n}}{\ln n}\rfloor$ and $|V^{*}_{\tilde{n}}[i]|=n-O(\sqrt{n/\ln n})$, with probability at least $1-e^{-\Omega(\sqrt[4]{\ln n})}$, for all $j\geq 1$, the difference $\delta_{V^{*}_{\tilde{n}}[i]}(U_{j})-\delta_{V^{*}_{\tilde{n}}[i]}(U_{j+1})$ is at most $a\sqrt{\frac{n}{\sqrt{\ln n}}}$. It is straightforward to check that the same is true for $|V_{2}|=\lfloor(n-k-14)/3\rfloor$ and $|V^{*}_{\tilde{n}}[i]|=k+14$ for all $\varepsilon n\leq k<n-n^{1/4}\ln^{2}n$. The problem is that we cannot immediately apply a union bound over $k$, since $n\gg e^{\Omega(\sqrt[4]{\ln n})}$. But we can easily solve this in the following way. Recall that the bound on the probability follows from the fact that there exists a vertex in $V_{2}^{j}$ having at most $\tilde{n}p-x\sqrt{\tilde{n}p(1-p)\ln(j+n^{\beta})}$ (where $x=\sqrt{2}-\frac{\sqrt[4]{\ln n}}{\sqrt{2}\ln(j+n^{\beta})}(1+o(1))$) neighbors in $V^{*}_{\tilde{n}}[i]$ with probability at most $e^{-\sqrt[4]{\ln n}(1+o(1))}$. We can improve this bound by dividing the set $V_{2}$ into $\ln n$ almost equal parts, and observing that the algorithm with the same probability bounds can be run on each of the sets of the partition.
Then, the probability that, in every set from the partition, there exists a vertex having at most $\tilde{n}p-x\sqrt{\tilde{n}p(1-p)\ln(j+n^{\beta})}$ neighbors in $V^{*}_{\tilde{n}}[i]$, is at most $e^{-\ln n\sqrt[4]{\ln n}(1+o(1))}\ll\frac{1}{n}$.
Therefore, there exists $a$ such that a.a.s., for every $k\in[\varepsilon n,n-n^{1/4}\ln^{2}n)$, every $i\in\{1,\ldots,\lceil\frac{\varepsilon(n-k)}{5}\rceil\}$ and any real $f_{0}\in(3,7)$, there exists $j$ such that, at step $j$, the algorithm outputs a set $U_{j}\subset V_{2}$ of $h\leq 10$ vertices having $ph\tilde{n}-f_{0}\sqrt{\tilde{n}p(1-p)\ln\tilde{n}}+\xi$ neighbors in $V^{*}_{\tilde{n}}[i]$, $|\xi|\leq an^{1/2}(\ln n)^{-1/4}$.
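As an illustration, the greedy step of this procedure can be sketched in code. This is a toy implementation with invented parameters (the function name `greedy_h_set`, the graph size, and the threshold are ours, not from Section 3.3.4), and we use the fact that the neighbor count of a set in a disjoint target set is additive over its vertices, so the minimizing $h$-subset is simply the $h$ active vertices with the fewest neighbors in the target:

```python
import random

def greedy_h_set(adj, target, pool, h, threshold):
    """Grow the active set one vertex at a time; at each step the minimizing
    h-subset is the h active vertices with the fewest neighbors in `target`."""
    target = set(target)
    active = list(pool[:h])               # the initial h vertices (U_0)
    rest = list(pool[h:])
    while True:
        by_deg = sorted(active, key=lambda v: len(adj[v] & target))
        U = by_deg[:h]
        d = sum(len(adj[v] & target) for v in U)
        if d <= threshold or not rest:
            return U, d
        active.append(rest.pop(0))        # enlarge the active set, next step

# a small G(n, p) instance for illustration
random.seed(1)
n, p, h = 200, 0.5, 5
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u)
target = range(100)                       # stands in for V*_{n~}[i]
pool = list(range(100, 200))              # stands in for V_2
U, d = greedy_h_set(adj, target, pool, h, threshold=h * 100 * p - 30)
```

The returned count `d` decreases in bounded increments as the pool grows, which is the mechanism behind the step bound above.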
Finally, consider the set $V_{3}$.
Let $h\in\{0,1,\ldots,10\}$. Let $\hat{U}$ be a union of $V^{*}_{\tilde{n}}[i]$ with a subset of $V_{2}$ having $h$ vertices.
It remains to prove that, a.a.s., for every $\ell\in\{4,5,\ldots,14\}$ and every $\gamma=\gamma(n)$ such that $0\leq\gamma\leq 4an^{1/2}(\ln n)^{-1/4}$ and $(\ell(\tilde{n}+h)+{\ell\choose 2})p-\gamma$ is an integer, there exist $\ell$ vertices $w_{1},\ldots,w_{\ell}$ in $V_{3}$ such that the number of edges in $G(n,p)|_{\hat{U}\cup\{w_{1},\ldots,w_{\ell}\}}$ adjacent to at least one of $w_{1},\ldots,w_{\ell}$ is exactly $(\ell(\tilde{n}+h)+{\ell\choose 2})p-\gamma$.
Consider a partition $V_{3}=V_{3}^{1}\sqcup V_{3}^{2}$ such that $||V_{3}^{1}|-|V_{3}^{2}||\leq 1$. Fix $\gamma$ as above and find an integer $\sigma=\sigma(\gamma)$ such that $\gamma\in[\sigma\sqrt[4]{n},(\sigma+1)\sqrt[4]{n})$. Let us estimate from above the probability that, for every vertex $w$ in $V_{3}^{1}$, its number of neighbors in $\hat{U}$ is outside $D(\gamma):=(p(\tilde{n}+h)-(\sigma+1)\sqrt[4]{n},p(\tilde{n}+h)-\sigma\sqrt[4]{n}]$. Let $W$ be the number of vertices in $V_{3}^{1}$ whose number of neighbors in $\hat{U}$ lies in this interval. The probability that a given $w\in V_{3}^{1}$ has this many neighbors equals $\Theta\left(\frac{1}{\sqrt[4]{n}}\right)$. Since $W$ has a binomial distribution and ${\sf E}W=\Omega\left(\ln^{2}n\right)$, we get that ${\sf P}(W=0)\leq e^{-\Theta(\ln^{2}n)}$. Therefore, with probability at least $1-e^{-\Theta(\ln^{2}n)}$, for every $\gamma$ in the range, there exists a vertex $w_{1}\in V_{3}^{1}$ whose number of neighbors in $\hat{U}$ belongs to $D(\gamma)$.
For every $d\in(-\sqrt[4]{n},\sqrt[4]{n})$, it remains to find vertices $w_{2},\ldots,w_{\ell}\in V_{3}^{2}$ having exactly $[(\ell-1)(\tilde{n}+h)+{\ell\choose 2}]p-d$ edges between them or going to $\hat{U}\cup\{w_{1}\}$. Given $d$, let $W$ be the number of such $(\ell-1)$-tuples. Since ${\sf E}W=\Theta\left(\frac{(n-\tilde{n})^{\ell-1}}{\sqrt{n}}\right)$ and ${\sf D}W=O\left(\frac{(n-\tilde{n})^{2\ell-3}}{n}\right)$, by Chebyshev's inequality, ${\sf P}(W=0)=O\left(\frac{1}{n-\tilde{n}}\right)$. Then the probability of the existence of such an $(\ell-1)$-tuple for every $d$ equals $1-O\left(\frac{\sqrt[4]{n}}{n-\tilde{n}}\right)$. Unfortunately, we again face the problem that we cannot move the quantification over $k$ after the probability. Nevertheless, here the solution is the same: divide the set $V_{3}^{2}$ into two almost equal parts; the probability that, for some $d$, in both parts there are no $(\ell-1)$-tuples is $O\left(\frac{\sqrt[4]{n}}{(n-\tilde{n})^{2}}\right)$. This solves the problem since $\sum_{k\in[\varepsilon n,n-n^{1/4}\ln^{2}n)}\frac{\sqrt[4]{n}}{(n-\tilde{n})^{2}}=O\left(\frac{1}{\ln^{2}n}\right)$.
4.2.2 $n-n^{1/4}\ln^{2}n\leq k\leq n-2$
The result immediately follows from the following three technical statements.
Claim 3
A.a.s., for every
$$d\in I:=\biggl[(n-1)p-\sqrt{\frac{1}{5}n\ln np(1-p)},(n-1)p+\sqrt{\frac{1}{5}n\ln np(1-p)}\biggr],$$
in $G(n,p)$, there are at least $n^{3/10}/\ln^{2}n$ vertices having degree $d$.
Proof. Let $G_{\tilde{n}}$ be the subgraph of $G(n,p)$ induced on $\{1,\ldots,\tilde{n}\}$, $\tilde{n}=\lfloor n-n^{1/5}\ln^{2}n\rfloor$. Let $\tilde{I}$ be the maximum subset of $I$ such that $\min I=\min\tilde{I}$, and every two consecutive elements of $\tilde{I}$ are at distance $\lfloor n^{1/10}\ln n\rfloor$.
Fix $d\in\tilde{I}$. The probability that a fixed vertex in $G_{\tilde{n}}$ has degree $d$ is ${\sf P}(\xi_{\tilde{n}-1,p}=d)$, where $\xi_{\tilde{n}-1,p}$ is a binomial random variable with parameters $\tilde{n}-1$ and $p$. Then, by (2),
$$P_{d}:={\sf P}(\xi_{\tilde{n}-1,p}=d)\sim\frac{1}{\sqrt{2\pi(\tilde{n}-1)p(1-p)}}e^{-\frac{((\tilde{n}-1)p-d)^{2}}{2(\tilde{n}-1)p(1-p)}}=\Omega\left(n^{-3/5}\right).$$
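The normal approximation behind this estimate is easy to check numerically; the following snippet (illustrative parameters, not from the proof) compares the exact binomial point probability with the local-limit expression used for $P_{d}$:

```python
import math

def binom_pmf(N, p, d):
    """P(Bin(N, p) = d), computed in log space to avoid underflow."""
    log_pmf = (math.lgamma(N + 1) - math.lgamma(d + 1) - math.lgamma(N - d + 1)
               + d * math.log(p) + (N - d) * math.log(1 - p))
    return math.exp(log_pmf)

N, p = 10_000, 0.3
mu, var = N * p, N * p * (1 - p)
d = int(mu + 2 * math.sqrt(var))      # a point about 2 standard deviations out
exact = binom_pmf(N, p, d)
local_limit = math.exp(-(d - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
```

At this deviation the two expressions agree to within a few per cent, which is all the $\Omega(\cdot)$ bound requires.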
Let $X$ be the number of vertices in $G_{\tilde{n}}$ having degree $d$. Then ${\sf E}X=n{\sf P}(\xi_{\tilde{n}-1,p}=d)=\Omega(n^{2/5})$. Moreover,
$${\sf E}X(X-1)=n(n-1)\left(p\left[{\sf P}(\xi_{\tilde{n}-2,p}=d-1)\right]^{2}+(1-p)\left[{\sf P}(\xi_{\tilde{n}-2,p}=d)\right]^{2}\right),$$
$$({\sf E}X)^{2}=(nP_{d})^{2}>n(n-1)\left(p{\sf P}(\xi_{\tilde{n}-2,p}=d-1)+(1-p){\sf P}(\xi_{\tilde{n}-2,p}=d)\right)^{2}.$$
Then
$${\sf D}X={\sf E}X(X-1)+{\sf E}X-({\sf E}X)^{2}<n(n-1)p(1-p)\left[{\sf P}(\xi_{\tilde{n}-2,p}=d-1)-{\sf P}(\xi_{\tilde{n}-2,p}=d)\right]^{2}+{\sf E}X$$
$$=n(n-1)p(1-p)P^{2}_{d}\left(\frac{d-p(n-1)}{(n-1)p(1-p)}\right)^{2}+{\sf E}X=({\sf E}X)^{2}O\left(\frac{\ln n}{n}\right)+{\sf E}X.$$
Then, by Chebyshev's inequality,
$${\sf P}(X\leq{\sf E}X/2)\leq\frac{4{\sf D}X}{({\sf E}X)^{2}}=O(n^{-2/5}).$$
Let $\xi_{1},\ldots,\xi_{X}$ be the vertices of $G_{\tilde{n}}$ having degree $d$. Let $d_{0}\in[0,n^{1/10}\ln n]$ be a real number. Then, by (2), for every $i\in\{1,\ldots,X\}$, the probability $p_{0}$ that $\xi_{i}$ has exactly $\lfloor(n-\tilde{n})p+d_{0}\rfloor$ neighbors among the vertices $\tilde{n}+1,\ldots,n$ equals $\Omega\left(\frac{1}{n^{1/10}\ln n}\right)$. Let $Y$ be the number of $i\in\{1,\ldots,X\}$ such that $\xi_{i}$ has exactly $\lfloor(n-\tilde{n})p+d_{0}\rfloor$ neighbors among the vertices $\tilde{n}+1,\ldots,n$. We get ${\sf E}Y=Xp_{0}$. By the Chernoff inequality, ${\sf P}(Y\leq\frac{1}{2}Xp_{0})\leq e^{-Xp_{0}/8}$.
Therefore, for every $d\in\tilde{I}$, with probability $1-O(n^{-2/5})$, for every $\tilde{d}\in\{d,d+1,\ldots,d+\lfloor n^{1/10}\ln n\rfloor\}$, in $G(n,p)$ there are at least $\Omega\left(\frac{n^{3/10}}{\ln n}\right)$ vertices having degree $\tilde{d}$. It remains to notice that the target event is the intersection of $O\left(\frac{n^{2/5}}{\sqrt{\ln n}}\right)$ events, each of which happens with a probability bounded from below (uniformly) by $1-O(n^{-2/5})$. Then, the probability of the target event is $1-O(1/\sqrt{\ln n})$. $\Box$
For a subset $U\subset V_{n}$ of size $s$, let $\delta_{0}(U)=\delta(U)-({s\choose 2}+s(n-s))p$ be the difference between $\delta(U)$ and its expected value.
Let $q=\frac{1}{3}\sqrt{p(1-p)}$, $k\in[n-n^{1/4}\ln^{2}n,n-2]$.
Claim 4
Suppose that, in a graph $\mathcal{G}$ on $V_{n}$, for every $d\in I$, there are at least $n^{3/10}/\ln^{2}n$ vertices having degree $d$.
Then, there exists a sequence $D_{1}\leq D_{2}\leq\ldots\leq D_{\kappa}$ such that $D_{\kappa}>qm/2$, $D_{1}<-qm/2$, for every $i\in\{1,\ldots,\kappa-1\}$, $D_{i+1}-D_{i}\leq n^{1/4}\ln^{2}n$, and, in $\mathcal{G}$, there are sets of vertices $U_{1},\ldots,U_{\kappa}$ of size $n-k-1$ having $\delta_{0}(U_{i})=D_{i}$ for $i\in\{1,\ldots,\kappa\}$.
Proof. Since $n-k-1<n^{3/10}/\ln^{2}n$, we can find $n-k-1$ vertices $v_{1},\ldots,v_{n-k-1}$ having degrees equal to
$$d^{*}=\left\lfloor(n-1)p+\sqrt{\frac{1}{5}n\ln np(1-p)}\right\rfloor.$$
Clearly, for the set $U^{*}$ of these vertices and large enough $n$, the following holds:
$$\delta_{0}(U^{*})\geq(n-k-1)d^{*}-{n-k-1\choose 2}-\left[{n-k-1\choose 2}+(n-k-1)(k+1)\right]p>$$
$$(p-1){n-k-1\choose 2}-(n-k-1)+(n-k-1)\sqrt{\frac{1}{5}n\ln np(1-p)}>\frac{1}{6}(n-k)\sqrt{n\ln np(1-p)}>\frac{qm}{2}.$$
At every step, we replace $v_{1}$ with a vertex having degree $\mathrm{deg}(v_{1})-1$. Once
$$\mathrm{deg}(v_{1})=d_{*}:=\left\lceil(n-1)p-\sqrt{\frac{1}{5}n\ln np(1-p)}\right\rceil,$$
we proceed with replacing $v_{2}$ in the same way. In the final sequence of steps, we replace the vertex $v_{n-k-1}$. Once
$\mathrm{deg}(v_{n-k-1})=d_{*}$, we stop and get a set $U_{*}$ having
$$\delta_{0}(U_{*})\leq(n-k-1)d_{*}-\left[{n-k-1\choose 2}+(n-k-1)(k+1)\right]p<$$
$$p{n-k-1\choose 2}+(n-k-1)-(n-k-1)\sqrt{\frac{1}{5}n\ln np(1-p)}<-\frac{1}{6}(n-k)\sqrt{n\ln np(1-p)}<-\frac{qm}{2}.$$
Clearly, at every step, the value of $\delta_{0}$ changes by at most $n-k-2<n^{1/4}\ln^{2}n$. $\quad\Box$
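The swap argument can be illustrated in code. The sketch below uses toy parameters and is not the proof's exact procedure: it starts from a set of equal-degree vertices and trades each one for a vertex of degree one less, recording how $\delta_{0}$ moves in bounded steps:

```python
import random

random.seed(0)
n, p, s = 400, 0.5, 6                 # s plays the role of n - k - 1
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u)

deg = {v: len(adj[v]) for v in range(n)}
by_deg = {}
for v, dv in deg.items():
    by_deg.setdefault(dv, []).append(v)

def delta0(U):
    """delta(U) minus its expectation (C(s,2) + s(n-s)) p."""
    Uset = set(U)
    edges = sum(1 for u in Uset for w in adj[u] if w not in Uset or w > u)
    return edges - (s * (s - 1) / 2 + s * (n - s)) * p

# start from s vertices of equal (high) degree, then swap each one for a
# vertex of degree one less, tracking how delta_0 moves
hi = max(dv for dv in by_deg if len(by_deg[dv]) >= s)
U = by_deg[hi][:s]
values = [delta0(U)]
for i in range(s):
    repl = [v for v in by_deg.get(deg[U[i]] - 1, []) if v not in U]
    if repl:
        U[i] = repl[0]
        values.append(delta0(U))
steps = [values[j] - values[j + 1] for j in range(len(values) - 1)]
```

Each swap changes the degree sum by exactly $1$ and the internal edge count by at most $s-1$, so every step of the walk is bounded by $s$, mirroring the $n-k-2$ bound in the claim.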
It remains to prove the following.
Claim 5
A.a.s., for every integer $k\in[n-n^{1/4}\ln^{2}n,n-2]$, every non-negative $d\leq n^{1/4}\ln^{2}n$ such that $pk+d$ is an integer, and every $(n-k-1)$-set $U\subset V_{n}$, there exists a vertex $z\in V_{n}\setminus U$ having exactly $pk+d$ neighbors in $V_{n}\setminus U$.
Proof. Fix an integer $k\in[n-n^{1/4}\ln^{2}n,n-2]$ and a non-negative $d\leq n^{1/4}\ln^{2}n$ such that $pk+d$ is an integer. Let $U\subset V_{n}$ be an $(n-k-1)$-set. Without loss of generality, assume that $V_{n}\setminus U=\{1,\ldots,k+1\}$. For $\ell\in\{1,\ldots,k+1\}$, let $\mathcal{A}_{\ell}=\mathcal{A}_{\ell}(d)$ be the event that the vertex $v_{\ell}$ has exactly $pk+d$ neighbors in $V_{n}\setminus U$. We need to estimate ${\sf P}(\overline{\mathcal{A}_{1}}\cap\ldots\cap\overline{\mathcal{A}_{k+1}})$.
Divide the set $\{1,\ldots,k+1\}$ into $K:=\left\lfloor\frac{k+1}{\lfloor n^{3/4}\ln^{5}n\rfloor}\right\rfloor$ sets $W_{1},\ldots,W_{K}$ of the same size $\lfloor n^{3/4}\ln^{5}n\rfloor$ (a remainder of size less than $\lfloor n^{3/4}\ln^{5}n\rfloor$ is removed and not considered further).
Fix $i\in\{1,\ldots,K\}$. Let $\mathcal{S}_{i}$ be the event that all the degrees of the subgraph induced on $W_{i}$ are inside
$$J:=\left((|W_{i}|-1)p-\sqrt{n},(|W_{i}|-1)p+\sqrt{n}\right).$$
By the Chernoff bound, for every $i\in\{1,\ldots,K\}$,
$${\sf P}\left(\overline{\mathcal{S}_{i}}\right)\leq|W_{i}|e^{-\frac{n^{1/4}}{2\ln^{5}n}(1+o(1))}=e^{-\frac{n^{1/4}}{2\ln^{5}n}(1+o(1))}.$$
Without loss of generality, assume that $W_{i}=\{1,\ldots,w\}$, $w=\lfloor n^{3/4}\ln^{5}n\rfloor$. For every possible graph $\mathcal{G}$ on $W_{i}$ having all degrees inside $J$ (we denote by $\Gamma_{i}$ the set of all such graphs), let $\mathcal{B}[\mathcal{G}]=\{G(n,p)|_{W_{i}}=\mathcal{G}\}$. Clearly, for such $\mathcal{G}$,
$${\sf P}\left(\left.\overline{\mathcal{A}_{1}}\cap\ldots\cap\overline{\mathcal{A}_{w}}\right|\mathcal{B}[\mathcal{G}]\right)={\sf P}\left(\overline{\mathcal{A}_{1}[\mathcal{G}]}\right)\cdot\ldots\cdot{\sf P}\left(\overline{\mathcal{A}_{w}[\mathcal{G}]}\right),$$
where $\mathcal{A}_{\ell}[\mathcal{G}]$ is the event that the number of neighbors of $v_{\ell}$ in $V_{n}\setminus(U\cup W_{i})$ equals $pk+d-\mathrm{deg}_{\mathcal{G}}(v_{\ell})$. By (2), for some constant $c>0$, ${\sf P}\left(\mathcal{A}_{\ell}[\mathcal{G}]\right)\geq\frac{c}{\sqrt{n}}$.
Finally, we get
$${\sf P}\left(\overline{\mathcal{A}_{1}}\cap\ldots\cap\overline{\mathcal{A}_{k+1}}\right)\leq{\sf P}\left(\overline{\mathcal{A}_{1}}\cap\ldots\cap\overline{\mathcal{A}_{k+1}}\cap\left\{\exists i\in\{1,\ldots,K\}\colon\mathcal{S}_{i}\right\}\right)+{\sf P}\left(\bigcap_{i=1}^{K}\overline{\mathcal{S}_{i}}\right)$$
$$\leq\sum_{i=1}^{K}{\sf P}\left(\bigcap_{\ell\in W_{i}}\overline{\mathcal{A}_{\ell}}\cap\mathcal{S}_{i}\right)+{\sf P}\left(\bigcap_{i=1}^{K}\overline{\mathcal{S}_{i}}\right)=\sum_{i=1}^{K}\sum_{\mathcal{G}\in\Gamma_{i}}{\sf P}\left(\left.\bigcap_{\ell\in W_{i}}\overline{\mathcal{A}_{\ell}}\right|\mathcal{B}[\mathcal{G}]\right){\sf P}\left(\mathcal{B}[\mathcal{G}]\right)+{\sf P}\left(\bigcap_{i=1}^{K}\overline{\mathcal{S}_{i}}\right)$$
$$\leq\left(1-\frac{c}{\sqrt{n}}\right)^{w}\sum_{i=1}^{K}\sum_{\mathcal{G}\in\Gamma_{i}}{\sf P}\left(\mathcal{B}[\mathcal{G}]\right)+\exp\left(-\frac{n^{1/2}}{2\ln^{10}n}(1+o(1))\right)$$
$$\leq e^{-\frac{cw}{\sqrt{n}}}\sum_{i=1}^{K}{\sf P}\left(\mathcal{S}_{i}\right)+\exp\left(-\frac{n^{1/2}}{2\ln^{10}n}(1+o(1))\right)\leq K\exp\left(-cn^{1/4}\ln^{5}n(1+o(1))\right)=\exp\left(-cn^{1/4}\ln^{5}n(1+o(1))\right).$$
Then, the probability that there exist an integer $k\in[n-n^{1/4}\ln^{2}n,n-2]$, a non-negative $d\leq n^{1/4}\ln^{2}n$ such that $pk+d$ is an integer, and an $(n-k-1)$-set $U\subset V_{n}$ such that no vertex $z\in V_{n}\setminus U$ has exactly $pk+d$ neighbors in $V_{n}\setminus U$, is at most
$$\left(n^{1/2}\ln^{4}n\right)n^{n^{1/4}\ln^{2}n}e^{-cn^{1/4}\ln^{5}n(1+o(1))}\to 0\text{ as }n\to\infty.\quad\Box$$
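This union bound wins because the number of choices of $(k,d,U)$ grows like $\exp(n^{1/4}\ln^{3}n)$ while the per-choice failure probability decays like $\exp(-cn^{1/4}\ln^{5}n)$. A quick numerical look at the two exponents (with the unspecified constant $c$ set to $1$ purely for illustration):

```python
import math

def log_choices(n):
    # log of (n^{1/2} ln^4 n) * n^{n^{1/4} ln^2 n}, the number of (k, d, U) choices
    return (0.5 * math.log(n) + 4 * math.log(math.log(n))
            + n ** 0.25 * math.log(n) ** 3)

def log_failure(n, c=1.0):
    # log of exp(-c n^{1/4} ln^5 n), the per-choice failure probability
    return -c * n ** 0.25 * math.log(n) ** 5

for n in (10 ** 6, 10 ** 9, 10 ** 12):
    assert log_choices(n) + log_failure(n) < 0   # the union bound tends to 0
```

The extra factor $\ln^{2}n$ in the failure exponent is exactly what absorbs the $n^{n^{1/4}\ln^{2}n}$ count of sets $U$.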
4.2.3 $k=n-1$
Let $q=\frac{\sqrt{p(1-p)}}{2}$. We need to prove that a.a.s. the set of sizes of $(n-1)$-vertex subgraphs of $G(n,p)$ contains a full interval of length $q\sqrt{n\ln n}$ or, equivalently, that the set of degrees of $G(n,p)$ contains a full interval of the same length. But this follows from Claim 3.
References
[1]
N. Alon, A.V. Kostochka, Induced subgraphs with distinct sizes, Random Structures and Algorithms, 34 (2009), 45–53.
[2]
N. Alon, J. H. Spencer, The Probabilistic Method, Second Edition, Wiley, 2000.
[3]
B. Bollobás, Degree sequences of random graphs, Discrete Mathematics, 33 (1981), 1–19.
[4]
B. Bollobás, Random Graphs, 2nd Edition, Cambridge University Press, 2001.
[5]
B. Bollobás, The distribution of the maximum degree of a random graph, Discrete Mathematics, 32 (1980), 201–203.
[6]
B. Bollobás, P. Erdős, Cliques in random graphs, Math. Proc. Camb. Phil. Soc., 80 (1976), 419–427.
[7]
K. Dutta, C.R. Subramanian, On Induced Paths, Holes and Trees in Random Graphs, Proc. ANALCO 2018, 168–177.
[8]
P. Erdős, Some of my favourite problems in various branches of combinatorics, Annals of Discrete Mathematics 51 (1992), 69–79.
[9]
P. Erdős, Some recent problems and results in Graph Theory, Discrete Math. 164 (1997), 81–85.
[10]
P. Erdős, Z. Palka, Trees in random graphs, Discrete Mathematics, 46 (1983), 145–150.
[11]
W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 2nd ed., John Wiley and Sons, New York, 1975.
[12]
S. Janson, T. Luczak, A. Rucinski, Random Graphs, New York, Wiley, 2000.
[13]
N. Fountoulakis, R.J. Kang, C. McDiarmid, Largest sparse subgraphs of random graphs, European Journal of Combinatorics, 35 (2014), 232–244.
[14]
M. Krivelevich, B. Sudakov, V.H. Vu, N.C. Wormald, On the probability of independent sets in random graphs, Random Structures & Algorithms, Vol. 22 Issue 1 (2003), 1–14.
[15]
M. Kwan, B. Sudakov, Proof of a conjecture on induced subgraphs of Ramsey graphs, Transactions Amer. Math. Soc., to appear.
[16]
D. Matula, The employee party problem, Not. Amer. Math. Soc., 19(2): A–382, 1972.
[17]
D. Matula, The largest clique size in a random graph, Tech. Rep. Dept. Comp. Sci., Southern Methodist University, Dallas, Texas, 1976.
[18]
Z. Palka, Bipartite complete induced subgraphs of a random graph, Annals of Discrete Mathematics, 28 (1985), 209–219.
[19]
A.M. Raigorodskii, On the stability of the independence number of a random subgraph, Doklady Mathematics, 96:3 (2017), 628–630.
[20]
A.M. Raigorodskii, M.E. Zhukovskii, Random graphs: models and asymptotic characteristics, Russian Mathematical Surveys 70:1 (2015) 33–81.
[21]
A. Ruciński, Induced subgraphs in a random graph, Annals of Discrete Mathematics, 33 (1987), 275–296.
Photon propagator, monopoles and the thermal phase
transition in 3D compact QED
M. N. Chernodub
ITEP, B.Cheremushkinskaja 25, Moscow, 117259, Russia
Institute for Theoretical Physics, Kanazawa
University, Kanazawa 920-1192, Japan
E.-M. Ilgenfritz
Research Center for
Nuclear Physics, Osaka University, Osaka 567-0047, Japan
A. Schiller
Institut für Theoretische Physik and NTZ, Universität
Leipzig, D-04109 Leipzig, Germany
(December 5, 2020)
Abstract
We investigate the gauge boson propagator in the three-dimensional
compact Abelian gauge model in the Landau gauge at finite
temperature. The presence of the monopole plasma in the
confinement phase leads to the appearance of an anomalous dimension in
the momentum dependence of the propagator. The anomalous dimension,
as well as an appropriate ratio of photon wave function
renormalization constants with and without monopoles, is observed
to be an order parameter for the deconfinement phase transition. We
discuss the relation between our results and the confining
properties of the gluon propagator in non–Abelian gauge theories.
pacs: 11.15.Ha,11.10.Wx,12.38.Gc
Three–dimensional compact electrodynamics (cQED${}_{3}$) shares two
outstanding features of QCD, confinement Polyakov and
chiral symmetry breaking ChSB .
With some care, studying them within cQED${}_{3}$ might be
helpful for the understanding of certain non–perturbative aspects of QCD.
The non–perturbative properties of cQED${}_{3}$ deserve interest by
themselves because this model was shown to describe some features
of Josephson junctions Josephson and high–$T_{c}$
superconductors HighTc .
Here, we want to elaborate
on cQED${}_{3}$ as a toy model of confinement.
Indeed, this has been the first
non–trivial case in which confinement of electrically charged
particles was understood analytically Polyakov . Confinement
is caused here by a plasma of monopoles which emerge due to the
compactness of the gauge field. Other common features of the two
theories are the existence of a mass gap and of a
confinement–deconfinement phase transition at some non–zero
temperature.
According to universality arguments universality the
phase transition of cQED${}_{3}$ is expected to be of Kosterlitz-Thouless
type KT .
In QCD${}_{4}$, the deconfinement phase transition is widely believed
to be caused by loss of monopole condensation (for a review
see Ref. monopole:condensation ) within the effective dual
superconductor approach dual:superconductor . Studies of the
dynamics of the monopole currents in gluodynamics show that monopole
de–condensation at the critical temperature appears as
de–percolation, i.e., the decay of the infrared, percolating
monopole cluster into short monopole
loops monopole:percolation .
This change of vacuum structure has a dimensionally reduced
analog in the $3D$ monopole–antimonopole pair binding which has been
observed in cQED${}_{3}$ Binding ; CISPaper1 .
At present, the gluon propagator in QCD${}_{4}$ is under intensive
study. The analogies mentioned before encouraged us to study the
similarities between the gauge boson propagators in both theories.
In order to fix the role of the monopole plasma in cQED${}_{3}$, not
just for confinement of external charges but also for the
non-perturbative modification of the gauge boson propagator, we
consider it in the confinement and the deconfined phases. On the
other hand, on the lattice at any temperature we are able to
separate the monopole contribution to the propagator by means of
eq. (2) below.
We have chosen the Landau gauge since it has been adopted in most
of the investigations of the gauge boson propagators in
QCD CurrentQCD ; Kurt and QED MIP ; MMP . In order to
avoid the problem of Gribov copies Gribov , the alternative
Laplacian gauge has been used recently Laplacian . The
Coulomb gauge, augmented by a suitable global gauge in each time
slice (minimal Coulomb gauge) has been advocated both
analytically Zwanziger and numerically
Gribov:numerical .
The numerical lattice results for gluodynamics show that the
propagator in momentum space is, for all these gauges, less singular
than $p^{-2}$ in the immediate vicinity of $p^{2}=0$. Moreover,
the results for the propagator at zero momentum range from a
finite value Laplacian (Laplacian gauge) to a strictly vanishing
one Gribov ; Gribov:numerical ; Zwanziger (Coulomb gauge).
Recent investigations in the Landau gauge show that, besides the
suppression at $p\to 0$, the propagator is enhanced at
intermediate momenta, which can be characterized by an anomalous
dimension CurrentQCD (see the last reference
in CurrentQCD for a comparison of different model
functions).
In the present letter we demonstrate that the momentum behaviour of the
photon propagator in QED${}_{3}$ is also described
by a Debye mass and by an anomalous dimension
which both vanish at the deconfinement transition.
This mechanism can be
clearly attributed to
magnetic monopoles.
The plasma contribution is relatively easy to exhibit
by explicit calculation
and can be eliminated by monopole subtraction
on the level of the gauge fields.
The results of a study of the propagator in $SU(2)$ gluodynamics
have been interpreted Kurt in a similar spirit,
where $P$-vortices appearing in the maximal center gauge
were shown to be essential for the enhancement of
the Landau gauge propagator at intermediate momenta.
For our lattice study we have adopted the Wilson action,
$S[\theta]=\beta\sum_{p}\left(1-\cos\theta_{p}\right)$, where
$\theta_{p}$ is the $U(1)$ field strength tensor represented by the
plaquette curl of the compact link field $\theta_{l}$, and $\beta$
is the lattice coupling constant related to the lattice spacing
$a$ and the continuum coupling constant $g_{3}$ of the $3D$ theory,
$\beta=1/(a\,g^{2}_{3})$. We focus here on the difference
between the confined and the deconfined phases. All results presented have
been obtained on lattices of size $32^{2}\times 8$.
The finite temperature phase transition is known to take
place CISPaper1 ; Coddington at $\beta_{c}\approx 2.35$.
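For concreteness, a minimal Metropolis sketch of this system is given below. This is our toy code on a tiny $4^{3}$ lattice with no gauge fixing, not the authors' production program; each link is updated with acceptance probability $e^{-\Delta S}$, where $\Delta S=-\beta\sum_{p\ni l}\Delta\cos\theta_{p}$ runs over the four plaquettes containing the link:

```python
import math, random

random.seed(7)
L, beta = 4, 1.8                       # tiny lattice; the paper uses 32^2 x 8
theta = [[[[0.0] * 3 for _ in range(L)] for _ in range(L)] for _ in range(L)]

def link(x, y, z, mu):
    return theta[x % L][y % L][z % L][mu]

def plaq(x, y, z, mu, nu):
    """Plaquette angle theta_p in the mu-nu plane at site (x, y, z)."""
    e = [0, 0, 0]; e[mu] = 1
    f = [0, 0, 0]; f[nu] = 1
    return (link(x, y, z, mu) + link(x + e[0], y + e[1], z + e[2], nu)
            - link(x + f[0], y + f[1], z + f[2], mu) - link(x, y, z, nu))

def local_cos_sum(x, y, z, mu):
    """Sum of cos(theta_p) over the four plaquettes containing link (x, mu)."""
    s = 0.0
    for nu in range(3):
        if nu == mu:
            continue
        f = [0, 0, 0]; f[nu] = 1
        s += math.cos(plaq(x, y, z, mu, nu))
        s += math.cos(plaq(x - f[0], y - f[1], z - f[2], mu, nu))
    return s

def sweep():
    accepted = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                for mu in range(3):
                    old = theta[x][y][z][mu]
                    s_old = local_cos_sum(x, y, z, mu)
                    theta[x][y][z][mu] = (old + random.uniform(-1.0, 1.0)) % (2 * math.pi)
                    # Delta S = -beta (s_new - s_old); accept with prob e^{-Delta S}
                    s_new = local_cos_sum(x, y, z, mu)
                    if random.random() < math.exp(min(0.0, beta * (s_new - s_old))):
                        accepted += 1
                    else:
                        theta[x][y][z][mu] = old   # reject the proposal
    return accepted / (3 * L ** 3)

for _ in range(20):
    rate = sweep()

mean_plaq = sum(math.cos(plaq(x, y, z, mu, nu))
                for x in range(L) for y in range(L) for z in range(L)
                for mu in range(3) for nu in range(mu + 1, 3)) / (3 * L ** 3)
```

At $\beta=1.8$ the average plaquette settles well above zero, as expected deep in this coupling range.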
The Landau gauge fixing is defined by maximizing the functional
$\sum_{l}\cos\theta^{G}_{l}$ over all gauge transformations $G$. For
details of the Monte Carlo algorithm we refer to CISPaper1 .
A more complete presentation of our studies, including also a
thorough analysis of the propagator in the zero temperature case
is in preparation in-preparation . Details on the
implementation of Landau gauge fixing, including the elimination
of zero momentum modes and the careful control of double Dirac
strings can be found in Ref. MMP ; in-preparation .
We study the gauge boson propagator, $\langle\theta_{\mu}(x)\theta_{\nu}(0)\rangle$, in momentum space. The propagator is a
function of the lattice momentum, $p_{\mu}=2\sin(\pi k_{\mu}/L_{\mu})$, where $k_{\mu}=0,\dots,L_{\mu}/2$ is an integer.
We discuss here the finite temperature case and focus on the
temporal component of the propagator,
$$\displaystyle D_{33}({\mathbf{p}}^{2},0)=\frac{1}{L_{x}L_{y}L_{z}}\langle\theta_{3}({\mathbf{p}},0)\theta_{3}(-{\mathbf{p}},0)\rangle$$
(1)
as a function of the spatial momentum, ${\mathbf{p}}^{2}=\sum_{\mu=1}^{2}p_{\mu}^{2}$. We recall that at finite temperature
the confining properties of static electrically charged particles
are encoded in the temporal component of the gauge boson field,
$\theta_{3}$.
In order to pin down the effect of monopoles we have divided the
gauge field $\theta_{l}$ into a regular (photon) and a singular
(monopole) part which can be done following Ref. PhMon . In the
notation of lattice forms this is written:
$$\displaystyle\theta=\theta^{{\mathrm{phot}}}+\theta^{{\mathrm{mon}}}\,,\quad\theta^{{\mathrm{mon}}}=2\pi\Delta^{-1}\delta p[j]\,,$$
(2)
where $\Delta^{-1}$ is the inverse lattice Laplacian and the
0-form $\mbox{}^{\ast}j\in{Z\!\!\!Z}$ is nonvanishing on the sites of the dual
lattice occupied by the monopoles. The 1-form $\mbox{}^{\ast}p[j]$
corresponds to the Dirac strings (living on the links of the dual
lattice) which connect monopoles with anti–monopoles, $\delta\mbox{}^{\ast}p[j]=\mbox{}^{\ast}j$. For a Monte Carlo configuration, we have
fixed the gauge, then located the Dirac strings, $p[j]\neq 0$, and
constructed the monopole part $\theta^{{\mathrm{mon}}}$ of the gauge field
according to the last equation in (2). The photon
field is just the complement to the monopole part according to the
first equation of (2).
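The only nontrivial operation in eq. (2) is the inverse lattice Laplacian. A minimal sketch of $\Delta^{-1}$ on a periodic lattice via FFT is shown below. This is illustrative code: the Dirac-string form $p[j]$ and the actual monopole locations are not constructed here, and a point charge pair stands in for the monopole density:

```python
import numpy as np

def inverse_laplacian(rho):
    """Solve Delta phi = rho on a periodic cubic lattice via FFT.
    rho must have zero total charge for the zero mode to be well defined."""
    L = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(L)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    # eigenvalues of the nearest-neighbour lattice Laplacian
    lam = -4 * (np.sin(kx / 2) ** 2 + np.sin(ky / 2) ** 2 + np.sin(kz / 2) ** 2)
    lam[0, 0, 0] = 1.0                    # avoid division by zero
    phi_hat = np.fft.fftn(rho) / lam
    phi_hat[0, 0, 0] = 0.0                # drop the zero mode
    return np.real(np.fft.ifftn(phi_hat))

L = 8
rho = np.zeros((L, L, L))
rho[1, 1, 1], rho[5, 5, 5] = 1.0, -1.0    # a monopole-antimonopole pair
phi = inverse_laplacian(rho)
# applying the lattice Laplacian recovers rho up to round-off
lap = sum(np.roll(phi, 1, a) + np.roll(phi, -1, a) for a in range(3)) - 6 * phi
```

Because the monopole density of a configuration with equal numbers of monopoles and anti-monopoles has zero total charge, the zero-mode subtraction is harmless.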
The photon and monopole parts of the gauge field contribute to the
propagator, $D=D^{{\mathrm{phot}}}+D^{{\mathrm{mon}}}+D^{{\mathrm{mix}}}$, where $D^{{\mathrm{mix}}}$
represents the mixed contribution from regular and singular fields.
We show the propagator for $p=({\mathbf{p}},0)$ together with the
separate contributions, multiplied by ${\mathbf{p}}^{2}$ and averaged over
the same ${\mathbf{p}}^{2}$ values, in Figure 1
for coupling constant $\beta=1.8$.
The regular part of the propagator perfectly follows the free-field
form
$$\displaystyle D_{33}^{{\mathrm{phot}}}=\frac{1}{\beta}\frac{Z^{{\mathrm{phot}}}}{{\mathbf{p}}^{2}}\,,$$
(3)
at all available $\beta$.
The perturbative propagator defined
in terms of $\theta_{l}$ is obviously proportional to $g_{3}^{2}$, which
is taken into account by the factor $1/\beta$ in eq. (3).
The fits of the photon part of the propagator by the above expression
give the parameter $Z^{{\mathrm{phot}}}$ as a function of lattice coupling
(dash-dotted line in Figure 1 for $\beta=1.8$).
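The free-field form (3) is straightforward to tabulate. The snippet below (illustrative, with an assumed $Z^{\mathrm{phot}}=1$) builds the lattice momenta $p_{\mu}=2\sin(\pi k_{\mu}/L_{\mu})$ on a $32^{2}$ spatial lattice, averages over momenta with equal ${\mathbf{p}}^{2}$, and confirms that ${\mathbf{p}}^{2}D^{\mathrm{phot}}_{33}$ is flat:

```python
import math

Lx, Ly = 32, 32
beta, Zphot = 1.8, 1.0          # Zphot = 1 is an assumed illustrative value
props = {}
for kx in range(Lx // 2 + 1):
    for ky in range(Ly // 2 + 1):
        p2 = (2 * math.sin(math.pi * kx / Lx)) ** 2 \
           + (2 * math.sin(math.pi * ky / Ly)) ** 2
        if p2 > 1e-15:
            # collect D^phot = Zphot / (beta p^2), grouping equal p^2 values
            props.setdefault(round(p2, 12), []).append(Zphot / (beta * p2))

# p^2 * D^phot is flat, reproducing the dash-dotted free-field line in Fig. 1
flat = [p2 * sum(vals) / len(vals) for p2, vals in sorted(props.items())]
```

In the actual analysis the measured photon part replaces the analytic form, and $Z^{\mathrm{phot}}$ is the single fit parameter.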
The singular contribution to the gauge boson propagator shows a
maximum in ${\mathbf{p}}^{2}D_{33}^{\mathrm{mon}}$ at some momentum
(Figure 1), which moves closer to $|{\mathbf{p}}|\,a=0$ with
increasing $\beta$. The mixed contribution
${\mathbf{p}}^{2}\,D_{33}^{{\mathrm{mix}}}$ is negative, growing in
magnitude with decreasing momentum.
The central point of our paper is that all these contributions together
do not sum up to a simple massive Yukawa propagator.
To quantify the difference between a Yukawa–type and the actual behavior
we use the following four–parameter model function for
$D_{33}({\mathbf{p}}^{2},0)$,
$$\displaystyle D_{33}({\mathbf{p}}^{2},0)=\frac{Z}{\beta}\frac{m^{2\alpha}}{{\mathbf{p}}^{2(1+\alpha)}+m^{2(1+\alpha)}}+C\,,$$
(4)
where $Z$, $\alpha$, $m$ and $C$ are the fitting parameters. This
model is similar to some used in Refs. CurrentQCD ; Ma , where the
propagator in gluodynamics has been studied.
The first part of the function (4) implies
that the photon acquires a Debye mass $m$ (due to
screening Polyakov ) together with the anomalous dimension $\alpha$.
The (squared) photon wave function renormalization constant $Z$
accounts for quantum corrections to the photon field. The second
part of (4) represents a $\delta$–function–like interaction in
coordinate space.
Before fitting we average the propagator over all lattice momenta at the same
${\mathbf{p}}^{2}$ to improve rotational invariance.
Thus the errors entering the fits include both the variance
among the averages for individual momenta and the individual errors.
The fits were performed using standard Mathematica packages
combined with a search for the global minimum in $\chi^{2}/d.o.f.$
To check the stability of the fits, we studied several possibilities of
averaging and thinning out the data sets, a procedure which will be discussed
elsewhere in-preparation .
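A fit of the model function (4) can be reproduced with standard tools. The sketch below uses SciPy on synthetic data (generated, not measured; the parameter values and the fixed $\beta=1.8$ are illustrative stand-ins for the paper's Mathematica workflow):

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 1.8   # illustrative coupling

def model(p2, Z, m, alpha, C):
    """Four-parameter form (4): (Z/beta) m^{2a} / (p^{2(1+a)} + m^{2(1+a)}) + C."""
    return (Z / BETA) * m ** (2 * alpha) / (
        p2 ** (1 + alpha) + m ** (2 * (1 + alpha))) + C

rng = np.random.default_rng(0)
p2 = np.linspace(0.05, 4.0, 40)
true = (1.6, 0.4, 0.28, 0.03)                 # invented Z, m, alpha, C
data = model(p2, *true) * (1 + 0.01 * rng.standard_normal(p2.size))

popt, pcov = curve_fit(model, p2, data, p0=(1.0, 0.5, 0.2, 0.0),
                       bounds=([0.0, 0.01, 0.0, -1.0],
                               [10.0, 5.0, 2.0, 1.0]))
```

The bounds keep $m$ and $\alpha$ non-negative during the search, which avoids non-real powers of a negative mass parameter.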
The model function (4) works perfectly for all
${\mathbf{p}}^{2}$ and couplings $\beta$.
For $\beta\geq 2.37$ the best-fit values of the mass parameter $m$
and the anomalous dimension $\alpha$ are both consistent with zero.
Therefore we set $m=0$ and $\alpha=0$ for these values of $\beta$
to improve the quality of the fit of $Z$ and $C$.
It turns out that the inclusion of a constant term, $C$, in the
model function (4) is crucial for obtaining good fits in
the confinement phase, despite the fact that it is very small (as
a function of $\beta$, the parameter $C$ decreases from $C(1.0)=0.18(4)$ to $C(2.2)=0.009(2)$ and rapidly vanishes in the
deconfined phase). Similarly to the parameters $m$ and $\alpha$, we set
$C$ to zero for $\beta\geq 2.45$, where $C$ becomes smaller than
$10^{-4}$.
An example of the best fit of the full propagator for $\beta=1.8$
is shown in Figure 1 by the solid line
(with $C=0.033(5)$). The parameter $Z$ distinguishes clearly
between the two phases (Figure 2).
It coincides with the photon part $Z^{{\mathrm{phot}}}$ (defined without
monopoles) in the deconfined phase while it is much larger in the
confined phase. This indicates that the photon wave function gets
strongly renormalized by the monopole plasma. In contrast, the
factor $Z^{{\mathrm{phot}}}$ smoothly changes crossing the deconfinement
transition at $\beta_{c}\approx 2.35$.
The anomalous dimension $\alpha$ also distinguishes the two
phases (Figure 3):
it is equal to zero in the deconfined phase (perturbative behaviour),
while in the confinement phase the monopole plasma causes
the anomalous dimension to grow to $\alpha\approx 0.25\ldots 0.3$.
To characterize the properties of $Z$ and $\alpha$ approaching
the phase transition we fit the excess of the ratio of $Z$’s over unity,
$$\displaystyle R_{Z}(\beta)=\frac{Z(\beta)}{Z^{{\mathrm{phot}}}(\beta)}-1\,,$$
(5)
and the anomalous dimension $\alpha$ in the following form:
$$\displaystyle f_{i}(\beta)=h_{i}\,(\beta^{(i)}_{c}-\beta)^{\gamma_{i}}\,,\quad\beta<\beta^{(i)}_{c}\,,\quad(i=\alpha,Z)\,.$$
(6)
Here the $\beta^{(\alpha,Z)}_{c}$ are
the pseudo–critical couplings, which might differ on finite lattices.
The best fits $f_{\alpha}$ and $f_{Z}$ are shown
in Figures 3 and 4, respectively.
The solid lines in both plots extend over the
fitting region. The corresponding parameters are presented in
Table 1.
The pseudo–critical couplings $\beta^{(\alpha)}_{c}$ and
$\beta^{(Z)}_{c}$ are in agreement with previous numerical
studies Coddington ; CISPaper1 giving $\beta_{c}=2.346(2)$.
Note that the critical exponents $\gamma_{i}$ are close to $1/2$, both for the anomalous dimension $\alpha$ and for $R_{Z}$
expressing the ratio of photon field renormalization constants.
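The exponent extraction in (6) amounts to a power-law fit below $\beta_{c}$. A minimal sketch on synthetic data (assumed $\beta_{c}=2.346$ and invented noise, for illustration only) recovers $\gamma$ by log-log regression:

```python
import math, random

random.seed(3)
beta_c, h, gamma = 2.346, 0.9, 0.5      # assumed "true" values for the demo
betas = [2.0 + 0.03 * i for i in range(10)]           # all below beta_c
vals = [h * (beta_c - b) ** gamma * (1 + 0.02 * (random.random() - 0.5))
        for b in betas]                                # ~1% multiplicative noise

# least-squares slope of log(f) versus log(beta_c - beta) estimates gamma
xs = [math.log(beta_c - b) for b in betas]
ys = [math.log(v) for v in vals]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
```

In practice the pseudo-critical coupling itself is a fit parameter, which makes the real fit nonlinear; the linearized version above applies once $\beta_{c}^{(i)}$ is fixed.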
Finally, the $\beta$–dependence of the mass parameter, $m$, is
presented in Figure 5. As expected, the generated mass scale is
non–vanishing in the confinement phase due to the
presence of the monopole plasma Polyakov . It vanishes at
the deconfinement transition point, where the very dilute remaining
monopoles and anti–monopoles form dipoles CISPaper1 .
Summarizing, we have shown that the presence of the monopole
plasma leads to the appearance of a non–vanishing anomalous
dimension $\alpha>0$ in the boson propagator of cQED${}_{3}$ in the
confinement phase.
We hope that our observation stimulates an analytical
explanation.
At this stage of studying cQED${}_{3}$ as a model of confinement we
conjecture that in the case of QCD the Abelian monopoles defined
within the Abelian projection may be responsible for the anomalous
dimension of the gluon propagator observed in
Refs. CurrentQCD ; Ma . If true, a monopole subtraction
procedure analogous to that employed here would be able to
demonstrate this. We found that the anomalous dimension $\alpha$
and the ratio of the photon wave function renormalization
constants with and without monopoles, $R_{Z}$ (5), represent
alternative, also non–local order parameters characterizing the
confinement phase.
Acknowledgements.
The authors are grateful to P. van Baal, K. Langfeld,
M. Müller-Preussker, H. Reinhardt and D. Zwanziger for useful
discussions. M. N. Ch. is supported by the JSPS Fellowship P01023.
E.-M. I. gratefully appreciates the support by the Ministry of
Education, Culture and Science of Japan (Monbu-Kagaku-sho) and the
hospitality extended to him by H. Toki. He is grateful for the
opportunity to work in the group of H. Reinhardt at Tübingen and
for a CERN visitorship in an early stage of this work.
References
(1)
A. M. Polyakov,
Nucl. Phys. B 120, 429 (1977).
(2)
H. R. Fiebig, R. M. Woloshyn,
Phys. Rev. D 42, 3520 (1990).
(3)
Y. Hosotani, Phys. Lett. B 69, 499 (1977);
V. K. Onemli, M. Tas and B. Tekin,
JHEP 0108, 046 (2001).
(4)
G. Baskaran and P. W. Anderson, Phys. Rev. B 37, 580 (1988);
L. B. Ioffe and A. I. Larkin, ibid. 39, 8988 (1989);
P. A. Lee, Phys. Rev. Lett. 63, 680 (1989);
T. R. Morris, Phys. Rev. D 53, 7250 (1996).
(5)
B. Svetitsky,
Phys. Rept. 132, 1 (1986).
(6)
J. M. Kosterlitz, D. Thouless, J. Phys. C 6, 1181 (1973).
(7)
M. N. Chernodub and M. I. Polikarpov,
in ”Confinement, Duality and Non-perturbative
Aspects of QCD”, edited by P. van Baal (Plenum, New York, 1998), p.387;
A. Di Giacomo, Prog. Theor. Phys. Suppl. 131, 161 (1998);
R. W. Haymaker, Phys. Rept. 315, 153 (1999).
(8)
G. ’t Hooft,
Nucl. Phys. B 190, 455 (1981);
S. Mandelstam,
Phys. Rept. 23, 245 (1976).
(9)
T. L. Ivanenko, A. V. Pochinsky and M. I. Polikarpov,
Phys. Lett. B 302, 458 (1993);
A. Hart and M. Teper,
ibid. B 523, 280 (2001).
(10)
N. Parga, Phys. Lett. B 107, 442 (1981);
N. O. Agasyan and K. Zarembo,
Phys. Rev. D 57, 2475 (1998).
(11)
M. N. Chernodub, E.-M. Ilgenfritz and A. Schiller,
Phys. Rev. D 64, 054507 (2001).
(12)
P. Marenzoni, G. Martinelli, N. Stella and M. Testa,
Phys. Lett. B 318, 511 (1993);
P. Marenzoni, G. Martinelli and N. Stella,
Nucl. Phys. B 455, 339 (1995);
D. B. Leinweber, J. I. Skullerud, A. G. Williams and C. Parrinello,
Phys. Rev. D 60, 094507 (1999);
A. G. Williams,
in Proc. of 3rd Int. Conf. on Quark Confinement and Hadron Spectrum,
hep-ph/9809201.
(13)
K. Langfeld, H. Reinhardt and J. Gattnar,
Nucl. Phys. B 621, 131 (2002).
(14)
M. I. Polikarpov, K. Yee and M. A. Zubkov,
Phys. Rev. D 48, 3377 (1993).
(15)
V. G. Bornyakov, V. K. Mitrjushkin, M. Müller-Preussker and F. Pahl,
Phys. Lett. B 317, 596 (1993);
I. L. Bogolubsky, V. K. Mitrjushkin, M. Müller-Preussker and P. Peter,
ibid. B 458, 102 (1999).
(16)
V. N. Gribov, Nucl. Phys. B 139, 1 (1978).
(17)
C. Alexandrou, P. de Forcrand and E. Follana,
Phys. Rev. D 63, 094504 (2001); hep-lat/0112043.
(18)
D. Zwanziger,
Nucl. Phys. B 364, 127 (1991).
(19)
A. Cucchieri and D. Zwanziger,
Phys. Lett. B 524, 123 (2002).
(20)
P. D. Coddington, A. J. Hey, A. A. Middleton and J. S. Townsend,
Phys. Lett. B 175, 64 (1986).
(21)
M. N. Chernodub, E.-M. Ilgenfritz and A. Schiller,
in preparation.
(22)
R. Wensley, J. Stack,
Phys. Rev. Lett. 63, 1764 (1989).
(23)
J. P. Ma,
Mod. Phys. Lett. A 15, 229 (2000). |
Multiscaling in Hall-Magnetohydrodynamic Turbulence: Insights from a
Shell Model
Debarghya Banerjee
debarghya@physics.iisc.ernet.in
Centre for Condensed Matter Theory,
Department of Physics, Indian Institute of Science,
Bangalore 560012, India
Samriddhi Sankar Ray
samriddhisankarray@gmail.com
International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bangalore 560012,
India
Ganapati Sahoo
ganapati.sahoo@ds.mpg.de
Max Planck Institute for Dynamics and Self-Organization,
Am Fassberg 17, 37077 Göttingen, Germany
Rahul Pandit
rahul@physics.iisc.ernet.in
Centre for Condensed Matter Theory,
Department of Physics, Indian Institute of Science,
Bangalore 560012, India
(December 3, 2020)
Abstract
We show that a shell-model version of the three-dimensional
Hall-magnetohydrodynamic (3D Hall-MHD) equations provides a natural theoretical
model for investigating the multiscaling behaviors of velocity and magnetic
structure functions. We carry out extensive numerical
studies of this shell model, obtain the scaling exponents
for its structure functions, in both the
low-$k$ and high-$k$ power-law ranges of 3D Hall-MHD, and find that the
extended-self-similarity (ESS) procedure is helpful in extracting the multiscaling
nature of structure functions in the high-$k$ regime, which otherwise appears
to display simple scaling. Our results shed light on intriguing solar-wind
measurements.
Hall-MHD, turbulence, multiscaling
pacs: 52.35.Ra, 95.30.Qd
Formerly at: Laboratoire Lagrange, OCA, UNS, CNRS, BP 4229,
06304 Nice Cedex 4, France
Also at: Jawaharlal Nehru Centre for Advanced
Scientific Research, Jakkur, Bangalore, India
Turbulent plasmas abound in accretion disks, galaxies, stars, the solar wind,
and laboratory experiments mhd ; cowley07 ; thus, the characterization of
the statistical properties mhd ; sahoo11 ; mhddns of turbulence in such
plasmas is a problem of central importance in astrophysics, plasma physics,
fluid dynamics, and nonequilibrium statistical mechanics. Such a
characterization begins with the energy spectra: e.g., in homogeneous and
isotropic fluid turbulence the energy spectrum $E(k)$, which gives the
distribution of energy over different wave numbers $k$, assumes the scaling
form $E(k)\sim k^{-\alpha}$ if the Reynolds number $Re$ is large and $k$ is
in the inertial range $L^{-1}\ll k\ll k_{d}$, where $L$ is the energy-injection
length scale and $k_{d}\equiv 2\pi/\eta_{d}$, with $\eta_{d}$ the length scale at
which viscous dissipation becomes significant; the phenomenological theory of
Kolmogorov (K41) yields K41 ; frisch95 $\alpha=5/3$. Turbulent plasmas
show similar scaling forms for the kinetic and magnetic-energy spectra $E^{u}(k)$
and $E^{b}(k)$, if the turbulence is statistically homogeneous and isotropic, and
both $Re$ and the magnetic Reynolds number $Re_{M}$ are large; their ratio $Pr_{M}=Re_{M}/Re$, the magnetic Prandtl number, governs the relative sizes of the
fluid and magnetic dissipation length scales $\eta_{d}^{u}$ and $\eta_{d}^{b}$; the
inertial-range scaling properties of $E^{u}(k)$ and $E^{b}(k)$ have been studied
theoretically and numerically by using the equations of magnetohydrodynamics
(MHD) mhd ; sahoo11 ; mhddns . Energy-spectra measurements in the solar wind
solarwind have shown, however, that $E^{b}(k)$ displays two
power-law ranges. Several
authors vkrishan ; shaikh09 ; meyrand12 ; hallmhdshell ; galtiershell have
suggested that, to obtain these two power-law regimes, we must augment the MHD
equations with a Hall-effect term, which leads to a scale separation at the
ion-inertial length $d_{I}$ or, equivalently, at the wave number $k_{I}=2\pi/d_{I}$. For $k<k_{I}$,
$E^{b}(k)\propto k^{-\alpha^{b,1}}$, with the observed value $\alpha^{b,1}\simeq 5/3$; for $k_{d}>k>k_{I}$, $E^{b}(k)\propto k^{-\alpha^{b,2}}$,
where $\alpha^{b,2}$ is either $\simeq 7/3$ or $\simeq 11/3$.
The value of $\alpha^{b,2}$
depends on whether the magnetic energy dominates over the fluid kinetic energy,
which occurs in the electron-MHD (EMHD) biskamp limit, or the converse, i.e.,
the ion-MHD (IMHD) limit. These limits follow from the 3D Hall-MHD equations:
EMHD is obtained if the induction term is sub-dominant to the Hall term;
in the IMHD case these two terms are comparable to each other. In the EMHD limit,
we obtain a single, characteristic scale and K41 phenomenology yields $\alpha^{b,2}=7/3$;
in the IMHD case a comparison of the transfer time, from the Hall-term, and a second
time, from the induction part, followed by simple dimensional analysis yields
$\alpha^{b,2}=11/3$ galtiershell ; meyrand12 .
Direct numerical simulations (DNSs) shaikh09 ; meyrand12 ; hallmhdshell have
just begun to resolve these two scaling ranges; but their spatial resolution is
much more limited than has been achieved in DNS studies of MHD
turbulence sahoo11 ; mhddns . Thus, they have not been used to study the
scaling or multiscaling properties of order $p$ fluid and magnetic structure functions
(defined below). However, measurements of such equal-time magnetic structure functions
in solar-wind measurements solarwind show
that, although there is significant multiscaling in the low-$k$ ($k<k_{I}$),
power-law range of $E^{b}(k)$, the scaling exponents in the second, high-$k$
($k_{d}>k>k_{I}$) power-law range increase linearly with the order $p$. Thus,
it behooves us to develop a theoretical understanding of these important and
intriguing observations and to test them.
We show that a shell-model version of the 3D Hall-MHD
equations hallmhdshell ; galtiershell , which is a generalization of MHD shell
models basu ; mhdshell , provides a natural theoretical model for
investigating such multiscaling behaviors in structure functions in 3D Hall-MHD
turbulence. Given the large range of scales that we can cover in this shell
model goy , its magnetic spectrum $E^{b}(k)$ reveals two, distinct, power-law ranges.
We carry out the most comprehensive numerical study of this 3D Hall-MHD shell
model attempted so far; and thereby we characterize and quantify,
for the first time, the properties of the order-$p$ magnetic and
velocity structure functions in this model via their scaling exponents
$\zeta_{p}^{u}$ (fluid), $\zeta_{p}^{b,1}$ (magnetic, $k<k_{I}$ regime), and
$\zeta_{p}^{b,2}$ (magnetic, $k_{d}>k>k_{I}$ regime). We find that all
three sets of exponents show clear signatures of multiscaling. In particular,
we find the remarkable result that magnetic structure functions display
multiscaling for both the low-$k$ and the high-$k$
power-law ranges.
A second significant and surprising
finding is that, although the exponents $\zeta_{p}^{b,2}\neq\zeta_{p}^{b,1}$,
$\zeta_{p}^{b,2}\neq\zeta_{p}^{u}$,
the exponent ratios
$\frac{\zeta_{p}^{b,2}}{\zeta_{3}^{b,2}}\simeq\frac{\zeta_{p}^{b,1}}{\zeta_{3}^{b,1}}\simeq\frac{\zeta_{p}^{u}}{\zeta_{3}^{u}}$
foot1 .
The 3D Hall-MHD equations for the velocity $\bf{u}$ and magnetic
$\bf{b}$ fields are
$$\displaystyle\frac{\partial\bf{u}}{\partial t}+(\bf{u}\cdot\nabla)\bf{u}$$
$$\displaystyle=$$
$$\displaystyle-\nabla p+\bf{j}\times\bf{b}+\nu\nabla^{2}\bf{u};$$
$$\displaystyle\frac{\partial\bf{b}}{\partial t}$$
$$\displaystyle=$$
$$\displaystyle\nabla\times[({\bf u}-d_{I}{\bf j})\times{\bf b}]+\eta\nabla^{2}\bf{b};$$
(1)
here $\nu$ and $\eta$ are the kinematic viscosity and
magnetic diffusivity, respectively, $d_{I}$ is the ion-inertial
length, the scale at which the Hall effect
becomes important, the current density vector $\bf{j}=\nabla\times\bf{b}$, the pressure is $p$, $\nabla\cdot\bf{b}=0$, and, at low Mach numbers, the flow is
incompressible, i.e., $\nabla\cdot\bf{u}=0$. We
define the dissipation length scales $\eta_{d}^{u}=(\nu^{3}/\varepsilon^{u})^{1/4}$ and $\eta_{d}^{b}=(\eta^{3}/\varepsilon^{b})^{1/4}$, where $\varepsilon^{u}$ and
$\varepsilon^{b}$ are the kinetic and magnetic-energy
dissipation rates, respectively; we restrict ourselves to
decaying turbulence, so we do not include forcing terms.
The Hall term, which is a singular perturbation of the
MHD equations meyrand12 , has a significant effect
if $d_{I}\gg\eta_{d}^{u},\,\eta_{d}^{b}$. The shell-model
versions of Eq. (1) are hallmhdshell ; galtiershell :
$$\displaystyle\frac{du_{n}}{dt}$$
$$\displaystyle=$$
$$\displaystyle-\nu k_{n}^{2}u_{n}-\nu_{2}k_{n}^{4}u_{n}+\iota[\Phi_{n}^{u}]^{*},$$
$$\displaystyle\frac{db_{n}}{dt}$$
$$\displaystyle=$$
$$\displaystyle-\eta k_{n}^{2}b_{n}-\eta_{2}k_{n}^{4}b_{n}+\iota[\Phi_{n}^{b}]^{*},$$
(2)
where $u_{n}$ and $b_{n}$ are, respectively, the complex velocity and magnetic
field in the shell $n$, $*$ denotes complex conjugation, $1\leq n\leq N$,
where $N$ is the total number of shells, $\Phi_{n}^{u}=A_{n}(u_{n+1}u_{n+2}-b_{n+1}b_{n+2})+B_{n}(u_{n-1}u_{n+1}-b_{n-1}b_{n+1})+C_{n}(u_{n-2}u_{n-1}-b_{n-2}b_{n-1})$ and $\Phi_{n}^{b}=D_{n}(u_{n+1}b_{n+2}-b_{n+1}u_{n+2})+E_{n}(u_{n-1}b_{n+1}-b_{n-1}u_{n+1})-F_{n}(u_{n-2}b_{n-1}-b_{n-2}u_{n-1})-d_{I}[G_{n}b_{n+1}b_{n+2}+H_{n}b_{n-1}b_{n+1}+I_{n}b_{n-2}b_{n-1}]$, with
$A_{n}=k_{n}$, $B_{n}=-\frac{1}{2}k_{n-1}$, $C_{n}=-\frac{1}{2}k_{n-2}$,
$D_{n}=\frac{1}{6}k_{n}$, $E_{n}=\frac{1}{3}k_{n-1}$, $F_{n}=\frac{2}{3}k_{n-2}$, $G_{n}=-\frac{1}{2}(-1)^{n}k_{n}^{2}$, $H_{n}=-\frac{1}{2}(-1)^{n-1}k_{n-1}^{2}$, $I_{n}=(-1)^{n-2}k_{n-2}^{2}$, $k_{n}=2^{n}k_{0}$, and $k_{0}=1/16$; the values of the coefficients $A_{n}-I_{n}$ are
determined by enforcing the shell-model analogs of the Hall-MHD conservation
laws, in the inviscid, unforced limit; the conserved quantities are the total
energy $E=\Sigma_{n}(|u_{n}|^{2}+|b_{n}|^{2})/2$, the magnetic helicity $H_{M}=\Sigma_{n}(-1)^{n}|b_{n}|^{2}/2k_{n}$, and the ion helicity $H_{I}=\Sigma_{n}\left((b_{n}u_{n}^{*}+b_{n}^{*}u_{n})+d_{I}(-1)^{n}k_{n}|u_{n}|^{2}/2\right)$; the hyperviscosity
$\nu_{2}$ and the magnetic hyperdiffusivity $\eta_{2}$ have to be included for
numerical stability meyrand12 ; galtiershell . We use the boundary
conditions $A_{N-1}=A_{N}=B_{1}=B_{N}=C_{1}=C_{2}=0$, $D_{N-1}=D_{N}=E_{1}=E_{N}=F_{1}=F_{2}=0$, $G_{N-1}=G_{N}=H_{1}=H_{N}=I_{1}=I_{2}=0,$ and the
following initial values for $u_{n}=u^{(0)}k_{n}^{-1/3}e^{-k_{n}^{2}+\iota\phi^{u}}$ and $b_{n}=b^{(0)}k_{n}^{-1/3}e^{-k_{n}^{2}+\iota\phi^{b}}$; here
$u^{(0)}=0.5$, $b^{(0)}=0.05$, and the random phases $\phi^{u}$ and $\phi^{b}$
are distributed uniformly on the interval $[-\pi,+\pi]$; different values of
these random phases distinguish different initial conditions; our results are
averaged over $7500$ independent initial conditions foot2 . We set $N=22$, use a
second-order, slaved Adams-Bashforth scheme cox for solving the shell-model
ordinary differential equations (2), and calculate the energy spectra
$E^{u}(k_{n})=\frac{1}{2}|u_{n}|^{2}/k_{n}$ and $E^{b}(k_{n})=\frac{1}{2}|b_{n}|^{2}/k_{n}$
(the superscripts $u$ and $b$ refer to velocity
and magnetic field, respectively), the root-mean-square velocity
$u_{rms}=\sqrt{\Sigma_{n}\mid u_{n}\mid^{2}}$, the Taylor microscale $\lambda=\sqrt{\Sigma_{n}E^{u}(k_{n})/\Sigma_{n}k_{n}^{2}E^{u}(k_{n})}$, the Taylor-microscale
Reynolds number $Re_{\lambda}=u_{rms}\lambda/\nu_{\rm{eff}}$, the integral
length scale $l_{I}=\Sigma_{n}\left(E^{u}(k_{n})/k_{n}\right)/\Sigma_{n}E^{u}(k_{n})$, the effective
viscosity and magnetic diffusivity $\nu_{\rm{eff}}=\Sigma_{n}\left(\nu k_{n}^{2}E^{u}(k_{n})+\nu_{2}k_{n}^{4}E^{u}(k_{n})\right)/\Sigma_{n}k_{n}^{2}E^{u}(k_{n})$ and $\eta_{\rm{eff}}=\Sigma_{n}\left(\eta k_{n}^{2}E^{b}(k_{n})+\eta_{2}k_{n}^{4}E^{b}(k_{n})\right)/\Sigma_{n}k_{n}^{2}E^{b}(k_{n})$,
respectively, the effective magnetic Prandtl number iskakov
${Pr_{M}}_{\rm{eff}}=\nu_{\rm{eff}}/\eta_{\rm{eff}}$, and the dissipation rates
$\varepsilon^{u}=\nu_{\rm{eff}}\Sigma_{n}k_{n}^{2}E^{u}(k_{n})$ and $\varepsilon^{b}=\eta_{{\rm eff}}\Sigma_{n}k_{n}^{2}E^{b}(k_{n})$. The parameters of our simulations are given in Table I.
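The right-hand side of Eq. (2) can be transcribed directly from the definitions above. The following Python sketch is our own illustrative transcription, not the authors' code; the dissipation coefficients and $d_{I}$ are placeholder values. Padding out-of-range shells with zeros enforces the stated boundary conditions on the coefficients $A_{n}$ through $I_{n}$ automatically:

```python
import numpy as np

N, k0, dI = 22, 1.0 / 16.0, 0.05          # d_I is a placeholder value
k = k0 * 2.0 ** np.arange(1, N + 1)       # k_n = 2^n k_0, shells n = 1..N

def _g(x, i):
    """x[i] for 0-based i, or 0 outside the range; zero-padding the
    out-of-range neighbours implements the boundary conditions."""
    return x[i] if 0 <= i < len(x) else 0.0

def rhs(u, b, nu=1e-6, nu2=1e-12, eta=1e-6, eta2=1e-12):
    """Right-hand side of the shell-model equations (2) for complex
    length-N arrays u, b (entry i holds shell n = i + 1)."""
    du = np.zeros(N, dtype=complex)
    db = np.zeros(N, dtype=complex)
    for i in range(N):
        n = i + 1
        kn, kn1, kn2 = k[i], _g(k, i - 1), _g(k, i - 2)
        A, B, C = kn, -0.5 * kn1, -0.5 * kn2
        D, E, F = kn / 6.0, kn1 / 3.0, 2.0 * kn2 / 3.0
        G = -0.5 * (-1) ** n * kn ** 2
        H = -0.5 * (-1) ** (n - 1) * kn1 ** 2
        I = (-1) ** (n - 2) * kn2 ** 2
        phi_u = (A * (_g(u, i + 1) * _g(u, i + 2) - _g(b, i + 1) * _g(b, i + 2))
                 + B * (_g(u, i - 1) * _g(u, i + 1) - _g(b, i - 1) * _g(b, i + 1))
                 + C * (_g(u, i - 2) * _g(u, i - 1) - _g(b, i - 2) * _g(b, i - 1)))
        phi_b = (D * (_g(u, i + 1) * _g(b, i + 2) - _g(b, i + 1) * _g(u, i + 2))
                 + E * (_g(u, i - 1) * _g(b, i + 1) - _g(b, i - 1) * _g(u, i + 1))
                 - F * (_g(u, i - 2) * _g(b, i - 1) - _g(b, i - 2) * _g(u, i - 1))
                 - dI * (G * _g(b, i + 1) * _g(b, i + 2)
                         + H * _g(b, i - 1) * _g(b, i + 1)
                         + I * _g(b, i - 2) * _g(b, i - 1)))
        du[i] = -nu * kn ** 2 * u[i] - nu2 * kn ** 4 * u[i] + 1j * np.conj(phi_u)
        db[i] = -eta * kn ** 2 * b[i] - eta2 * kn ** 4 * b[i] + 1j * np.conj(phi_b)
    return du, db
```

Any stiff ODE integrator, or the slaved Adams-Bashforth scheme of cox used here, can then advance $(u_{n},b_{n})$ from the random initial conditions specified above.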
In shell models, the equal-time, order-$p$ structure functions for the velocity field and the magnetic
field are defined, respectively, as $S^{u}_{p}(k_{n})=\langle|u_{n}|^{p}\rangle\sim k_{n}^{\zeta^{u}_{p}}$ and
$S^{b}_{p}(k_{n})=\langle|b_{n}|^{p}\rangle$, where $S^{b}_{p}(k_{n})\sim k_{n}^{\zeta_{p}^{b,1}}$ ($k<k_{I}$) and
$S^{b}_{p}(k_{n})\sim k_{n}^{\zeta_{p}^{b,2}}$ ($k_{d}>k>k_{I}$).
However, to remove the effects of an underlying three-cycle in GOY-type shell models basu ; goy , we use the modified structure functions
$\Sigma_{p}^{u}(k_{n})=\langle|\Im[u_{n+2}u_{n+1}u_{n}+1/4u_{n-1}u_{n}u_{n+1}]|^{p/3}\rangle$ and $\Sigma_{p}^{b}(k_{n})=\langle|\Im[b_{n+2}b_{n+1}b_{n}+1/4b_{n-1}b_{n}b_{n+1}]|^{p/3}\rangle,$ from which
we can obtain multiscaling exponents via $\Sigma_{p}^{u}(k_{n})\sim k_{n}^{\zeta_{p}^{u}}$, $\Sigma_{p}^{b}(k_{n})\sim k_{n}^{\zeta_{p}^{b,1}}$ ($k_{n}<k_{I}$), and
$\Sigma_{p}^{b}(k_{n})\sim k_{n}^{\zeta_{p}^{b,2}}$ ($k_{d}>k_{n}>k_{I}$).
We also use the extended self-similarity (ESS) procedure ess to determine
exponent ratios from slopes of log-log plots of $\Sigma_{p}^{u}$ versus $\Sigma_{3}^{u}$
and their magnetic counterparts (Fig. 2 inset).
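As a concrete illustration, the modified structure functions and the ESS slopes can be computed as follows (a minimal sketch; the function names, array layout, and fitting range are our own assumptions, with an ensemble of shell-state snapshots stored as rows of an array):

```python
import numpy as np

def sigma_p(snapshots, p):
    """Modified structure function Sigma_p(k_n), averaged over the ensemble of
    complex shell states given as rows; entry j corresponds to shell n = j + 2."""
    x = np.asarray(snapshots, dtype=complex)
    core = (x[:, 3:] * x[:, 2:-1] * x[:, 1:-2]             # x_{n+2} x_{n+1} x_n
            + 0.25 * x[:, :-3] * x[:, 1:-2] * x[:, 2:-1])  # (1/4) x_{n-1} x_n x_{n+1}
    return np.mean(np.abs(core.imag) ** (p / 3.0), axis=0)

def ess_ratio(snapshots, p, fit_range):
    """ESS estimate of zeta_p / zeta_3: slope of log Sigma_p versus log Sigma_3."""
    s_p = sigma_p(snapshots, p)[fit_range]
    s_3 = sigma_p(snapshots, 3)[fit_range]
    slope, _ = np.polyfit(np.log(s_3), np.log(s_p), 1)
    return slope
```

On an exactly self-similar field $x_{n}=k_{n}^{-h}e^{\iota\phi}$ this recovers $\zeta_{p}=ph$ and the ESS ratio $p/3$, which is a useful sanity check before applying it to run data.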
In Fig. 1(a) we show plots of $\varepsilon^{u}$ (red, upper curve) and
$\varepsilon^{b}$ (blue, lower curve) versus the rescaled time $t/\tau$, for
run R2, where the box-size eddy-turnover time $\tau=1/(u_{1}k_{1})$ is evaluated
at the principal peak of $\varepsilon^{u}$. This peak signals the completion
of the Richardson cascade sahoo11 , as we can see from the time evolution
of $E^{u}(k_{n})$ and $E^{b}(k_{n})$, in the insets of Figs. 1(b) and 1(c), respectively,
where the red lines with full circles denote the spectra at cascade completion.
We evaluate the spectral-slope exponent $\alpha^{u}$
at cascade completion from log-log plots of $E^{u}(k_{n})$ versus $k_{n}$ as
shown in Fig. 1(b). We find $\alpha^{u}\simeq 5/3$, as predicted by dimensional analysis,
and illustrated in Fig. 1(b) by a thick, blue line.
In Fig. 1(c) we show a representative plot of $E^{b}(k_{n})$; we
see two different scaling regimes clearly: (1) from the low-$k$ one
(solid, blue line), we find that $\alpha^{b,1}\simeq 5/3$, which is consistent with dimensional analysis;
(2) from the high-$k$ regime, we obtain $\alpha^{b,2}=3.45\pm 0.06$, which is close to the
dimensional-analysis value 11/3 for IMHD systems, such as ours, in which induction and
Hall terms are comparable galtiershell ; meyrand12 . These spectral exponents are
consistent with those in solar-wind experiments solarwind .
In Table I we provide our results for all three spectral exponents;
we obtain the values of these and all other exponents from the means of our runs with 7500
independent initial conditions; the error bars follow from the associated
standard deviations.
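The spectral exponents quoted above are slopes of least-squares fits to log-log plots over the two ranges of shells. A minimal sketch of the procedure, on a synthetic two-range spectrum (the break point and amplitudes here are illustrative, not the run data):

```python
import numpy as np

def fit_slope(k, E, lo, hi):
    """Least-squares slope of log E versus log k over shells with lo <= k <= hi."""
    m = (k >= lo) & (k <= hi)
    slope, _ = np.polyfit(np.log(k[m]), np.log(E[m]), 1)
    return slope

# synthetic magnetic spectrum with a spectral break at k_I (illustrative values)
k = (1.0 / 16.0) * 2.0 ** np.arange(1, 23)
kI = 32.0
Eb = np.where(k < kI, k ** (-5.0 / 3.0), kI ** 2.0 * k ** (-11.0 / 3.0))
# the prefactor kI**2 makes the two power-law branches match at k = kI
alpha_b1 = -fit_slope(k, Eb, k[0], kI)      # low-k range, expect 5/3
alpha_b2 = -fit_slope(k, Eb, kI, k[-1])     # high-k range, expect 11/3
```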
To characterize the statistical properties of the Hall-MHD system, we now calculate the
equal-time exponents $\zeta_{p}^{u}$, $\zeta_{p}^{b,1}$ ($k<k_{I}$), and $\zeta_{p}^{b,2}$ ($k_{d}>k>k_{I}$)
via the modified structure functions $\Sigma_{p}^{u}$ and $\Sigma_{p}^{b}$, just
after cascade completion. We find $\zeta_{3}^{u}=1=\zeta_{3}^{b,1}$, which is consistent with dimensional analysis.
In Fig. 2 we show the order-$p$ equal-time exponents $\zeta_{p}^{u}$ (blue, filled
circles) and $\zeta_{p}^{b,1}$ (red, filled squares) for integer values of $p$ between 1 and 10;
the thick, black line illustrates the dimensional, simple K41 scaling.
We see that $\zeta_{p}^{u}\simeq\zeta_{p}^{b,1}$ and both exponents show clear multiscaling corrections
to K41 scaling, with values
consistent with those obtained in 3D MHD turbulence sahoo11 .
We obtain these multiscaling exponents by using $\Sigma_{p}^{u}$ and $\Sigma_{p}^{b}$
and the ESS procedure ess ,
to extend the scaling range. However, the result $\zeta_{3}^{u}\simeq 1$ ensures that
the exponent ratios and the exponents themselves are equal (within error bars).
In Table II, we list the order-$p$ equal-time exponents $\zeta_{p}^{u}$
and $\zeta_{p}^{b,1}$ for integer values of $p$ between 1 and 10.
We finally turn to the exponents $\zeta_{p}^{b,2}$, which characterize the
high-$k$ regime ($k_{d}>k>k_{I}$). Solar-wind measurements solarwind of
$\zeta_{p}^{b,2}$ suggest simple-scaling behaviour, with $\zeta_{p}^{b,2}$
a linear function of $p$. In Fig. 3 we plot $\zeta_{p}^{b,2}$
(obtained without ESS) versus $p$. Our results for $\zeta^{b,2}_{p}$ are in
qualitative agreement with solar-wind measurements to the extent that there is
only mild multiscaling; i.e., $\zeta^{b,2}_{p}$ is a nonlinear, monotone,
increasing function of $p$, but the deviation from a linear dependence on $p$
is not very pronounced.
Although the exponents $\zeta^{b,2}_{p}$, for all the runs
R1-R4, are in agreement with each other, given our error-bars, their mean values
seem to decrease with $d_{I}$.
We now use the ESS procedure to obtain the exponent ratios $\zeta_{p}^{b,2}/\zeta_{3}^{b,2}$, which
are plotted versus $p$ in the inset of Fig. 3 (note $\zeta_{3}^{b,2}\neq 1$). This ESS plot is remarkable for two reasons: (1)
There is a clear signature of multiscaling (the thick, black line in the inset indicates simple scaling);
(2) although the exponents $\zeta_{p}^{b,2}$ are very different from $\zeta_{p}^{u}$ and
$\zeta_{p}^{b,1}$, the ratios $\zeta_{p}^{b,2}/\zeta_{3}^{b,2}$ are equal to
$\zeta_{p}^{u}$ and $\zeta_{p}^{b,1}$ (within error bars). In Table II, we list both $\zeta_{p}^{b,2}$ and
$\zeta_{p}^{b,2}/\zeta_{3}^{b,2}$ for different values of $p$ for the representative run R2.
We hope our extensive studies of the multiscaling of structure functions
in a shell model for 3D Hall-MHD will stimulate high-precision and high-resolution
experimental and DNS
studies to determine conclusively whether 3D Hall-MHD turbulence shows
multiscaling for $k_{d}>k>k_{I}$. Our ESS results suggest
that structure functions show mild, but distinct, multiscaling in this region.
To obtain quantitative agreement with solar-wind
exponents, we must, of course, carry out DNS studies of the 3D Hall-MHD
equations (1) and include compressibility effects and a
mean magnetic field vkrishan ; shaikh09 ; meyrand12 ; however, current
computational resources limit severely the spatial resolution of
such DNS studies so they cannot (a) uncover the multiscaling of magnetic-field
structure functions in 3D Hall-MHD turbulence in both low- and high-$k$
power-law ranges and (b) obtain well-averaged multiscaling exponent ratios.
For the moment, therefore, the shell-model study, which we have undertaken,
provides the only way of understanding the multiscaling of structure functions
in the solar wind solarwind and the apparent and intriguing universality of
the exponent ratios.
This apparent universality needs to be investigated in detail in experiments and DNS.
Acknowledgements.
We thank A. Basu and V. Krishan for discussions, S. Galtier for the preprint of
Ref. meyrand12 , CSIR, UGC, and DST (India) for support, and SERC (IISc)
for computational resources. R.P. and G.S. are members of the International
Collaboration for Turbulence Research; G.S., R.P., and S.S.R. acknowledge support from
the COST Action MP0806; S.S.R. thanks the European Research Council for support under the European
Community's Seventh Framework Program (FP7/2007-2013, Grant Agreement No. 240579).
References
(1)
A. R. Choudhuri, The Physics of Fluids and Plasmas: An
Introduction for Astrophysicists (Cambridge University Press, Cambridge, UK,
1998); V. Krishan, Astrophysical Plasmas and Fluids (Kluwer, Dordrecht,
1999); D. Biskamp, Magnetohydrodynamic Turbulence (Cambridge University
Press, Cambridge, UK, 2003) ; G. Rüdiger and R. Hollerbach, The Magnetic
Universe: Geophysical and Astrophysical Dynamo Theory (Wiley, Weinheim,
2004); M.K. Verma, Phys. Rep. 401, 229 (2004).
(2)
S. Cowley, J.-F. Pinton, and A. Pouquet, eds.
New J. Phys. 9 (2007).
(3)
G. Sahoo, P. Perlekar and R. Pandit, New J. Phys.
13, 013036 (2011).
(4)
D. Biskamp and W.-C. Müller, Phys. Plasmas
7, 4889 (2000); P.D. Mininni and A. Pouquet, Phys. Rev. E
80, 025401 (2009); A. Brandenburg, D. Sokoloff, and K. Subramanian,
Space Science Reviews, 169, Issue 1-4, 123 (2012).
(5)
A.N. Kolmogorov, Dokl. Akad. Nauk SSSR 30,
299–303 (1941).
(6)
U. Frisch, Turbulence: the Legacy of A.N. Kolmogorov,
(Cambridge University Press, Cambridge, UK, 1995).
(7)
K.H. Kiyani, et al., Phys. Rev. Lett. 103, 075006 (2009);
C.W. Smith, K. Hamilton, B.J. Vasquez and R.J. Leamon, The
Astrophysical Journal 645, L85 (2006).
(8)
V. Krishan and S.M. Mahajan, Solar Physics
220 29 (2004); J. Geophys. Research
109, A11105 (2004).
(9)
D. Shaikh and P.K. Shukla, Phys. Rev. Lett. 102,
045004 (2009); P.D. Mininni, A. Alexakis, and A. Pouquet, J. Plasma Phys.,
73, Part 3, 377 (2007).
(10)
R. Meyrand and S. Galtier, Phys. Rev. Lett.
109, 194501 (2012).
(11)
D. Hori, M. Furukawa, S. Ohsaki, and Z.
Yoshida, J. Plasma Fusion Res. 81 No.3, 141
(2005); D. Hori and H. Miura, Plasma and Fusion Research
3, s1053 (2008).
(12)
S. Galtier, and E. Buchlin The
Astrophysical Journal 656, 560 (2007).
(13)
D. Biskamp, E. Schwarz, and J.F. Drake, Phys. Rev. Lett.
76, 1264 (1996).
(14)
A. Basu, A. Sain, S. Dhar, and R. Pandit,
Phys. Rev. Lett. 81, 2687 (1998); C. Kalelkar
and R. Pandit, Phys. Rev. E 69, 046304 (2004);
G. Sahoo, D. Mitra, and R. Pandit, Phys. Rev. E
81, 036317 (2010).
(15)
P. Frick and D. Sokoloff,
Phys. Rev. E 57, 4155 (1998); S.A. Lozhkin, D.D.
Sokolov, and P.G. Frick, Astron. Rep. 43, 753
(1999); A. Brandenburg, K. Enqvist, and P. Olesen, Phys.
Rev. D 54, 1291 (1996); P. Giuliani and V.
Carbone, Europhys. Lett. 43(5), 527 (1998).
(16)
E. Gledzer, Sov. Phys. Dokl.
18, 216 (1973); K. Ohkitani and M. Yamada, Prog. Theor. Phys. 81, 329 (1989); L.P.
Kadanoff, D. Lohse, and J. Wang, Phys. Fluids
7, 517 (1995).
(17)
We note, in passing, that $\zeta_{p}^{u}\simeq\zeta_{p}^{b,1}\simeq\zeta_{p}^{\rm fl}$,
where $\zeta_{p}^{\rm fl}$ are the equal-time scaling exponents obtained for
3D fluid turbulence frisch95 .
(18)
Earlier studies have suggested the strong universality of scaling exponents,
i.e., the equality of multiscaling exponents obtained from studies of
decaying and forced turbulence strong .
(19)
V.S. L’vov, R. A. Pasmanter, A. Pomyalov, and I. Procaccia, Phys. Rev. E 67, 066310 (2003);
S. S. Ray, D. Mitra, and R. Pandit, New. J. Phys 10, 033003 (2008).
(20)
S. M. Cox and P. C. Matthews, J. Comput.
Phys. 176, 430 (2002).
(21)
See Supplemental Material at [URL to be inserted by publisher] for analogous
results from runs R1, R3, and R4.
(22)
A.B. Iskakov, et al., Phys. Rev.
Lett. 98, 208501 (2007).
(23)
R. Benzi, S. Ciliberto, C. Baudet, F.
Massaioli and S. Succi, Phys. Rev. E 48, R29
(1993); S. Chakraborty, U. Frisch and S.S. Ray, J. Fluid
Mech. 649, 275 (2010).
Appendix A Supplemental Material
In this Supplemental Material we describe details of our work that are of
interest only to specialists of the field. The notations and abbreviations
used in this Supplemental Material are the same as in the main paper.
In our main paper, we discuss the statistical nature of turbulence in the
Hall-MHD plasma and, in particular, the scaling properties of various structure
functions, in great detail. To substantiate our claims, we show in the main
paper representative data from only a single set of simulations (except in Fig.
3 where we show results from all our simulations), namely, Run R2 (see Table I
of the main paper). In Fig. 3 of the main paper we do show exponents from all
the four different sets of simulation (as detailed in Table I of the main
paper); however, in Table II (main paper) we list exponents from R2 only; because
the different sets of simulations all agree with each other, representative
data from one set of simulations (R2), in the main paper, is enough to
highlight the nature of multiscaling in Hall-MHD turbulence.
We give here the equal-time exponents from the runs
R1, R3, R4 in Table I; these exponents are in agreement with the ones listed for
R2 in Table II of the main paper. Furthermore, we show plots for the fluid and
magnetic energy dissipation rates (Fig 1(a) for run R1, Fig 2(a) for run R3, and Fig
3(a) for run R4), the kinetic energy spectrum (Fig 1(b) for run R1, Fig 2(b) for run R3,
and Fig 3(b) for run R4), and the magnetic energy spectrum (Fig 1(c) for run R1, Fig
2(c) for run R3, and Fig 3(c) for run R4); these plots are analogous to the plots shown
in Fig.1, for run R2, in the main paper.
The parameters of the runs R1, R3, and R4 (along with those for run R2) are given in
Table (I) of our main paper. |
Two-sided ideals in the ring of differential operators on a Stanley-Reisner ring
Ketil Tveiten
Ketil Tveiten
Matematiska Institutionen
Stockholms Universitet
106 91 Stockholm.
ktveiten@math.su.se
Abstract.
Let $R$ be a Stanley-Reisner ring (that is, a reduced monomial ring) with coefficients in a domain $k$, and $K$ its associated simplicial complex. Also let $D_{k}(R)$ be the ring of $k$-linear differential operators on $R$. We give two different descriptions of the two-sided ideal structure of $D_{k}(R)$ as being in bijection with certain well-known subcomplexes of $K$; one based on explicit computation in the Weyl algebra, valid in any characteristic, and one valid in characteristic $p$ based on the Frobenius splitting of $R$. A result of Traves [Tra99] on the $D_{k}(R)$-module structure of $R$ is also given a new proof and different interpretation using these techniques.
Key words and phrases:Rings of differential operators, Stanley-Reisner rings
2010 Mathematics Subject Classification: 16S32, 13N10, 13F55
1. Introduction
Rings of $k$-linear differential operators $D_{k}(R)$ on a $k$-algebra $R$ are generally difficult to study, even when the base ring $R$ is well-behaved. Some descriptions of $D_{k}(R)$ are given in e.g. [Mus94] for the case of toric varieties, [Bav10a] and [Bav10b] for general smooth affine varieties (in zero and prime characteristic respectively), and [Tra99], [Tri97] and [Eri98] for Stanley-Reisner rings.
Some criteria for simplicity of $D_{k}(R)$ exist (see [SVdB97] and [Sai07] among others), and the study of their left and right ideals, through the theory of $D$-modules, is well developed.
When $D_{k}(R)$ is not simple, however, it is an interesting problem to give a description of its two-sided ideals; the purpose of this paper is to do this for the case of Stanley-Reisner rings. Every Stanley-Reisner ring is the face ring $R_{K}$ of a simplicial complex $K$, and we will give two different descriptions of the two-sided ideal structure of $D_{k}(R)$ in terms of the combinatorial structure of $K$; namely, the lattice of ideals is in a certain sense determined by the poset of subcomplexes of $K$ that are stars of some face of $K$. The first description is based on explicit computations with monomials in the Weyl algebra, and the second (valid only in prime characteristic) takes advantage of the Frobenius splitting of $R$.
2. Some preliminaries
Let us fix some notation. Throughout, $k$ is a commutative domain. $K$ will denote an abstract simplicial complex on vertices $x_{1},\ldots,x_{n}$; we will not distinguish between $K$ as an abstract simplicial complex and its topological realization. In the corresponding face rings (see 2.1) the indeterminate corresponding to a vertex $x_{i}$ will also be named $x_{i}$ to avoid notational clutter.
Elements of $K$ will be referred to as simplices or faces. For a face $\sigma\in K$, we let $x_{\sigma}:=\prod_{x_{i}\in\sigma}x_{i}$.
$R$ will always mean a face ring $R_{K}$ for a simplicial complex $K$. We use standard multiindex notation: $x^{a}$ denotes $x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}$, and $|a|=a_{1}+\cdots+a_{n}$.
We briefly recall for the benefit of the reader some basics of Stanley-Reisner rings, omitting the proofs.
Definition 2.1.
Let $K$ be an abstract simplicial complex on vertices $x_{1},\ldots,x_{n}$. The Stanley-Reisner ring, or face ring, of $K$ with coefficients in $k$ is the ring $R_{K}=k[x_{1},\ldots,x_{n}]/I_{K}$, where $I_{K}=\langle x_{i_{1}}\cdots x_{i_{r}}|\{x_{i_{1}},\ldots,x_{i_{r}}\}\not\in K\rangle$ is the ideal of square-free monomials corresponding to the non-faces of $K$, called the face ideal of $K$.
Geometrically, $R_{K}$ is the coordinate ring of the cone on $K$, so $\dim R_{K}=\dim K+1$. Accordingly, when we talk about support of elements, we will refer to faces of $K$ when strictly speaking we mean the cones on these faces. If $K=\Delta_{n}$ is a simplex, $I_{K}$ is the zero ideal, and $R_{K}$ is the polynomial ring in $n$ variables. If $K=K^{\prime}\ast K^{\prime\prime}$ is the simplicial join of complexes $K^{\prime}$ and $K^{\prime\prime}$, then $R_{K}\simeq R_{K^{\prime}}\otimes_{k}R_{K^{\prime\prime}}$. Face rings are exactly the reduced monomial rings, i.e. quotients of polynomial rings by square-free monomial ideals.
Given a simplicial complex $K$, we will have use for a well-known class of subsets of $K$:
Definition 2.2.
Let $\sigma\in K$ be a face. The closed star of $\sigma$ in $K$ is the subcomplex
$$st(\sigma,K):=\{\tau\in K|\tau\cup\sigma\in K\}.$$
The open star of $\sigma$ in $K$ is the set
$$st(\sigma,K)^{\circ}:=\{\tau\in K|\sigma\cup\tau\in K\wedge\sigma\cap\tau\neq\varnothing\};$$
$st(\sigma,K)^{\circ}$ is the interior of $st(\sigma,K)$ in $K$, and $st(\sigma,K)$ is the closure of $st(\sigma,K)^{\circ}$ in $K$.
The open complement of $st(\sigma,K)$ is the set (not usually a subcomplex)
$$U_{\sigma}(K)=K\setminus st(\sigma,K)=\{\tau\in K|\tau\cup\sigma\not\in K\}.$$
Stars are important because the support of a principal monomial ideal of $R_{K}$, considered as an $R_{K}$-module, is exactly equal to the open star of some face, and the closed star is the smallest subcomplex containing it. For the remainder, we will take star to mean closed star.
We will not have much need of comparing stars associated to different subcomplexes and so will often write simply $st(\sigma),U_{\sigma}$ if no confusion is likely to result. For completeness, we repeat a few simple facts:
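For readers who wish to experiment, the stars and open complements of Definition 2.2 are straightforward to compute when $K$ is given by its maximal simplices. The following Python sketch (our own illustration, with the empty set counted as a face) implements them directly from the definitions:

```python
from itertools import combinations

def faces(maximal):
    """All faces (including the empty face) of the simplicial complex
    generated by the given maximal simplices."""
    out = set()
    for m in maximal:
        s = sorted(m)
        out |= {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}
    return out

def closed_star(sigma, K):
    """st(sigma, K) = {tau in K : tau ∪ sigma in K}."""
    sigma = frozenset(sigma)
    return {tau for tau in K if (tau | sigma) in K}

def open_star(sigma, K):
    """st(sigma, K)^o = {tau in K : sigma ∪ tau in K and sigma ∩ tau nonempty}."""
    sigma = frozenset(sigma)
    return {tau for tau in K if (tau | sigma) in K and (tau & sigma)}

def open_complement(sigma, K):
    """U_sigma(K) = K \\ st(sigma, K)."""
    return K - closed_star(sigma, K)
```

For example, on the complex generated by $\{1,2,3\}$ and $\{3,4\}$ (a triangle glued to an edge at the vertex $3$), $st(\{1\},K)$ is the full triangle and $U_{\{1\}}$ consists of $\{4\}$ and $\{3,4\}$.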
Lemma 2.3.
(i) If $\sigma\subset\tau$ are faces in $K$, $st(\sigma,K)\supset st(\tau,K)$;
(ii) If $L\subset K$ is a subcomplex containing $\sigma$, $st(\sigma,L)\subset st(\sigma,K)$;
(iii) For a face $\sigma=\tau\cup\{x\}$, $st(\sigma,K)=st(x,st(\tau,K))$;
(iv) $st(\tau)\subset st(\sigma)$ if and only if $\{\text{maximal simplices in }K\text{ that contain }\tau\}\subset\{\text{maximal simplices in }K\text{ that contain }\sigma\}$;
(v) $\sigma\in st(\tau)\Leftrightarrow\tau\in st(\sigma)$;
(vi) If $\sigma\cup\tau$ is a face of $K$, $st(\sigma)^{\circ}\cap st(\tau)^{\circ}=st(\sigma\cup\tau)^{\circ}$.
Proof.
$(i)$, $(ii)$ and $(v)$ are obvious. $(iv)$ follows from the fact that a complex is determined by its maximal cells.
$(iii)$ follows from unwrapping the definitions:
$$\begin{aligned}
st(x,st(\tau,K))&=\{\alpha\in st(\tau,K)\,|\,\alpha\cup\{x\}\in st(\tau,K)\}\\
&=\{\alpha\in st(\tau,K)\,|\,\alpha\cup\{x\}\cup\tau\in K\}\\
&=\{\alpha\in st(\tau,K)\,|\,\alpha\cup\sigma\in K\}\\
&=st(\tau,K)\cap st(\sigma,K)\\
&=st(\sigma,K),
\end{aligned}$$
where the last equality follows from (i).
To show $(vi)$, note that for any $\sigma\in K$, $st(\sigma)^{\circ}$ is the interior of the union of maximal simplices containing $\sigma$. It follows that $st(\sigma\cup\tau)^{\circ}$ is the interior of the union of maximal simplices containing both $\sigma$ and $\tau$, in other words the maximal simplices in $st(\sigma)\cap st(\tau)$.
∎
We will need some properties of the face ideals $I_{st(\sigma)}$ and face rings $R_{st(\sigma)}$ of the subcomplexes $st(\sigma,K)$.
Lemma 2.4.
(1) If $K_{1},K_{2}$ are subcomplexes of $K$, $I_{K_{1}}+I_{K_{2}}=I_{K_{1}\cap K_{2}}$ and $I_{K_{1}}\cap I_{K_{2}}=I_{K_{1}\cup K_{2}}$;
(2) $I_{st(\sigma)}=\langle x_{\tau}|\tau\in U_{\sigma}\rangle$;
(3) The minimal primes of $I_{K}$ are the face ideals $I_{st(\tau)}$ for the maximal simplices $\tau$.
Proof.
The first two items follow from the definition of $I_{st(\sigma)}$.
For the last item, observe that $I_{st(\sigma)}$ is clearly prime when $\sigma$ is a maximal simplex, as $I_{st(\sigma)}=\langle x_{i}|x_{i}\in U_{\sigma}\rangle$ and monomial ideals are prime exactly when they are generated by a subset of the variables; observe also that all $I_{st(\sigma)}$ are radical. These observations together with item 1 give the result, as $I_{K}=\bigcap_{\sigma\subset K\text{ maximal}}I_{st(\sigma)}$.
∎
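As a quick sanity check of Lemma 2.4, the following Python sketch (helper names are ours) computes the minimal primes of a small face ring as the vertex complements of the maximal faces and verifies that a square-free monomial lies in all of them exactly when it is supported on a non-face, reflecting $I_{K}=\bigcap I_{st(\sigma)}$.

```python
from itertools import combinations

def faces(maximal):
    """All faces of the complex with the given maximal simplices."""
    K = set()
    for m in maximal:
        verts = sorted(m)
        for r in range(len(verts) + 1):
            K.update(frozenset(c) for c in combinations(verts, r))
    return K

def minimal_primes(maximal, vertices):
    """Lemma 2.4(3): the minimal primes of I_K are the face ideals
    I_{st(tau)} for maximal tau; for maximal tau this ideal is
    generated by the variables outside tau."""
    return [frozenset(vertices) - frozenset(m) for m in maximal]

# A chain of three 1-simplices on the vertices 1, 2, 3, 4
maximal = [{1, 2}, {2, 3}, {3, 4}]
K = faces(maximal)
primes = minimal_primes(maximal, {1, 2, 3, 4})
# primes == [{3, 4}, {1, 4}, {1, 2}]
```

A square-free monomial $x_{\beta}$ lies in a monomial prime iff $\beta$ meets its generator set, so $x_{\beta}$ lies in every minimal prime iff $\beta$ is a non-face.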
We intend to study the ring of differential operators on $R$, so let us define what that is:
Definition 2.5.
The ring $D_{k}(R)$ of $k$-linear differential operators on a $k$-algebra $R$ is defined inductively by
$$D_{k}(R)=\bigcup_{n\geq 0}D_{k}^{n}(R)$$
where $D_{k}^{0}(R)=R$ and for $n>0$, $D_{k}^{n}(R):=\{\phi\in End_{k}(R)|\forall r\in R:[\phi,r]\in D_{k}^{n-1}(R)\}$. Elements of $D_{k}^{n}(R)\setminus D_{k}^{n-1}(R)$ are said to have order $n$, and there is a natural filtration
$$D_{k}^{0}(R)\subset D_{k}^{1}(R)\subset D_{k}^{2}(R)\subset\cdots$$
on $D_{k}(R)$ called the order filtration.
Definition 2.6.
The Weyl algebra in $n$ variables over $k$ is the ring of differential operators on the polynomial ring $k[x_{1},\ldots,x_{n}]$. It is generated as an $R$-algebra by the divided power operators $\partial_{i}^{(a)}=\frac{1}{a!}\frac{\partial^{a}}{\partial x_{i}^{a}}$,
with the relations $[x_{i},x_{j}]=[\partial_{i}^{(a)},\partial_{j}^{(b)}]=0$ for $i\neq j$, $\partial_{i}^{(a)}\partial_{i}^{(b)}=\binom{a+b}{a}\partial_{i}^{(a+b)}$ and $[\partial_{i}^{(b)},x_{i}]=\partial_{i}^{(b-1)}$ (in particular $[\partial_{i},x_{i}]=1$).
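The divided-power relations can be checked numerically by letting the operators act on one-variable polynomials, using $\partial^{(a)}x^{n}=\binom{n}{a}x^{n-a}$. A minimal Python sketch (our own helper names):

```python
from math import comb

def d(a, poly):
    """Apply the divided power operator: d^(a) x^n = C(n, a) x^(n-a).
    Polynomials are dicts {exponent: coefficient}."""
    return {n - a: comb(n, a) * c for n, c in poly.items() if n >= a}

def times_x(poly):
    """Multiply by x."""
    return {n + 1: c for n, c in poly.items()}

def scale(k, poly):
    return {n: k * c for n, c in poly.items()}

def minus(p, q):
    """p - q, dropping zero coefficients."""
    out = {n: p.get(n, 0) - q.get(n, 0) for n in set(p) | set(q)}
    return {n: c for n, c in out.items() if c}
```

On every monomial one then verifies $\partial^{(a)}\partial^{(b)}=\binom{a+b}{a}\partial^{(a+b)}$ and $[\partial^{(b)},x]=\partial^{(b-1)}$, the relations above.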
Remark 2.7.
We use the divided power operators rather than the usual vector fields $\frac{\partial}{\partial x_{i}}$ as the latter do not generate the whole ring of differential operators in the case of characteristic $p$; the divided power operators however always generate everything regardless of the characteristic, as they define differential operators over $\mathbb{Z}$ and so descend to any commutative ring. In characteristic zero, the derivations $\partial_{i}$ suffice to generate everything; in characteristic $p$ we need the full set of elements $\partial_{i}^{(p^{r})}$ for $r\geq 0$, which suffice due to the relation $\partial_{i}^{(a)}\partial_{i}^{(b)}=\binom{a+b}{a}\partial_{i}^{(a+b)}$.
In the following, $k$ will always be fixed, so we will omit it from the notation and write simply $D(R)$. Elements of $k$ will be referred to as constants. One easily verifies that an element $x^{a}\partial^{(b)}$ in the Weyl algebra has order $|b|$.
3. The two-sided ideals of $D(R)$
When $R=R_{K}$ is a face ring, there exist several descriptions of $D(R)$ in the literature, see [Tri97], [Eri98] and [Tra99]. We wish to give a description of the two-sided ideals of $D(R)$ in terms of the combinatorics of $K$; for our purposes, the following description due to Traves ([Tra99]) is the most convenient.
Theorem 3.1.
Let $k$ be a commutative domain, and $R=k[X]/J$ a reduced monomial ring. An element $x^{a}\partial^{(b)}=\prod_{i}x_{i}^{a_{i}}\partial_{i}^{(b_{i})}$ of the Weyl algebra over $k$ is in $D(R)$ if and only if for each minimal prime $\mathfrak{p}$ of $R$, we have either $x^{a}\in\mathfrak{p}$ or $x^{b}\not\in\mathfrak{p}$. $D(R)$ is generated as a $k$-algebra by these elements, and they form a free basis of $D(R)$ as a $k$-module.
Example 3.2.
Let $R=k[x_{1},x_{2},x_{3}]/(x_{1}x_{2}x_{3})$. The associated simplicial complex $K$ is the boundary of a 2-simplex. Then by 3.1, $D(R)=R\langle x_{i}^{a_{i}}\partial_{i}^{(b_{i})}|a_{i},b_{i}\in\mathbb{N}\rangle$.
Example 3.3.
Let $R=k[x_{1},x_{2},x_{3},x_{4}]/I$ where $I=(x_{1}x_{3},x_{1}x_{4},x_{2}x_{4})$. The associated complex $K$ is a chain of three 1-simplices, connected in order $x_{1},x_{2},x_{3},x_{4}$. Theorem 3.1 gives $D(R)=R\langle x_{1}^{a}\partial_{1}^{(b)},x_{2}^{a}\partial_{2}^{(b)},x_{3}^{a}\partial_{3}^{(b)},x_{4}^{a}\partial_{4}^{(b)},x_{1}^{a}\partial_{2}^{(b)},x_{4}^{a}\partial_{3}^{(b)}\rangle$ (for $a,b>0$).
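The membership criterion of Theorem 3.1 is purely combinatorial and easy to script. In the Python sketch below (a hypothetical helper, not from any library), a minimal prime is a set of variable indices, $x^{a}\in\mathfrak{p}$ means the support of $a$ meets that set, and we test the operators of Example 3.3.

```python
def in_D(a_supp, b_supp, primes):
    """Theorem 3.1: x^a d^(b) lies in D(R) iff for every minimal prime p,
    either x^a is in p (supp(a) meets p) or x^b is not (supp(b) misses p)."""
    return all(bool(a_supp & p) or not (b_supp & p) for p in primes)

# Minimal primes of Example 3.3, as vertex complements of the maximal faces
primes = [{3, 4}, {1, 4}, {1, 2}]
# x_1 d_2 is admissible, x_2 d_1 is not:
# in_D({1}, {2}, primes) -> True, in_D({2}, {1}, primes) -> False
```

This reproduces the generator list of Example 3.3: the toric operators $x_{i}^{a}\partial_{i}^{(b)}$ always pass, and among the mixed ones only $x_{1}^{a}\partial_{2}^{(b)}$ and $x_{4}^{a}\partial_{3}^{(b)}$ do.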
Note that in both examples, generators of the form $x_{i}^{a}\partial_{i}^{(b)}$ appear; it is not hard to see that such “toric” operators are always in $D(R)$. In 3.3, we also have generators of the form e.g. $x_{i}^{a}\partial_{j}^{(b)}$ (where $i\neq j$). To understand when this happens, we may give a somewhat more geometric formulation of 3.1:
Proposition 3.4.
Let $K$ be a simplicial complex and $R=R_{K}$ its face ring. Also let $x^{a}=\prod x_{i}^{a_{i}},x^{b}=\prod x_{j}^{b_{j}}$ be such that $supp(x^{a})=st(\sigma)$ and $supp(x^{b})=st(\tau)$, for some $\sigma,\tau\in K$. Then $x^{a}\partial^{(b)}=\prod_{i}x_{i}^{a_{i}}\partial_{i}^{(b_{i})}$ is in $D(R)$ if and only if $st(\sigma)\subset st(\tau)$.
Proof.
Let $P_{x^{a}}$ denote the set of minimal primes in $R$ that contain $x^{a}$, and $P_{\neg x^{a}}$ the set of minimal primes that do not contain $x^{a}$. Clearly, $P_{x^{a}}\cup P_{\neg x^{a}}$ is equal to the set of minimal primes in $R$; denote this by $P$. Recalling from 2.4 that the minimal primes of $R$ are the face ideals $I_{st(\alpha)}$ for maximal simplices $\alpha$, we can reformulate these definitions: $P_{x^{a}}$ is the set of ideals $I_{st(\alpha)}$ such that $\alpha$ is maximal and $x^{a}\in I_{st(\alpha)}$, in other words those ideals $I_{st(\alpha)}$ such that $\alpha$ is maximal and $\alpha\in U_{\sigma}$; and $P_{\neg x^{a}}$ is the set of ideals $I_{st(\alpha)}$ with $\alpha$ maximal and contained in $st(\sigma)$. Again using 2.4, the ideal $I_{st(\sigma)}$ defining $st(\sigma)$ is equal to the intersection of all ideals in $P_{\neg x^{a}}$. Unwrapping definitions, we get
$$\begin{aligned}
st(\sigma)\subset st(\tau)&\Leftrightarrow I_{st(\sigma)}\supset I_{st(\tau)}\\
&\Leftrightarrow P_{\neg x^{a}}\subset P_{\neg x^{b}}\\
&\Leftrightarrow P_{x^{b}}\subset P_{x^{a}}.
\end{aligned}$$
Putting this together with 3.1, we have
$$\begin{aligned}
x^{a}\partial^{(b)}\in D(R)&\Leftrightarrow\forall\mathfrak{p}\in P:x^{a}\in\mathfrak{p}\vee x^{b}\not\in\mathfrak{p}\\
&\Leftrightarrow\forall\mathfrak{p}\in P:\mathfrak{p}\in P_{x^{a}}\vee\mathfrak{p}\in P_{\neg x^{b}}\\
&\Leftrightarrow P=P_{x^{a}}\cup P_{\neg x^{b}}\\
&\Leftrightarrow P_{x^{b}}\subset P_{x^{a}}\text{ (equivalently, }P_{\neg x^{a}}\subset P_{\neg x^{b}}\text{)}\\
&\Leftrightarrow st(\sigma)\subset st(\tau).
\end{aligned}$$
∎
Example 3.5.
Let $R=k[x_{1},x_{2},x_{3},x_{4},x_{5}]/(x_{1}x_{3},x_{1}x_{4},x_{2}x_{4})$; the associated complex $K$ consists of three 2-simplices $\{x_{1},x_{2},x_{5}\},\{x_{2},x_{3},x_{5}\},\{x_{3},x_{4},x_{5}\}$ glued along the edges $\{x_{2},x_{5}\}$ and $\{x_{3},x_{5}\}$, so that $x_{5}$ is a vertex of every maximal face. Note that this makes $K$ the simplicial join of $\{x_{5}\}$ with the complex from Example 3.3. Looking at the closed stars of the faces, we see that
$$st(x_{1})\subset st(x_{2})\subset st(x_{5})\supset st(x_{3})\supset st(x_{4}).$$
As $st(x_{1})=st(\{x_{1},x_{2}\})$, $st(x_{4})=st(\{x_{3},x_{4}\})$ and for any face $\sigma$, $st(\sigma)=st(\sigma\cup x_{5})$, this accounts for all the stars. From this we should by 3.4 have the “toric” generators $x_{i}^{a}\partial_{i}^{(b)}$, and also $x_{1}^{a}\partial_{2}^{(b)},x_{1}^{a}\partial_{5}^{(b)},x_{1}^{a}\partial_{2}^{(b)}\partial_{5}^{(c)},x_{2}^{a}\partial_{5}^{(b)}$ and the same with $x_{1}$ and $x_{2}$ replaced by $x_{4}$ and $x_{3}$ respectively (by symmetry). In fact, $st(x_{5})=st(\varnothing)=K$, so we should also have $\partial_{5}^{(a)}=1\cdot\partial_{5}^{(a)}$ and the description is somewhat redundant.
From 3.4 we deduce the following very useful criterion.
Corollary 3.6.
$\langle x_{\tau}\rangle\subset\langle x_{\sigma}\rangle$ if and only if $st(\tau)\subset st(\sigma)$.
Proof.
If $st(\tau)\subset st(\sigma)$, it follows from 3.4 that $x_{\tau}\partial_{\sigma}=x_{\tau}\prod_{i:x_{i}\in\sigma}\partial_{i}$ is in $D(R)$. Now observe that $[\cdots[x_{\tau}\partial_{\sigma},x_{i_{1}}],\cdots,x_{i_{r}}]=x_{\tau}$ (where $x_{\sigma}=\prod_{1\leq j\leq r}x_{i_{j}}$), so we have $x_{\tau}\in\bigcap_{i:x_{i}\in\sigma}\langle x_{i}\rangle=\langle x_{\sigma}\rangle$.
To show the reverse implication, suppose $st(\tau)\not\subset st(\sigma)$. By 2.3$(iv)$ there is a maximal simplex $\alpha$ that contains $\tau$ but not $\sigma$; since $\alpha$ is maximal, $\mathfrak{p}:=I_{st(\alpha)}=\langle x_{i}|x_{i}\not\in\alpha\rangle$ is a minimal prime of $R$, and $x_{\sigma}\in\mathfrak{p}$ while $x_{\tau}\not\in\mathfrak{p}$. By 3.1, every generator $x^{a}\partial^{(b)}$ of $D(R)$ preserves $\mathfrak{p}$: either $x^{a}\in\mathfrak{p}$, so every value of the operator is a multiple of $x^{a}$ and lies in $\mathfrak{p}$, or $x^{b}\not\in\mathfrak{p}$, so $\partial^{(b)}$ involves only variables in $\alpha$ and maps multiples of the generators $x_{i}\not\in\alpha$ to multiples of the same $x_{i}$. Hence the action of $D(R)$ descends to $R/\mathfrak{p}$, giving a ring homomorphism $D(R)\to End_{k}(R/\mathfrak{p})$ that kills $x_{\sigma}$, and with it the two-sided ideal $\langle x_{\sigma}\rangle$, but sends $x_{\tau}$ to multiplication by the nonzero element $x_{\tau}\in R/\mathfrak{p}$; therefore $x_{\tau}\not\in\langle x_{\sigma}\rangle$.
∎
The following result is perhaps surprising, and very useful.
Theorem 3.7.
Any proper two-sided ideal in $D(R)$ is generated by reduced monomials in the “ordinary” variables $x_{1},\ldots,x_{n}$.
Proof.
The proof is in three parts:
(1) The ideal $\langle\sum_{(a,b)\in S}c_{ab}x^{a}\partial^{(b)}\rangle$ (for a finite index set $S\subset\mathbb{N}^{2n}$ and nonzero constants $c_{ab}$) is equal to the ideal $\langle x^{a}\partial^{(b)}|(a,b)\in S\rangle$;
(2) the ideal $\langle x^{a}\rangle$ is equal to the ideal $\langle\prod_{a_{i}\neq 0}x_{i}\rangle$;
(3) the ideal $\langle x^{a}\partial^{(b)}\rangle$ is equal to the ideal $\langle\prod_{a_{i}\neq 0}x_{i}\rangle$.
We will make heavy use of the fact that for any two-sided ideal $I$ and any element $\phi\in D(R)$, the set of commutators $[\phi,I]$ is contained in $I$.
For the first part, recall that we have two natural concepts of grading on the Weyl algebra, that descend to $D(R)$. First, the natural $\mathbb{Z}^{n}$-grading on the Weyl algebra given by the degree
$$deg(x^{a}\partial^{(b)})=(a_{1}-b_{1},\ldots,a_{n}-b_{n}),$$
which induces a grading on $D(R)$; second we have the $\mathbb{N}^{n}$-grading given by the order
$$ord(x^{a}\partial^{(b)})=(b_{1},\ldots,b_{n}).$$
Note that
$$\begin{aligned}
[x_{i}\partial_{i},x^{a}\partial^{(b)}]&=x_{i}\partial_{i}x^{a}\partial^{(b)}-x^{a}\partial^{(b)}x_{i}\partial_{i}\\
&=x_{i}(x^{a}\partial_{i}+a_{i}x^{a-1_{i}})\partial^{(b)}-x^{a}(x_{i}\partial^{(b)}+\partial^{(b-1_{i})})\partial_{i}\\
&=x_{i}x^{a}\partial_{i}\partial^{(b)}+a_{i}x_{i}x^{a-1_{i}}\partial^{(b)}-x^{a}x_{i}\partial^{(b)}\partial_{i}-x^{a}\partial^{(b-1_{i})}\partial_{i}\\
&=a_{i}x^{a}\partial^{(b)}-\binom{b_{i}-1+1}{1}x^{a}\partial^{(b)}\\
&=(a_{i}-b_{i})x^{a}\partial^{(b)}
\end{aligned}$$
(in the remainder we omit the proof of such identities to avoid tedium), and in the case of characteristic $p$, if $a_{i}-b_{i}=cp^{r}$, we have $[x_{i}^{p^{r}}\partial_{i}^{(p^{r})},x^{a}\partial^{(b)}]=cx^{a}\partial^{(b)}$. In other words, the operators $[x_{i}\partial_{i},-]$ (and $[x_{i}^{p^{r}}\partial_{i}^{(p^{r})},-]$) give different weight to each degree-graded component. Note also that
$$[x^{a}\partial^{(b)},x_{i}]\cdot\partial_{i}=x^{a}\partial^{(b-1_{i})}\partial_{i}=b_{i}x^{a}\partial^{(b)},$$
and if $b_{i}=cp^{r}$, we have $[x^{a}\partial^{(b)},x_{i}^{p^{r}}]\partial_{i}^{(p^{r})}=cx^{a}\partial^{(b)}$. In other words the operators $[-,x_{i}]\partial_{i}$ (and $[-,x_{i}^{p^{r}}]\partial_{i}^{(p^{r})}$) give different weight to each order-graded component. Putting these together, we can isolate any term $x^{a}\partial^{(b)}$ by applying a suitable polynomial in the operators $[x_{i}\partial_{i},-]$, $[x_{i}^{p^{r}}\partial_{i}^{(p^{r})},-]$, $[-,x_{i}]\partial_{i}$ and $[-,x_{i}^{p^{r}}]\partial_{i}^{(p^{r})}$.
For the second part, we may reduce to a single variable. We separate the cases by characteristic. If $char(k)=p$, we have $[x_{i}\partial_{i}^{(p^{r})},x_{i}^{p^{r}}]=x_{i}$, so $x_{i}$ is in the ideal generated by $x_{i}^{p^{r}}$; choosing a power of $p$ larger than $a_{i}$ we have $x_{i}^{p^{r}}=x_{i}^{a_{i}}\cdot x_{i}^{p^{r}-a_{i}}$ and so $x_{i}\in\langle x_{i}^{a_{i}}\rangle$. If $char(k)=0$, on the other hand, we have
$$[x_{i}\partial_{i}^{(2)},x_{i}^{a_{i}}]=a_{i}x_{i}^{a_{i}}\partial_{i}+\binom{a_{i}}{2}x_{i}^{a_{i}-1}$$
and
$$[x_{i}^{2}\partial_{i}^{(3)},x_{i}^{a_{i}}]=a_{i}x_{i}^{a_{i}+1}\partial_{i}^{(2)}+\binom{a_{i}}{2}x_{i}^{a_{i}}\partial_{i}+\binom{a_{i}}{3}x_{i}^{a_{i}-1}.$$
If $a_{i}=0,1$ there is nothing to prove, and if $a_{i}>1$, we can invert $\frac{a_{i}-1}{2}\binom{a_{i}}{2}-\binom{a_{i}}{3}=\frac{1}{12}a_{i}(a_{i}^{2}-1)$ and get
$$x_{i}^{a_{i}-1}=\frac{12}{a_{i}(a_{i}^{2}-1)}\left(\frac{a_{i}-1}{2}[x_{i}\partial_{i}^{(2)},x_{i}^{a_{i}}]-[x_{i}^{2}\partial_{i}^{(3)},x_{i}^{a_{i}}]+a_{i}x_{i}^{a_{i}}\cdot x_{i}\partial_{i}^{(2)}\right).$$
This gives $\langle x_{i}^{a_{i}-1}\rangle\subset\langle x_{i}^{a_{i}}\rangle$ and by iterating this procedure, $\langle x_{i}\rangle=\langle x_{i}^{a_{i}}\rangle$.
For the third part, observe that $[x^{n}\partial^{(m)},x_{j}]=x^{n}\partial^{(m-1_{j})}$ (for $j$ such that $m_{j}\neq 0$) is a valid identity for all $n,m>0$. Iterating this beginning with $n=a,m=b$ gives $\langle x^{a}\rangle\subset\langle x^{a}\partial^{(b)}\rangle$. By applying part 2 this becomes $\langle\prod_{a_{i}\neq 0}x_{i}\rangle\subset\langle x^{a}\partial^{(b)}\rangle$.
To show the reverse implication $\langle x^{a}\partial^{(b)}\rangle\subset\langle\prod_{a_{i}\neq 0}x_{i}\rangle$ we show $\langle x^{a}\partial^{(b)}\rangle\subset\langle x_{i}\rangle$ for the two cases $a_{i},b_{i}\neq 0$ and $a_{i}\neq 0,b_{i}=0$. For the first case, $x_{i}^{a_{i}}\partial_{i}^{(b_{i})}$ is a factor of $x^{a}\partial^{(b)}$; and applying the above argument we have that $x_{i}^{a_{i}}\partial_{i}^{(b_{i})}\in\langle x_{i}\rangle$; it follows that $x^{a}\partial^{(b)}\in\langle\prod_{i:a_{i},b_{i}\neq 0}x_{i}\rangle$.
For the second case, $a_{i}\neq 0,b_{i}=0$, we may assume $a_{i}=1$, for if $a_{i}>1$, then clearly $x^{a}\partial^{(b)}=x_{i}x^{a-1_{i}}\partial^{(b)}\in\langle x_{i}\rangle$. By the previous case, $x_{i}\partial_{i}^{(2)}$ is in $\langle x_{i}\rangle$, and so is $x^{a+1_{i}}\partial^{(b)}=x_{i}x^{a}\partial^{(b)}$; then of course their commutator
$$[x^{a+1_{i}}\partial^{(b)},x_{i}\partial_{i}^{(2)}]=-(a_{i}+1)x^{a+1_{i}}\partial_{i}\partial^{(b)}-\binom{a_{i}+1}{2}x^{a}\partial^{(b)}$$
is also in $\langle x_{i}\rangle$. The term $x^{a+1_{i}}\partial_{i}\partial^{(b)}=x^{a+1_{i}}\partial^{(b+1_{i})}$ lies in $\langle x_{i}\rangle$ by the first case, so rewriting the above (with $a_{i}=1$ as we have assumed) we get
$$x^{a}\partial^{(b)}=-[x^{a+1_{i}}\partial^{(b)},x_{i}\partial_{i}^{(2)}]-2x^{a+1_{i}}\partial_{i}\partial^{(b)}$$
and so $x^{a}\partial^{(b)}\in\langle x_{i}\rangle$; it follows that $x^{a}\partial^{(b)}\in\langle\prod_{i:a_{i}\neq 0,b_{i}=0}x_{i}\rangle$. Taking both cases together we have shown that $x^{a}\partial^{(b)}\in\langle\prod_{i:a_{i}\neq 0}x_{i}\rangle$.
∎
We have shown that all ideals in $D(R)$ are generated by reduced monomials $\prod x_{i}$ in the variables of $R$; the next question is of course which ones?
Recall that we will not distinguish between the vertices of the simplicial complex $K$ and the variables of the associated face ring $R$, but refer to either by the same name, e.g. $x_{i}$. We also recall the notation $x_{\sigma}=\prod_{x_{i}\in\sigma}x_{i}$.
Theorem 3.8.
Any proper ideal in $D(R)$ is generated by monomials $x_{\sigma}$ with $\sigma\in K$ such that $st(\sigma)\neq K$.
Proof.
From 3.7 it follows that any ideal in $D(R)$ is generated by reduced monomials in the variables $x_{i}$, and clearly the monomials corresponding to non-faces cannot occur as they are in $I_{K}$, so what remains are the monomials $x_{\sigma}$ for $\sigma\in K$. Only those $x_{\sigma}$ such that $st(\sigma)\neq K$ generate proper ideals, as otherwise we have $st(\sigma)=K$ and by 3.4 the elements $1\cdot\partial_{i}$ where $x_{i}\in\sigma$ are in $D(R)$, as both $1$ and $\partial_{i}$ are monomials with support contained in $st(\sigma)=K$; if we write $\sigma=\{x_{i_{1}},\ldots,x_{i_{t}}\}$, we have $[\partial_{i_{1}},[\partial_{i_{2}},[\cdots,[\partial_{i_{t}},x_{\sigma}]\cdots]]]=1$ and so $\langle x_{\sigma}\rangle=\langle 1\rangle=R$.
∎
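The properness test of Theorem 3.8 ($st(\sigma)\neq K$) can be checked mechanically; the Python sketch below (our own helpers, not from any library) finds, for the complex of Example 3.5, the faces whose monomials generate the unit ideal of $D(R)$.

```python
from itertools import combinations

def faces(maximal):
    """All faces of the complex with the given maximal simplices."""
    K = set()
    for m in maximal:
        verts = sorted(m)
        for r in range(len(verts) + 1):
            K.update(frozenset(c) for c in combinations(verts, r))
    return K

def star(sigma, K):
    """Closed star of sigma in K."""
    return {tau for tau in K if tau | sigma in K}

# Example 3.5: three 2-simplices, all containing the vertex 5
maximal = [{1, 2, 5}, {2, 3, 5}, {3, 4, 5}]
K = faces(maximal)

# Faces sigma with st(sigma) = K, i.e. <x_sigma> = D(R):
unit_faces = sorted(tuple(sorted(s)) for s in K if star(s, K) == K)
# only the empty face and {5} qualify, matching st(x_5) = st(empty) = K
```

Every other face has a proper star here, so the proper principal monomial ideals are exactly those generated by the $x_{\sigma}$ with $x_{5}\not\in\{\sigma\}$ alone or $\sigma$ nonempty and different from $\{x_{5}\}$.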
This now gives us all the ideals in $D(R)$, since every ideal is a sum of principal ideals $\langle x_{\sigma}\rangle$. We may however also take a different approach:
Any two-sided ideal in $D(R)$ is the kernel of some ring homomorphism; the combinatorial structure of the associated simplicial complex $K$ gives rise to several such maps.
An obvious choice of candidate homomorphisms is localization at an element $x_{\sigma}$; we will see that the kernels of such maps form another generating set for the lattice of two-sided ideals in $D(R)$. We introduce the notation $\overline{J}$ for the extension to $D(R)$ of an ideal $J\subset R$.
Theorem 3.9.
The kernel of the localization map $D(R)\to D(R)[\frac{1}{x_{\sigma}}]$ is the extension $\overline{I_{st(\sigma)}}$ of the ideal $I_{st(\sigma,K)}\subset R$ to $D(R)$.
Proof.
By 3.8 it is enough to examine what happens in the localization to monomials $x_{\alpha}$ for $\alpha\in K$.
Assume first that $x_{\sigma}=x_{i}$ (in other words, $\sigma$ is a vertex). Inverting $x_{i}$ has the effect that for any non-face $\beta$ containing $x_{i}$, the monomial $\tfrac{x_{\beta}}{x_{i}}=\prod_{x_{j}\in\beta,j\neq i}x_{j}$ is zero in the localization. It is clear that no other monomials are killed, so what remains after localization are those monomials supported on a face $\tau$ such that $\tau\cup x_{i}$ is not a non-face, or, removing the double negation, such that $\tau\cup x_{i}$ is a face in $K$; in other words the remaining monomials are those supported on a face of $st(x_{i})$.
For the general case, note that inverting $x_{\sigma}=\prod_{i}x_{i}$ is the same as inverting each $x_{i}$ successively; since 2.3$(iii)$ gives $st(\sigma,K)=st(x_{1},st(\sigma\setminus x_{1},K))$, we are done by recursion.
∎
Theorem 3.10.
The lattice of two-sided ideals in $D(R)$ is generated by the
ideals $\overline{I_{st(\sigma)}}\subset D(R)$.
Proof.
After applying 3.8 the question is whether we can generate any proper ideal $\langle x_{\tau}\rangle$ by sums and intersections of the ideals $\overline{I_{st(\sigma)}}$. Considering that $\overline{I_{st(\sigma)}}=\langle x_{\alpha}|\alpha\in U_{\sigma}\rangle$, we can look at the intersection of all such ideals that contain $x_{\tau}$:
$$\begin{aligned}
\bigcap_{\sigma:\tau\in U_{\sigma}}\overline{I_{st(\sigma)}}&=\langle x_{\alpha}\,|\,\alpha\in\bigcap_{\sigma:\tau\in U_{\sigma}}U_{\sigma}\rangle\\
&=\langle x_{\alpha}\,|\,\forall\sigma\in K:\tau\in U_{\sigma}\Rightarrow\alpha\in U_{\sigma}\rangle\\
&=\langle x_{\alpha}\,|\,\forall\sigma\in K:\alpha\not\in U_{\sigma}\Rightarrow\tau\not\in U_{\sigma}\rangle\\
&=\langle x_{\alpha}\,|\,\forall\sigma\in K:\alpha\cup\sigma\in K\Rightarrow\tau\cup\sigma\in K\rangle\\
&=\langle x_{\alpha}\,|\,\forall\sigma\in K:\sigma\in st(\alpha)\Rightarrow\sigma\in st(\tau)\rangle\\
&=\langle x_{\alpha}\,|\,st(\alpha)\subset st(\tau)\rangle\\
&=\langle x_{\tau}\rangle
\end{aligned}$$
where the last step is applying Corollary 3.6.
∎
Example 3.11.
Consider again the ring from 3.3, $R=k[x_{1},x_{2},x_{3},x_{4}]/I$ where $I=(x_{1}x_{3},x_{1}x_{4},x_{2}x_{4})$; the associated complex $K$ is a chain of three 1-simplices. Inverting $x_{1}$ gives us that $x_{3}$ and $x_{4}$ go to zero in the localization as $x_{3}=\frac{1}{x_{1}}x_{1}x_{3}\in I$, etc; it follows that the generators $x_{4}^{a}\partial_{3}^{(b)}$ are also killed; the kernel of the localization $D(R)\to D(R)[\frac{1}{x_{1}}]$ is then (using 3.7 and 3.4) the ideal $(x_{3},x_{4})$, which is the face ideal of $st(x_{1},K)$. Localizing at $x_{2}$ gives $x_{4}=\frac{1}{x_{2}}x_{2}x_{4}=0$, and the kernel of the localization is indeed equal to the ideal $(x_{4})$, the face ideal of $st(x_{2},K)$. Proceeding in the same manner for the remaining faces $x_{3},x_{4},\{x_{1},x_{2}\},\{x_{2},x_{3}\}$, and $\{x_{3},x_{4}\}$, we get as further kernels the ideals $(x_{1}),(x_{1},x_{2}),(x_{3},x_{4}),(x_{1},x_{4})$ and again $(x_{1},x_{2})$. By 3.6 we have $(x_{1},x_{2})=(x_{2})$ and $(x_{3},x_{4})=(x_{3})$ as two-sided ideals; in other words the kernels of localization include the four principal ideals $(x_{1}),(x_{2}),(x_{3})$ and $(x_{4})$, and in light of 3.7 these generate all the ideals by sums and intersections.
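Theorem 3.9 makes such kernels computable by pure combinatorics: the kernel of localization at $x_{\sigma}$ is generated by the monomials $x_{\tau}$ with $\tau\in U_{\sigma}$, and its minimal monomial generators are the inclusion-minimal such $\tau$. A Python sketch (our own helpers, not from any library):

```python
from itertools import combinations

def faces(maximal):
    """All faces of the complex with the given maximal simplices."""
    K = set()
    for m in maximal:
        verts = sorted(m)
        for r in range(len(verts) + 1):
            K.update(frozenset(c) for c in combinations(verts, r))
    return K

def kernel_gens(sigma, K):
    """Minimal monomial generators of I_{st(sigma)} = <x_tau : tau in U_sigma>,
    the kernel of localization at x_sigma (Theorem 3.9)."""
    U = [tau for tau in K if tau | sigma not in K]
    return {t for t in U if not any(s < t for s in U)}

# The chain of three 1-simplices on the vertices 1, 2, 3, 4
K = faces([{1, 2}, {2, 3}, {3, 4}])
kernels = {v: kernel_gens(frozenset({v}), K) for v in (1, 2, 3, 4)}
# localizing at x_1 kills (x_3, x_4); at x_2 kills (x_4);
# at x_3 kills (x_1); at x_4 kills (x_1, x_2)
```

Running the same computation over the edges of the chain produces the remaining face ideals $I_{st(\sigma)}$ in the same way.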
Let us round off this section with some applications. In [Tra99], Traves examines the $D(R)$-module structure of $R$ when $k$ is a field, and determines what the (left) $D(R)$-submodules of $R$ are. These are the ideals $I\subset R$ such that $D(R)\bullet I=I$, so we follow Traves’ terminology and call such a submodule a $D(R)$-stable ideal. The reason for restricting $k$ to be a field is that elements of $D_{k}(R)$ are $k$-linear endomorphisms of $R$, so any ideal of $k$ extends to a $D_{k}(R)$-submodule of $R$.
Theorem 3.12 (Traves).
When $k$ is a field, the $D_{k}(R)$-submodules of the reduced monomial ring $R$ are exactly the ideals given by intersections of sums of minimal primes of $R$.
Based on our results about the ideal structure of $D(R)$, we can give a new proof of this result. We denote the module action of $D(R)$ by $\bullet$ (e.g. $D(R)\bullet I$) and the product in $D(R)$ by $\cdot$ (e.g. $D(R)\cdot I$). We prove the result by means of a general fact which, to our knowledge, has not appeared before.
Proposition 3.13.
Let $k$ be a field and $R$ be a $k$-algebra. An ideal $J\subset R$ is $D(R)$-stable if and only if $J=\overline{J}\cap R$, where $\overline{J}$ denotes the extension of $J$ to $D(R)$.
Proof.
Observe first that $R$ is isomorphic as a $D(R)$-module to $D(R)/D^{>0}(R)$, the quotient by the left ideal of positive-order elements; we can see this by writing $D(R)=D^{0}(R)+D^{>0}(R)=R+D^{>0}(R)$, as $R=D^{0}(R)$. In other words, if $S\subset R$ is a subset, then under this isomorphism $D(R)\bullet S=D(R)\cdot S+D^{>0}(R)$. Further, if $J\subset D(R)$ is some subset, then
$$J\cdot D(R)+D^{>0}(R)=J\cdot(D^{0}(R)+D^{>0}(R))+D^{>0}(R)=J\cdot R+D^{>0}(R).$$
Now, if $I\subset R$ is an ideal, the extension of $I$ to $D(R)$ is $\overline{I}=D(R)\cdot I\cdot D(R)$, so we have
$$\overline{I}+D^{>0}(R)=D(R)\cdot I\cdot D(R)+D^{>0}(R)=D(R)\cdot I+D^{>0}(R)=D(R)\bullet I.$$
A $D$-stable ideal is an ideal $I\subset R$ such that $D(R)\bullet I=I$, so it follows that the $D$-stable ideals are exactly those such that $\overline{I}+D^{>0}(R)=I$.
It remains to show that for an ideal $J\subset D(R)$, $J+D^{>0}(R)=J\cap R$. Let $f\in J$ be some element, and write it as the sum $f=f_{0}+f_{1}+\cdots+f_{ord(f)}$ where $f_{i}$ are the terms of order $i$; it then follows from 3.7 that each $f_{i}\in J$ as well. Reducing modulo $D^{>0}(R)$ we get $J+D^{>0}(R)=\{f_{0}|f\in J\}$, and restricting to the homogeneous elements of order zero we have $J\cap R=J\cap D^{0}(R)=\{f\in J|f=f_{0}\}$; these sets are clearly equal.
∎
Theorem 3.14.
The $D(R)$-stable ideals of $R$ are those generated by sums and intersections of the ideals $I_{st(\sigma)}$ for $\sigma\in K$.
Proof.
As we have shown (3.8, 3.10) that any ideal of $D(R)$ is an extension of an ideal of $R$, we only have to restrict these to $R$ to recover the $D(R)$-stable ideals. Theorem 3.10 tells us that the lattice of ideals in $D(R)$ is generated by sums and intersections of ideals $\overline{I_{st(\sigma)}}$, and it is easy to see that $\overline{I_{st(\sigma)}}\cap R=I_{st(\sigma)}$:
Indeed, the only possible problem is that in $D(R)$, $\langle x_{\alpha}\rangle\subset\langle x_{\beta}\rangle$ if and only if $st(\alpha)\subset st(\beta)$, and this may cause additional monomials not in $I$ to appear in $\overline{I}\cap R$. For $I_{st(\sigma)}$ however, this does not happen. Consider that $I_{st(\sigma)}=\langle x_{\tau}|\tau\in U_{\sigma}\rangle$ and $\overline{I_{st(\sigma)}}\cap R=\langle x_{\alpha}|\exists\tau\in U_{\sigma}:st(\alpha)\subset st(\tau)\rangle$. In other words, we need to check if there are faces $\tau\in U_{\sigma}$ and $\alpha\in st(\sigma)$ such that $st(\alpha)\subset st(\tau)$, as then $x_{\alpha}$ would be in $\overline{I_{st(\sigma)}}\cap R$, but not in $I_{st(\sigma)}$. This is impossible, however: by 2.3$(v)$, $\alpha\in st(\sigma)$ if and only if $\sigma\in st(\alpha)$, and if $st(\alpha)\subset st(\tau)$, we have $\sigma\in st(\tau)$, which again by 2.3$(v)$ gives $\tau\in st(\sigma)$, contradicting the assumption $\tau\in U_{\sigma}$.
∎
To recover 3.12, recall that by 2.4, the minimal primes are exactly the face ideals of the maximal faces of $K$, and any $I_{st(\sigma)}$ is the intersection of the face ideals of the maximal faces of $st(\sigma)$.
Remark 3.15.
Note that the ideals $\overline{I_{st(\sigma)}}$ generating the lattice of two-sided ideals of $D(R)$ (or, bijectively, the $D(R)$-stable ideals $I_{st(\sigma)}$ of $R$) are in order-reversing bijection with the partially ordered set of closed stars of $K$. This partially ordered set can be completed to a simplicial complex ($\widetilde{K}$, say), homotopy equivalent to the nerve of the cover of $K$ by open stars. The results about two-sided ideals of $D(R)$ and $D(R)$-stable ideals of $R$ imply that the subcomplexes $L$ of $K$ such that $I_{L}$ is $D(R)$-stable, or $\overline{I_{L}}$ is a two-sided ideal of $D(R)$, are exactly those that are unions of intersections of closed stars; in other words the complex $\widetilde{K}$ classifies such subcomplexes. This interesting connection is perhaps worthy of further study.
4. Characteristic $p$
The constructions in the previous section are independent of the characteristic of $k$, and so solve the problem of finding the two-sided ideal structure of $D(R)$. In characteristic $p$ however, there is a qualitatively different construction of $D(R)$, which perhaps offers more interesting possibilities for generalization. From here on, we assume $k$ is a field of characteristic $p$.
The major tool when working in characteristic $p$ is the Frobenius automorphism of $k$, given by $x\mapsto x^{p}$. This induces an endomorphism $F:R\to R$ given by $F(f)=f^{p}$, and the image $F(R)$ is the subring $R^{p}\subset R$ of $p$’th powers; as $R$ is reduced, $F$ is an isomorphism onto its image. Any $R$-module $M$ gets a new $R$-module structure through pullback by the Frobenius map: $F_{*}M$ is equal to $M$ as an abelian group, but has $R$-module structure given by $f\cdot m=f^{p}m$. This is equivalent to considering $M$ as an $R^{p}$-module, as the maps $F:R\to R$ and $R^{p}\hookrightarrow R$ are both injections with image $R^{p}$. We will also need iterates of $F$: letting $q=p^{r}$, we write $F^{r}:R\to R$ and $R^{q}=R^{p^{r}}\subset R$. For our purposes in examining $D(R)$, it will be most convenient to use the description in terms of the subrings $R^{q}$, as we will see.
Considering the behaviour of $R$ itself as an $R^{p}$-module gives rise to several classifying properties of the ring $R$. We will simply recall the definitions of the particular properties that are relevant for us; other such properties and further details may be found in [SVdB97]. If $R$ is finitely generated as an $R^{p}$-module, we say that $R$ is $F$-finite; if $R$ is $F$-finite and the map $R^{p}\hookrightarrow R$ splits as a map of $R^{p}$-modules, we say $R$ is $F$-split; if $F^{r}_{*}R\simeq M_{1}^{r}\oplus\cdots\oplus M_{n(r)}^{r}$ as an $R$-module and the set of isomorphism classes $\{[M_{i}^{r}]\,|\,r\in\mathbb{N},1\leq i\leq n(r)\}$ of modules appearing in such a decomposition for some $r$ is finite, we say that $R$ has finite $F$-representation type, or FFRT.
For our purposes, the key property of face rings $R_{K}$ in this respect is that they are $F$-split and have FFRT. Even better, we can give a concrete decomposition of $R$ as an $R^{q}$-module:
Lemma 4.1.
As an $R^{q}$-module, $R$ is isomorphic to $\bigoplus_{st(\sigma)\subset K}(R^{q}_{st(\sigma)})^{m_{st(\sigma)}(q)}$, where $m_{st(\sigma)}(q)=\sum_{\alpha:st(\alpha)=st(\sigma)}(q-1)^{(\dim(\alpha)+1)}$.
Note that the direct sum runs over those subcomplexes of $K$ that are the star of some simplex.
Proof.
As we have $R\simeq R^{p}\oplus R^{p}x_{1}\oplus\cdots\oplus R^{p}x_{1}^{p-1}\cdots x_{n}^{p-1}$ (where only the appropriate monomials appear), this expresses $R$ as an $R^{p}$-module.
We can rewrite this using $R^{p}\cdot x^{\alpha}\simeq R^{p}/Ann_{R^{p}}(x^{\alpha})$: observing that the monomials $x^{\alpha}$ that appear in the decomposition are those supported on a face $supp(\alpha)=:\sigma$, and that the annihilator of $x^{\alpha}$ is the face ideal of the complex $st(\sigma,K)$, we get the decomposition $R=\bigoplus_{st(\sigma)\subset K}(R^{p}_{st(\sigma)})^{m_{st(\sigma)}(p)}$, where $R^{p}_{st(\sigma)}$ is the ($p$’th power) face ring of $st(\sigma)$ and by simply counting monomials we have $m_{st(\sigma)}(p)=\sum_{\alpha:st(\alpha)=st(\sigma)}(p-1)^{(\dim(\alpha)+1)}$ (using the convention that $\dim(\varnothing)=-1$). Iterating the same construction, we get $R=\bigoplus_{st(\sigma)\subset K}(R^{q}_{st(\sigma)})^{m_{st(\sigma)}(q)}$, where $m_{st(\sigma)}(q)=\sum_{\alpha:st(\alpha)=st(\sigma)}(q-1)^{(\dim(\alpha)+1)}$.
∎
Let us make use of this to compute some invariants of $R$ that only make sense in characteristic $p$, namely the Hilbert-Kunz function and the Hilbert-Kunz multiplicity. This invariant was introduced by Kunz [Kun69] for local rings, and extended to graded rings by Conca [Con96]; see also [Hun13] and [Mon83].
Definition 4.2.
Let $R$ be a local ring with maximal ideal $\mathfrak{m}$, or a graded ring with homogeneous maximal ideal $\mathfrak{m}$, over a field $k$ of characteristic $p$, and let $q=p^{r}$. The Hilbert-Kunz function of $R$ is the function
$$HK_{R}(q)=l(R/\mathfrak{m}^{[q]})$$
where $I^{[q]}$ is the ideal generated by $q$’th powers of elements in the ideal $I$. The Hilbert-Kunz multiplicity is the number
$$e_{HK}(R)=\lim_{q\to\infty}\frac{HK_{R}(q)}{q^{\dim R}},$$
in other words the leading coefficient of $HK_{R}(q)$.
The Hilbert-Kunz function gives a measure of the singularity of $R$; roughly speaking, higher multiplicities correspond to worse singularities. It is a theorem of Kunz that $HK_{R}(q)=q^{\dim R}$ if and only if $R$ is regular (see [Kun69]), so if $R$ is regular, $e_{HK}(R)=1$. The converse holds for unmixed rings, but not in general, and in particular not for face rings. The following is equivalent to Remark 2.2 in [Con96], though we prove it in a different way.
Proposition 4.3.
Let $R_{K}$ be a face ring, then $HK_{R}(q)=\sum_{i=-1}^{\dim(R)-1}f_{i}(q-1)^{i+1}$, where $f_{i}$ is the number of $i$-simplices in $K$, so $(f_{-1},\ldots,f_{\dim(R)-1})$ is the $f$-vector of $K$ (we recall the usual convention $\dim(\varnothing)=-1$, so $f_{-1}=1$). In particular, $e_{HK}(R_{K})=f_{\dim K}$, the number of top-dimensional faces of $K$.
Proof.
The number of indecomposable summands of $R$ as an $R^{q}$-module is $\sum_{\sigma\in K}(q-1)^{\dim(\sigma)+1}$ by 4.1. By simply rearranging the sum, this is equal to $\sum_{i=-1}^{\dim(R)-1}f_{i}(q-1)^{i+1}$. The claim now follows from the fact that none of the generators of these summands are in $\mathfrak{m}^{[q]}=\langle x_{1}^{q},\ldots,x_{n}^{q}\rangle$, so the number of summands in the splitting of $R$ is the same as the length of $R/\mathfrak{m}^{[q]}$.
∎
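Proposition 4.3 is easy to confirm computationally. The sketch below (helper names are ours) takes the boundary of a triangle, whose $f$-vector is $(1,3,3)$, and checks $HK_{R}(q)=\sum_{i}f_{i}(q-1)^{i+1}$ against a direct count of the standard monomials of $R/\mathfrak{m}^{[q]}$; the leading coefficient $3=f_{\dim K}$ is then the Hilbert-Kunz multiplicity.

```python
from itertools import combinations, product

def faces(facets):
    # All faces of the complex generated by the given facets (including ∅).
    fs = set()
    for F in facets:
        for r in range(len(F) + 1):
            fs.update(frozenset(c) for c in combinations(sorted(F), r))
    return fs

def f_vector(K):
    # f[c] counts faces of cardinality c, i.e. dimension c-1; f[0] = 1 for ∅.
    f = [0] * (max(len(s) for s in K) + 1)
    for s in K:
        f[len(s)] += 1
    return f

def hk_direct(K, q, n):
    # l(R/m^{[q]}): monomials with all exponents < q whose support is a face of K.
    return sum(1 for e in product(range(q), repeat=n)
               if frozenset(i + 1 for i, d in enumerate(e) if d > 0) in K)

K = faces([{1, 2}, {1, 3}, {2, 3}])     # boundary of a triangle
f = f_vector(K)                          # [1, 3, 3]
for q in [2, 3, 4, 5]:
    hk = sum(fi * (q - 1) ** i for i, fi in enumerate(f))
    # For this complex HK_R(q) = q^3 - (q-1)^3 = 3q^2 - 3q + 1,
    # so e_HK = 3 = f_{dim K}, the number of edges.
    assert hk == hk_direct(K, q, n=3) == q ** 3 - (q - 1) ** 3
```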
The promised different construction of $D(R)$ is due to Yekutieli [Yek92]. We omit the proof here, but mention that in addition to [Yek92], the reader can find an excellent exposition in [SVdB97].
Proposition 4.4.
$D_{k}(R)\simeq\bigcup_{q}End_{R^{q}}(R)$, where $q=p^{r},r\in\mathbb{N}$ and $R^{q}$ is the subring of $q$-th powers.
Let us now give the summands appearing in 4.1 a more convenient notation, and define $M_{st(\sigma)}^{q}:=(R^{q}_{st(\sigma)})^{m_{st(\sigma)}(q)}$. It follows from 4.1 that
$$End_{R^{q}}(R)\simeq\bigoplus_{st(\sigma),st(\tau)\subset K}Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q}).$$
As each $M_{st(\sigma)}^{q}$ is generated as an $R^{q}$-module by monomials of degree in each variable up to $q-1$, we can see that as an $R^{pq}$-module it is contained in $\bigoplus_{st(\alpha)\subset st(\sigma)}M_{st(\alpha)}^{pq}$, because the elements of $M_{st(\sigma)}^{q}$ contain monomials of degree larger than $q-1$, which have support on smaller stars (recall that as $q=p^{r}$, $pq=p^{r+1}$). In particular this implies the following:
Lemma 4.5.
$Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})\subset\bigoplus_{st(\alpha)\subset st(\sigma),st(\beta)\subset st(\tau)}Hom_{R^{pq}}(M_{st(\alpha)}^{pq},M_{st(\beta)}^{pq})$.
This lets us think of elements $\phi\in End_{R^{q}}(R)$ as block matrices with each block having entries in some $R^{q}/I_{st(\sigma)}$; it is vital to remember that this means that the entries have degree equal to a multiple of $q$.
Definition 4.6.
Let $J_{q}(st(\alpha),st(\beta))$ denote the ideal in $D(R)$ generated by the elements of $Hom_{R^{q}}(M_{st(\alpha)}^{q},M_{st(\beta)}^{q})$, and let $J(st(\alpha),st(\beta)):=\sum_{q}J_{q}(st(\alpha),st(\beta))$. For convenience we denote $J(st(\sigma),st(\sigma))$ by simply $J(st(\sigma))$.
The following result is essentially the same as 3.7 in a different guise.
Proposition 4.7.
Assume $st(\sigma)\supset st(\tau)$, and let $\phi\in Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})$ be a nonzero element. Then $\langle\phi\rangle$, the ideal in $D(R)$ generated by $\phi$, is equal to the ideal $J(st(\tau))$. Furthermore, we have that $J(st(\tau))\subset J(st(\sigma))$.
Proof.
Clearly, $J(st(\tau))$ is generated by the identity maps $id_{st(\tau)}^{q}:M_{st(\tau)}^{q}\to M_{st(\tau)}^{q}$ (for each $q$), so it suffices to show that these are in $\langle\phi\rangle$.
Recall that any element of $End_{R^{q}}(R)$ has entries with degree a multiple of $q$. We claim that for $s>q$ a sufficiently large power of $p$, $\phi$ considered as an element of $End_{R^{s}}(R)$ will have at least some constant entries in each block $Hom_{R^{s}}(M_{st(\sigma)}^{s},M_{st(\tau)}^{s})$. To see this, suppose $\phi$ (as an element of $End_{R^{q}}(R)$) has an entry $x_{i}^{q}$ in a block $Hom_{R^{q}}(R^{q}\cdot x^{a},R^{q}\cdot x^{b})$ (with all $0\leq a_{j},b_{j}<q$), in other words $\phi(x^{a+cq})=x^{a+(c+1_{i})q+b}$. It follows from 4.5 that this block has image in $End_{R^{pq}}(R)$ contained in $\bigoplus_{0\leq c,d<p}Hom_{R^{pq}}(R^{pq}\cdot x^{a+cq},R^{pq}\cdot x^{b+dq})$, and as $\phi(x^{a+cq})=x^{a+(c+1_{i})q+b}=x^{a+cq+(b+1_{i}q)}$ this yields the entry 1 in the blocks $Hom_{R^{pq}}(R^{pq}\cdot x^{a+cq},R^{pq}\cdot x^{b+1_{i}q})$. In similar fashion an entry with degree $nq$ will yield constant entries somewhere when considered as an $R^{s}$-linear map for $s>q$ a sufficiently large power of $p$.
Now let $s$ be such a sufficiently large power of $p$, and consider $\phi$ as an element of $End_{R^{s}}(R)$; by 4.5, $Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})$ is contained in $\bigoplus_{st(\alpha)\subset st(\sigma),st(\beta)\subset st(\tau)}Hom_{R^{s}}(M_{st(\alpha)}^{s},M_{st(\beta)}^{s})$. We can see that $\phi$, considered as a matrix $(\phi_{ij})$ in $End_{R^{s}}(R)$, will have (among others) some constant entries in each block $End_{R^{s}}(M_{st(\beta)}^{s})$ such that $st(\beta)\subset st(\tau)$. Each of these entries can be “picked out” in the following manner: Let $\mathbf{1}_{ii}$ be the matrix in $End_{R^{s}}(R)$ with the appropriate identity map in position $(i,i)$ and zeroes otherwise. It is clear that $\mathbf{1}_{ii}\cdot\phi\cdot\mathbf{1}_{jj}$ is the matrix with entry $\phi_{ij}$ in position $(i,j)$ and zeroes otherwise; we may assume $\phi_{ij}=1$ as it is constant. Applying permutations of $End_{R^{s}}(M_{st(\beta)}^{s})$ (on both sides), we can now place this entry $1$ wherever we want within the matrix block corresponding to $End_{R^{s}}(M_{st(\beta)}^{s})$; taking sums of these we can produce any matrix with constant entries. In particular, we can make $id_{st(\beta)}^{s}$.
Thus, we have that each $id_{st(\beta)}^{s}$ such that $st(\beta)\subset st(\tau)$ is in $\langle\phi\rangle$, and in the same way any such $id_{st(\beta)}^{t}$ for $t>s$ any larger power of $p$. To recreate $id_{st(\tau)}^{t}$ for smaller powers $t<s$ we observe that those maps, considered as elements of $End_{R^{s}}(R)$, are in $\bigoplus_{st(\beta)\subset st(\tau)}End_{R^{s}}(M_{st(\beta)})$ and as such are contained in the ideal generated by the identity maps $id_{st(\beta)}^{s}$, in other words contained in $\langle\phi\rangle$. We have shown $J(st(\tau))\subset\langle\phi\rangle$; the opposite inclusion follows from the observation that $\phi=id_{st(\tau)}^{q}\circ\phi$, and so $\phi\in J(st(\tau))$.
The final claim is similar: $\phi=\phi\circ id_{st(\sigma)}^{q}$, and so $\phi\in J(st(\sigma))$.
∎
Proposition 4.8.
The ideal $J(st(\sigma),st(\tau))$ is equal to $J(st(\sigma\cup\tau))$, if $\sigma\cup\tau$ is a face of $K$, and the zero ideal otherwise.
Proof.
The module $Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})$ has support $st(\sigma)^{\circ}\cap st(\tau)^{\circ}$. From 2.3$(vi)$ it follows that this is $st(\sigma\cup\tau)^{\circ}$, if $\sigma\cup\tau\in K$.
If $\sigma\cup\tau$ is a non-face, $st(\sigma)\cap st(\tau)$ does not contain any maximal simplices, and so the cone on $st(\sigma)\cap st(\tau)$ is not a union of irreducible components of $Spec(R)$, and so is not the closure of the support of any element in $Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})$, so this must be the zero module. It follows that $J(st(\sigma),st(\tau))$ is the zero ideal.
For the case when $\sigma\cup\tau$ is a face of $K$, recall that by Lemma 4.5,
$$Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})\subset\bigoplus_{st(\alpha)\subset st(\sigma),st(\beta)\subset st(\tau)}Hom_{R^{pq}}(M_{st(\alpha)}^{pq},M_{st(\beta)}^{pq}).$$
In particular, there will be entries in the block $Hom_{R^{pq}}(M_{st(\sigma\cup\tau)}^{pq},M_{st(\sigma\cup\tau)}^{pq})$, so by 4.7 we have that $J(st(\sigma\cup\tau))\subset J(st(\sigma),st(\tau))$.
For the converse, note that as an $R^{q}$-module,
$$Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})\simeq\big((I_{st(\tau)}^{q}:I_{st(\sigma)}^{q})/I_{st(\tau)}^{q}\big)^{m_{st(\sigma)}(q)\times m_{st(\tau)}(q)}$$
(where $I^{q}$ is the restriction of $I\subset R$ to $R^{q}$). Any element of $Hom_{R^{q}}(M_{st(\alpha)}^{q},M_{st(\beta)}^{q})$ has, as a matrix, entries with degree (in each variable) a multiple of $q$, with constant (nonzero) entries only when $st(\beta)\subset st(\alpha)$, as then $(I_{st(\beta)}^{q}:I_{st(\alpha)}^{q})$ is the unit ideal in $R^{q}$ (otherwise it is generated by elements of degree $\geq q$). It follows that elements of the image of $Hom_{R^{q}}(M_{st(\sigma)}^{q},M_{st(\tau)}^{q})$ in $End_{R^{s}}(R)$ for $s>q$ (considered as matrices) have entries with degree some multiple of $s$, with constant (nonzero) entries only in those blocks $Hom_{R^{s}}(M_{st(\alpha)}^{s},M_{st(\beta)}^{s})$ with $st(\beta)\subset st(\alpha)$. In the direct limit, these elements become infinite matrices with entries in $k$; in other words, there can only be nonzero entries in those blocks corresponding to $st(\beta)\subset st(\alpha)$ (any nonzero entry in a different block must have infinite degree, which is impossible). This implies that $J(st(\sigma),st(\tau))$ is contained in $\sum_{st(\sigma)\supset st(\alpha)\supset st(\beta)\subset st(\tau)}J(st(\alpha),st(\beta))$, which by 4.7 is equal to $\sum_{st(\sigma)\supset st(\beta)\subset st(\tau)}J(st(\beta))=J(st(\sigma\cup\tau))$ and we are done.
∎
Theorem 4.9.
The ideals $J(st(\sigma))$ generate the lattice of ideals in $D(R)$ by sums and intersections.
Proof.
Let $I$ be an ideal in $D(R)$; it is of course true in general that $I=\sum_{\phi\in I}\langle\phi\rangle$. By 4.7 and 4.8 this is equal to $\sum J(st(\sigma))$, where the sum goes over all $\sigma\in K$ such that $I$ contains elements from some $Hom_{R^{q}}(M_{st(\alpha)}^{q},M_{st(\sigma)}^{q})$.
Finally, the intersection $J(st(\sigma))\cap J(st(\tau))$ contains elements in those $End_{R^{q}}(M_{st(\alpha)}^{q})$ with $st(\alpha)\subset st(\sigma)\cap st(\tau)$; the maximal such star is $st(\sigma\cup\tau)$ if $\sigma\cup\tau$ is a face of $K$, and if $\sigma\cup\tau$ is not a face, there are no such $\alpha$; in other words $J(st(\sigma))\cap J(st(\tau))=J(st(\sigma\cup\tau))$.
∎
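Under the correspondence of Theorem 4.9, $J(st(\sigma))$ is determined by the set of stars contained in $st(\sigma)$, so the intersection rule can be checked combinatorially. The sketch below (our own encoding, not from the paper) verifies on a small complex that the stars contained in both $st(\sigma)$ and $st(\tau)$ are exactly the stars contained in $st(\sigma\cup\tau)$ when $\sigma\cup\tau\in K$, and that there are none otherwise.

```python
from itertools import combinations

def faces(facets):
    # All faces (including ∅) of the complex generated by the given facets.
    fs = set()
    for F in facets:
        for r in range(len(F) + 1):
            fs.update(frozenset(c) for c in combinations(sorted(F), r))
    return fs

def star(sigma, K):
    # st(sigma, K): the faces tau such that sigma ∪ tau is a face of K.
    return frozenset(t for t in K if (sigma | t) in K)

K = faces([{1, 2}, {1, 3}, {2, 3}, {3, 4}])   # triangle boundary plus a pendant edge
stars = {star(s, K) for s in K}

def ideal(sigma):
    # Combinatorial stand-in for J(st(sigma)): the stars contained in st(sigma).
    return {S for S in stars if S <= star(sigma, K)}

for sigma in K:
    for tau in K:
        meet = ideal(sigma) & ideal(tau)
        if (sigma | tau) in K:
            assert meet == ideal(sigma | tau)      # J(st σ) ∩ J(st τ) = J(st(σ∪τ))
        else:
            assert meet == set()                   # the zero ideal for non-faces
```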
We have now given two essentially different descriptions of the ideals of $D(R)$, and we may wonder how to translate between the two languages. This is not too hard, as the obvious suggestion turns out to be true.
Theorem 4.10.
The ideal $J(st(\sigma))$ is equal to the ideal $\langle x_{\sigma}\rangle$.
Proof.
It follows from 4.7 and 4.8 that $J(st(\sigma))=\bigoplus_{q>0,\;st(\alpha\cup\beta)\subset st(\sigma)}Hom_{R^{q}}(M^{q}_{st(\alpha)},M_{st(\beta)}^{q})$, in other words all the endomorphisms with support contained in $st(\sigma)$. We can think of $x_{\sigma}$ as an endomorphism of $R$, given by $f\mapsto fx_{\sigma}$, and whatever element $f$ we choose, $fx_{\sigma}$ has support contained in $st(\sigma)$. This means that the endomorphism $x_{\sigma}$ is in $J(st(\sigma))$ and not in any larger ideal, and as $x_{\sigma}(1)=x_{\sigma}$ has support equal to $st(\sigma)^{\circ}$, it is not in any smaller ideal $J(st(\tau))$ with $st(\tau)\subset st(\sigma)$. From 4.7 it follows that $x_{\sigma}$ generates all of $J(st(\sigma))$ and the two ideals are equal.
∎
Acknowledgements
I would like to thank my advisor Rikard Bøgvad for all the usual reasons, and I also thank Anders Björner for some helpful remarks.
References
[Bav10a]
V. V. Bavula, Generators and defining relations for the ring of
differential operators on a smooth affine algebraic variety, Algebr.
Represent. Theory 13 (2010), no. 2, 159–187.
[Bav10b]
by same author, Generators and defining relations for the ring of differential
operators on a smooth affine algebraic variety in prime characteristic, J.
Algebra 323 (2010), no. 4, 1036–1051.
[Con96]
Aldo Conca, Hilbert-Kunz function of monomial ideals and binomial
hypersurfaces, Manuscripta Math. 90 (1996), no. 3, 287–300.
[Eri98]
Anders Eriksson, The ring of differential operators of a
Stanley-Reisner ring, Comm. Algebra 26 (1998), no. 12,
4007–4013.
[Hun13]
Craig Huneke, Hilbert-Kunz multiplicity and the F-signature,
Commutative algebra, Springer, New York, 2013, pp. 485–525.
[Kun69]
Ernst Kunz, Characterizations of regular local rings for characteristic
$p$, Amer. J. Math. 91 (1969), 772–784.
[Mon83]
P. Monsky, The Hilbert-Kunz function, Math. Ann. 263
(1983), no. 1, 43–49.
[Mus94]
Ian M. Musson, Differential operators on toric varieties, J. Pure Appl.
Algebra 95 (1994), no. 3, 303–315.
[Sai07]
Mutsumi Saito, Primitive ideals of the ring of differential operators on
an affine toric variety, Tohoku Math. J. (2) 59 (2007), no. 1,
119–144.
[SVdB97]
Karen E. Smith and Michel Van den Bergh, Simplicity of rings of
differential operators in prime characteristic, Proc. London Math. Soc. (3)
75 (1997), no. 1, 32–62.
[Tra99]
William N. Traves, Differential operators on monomial rings, J. Pure
Appl. Algebra 136 (1999), no. 2, 183–197.
[Tri97]
J. R. Tripp, Differential operators on Stanley-Reisner rings, Trans.
Amer. Math. Soc. 349 (1997), no. 6, 2507–2523.
[Yek92]
Amnon Yekutieli, An explicit construction of the Grothendieck residue
complex, Astérisque (1992), no. 208, 127, With an appendix by Pramathanath
Sastry. |
Generalized Kac-Moody Lie algebras, free Lie algebras and
the structure of the Monster Lie algebra
Elizabeth Jurisich
Department of Mathematics, Rutgers University
New Brunswick, NJ 08903
1 Introduction
Generalized Kac-Moody algebras, called Borcherds algebras in
[14], were investigated by R. Borcherds in [2]. We show
that any generalized
Kac-Moody algebra ${g}$ that has no mutually orthogonal
imaginary simple roots can be written as ${g}={u}^{+}\oplus({g}_{J}+{h})\oplus{u}^{-}$, where ${u}^{+}$ and ${u}^{-}$
are subalgebras
isomorphic to free Lie algebras with given generators, and ${g}_{J}$
is a Kac-Moody algebra defined from a symmetrizable Cartan matrix
(see Theorem 5.1).
There is a formula due to Witt that computes the graded dimension of
a free Lie algebra where all of the generators have been assigned
degree one. It is known that Witt’s formula can be extended to other
gradings (e.g., [5]). We present a further generalization of the
formula appearing in [5]. The denominator identity for
${g}$ is obtained
by using this generalization of Witt’s formula
and the denominator identity known for the Kac-Moody
algebra ${g}_{J}$. In this work, we are taking ${g}$ to be the
algebra defined by the appropriate generators and relations, rather
than the quotient of this algebra by its radical. In particular, our
main result and consequent proof of the
denominator identity give a new proof that the radical of a generalized
Kac-Moody algebra of the above type is zero. (We use the
fact that the radical of ${g}_{J}$ is zero, which is Serre’s theorem
in the case that ${g}_{J}$ is finite-dimensional; this is the main
case for us.)
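For a free Lie algebra on $k$ generators, all of degree one, Witt's formula gives the dimension of the degree-$n$ homogeneous component as $\frac{1}{n}\sum_{d\mid n}\mu(d)k^{n/d}$, where $\mu$ is the Möbius function. A quick computational sketch of this standard formula (the code is ours):

```python
def mobius(n):
    # Möbius function via trial factorization.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def witt(k, n):
    # Dimension of the degree-n part of the free Lie algebra on k generators
    # of degree one: (1/n) * sum over d | n of mu(d) * k^(n/d).
    return sum(mobius(d) * k ** (n // d) for d in divisors(n)) // n

# Free Lie algebra on 2 generators: dimensions 2, 1, 2, 3, 6 in degrees 1..5.
assert [witt(2, n) for n in range(1, 6)] == [2, 1, 2, 3, 6]
```

The generalization used in this paper replaces the single degree by a multigrading of the generators, but reduces to the computation above when all generators sit in degree one.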
The most important application of our work is to the Monster Lie
algebra ${m}$, defined by R. Borcherds [4]. In fact, we
show that ${m}={u}^{+}\oplus{g}{l}_{2}\oplus{u}^{-}$, with ${u}^{\pm}$ free Lie algebras.
This result is obtained by applying the above results to a
generalized Kac-Moody algebra
${g}(M)$ defined from a particular matrix $M$, given by the inner
products of the simple roots of ${m}$. Theorem
5.1 applied to this Lie algebra
establishes that the subalgebras ${n}^{\pm}\subset{g}(M)$ are
each the semidirect product of a one-dimensional Lie algebra and a free Lie algebra on countably many
generators. The Lie algebra
${g}(M)$ is shown to be a central
extension of the Monster
Lie algebra ${m}$ (Theorem 6.1) constructed by R.
Borcherds in [4]. By Theorem 6.1, the subalgebras
${m}^{\pm}$ in
${m}={m}^{+}\oplus{h}\oplus{m}^{-}$
are isomorphic to the subalgebras ${n}^{\pm}\subset{g}(M)$. In
this way we
show ${m}$ contains two large subalgebras ${u}^{\pm}$
which are isomorphic to
free Lie algebras, and ${m}={u}^{+}\oplus{g}{l}_{2}\oplus{u}^{-}$. The denominator identity for ${m}$
(see [4]) is obtained in this paper in the manner described
above for
more general ${g}$. In this case ${g}_{J}={s}{l}_{2}$,
and our results
give a new proof that the central extension ${g}(M)$ of ${m}$
has zero radical.
The Monster Lie algebra ${m}$ is of great interest because R.
Borcherds defines and uses this Lie algebra, along with its denominator
identity, to solve the following problem ([4]):
It was conjectured by Conway and Norton in [6] that there
should be
an infinite-dimensional representation of the Monster simple group
such that the McKay-Thompson series of the elements of the Monster group
(that is, the graded traces of the elements of the Monster group as
they act on the module) are equal to some known modular functions
given in [6]. After the
“moonshine module” $V^{\natural}$ for the Monster simple group was
constructed [10] and many of its properties, including the
determination of some of the McKay-Thompson series, were proven in
[11], the nontrivial problem of computing the rest of
the McKay-Thompson series of Monster group elements acting
on $V^{\natural}$ remained. Borcherds has shown in [4]
that the McKay-Thompson series are the expected modular functions.
In this paper, in preparation for our main result, we include a
detailed treatment of some of Borcherds’ work on generalized Kac-Moody
algebras, and of that part of [4] which shows that the
Monster Lie algebra has the properties that we need to prove
Theorem 6.1. We now explain this exposition.
Some results such as character formulas and a denominator identity
known for Kac-Moody algebras are stated in
[2] for generalized Kac-Moody algebras. We found it necessary
to do some extra work in order to understand fully the precise
definitions and also the reasoning which are implicit in Borcherds’
work on this subject. V. Kac in [18] gives an outline (without
detail) of how to rigorously develop the theory of generalized
Kac-Moody algebras by indicating that one should follow the arguments
presented there for Kac-Moody algebras (see also [16]).
Included in [17] is a
detailed exposition of the theory of generalized Kac-Moody algebras,
along the lines of [20], where the homology results of [13]
(not covered in [18]) are extended to these new Lie algebras.
That this can
be done is mentioned and used in [4]. The homology result
gives another proof of the character and denominator formulas
([17]).
We find it appropriate to work with
the extended Lie algebra as in [13] and [20] (that is, the Lie
algebra with suitable degree derivations adjoined). Alternatively, one
can generalize the theorems in [18]. In either of these
approaches the Cartan subalgebra is sufficiently enlarged to make the
simple roots linearly independent and have multiplicity one, just as in
the case of Kac-Moody algebras. Without working in
the extended Lie algebra it does not seem possible to prove the
denominator and character formulas for all generalized Kac-Moody
algebras. This is because the matrix from which we
define the Lie algebra can have linearly dependent columns (as in the
case of $\hat{{s}{l}_{2}}$); we may even have infinitely
many columns equal. Naturally, when it makes sense to do so, we may
specialize formulas obtained involving the root lattice. In this way
we obtain Borcherds’ denominator identity for ${m}$, and show its
relation to our generalization of Witt’s formula.
The crucial link between the Monster Lie algebra and a generalized
Kac-Moody algebra defined from a matrix is provided by Theorem
4.1, which is a theorem given by Borcherds in [3]. Versions
of this theorem also appear in [2] and [4]. Since this
theorem can be stated most neatly in terms of a canonical central
extension of a generalized Kac-Moody algebra (as in [3]) we
include a section on this central extension.
Theorem 4.1 roughly says that a Lie algebra with an
“almost positive definite bilinear form”, like the Monster Lie
algebra, is the homomorphic image of a canonical central extension of
a generalized Kac-Moody algebra. The way that
Theorem 4.1 is stated here and in [3] (as
opposed to [4] where condition 4 is not used)
allows us to conclude that the Monster Lie algebra has a central extension
which is a generalized Kac-Moody algebra defined from a matrix.
We include in this paper a completely elementary proof of Theorem 4.1.
This proof is simpler than the argument
in [2] and the proof indicated in [18], which require the
construction of a Casimir operator.
Here equation (11), which follows immediately from the
hypotheses of the theorem, is used in place of the Casimir operator.
The Monster Lie algebra is defined (see [4]) from the
vertex algebra which is the
tensor product of $V^{\natural}$ and a vertex algebra
obtained from a rank two hyperbolic lattice. This construction is
reviewed in Section 6.2. The infinite-dimensional
representation $V^{\natural}$ of the
Monster simple group constructed in [10] can be given the
structure of a vertex operator algebra, as stated in [1] and
proved in [11]. The theory
of vertex algebras and vertex operator algebras is used in proving
properties of the Monster Lie algebra, so the definition of vertex
algebra and a short
discussion of the properties of vertex algebras are given in this
paper. The “no-ghost” theorem
of string theory is used here, as it is in [4], to obtain an
isomorphism between homogeneous subspaces of the
Monster Lie algebra and the homogeneous subspaces of $V^{\natural}$.
A reformulation of the proof of the no-ghost theorem
as it is given in [4] and
[25] is presented in the appendix of this paper.
This paper is related to the work of S.J. Kang [19], where a root
multiplicity formula for generalized Kac-Moody algebras is proven
by using Lie algebra homology. We recover Kang’s result for the class
of Lie algebras studied in this paper. Other related works include
that of K. Harada, M. Miyamoto, and H. Yamada [16], who present
an exposition of generalized Kac-Moody algebras along the lines of
[18] (their proof of Theorem 4.1 is the proof in
[2] done in complete detail; see above). The recent work of
the physicists R. Gebert and J. Teschner [14] explores the module
theory for some basic examples of generalized Kac-Moody algebras.
I would like to thank Professors James Lepowsky and Robert L.
Wilson for their guidance and many extremely helpful discussions.
2 Generalized Kac-Moody algebras
2.1 Construction of the algebra associated to a matrix
In [2] Borcherds defines the generalized Kac-Moody algebra (GKM)
associated to a matrix. Statements given here without proof have been
shown in detail in [17]. In addition to [2], the reader
may also want to refer to [18] where an outline is given for
extending the arguments given there for Kac-Moody algebras.
The Lie algebra denoted ${g}^{\prime}(A)$ in
[18], which is defined from an arbitrary matrix $A$, is equal to the
generalized Kac-Moody algebra ${g}(A)$, defined below, when
the matrix $A$ satisfies conditions C1–C3 given below.
Remark: In [4]
any Lie algebra satisfying conditions 1–3 of Theorem 4.1 is
defined to
be a generalized Kac-Moody algebra. In this paper the term
“generalized Kac-Moody algebra” will always mean a Lie algebra
defined from
a matrix as in [18].
The theory presented here,
based on symmetric rather than symmetrizable matrices,
can be easily adapted to the case where the matrix
is symmetrizable. We use symmetric matrices in order to be consistent
with the work of R. Borcherds and because the symmetric case is
sufficient for the main applications.
We will begin by constructing a
generalized Kac-Moody algebra associated to a matrix.
All vector spaces are assumed to be over ${R}$.
Let $I$
be a set, at most countable, identified with
${{Z}}_{+}=\{1,2,\ldots\}$ or with $\{1,2,\ldots,k\}$. Let
$A=(a_{ij})_{i,j\in I}$ be a
matrix with entries in ${{R}}$, satisfying the following conditions:
(C1)
$A$ is symmetric.
(C2)
If $i\neq j$ then $a_{ij}~{}\leq~{}0$.
(C3)
If $a_{ii}>0$ then ${2a_{ij}\over a_{ii}}\in{Z}$
for all $j\in I$.
Let ${g}_{0}(A)={g}_{0}$ be the Lie algebra with generators
${h_{i},e_{i},f_{i}}$, where ${i\in I}$, and the following defining
relations:
(R1)
$\left[h_{i},h_{j}\right]=0$
(R2)
$\left[h_{i},e_{k}\right]-a_{ik}e_{k}=0$
(R3)
$\left[h_{i},f_{k}\right]+a_{ik}f_{k}=0$
(R4)
$\left[e_{i},f_{j}\right]-\delta_{ij}h_{i}=0$
for all $i,j,k\in I$.
Let
${h}=\sum_{i\in I}{{R}}h_{i}$. Let
${{n}}_{0}^{+}$ be the subalgebra generated by the $\{e_{i}\}_{i\in I}$
and let ${{n}}_{0}^{-}$ be the subalgebra generated by the
$\{f_{i}\}_{i\in I}$.
The following proposition is proven by the usual methods for
Kac-Moody algebras (see [17] or [18]):
Proposition 2.1
The Lie algebra ${g}_{0}$ has triangular decomposition ${g}_{0}={{n}}_{0}^{-}\oplus{{h}}\oplus{{n}}_{0}^{+}$. The abelian subalgebra ${{h}}$ has
a basis consisting of $\left\{h_{i}\right\}_{i\in I}$, and
${n}^{\pm}_{0}$ is the free Lie algebra generated by the $e_{i}$
(resp. the $f_{i}$) $i\in I$. In particular, $\left\{e_{i},f_{i},h_{i}\right\}_{i\in I}$ is a
linearly independent set in ${g}_{0}$.
∎
For all $i\neq j$ and $a_{ii}>0$ define
$$d^{+}_{ij}=({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}\ \in{{g}}_{0}^{+}$$
(1)
$$d^{-}_{ij}=({\mathrm{ad}}\thinspace f_{i})^{1-2a_{ij}/a_{ii}}f_{j}\ \in{{g}}_{0}^{-}$$
(2)
Let ${k}_{0}^{\pm}\subset{n}_{0}^{\pm}$ be the ideal of
${n}_{0}^{\pm}$ generated by the elements:
$$d^{\pm}_{ij}$$
(3)
$$\left[e_{i},e_{j}\right]\quad\mbox{if }a_{ij}=0\mbox{ (in the case of }{k}_{0}^{+}\mbox{)}$$
(4)
$$\left[f_{i},f_{j}\right]\quad\mbox{if }a_{ij}=0\mbox{ (in the case of }{k}_{0}^{-}\mbox{)}.$$
(5)
Note that if $a_{ii}>0$, then the elements (4) and (5)
are of type (1) and (2). The subalgebra ${k}_{0}={k}^{+}_{0}\oplus{k}^{-}_{0}$ is an ideal of
${g}_{0}$ (for details adapt the proof of Proposition 3.1
in the next section).
Definition 1: The generalized Kac-Moody
algebra ${g}(A)={g}$ associated to the matrix $A$ is the
quotient
of ${g}_{0}$ by the ideal ${k}_{0}={k}^{+}_{0}\oplus{k}^{-}_{0}$.
Remark: In [17] and in [18],
a generalized Kac-Moody algebra is constructed as
a quotient of ${g}_{0}$ by its radical (that is, the largest graded
ideal having trivial intersection with ${h}$). Although
this fact is not used in this paper, the ideal ${k}_{0}$ is equal to
the radical of ${g}_{0}$.
Of course, proving that the radical of the
Lie algebra ${g}(A)$ is zero is not trivial.
It is shown in [17]
that the radical of ${g}(A)$ is zero using results from [12],
[18] and a proposition proven in [2].
Let ${n}^{+}={n}^{+}_{0}/{k}^{+}_{0}$ and ${n}^{-}={n}^{-}_{0}/{k}^{-}_{0}$. Proposition 2.1 implies that
the
generalized Kac-Moody algebra has triangular decomposition ${g}={n}^{+}\oplus{h}\oplus{n}^{-}$.
The Lie algebra ${g}$ is
given by the corresponding generators and relations.
The Lie algebra ${g}(A)$ is a Kac-Moody algebra when the
matrix $A$ is a generalized Cartan matrix.
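As a sanity check on the definition, it may help to look at the two extreme $1\times 1$ cases; this worked example is ours and does not appear in the text.

```latex
% Illustrative example (ours): the two extreme 1x1 cases of Definition 1.
%
% (i) A = (2): condition C3 holds, and no element of type (1)-(5) exists
%     since |I| = 1 and a_{11} > 0, so k_0 = 0 and g(A) = g_0 is spanned
%     by e_1, h_1, f_1 with
\[
  [h_{1},e_{1}]=2e_{1},\qquad [h_{1},f_{1}]=-2f_{1},\qquad
  [e_{1},f_{1}]=h_{1},
\]
% i.e. g(A) is isomorphic to sl_2, the smallest Kac-Moody algebra.
%
% (ii) a_{ii} <= 0 for all i and a_{ij} < 0 for i != j: again no element
%      of type (1)-(5) is defined, so k_0 = 0 and, by Proposition 2.1,
%      n^{+-} = n_0^{+-} is the free Lie algebra on the e_i (resp. f_i).
%      This is the mechanism behind the free subalgebras u^{+-} of
%      Theorem 5.1.
```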
Let $\deg e_{i}=-\deg f_{i}=(0,\ldots,0,1,0,\ldots)$ where $1$ appears
in the $i^{th}$ position, and let $\deg h_{i}=(0,\ldots)$. This induces a Lie
algebra grading by ${Z}^{I}$ on ${g}$. Degree
derivations $D_{i}$ are defined by letting $D_{i}$ act on the degree
$(n_{1},n_{2},\ldots)$ subspace of ${g}$ as multiplication by the
scalar $n_{i}$. Let ${d}$
be the space spanned by the $D_{i}$. We extend
the Lie algebra ${g}$ by taking the semidirect product with
${d}$, so ${g}^{e}={d}\ltimes{g}$. Then ${{h}}^{e}={d}\oplus{h}$ is an
abelian subalgebra of ${g}^{e}$, which acts via scalar multiplication
on each space
${g}(n_{1},n_{2},\ldots)$.
Let $\alpha_{i}\in({h}^{e})^{*}$ for $i\in I$ be defined by the conditions:
$$[h,e_{i}]=\alpha_{i}(h)e_{i}\mbox{ for all }h\in{h}^{e}.$$
Note that $\alpha_{j}(h_{i})=a_{ij}$ for all $i,j\in I$.
Because we have adjoined ${d}$ to ${h}$, the $\alpha_{i}$ are
linearly independent.
For all $\varphi\in({h}^{e})^{*}$ define
$${g}^{\varphi}=\{x\in{g}|[h,x]=\varphi(h)x\ \forall h\in{h}^{e}\}.$$
If $\varphi,\psi\in({h}^{e})^{*}$ then $[{g}^{\varphi},{g}^{\psi}]\subset{g}^{\varphi+\psi}$.
By definition $e_{i}\in{g}^{\alpha_{i}},\mbox{ and }f_{i}\in{g}^{-\alpha_{i}}$ for all $i\in I$.
If all $n_{i}\leq 0$, or all $n_{i}\geq 0$ (only finitely many nonzero),
it can be shown by using the same methods as for Kac-Moody
algebras that:
$${g}^{n_{1}\alpha_{1}+n_{2}\alpha_{2}+\cdots}={g}(n_{1},n_{2},\ldots)$$
and ${g}^{0}={h}$. Therefore,
$${g}=\coprod_{(n_{1},n_{2},\ldots)\atop n_{i}\in{Z}}{g}^{n_{1}\alpha_{1}+n_{2}\alpha_{2}+\cdots}$$
(6)
Definition 2: The roots of ${g}$ are the nonzero
elements $\varphi$
of $({h}^{e})^{*}$ such that ${g}^{\varphi}\neq 0$.
The elements $\alpha_{i}$ are simple roots, and ${g}^{\varphi}$ is
the root space of $\varphi\in({h}^{e})^{*}$.
Denote by $\Delta$ the set of roots and by $\Delta_{+}$ the set of positive
roots, i.e., the roots which are
non-negative integral linear combinations of the $\alpha_{i}$.
Let $\Delta_{-}=-\Delta_{+}$
be the set of negative roots. All of the roots are either
positive or negative.
The algebra ${g}$ has an
automorphism $\eta$ of order $2$ which acts as $-1$ on ${h}$
and interchanges the elements $e_{i}$ and $f_{i}$. By an inductive
argument, as in [18] or [23], we can construct a
symmetric invariant bilinear form on ${g}$ such that ${g}^{\varphi}$ and ${g}^{-\varphi}$ where $\varphi\in\Delta_{+}$ are
nondegenerately paired; however,
the restriction of this form to ${h}$ can be
degenerate. There is a character formula for standard modules
of ${g}$ and a denominator identity (see [2], [17]
and [18]):
$$\prod_{\varphi\in\Delta_{+}}(1-e^{\varphi})^{\dim{g}^{\varphi}}=\sum_{w\in W}(\det w)\sum_{\gamma\in\Omega(0)}(-1)^{l(\gamma)}e^{w(\rho+\gamma)-\rho}$$
(7)
where $\Omega(0)\subset\Delta_{+}\cup\{0\}$ is the set of all $\gamma\in\Delta_{+}\cup\{0\}$ such that $\gamma$ is the sum (of length zero or
greater) of mutually orthogonal imaginary simple roots.
Remark: The denominator formula (7) can
be specialized to the unextended Lie algebra as
long as the resulting specialization is well defined.
3 A canonical central extension
It is useful to consider a certain central extension of the generalized
Kac-Moody algebra.
Working with the central extension defined here
(which is the same as in [3]) will simplify the statement and
facilitate the proof of Theorem 4.1 below. Given a
matrix $A$ satisfying $\bf C1-C3$
let $\hat{g}$ be the Lie algebra with
generators $e_{i},f_{i},h_{ij}$, where $i,j\in I$, and the following relations for all $i,j,k,l\in I$:
(R1${}^{\prime}$)
$\left[h_{ij},h_{kl}\right]=0$
(R2${}^{\prime}$)
$\left[h_{ij},e_{k}\right]-\delta_{i,j}a_{ik}e_{k}=0$ and
$\left[h_{ij},f_{k}\right]+\delta_{i,j}a_{ik}f_{k}=0$
(R3${}^{\prime}$)
$\left[e_{i},f_{j}\right]-h_{ij}=0$
(R4${}^{\prime}$)
$d^{\pm}_{ij}=0\mbox{ for all }i\neq j,a_{ii}>0$
(R5${}^{\prime}$)
If $a_{ij}=0$ then $[e_{i},e_{j}]=0$ and $[f_{i},f_{j}]=0.$
The elements $d_{ij}^{\pm}$ are
defined by (1) and (2).
We will study this Lie algebra by first considering the Lie algebra
$\hat{g}_{0}$ with generators
${h_{ij},e_{i},f_{i}}$, where ${i,j\in I}$, and the defining relations
($\mbox{\bf R1}^{\prime})-(\mbox{\bf R3}^{\prime}$).
Let
$\hat{h}=\sum_{i,j\in I}{{R}}h_{ij}$, let
${\hat{n}}_{0}^{+}$ be the subalgebra generated by the
$e_{i}$, $i\in I$, and let
${\hat{n}}_{0}^{-}$ be the subalgebra generated by the $f_{i}$, $i\in I$.
We shall prove a version of Proposition 2.1.
Lemma 3.1
The elements $h_{ij}$ are zero unless the $i^{th}$ and $j^{th}$ columns
of the matrix $A$ are equal.
Proof:
(also see [3]) The lemma follows from the Jacobi
identity and relations
($\mbox{\bf R1}^{\prime}$), ($\mbox{\bf R2}^{\prime}$). ∎
Proposition 3.1
The Lie algebra ${\hat{g}}_{0}={\hat{n}_{0}}^{-}\oplus{\hat{h}}\oplus{\hat{n}}_{0}^{+}$,
and the abelian Lie algebra $\hat{h}$
has a basis consisting of those $h_{ij}$, $i,j\in I$, for which the
$i^{th}$ and $j^{th}$ columns of $A=(a_{ij})_{i,j\in I}$ are equal.
The subalgebra ${\hat{n}}^{\pm}_{0}$ is the free Lie algebra
generated by the $e_{i}$ (resp. the $f_{i}$), $i\in I$. The set $\left\{e_{i},f_{i}\right\}_{i\in I}\cup\left\{h_{ij}\right\}_{(i,j)\in S}$
is linearly independent, where $S=\{(i,j)\in I\times I\ |\ a_{ki}=a_{kj}\mbox{ for all }k\in I\}$.
Proof: As in the classical case, one constructs a
sufficiently large representation of the Lie algebra. Let
${{h}}$ be the span of the elements
$\left\{h_{ij}\right\}_{i,j\in I}$.
Define $\alpha_{j}\in{{h}}^{*}$ as follows :
$$\alpha_{j}(h_{ik})=\delta_{ik}a_{ij}.$$
Let $X$ be the free associative algebra on the symbols $\left\{x_{i}\right\}_{i\in I}$. Let $\lambda\in{{h}}^{*}$ be such that $\lambda(h_{ij})=0$ whenever
$a_{li}\neq a_{lj}$ for some $l\in I$, i.e., unless the $i^{th}$ and $j^{th}$ columns
of $A$ are equal.
We define a representation of the free Lie algebra ${g}_{F}$ with
generators
$e_{i},f_{i},h_{ij}$ on $X$
by the following actions of the generators:
1.
$h\cdot 1=\lambda(h)$ $\mbox{ for all }h\in{h}$
2.
$f_{i}\cdot 1=x_{i}$ $\mbox{ for all }i\in I$
3.
$e_{i}\cdot 1=0$ $\mbox{ for all }i\in I$
4.
$h\cdot x_{i_{1}}\cdots x_{i_{r}}=(\lambda-\alpha_{i_{1}}-\cdots-\alpha_{i_{r}}%
)(h)x_{i_{1}}\cdots x_{i_{r}}$ for $h\in{h}$
5.
$f_{i}\cdot x_{i_{1}}x_{i_{2}}\cdots x_{i_{r}}=x_{i}x_{i_{1}}x_{i_{2}}\cdots x_%
{i_{r}}$
6.
$e_{i}\cdot x_{i_{1}}\cdots x_{i_{r}}=x_{i_{1}}e_{i}\cdot x_{i_{2}}\cdots x_{i_%
{r}}+(\lambda-\alpha_{i_{2}}-\cdots-\alpha_{i_{r}})(h_{ii_{1}})x_{i_{2}}x_{i_{%
3}}\cdots x_{i_{r}}$.
Let ${\hat{s}}$ be the ideal generated by the elements
$\left[h_{ij},h_{kl}\right]$, $\left[h_{ij},e_{k}\right]-\delta_{i,j}a_{ik}e_{k}$,
$\left[h_{ij},f_{k}\right]+\delta_{i,j}a_{ik}f_{k}$, and $\left[e_{i},f_{j}\right]-h_{ij}$.
Now we will show that the ideal ${\hat{s}}$ annihilates the module $X$:
It is clear that $[h,h^{\prime}]$ is 0 on $X$ for $h,h^{\prime}\in{{h}}$. The
element $[e_{i},f_{j}]-h_{ij}$ also acts as $0$ on $X$ by the following
computation:
$$\displaystyle[e_{i},f_{j}]\cdot x_{i_{1}}\cdots x_{i_{r}}$$
$$\displaystyle=$$
$$\displaystyle e_{i}f_{j}\cdot x_{i_{1}}\cdots x_{i_{r}}-f_{j}e_{i}\cdot x_{i_{%
1}}\cdots x_{i_{r}}$$
$$\displaystyle=$$
$$\displaystyle e_{i}\cdot x_{j}x_{i_{1}}\cdots x_{i_{r}}-x_{j}e_{i}\cdot x_{i_{%
1}}\cdots x_{i_{r}}$$
$$\displaystyle=$$
$$\displaystyle(\lambda-\alpha_{i_{1}}-\cdots-\alpha_{i_{r}})(h_{ij})x_{i_{1}}%
\cdots x_{i_{r}}$$
$$\displaystyle=$$
$$\displaystyle h_{ij}\cdot x_{i_{1}}\cdots x_{i_{r}}.$$
By a similar computation $[h_{ij},f_{k}]+\delta_{ij}a_{ik}f_{k}$
annihilates $X$.
Now consider the action of $[h_{ij},e_{k}]-\delta_{ij}a_{ik}e_{k}$ on $X$:
$$\displaystyle[h_{ij},e_{k}]\cdot 1$$
$$\displaystyle=$$
$$\displaystyle h_{ij}e_{k}\cdot 1-e_{k}h_{ij}\cdot 1$$
$$\displaystyle=$$
$$\displaystyle e_{k}\cdot\lambda(h_{ij})1$$
$$\displaystyle=$$
$$\displaystyle 0$$
and
$$\delta_{ij}a_{ik}e_{k}\cdot 1=0.$$
Thus $[h_{ij},e_{k}]-\delta_{ij}a_{ik}e_{k}$ annihilates $1$.
Furthermore, $[h_{ij},e_{k}]-\delta_{ij}a_{ik}e_{k}$ commutes with the
action of $f_{l}$ for all $l$, as the following computation from
[3] shows:
$$\displaystyle[\ [h_{ij},e_{k}]-\delta_{ij}a_{ik}e_{k},f_{l}\ ]$$
$$\displaystyle=[\ [h_{ij},e_{k}],f_{l}\ ]-[\delta_{ij}a_{ik}e_{k},f_{l}]=[h_{ij%
},h_{kl}]+\delta_{ij}(a_{il}-a_{ik})h_{kl}.$$
By the assumption on $\lambda$, $h_{kl}$ is $0$ on $X$ unless $a_{il}=a_{ik}$, so that the above is zero.
Since any $x_{i_{1}}\cdots x_{i_{r}}=f_{i_{1}}\cdots f_{i_{r}}\cdot 1$, this
means that $[h_{ij},e_{k}]-\delta_{ij}a_{ik}e_{k}$ annihilates $X$.
Now $X$ can be regarded as a $\hat{g}_{0}$-module.
The remainder of the proof follows the classical argument. $\Box$
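The module actions 1–6 above can be checked mechanically. The following sketch (with hypothetical illustrative choices: $A$ the $2\times 2$ Cartan matrix of $sl_{3}$, and $\lambda=0$, which satisfies the condition on $\lambda$) encodes monomials $x_{i_{1}}\cdots x_{i_{r}}$ as index tuples and verifies that $[e_{i},f_{j}]-h_{ij}$ annihilates all short monomials, as in the displayed computation.

```python
from itertools import product

# Hypothetical data for illustration: A is the Cartan matrix of sl(3),
# and lambda = 0 (which vanishes on every h_{ij}, as required).
A = [[2, -1], [-1, 2]]
I = range(len(A))

def alpha(j, i, k):
    # alpha_j(h_{ik}) = delta_{ik} a_{ij}
    return A[i][j] if i == k else 0

def h_act(i, k, vec):
    # action 4: h_{ik} is diagonal with eigenvalue (lambda - sum alpha)(h_{ik})
    out = {}
    for mono, c in vec.items():
        s = -sum(alpha(j, i, k) for j in mono)
        if s:
            out[mono] = out.get(mono, 0) + s * c
    return out

def f_act(i, vec):
    # actions 2 and 5: f_i prepends x_i
    return {(i,) + mono: c for mono, c in vec.items()}

def e_act(i, vec):
    # actions 3 and 6, applied recursively
    out = {}
    for mono, c in vec.items():
        _e_mono(i, mono, c, out)
    return out

def _e_mono(i, mono, c, out):
    if not mono:
        return  # action 3: e_i . 1 = 0
    head, tail = mono[0], mono[1:]
    # scalar term (lambda - alpha_{i_2} - ... - alpha_{i_r})(h_{i,i_1})
    s = -sum(alpha(j, i, head) for j in tail)
    if s:
        out[tail] = out.get(tail, 0) + s * c
    rec = {}
    _e_mono(i, tail, c, rec)  # term x_{i_1} (e_i . x_{i_2} ... x_{i_r})
    for m, cc in rec.items():
        out[(head,) + m] = out.get((head,) + m, 0) + cc

def sub(u, v):
    out = dict(u)
    for m, c in v.items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c}

# verify [e_i, f_j] = h_{ij} on all monomials of length <= 3
for r in range(4):
    for mono in product(I, repeat=r):
        v = {mono: 1}
        for i in I:
            for j in I:
                lhs = sub(e_act(i, f_act(j, v)), f_act(j, e_act(i, v)))
                assert sub(lhs, h_act(i, j, v)) == {}
print("[e_i, f_j] = h_{ij} holds on all monomials of length <= 3")
```

The actions are exactly rules 1–6 specialized to $\lambda=0$; the final loop replays the commutator computation from the proof on every basis monomial of length at most three.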
The following proposition will be
used in the proof of Theorem 4.1.
Proposition 3.2
In $\hat{g}_{0}$, for all $i,j,k\in I$ with $i\neq j$
and $a_{ii}>0$
$$[e_{k},d^{-}_{ij}]=0$$
and
$$[f_{k},d^{+}_{ij}]=0.$$
Proof: It is enough to show the first
formula.
Case 1: Assume $k\neq i$ and $k\neq j$.
Since $h_{ki}$ is central if $k\neq i$
$$\displaystyle({\mathrm{ad}}\thinspace e_{k})({\mathrm{ad}}\thinspace f_{i})x$$
$$\displaystyle=$$
$$\displaystyle[h_{ki},x]+[f_{i},[e_{k},x]]$$
(8)
$$\displaystyle=$$
$$\displaystyle({\mathrm{ad}}\thinspace f_{i})({\mathrm{ad}}\thinspace e_{k})x$$
so
$$\displaystyle[e_{k},({\mathrm{ad}}\thinspace f_{i})^{{-2a_{ij}/a_{ii}}+1}f_{j}]$$
$$\displaystyle=$$
$$\displaystyle({\mathrm{ad}}\thinspace f_{i})^{{-2a_{ij}/a_{ii}}+1}[e_{k},f_{j}]$$
$$\displaystyle=$$
$$\displaystyle({\mathrm{ad}}\thinspace f_{i})^{{-2a_{ij}/a_{ii}}+1}h_{kj}=0.$$
The last equality
holds because $k\neq j$ means $h_{kj}$ is central.
Case 2: Assume $k=i$.
By assumption $a_{ii}>0$, thus $\{e_{i},f_{i},h_{ii}\}$ generate a
Lie algebra isomorphic to ${s}{l}_{2}$. Consider the ${s}{l}_{2}$-module generated by the weight vector $f_{j}$. Then if
$a_{ij}=0$ the result follows from the Jacobi identity and the fact
that $[e_{i},f_{j}]=h_{ij}$ is in the center of the Lie algebra. If
$a_{ij}\neq 0$ then
$$\displaystyle{\mathrm{ad}}\thinspace e_{i}({\mathrm{ad}}\thinspace f_{i})^{-2a%
_{ij}/a_{ii}+1}f_{j}$$
$$\displaystyle=$$
$$\displaystyle(a_{ii}/2)(2a_{ij}/a_{ii}-1)(-{2a_{ij}/a_{ii}})({\mathrm{ad}}%
\thinspace f_{i})^{-2a_{ij}/a_{ii}}f_{j}$$
$$\displaystyle+(-2a_{ij}/a_{ii}+1)({\mathrm{ad}}\thinspace f_{i})^{-2a_{ij}/a_{%
ii}}({\mathrm{ad}}\thinspace h_{ii})f_{j}$$
$$\displaystyle=$$
$$\displaystyle 0.$$
Case 3: Assume $k=j$.
By (8)
$$[e_{j},({\mathrm{ad}}\thinspace f_{i})^{{-2a_{ij}/a_{ii}}+1}f_{j}]=({\mathrm{%
ad}}\thinspace f_{i})^{{-2a_{ij}/a_{ii}}+1}h_{jj}.$$
(9)
Since $[f_{i},h_{jj}]=a_{ji}f_{i}$ it follows immediately that
(9) equals zero if $a_{ij}\leq 0$. $\Box$
Let ${{k}}^{\pm}_{0}$ be the ideal of ${\hat{g}_{0}}^{\pm}$,
respectively, generated by the elements which give relations
$\mbox{\bf R4}^{\prime}$ and $\mbox{\bf R5}^{\prime}$:
$$d^{\pm}_{ij}\mbox{ for all }i\neq j,a_{ii}>0$$
$$[e_{i},e_{j}]\ \mbox{ if }a_{ij}=0\ \mbox{ for }{{k}}^{+}_{0}$$
$$[f_{i},f_{j}]\ \mbox{ if }a_{ij}=0\ \mbox{ for }{{k}}^{-}_{0}.$$
Proposition 3.3
Define ${k}_{0}={k}^{+}_{0}\oplus{k}^{-}_{0}$.
Then ${{k}^{\pm}_{0}}$
and ${k}_{0}$ are ideals of $\hat{g}_{0}$.
Proof: Let $\hat{g}_{0}$ act on itself by the adjoint
representation.
To see that ${k}^{-}_{0}$ is an ideal, first consider the action on the
generators $d^{-}_{ij}$. By Proposition 3.1 we have
$$\sum_{i\neq j}{\cal U}({\hat{g}}_{0})\cdot d^{-}_{ij}=\sum_{i\neq j}{\cal U}({%
\hat{g}}^{-}_{0}){\cal U}({\hat{h}}){\cal U}({\hat{g}}^{+}_{0})\cdot d^{-}_{ij}$$
$$=\sum_{i\neq j}{\cal U}({\hat{g}}^{-}_{0})\cdot d^{-}_{ij}{\subset}{k}^{-}_{0}.$$
The equality holds by Proposition 3.2 and the fact that
$h({\mathrm{ad}}\thinspace f_{i})^{N}f_{j}=\lambda({\mathrm{ad}}\thinspace f_{i%
})^{N}f_{j}$, where $\lambda$ is a scalar.
We must also consider the action on the generators of the form
$[f_{i},f_{j}],i\neq j,a_{ij}=0$.
In this case
$$\displaystyle[e_{k},[f_{i},f_{j}]]$$
$$\displaystyle=$$
$$\displaystyle[h_{ki},f_{j}]+[f_{i},h_{kj}]$$
$$\displaystyle=$$
$$\displaystyle-\delta_{ki}a_{ij}f_{j}+\delta_{kj}a_{ji}f_{i}$$
$$\displaystyle=$$
$$\displaystyle 0$$
i.e.
$$e_{k}\cdot[f_{i},f_{j}]=0.$$
Then by the same argument as above
$$\sum_{i\neq j,\ a_{ij}=0}{\cal U}({\hat{g}}_{0})\cdot[f_{i},f_{j}]\subset{{k}}%
^{-}_{0}.$$
By a symmetric argument, ${k}^{+}_{0}$ is an ideal of $\hat{g}_{0}$, so
${k}_{0}$ is an ideal. $\Box$
The Lie algebra $\hat{g}_{0}/{k}_{0}$ is equal to $\hat{g}$, the Lie algebra
defined above by generators and relations.
Remark: The Lie algebra $\hat{g}$ is called the
universal generalized Kac-Moody algebra in [3].
Let
${c}$ be the ideal of
$\hat{g}$ spanned by the $h_{ij}$
where $i\neq j$; note that these elements are central. The Lie
algebra $\hat{g}$ is a central extension of
the Lie algebra ${g}$, because there is an obvious homomorphism
from $\hat{g}$ to ${g}$ mapping generators to generators with
kernel ${c}$. So the sequence
$0\rightarrow{c}\rightarrow{\hat{g}}\rightarrow{g}\rightarrow 0$, with the surjection given by $x\mapsto\bar{x}$,
is exact. The radical of $\hat{g}$
must be zero because the radical of ${g}$ is zero. We have shown the
following:
Theorem 3.1
The generalized Kac-Moody algebra
${g}$ is isomorphic to ${\hat{g}}/{c}$.
The Lie algebra $\hat{g}$ can be given a ${{Z}}-$gradation
defined by taking $\deg e_{i}=-\deg f_{i}=s_{i}\in{Z}$ where $s_{i}=s_{j}$ if $h_{ij}\neq 0$, and $\deg h_{ij}=0$. The automorphism $\eta$
is well defined
on $\hat{g}$. It follows from Proposition 2.1 and
Proposition 3.1 that
${n}^{\pm}_{0}={\hat{n}}^{\pm}_{0}$.
If ${n}^{+}={\hat{g}_{0}}^{+}/{k}_{0}^{+}$ and ${n}^{-}={\hat{g}_{0}}^{-}/{k}_{0}^{-}$ then
we have the decomposition $\hat{g}={n}^{-}\oplus\hat{h}\oplus{n}^{+}$.
Recall that the Lie algebra ${g}^{e}$ has an
invariant bilinear form
$(\cdot,\cdot)$. It is useful to define an invariant bilinear form
$(\cdot,\cdot)_{\hat{g}}$ on $\hat{g}$.
For $a,b\in\hat{g}$, let $(a,b)_{\hat{g}}=(\bar{a},\bar{b})$. Note that the span of the $h_{ij}$
where $i\neq j$ is
in the radical of the form on $\hat{g}$. The form
$(\cdot,\cdot)_{\hat{g}}$ is symmetric and invariant because the
form on ${g}$ has these properties. Grade $\hat{g}$ by
letting $\deg e_{i}=1=-\deg f_{i}$, and $\deg h_{ij}=0$, so $\hat{g}=\oplus_{n\in{{Z}}}\hat{g}_{n}$, where $\hat{g}_{n}$ is
contained in ${n}^{+}$ if $n>0$, and ${n}^{-}$ if $n<0$. The
form $(\cdot,\cdot)_{\hat{g}}$ is nondegenerate on
$\hat{g}_{n}\oplus\hat{g}_{-n}$, because the map given by the central
extension, $x\mapsto\bar{x}$, is an isomorphism on ${n}^{\pm}$, and
the form defined on ${g}^{e}$ is nondegenerate on ${g}^{e}_{m}\oplus{g}^{e}_{-m}$ for $m\in{Z}_{+}$.
4 Another characterization of GKM algebras
Theorem 4.1 below is a version of Theorem 3.1 appearing in
[2].
Much of the proof of the
following theorem is different from the proof appearing in
[2]. In particular, there is no need to define a Casimir
operator in order to show that $a_{ij}\leq 0$; as seen
below, this follows immediately from condition 3.
Remark: In
[4] Borcherds states, as a converse to the theorem, that the
canonical central extensions $\hat{g}$
satisfy conditions 1–3 below, although we note that the canonical
central extension of a generalized Kac-Moody algebra does not have
to satisfy condition 1 for some matrices. For example, if we
start with the infinite matrix whose
entries are all $-2$, then all of the $e_{i}$ must have the same degree
because of condition 3. Therefore,
there is no way to define a ${Z}-$grading of $\hat{g}$ so that
$\hat{g}_{i}$ is both finite-dimensional and satisfies condition
3.
We also note that the kernel of the map $\pi$ appearing in the
following theorem can be strictly larger than the span of the
$h_{ij}$; cf. the statement of Theorem 4.1 in [4].
Theorem 4.1 (Borcherds)
Let ${g}$ be a Lie algebra satisfying the
following conditions:
1.
${g}$ can be ${Z}$-graded as $\coprod_{i\in{{Z}}}{g}_{i}$, ${g}_{i}$ is finite dimensional if $i\neq 0$, and
${g}$ is diagonalizable with respect to ${g}_{0}$.
2.
${g}$ has an involution $\omega$ which maps ${g}_{i}$
onto ${g}_{-i}$ and acts as $-1$ on noncentral elements of
${g}_{0}$, in particular ${g}_{0}$ is abelian.
3.
${g}$ has a Lie algebra-invariant bilinear form $(\cdot,\cdot)$,
invariant under $\omega$, such that ${g}_{i}$ and ${g}_{j}$ are
orthogonal if $i\neq-j$, and such that the form $(\cdot,\cdot)_{0}$,
defined by $(x,y)_{0}=-(x,\omega(y))$ for $x,y\in{g}$, is positive
definite on ${g}_{m}$ if $m\neq 0$.
4.
${g}_{0}\subset[{g},{g}]$.
Then there is a central extension $\hat{{g}}$ of a generalized
Kac-Moody algebra and a homomorphism, $\pi$, from $\hat{{g}}$ onto
${g}$, such that the kernel of $\pi$ is in the center of $\hat{{g}}$.
Proof:
Generators of the Lie algebra ${g}$
are constructed, as in [2], as follows: For $m>0$, let ${l}_{m}$
be the subalgebra of ${g}$ generated by the ${g}_{n}$ for $0<n<m$, and let ${e}_{m}$ be the orthogonal complement of ${l}_{m}$
in ${g}_{m}$ under $(\cdot,\cdot)_{0}$. To see that ${e}_{m}$ is invariant under ${g}_{0}$ let $x\in{g}_{0},\ y\in{e}_{m},$ and $\ z\in{l}_{m}$. Then $[x,z]\in{l}_{m}$, so $(y,[x,z])_{0}=0$. Since the form $(\cdot,\cdot)_{0}$
satisfies $([x,y],z)_{0}=-(y,[\omega(x),z])_{0}$, i.e., is contravariant,
we have $([x,y],z)_{0}=0$, which implies that $[x,y]\in{e}_{m}$.
The operators induced by the action of ${g}_{0}$ on ${e}_{m}$ commute,
so we can construct a basis of ${e}_{m}$ consisting of weight
vectors with respect to ${g}_{0}$. The form $(\cdot,\cdot)_{0}$ is positive
definite on ${e}_{m}$ so an orthonormal basis can be constructed.
Contravariance of the form ensures that this orthonormal basis also
consists of weight vectors. The union of these bases for all the
${e}_{m}$’s can be indexed by $I={{Z}_{+}}$, in any order, and
will be denoted
$\{e_{i}\}_{i\in I}$. Each ${g}_{n},\ n>0$, is in the Lie algebra
generated by $\{e_{i}\}_{i\in I}$, as is seen by the following induction
on the degree, $n$: For $n=1$, ${g}_{1}={e}_{1}$. Now assume that the ${g}_{n}$ for all $0<n<m$ are
contained in the Lie algebra generated by the $e_{i}$, $i\in I$. The
finite dimensional space ${g}_{m}$
decomposes under $(\cdot,\cdot)_{0}$ as ${g}_{m}={e}_{m}\oplus{e}_{m}^{\perp}$, where ${e}_{m}^{\perp}={l}_{m}\cap{g}_{m}$.
By the induction assumption,
${e}_{m}^{\perp}$ is generated by some of the $e_{i}$’s, and by
construction ${e}_{m}$ has a basis consisting of $e_{i}$’s.
Define $f_{i}=-\omega(e_{i})$, and $h_{ij}=[e_{i},f_{j}]$. The ${g}_{n}$ where $n<0$
are generated by the $f_{i}$, $i\in I$.
The elements $h_{ij}$ can be nonzero only
when $\deg e_{i}=\deg e_{j}$. This is because if $\deg e_{i}>\deg e_{j}$, which
can be assumed without loss of generality, then
$[e_{j},[e_{i},f_{j}]]\in{l}_{\deg e_{i}}$. Thus $([e_{i},f_{j}],[e_{i},f_{j}])_{0}=0$ by contravariance, and $[e_{i},f_{j}]=0$ by the positive
definiteness of $(\cdot,\cdot)_{0}$. Therefore, all of the $h_{ij}$ are
in ${g}_{0}$. By assumption 4, ${g}_{0}$ is generated by
the $h_{ij}$, $i,j\in I$.
Let
$k\in$ rad$(\cdot,\cdot)$ (so $k\in{g}_{0}$).
Then $([k,g],[k,g])_{0}=(k,[\omega(g),[k,g]])_{0}=0$ for all $g\in{g}_{n}$, $n\neq 0$.
Thus $[k,g]=0$ by positive
definiteness, and since $k\in{g}_{0}$, $k$ must also commute with
${g}_{0}$. Therefore the radical of the form $(\cdot,\cdot)$ is in the
center of ${g}$.
If $i\neq j$ then $[e_{i},f_{j}]=h_{ij}$ is contained
in the center of ${g}$ because it is in the radical of the form
$(\cdot,\cdot)$. To see this,
consider $([e_{i},f_{j}],x)$ for a homogeneous $x\in{g}$. We know
by assumption that
this is zero if $x\notin{g}_{0}$. If $x\in{g}_{0}$ then
$([e_{i},f_{j}],x)=(e_{i},[f_{j},x])=c(e_{i},e_{j})_{0}=0$ for some real number $c$,
as $f_{j}$ is a weight vector for ${g}_{0}$, and the $e_{k}$’s are
orthogonal. Thus $([e_{i},f_{j}],x)=0\mbox{ for all }x\in{g}$.
Now it must be shown that the generators constructed above satisfy
the relations $\mbox{\bf R1}^{\prime}$-$\mbox{\bf R5}^{\prime}$
of the central extension of a generalized Kac-Moody algebra, and
the symmetric matrix with entries
$(h_{ii},h_{jj})=a_{ij}$ satisfies conditions C2 - C3.
That $\mbox{\bf R1}^{\prime}$ (that is, $[h_{ij},h_{kl}]=0$), holds is obvious
because
${g}_{0}$ is abelian, and $\mbox{\bf R3}^{\prime}$ is true by definition.
The relations $\mbox{\bf R2}^{\prime}$
are proven by the following argument (cf. [2]):
By construction,
$e_{i}$ is a weight vector of ${g}_{0}$, thus for some real number $c$,
$[h_{lm},e_{i}]=ce_{i}$ and
$$\displaystyle c=c(e_{i},e_{i})_{0}$$
$$\displaystyle=$$
$$\displaystyle([h_{lm},e_{i}],e_{i})_{0}$$
$$\displaystyle=$$
$$\displaystyle([h_{lm},e_{i}],f_{i})$$
$$\displaystyle=$$
$$\displaystyle(h_{lm},[e_{i},f_{i}])=(h_{lm},h_{ii}).$$
Since $h_{ij}$ for $i\neq j$ is in the radical
of the form $(\cdot,\cdot)$, we have $c=\delta_{lm}a_{li}$. Applying
$\omega$ shows the relation for $f_{i}$.
To show $\mbox{\bf R5}^{\prime}$ and C2 let $i\neq j$.
We must prove $a_{ij}\leq 0$,
and if $a_{ij}=0$ then $[e_{i},e_{j}]=0$ and $[f_{i},f_{j}]=0$.
Consider the element $[e_{i},e_{j}]\in{g}$; we have
$$\displaystyle([e_{i},e_{j}],[e_{i},e_{j}])_{0}$$
$$\displaystyle=$$
$$\displaystyle-(e_{i},[f_{j},[e_{i},e_{j}]])_{0}$$
(10)
$$\displaystyle=$$
$$\displaystyle-a_{ij}(e_{i},e_{i})_{0}.$$
The last equality follows from
$$\displaystyle[f_{j},[e_{i},e_{j}]]$$
$$\displaystyle=$$
$$\displaystyle-[e_{j},[f_{j},e_{i}]]-[e_{i},[e_{j},f_{j}]]$$
$$\displaystyle=$$
$$\displaystyle a_{ij}e_{i}.$$
By equation (10) and the
positive definiteness of the form $(\cdot,\cdot)_{0}$ on ${g}_{m},\ m\neq 0,$ we have $a_{ij}\leq 0$, and $a_{ij}=0$ if and only if
$[e_{i},e_{j}]=0$. By applying $\omega$ we also show $[f_{i},f_{j}]=0$ in this case.
For $a_{ii}>0$ the Lie algebra generated by $\{e_{i},f_{i},h_{ii}\}$ is
isomorphic to ${s}{l}_{2}$, after rescaling $e_{i}$ and $h_{ii}$ by
$2/a_{ii}$. For each $j\in I$ the element $f_{j}$ generates an ${s}{l}_{2}$-weight module; the weights must all be integers, so
$[h_{ii},f_{j}]=(-2a_{ij}/a_{ii})f_{j}$ implies that $2a_{ij}/a_{ii}\in{{Z}}$, thus C3 is satisfied. Proposition 3.2,
which holds for ${g}$, shows
that $[e_{k},d_{ij}^{-}]=0$ where $d_{ij}^{-}=({\mathrm{ad}}\thinspace f_{i})^{n+1}f_{j}$ and
$n=-2a_{ij}/a_{ii}$ (note that $n$ is nonnegative).
Contravariance of the form gives us
$$(({\mathrm{ad}}\thinspace f_{i})^{n+1}f_{j},({\mathrm{ad}}\thinspace f_{i})^{n%
+1}f_{j})_{0}=(({\mathrm{ad}}\thinspace f_{i})^{n}f_{j},[e_{i},({\mathrm{ad}}%
\thinspace f_{i})^{n+1}f_{j}])_{0}=0,$$
so that $({\mathrm{ad}}\thinspace f_{i})^{1-2a_{ij}/a_{ii}}f_{j}=0$ by the positive
definiteness of $(\cdot,\cdot)_{0}$. Applying $\omega$ to $d_{ij}^{-}$
gives the relation for $({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}$. This
shows the relations $\mbox{\bf R4}^{\prime}$.
Denote by $\hat{{g}}$ the canonical central extension, defined in
Section $3$,
of the generalized Kac-Moody algebra
associated to the matrix $a_{ij}=(h_{ii},h_{jj})$.
Define a homomorphism $\pi:\hat{{g}}\rightarrow{g}$,
taking the generators $\{e_{i},f_{i},h_{ij}\}$ in $\hat{{g}}$ to the
generators $\{e_{i},f_{i},h_{ij}\}$ in ${g}$. Since the generators of
${g}$ have been shown to satisfy the relations of $\hat{g}$
the map $\pi$ is a homomorphism from $\hat{{g}}$ onto
${g}$. The bilinear form, $(\cdot,\cdot)_{\hat{{g}}}$,
on $\hat{{g}}$ satisfies
$$(e_{i},f_{j})_{\hat{{g}}}=\delta_{ij}=(e_{i},f_{j})$$
$$(h_{ii},h_{jj})_{\hat{{g}}}=a_{ij}=(h_{ii},h_{jj}).$$
The $h_{ij}$ with $i\neq j$ are in the radical of $(\cdot,\cdot)_{\hat{g}}$.
Thus $(x,y)_{\hat{{g}}}=(\pi(x),\pi(y))$ for $x,y\in\hat{g}$, because
$(x,y)_{\hat{{g}}}$ can be reduced using invariance and the
Jacobi identity to some polynomial in the $(e_{i},f_{j})$ and $(h_{ii},h_{jj})$.
Now we determine the kernel of the map $\pi$. Let $a\in{\hat{{g}}}$ be such that $a\neq 0$ and $\pi(a)=0$
in ${g}$. Recall the grading $\hat{g}=\oplus_{n\in{Z}}\hat{g}_{n}$ and the decomposition ${\hat{{g}}}={{n}}^{+}\oplus\hat{{h}}\oplus{{n}}^{-}$; thus we can write $a=a_{+}+a_{0}+a_{-}$
where $a_{\pm}\in{{n}}^{\pm}\ ,a_{0}\in\hat{{h}}$.
Thus $\pi(a)=\pi(a_{+})+\pi(a_{0})+\pi(a_{-})=0$, which is still a direct sum in ${g}$. Therefore,
$\pi(a_{+})=0\ ,\pi(a_{0})=0\ ,\pi(a_{-})=0$. Assume that $a_{+}$ is
homogeneous and nonzero, then for some $n>0$, $a_{+}\in\hat{{g}}_{n}$ and $(a_{+},x)_{\hat{{g}}}=(\pi(a_{+}),\pi(x))=0\mbox{ for all }x\in\hat{{g}}_{-n}$.
Since $(\cdot,\cdot)_{\hat{{g}}}$ is nondegenerate on $\hat{{g}}_{n}\oplus\hat{{g}}_{-n}$ we have $a_{+}=0$.
By a similar argument $a_{-}=0$. Thus $a=a_{0}\in\hat{{h}}$, and
$[a,h]=0$ for all $h\in\hat{h}$. Since $\pi(a)=0$, we have
$$(a,h_{ii})_{\hat{{g}}}=(\pi(a),\pi(h_{ii}))=0\mbox{ for all }i$$
so
$$[a,e_{i}]=(a,h_{ii})_{\hat{{g}}}e_{i}=0\mbox{ for all }i$$
similarly $[a,f_{i}]=0$ for all $i\in I$. Thus $a$ is in the center of
$\hat{{g}}$. $\Box$
If the radical of the form $(\cdot,\cdot)$ is zero then the elements
$h_{ij}$, $i\neq j$, are all zero and we have a homomorphism to ${g}$ from a
generalized
Kac-Moody algebra, for which character and denominator formulas
have been
established. By construction of the monster Lie algebra in
[3],
which is also discussed later in this paper, the
radical of the invariant form is zero, so that the following corollary
will apply to this algebra.
Corollary 4.1
Let ${g}$ be a Lie algebra satisfying the
conditions in Theorem 4.1. If the radical of the form on
${g}$ is zero then there is a generalized Kac-Moody
algebra ${l}$ such that ${l}/{c}={g}$, where
${c}$ is the center of ${l}$.
5 Free subalgebras of GKM algebras
5.1 Free Lie algebras
Denote by $L(X)$ the free Lie algebra on a set $X$; if $W$ is
a vector space with basis $X$ then we also use the notation
$L(W)=L(X)$. We will assume that $X$ is
finite or countably infinite.
Let $\nu=(\nu_{1},\nu_{2},\ldots)$ be an $m-$tuple where $m\in{Z}_{+}\cup\{\infty\}$, and $\nu_{i}\in{N}$ satisfy
$\nu_{i}=0$ for $i$ sufficiently large.
We will use the notation $|\nu|=\sum_{i=1}^{m}\nu_{i}$.
If $T_{i}$ are indeterminates, let
$$T^{\nu}=\prod_{i\in{Z}_{+}}T_{i}^{\nu_{i}}\in{R}[T_{1},\ldots,T_{m}]\mbox{ or %
}{R}[T_{1},T_{2},\ldots].$$
We will consider gradings of the Lie algebra $L(X)$ of the following
type: Let $\Delta={Z}^{m}$ and assign to each element
$x\in X$ a degree $\alpha\in\Delta$, i.e., specify a map $\phi:X\rightarrow\Delta$. Let
$n_{\alpha}$ be the number of elements of $X$ of
degree $\alpha$ and assume this is finite for all $\alpha\in\Delta$.
This defines a grading of $L(X)$ which we denote $L(X)=\coprod_{(\alpha\in\Delta)}L^{\alpha}(X)$.
Let $d(\alpha)=\dim L^{\alpha}(X)$. We assume that $d(\alpha)$ is finite.
An example of this type of
grading is the multigradation from Bourbaki [5], which is
defined as follows:
Enumerate the set $X$ so that $X=\{x_{i}\}_{i\in I}$ where $I={Z}_{+}$ or $\{1,\ldots,m\}$. Denote
by $\Delta_{|X|}$ the group of $|X|$-tuples of integers.
Let $\sigma:X\rightarrow\Delta_{|X|}$ be defined by $\sigma(x_{i})=\epsilon_{i}=(0,\ldots,0,1,0,\ldots)$
where the $1$ appears in position $i$. The map $\sigma$ defines the
multigradation of $L(X)$.
Given such a grading determined by the map $\phi$ and the group
$\Delta={Z}^{m}$, if
$\Delta^{\prime}={Z}^{n}$ and if $\psi:\Delta\rightarrow\Delta^{\prime}$ is a homomorphism, then $\psi\circ\phi:X\rightarrow\Delta^{\prime}$ also defines
a grading of the Lie algebra $L(X)$. It is clear that if $\alpha\in\Delta^{\prime}$ then $d(\alpha)=\sum_{\beta\in\Delta\atop\psi(\beta)=\alpha}d(\beta)$ as
$L^{\alpha}(X)=\coprod_{\beta\in\Delta\atop\psi(\beta)=\alpha}L^{\beta}(X)$.
Notice that $d(\alpha)$ is zero unless $\alpha$ has all nonnegative or
all nonpositive entries, if the degree of each element of $X$ has this
property.
The map $\psi$ induces a
homomorphism from ${R}[T_{1},\ldots,T_{m}]$ to ${R}[T_{1},\ldots T_{n}]$, where if $\alpha\in\Delta$ then $T^{\alpha}\mapsto T^{\psi(\alpha)}$.
Proposition 5.1
Let $L(X)=\coprod_{(\alpha\in\Delta)}L^{\alpha}(X)$ be a grading
of the above type. Then
$$1-\sum_{\alpha\in\Delta}n_{\alpha}T^{\alpha}=\prod_{\alpha\in\Delta\backslash%
\{0\}}(1-T^{\alpha})^{d(\alpha)}.$$
(11)
Proof:
If we
consider the multigradation of $L(X)$ by $\Delta_{|X|}$, so $L(X)=\coprod_{\beta\in\Delta_{|X|}}L^{\beta}(X)$, the following formula
is proven in [5]
(for finite $X$, but this immediately implies the result for an
arbitrary $X$):
$$1-\sum_{i\in I}T_{i}=\prod_{\beta\in\Delta_{|X|}\backslash\{0\}}(1-T^{\beta})^%
{d(\beta)}.$$
(12)
This implies a formula for the more general type of grading given
by a map $\phi:X\rightarrow\Delta$, as above. Any such map
satisfies $\phi=\phi^{\prime}\circ\sigma$ where $\phi^{\prime}:\Delta_{|X|}\rightarrow\Delta$ is given by:
$$\epsilon_{i}\mapsto\phi(x_{i}).$$
Applying the homomorphism $\phi^{\prime}$ to the identity (12)
gives the proposition. $\Box$
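As a sanity check on (11), one can specialize to the classical single-variable case: all $|X|=m$ generators in degree $1$, so the left side is $1-mT$ and $d(n)$ is given by the Witt formula $d(n)=\frac{1}{n}\sum_{k|n}\mu(n/k)m^{k}$. The sketch below (illustrative Python, not part of the proof) verifies numerically that $\prod_{n\geq 1}(1-T^{n})^{d(n)}\equiv 1-mT$ modulo $T^{N+1}$.

```python
# Numerical check of identity (11) in the classical case:
# m generators, all of degree 1, truncating power series at degree N.
N, m = 12, 2

def mobius(n):
    # Moebius function via trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def witt(n):
    # Witt dimension d(n) = (1/n) sum_{k|n} mu(n/k) m^k
    total = sum(mobius(n // k) * m**k for k in range(1, n + 1) if n % k == 0)
    assert total % n == 0
    return total // n

def mul_trunc(p, q):
    # multiply coefficient lists, discarding terms past degree N
    out = [0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and b and i + j <= N:
                out[i + j] += a * b
    return out

prod = [1] + [0] * N
for n in range(1, N + 1):
    factor = [1] + [0] * N
    factor[n] = -1  # the factor (1 - T^n)
    for _ in range(witt(n)):
        prod = mul_trunc(prod, factor)

assert prod[:2] == [1, -m] and all(c == 0 for c in prod[2:])
print("identity (11) holds mod T^%d for m = %d generators" % (N + 1, m))
```

Factors with $n>N$ contribute $1$ modulo $T^{N+1}$, so the truncation loses nothing.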
Some of our results will follow from the elimination theorem in
[5], which is restated here for the convenience of the reader.
Lemma 5.1 (elimination theorem)
Let $X$ be a set, $S$ a subset of
$X$ and $T$ the set of sequences $(s_{1},\ldots,s_{n},x)$ with $n\geq 0,s_{1},\ldots,s_{n}$ in $S$ and $x$ in $X\backslash S$.
(a)
The Lie algebra $L(X)$ is the direct sum as a vector space
of the subalgebra
$L(S)$ of $L(X)$ and the ideal ${a}$ of $L(X)$ generated by
$X\backslash S$.
(b)
There exists a Lie algebra isomorphism $\phi$ of $L(T)$
onto ${a}$ which maps $(s_{1},\ldots,s_{n},x)$ to $(\mbox{\em ad\thinspace}s_{1}\cdots\mbox{\em ad\thinspace}s_{n})(x)$.
$\Box$
As Bourbaki [5] does for two particular gradings, we
obtain formulas for computing the
dimension of the homogeneous subspaces of a free Lie algebra $L(X)$
graded as above by a group $\Delta$. The formulas
derived here relate the dimension of the piece of degree $\alpha$ with
the number of generators in that degree (which is assumed finite).
If $\beta\in\Delta$ can be partitioned $\beta=\sum a_{\alpha}\alpha$, where $a_{\alpha}\in{N}$ and $\alpha\in\Delta$, then define the partitions $P(\beta,j)=\{a=(a_{\alpha})_{\alpha\in\Delta}|\beta=\sum a_{\alpha}\alpha,|a|=j\}$ and $P(\beta)=\cup_{j}P(\beta,j)$.
Taking $\log$ of both sides of formula (11) leads to the
equations:
$$-\log(1-\sum_{\alpha\in\Delta}n_{\alpha}T^{\alpha})=\sum_{j\geq 1}{1\over j}(%
\sum_{\alpha\in\Delta}n_{\alpha}T^{\alpha})^{j}=\sum_{j\geq 1}{1\over j}\sum_{%
\beta\in\Delta}\sum_{a\in P(\beta,j)}{|a|!\over a!}\prod_{\alpha\in\Delta}n_{%
\alpha}^{a_{\alpha}}T^{\beta}$$
and
$$-\sum_{\alpha\in\Delta}d(\alpha)\log(1-T^{\alpha})=\sum_{\alpha\in\Delta,k\geq
1%
}{1\over k}d({\alpha})T^{k\alpha}=\sum_{\beta}\sum_{k|\beta}{1\over k}d(\beta/%
k)T^{\beta}.$$
Thus if $\gamma|\beta$ means $\beta=k\gamma$ for some $k\in{Z}_{+}$, then
$$\sum_{\gamma|\beta}{\gamma\over\beta}d(\gamma)=\sum_{a\in P(\beta)}{(|a|-1)!%
\over a!}n^{a}$$
where $n^{a}$ denotes the product of the $n_{\alpha}^{a_{\alpha}}$.
Applying the Möbius inversion formula gives:
Proposition 5.2
Let $\Delta$ be a grading of the free Lie algebra
$L(X)$ as in Proposition 5.1. If $d(\beta)$ is the
dimension of $L^{\beta}(X)$ for $\beta\in\Delta$ then
$$d(\beta)=\sum_{\gamma|\beta}({\gamma\over\beta})\mu({\beta\over\gamma})\sum_{a%
\in P(\gamma)}{(|a|-1)!\over a!}n^{a}.$$
(13)
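To illustrate (13), consider the free Lie algebra on two generators $x,y$ with $\deg x=(1,0)$ and $\deg y=(0,1)$. Then each $\gamma=(p,q)$ has exactly one partition into degrees of generators, with inner sum $(p+q-1)!/(p!\,q!)$, and $\gamma|\beta$ means $k$ divides both entries. The sketch below (illustrative Python) computes $d(p,q)$ from (13) and cross-checks the totals $\sum_{p+q=n}d(p,q)$ against the one-variable Witt numbers for two generators.

```python
from fractions import Fraction
from math import factorial, gcd

def mobius(n):
    # Moebius function via trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def d(p, q):
    """Formula (13) for two free generators of degrees (1,0), (0,1):
    gamma = (p/k, q/k) has the single partition a = (p/k, q/k)."""
    if p == 0 and q == 0:
        return 0
    g = gcd(p, q)  # k | beta iff k divides both entries (gcd(p, 0) = p)
    total = Fraction(0)
    for k in range(1, g + 1):
        if g % k == 0:
            pk, qk = p // k, q // k
            inner = Fraction(factorial(pk + qk - 1),
                             factorial(pk) * factorial(qk))
            total += Fraction(mobius(k), k) * inner
    assert total.denominator == 1  # the dimension is an integer
    return int(total)

def witt2(n):
    # classical Witt formula for the degree-n part, two generators
    return sum(mobius(n // k) * 2**k
               for k in range(1, n + 1) if n % k == 0) // n

# the bigraded dimensions refine the classical ones
for n in range(1, 9):
    assert sum(d(p, n - p) for p in range(n + 1)) == witt2(n)
print([d(1, 1), d(2, 1), d(2, 2), d(3, 2), d(3, 3)])
```

For instance $d(2,2)=\frac{3!}{2!\,2!}-\frac{1}{2}\cdot\frac{1!}{1!\,1!}=1$, matching the single Lyndon basis element in bidegree $(2,2)$.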
5.2 Applications to generalized Kac-Moody algebras
In this section we will show that certain generalized Kac-Moody
algebras contain large subalgebras which are isomorphic to
free Lie algebras. We will apply the results of the preceding section
to these examples.
We will begin with an easy example. Let $A$ be a matrix
satisfying conditions C1-C3 which has no $a_{ii}>0$, and all
$a_{ij}<0$ for $i\neq j$ (this means that no two imaginary simple roots
are orthogonal). In this case, the generalized Kac-Moody
algebra ${g}(A)$ is equal to ${g}_{0}(A)$. By
Proposition 2.1, ${g}={n}^{+}\oplus{h}\oplus{n}^{-}$ where ${n}^{\pm}$ are the free Lie algebras on the
sets $\{e_{i}\}_{i\in I}$ and $\{f_{i}\}_{i\in I}$ respectively.
Remark: Formula (11) can be applied to the
root grading
of ${g}(A)$ to obtain the denominator identity for ${g}(A)$.
The root multiplicities are given by (13).
If the imaginary simple roots of a generalized Kac-Moody algebra
are not mutually orthogonal then ${n}^{\pm}$ are not in general
free, but we
will show they contain ideals which are isomorphic to free Lie
algebras. First
we will set up some notation. Let $J\subset I$ be the set $\{i\in I|\alpha_{i}\in\Delta_{R}\}=\{i\in I|a_{ii}>0\}$. Note that the
matrix $(a_{ij})_{i,j\in J}$ is a generalized Cartan matrix; let
${g}_{J}$ be the Kac-Moody algebra associated to this matrix.
Then ${g}_{J}={n}^{+}_{J}\oplus{h}_{J}\oplus{n}_{J}^{-}$, and
${g}_{J}$ is isomorphic to the subalgebra of ${g}(A)$ generated
by $\{e_{i},f_{i}\}$ with $i\in J$.
Theorem 5.1
Let $A$ be a matrix satisfying conditions C1-C3. Let $J$ and
${g}_{J}$ be as above. Assume
that if $i,j\in I\backslash J$ and $i\neq j$ then $a_{ij}<0$.
Then
$${g}(A)={u}^{+}\oplus({g}_{J}+{h})\oplus{u}^{-},$$
where
${u}^{-}=L(\coprod_{j\in I\backslash J}{\cal U}({n}^{-}_{J})\cdot f_{j})$
and
${u}^{+}=L(\coprod_{j\in I\backslash J}{\cal U}({n}^{+}_{J})\cdot e_{j})$.
The ${\cal U}({n}^{-}_{J})\cdot f_{j}$ for $j\in I\backslash J$ are integrable
highest weight ${g}_{J}$-modules, and the ${\cal U}({n}^{+}_{J})\cdot e_{j}$
are integrable lowest weight ${g}_{J}$-modules.
Note that the conditions on the $a_{ij}$ given in the theorem are
equivalent to the statement that the Lie algebra has no mutually
orthogonal imaginary simple roots.
Proof: We will consider ${n}^{+}$; the case of
${n}^{-}$ is shown by a similar argument or by applying the
automorphism $\eta$.
By the construction in Section 2, we have
$${n}^{+}=L(\{e_{i}\}_{i\in I})/{k}_{0}^{+},$$
where
${k}_{0}^{+}$ is generated as an ideal of $L(\{e_{i}\}_{i\in I})$
by the elements
$$\{({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}\ |\ i,j\in J,i\neq j\}$$
and
$$\{({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}\ |\ i\in J,j\in I%
\backslash J\}.$$
This is because there are no elements of type (4).
Apply the elimination theorem to the free Lie algebra $L(\{e_{i}\}_{i\in I})$ with $S=\{e_{i}\}_{i\in J}$. Thus
$$L(\{e_{i}\}_{i\in I})=L(\{e_{i}\}_{i\in J})\ltimes{a}$$
where the ideal ${a}$ is isomorphic to the free Lie algebra on the
set $X=\{{\mathrm{ad}}\thinspace e_{i_{1}}{\mathrm{ad}}\thinspace e_{i_{2}}\cdots{%
\mathrm{ad}}\thinspace e_{i_{k}}e_{j}\ |\ j\in I\backslash J\mbox{ and }i_{m}%
\in J\}$. Let $W$ denote the vector
space with basis $X$, so that ${a}\cong L(W)$.
Observe that as an ${h}^{e}$-module
$W\cong\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}$,
where ${l}$ denotes the free Lie algebra
$L(\{e_{i}\}_{i\in J})$.
For each fixed $j\in I\backslash J$ consider the submodule
$$R_{j}=\coprod_{i\in J}{\cal U}({l})({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/%
a_{ii}}e_{j}\subset{\cal U}({l})e_{j}.$$
Thus, identifying quotient spaces with subspaces of $W$,
$$\displaystyle W$$
$$\displaystyle=$$
$$\displaystyle\coprod_{j\in I\backslash J}({\cal U}({l})e_{j}/R_{j}\oplus R_{j})$$
$$\displaystyle=$$
$$\displaystyle\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}/R_{j}\oplus\coprod%
_{j\in I\backslash J}R_{j}$$
Now apply the elimination theorem to the Lie algebra
$L(X)=L(W)$, choosing a basis of $W$ of the form
$S_{1}\cup S_{2}$ where $S_{1}$ is a basis of the vector space
$\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}/R_{j}$ and
$S_{2}$ is a basis of $\coprod_{j\in I\backslash J}R_{j}$.
We obtain
$$L(W)=L(\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}/R_{j})\ltimes{b}$$
where ${b}$ is the ideal of $L(W)$ that is generated by $S_{2}$,
i.e., by $\coprod_{j\in I\backslash J}R_{j}$.
So
$$L(\{e_{i}\}_{i\in I})=L(\{e_{i}\}_{i\in J})\ltimes L(\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}/R_{j})\ltimes{b}.$$
Let ${k}_{J}^{+}$ be the ideal of ${l}$ generated
by
$$\{({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}\ |\ i,j\in J,i\neq j\},$$
then ${\cal U}({n}_{J}^{+})={\cal U}({l}/{k}_{J}^{+})={\cal U}({l})/{\cal K}$, where $\cal K$ denotes the ideal of
${\cal U}({l})$ generated by ${k}_{J}^{+}\subset{\cal U}({l})$.
Thus we can decompose the vector space
$${\cal U}({l})e_{j}/R_{j}={\cal U}({n}_{J}^{+})e_{j}/\coprod_{i\in J}{\cal U}({%
n}_{J}^{+})({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}\oplus{\cal K%
}e_{j}.$$
Applying the elimination theorem once again,
using the above decomposition we obtain:
$$L(\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}/R_{j})=L\left(\coprod_{j\in I%
\backslash J}\left({\cal U}({n}_{J}^{+})e_{j}/\coprod_{i\in J}{\cal U}({n}_{J}%
^{+})({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}\right)\right)%
\mathchar 9582\relax{c}$$
where ${c}$ is the ideal in $L(\coprod_{j\in I\backslash J}{\cal U}({l})e_{j}/R_{j})$
generated by the sum of the ${\cal K}e_{j}$. Each ${h}^{e}$-module
${\cal U}({n}_{J}^{+})e_{j}/\coprod_{i\in J}{\cal U}({n}_{J}^{+})({\mathrm{ad}}\thinspace e_{i})^{1-2a_{ij}/a_{ii}}e_{j}$
is an integrable lowest weight module for the Lie algebra ${g}_{J}$,
denoted by ${\cal U}({n}_{J}^{+})\cdot e_{j}$, with lowest weight $\alpha_{j}$.
Thus we have a decomposition into semidirect
products:
$$L(\{e_{i}\}_{i\in I})=L(\{e_{i}\}_{i\in J})\ltimes\left[\left(L(\coprod_{j\in I\backslash J}{\cal U}({n}_{J}^{+})\cdot e_{j})\ltimes{c}\right)\ltimes{b}\right].$$
It is clear that, as ideals of $L(\{e_{i}\}_{i\in I})$, ${b},{c}\subset{k}_{0}^{+}$, and
${k}_{J}^{+}$ is ${k}_{0}^{+}\cap L(\{e_{i}\}_{i\in J})$.
Therefore, since all elements of ${k}_{0}^{+}$ are zero in
$L(\coprod_{j\in I\backslash J}{\cal U}({n}_{J}^{+})\cdot e_{j})$,
$$L(\{e_{i}\}_{i\in I})/{k}_{0}^{+}=L(\{e_{i}\}_{i\in J})/{k}_{J}^{+}\ltimes L(\coprod_{j\in I\backslash J}{\cal U}({n}_{J}^{+})\cdot e_{j}).$$
By the definition of ${g}_{J}$, ${n}^{+}_{J}=L(\{e_{i}\}_{i\in J})/{k}_{J}^{+}$.
$\Box$
Corollary 5.1
Let $A$ be a matrix satisfying conditions C1-C3.
Assume that the matrix $A$ has only one positive diagonal entry,
$a_{ii}>0$, and that if $a_{mj}=0$ then $m=i$, $j=i$, or $m=j$.
Let $S=\{(\mbox{\em ad\thinspace}e_{i})^{l}e_{j}\}_{0\leq l\leq-2a_{ij}/a_{ii}}$.
The subalgebra ${n}^{+}\subset{g}(A)$ is the
semidirect product of a one-dimensional Lie algebra and a free Lie
algebra,
${n}^{+}={R}e_{i}\oplus L(S).$ Similarly, ${n}^{-}={R}f_{i}\oplus L(\eta(S)).$ Thus
$${g}(A)=L(S)\oplus({s}{l}_{2}+{h})\oplus L(\eta(S)).$$
The root grading is a grading of the type
considered in Proposition 5.1 because we have the correspondence
$\alpha_{i}\mapsto(0,\ldots,1,\ldots,0)$, where $1$ appears in the
$i^{th}$ place. This is the grading (6). The denominator
formula given in the next result is the same as (7), after
the change of variables $e^{\alpha}\mapsto T^{\alpha}$.
Corollary 5.2
Let $A$ be as in Theorem 5.1, $n_{\alpha}$ and $T^{\alpha}$ as
in Proposition 5.1. Let $\Delta^{J}_{+}=\Delta\cap\sum_{i\in J}{Z}_{+}\alpha_{i}$. The Lie algebra ${g}(A)$ has
denominator formula given by (11) and the denominator
formula for ${g}_{J}$:
$$\prod_{\varphi\in\Delta_{+}}(1-T^{\varphi})^{\dim{g}^{\varphi}}=\prod_{\varphi\in\Delta_{+}^{J}}(1-T^{\varphi})^{\dim{g}_{J}^{\varphi}}\cdot\prod_{\varphi\in\Delta_{+}\backslash\Delta_{+}^{J}}(1-T^{\varphi})^{\dim L^{\varphi}(\coprod_{j\in I\backslash J}{\cal U}({n}_{J}^{+})\cdot e_{j})}$$
$$=\left(\sum_{w\in W}(-1)^{l(w)}T^{w\rho-\rho}\right)\left(1-\sum_{\varphi\in\Delta_{+}\backslash\Delta_{+}^{J}}n_{\varphi}T^{\varphi}\right).$$
Obtaining the denominator formula in this way provides an alternative
proof that the radical of the Lie algebra ${g}$ associated to the
matrix $A$ is zero. This is because Corollary 5.2
uses only the description
of ${g}$ in terms of generators and relations, while the previous proof
of the denominator formula (see [17] or [18]) is valid
after the radical of the Lie algebra has been factored out.
Thus we have shown, for the particular type of matrix in
Theorem 5.1:
Corollary 5.3
Let $A$ be as in Theorem 5.1. The generalized Kac-Moody
algebra ${g}(A)$ has zero radical.
Remark: A proof of the fact that any generalized
Kac-Moody algebra has zero radical can be found in [16] or
[17]. In both cases the argument of [12] or [18] is
extended to
include generalized Kac-Moody algebras by making use of a lemma
appearing in [2].
Remark: If we apply (13) to
$L(\coprod_{j\in I\backslash J}{\cal U}({n}_{J}^{+})\cdot e_{j})$ we
obtain Kang’s [19] multiplicity formulas for the special case
of generalized
Kac-Moody algebras with no mutually orthogonal imaginary simple
roots.
5.3 A Lie algebra related to the modular function $j$
We will apply our results to an important
example of a generalized Kac-Moody algebra ${g}(M)$, defined
in Section 6.2 below.
It will be shown that, if ${c}$ denotes the center of
the Lie algebra ${g}(M)$, then ${g}(M)/{c}$ is the Monster
Lie algebra.
Recall that the modular function $j$ has the expansion $j(q)=\sum_{i\in{Z}}c(i)q^{i}$, where $c(i)=0$ if $i<-1$, $c(-1)=1$,
$c(0)=744$, $c(1)=196884$. Let $J(v)=v^{-1}+\sum_{i\geq 1}c(i)v^{i}$ be
the formal Laurent series associated to $j(q)-744$.
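As a check on these values, the coefficients $c(i)$ can be recovered from the classical formula $j=E_{4}^{3}/\Delta$. The following sketch (the $q$-expansions $E_{4}=1+240\sum_{n\geq 1}\sigma_{3}(n)q^{n}$ and $\Delta=q\prod_{n\geq 1}(1-q^{n})^{24}$ are standard facts, not part of this paper, and the helper names are ours) computes the first few coefficients by truncated integer power series:

```python
# Coefficients c(n) of the modular function j(q) = E4(q)^3 / Delta(q),
# via truncated integer power-series arithmetic.
N = 6  # keep coefficients of q^0 .. q^N of q*j(q)

def sigma3(n):
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    out = [0] * (N + 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j <= N:
                out[i + j] += x * y
    return out

# E4(q) = 1 + 240 sum_{n>=1} sigma_3(n) q^n
E4 = [1] + [240 * sigma3(n) for n in range(1, N + 1)]
E4cubed = mul(E4, mul(E4, E4))

# Delta(q)/q = prod_{n>=1} (1 - q^n)^24
D = [1] + [0] * N
for n in range(1, N + 1):
    for _ in range(24):
        D = [D[i] - (D[i - n] if i >= n else 0) for i in range(N + 1)]

# invert Delta(q)/q (its constant term is 1)
Dinv = [1] + [0] * N
for n in range(1, N + 1):
    Dinv[n] = -sum(D[k] * Dinv[n - k] for k in range(1, n + 1))

# q*j(q) = E4^3 / (Delta(q)/q), so the coefficient of q^n is c(n-1)
qj = mul(E4cubed, Dinv)
c = {n - 1: qj[n] for n in range(N + 1)}
print(c[1])  # 196884
```

This reproduces $c(-1)=1$, $c(0)=744$, $c(1)=196884$ and the further coefficients used below.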
Let $M$ be the symmetric matrix of blocks indexed by
$\{-1,1,2,\ldots\}$, where the block in position $(i,j)$ has entries
$-(i+j)$ and size $c(i)\times c(j)$. Thus
$${M}=\left(\begin{array}{c|c|c|c}2&\begin{array}{ccc}0&\cdots&0\end{array}&\begin{array}{ccc}-1&\cdots&-1\end{array}&\cdots\\
\hline\begin{array}{c}0\\ \vdots\\ 0\end{array}&\begin{array}{ccc}-2&\cdots&-2\\ \vdots&\ddots&\vdots\\ -2&\cdots&-2\end{array}&\begin{array}{ccc}-3&\cdots&-3\\ \vdots&\ddots&\vdots\\ -3&\cdots&-3\end{array}&\cdots\\
\hline\begin{array}{c}-1\\ \vdots\\ -1\end{array}&\begin{array}{ccc}-3&\cdots&-3\\ \vdots&\ddots&\vdots\\ -3&\cdots&-3\end{array}&\begin{array}{ccc}-4&\cdots&-4\\ \vdots&\ddots&\vdots\\ -4&\cdots&-4\end{array}&\cdots\\
\hline\vdots&\vdots&\vdots\end{array}\right)$$
Definition 3: Let ${g}(M)$ be the generalized
Kac-Moody algebra associated to the matrix $M$ given above.
We have the standard decomposition
$${g}(M)={n}^{+}\oplus{h}\oplus{n}^{-}.$$
(14)
The generators of ${g}(M)$ will be written $e_{jk},f_{jk},h_{jk}$,
indexed by integers $j,k$
where $j\in\{-1\}\cup{Z}_{+}$ and $1\leq k\leq c(j)$. Since
there is only one $e,f$ or $h$ with $j=-1$, we will write these
elements as $e_{-1},f_{-1},h_{-1}$. From the
construction of ${g}(M)$ we see
that the simple roots in $({h}^{e})^{*}$ are $\alpha_{-1},\alpha_{11},\alpha_{12},\ldots,\alpha_{1c(1)},\alpha_{21},\ldots,\alpha_{2c(2)},$ etc.
Note that for fixed $i$ the functionals
$\alpha_{ij}$ and
$\alpha_{ik}$ agree on all of ${h}$ for $1\leq j,k\leq c(i)$.
The $\alpha_{ik}$ for
$i>-1$ are simple imaginary
roots, and the root $\alpha_{-1}$ is the one real simple root.
Remark: We explain the relationship between our
definition of simple root and that appearing in [4]. If
the restrictions of the simple roots to ${h}$ are denoted
$\alpha_{-1},\alpha_{1},\alpha_{2},\ldots$, these elements of
${h}^{*}$ correspond to the notion of “simple imaginary roots of
multiplicity greater than one” in [4]. The “simple root”
$\alpha_{i}$ has “multiplicity” $c(i)$ in Borcherds’ terminology.
Fortunately, in this
case, nonsimple roots $\alpha\in({h}^{e})^{*}$ do not restrict to
any $\alpha_{i}$, the “multiplicities” do not become infinite,
“roots” remain either positive or negative, etc. The
functionals $\alpha_{i}$ are linearly dependent, in fact they span a
two-dimensional space. The
root lattice is described in [4] as the lattice ${Z}\oplus{Z}$ with the inner product given by
$\left(\begin{array}[]{cc}0&-1\\
-1&0\end{array}\right)$.
The “simple roots” of this Lie algebra are denoted $(1,n)$ where
$n=-1$ or $n\in{Z}_{+}$. However, in most cases (including the case
of $\widehat{{s}{l}_{2}}$) serious problems arise when we do not
work in a sufficiently large Cartan subalgebra. (We do not wish to
write down a “denominator identity” where some of the terms are
$\infty$). Here, we always work
in $({h}^{e})^{*}$, taking specializations when they are illuminating,
as in the case of the denominator identity for ${g}(M)$ given
below.
Corollary 5.1 applied to the Lie algebra ${g}(M)$ gives the
following:
Theorem 5.2
The subalgebra ${n}^{+}\subset{g}(M)$ is the
semidirect product of a one-dimensional Lie algebra and a free Lie algebra,
so ${n}^{+}={R}e_{-1}\oplus L(S).$ Similarly, ${n}^{-}={R}f_{-1}\oplus L(S^{\prime})$. Hence
$${g}(M)=L(S)\oplus({s}{l}_{2}+{h})\oplus L(S^{\prime}).$$
Here $S=\cup_{j\in{N}}\{(\mbox{\em ad\thinspace}e_{-1})^{l}e_{jk}\ |\ 0\leq l<j,1\leq k\leq c(j)\}$, and $S^{\prime}=\eta(S)$.
$\Box$
Now that we have established that ${n}^{+}$ is the direct sum of a
one-dimensional space and an ideal isomorphic to a
free Lie algebra, we shall obtain the
denominator formula for the Lie algebra ${g}(M)$.
Corollary 5.4
The denominator formula for the Lie algebra ${g}(M)$ is
$$\prod_{\varphi\in\Delta_{+}}(1-T^{\varphi})^{\dim{g}^{\varphi}}=(1-T^{\alpha_{-1}})\prod_{\varphi\in\Delta_{+}\backslash\{\alpha_{-1}\}}(1-T^{\varphi})^{\dim L^{\varphi}(S)}$$
(15)
$$=(1-T^{\alpha_{-1}})\big{(}1-\sum_{{j\in{Z}_{+}\atop 1\leq k\leq c(j),\,0\leq l<j}}T^{l\alpha_{-1}+\alpha_{jk}}\big{)},$$
which has specialization
$$u(J(u)-J(v))=\prod_{i\in{Z}_{+}\atop j\in{{Z}_{+}\cup\{-1\}}}(1-u^{i}v^{j})^{c(ij)}$$
(16)
under the map $\phi:\Delta\rightarrow{Z}\times{Z}$
determined by $\alpha_{ik}\mapsto(1,i)$, where we write $T^{\alpha_{ik}}\mapsto uv^{i}$.
Proof of Corollary 5.4:
For the denominator identity simply apply Corollary 5.2
to the root grading. The ${Z}\times{Z}$-grading of $L(S)$
given above is such that a
generator $({\mathrm{ad}}\thinspace e_{-1})^{l}e_{jk}$ has degree $l(1,-1)+(1,j)=(l+1,j-l)$ with $l<j$. The number of generators of degree $(i,j)$ is
$c(i+j-1)$.
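This count can be verified mechanically: the map $(j,l)\mapsto(l+1,j-l)$, $0\leq l<j$, hits each bidegree $(i,j^{\prime})$ with $i,j^{\prime}\geq 1$ exactly once, namely from $j=i+j^{\prime}-1$, and the index $k$ then contributes the multiplicity $c(j)$. A small sketch (the multiplicity factor from $k$ is left out, since it only rescales each count by $c(i+j^{\prime}-1)$):

```python
from collections import Counter

B = 12  # check all bidegrees (i, jp) with i + jp <= B
deg = Counter()
for j in range(1, B):
    for l in range(j):               # generators (ad e_{-1})^l e_{jk}, 0 <= l < j
        deg[(l + 1, j - l)] += 1     # degree l(1,-1) + (1,j) = (l+1, j-l)

for i in range(1, B):
    for jp in range(1, B - i + 1):
        # exactly one source j, namely j = i + jp - 1
        assert deg[(i, jp)] == 1
```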
Applying equation (11) (which is the same as specializing
the second product of the denominator identity via $T^{\alpha_{i}}\mapsto uv^{i}$) gives the formula:
$$1-\sum_{(i,j)\in{N}^{2}\backslash\{0\}}c(i+j-1)u^{i}v^{j}=\prod_{(i,j)\in{N}^{2}\backslash\{0\}}(1-u^{i}v^{j})^{\dim L^{(i,j)}}.$$
To obtain the specialization of the denominator formula of ${g}$
we must include the
degree $(1,-1)$ subspace ${g}^{(1,-1)}={R}e_{-1}$, which is one-dimensional.
$$\prod_{(i,j)}(1-u^{i}v^{j})^{\dim{g}^{(i,j)}}=\prod_{(i,j)\in{N}^{2}-\{0\}}(1-u^{i}v^{j})^{\dim L^{(i,j)}}(1-u/v)$$
$$=(1-\sum_{(i,j)\in{N}^{2}-\{0\}}c(i+j-1)u^{i}v^{j})(1-u/v)$$
$$=1-\sum c(i+j-1)u^{i}v^{j}-u/v+\sum c(i+j-1)u^{i+1}v^{j-1}$$
$$=u(J(u)-J(v)).$$
There is a product formula for the modular function $j$ (see [4])
which can be written:
$$p(j(p)-j(q))=\prod_{i=1,2,\ldots\atop j=-1,1,\ldots}(1-p^{i}q^{j})^{c(ij)},$$
(17)
which converges on an open set in ${C}$, and so
implies the corresponding identity for formal power series.
Now we conclude that $\dim{g}^{(i,j)}=c(ij)$. $\Box$
Note that here we must know the number-theoretic identity of [4] in order to determine the dimensions of the
root spaces of ${g}$.
The identity (16) is the specialization $e^{\alpha_{i}}\mapsto uv^{i}$ of the denominator identity as it appears when we apply
equation (7) to ${g}(M)$.
Remark: The matrix $M$ can be replaced by any symmetric
matrix with the same first row (and column) as $M$ with all remaining
blocks having entries strictly less than zero, as long as the minor
obtained by removing the first row and column has the same rank as the
corresponding minor of $M$.
Remark: Now we apply equation (13) to the
${N}\times{N}$-grading, and the Lie algebra $L(S)$, where $S=\{({\mathrm{ad}}\thinspace e_{-1})^{l}e_{jk}\ |\ 0\leq l<j,1\leq k\leq c(j)\}$ and
there are $c(i+j-1)$ generators of degree $(i,j)$. Since we already
know the dimension of $L^{(i,j)}$ is $c(ij)$,
we recover (see [19]) the following relations between the
coefficients of $j$:
$$c(ij)=\sum_{k\in{{Z}}_{+}\atop k(m,n)=(i,j)}{1\over k}\mu(k)\sum_{a\in P(m,n)}{(\sum a_{rs}-1)!\over\prod a_{rs}!}\prod c(r+s-1)^{a_{rs}}.$$
(18)
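As a spot check of these multiplicity relations, the graded dimensions of the free Lie algebra can be counted directly in low degree: degree $(2,2)$ has $c(3)$ generators plus brackets of pairs of distinct degree-$(1,1)$ generators, and degree $(2,3)$ has $c(4)$ generators plus brackets of a degree-$(1,1)$ generator with a degree-$(1,2)$ generator. Using the first coefficients of $j$ (standard values, taken here as input data):

```python
from math import comb

# First coefficients c(n) of j(q) - 744, taken as known input data.
c = {1: 196884, 2: 21493760, 3: 864299970,
     4: 20245856256, 5: 333202640600, 6: 4252023300096}

# dim L^(2,2): c(3) generators of degree (2,2), plus brackets [x,y]
# of distinct degree-(1,1) generators (there are c(1) of them)
dim_22 = c[3] + comb(c[1], 2)
assert dim_22 == c[4]          # = c(2*2)

# dim L^(2,3): c(4) generators of degree (2,3), plus brackets of a
# degree-(1,1) generator with a degree-(1,2) generator
dim_23 = c[4] + c[1] * c[2]
assert dim_23 == c[6]          # = c(2*3)
```

Both counts agree with $\dim L^{(i,j)}=c(ij)$, as the identity predicts.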
6 The Monster Lie algebra
6.1 Vertex operator algebras and vertex algebras
For a detailed discussion of vertex operator algebras and vertex
algebras the reader should consult [8], [9], [11]
and the announcement [1]. Results stated here
without proof can either be found in [8], [9] and
[11] or follow without too much difficulty from the results
appearing there.
Definition 4: A vertex operator algebra, $(V,Y,{\bf 1},\omega)$, consists of a vector space $V$, distinguished vectors called
the vacuum vector $\bf 1$ and the conformal vector
$\omega$, and a linear map $Y(\cdot,z):V\rightarrow(\mbox{End }V)[[z,z^{-1}]]$ which is a generating function
for operators $v_{n}$, i.e., for $v\in V,\ Y(v,z)=\sum_{n\in{Z}}v_{n}z^{-n-1}$, satisfying the following conditions:
(V1)
$V=\coprod_{n\in{Z}}V_{(n)}$; for $v\in V_{(n)}$, $n=\mbox{wt\thinspace}(v)$
(V2)
$\dim V_{(n)}<\infty$ for $n\in{Z}$
(V3)
$V_{(n)}=0$ for $n$ sufficiently small
(V4)
If $u,v\in V$ then $u_{n}v=0$ for $n$ sufficiently large
(V5)
$Y({\bf 1},z)=1$
(V6)
$Y(v,z){\bf 1}\in V[[z]]$ and $\lim_{z\rightarrow 0}Y(v,z){\bf 1}=v$, i.e., the creation property holds
(V7)
The following Jacobi identity holds:
$$z_{0}^{-1}\delta\left({z_{1}-z_{2}\over z_{0}}\right)Y(u,z_{1})Y(v,z_{2})-z_{0}^{-1}\delta\left({z_{2}-z_{1}\over-z_{0}}\right)Y(v,z_{2})Y(u,z_{1})=z_{2}^{-1}\delta\left({z_{1}-z_{0}\over z_{2}}\right)Y(Y(u,z_{0})v,z_{2})$$
(19)
The following conditions relating to the vector $\omega$ also hold:
(V8)
The operators $\omega_{n}$ generate a Virasoro algebra, i.e., if
we let $L(n)=\omega_{n+1}$ for $n\in{Z}$ then
$$[L(m),L(n)]=(m-n)L(m+n)+(1/12)(m^{3}-m)\delta_{m+n,0}(\mbox{rank}V)$$
(20)
(V9)
If $v\in V_{(n)}$ then $L(0)v=(\mbox{wt\thinspace}v)v=nv$
(V10)
${d\over dz}Y(v,z)=Y(L(-1)v,z)$.
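As a consistency check, the bracket (20) satisfies the Jacobi identity for any value of $\mbox{rank}\thinspace V$. The following sketch (our own helper names; the central term is kept as a formal symbol) verifies this on a range of modes:

```python
from fractions import Fraction
from itertools import product

def vir_bracket(m, n):
    """[L(m), L(n)] = (m-n) L(m+n) + (1/12)(m^3 - m) delta_{m+n,0} c,
    with the central element recorded under the formal key 'c'."""
    out = {('L', m + n): Fraction(m - n)}
    if m + n == 0:
        out['c'] = Fraction(m**3 - m, 12)
    return {b: x for b, x in out.items() if x}

def bracket(x, y):
    """Bracket of linear combinations; the central element brackets to zero."""
    out = {}
    for (bx, cx), (by, cy) in product(x.items(), y.items()):
        if bx == 'c' or by == 'c':
            continue
        for b, z in vir_bracket(bx[1], by[1]).items():
            out[b] = out.get(b, Fraction(0)) + cx * cy * z
    return {b: z for b, z in out.items() if z}

L = lambda m: {('L', m): Fraction(1)}

# Jacobi identity: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0, on sample modes
for m, n, p in product(range(-3, 4), repeat=3):
    total = {}
    for x, y, z in ((L(m), L(n), L(p)), (L(n), L(p), L(m)), (L(p), L(m), L(n))):
        for b, co in bracket(bracket(x, y), z).items():
            total[b] = total.get(b, Fraction(0)) + co
    assert all(v == 0 for v in total.values())
```

The cubic central term $(m^{3}-m)/12$ is exactly what makes the cyclic sum of central contributions cancel; a linear term alone would not.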
Definition 5: A vertex algebra $(V,Y,{\bf 1},\omega)$
is a vector space $V$ with all of the above properties except for
$\bf V2$ and $\bf V3$.
Remark: This definition is a variant, with $\omega$, of
Borcherds’ original definition of vertex algebra in [1].
An important class of examples of vertex algebras (and vertex operator
algebras) are those associated with lattices. For the sake of the
reader who may be unfamiliar with the notation we will briefly review
this construction in the case of an even lattice. For complete details
(and more generality) the reader may consult [11] or [8].
Given an even lattice $L$ one can construct a vertex algebra $V_{L}$ with
underlying vector space:
$$V_{L}=S(\hat{h}^{-}_{Z})\otimes{R}\{L\}.$$
Here we take ${h}=L\otimes_{{Z}}{R}$, and
$\hat{h}^{-}_{{Z}}$ is the
negative part of the Heisenberg algebra (with $c$ central) defined by:
$$\hat{{h}}_{{Z}}=\coprod_{n\in{Z}}{h}\otimes t^{n}\oplus{{R}}c\subset{h}\otimes{{R}}[t,t^{-1}]\oplus{{R}}c.$$
Therefore,
$$\hat{h}^{-}_{Z}=\coprod_{n<0}{h}\otimes t^{n}.$$
The symmetric algebra on $\hat{h}_{{Z}}^{-}$ is denoted $S(\hat{h}^{-}_{Z})$. Given a central extension of $L$ by a group of order $2$
i.e.,
$$1\rightarrow\langle\kappa|\kappa^{2}=1\rangle\rightarrow\hat{L}{\buildrel-\over{\rightarrow}}L\rightarrow 1,$$
with commutator map given by $\kappa^{\langle\alpha,\beta\rangle}$, $\alpha,\beta\in L$, ${R}$ is given the structure of a nontrivial
$\langle\kappa\rangle$-module.
Define ${R}\{L\}$ to be the induced
representation
$\mbox{Ind}_{\langle\kappa\rangle}^{\hat{L}}{{R}}$.
If $a\in{\hat{L}}$
denote by $\iota(a)$ the element $a\otimes 1\in{R}\{L\}$. We
will use the
notation $\alpha(n)=\alpha\otimes t^{n}\in S(\hat{h}^{-}_{Z})$.
The vector space $V_{L}$ is spanned by elements of the form:
$$\alpha_{1}(-n_{1})\alpha_{2}(-n_{2})\ldots\alpha_{k}(-n_{k})\iota(a)$$
where $n_{i}\in{N}$.
The space $V_{L}$, equipped with $Y(v,z)$ as defined in [11]
satisfies properties $\bf V1$ and $\bf V4-V10$, so
is a vertex algebra with conformal vector $\omega$. Features of
$Y(v,z)$ to keep in mind from [11] are: $\alpha(-1)_{n}=\alpha(n)\mbox{ for all }n\in{Z}$; the $\alpha(n)$ for $n<0$ act by left multiplication on
$u\in V_{L}$; and
$$\alpha(n)\iota(a)=\left\{\begin{array}[]{cc}0&\mbox{ if $n>0$ }\\
\langle\alpha,\bar{a}\rangle\iota(a)&\mbox{ if $n=0$}.\end{array}\right.$$
Definition 6: A bilinear form $(\cdot,\cdot)$ on a
vertex algebra $V$ is invariant (in the sense of
[9]) if it satisfies
$$(Y(v,z)w_{1},w_{2})=(w_{1},Y(e^{zL(1)}(-z^{-2})^{L(0)}v,z^{-1})w_{2}).$$
(21)
(By definition $x^{L(0)}$ acts
on a homogeneous element $v\in V$ as multiplication by $x^{\mbox{wt}(v)}$.)
Such a form satisfies $(u,v)=0$ unless $\mbox{wt\thinspace}(u)=\mbox{wt\thinspace}(v)$.
Lemma 6.1
Let $L$ be an even unimodular lattice. There is a
nondegenerate symmetric invariant bilinear form $(\cdot,\cdot)$ on
$V_{L}$.
Proof:
The vertex algebra $V_{L}$ is a module for itself under the adjoint
action. In fact, $V_{L}$ is an irreducible module and any irreducible
module of $V_{L}$ is isomorphic to $V_{L}$ [7].
In order to define the contragredient module note that $V_{L}$
is graded by the lattice $L$ as well as by weights, and that under
this double grading $\dim{V_{L}}^{r}_{(n)}<\infty$ for $r\in L,n\in{Z}$. Let ${V_{L}}^{\prime}=\coprod_{n\in{Z}\atop r\in L}({V_{L}}^{r}_{(n)})^{*}$, the restricted dual of $V_{L}$. Denote by
$\langle\cdot,\cdot\rangle$ the natural pairing between
$V_{L}$ and ${V_{L}}^{\prime}$. Results of [9] pertaining to adjoint vertex
operators and the contragredient module now apply to
${V_{L}}^{\prime}$. In particular, the space ${V_{L}}^{\prime}$ can be given the structure of a
$V_{L}$-module $({V_{L}}^{\prime},Y^{\prime})$ via
$$\langle Y^{\prime}(v,z)w^{\prime},w\rangle=\langle w^{\prime},Y(e^{zL(1)}(-z^{-2})^{L(0)}v,z^{-1})w\rangle$$
for $v,w\in V_{L}$, $w^{\prime}\in{V_{L}}^{\prime}$.
Since the adjoint module $V_{L}$ is irreducible, the contragredient
module ${V_{L}}^{\prime}$ is also irreducible. By the result of [7] quoted
above, $V_{L}$ is isomorphic to ${V_{L}}^{\prime}$ as a $V_{L}$-module, which is
equivalent to $V_{L}$ having a nondegenerate invariant bilinear form.
(See remark 5.3.3 of [9].)
$\Box$
The “moonshine module”
$V^{\natural}$ is an infinite-dimensional representation of the Monster
simple group constructed and shown to be a vertex
operator algebra in [11]. The graded dimension of $V^{\natural}$
is $J(q)$. There is a positive definite bilinear form $(\cdot,\cdot)$
on $V^{\natural}$.
The vertex operator algebra $V^{\natural}$ satisfies
all of the conditions of the no-ghost theorem (Theorem 6.2),
with $G$ taken to be the Monster simple group. This
vertex operator algebra will be essential to the construction of the
Monster Lie algebra.
Lemma 6.2
The positive definite form $(\cdot,\cdot)$ on
$V^{\natural}$ defined in [11] is invariant.
Proof: There is a nondegenerate symmetric invariant bilinear
form on $V^{\natural}$, unique up to a constant multiple
[22]. Fix such a form $(\cdot,\cdot)_{1}$ by taking
$(1,1)_{1}=1$.
Let $u\in V^{\natural}_{(2)}$, a homogeneous element of
weight $2$. By invariance
$$(u_{n}w_{1},w_{2})_{1}=(w_{1},u_{-n+2}w_{2})_{1}$$
for $w_{1},w_{2}\in V^{\natural}$.
We claim
$$(u_{n}w_{1},w_{2})=(w_{1},u_{-n+2}w_{2})$$
(22)
for $w_{1},w_{2}\in V^{\natural}$.
In order to prove equation (22) we recall the construction and
properties of $V^{\natural}$ of [11].
Let $x_{a}^{+}=\iota(a)+\iota(a^{-1})$ for $a\in\Lambda$ (the Leech
lattice),
$${k}=S^{2}({h}\otimes t^{-1})\oplus\sum_{a\in\hat{\Lambda}_{4}}{R}x_{a}^{+},$$
and let ${p}$ be the space of elements of $V^{T}_{\lambda}$ (the
“twisted space”) of weight 2.
Then $V^{\natural}_{(2)}={k}\oplus{p}$. The action $Y(v,z)$
of elements $v\in{p}$ is determined by conjugating by certain
elements of ${M}$, the Monster simple group (see 12.3.8 and 12.3.9 of
[11]). Conjugation by these elements maps $v\in{p}$ to
${k}$. Since the form $(\cdot,\cdot)$ is invariant under ${M}$, it is sufficient to check that equation (22) holds for
elements of ${k}$. Therefore,
it suffices to check
equation (22) for two types of elements
$x_{a}^{+},a\in\hat{\Lambda}_{4}$
and $g(-1)^{2},g(-1)\in{h}\otimes t^{-1}$.
For $u=x_{a}^{+}$
equation (22) follows immediately from [11]. For $u=g(-1)^{2}$
$$Y(u,z)=\mbox{$\circ\atop\circ$}g(z)^{2}\mbox{$\circ\atop\circ$}=g(z)^{-}g(z)+g(z)g(z)^{+}.$$
Using $(g(i)w_{1},w_{2})=(w_{1},g(-i)w_{2})$
one computes the adjoint of $g(z)^{-}g(z)+g(z)g(z)^{+}$ which is
$$g(z^{-1})g(z^{-1})^{+}+g(z^{-1})^{-}g(z^{-1})=Y(u,z^{-1})z^{-4}.$$
Thus equation (22) holds for all $u\in{k}$, and so
for all $u\in V^{\natural}_{(2)}$.
Now recall that $V^{\natural}_{(2)}=\cal{B}$, the Griess algebra, and
the notation $\hat{\cal{B}}$ for the commutative affinization of the
algebra $\cal{B}$,
$$\hat{\cal B}={\cal B}\otimes{{R}}[t,t^{-1}]\oplus{{R}}e$$
where $t$ is an indeterminate and $e\neq 0$ (with nonassociative
product given in [11]). By Theorem 12.3.1
[11] $V^{\natural}$ is an irreducible graded
$\hat{\cal{B}}$-module, under
$$\pi:\hat{\cal{B}}\rightarrow\mbox{End}V^{\natural},\qquad v\otimes t^{n}\mapsto x_{v}(n)\quad(v\in{\cal B}),\qquad e\mapsto 1.$$
Schur’s lemma then implies that any nondegenerate symmetric bilinear
form satisfying
equation (22) is unique up to multiplication by a constant.
Thus we can conclude that $(\cdot,\cdot)_{1}=(\cdot,\cdot)$, since
the length of the vacuum is one with respect to each form.
$\Box$
Given $V$ a vertex
operator algebra, or a vertex algebra with $\omega$
and therefore an action of the Virasoro algebra, let
$$P_{(i)}=\{v\in V|L(0)v=iv,L(n)v=0\mbox{ if }n>0\}.$$
Thus $P_{(i)}$ consists of
the lowest weight vectors for the Virasoro algebra of
weight $i$. Then $P_{(1)}/L(-1)P_{(0)}$ is a Lie algebra with bracket
given by $[u+L(-1)P_{(0)},v+L(-1)P_{(0)}]=u_{0}v+L(-1)P_{(0)}$.
If the vertex algebra $V$ has an invariant
bilinear
form $(\cdot,\cdot)$ this induces a form
$(\cdot,\cdot)_{Lie}$ on the Lie algebra
$P_{(1)}/L(-1)P_{(0)}$, because $L(-1)P_{(0)}\subset\mbox{rad}(\cdot,\cdot)$.
Invariance of the form on the vertex algebra implies for
$u,v\in P_{(1)}$:
$$(u_{0}v,w)=-(v,u_{0}w).$$
(23)
Thus the induced form is invariant on the Lie algebra
$P_{(1)}/L(-1)P_{(0)}$.
Tensor products of vertex operator algebras are again vertex operator
algebras (see [9]), and more generally, by [8], the tensor
product of vertex algebras is also a vertex
algebra. Given two vertex algebras $(V,Y,{\bf 1}_{V},\omega_{V})$ and
$(W,Y,{\bf 1}_{W},\omega_{W})$ the vacuum of
$V\otimes W$ is ${\bf 1}_{V}\otimes{\bf 1}_{W}$
and the conformal vector $\omega$ is given by $\omega_{V}\otimes{\bf 1}_{W}+{\bf 1}_{V}\otimes\omega_{W}$.
If the vertex algebras $V$
and $W$ both have invariant forms then it is not difficult to show
that the form on $V\otimes W$ given by the product of the forms on
$V$ and $W$ is also invariant in the sense of equation (21).
6.2 The Monster Lie algebra
We will review the construction of the Monster Lie algebra given in
[4]. Then we give a theorem regarding its structure as a quotient
of ${g}(M)$. Let ${\cal L}={Z}\oplus{Z}$ with bilinear form
$\langle\cdot,\cdot\rangle$ given by the matrix
$\left(\begin{array}[]{cc}0&-1\\
-1&0\end{array}\right)$.
Remark: $\cal L$ is the rank two Lorentzian lattice,
denoted in [4] as $II_{1,1}$.
Fix a symmetric invariant bilinear form $(\cdot,\cdot)$
on $V_{\cal L}$, normalized by taking $({\bf 1},{\bf 1})=-1$. The
reason that we choose this normalization is so that the resulting
invariant bilinear form on the Monster Lie algebra will have the usual
values with respect to the Chevalley generators, and so that the
contravariant bilinear form defined below will be positive definite
and not negative definite on nonzero weight spaces.
Denote by $(\cdot,\cdot)$ the symmetric invariant
bilinear form on $V^{\natural}\otimes V_{\cal L}$ given by the product of
the invariant bilinear forms on $V^{\natural}$ and $V_{\cal L}$.
Definition 7: The Monster Lie algebra ${m}$
is defined by
$${m}=P_{(1)}/\mbox{rad}(\cdot,\cdot)_{Lie}=(P_{(1)}/L(-1)P_{(0)})/\mbox{rad}(\cdot,\cdot)_{Lie}.$$
When no confusion will arise, we will use the same notation
for the invariant form on the vertex algebra, and for the induced form
on the Lie algebra.
Note that, by invariance,
$(e^{r},e^{s})=(-1)^{\langle r,r\rangle/2}\langle 1,(e^{r})_{(-1+\langle r,r\rangle)}e^{s}\rangle=0$ unless $r=-s\in\cal L$. Therefore, the induced
form on ${m}$ satisfies the condition that ${m}_{r}$ be
orthogonal to ${m}_{s}$ if $r\neq-s\in\cal L$.
Definition 8: Let $\theta$ be
the involution of $V_{\cal L}$ given by $\theta\iota(a)=(-1)^{{wt}(a)}\iota(a^{-1})$ and $\theta(\alpha(n))=-\alpha(n)$.
This
induces an involution $\theta$ on all of $V^{\natural}\otimes V_{\cal L}$ by letting $\theta(u\otimes ve^{r})=u\otimes\theta(ve^{r})$. Use the same notation for the
involution induced by $\theta$ on ${m}$.
Note that $\theta:{m}_{r}\rightarrow{m}_{-r}$ if $r\neq 0$.
Let $(\cdot,\cdot)_{0}$ be the contravariant bilinear form
on $V^{\natural}\otimes V_{\cal L}$ given by
$(u,v)_{0}=-(u,\theta(v))$,
$u,v\in V^{\natural}\otimes V_{\cal L}$.
We also denote by $(\cdot,\cdot)_{0}$
the contravariant bilinear form
on ${m}$ given by $(u,v)_{0}=-(u,\theta(v))_{Lie}$,
$u,v\in{m}$.
Elements of ${m}$ can be written as $\sum u\otimes ve^{r}$, where $u\in V^{\natural}$ and $ve^{r}=v\iota(e^{r})\in V_{\cal L}$. Here, a section of the map $\hat{\cal L}{\buildrel-\over{\rightarrow}}{\cal L}$ has been chosen so that $e^{r}\in{\hat{\cal L}}$ satisfies $\overline{e^{r}}=r\in{\cal L}$. There is a grading of ${m}$ by the lattice defined by
$\deg(u\otimes ve^{r})=r$.
Recall the definition of the Lie algebra ${g}(M)$ and the standard
decomposition.
Theorem 6.1
Let ${c}$ denote the center of the Lie algebra
${g}(M)$. Then
$${g}(M)/{c}={m}.$$
There is a triangular decomposition
${m}={m}^{+}\oplus{h}\oplus{m}^{-}$, where ${h}\cong{R}\oplus{R}$. The subalgebras ${m}^{\pm}$ are
isomorphic to ${n}^{\pm}\subset{g}(M)$.
Theorem 6.1 is proven after the statement of the no-ghost
theorem. In [4] the no-ghost theorem from string
theory is used to see that the Monster Lie algebra has homogeneous
subspaces isomorphic to $V^{\natural}_{(1+mn)}$. A precise statement
of the no-ghost theorem as it is used here is provided for the reader,
and a proof is given in the appendix.
Theorem 6.2 (no-ghost theorem)
Let $V$ be a vertex operator algebra
with the following properties:
i.
$V$ has a symmetric invariant nondegenerate bilinear form.
ii.
The central element of the Virasoro algebra acts as
multiplication by 24.
iii.
The weight grading of $V$ is an ${N}$-grading of $V$,
i.e., $V=\coprod_{n=0}^{\infty}V_{(n)}$, and $\dim V_{(0)}=1$.
iv.
$V$ is acted on by a group $G$ preserving the above
structure; in particular the form on $V$ is $G$-invariant.
Let ${\cal P}_{(1)}=\{u\in V\otimes V_{\cal L}|L_{0}u=u,L_{i}u=0,i>0\}$. The group $G$ acts on $V\otimes V_{\cal L}$ via the
trivial action on $V_{\cal L}$. Let ${\cal P}_{(1)}^{r}$ denote the
subspace of ${\cal P}_{(1)}$ of degree $r\in\cal L$. Then the quotient of ${\cal P}_{(1)}^{r}$ by the nullspace of its
bilinear form is isomorphic as a $G$-module with $G$-invariant
bilinear form to $V_{(1-\langle r,r\rangle/2)}$ if $r\neq 0$ and to $V_{(1)}\oplus{{R}^{2}}$ if $r=0$.
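In the application below, ${\cal L}=II_{1,1}$ with $\langle(m,n),(m^{\prime},n^{\prime})\rangle=-mn^{\prime}-m^{\prime}n$, so $\langle r,r\rangle=-2mn$ for $r=(m,n)$ and the weight $1-\langle r,r\rangle/2$ equals $1+mn$; this is the source of the spaces $V^{\natural}_{(1+mn)}$ in the proof of Theorem 6.1. A two-line sanity check (helper name `ip` is ours):

```python
def ip(r, s):
    """Inner product on L = Z + Z with Gram matrix ((0,-1),(-1,0))."""
    return -r[0] * s[1] - r[1] * s[0]

for m in range(-4, 5):
    for n in range(-4, 5):
        r = (m, n)
        assert ip(r, r) == -2 * m * n            # in particular, L is even
        assert 1 - ip(r, r) // 2 == 1 + m * n    # weight 1 - <r,r>/2 = 1 + mn
```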
Proof of Theorem 6.1:
The no-ghost theorem applied to $V^{\natural}$ immediately gives
${m}_{(m,n)}\cong V^{\natural}_{(mn+1)}$ if $(m,n)\neq(0,0)$. Thus
${m}_{r}$, for $r\neq 0$, is spanned by elements of the form $u\otimes e^{r}$, where $r\in\cal L$ and $u\in V^{\natural}$ is
an element of the appropriate weight.
(We will use elements of
$V^{\natural}\otimes V_{L}$ to denote their equivalence classes in
${m}$.)
We will show that all of the conditions of
Theorem 4.1
are satisfied.
By considering the weights (with respect to $L_{0}$), we see that
the abelian subalgebra ${m}_{(0,0)}$ is spanned by elements of the
form $1\otimes\alpha(-1)\iota(1)$ where $\alpha\in{\cal L}\otimes_{{Z}}{R}$. Note that ${m}_{(0,0)}$ is
two-dimensional.
By definition $\theta=-1$ on ${m}_{(0,0)}$.
Thus $(\theta(x),\theta(y))=(x,y)$ for
$x,y\in{m}_{(0,0)}$. It
also follows from the definition, and symmetry of the form on $V_{L}$
that
$$(\theta(u\otimes e^{r}),\theta(v\otimes e^{-r}))=(u\otimes e^{-r},v\otimes e^{r})=(u\otimes e^{r},v\otimes e^{-r})$$
for $u,v\in V^{\natural},r\in{\cal L}$ .
Consider
$-(x,\theta(x))$ for $x\in{m}_{r},r\neq 0$. To
see that this is strictly positive it is enough to consider
elements of the form
$x=u\otimes e^{r}$ where $u\in V^{\natural}$. Recalling the
normalization of the form on $V_{L}$,
$$(x,\theta(x))=(u,u)(-1)^{\langle r,r\rangle/2}(e^{r},e^{-r})=(-1)^{\langle r,r\rangle}(u,u)({\bf 1},(e^{r})_{(-1+\langle r,r\rangle)}e^{-r})=(u,u)({\bf 1},{\bf 1})<0.$$
Therefore, we have the desired
properties on the form $(\cdot,\cdot)=(\cdot,\cdot)_{Lie}$, and the
contravariant form $(\cdot,\cdot)_{0}$.
Now if we grade
${m}$, as in [4], by $i=2m+n\in{Z}$ then we see that ${m}$ satisfies the grading condition. Furthermore, condition 3
is satisfied if we take $\theta$ to be the involution.
Let $v\in V^{\natural}$, so
that $v\otimes e^{r}$ is of degree $r=(m,n)$. Then
$$(1\otimes\alpha(-1)\iota(1))_{0}v\otimes e^{r}=v\otimes\alpha(0)e^{r}=\langle\alpha,r\rangle v\otimes e^{r}.$$
(24)
Thus $1\otimes\alpha(-1)\iota(1)$ acts as the scalar
$\langle\alpha,r\rangle$ on ${m}_{(m,n)}$.
Hence all elements of ${m}_{(0,0)}$
act as scalars
on the ${m}_{(m,n)}$. As $\alpha$ ranges over
${R}\oplus{R}$ the action distinguishes between spaces of
different degree.
This establishes conditions 1, 2 and 3 of
Theorem 4.1.
To see that
${m}_{(0,0)}\subset[{m},{m}]$, let $u,v\in V_{(2)}^{\natural}$ and $a=e^{(1,1)}$,
$b=e^{(1,-1)}$. We will show
$$[u\otimes\iota(a),v\otimes\iota(a^{-1})]$$
(25)
and
$$[\iota(b),\iota(b^{-1})]$$
(26)
are two linearly independent vectors in
${m}_{(0,0)}$. Since we know that ${m}_{(0,0)}$ is two-dimensional, this will give condition 4 of
Theorem 4.1.
By [11, 8.5.44] we have that formula (26) is $\iota(b)_{0}\iota(b^{-1})=\bar{b}(-1)\iota(1)$.
The formula [11, 8.5.44] also shows that $\iota(a)_{i}\iota(a^{-1})=0$ unless $i\leq-3$, and
$$\iota(a)_{i}\iota(a^{-1})=\left\{\begin{array}{cc}\iota(1)&\mbox{ if $i=-3$}\\ {\bar{a}}(-1)\iota(1)&\mbox{ if $i=-4$}\end{array}\right.$$
Using the Jacobi identity (19) or its component form
[11, 8.8.41] and the above, we obtain for formula (25):
$$(u\otimes\iota(a))_{0}(v\otimes\iota(a^{-1}))=u_{3}v\otimes{\bar{a}}(-1)\iota(1)=c{\bf 1}\otimes{\bar{a}}(-1)\iota(1).$$
Since we can pick $u$ and $v$ such that $c\neq 0$ these vectors are
linearly independent and we are done.
By definition the radical of the bilinear
form on ${m}$ is zero, so by Corollary 4.1, ${m}$ is
${l}/{c}$ for some generalized
Kac-Moody algebra ${l}$. In fact
${m}={g}(M)/{c}$:
Because ${m}_{(0,0)}$ is the image
of a maximal toral subalgebra, it must also be a maximal toral
subalgebra. Define roots of ${m}$ as elements
$\alpha\in({m}_{(0,0)})^{*}$ such that $[h,x]=\alpha(h)x$ for
all $x\in{m}$.
The grading given by the lattice $\cal L$
corresponds to the
root grading of ${m}$ because we have shown that the elements of
${m}_{(0,0)}$ act as scalars
on the ${m}_{(m,n)}$, and that the spaces of different degree are
distinguished. It follows from the no-ghost
theorem applied to $V^{\natural}$ that
${m}_{(m,n)}\cong V^{\natural}_{mn+1}$. We know from [11] that
$\dim V^{\natural}_{mn+1}=c(mn)$.
Consider the roots of ${g}(M)$ restricted to ${h}$. Then the
dimensions of these restricted root spaces of ${g}(M)$ are given by
$c(mn)$. By the specialization (12) of the denominator formula for
${g}(M)$ the generalized Kac-Moody algebra ${g}(M)/{c}$
is isomorphic to ${m}$. $\Box$
Since the map given by Corollary 4.1 is an isomorphism on
${n}^{\pm}$ there are immediate corollaries to
Theorem 5.2 and the denominator identity for ${g}(M)$.
Corollary 6.1
The Monster Lie algebra ${m}$ can be written as ${m}={u}^{+}\oplus{g}{l}_{2}\oplus{u}^{-}$, where ${u}^{\pm}$
are free Lie algebras with countably many generators given by
Corollary 5.1.
Corollary 6.2
The Monster Lie algebra has the denominator formula:
$$u(J(u)-J(v))=\prod_{i\in{Z}_{+}\atop j\in{{Z}_{+}\cup\{-1\}}}(1-u^{i}v^{j})^{c(ij)}.$$
(27)
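In low degree this identity can be checked directly from the first coefficients of $j$ (standard values, taken here as input data). Multiplying both sides by $v$ clears the $v^{-1}$ arising from the $j=-1$ factor $(1-u/v)$; the sketch below compares all terms of total degree at most $5$ as truncated bivariate polynomials:

```python
from math import comb

# First coefficients c(n) of j(q) - 744, taken as known input data.
c = {1: 196884, 2: 21493760, 3: 864299970, 4: 20245856256}

D = 5  # compare all terms u^a v^b with a + b <= D

def mul(p, q):
    out = {}
    for (a, b), x in p.items():
        for (e, f), y in q.items():
            if a + e + b + f <= D:
                out[(a + e, b + f)] = out.get((a + e, b + f), 0) + x * y
    return {k: v for k, v in out.items() if v}

# v * RHS = (v - u) * prod_{i,j>=1, i+j<=D-1} (1 - u^i v^j)^{c(ij)}
rhs = {(0, 1): 1, (1, 0): -1}
for i in range(1, D):
    for j in range(1, D - i):
        n = c[i * j]
        factor = {(r * i, r * j): (-1) ** r * comb(n, r)
                  for r in range(D // (i + j) + 1)}
        rhs = mul(rhs, factor)

# v * LHS = v*u*(J(u) - J(v)) = v - u + sum_{n>=1} c(n)(u^{n+1} v - u v^{n+1})
lhs = {(0, 1): 1, (1, 0): -1}
for n in range(1, D - 1):
    lhs[(n + 1, 1)] = c[n]
    lhs[(1, n + 1)] = -c[n]

assert lhs == rhs
```

Every coefficient of total degree at most $5$ agrees; for instance, the vanishing of the $u^{2}v^{3}$ coefficient on the right-hand side encodes $c(4)=c(3)+\binom{c(1)}{2}$.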
Appendix A The Proof of the no-ghost theorem
In [4] it is shown how to use the no-ghost theorem from string
theory to understand some of the structure of the Monster Lie algebra.
The proof of that theorem, Theorem 6.2, is reproduced here
with the necessary
rigor and in a more algebraic context.
The space $V\otimes V_{\cal L}$ is a vertex
algebra with conformal vector.
Recall from Section 6.1 that elements of the Virasoro algebra
acting on $V\otimes V_{\cal L}$
satisfy the relations:
$$[L_{i},L_{j}]=(i-j)L_{i+j}+{26\over 12}(i^{3}-i)\delta_{i+j,0}.$$
Given a nonzero $r\in{\cal L}$, fix nonzero $w\in\cal L$
such that $\langle w,w\rangle=0$ and $\langle r,w\rangle\neq 0$. Define operators $K_{i}$, $i\in{{Z}}$, on
$V\otimes V_{\cal L}$ by $K_{i}=({1}\otimes w(-1))_{i}={1}\otimes w(i)$.
Let $\cal A$ be the Lie algebra generated by the operators $L_{i},K_{i}$ with
$i\in{Z}$. These operators satisfy the relations:
$$[L_{i},K_{j}]=-jK_{i+j},\qquad[K_{i},K_{j}]=0.$$
The first relation follows from the formula (in $V_{\cal L}$) $[L_{m},w(n)]=-n(w(n+m))$ of [11, 8.7.13] and the fact that $[L_{m}\otimes{1},{1}\otimes w(n)]=0$. The second relation holds
because $[w(i),w(j)]=\langle w,w\rangle i\delta_{i+j,0}$ [11, 8.6.42]
and $\langle w,w\rangle=0$.
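The relations above define a Lie algebra structure on the span of the $L_{i}$, the $K_{i}$ and a central element. As a quick consistency check, one can verify the Jacobi identity numerically; the sketch below (Python, illustrative only) represents elements as dictionaries of basis coefficients and tests all triples of generators in a small index range.

```python
from fractions import Fraction
from itertools import product

# Basis labels: ('L', n), ('K', n), and the central element ('C',).
# An algebra element is a dict {label: coefficient}.

def bracket(a, b):
    out = {}
    def add(key, coef):
        if coef:
            out[key] = out.get(key, 0) + coef
    for (x, cx), (y, cy) in product(a.items(), b.items()):
        c = cx * cy
        if x[0] == 'L' and y[0] == 'L':
            i, j = x[1], y[1]
            add(('L', i + j), (i - j) * c)
            if i + j == 0:                       # central term, charge 26
                add(('C',), Fraction(26, 12) * (i**3 - i) * c)
        elif x[0] == 'L' and y[0] == 'K':        # [L_i, K_j] = -j K_{i+j}
            add(('K', x[1] + y[1]), -y[1] * c)
        elif x[0] == 'K' and y[0] == 'L':        # [K_i, L_j] = i K_{i+j}
            add(('K', x[1] + y[1]), x[1] * c)
        # [K, K] = 0 and ('C',) is central: no further contributions
    return {k: v for k, v in out.items() if v != 0}

def add_lin(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def jacobi(a, b, c):
    return add_lin(add_lin(bracket(bracket(a, b), c),
                           bracket(bracket(b, c), a)),
                   bracket(bracket(c, a), b))

gens = [('L', n) for n in range(-3, 4)] + [('K', n) for n in range(-3, 4)]
for x, y, z in product(gens, repeat=3):
    assert jacobi({x: 1}, {y: 1}, {z: 1}) == {}
print("Jacobi identity holds on all", len(gens)**3, "generator triples")
```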
Let ${\cal W}$ be the Virasoro subalgebra of $\cal A$ generated by the
$L_{i},i\in{Z}$, and let $\cal Y$ be the abelian subalgebra generated
by the $K_{i},i\in{Z}$.
Denote by ${\cal A}^{+}$ the subalgebra generated by the $L_{i},K_{i}$ with $i>0$,
let ${\cal A}^{-}$ be the subalgebra generated by the $L_{i},K_{i}$
with $i<0$, and let ${\cal A}^{0}$ be the subalgebra generated by
$L_{0}$, $K_{0}$.
The subalgebras ${\cal W}^{\pm}$, ${\cal W}^{0}$ and $\cal Y^{\pm}$,
${\cal Y}^{0}$ are defined analogously.
The vertex algebra $V\otimes V_{\cal L}$ is graded by $\cal L$,
because $V_{\cal L}=S(\hat{h}^{-}_{{Z}})\otimes{{R}}\{\cal L\}$ has
such a grading. The subspace of degree $r$ is $V\otimes S(\hat{h}^{-}_{{Z}})\otimes e^{r}$. This space will be denoted
$\cal H$. The following subspaces of the ${\cal A}$-module
$\cal H$ will be useful:
$$\begin{aligned}{\cal P}&=\{v\in{\cal H}\,|\,{\cal W}^{+}v=0\},\\ {\cal T}&=\{v\in{\cal H}\,|\,{\cal A}^{+}v=0\},\\ {\cal N}&=\hbox{the radical of the bilinear form $(\cdot,\cdot)_{0}$ on ${\cal P}$},\\ {\cal K}&=U({\cal Y}){\cal T}.\end{aligned}$$
Denote $V\otimes e^{r}$ by $Ve^{r}$.
Lemma A.1
With respect to the bilinear form $(\cdot,\cdot)_{0}$ on
$V\otimes V_{\cal L}$, $L_{i}^{*}=L_{-i}$
and $K_{i}^{*}=K_{-i}$ for all $i\in{Z}$.
Proof: Let $\omega$ be the conformal vector of
$V\otimes V_{\cal L}$, so $L_{i}=\omega_{i+1}$ and $\theta(\omega)=\omega$.
By the definition of the form $(\cdot,\cdot)_{0}$ and
equation (21) $L_{i}^{*}$ is
$$\mbox{Res}_{z^{-i-2}}Y(e^{zL_{1}}(-z^{-2})^{L_{0}}\omega,z^{-1}).$$
Since $L_{1}\omega=0$ and
$\mbox{wt\thinspace}\omega=2$ we have
$$Y(e^{zL_{1}}(-z^{-2})^{L_{0}}\omega,z^{-1})=\sum_{n\in{Z}}\omega_{n}z^{n-3}.$$
Thus $L_{i}^{*}=\omega_{-i+1}=L_{-i}$.
Now consider $K_{i}=(1\otimes w(-1))_{i}$. By equation (21)
$$K_{i}^{*}=\mbox{Res}_{z^{-i-1}}\theta Y(e^{zL_{1}}(-z^{-2})^{L_{0}}(1\otimes w(-1)),z^{-1}).$$
To calculate
this, note that
$\theta(1\otimes w(-1))=-(1\otimes w(-1))$, that
$\mbox{wt\thinspace}(1\otimes w(-1))=1$ and that $L_{1}(1\otimes w(-1))=0$, so
$$-Y(e^{zL_{1}}(-z^{-2})^{L_{0}}(1\otimes w(-1)),z^{-1})=\sum_{n\in{Z}}(1\otimes w(-1))_{n}z^{n-1}.$$
We conclude $K_{i}^{*}=K_{-i}$. $\Box$
Lemma A.2
The bilinear form $(\cdot,\cdot)_{0}$ restricted to
${\cal H}$ is nondegenerate.
Proof:
The form $(\cdot,\cdot)_{0}$ on $V\otimes V_{\cal L}$ is nondegenerate.
The form also satisfies $(u,v)_{0}=0$ unless $\deg(u)=\deg(v)$ in $\cal L$.
Thus the radical of the form $(\cdot,\cdot)_{0}$ restricted to
${\cal H}$ is contained in the
radical of the form on $V\otimes V_{\cal L}$. $\Box$
Lemma A.3
${\cal H}=U({\cal A}){\cal T}$.
Proof:
The bilinear form on ${\cal H}$ is nondegenerate (Lemma
A.2) and distinct $L_{0}$-weight spaces of $\cal H$ are
orthogonal.
Thus the finite-dimensional $i$th $L_{0}$-weight space
${\cal H}_{i}=U({\cal A}){\cal T}_{i}\oplus(U({\cal A}){\cal T})_{i}^{\perp}.$
Then
there is a decomposition into ${\cal A}$-submodules:
$${\cal H}=U({\cal A}){\cal T}\oplus(U({\cal A}){\cal T})^{\perp}.$$
If the graded submodule $(U({\cal A}){\cal T})^{\perp}$ is nonzero
then it contains a vector
annihilated by ${\cal A}^{+}$ by the following argument: The grading of ${\cal H}$
(by weights of $L_{0}$) is such that ${\cal H}=\coprod_{i\geq{1\over 2}\langle r,r\rangle}{\cal H}_{i}$.
The actions of the generators $L_{i}$ and $K_{i}$, $i>0$, of ${\cal A}^{+}$
lower the weight of a vector in ${\cal H}$. If
$n$ is the smallest integer such that $(U({\cal A}){\cal T})^{\perp}\cap{\cal H}_{n}$ is nonzero, then this subspace consists of vectors
annihilated by ${\cal A}^{+}$. By definition such a vector is in $\cal T$, hence
is in $U({\cal A}){\cal T}$, a contradiction.
$\Box$
Lemma A.4
${\cal K}={\cal T}\oplus\mbox{rad}(\cdot,\cdot)_{0}$
Proof:
Note that ${\cal Y}={\cal Y}^{-}\oplus{\cal Y}^{+}\oplus{\cal Y}^{0}$,
so that by the
Poincaré-Birkhoff-Witt
theorem ${\cal K}=U({\cal Y}^{-})U({\cal Y}^{+})U({\cal Y}^{0}){\cal T}$.
By definition of
${\cal T}$,
$U({\cal Y}^{+}){\cal T}={\cal T}$ so ${\cal K}=U({\cal Y}^{-}){\cal T}$.
Thus
$${\cal K}={\cal T}\oplus{\cal Y}^{-}U({\cal Y}^{-}){\cal T}.$$
By Lemma A.1
$$({\cal K},{\cal Y}^{-}U({\cal Y}^{-}){\cal T})_{0}=({\cal Y}^{+}{\cal K},U({\cal Y}^{-}){\cal T})_{0}=0.$$
Therefore
${\cal Y}^{-}U({\cal Y}^{-}){\cal T}\subset\mbox{rad}(\cdot,\cdot)_{0}$.
Furthermore, ${\cal T}\cap\mbox{rad}(\cdot,\cdot)_{0}=0$ because if
$t\in{\cal T}\cap\mbox{rad}(\cdot,\cdot)_{0}$ then
$(t,U({{\cal A}}){\cal T})_{0}=(t,U({\cal A}^{-}){\cal T})_{0}=0$, and by Lemma
A.2 $t=0$. $\Box$
Lemma A.5
${\cal K}=Ve^{r}\oplus\mbox{rad}(\cdot,\cdot)_{0}$
Proof:
It is immediate from the definition that elements of ${\cal K}$
are lowest weight vectors of ${\cal Y}$. Furthermore,
$${\cal H}=U({\cal A}){\cal T}=[U({\cal W}^{-})U({\cal Y}^{-})]{\cal T}={\cal W}^{-}U({\cal W}^{-})U({\cal Y}^{-}){\cal T}\oplus U({\cal Y}^{-}){\cal T}.$$
Since no nonzero element of ${\cal W}^{-}U({\cal W}^{-})U({\cal Y}^{-}){\cal T}$ is a
lowest weight vector
of ${\cal Y}$, ${\cal K}$ is the subspace of $\cal H$ of lowest
weight vectors of the abelian Lie algebra ${\cal Y}$.
In order to describe the lowest weight vectors explicitly
consider $S(\hat{{h}}^{-}_{{Z}})$ as a polynomial algebra on the
generators $x_{i}=w(-i)$, $z_{i}=r(-i),i>0$, so that
$S(\hat{{h}}^{-}_{{Z}})={C}[x_{i}]_{i>0}\otimes{C}[z_{i}]_{i>0}$.
The elements $w(i)$, $i\in{Z}$ act on $S(\hat{{h}}^{-}_{{Z}})$
via multiplication, with $w(i)\cdot{\bf 1}=0$ if $i>0$,
$[w(k),w(j)]=0$ and $[w(k),r(j)]=k\delta_{k+j,0}\langle w,r\rangle$. Thus
the element $w(k)$ acts on ${C}[z_{i}]_{i>0}$ as the differential operator
$k\langle r,w\rangle\partial/\partial z_{k}$ for all $k>0$.
By definition
of the $K_{i}$, the lowest weight
vectors of the ${\cal Y}$-module $\cal H$ are the lowest
weight vectors of the above actions of the $w(i)$, $i\in{Z}$, on
${C}[x_{i}]_{i>0}\otimes{C}[z_{i}]_{i>0}$. Since the action of the
$w(i)$, $i\in{Z}$,
commute with the elements ${C}[x_{i}]_{i>0}$, the lowest weight vectors
are determined by the
elements $q\in{C}[z_{i}]_{i>0}$
satisfying $\partial q/\partial z_{k}=0$ for all $k>0$, so
$q$ is a constant. Therefore the lowest weight vectors of the action of
${\cal Y}$ on
$S(\hat{{h}}^{-}_{{Z}})$ correspond to the elements
${C}[x_{i}]_{i>0}$.
Thus
${\cal K}=Ve^{r}\oplus[V\otimes({C}[x_{i}]_{i>0}\backslash{C})e^{r}]$.
Furthermore $V\otimes({C}[w(-i)]_{i>0}\backslash{C})e^{r}\subset\mbox{rad}(\cdot,\cdot)_{0}$, and the form on $Ve^{r}$ is nondegenerate.
$\Box$
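The differential-operator realization used in this proof can be checked directly: on ${C}[z_{i}]$, the operator $k\langle r,w\rangle\,\partial/\partial z_{k}$ together with multiplication by $z_{k}$ must reproduce the Heisenberg relation $[w(k),r(-k)]=k\langle w,r\rangle$. The sketch below (Python with sympy, illustrative only; the symbol `h` stands for the pairing $\langle r,w\rangle$) verifies this on an arbitrary test polynomial.

```python
import sympy as sp

h = sp.Symbol('h')                 # stands for the pairing <r, w>
z = sp.symbols('z1:5')             # z_1, ..., z_4

def w_op(k, f):
    # action of w(k), k > 0: the differential operator k<r,w> d/dz_k
    return k * h * sp.diff(f, z[k - 1])

def r_op(k, f):
    # action of r(-k), k > 0: multiplication by z_k
    return z[k - 1] * f

f = (1 + z[0]**2 * z[1]) * z[2]    # an arbitrary test polynomial
for k in range(1, 5):
    comm = sp.expand(w_op(k, r_op(k, f)) - r_op(k, w_op(k, f)))
    # Heisenberg relation [w(k), r(-k)] = k <w, r> as a scalar operator
    assert comm == sp.expand(k * h * f)
print("Heisenberg relation verified for k = 1, ..., 4")
```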
Let ${\cal S}={\cal W}^{-}U({\cal W}^{-})U({\cal Y}^{-}){\cal T}\subset{\cal H}$.
This is called the space of “spurious vectors” in [25].
It follows from this definition (and the Poincaré-Birkhoff-Witt theorem)
that $\cal H=S\oplus K$.
Lemma A.6
The associative algebra generated by the elements $L_{i}$
for $i>0$ is generated by elements mapping ${\cal S}_{(1)}$ into $\cal S$.
Proof: This is exactly the same argument as in
[25]. First we will show that $L_{1}$ and $L_{2}+{3\over 2}L_{1}^{2}$ have this property. Any $s\in{\cal S}$ can be written
$$s=L_{-1}f_{1}+L_{-2}f_{2}$$
where $f_{1},f_{2}\in\cal H$ since any $L_{m},m<0$ can be written as a polynomial in $L_{-1}$ and $L_{-2}$.
Furthermore $L_{0}s=s$ if and only if
$$L_{0}L_{-1}f_{1}+L_{0}L_{-2}f_{2}=L_{-1}f_{1}+L_{-2}f_{2},$$
so we may assume $L_{0}f_{1}=0$ and $L_{0}f_{2}=-f_{2}$. Thus every $s\in{\cal S}_{(1)}$ can be written $s=L_{-1}f_{1}+L_{-2}f_{2}$ with $L_{0}f_{1}=0$ and $L_{0}f_{2}=-f_{2}$.
Now we compute
$$L_{1}s=L_{1}L_{-1}f_{1}+L_{1}L_{-2}f_{2}=L_{-1}L_{1}f_{1}+2L_{0}f_{1}+3L_{-1}f_{2}+L_{-2}L_{1}f_{2},$$
and this is in $\cal S$. Furthermore
$$\begin{aligned}(L_{2}+\tfrac{3}{2}L_{1}L_{1})s&=L_{2}L_{-1}f_{1}+\tfrac{3}{2}L_{1}L_{1}L_{-1}f_{1}+L_{2}L_{-2}f_{2}+\tfrac{3}{2}L_{1}L_{1}L_{-2}f_{2}\\
&=L_{-1}L_{2}f_{1}+3L_{1}f_{1}+\tfrac{3}{2}L_{1}L_{-1}L_{1}f_{1}+L_{-2}L_{2}f_{2}\\
&\quad+4L_{0}f_{2}+\tfrac{26}{12}\cdot 6f_{2}+\tfrac{3}{2}L_{1}L_{-2}L_{1}f_{2}+\tfrac{3^{2}}{2}L_{1}L_{-1}f_{2}\\
&=L_{-1}(L_{2}+\tfrac{3}{2}L_{1}L_{1})f_{1}+L_{-2}(L_{2}+\tfrac{3}{2}L_{1}L_{1})f_{2}+9L_{-1}L_{1}f_{2}.\end{aligned}$$
The above is a spurious vector (it contains $L_{-i}$ with $i>0$). Note
that $D=26$ is necessary for this computation to work. Since $L_{1}$
and $L_{2}+{3/2}L_{1}^{2}$ generate the algebra generated by the $L_{i}$, where
$i>0$, the lemma is proven. $\Box$
Lemma A.7
${\cal P}_{(1)}$ is the direct sum of
${\cal T}_{(1)}$ and ${\cal N}_{(1)}$.
Proof: Let $p\in{\cal P}_{(1)}$. Then $p=k+s$ where $k\in{{\cal K}}_{(1)}$ and $s\in{\cal S}_{(1)}$; the decomposition is unique. By the preceding lemma
a generator $u$ (that is, $L_{1}$ or $L_{2}+{3\over 2}L_{1}^{2}$) of ${\cal W}^{+}$
satisfies $0=up=us+uk\in{\cal S}\oplus{\cal K}$. Thus $us=uk=0$,
and we see that
$s$ is annihilated by ${\cal W}^{+}$.
We conclude that $k\in{{\cal K}\cap{\cal P}={\cal T}}$
and $s\in{\cal S}_{(1)}\cap{\cal P}$. Since $\cal S$ is orthogonal
to $\cal P$,
$s$ must be an element in the radical of the form, $s\in{\cal N}_{(1)}$. We conclude
that ${\cal P}_{(1)}={\cal T}_{(1)}\oplus{\cal N}_{(1)}$. $\Box$
Theorem 6.2 now follows for $r\neq 0$ because
Lemma A.4 and Lemma A.5 imply $Ve^{r}\approx{\cal T}$ so
$V_{(1-\langle r,r\rangle/2)}e^{r}\approx{\cal T}_{(1)}$, and Lemma A.7
shows that ${\cal T}_{(1)}$ is
isomorphic to the quotient ${\cal P}_{(1)}/{\cal N}_{(1)}$. The
isomorphism is naturally a $G$-isomorphism.
If $r=0$, first note that
$$(V\otimes V_{\cal L})_{(1)}=V_{(1)}\oplus(V_{(0)}\otimes(V_{\cal L})_{(1)}).$$
The subspace $(V_{\cal L})_{(1)}$ is two-dimensional, spanned
by vectors of the form
$\alpha(-1)\iota(1),\beta(-1)\iota(1)$ where $\alpha,\beta$ span ${\cal L}$.
Furthermore, if $v\in V_{(1)}$ then $L_{n}v=0$ for $n>1$ by the
condition that
$V$ be ${N}$-graded. Since $L_{1}v\in V_{(0)}$, $L_{1}v=c1$
for some constant $c$ and since $(L_{1}v,1)=(v,L_{-1}1)=0$ we have $c=0$.
It is easy to show that if $v\in V_{(0)}\otimes(V_{\cal L})_{(1)}$
then $L_{n}v=0$
for $n>0$. Thus the vectors in $V_{(1)}\oplus(V_{(0)}\otimes(V_{\cal L})_{(1)})$
are in ${\cal P}_{(1)}$, and the form restricted to this subspace is nondegenerate, so Theorem 6.2 follows.
$\Box$
References
[1]
R. Borcherds, Vertex algebras, Kac-Moody algebras, and the
monster, Proc. Natl. Acad. Sci. 83 (1986), 3068-3071.
[2]
R. Borcherds, Generalized Kac-Moody Lie algebras,
J. Algebra 115 (1988), 501-512.
[3]
R. Borcherds, Central extensions of generalized Kac-Moody Lie
algebras, J. Algebra 140 (1991), 330-335.
[4]
R. Borcherds, Monstrous moonshine and monstrous Lie superalgebras,
Invent. Math. 109 (1992), 405-444.
[5]
N. Bourbaki, Lie Groups and Lie Algebras, Part 1, Hermann,
Paris, 1975.
[6]
J. H. Conway and S. P. Norton, Monstrous Moonshine, Bull. London
Math. Soc. 11 (1979), 308-339.
[7]
C. Dong, Vertex algebras associated with even lattices,
J. Algebra 161 (1993), 245-265.
[8]
C. Dong and J. Lepowsky,
Generalized vertex algebras and relative vertex
operators, Progress in Math. 112, Birkhauser, Boston 1993.
[9]
I. Frenkel, Y. Z. Huang, and J. Lepowsky, On axiomatic approaches
to vertex operator algebras and modules, preprint, 1989; Memoirs
American Math. Soc. 104 no. 494, American Math. Soc., Providence
1993.
[10]
I. Frenkel, J. Lepowsky, and A. Meurman, A natural representation of
the Fischer-Griess Monster with the modular function $J$ as character,
Proc. Natl. Acad. Sci. USA 81 (1984), 3256-3260.
[11]
I. Frenkel, J. Lepowsky, and A. Meurman, Vertex Operator
Algebras and the Monster, Academic Press, Boston 1988.
[12]
O. Gabber and V. Kac, On defining relations of certain
infinite-dimensional Lie algebras, Bull. Amer. Math. Soc.
5 no.2 (1981).
[13]
H. Garland and J. Lepowsky, Lie algebra homology and the
Macdonald-Kac formulas, Invent. Math. 34 (1976), 37-76.
[14]
R. Gebert and J. Teschner,
On the fundamental representation of Borcherds algebras with one
imaginary simple root, Lett. Math. Phys, to appear.
[15]
P. Goddard and C. B. Thorn, Compatibility of the Dual
Pomeron with unitarity and the absence of ghosts in the dual resonance
model, Phys. Lett. B 40 no. 2 (1972), 235-238.
[16]
K. Harada, M. Miyamoto and H. Yamada, A generalization of Kac-Moody Lie
algebras, to appear.
[17]
E. Jurisich, Generalized Kac-Moody algebras and their relation to
free Lie algebras, Ph.D. thesis,
Rutgers University, May, 1994.
[18]
V. Kac, Infinite dimensional Lie algebras, Cambridge
University Press, third edition, 1990.
[19]
S. J. Kang, Generalized Kac-Moody algebras and the modular function
$j$, Math. Ann., to appear.
[20]
J. Lepowsky, Lectures on Kac-Moody Lie algebras, Université
Paris VI, 1978 (unpublished).
[21]
H.-S. Li, Equivalence of definitions of vertex algebras,
personal communication, 1991.
[22]
H.-S. Li, Symmetric invariant bilinear forms on vertex operator
algebras, J. Pure and Applied Algebra 96 (1994), 276-279.
[23]
R. Moody, A new class of Lie algebras, J. Algebra 10 (1968),
211-230.
[24]
J. P. Serre, A course in arithmetic,
Springer-Verlag, New York, 1973.
[25]
C. Thorn, A detailed study of the physical state
conditions in covariantly quantized string theories, Nuclear
Physics B286 (1987), 61-77. |
A $5D$ non compact and non Ricci flat Kaluza-Klein Cosmology
F. Darabi
Department of Physics, Azarbaijan
University of Tarbiat Moallem, 53714-161, Tabriz, Iran.
Research Institute for Astronomy and Astrophysics of
Maragha, 55134-441, Maragha, Iran.
e-mail:
f.darabi@azaruniv.edu
Abstract
A model universe is proposed in the framework of 5-dimensional
noncompact Kaluza-Klein cosmology which is not Ricci flat. The $4D$
part, taken to be the Robertson-Walker metric, is coupled to a
conventional perfect fluid, and the extra-dimensional part is coupled
to a dark pressure through a scalar field. It is shown that neither
early inflation nor the current acceleration of the 4$D$ universe would
happen if the non-vacuum states of the scalar field were to contribute
to the 4$D$ cosmology.
1 Introduction
According to the old suggestion of Kaluza and Klein the $5D$ vacuum
Kaluza-Klein equations can be reduced under certain conditions to
the $4D$ vacuum Einstein equations plus the $4D$ Maxwell equations.
Recently, the idea that our four dimensional universe might have
emerged from a higher dimensional space-time is receiving much
attention [1]. One current interest is to find out in a more
general way how the $5D$ field equations relate to the $4D$ ones. In
this regard, a proposal was made by Wesson, according to which the $5D$
Einstein equations without sources, $R_{AB}=0$ (the Ricci-flat
assumption), may be reduced to the $4D$ ones with sources, $G_{\alpha\beta}=8\pi GT_{\alpha\beta}$, provided an appropriate definition
is made for the energy-momentum tensor of matter in terms of the
extra part of the geometry [2]. Physically, the picture
behind this interpretation is that curvature in $(4+1)$ space
induces effective properties of matter in $(3+1)$ space-time. This
idea is known as space-time-matter or modern
Kaluza-Klein theory.
In a parallel way, the brane world scenario [3] assumes
that our four-dimensional universe (the brane) is embedded in a
higher dimensional space-time (the bulk). The important ingredient
of the brane world scenario, unlike the space-time-matter theory, is
that the matter exists apart from geometry and is confined to the
brane, and the only communication between the brane and the bulk is
through gravitational interaction. The brane world picture relies on
a $Z_{2}$ symmetry and is inspired from string theory and its
extensions [4]. This approach differs from the old Kaluza-Klein
idea in that the size of the extra dimensions could be large, more
or less similar to the idea in modern Kaluza-Klein theory.
On the other hand, the recent distance measurements of type Ia
supernova suggest an accelerating universe [5]. This
accelerating expansion is generally believed to be driven by an
energy source which provides positive energy density and negative
pressure, such as a positive cosmological constant [6], or a
slowly evolving real scalar field called quintessence
[7]. Since in a variety of inflationary models scalar fields
have been used in describing the transition from the
quasi-exponential expansion of the early universe to a power law
expansion, it is natural to try to understand the present
acceleration of the universe by constructing models where the matter
responsible for such behavior is also represented by a scalar field.
Such models are worked out, for example, in Ref. [8]. Bellini
et al., on the other hand, have published extensively on the
evolution of the universe from noncompact vacuum Kaluza-Klein
theory [11]. They used the essence of STM theory and
developed a 5D mechanism to explain, by a single scalar field, the
evolution of the universe including inflationary expansion and the
present day observed accelerated expansion.
In general, scalar fields are not the only possibility to describe
the current acceleration of the universe; there are (of course)
alternatives. In particular, one can try to do it by using some
perfect fluid but obeying exotic equations of state, the so-called
Chaplygin gas [9]. This equation of state has recently
raised a certain interest because of its many interesting and, in
some sense, intriguingly unique features. For instance, the
Chaplygin gas represents a possible unification of dark matter and
dark energy, since its cosmological evolution is similar to an
initial dust like matter and a cosmological constant for late times
[10].
In this paper, motivated by higher dimensional theories, we are
interested in constructing a 5$D$ cosmological model which is not
Ricci flat, but is extended to be coupled to a higher dimensional
energy-momentum tensor. This contrasts with the explicit idea of induced
matter in STM theory. Instead, we will show that the higher
dimensional sector of this model may induce a dark pressure, through
a scalar field, in four dimensional universe. The implications of
this dark pressure on the inflationary phase and current
acceleration of the universe will be discussed.
2 The Model
We start with the $5D$ line element
$$dS^{2}=g_{AB}dx^{A}dx^{B},$$
(1)
in which $A$ and $B$ run over both the space-time coordinates
$\alpha,\beta$ and one non compact extra dimension indicated by
$4$. The space-time part of the metric $g_{\alpha\beta}=g_{\alpha\beta}(x^{\alpha})$ is assumed to define the Robertson-Walker line
element
$$ds^{2}=dt^{2}-a^{2}(t)\left(\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\theta^{2}+\sin^{2}\theta\,d\phi^{2})\right),$$
(2)
where $k$ takes the values $+1,0,-1$ for a closed, flat or
open universe, respectively. We also take the following
$$g_{4\alpha}=0,\>\>\>\>g_{44}=\epsilon\Phi^{2}(x^{\alpha}),$$
where $\epsilon^{2}=1$ and the signature of the higher-dimensional
part of the metric is left general. This choice can be made without
loss of generality, because any fully covariant $5D$ theory has five
coordinate degrees of freedom, which here lead to considerable
algebraic simplification. Unlike the noncompact vacuum
Kaluza-Klein theory, we will assume the fully covariant $5D$
non-vacuum Einstein equation
$$G_{AB}=8\pi GT_{AB},$$
(3)
where $G_{AB}$ and $T_{AB}$ are the $5D$ Einstein tensor and
energy-momentum tensor, respectively. Note that the $5D$
gravitational constant has been fixed to be the same value as the
$4D$ one. In the following we use the geometric reduction from 5$D$
to 4$D$ as appeared in [12]. The $5D$ Ricci tensor is given
in terms of the $5D$ Christoffel symbols by
$$R_{AB}=\partial_{C}\Gamma^{C}_{AB}-\partial_{B}\Gamma^{C}_{AC}+\Gamma^{C}_{AB}%
\Gamma^{D}_{CD}-\Gamma^{C}_{AD}\Gamma^{D}_{BC}.$$
(4)
The $4D$ part of the $5D$ quantity is obtained by putting $A\rightarrow\alpha$, $B\rightarrow\beta$ in (4) and
expanding the summed terms on the r.h.s by letting $C\rightarrow\lambda,4$ etc. Therefore, we have
$$\hat{R}_{\alpha\beta}=\partial_{\lambda}\Gamma^{\lambda}_{\alpha\beta}+\partial_{4}\Gamma^{4}_{\alpha\beta}-\partial_{\beta}\Gamma^{\lambda}_{\alpha\lambda}-\partial_{\beta}\Gamma^{4}_{\alpha 4}+\Gamma^{\lambda}_{\alpha\beta}\Gamma^{\mu}_{\lambda\mu}+\Gamma^{\lambda}_{\alpha\beta}\Gamma^{4}_{\lambda 4}+\Gamma^{4}_{\alpha\beta}\Gamma^{D}_{4D}-\Gamma^{\mu}_{\alpha\lambda}\Gamma^{\lambda}_{\beta\mu}-\Gamma^{4}_{\alpha\lambda}\Gamma^{\lambda}_{\beta 4}-\Gamma^{D}_{\alpha 4}\Gamma^{4}_{\beta D},$$
(5)
where $\hat{}$ denotes the $4D$ part of the $5D$ quantities. One
finds the $4D$ Ricci tensor as a part of this equation which may be
cast in the following form
$$\hat{R}_{\alpha\beta}={R}_{\alpha\beta}+\partial_{4}\Gamma^{4}_{\alpha\beta}-\partial_{\beta}\Gamma^{4}_{\alpha 4}+\Gamma^{\lambda}_{\alpha\beta}\Gamma^{4}_{\lambda 4}+\Gamma^{4}_{\alpha\beta}\Gamma^{D}_{4D}-\Gamma^{4}_{\alpha\lambda}\Gamma^{\lambda}_{\beta 4}-\Gamma^{D}_{\alpha 4}\Gamma^{4}_{\beta D}.$$
(6)
Evaluating the Christoffel symbols for the metric $g_{AB}$ gives
$$\hat{R}_{\alpha\beta}={R}_{\alpha\beta}-\frac{\nabla_{\alpha}\nabla_{\beta}\Phi}{\Phi}.$$
(7)
Putting $A=4,B=4$ and expanding with $C\rightarrow\lambda,4$ in
Eq.(4) we obtain
$${R}_{44}=\partial_{\lambda}\Gamma^{\lambda}_{44}-\partial_{4}\Gamma^{\lambda}_{4\lambda}+\Gamma^{\lambda}_{44}\Gamma^{\mu}_{\lambda\mu}+\Gamma^{4}_{44}\Gamma^{\mu}_{4\mu}-\Gamma^{\lambda}_{4\mu}\Gamma^{\mu}_{4\lambda}-\Gamma^{4}_{4\mu}\Gamma^{\mu}_{44}.$$
(8)
Evaluating the corresponding Christoffel symbols in Eq.(8)
leads to
$${R}_{44}=-\epsilon\Phi\Box\Phi.$$
(9)
We now construct the space-time components of the Einstein tensor
$$G_{AB}=R_{AB}-\frac{1}{2}g_{AB}R_{(5)}.$$
In so doing, we first obtain the $5D$ Ricci scalar $R_{(5)}$ as
$$R_{(5)}=g^{AB}R_{AB}=\hat{g}^{\alpha\beta}\hat{R}_{\alpha\beta}+g^{44}R_{44}=g^{\alpha\beta}\left(R_{\alpha\beta}-\frac{\nabla_{\alpha}\nabla_{\beta}\Phi}{\Phi}\right)+\frac{\epsilon}{\Phi^{2}}(-\epsilon\Phi\Box\Phi)=R-\frac{2}{\Phi}\Box\Phi,$$
(10)
where the $\alpha 4$ terms vanish and $R$ is the $4D$ Ricci scalar.
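Equations (7), (9) and (10) can be checked symbolically in a simple special case. The sketch below (Python with sympy, illustrative; the flat 4D part $\mathrm{diag}(1,-1,-1,-1)$ with $\Phi=\Phi(t)$ is an assumed test case, not from the paper) implements the Ricci tensor exactly as in Eq. (4) and confirms $R_{(5)}=R-2\Box\Phi/\Phi$, which here reduces to $-2\ddot{\Phi}/\Phi$ since $R=0$ and $\Box\Phi=\ddot{\Phi}$.

```python
import sympy as sp

t, x, y, z, w, eps = sp.symbols('t x y z w epsilon')
Phi = sp.Function('Phi')(t)
X = [t, x, y, z, w]
n = 5

# 5D metric: flat 4D part (+,-,-,-) plus g_44 = eps * Phi(t)^2
g = sp.diag(1, -1, -1, -1, eps * Phi**2)
ginv = g.inv()

# Christoffel symbols Gamma^C_{AB}
Gamma = [[[sp.simplify(sum(ginv[C, D] * (sp.diff(g[D, A], X[B])
                                         + sp.diff(g[D, B], X[A])
                                         - sp.diff(g[A, B], X[D]))
                           for D in range(n)) / 2)
           for B in range(n)] for A in range(n)] for C in range(n)]

# Ricci tensor exactly as in Eq. (4)
def ricci(A, B):
    return sp.simplify(
        sum(sp.diff(Gamma[C][A][B], X[C]) for C in range(n))
        - sum(sp.diff(Gamma[C][A][C], X[B]) for C in range(n))
        + sum(Gamma[C][A][B] * Gamma[D][C][D]
              for C in range(n) for D in range(n))
        - sum(Gamma[C][A][D] * Gamma[D][B][C]
              for C in range(n) for D in range(n)))

R5 = sp.simplify(sum(ginv[A, B] * ricci(A, B)
                     for A in range(n) for B in range(n)))

# Eq. (10) prediction for this metric: R_(5) = 0 - 2 Phi'' / Phi
assert sp.simplify(R5 + 2 * sp.diff(Phi, t, 2) / Phi) == 0
print("Eq. (10) verified:", R5)
```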
The space-time components of the Einstein tensor are written
$\hat{G}_{\alpha\beta}=\hat{R}_{\alpha\beta}-\frac{1}{2}\hat{g}_{\alpha\beta}R_{(5)}$. Substituting
$\hat{R}_{\alpha\beta}$ and $R_{(5)}$ into this expression gives
$$\hat{G}_{\alpha\beta}={G}_{\alpha\beta}+\frac{1}{\Phi}(g_{\alpha\beta}\Box\Phi-\nabla_{\alpha}\nabla_{\beta}\Phi).$$
(11)
In the same way, the $4$-$4$ component is written ${G}_{44}={R}_{44}-\frac{1}{2}g_{44}R_{(5)}$; substituting ${R}_{44}$ and
$R_{(5)}$ into this component gives
$$G_{44}=-\frac{1}{2}\epsilon R\Phi^{2}.$$
(12)
We now consider the $5D$ energy-momentum tensor. The form of
energy-momentum tensor is dictated by Einstein’s equations and by
the symmetries of the metric (2). Therefore, we may assume a
perfect fluid with nonvanishing elements
$${T}_{\alpha\beta}=(\rho+p){u}_{\alpha}{u}_{\beta}-p{g}_{\alpha\beta},$$
(13)
$${T}_{44}=-\bar{p}g_{44}=-\epsilon\bar{p}\Phi^{2},$$
(14)
where $\rho$ and $p$ are the conventional density and pressure of
perfect fluid in the $4D$ standard cosmology and $\bar{p}$ acts as a
pressure living along the higher dimensional sector. Hence, the
field equations (3) are to be viewed as constraints on
the simultaneous geometric and physical choices of $G_{AB}$ and
$T_{AB}$ components, respectively.
Substituting the energy-momentum components (13) and (14), together
with the corresponding Einstein tensor components (11) and (12), into
Eq. (3), we obtain the field equations (the $\alpha 4$ components of
the Einstein equation (3) give ${R}_{\alpha 4}=0$, an identity with no
useful information):
$$G_{\alpha\beta}=8\pi G[(\rho+p)u_{\alpha}u_{\beta}-pg_{\alpha\beta}]+\frac{1}{\Phi}\left[\nabla_{\alpha}\nabla_{\beta}\Phi-\Box\Phi g_{\alpha\beta}\right],$$
(15)
and
$$R=16\pi G\bar{p}.$$
(16)
By evaluating the $g^{\alpha\beta}$ trace of Eq.(15) and
combining with Eq.(16) we obtain
$$\Box\Phi=\frac{1}{3}(8\pi G(\rho-3p)+16\pi G\bar{p})\Phi.$$
(17)
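The trace computation leading to Eq. (17) can be reproduced symbolically. In the sketch below (Python with sympy, illustrative only), the traced form of Eq. (15) is entered using $g^{\alpha\beta}G_{\alpha\beta}=-R$, $u_{\alpha}u^{\alpha}=1$, $g^{\alpha\beta}g_{\alpha\beta}=4$ and $g^{\alpha\beta}\nabla_{\alpha}\nabla_{\beta}\Phi=\Box\Phi$, and Eq. (16) is then substituted.

```python
import sympy as sp

# Symbols: box stands for the d'Alembertian of Phi
R, G, rho, p, pbar, Phi, box = sp.symbols('R G rho p pbar Phi BoxPhi')

# g^{ab} trace of Eq. (15): g^{ab}G_{ab} = -R, u_a u^a = 1,
# g^{ab}g_{ab} = 4, g^{ab} nabla_a nabla_b Phi = BoxPhi
traced = sp.Eq(-R, 8*sp.pi*G*((rho + p) - 4*p) + (box - 4*box)/Phi)

# Substitute Eq. (16), R = 16 pi G pbar, and solve for BoxPhi
sol = sp.solve(traced.subs(R, 16*sp.pi*G*pbar), box)[0]

# Eq. (17): BoxPhi = (1/3)(8 pi G (rho - 3p) + 16 pi G pbar) Phi
expected = sp.Rational(1, 3)*(8*sp.pi*G*(rho - 3*p) + 16*sp.pi*G*pbar)*Phi
assert sp.simplify(sol - expected) == 0
print("Eq. (17) recovered from the trace of Eq. (15) and Eq. (16)")
```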
This equation implies the following scalar-field potential
$$V(\Phi)=-\frac{1}{6}(8\pi G(\rho-3p)+16\pi G\bar{p})\Phi^{2},$$
(18)
whose minimum occurs at $\Phi=0$, for which the equations (15)
reduce to describe a usual $4D$ FRW universe filled with ordinary
matter $\rho$ and $p$. In other words, our conventional $4D$
universe corresponds to the vacuum state of the scalar field $\Phi$.
From Eq.(17), one may infer the following replacements for a
nonvanishing $\Phi$
$$\frac{1}{\Phi}\Box\Phi=\frac{1}{3}(8\pi G(\rho-3p)+16\pi G\bar{p}),$$
(19)
$$\frac{1}{\Phi}\nabla_{\alpha}\nabla_{\beta}\Phi=\frac{1}{3}(8\pi G(\rho-3p)+16\pi G\bar{p})u_{\alpha}u_{\beta}.$$
(20)
Putting the above replacements into Eq.(15) leads to
$$G_{\alpha\beta}=8\pi G[(\rho+\tilde{p})u_{\alpha}u_{\beta}-\tilde{p}g_{\alpha\beta}],$$
(21)
where
$$\tilde{p}=\frac{1}{3}(\rho+2\bar{p}).$$
(22)
This energy-momentum tensor effectively describes a perfect fluid
with density $\rho$ and pressure $\tilde{p}$. It is remarkable
that the contributions of the non-vacuum states of the scalar field
along the higher dimension cancel exactly the pressure $p$
of four dimensions. The field equations lead to two independent
equations
$$3\frac{\dot{a}^{2}+k}{a^{2}}=8\pi G\rho,$$
(23)
$$\frac{2a\ddot{a}+\dot{a}^{2}+k}{a^{2}}=-8\pi G\tilde{p}.$$
(24)
Differentiating (23) and combining with (24) we obtain
the conservation equation
$$\frac{d}{dt}(\rho a^{3})+\tilde{p}\frac{d}{dt}(a^{3})=0.$$
(25)
The equations (23) and (25) can be used to derive the
acceleration equation
$$\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3\tilde{p})=-\frac{8\pi G}{3}(\rho+\bar{p}).$$
(26)
If we choose the open universe ($k=-1$) in the Robertson-Walker metric
(2), so that $R=-a^{-2}$ and $\bar{p}=-\frac{1}{16\pi G}a^{-2}$, and insert a power-law behavior $\rho=Aa^{\alpha}$ into
the conservation equation (25), then we obtain (a closed universe
$k=1$, $R>0$ would instead give $\rho=-\frac{1}{16\pi G}a^{-2}<0$, which is not physically viable)
$$\rho=\frac{1}{16\pi G}a^{-2}>0.$$
(27)
By substituting $\rho$ and $\bar{p}$ into the acceleration equation
(26) we find
$$\ddot{a}=0.$$
(28)
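This solution can be verified against Eqs. (23)-(26) symbolically. In the sketch below (Python with sympy, illustrative only), the linear scale factor $a(t)=\sqrt{7/6}\,t$ is an assumed explicit solution consistent with Eq. (23) and the density (27); the paper itself only needs $\dot{a}=\mathrm{const}$, i.e. $\ddot{a}=0$.

```python
import sympy as sp

t, G = sp.symbols('t G', positive=True)
k = -1

# Assumed explicit solution: Eq. (23) with rho from Eq. (27) gives
# adot^2 = 7/6 for k = -1, so take a(t) linear in t.
a = sp.sqrt(sp.Rational(7, 6)) * t

rho = 1 / (16*sp.pi*G*a**2)              # Eq. (27)
pbar = -1 / (16*sp.pi*G*a**2)            # dark pressure for k = -1
ptil = sp.Rational(1, 3)*(rho + 2*pbar)  # Eq. (22)

adot = sp.diff(a, t)
addot = sp.diff(a, t, 2)

assert sp.simplify(3*(adot**2 + k)/a**2 - 8*sp.pi*G*rho) == 0            # Eq. (23)
assert sp.simplify((2*a*addot + adot**2 + k)/a**2 + 8*sp.pi*G*ptil) == 0  # Eq. (24)
assert sp.simplify(sp.diff(rho*a**3, t) + ptil*sp.diff(a**3, t)) == 0    # Eq. (25)
assert sp.simplify(rho + pbar) == 0 and addot == 0                        # Eq. (26)
print("k = -1 solution satisfies Eqs. (23)-(26) with zero acceleration")
```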
Therefore, we conclude that the contributions of non-vacuum states
of the scalar field, living along the higher dimension, can lead to
zero acceleration of the 4$D$ universe, no matter which equation of
state $p=p(\rho)$ is used.
Conclusion
In this paper, we have studied a $(4+1)$-dimensional metric subject
to a $(4+1)$ dimensional energy-momentum tensor in the framework of
noncompact Kaluza-Klein theory. The $4D$ part of the metric is taken
to be Robertson-Walker one subject to the conventional perfect fluid
with density $\rho$ and pressure $p$, and the extra-dimensional part
endowed by a scalar field is subject to the dark pressure $\bar{p}$.
By writing down the reduced $4D$ and extra-dimensional components of
$5D$ Einstein equations we found that our 4$D$ universe corresponds
to the vacuum state of the scalar field. It turned out that the
contributions of the non-vacuum states of the scalar field to the
4$D$ cosmology cancel exactly the four-dimensional pressure $p$
and lead to zero acceleration of the 4$D$ universe for any equation
of state. In other words, if the non-vacuum states of the scalar
field at higher dimension were to contribute to the 4$D$ cosmology,
we would see neither the current acceleration nor the expected early
inflation of the universe. It is then possible to think
about other universes, living in excited states of the scalar field,
in which neither inflation nor acceleration ever happens.
This model, although introduced in the framework of noncompact
Kaluza-Klein theory, is not of the space-time-matter type that Bellini
et al. have already worked out. A comparison between the approach
and results of this model and those of Bellini et al. is therefore
instructive. In the Bellini et al. approach, the Ricci
flat assumption $R_{AB}=0$ is made where matter as a whole is
induced by the dynamics of extra dimension. They developed a
5D mechanism to explain the (neutral scalar field governed)
evolution of the universe from inflationary expansion towards a
decelerated expansion followed by the present day observed
accelerated expansion. In this model, however, we assumed a full
5$D$ Einstein equation coupled to a higher-dimensional energy-momentum
tensor whose components are all independent of the fifth coordinate,
the purely extra-dimensional sector being described by a scalar field.
Reduction to four dimensions led us to a 4$D$ Einstein equation
coupled to a 4$D$ energy-momentum tensor (perfect fluid), accompanied
by scalar-field contributions induced from the extra
dimension. Both models, from different approaches, try to address the
early inflation and current acceleration of the universe. Bellini
et al explain both early inflation and current acceleration of
the universe by a single scalar field. In the present model,
however, we show that the contributions of non-vacuum states of a
scalar field can destroy early inflation and current acceleration of
the universe. This result is independent of the signature $\epsilon$
by which the higher dimension takes part in the $5D$ metric.
Finally, we comment on a conceptual issue usually
considered in higher-dimensional theories: why do we perceive the
four dimensions of space-time and apparently not see the fifth
dimension? In old Kaluza-Klein theory this question is answered by
resorting to a cyclic condition imposed on the 5th coordinate. Brane
world cosmology also provides a mechanism by which matter and all
but gravitational interactions stick to the branes. In modern
Kaluza-Klein theory (STM), however, the matter itself and the
induced fifth force manifest as the direct results of the existence
of the 5th dimension. Similarly, in the present model we find that
the extra dimension may manifest through a dark pressure. However,
as we discussed above, the existence of this dark pressure, through
non-vacuum states of the scalar field, would contradict the observed
acceleration and even the early inflation of the universe. So it turns
out that there is no such influence of dark pressure in our universe,
and the reason we do not see the higher dimension in this model
is that we are living in the vacuum state of the scalar field.
Acknowledgment
This work has been supported by Research Institute for Astronomy and
Astrophysics of Maragha.
References
[1]
T. Appelquist, A. Chodos and P. G. O. Freund, Modern Kaluza-Klein Theories, Frontiers in Physics Series (Volume
65), Addison-Wesley, 1986.
[2]
P. S. Wesson, Gen. Relativ. Gravit.16, 193
(1984); Space-Time-Matter: Modern Kaluza-Klein Theory, (World
Scientific. Singapore 1999).
[3]
L. Randall, R. Sundrum, Phys. Rev. Lett.83, 4690 (1999); Phys. Rev. Lett. 83, 3370 (1999).
[4]
P. Horava, E. Witten, Nucl. Phys. B475, 94
(1996); Nucl. Phys. B460, 506 (1996).
[5]
S. Perlmutter et al., Bull. Am. Phys. Soc. 29, 1351 (1997); Ap. J. 507, 46 (1998); A. G. Riess et
al., Astron. J. 116 (1998); P. M. Garnavich et al.,
Ap. J. Lett. 493, 53 (1998); Science 279, 1298 (1998);
Ap. J. 509, 74 (1998); B. Schmidt et al., The
High-Z Supernova Search: Measuring Cosmic Deceleration and Global
Curvature of the Universe using Type IA Supernova,
astro-ph/9805200.
[6]
L. Krauss and M. S. Turner, Gen. Rel. Grav. 27,
1137 (1995); J. P. Ostriker and P. J. Steinhardt, Nature 377, 600 (1995); A. R. Liddle, D. H. Lyth, P. T. Viana and M.
White, Mon. Not. Roy. Astron. Soc. 282, 281 (1996).
[7]
R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys.
Rev. Lett. 80, 1582 (1998).
[8]
A. A. Starobinsky, JETP Lett.68, 757 (1998) ; T. D. Saini, S. Raychaudhury, V. Sahni,
A. A. Starobinsky, Phys. Rev. Lett.85, 1162 (2000).
[9]
A. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B511, 265
(2001); N. Bilic, G. B. Tupper and R. Viollier, Phys. Lett. B535, 17 (2002); J. S. Fabris, S.V. Goncalves and P.E. de Souza,
Gen. Rel. Grav.34, 53 (2002); R. Jackiw, A particle field
theorists lectures on supersymmetric, nonabelian fluid mechanics
and D-branes, physics/0010042.
[10]
D. Bazeia, R. Jackiw, Ann. Phys.270, 246 (1998); D. Bazeia, Phys. Rev.
D59, 085007 (1999); R. Jackiw, A. P. Polychronakos, Commun.
Math. Phys.207, 107 (1999); N. Ogawa, Phys. Rev. D62,
085023 (2000).
[11]
J. E. Madriz Aguilar, M. Bellini,
Phys. Lett. B596, 116 (2004); J. E. Madriz Aguilar, M.
Bellini, Eur. Phys. J. C38, 367 (2004); M. Bellini, Nucl.
Phys. B660, 389 (2003); J. Ponce de Leon, Int. J. Mod. Phys.
D15, 1237 (2006).
[12]
P. S. Wesson, J. Ponce de Leon, J. Math. Phys.33, 3883,
(1992); J. Ponce de Leon, P. S. Wesson, J. Math. Phys.34,
4080, (1993). |
Traveling wave solutions for bistable fractional Allen-Cahn equations with a pyramidal front
Hardy Chan
H. Chan - Department of Mathematics, University of British Columbia, Vancouver, B.C., Canada, V6T 1Z2.
hardy@math.ubc.ca
and
Juncheng Wei
J. Wei - Department of Mathematics, University of British Columbia, Vancouver, B.C., Canada, V6T 1Z2.
jcwei@math.ubc.ca
Abstract.
Using the method of sub- and super-solutions, we construct a solution of $(-\Delta)^{s}u-cu_{z}-f(u)=0$ on $\mathbb{R}^{3}$ of pyramidal shape. Here $(-\Delta)^{s}$ is the fractional Laplacian of sub-critical order $1/2<s<1$ and $f$ is a bistable nonlinearity. Hence, the existence of a traveling wave solution for the parabolic fractional Allen-Cahn equation with pyramidal front is asserted.
The maximum of planar traveling wave solutions in various directions gives a sub-solution. A super-solution is roughly defined as the one-dimensional profile composed with the signed distance to a rescaled mollified pyramid. In the main estimate we use an expansion of the fractional Laplacian in the Fermi coordinates.
J. Wei is partially supported by NSERC of Canada.
1. Introduction
1.1. Traveling waves with local diffusion
Consider the nonlinear diffusion equations
$$v_{t}-\Delta{v}-f(v)=0,\qquad\text{in}~{}\mathbb{R}^{n}.$$
The study of such equations was initiated by Kolmogorov, Petrovsky and Piskunov [42] and Fisher [28].
Such reaction-diffusion equations have numerous applications in sciences
([28], [3], [42], [2], [26], [6], just to name a few)
as a model of genetics and pattern formation in biology, phase transition phenomena in physics, chemical reaction and combustion and many more.
It is natural to look for traveling wave solutions, that is, solutions of the form $v(x,t)=u(x^{\prime},x_{n}-ct)$ where $x=(x^{\prime},x_{n})$ and $c$ is the speed. The equation for $u$ reads as
$$-\Delta{u}-cu_{x_{n}}-f(u)=0,\qquad\text{in}~{}\mathbb{R}^{n}.$$
Planar traveling fronts are obtained by further restricting $u(x)=U(x_{n})$, resulting in a one-variable ODE
$$-U^{\prime\prime}-cU^{\prime}-f(U)=0,\qquad\text{in}~{}\mathbb{R}.$$
For the KPP nonlinearity $f(t)=t(1-t)$, which is monostable, a planar front exists if $c>2\sqrt{f^{\prime}(0)}>0$. In the case of the cubic bistable nonlinearity $f(t)=-(t-t_{0})(t-1)(t+1)$, the nonlinearity determines the speed uniquely by
$$c=\dfrac{\int_{-1}^{1}f(t)\,dt}{\int_{-\infty}^{\infty}U^{\prime}(\mu)^{2}\,d\mu}.$$
These classical results are discussed in [25].
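For the cubic nonlinearity, the speed formula can be checked against the classical explicit front $U(\mu)=-\tanh(\mu/\sqrt{2})$, which connects $+1$ at $-\infty$ to $-1$ at $+\infty$ and travels with speed $c=-\sqrt{2}\,t_{0}$. The following illustrative sketch (not part of the original text; it assumes NumPy, and the value of $t_{0}$ is arbitrary) verifies both the ODE residual and the quotient formula numerically.

```python
import numpy as np

def trap(y, x):
    """Composite trapezoid rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t0 = 0.3  # arbitrary asymmetry of the cubic bistable nonlinearity
f = lambda t: -(t - t0) * (t - 1.0) * (t + 1.0)

# Explicit front for the cubic nonlinearity (classical, verifiable by hand):
# U(mu) = -tanh(mu/sqrt(2)) solves -U'' - cU' - f(U) = 0 with c = -sqrt(2)*t0.
mu = np.linspace(-40.0, 40.0, 400001)
lam = 1.0 / np.sqrt(2.0)
U = -np.tanh(lam * mu)
dU = -lam / np.cosh(lam * mu) ** 2
d2U = np.tanh(lam * mu) / np.cosh(lam * mu) ** 2  # = 2*lam^2*sech^2*tanh
c_exact = -np.sqrt(2.0) * t0
residual = np.max(np.abs(-d2U - c_exact * dU - f(U)))

# Speed formula: c = (integral of f over (-1,1)) / (integral of U'^2 over R)
t = np.linspace(-1.0, 1.0, 200001)
c_formula = trap(f(t), t) / trap(dU ** 2, mu)
print(residual, c_formula, c_exact)
```

Both integrals are computed on fine grids; the tails of $U^{\prime}$ decay exponentially, so the truncation to $|\mu|\leq 40$ is harmless.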
The study of non-planar traveling waves with an unbalanced bistable nonlinearity ($t_{0}\neq 0$) is more interesting.
Ninomiya and Taniguchi [48] proved the existence of a V-shaped traveling wave when $n=2$.
Hamel, Monneau and Roquejoffre [36] obtained a higher dimensional analog with cylindrical symmetry.
Taniguchi [55] found asymptotically pyramidal waves.
He also constructed traveling waves whose conical front has a level set given by any convex compact set in any dimension $n$ [57] (see also [45]).
Generalized traveling fronts, like curved and pulsating ones, are also considered, notably by Berestycki and Hamel [7].
Qualitative properties such as stability and uniqueness have also been studied for various nonlinearities.
The readers are referred to [39], [40], [54], [38] and the references therein.
Hereafter we assume that $f\in{C^{2}}(\mathbb{R})$ is a more general bistable nonlinearity, that is, there exists $t_{0}\in(-1,1)$ such that
$$\left\{\begin{array}[]{l}f(\pm 1)=f(t_{0})=0\\
f(t)<0,\quad\forall t\in(-1,t_{0})\\
f(t)>0,\quad\forall t\in(t_{0},1)\\
f^{\prime}(\pm 1)<0.\end{array}\right.$$
(1.1)
1.2. Fractional Laplacian
One way to define the fractional Laplacian is via integral operator. Let $0<s<1$ and $n\geq 1$ be an integer. Consider the space of functions
$$C_{s}^{2}(\mathbb{R}^{n})=\left\{v\in{C}^{2}(\mathbb{R}^{n}):\int_{\mathbb{R}^%
{n}}\dfrac{\left\lvert v(x)\right\rvert}{(1+\left\lvert x\right\rvert)^{n+2s}}%
\,dx<\infty\right\}.$$
For any function $u\in{C_{s}^{2}}(\mathbb{R}^{n})$, we have the equivalent definitions
$$\begin{split}\displaystyle(-\Delta)^{s}{u}(x)&\displaystyle=C_{n,s}\textnormal%
{P.V.}\,\int_{\mathbb{R}^{n}}\!\dfrac{u(x)-u(x+\xi)}{\left\lvert\xi\right%
\rvert^{n+2s}}\,d\xi\\
&\displaystyle=C_{n,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{n}}\!\dfrac{u(x)-u(%
\xi)}{\left\lvert x-\xi\right\rvert^{n+2s}}\,d\xi\\
&\displaystyle=C_{n,s}\int_{\mathbb{R}^{n}}\!\dfrac{2u(x)-u(x+\xi)-u(x-\xi)}{2%
\left\lvert\xi\right\rvert^{n+2s}}\,d\xi\\
&\displaystyle=C_{n,s}\int_{\mathbb{R}^{n}}\!\dfrac{u(x)-u(x+\xi)+\chi_{D}(\xi%
)\nabla{u}(x)\cdot\xi}{\left\lvert\xi\right\rvert^{n+2s}}\,d\xi\end{split}$$
where $D$ is any ball centered at the origin and
$$C_{n,s}=\left(\int_{\mathbb{R}^{n}}\dfrac{1-\cos(\zeta_{1})}{\left\lvert\zeta%
\right\rvert^{n+2s}}\,d\zeta\right)^{-1}=\dfrac{2^{2s}s\Gamma(\frac{n}{2}+s)}{%
\Gamma(1-s)\pi^{\frac{n}{2}}}.$$
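As an illustrative numerical check (not from the paper; a sketch assuming NumPy), the integral defining $C_{n,s}^{-1}$ can be compared with the closed-form Gamma expression in one dimension, for the sub-critical sample value $s=3/4$.

```python
import math
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

n, s = 1, 0.75  # one dimension; sample sub-critical order

# Closed form: C_{n,s} = 2^{2s} s Gamma(n/2+s) / (Gamma(1-s) pi^{n/2}).
C = 2.0 ** (2 * s) * s * math.gamma(n / 2 + s) / (math.gamma(1 - s) * math.pi ** (n / 2))

# Defining integral (n = 1): C^{-1} = integral over R of (1 - cos z)|z|^{-1-2s} dz.
# 1 - cos z is evaluated as 2 sin^2(z/2) for numerical stability near 0, and a
# geometric grid resolves both the |z|^{1-2s} singularity at 0 and the tail.
z = np.geomspace(1e-12, 2000.0, 2_000_001)
g = 2.0 * np.sin(z / 2.0) ** 2 / z ** (1.0 + 2.0 * s)
C_inv_numeric = 2.0 * trap(g, z)
print(C_inv_numeric, 1.0 / C)
```

The truncation errors (below $10^{-12}$ and above $2000$) are both far smaller than the tolerance used in the comparison.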
We can also define it as a pseudo-differential operator with symbol $\left\lvert\xi\right\rvert^{2s}$, that is, for any $u\in\mathcal{S}(\mathbb{R}^{n})$, the Schwartz space of rapidly decaying functions,
$$\widehat{(-\Delta)^{s}u}(\xi)=\left\lvert\xi\right\rvert^{2s}\hat{u}(\xi),%
\qquad\text{for}~{}\xi\in\mathbb{R}^{n}.$$
See, for instance, [46].
Caffarelli and Silvestre [14] considered the localized extension problem
$$\begin{cases}\textnormal{div}\,(y^{1-2s}\nabla{v(x,y)})=0,&(x,y)\in\mathbb{R}^%
{n}_{+}:=\mathbb{R}^{n}\times(0,\infty),\\
v(x,0)=u(x),&x\in\mathbb{R}^{n},\end{cases}$$
(1.2)
and proved that the fractional Laplacian is some normal derivative
$$(-\Delta)^{s}u(x)=-\dfrac{\Gamma(s)}{2^{1-2s}\Gamma(1-s)}\lim_{y\to 0^{+}}y^{1%
-2s}v_{y}(x,y).$$
Hence, the fractional Laplacian is a Dirichlet-to-Neumann map.
The $s$-harmonic extension $v$ of $u$ can be recovered by the convolution $v(\cdot,y)=u\ast{P}_{n,s}(\cdot,y)$ where $P_{n,s}$ is the Poisson kernel
$$P_{n,s}(x,y)=\dfrac{\Gamma(\frac{n}{2}+s)}{\pi^{\frac{n}{2}}\Gamma(s)}\dfrac{y%
^{2s}}{\left(\left\lvert x\right\rvert^{2}+y^{2}\right)^{\frac{n}{2}+s}}.$$
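The Poisson kernel has unit mass in $x$ for every $y>0$, which is what makes the convolution an extension of $u$. The sketch below (an illustration only, assuming NumPy; the values of $s$ and $y$ are arbitrary) confirms this in one dimension.

```python
import math
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

n, s, y0 = 1, 0.75, 0.37  # arbitrary sample values; y0 > 0 is the extension height

const = math.gamma(n / 2 + s) / (math.pi ** (n / 2) * math.gamma(s))
P = lambda x: const * y0 ** (2 * s) / (x ** 2 + y0 ** 2) ** (n / 2 + s)

# The integral of P_{1,s}(x, y0) over x in R should equal 1 for every y0 > 0.
x = np.geomspace(1e-8, 1e6, 2_000_001)
mass = 2.0 * trap(P(x), x)
print(mass)
```

Analytically, substituting $x=y_{0}t$ reduces the integral to the Beta function $\mathrm{B}(1/2,s)$, which cancels the normalizing constant exactly.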
The fractional Laplacian can also be understood as the infinitesimal generator of a Lévy process [8] and it arises in the areas of probability and mathematical finance.
Its mathematical aspects have been studied extensively by many authors, for instance
[58], [14], [12], [49], [32], [52], [53] and [10].
In appendix A we list some useful properties.
When $n=1$ let us also write $(-\Delta)^{s}=(-\partial^{2})^{s}$.
1.3. The one-dimensional profile
Consider the equation
$$\begin{cases}(-\partial^{2})^{s}\Phi(\mu)-k\Phi^{\prime}(\mu)-f(\Phi(\mu))=0,&%
\forall\mu\in\mathbb{R}\\
\Phi^{\prime}(\mu)<0,&\forall\mu\in\mathbb{R}\\
\displaystyle\lim_{\mu\to\pm\infty}\Phi(\mu)=\mp 1.\end{cases}$$
(1.3)
Gui and Zhao [34] proved that
Theorem 1.1 (Existence of 1-dimensional profile).
For any $s\in(0,1)$ and for any bistable nonlinearity $f\in C^{2}(\mathbb{R})$ there exists a unique pair $(k,\Phi)$ such that (1.3) is satisfied. Moreover, $k>0$ and $\Phi(\mu)$, $\Phi^{\prime}(\mu)$ decay algebraically as $\left\lvert\mu\right\rvert\to\infty$:
$$1-\left\lvert\Phi(\mu)\right\rvert=O\left(\left\lvert\mu\right\rvert^{-2s}%
\right)\text{ as }\left\lvert\mu\right\rvert\to\infty$$
and
$$0<-\Phi^{\prime}(\mu)=O\left(\left\lvert\mu\right\rvert^{-1-2s}\right)\text{ %
as }\left\lvert\mu\right\rvert\to\infty.$$
Note that here $\Phi$ is the negative of the profile stated in [34]. To fix the phase, we assume that $\Phi(0)=0$.
One may expect that $\Phi^{\prime\prime}(\mu)$ decays like $\left\lvert\mu\right\rvert^{-2-2s}$, as in the almost-explicit example of Cabré and Sire [13], but this would not be as easy to prove because there is no known example of an explicit positive function decaying at such a rate and satisfying an equation involving the fractional Laplacian.
For our purpose, it is enough to have $\Phi^{\prime\prime}(\mu)=O\left(\left\lvert\mu\right\rvert^{-1-2s}\right)$ as $\left\lvert\mu\right\rvert\to\infty$. It is done by a comparison similar to the one in [34]. We postpone the proof to appendix B.
1.4. Traveling waves with nonlocal diffusion
Nonlocal reaction-diffusion equations often give a more accurate model by taking into account long-distance interactions.
Equations involving a convolution with various kernels have been studied, as in
[20], [17], [4], [5], [61], [29], [30] and [19].
From now on let us focus on the case with fractional Laplacian. For $c>k$, we consider a three-dimensional nonlocal diffusion equation
$$\mathcal{L}[u]:=(-\Delta)^{s}{u}-cu_{z}-f(u)=0,\qquad\text{in}~{}\mathbb{R}^{3}.$$
(1.4)
We say that $v\in{C}(\mathbb{R}^{3})\cap{L}^{\infty}(\mathbb{R}^{3})$ is a sub-solution if $v=\max_{j}{v_{j}}$ for finitely many $v_{j}\in{C}^{2}(\mathbb{R}^{3})\cap{L}^{\infty}(\mathbb{R}^{3})$ satisfying $\mathcal{L}[v_{j}]\leq 0$ for each $j$.
A super-solution is defined similarly.
In order to state our main result, let us define a pyramid in the sense of [55]. Let $m_{*}=\sqrt{c^{2}-k^{2}}/k>0$ and let $N\geq 3$ be an integer. Let
$$\left\{(a_{j},b_{j})\in\mathbb{R}^{2}\mid 1\leq{j}\leq{N}\right\}$$
be pairs of real numbers satisfying the following properties.
•
$a_{j}^{2}+b_{j}^{2}=m_{*}^{2}$, for each $1\leq{j}\leq{N}$;
•
$a_{j}b_{j+1}-a_{j+1}b_{j}>0$, for each $1\leq{j}\leq{N}$, where we have set $a_{N+1}=a_{1}$ and $b_{N+1}=b_{1}$;
•
$(a_{j},b_{j})\neq(a_{j^{\prime}},b_{j^{\prime}})$ if $j\neq{j^{\prime}}$.
For each $1\leq{j}\leq{N}$ we define the function $h_{j}(x,y)=a_{j}x+b_{j}y$ and we define
$$h(x,y)=\displaystyle\max_{1\leq{j}\leq{N}}h_{j}(x,y).$$
Let us call $\left\{(x,y,z)\in\mathbb{R}^{3}\mid{z}=h(x,y)\right\}$ a pyramid. We decompose $\mathbb{R}^{2}=\bigcup_{j=1}^{N}\Omega_{j}$, where
$$\Omega_{j}=\left\{(x,y)\in\mathbb{R}^{2}\mid{h}(x,y)=h_{j}(x,y)\right\}.$$
It is clear that $h_{j}(x,y)\geq 0$ for $(x,y)\in\overline{\Omega_{j}}$ and hence $h(x,y)\geq 0$ for all $(x,y)\in\mathbb{R}^{2}$. By the assumptions on $(a_{j},b_{j})$, we see that $\Omega_{j}$ are oriented counter-clockwise. The set of all edges of the pyramid is $\Gamma=\bigcup_{j=1}^{N}\Gamma^{j}$ where
$$\Gamma^{j}=\left\{(x,y,z)\in\mathbb{R}^{3}\mid{z}=h_{j}(x,y)~{}\text{for}~{}(x%
,y)\in\partial\Omega_{j}\right\}.$$
Denote
$$\Gamma_{R}=\left\{(x,y,z)\in\mathbb{R}^{3}\mid{\rm dist}\,((x,y,z),\Gamma)>R%
\right\}.$$
The projection of $\Gamma$ on $\mathbb{R}^{2}\times\left\{0\right\}$ is identified as $E=\bigcup_{j=1}^{N}\partial\Omega_{j}\subset\mathbb{R}^{2}$.
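To make the assumptions on $(a_{j},b_{j})$ concrete, one admissible (hypothetical) choice places the $N$ gradient vectors at equally spaced angles on the circle of radius $m_{*}$. The sketch below (assuming NumPy; the values of $c$, $k$ and $N$ are arbitrary) verifies the three conditions, the nonnegativity of $h$, and the identity $(k/c)^{2}(1+a_{j}^{2}+b_{j}^{2})=1$ that underlies the scaling computation in Proposition 3.1.

```python
import numpy as np

c, k = 2.0, 1.0                     # any speeds with c > k > 0 (arbitrary sample)
m_star = np.sqrt(c ** 2 - k ** 2) / k
N = 5                               # a regular pentagonal pyramid
theta = 2.0 * np.pi * np.arange(N) / N
a = m_star * np.cos(theta)
b = m_star * np.sin(theta)

# The three conditions on the pairs (a_j, b_j):
on_circle = bool(np.allclose(a ** 2 + b ** 2, m_star ** 2))
oriented = bool(np.all(a * np.roll(b, -1) - np.roll(a, -1) * b > 0))  # a_j b_{j+1} - a_{j+1} b_j > 0
distinct = len({(round(p, 12), round(q, 12)) for p, q in zip(a, b)}) == N

# h(x, y) = max_j (a_j x + b_j y) is nonnegative and vanishes at the apex (0, 0).
def h(x, y):
    return float(np.max(a * x + b * y))

# Since a_j^2 + b_j^2 = m_*^2 = (c^2 - k^2)/k^2, we get (k/c)^2 (1 + a_j^2 + b_j^2) = 1.
scale = (k / c) ** 2 * (1.0 + a ** 2 + b ** 2)
print(on_circle, oriented, distinct, h(1.0, -2.0), scale)
```

For equally spaced angles the orientation condition reduces to $m_{*}^{2}\sin(2\pi/N)>0$, which holds for every $N\geq 3$.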
In Section 3 we show that
$$v(x,y,z)=\Phi\left(\dfrac{k}{c}(z-h(x,y))\right)$$
(1.5)
is a sub-solution of (1.4). In Sections 4 and 5, we obtain a super-solution in the form
$$V(x,y,z)=\Phi\left(\dfrac{z-\alpha^{-1}\varphi(\alpha{x},\alpha{y})}{\sqrt{1+%
\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}\right)+{%
\varepsilon}S(\alpha{x},\alpha{y}).$$
(1.6)
For the precise definition, see (4.3), (4.4), (5.4) and (5.5).
The main result is as follows.
Theorem 1.2.
Given any bistable nonlinearity $f$ and any $c>k$, where $k>0$ is given in Theorem 1.1, there exists a solution $u$ of (1.4) such that $v<u<V$ in $\mathbb{R}^{3}$ where $v$ and $V$ are defined by (1.5) and (1.6) respectively. In particular,
$$\lim_{R\to\infty}\sup_{(x,y,z)\in\Gamma_{R}}\,\left\lvert u(x,y,z)-v(x,y,z)%
\right\rvert=0.$$
2. Motivation
It is worthwhile to sketch the idea in [55] for the standard Laplacian case $s=1$.
Suppose there exists a one-dimensional solution $\Phi(\mu)$ of
$$-\Phi^{\prime\prime}(\mu)-k\Phi^{\prime}(\mu)-f(\Phi(\mu))=0.$$
Let $\rho:\mathbb{R}^{2}\to(0,1]$ be a smooth radial mollifier satisfying $\int\rho=1$ and decaying exponentially at infinity, and $h(x,y)=(\sqrt{2}k)^{-1}\sqrt{c^{2}-k^{2}}(\left\lvert x\right\rvert+\left%
\lvert y\right\rvert)$ be a square pyramid.
Let $\varphi=\rho\ast{h}$ be its mollification and $S(x,y)=\dfrac{c}{\sqrt{1+\left\lvert\nabla\varphi(x,y)\right\rvert^{2}}}-k$ be an auxiliary function.
It is easy to check that $v(x,y,z)=\Phi\left((k/c)(z-h(x,y))\right)$, as a maximum of solutions, is a sub-solution of
$$-\Delta{u}-cu_{z}-f(u)=0.$$
For $\alpha,\varepsilon\in(0,1)$, define
$$V(x,y,z)=\Phi\left(\hat{\mu}\right)+\varepsilon{S}(\alpha{x},\alpha{y}),$$
where
$$\hat{\mu}=\dfrac{z-\alpha^{-1}\varphi(\alpha{x},\alpha{y})}{\sqrt{1+\left%
\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}.$$
Introducing the rescaled function $\Phi_{\alpha}(\mu)=\Phi(\alpha^{-1}\mu)$, we can also write
$$V(x,y,z)=\Phi_{\alpha}\left(\bar{\mu}(\alpha{x},\alpha{y},\alpha{z})\right)+%
\varepsilon{S}(\alpha{x},\alpha{y}),$$
where
$$\bar{\mu}(x,y,z)=\dfrac{z-\varphi(x,y)}{\sqrt{1+\left\lvert\nabla\varphi(x,y)%
\right\rvert^{2}}}.$$
$V$ will be a super-solution if $\alpha$ and $\varepsilon$ are small. Indeed,
$$\begin{split}\displaystyle-\Delta V&\displaystyle=-\Phi^{\prime\prime}(\hat{%
\mu})+\alpha{R}\\
&\displaystyle=k\Phi^{\prime}(\hat{\mu})+f(\Phi(\hat{\mu}))+\alpha{R}\\
\displaystyle-cV_{z}&\displaystyle=-\dfrac{c}{\sqrt{1+\left\lvert\nabla\varphi%
(\alpha{x},\alpha{y})\right\rvert^{2}}}\Phi^{\prime}(\hat{\mu})\end{split}$$
where $R=R(\alpha{x},\alpha{y};\hat{\mu},\alpha,\varepsilon)$ is bounded and each of its terms contains a second or third order derivative of $\varphi$. Then we have
$$\begin{split}&\displaystyle\quad\;\mathcal{L}[V]\\
&\displaystyle=-\Delta{V}-cV_{z}-f(V)\\
&\displaystyle=S(\alpha{x},\alpha{y})\left(-\Phi^{\prime}(\hat{\mu})-%
\varepsilon\int_{0}^{1}\!f^{\prime}(\Phi(\hat{\mu})+t\varepsilon{S})\,dt+%
\alpha\dfrac{R(\alpha{x},\alpha{y})}{S(\alpha{x},\alpha{y})}\right)\end{split}$$
As $R(\alpha{x},\alpha{y})$ decays at an (exponential) rate not lower than $S(\alpha{x},\alpha{y})$ as $\left\lvert x\right\rvert,\left\lvert y\right\rvert\to\infty$, the last term is bounded. By choosing $\alpha\ll\varepsilon\ll 1$ small, we are left with the main term $-\Phi^{\prime}(\hat{\mu})$ or $-\int_{0}^{1}f^{\prime}\,dt$, depending on the magnitude of $\hat{\mu}$, which is positive.
For $1/2<s<1$, we cannot compute the fractional Laplacian pointwise using the chain rule, but we can still arrange $\mathcal{L}[V]$, in terms of the difference $(-\Delta)^{s}(\Phi(\hat{\mu}))-((-\partial^{2})^{s}\Phi)(\hat{\mu})$, as
$$\begin{split}\displaystyle(-\Delta)^{s}{V}=k\Phi^{\prime}(\hat{\mu})+f(\Phi(%
\hat{\mu}))+\tilde{R}\end{split}$$
where the “remainder”
$$\begin{split}\displaystyle\tilde{R}&\displaystyle=\tilde{R}(\alpha{x},\alpha{y%
};\hat{\mu},\alpha,\varepsilon)\\
&\displaystyle=(-\Delta)^{s}(\Phi(\hat{\mu}))-((-\partial^{2})^{s}\Phi)(\hat{%
\mu})+\varepsilon(-\Delta)^{s}(S(\alpha{x},\alpha{y}))\end{split}$$
is now a non-local term. We still have
$${\mathcal{L}}[V]=S(\alpha{x},\alpha{y})\left(-\Phi^{\prime}(\hat{\mu})-%
\varepsilon\int_{0}^{1}\!f^{\prime}(\Phi(\hat{\mu})+t\varepsilon{S})\,dt+%
\dfrac{\tilde{R}(\alpha{x},\alpha{y})}{S(\alpha{x},\alpha{y})}\right).$$
In terms of the rescaled one-dimensional solution $\Phi_{\alpha}$, by the homogeneity of the fractional Laplacian (see Lemma A.2), we have $\tilde{R}=\alpha^{2s}(R_{1}+R_{2})$ where
$$\begin{split}\displaystyle R_{1}&\displaystyle=R_{1}(x,y,z;\alpha)\\
&\displaystyle=(-\Delta)^{s}(\Phi_{\alpha}(\bar{\mu}(\alpha{x},\alpha{y},%
\alpha{z})))-((-\partial^{2})^{s}\Phi_{\alpha})(\bar{\mu}(\alpha{x},\alpha{y},%
\alpha{z}))\\
\displaystyle R_{2}&\displaystyle=R_{2}(x,y;\alpha,\varepsilon)\\
&\displaystyle=\varepsilon((-\Delta)^{s}{S})(\alpha{x},\alpha{y}).\end{split}$$
It remains to show that $R_{1},R_{2}=o\left(\alpha^{-2s}\right)$ as $\alpha\to 0$, uniformly in $(x,y,z)\in\mathbb{R}^{3}$.
This will be done in sections 4 and 5.
Remark 2.1 (On the sub-criticality of $s$).
Since, by its non-local nature, $(-\Delta)^{s}{S}$ cannot decay any faster than $\left\lvert(x,y)\right\rvert^{-2s}$ as ${\rm dist}\,((x,y),E)\to\infty$, this argument will work out only if $S$ has an algebraic decay.
Hence, instead of an exponentially decaying mollifier, we must choose $\rho(x,y)=\Omega\left(\left\lvert(x,y)\right\rvert^{-1-2s}\right)$.
On the other hand, in order that $\varphi$ is well defined, it is necessary to take $\rho(x,y)=O\left(\left\lvert(x,y)\right\rvert^{-2}\right)$.
This forces $-1-2s<-2$, or $s>1/2$.
3. The sub-solution
We show that $v$ given by (1.5) is a sub-solution.
Proposition 3.1.
$v$ is a sub-solution of equation (1.4).
Proof.
Let us define, for each $j=1,\dots,N$,
$$v_{j}(x,y,z)=\Phi\left(\dfrac{k}{c}(z-h_{j}(x,y))\right)=\Phi\left(\dfrac{k}{c%
}(z-a_{j}{x}-b_{j}{y})\right).$$
(3.1)
Since $\Phi^{\prime}<0$, we see that $v=\displaystyle\max_{1\leq{j}\leq{N}}v_{j}$.
By Lemma A.4,
$$\begin{split}\displaystyle(-\Delta)^{s}{v_{j}}(x,y,z)&\displaystyle=\left(%
\left(\dfrac{k}{c}\right)^{2}(1+a_{j}^{2}+b_{j}^{2})\right)^{s}(-\partial^{2})%
^{s}\Phi\left(\dfrac{k}{c}(z-a_{j}{x}-b_{j}{y})\right)\\
&\displaystyle=(-\partial^{2})^{s}\Phi\left(\dfrac{k}{c}(z-a_{j}x-b_{j}y)%
\right).\end{split}$$
By (1.3), we have
$$\mathcal{L}[v_{j}]={\mathcal{L}}\left(\Phi\left(\dfrac{k}{c}(z-a_{j}x-b_{j}y)%
\right)\right)=0.$$
By definition, $v$ is a sub-solution of (1.4).
∎
4. The mollified pyramid and an auxiliary function
Most of the material in this section is technical and is a variation of that in [55].
We define a radial mollifier $\rho\in{C}^{\infty}(\mathbb{R}^{2})$ by $\rho(x,y)=\tilde{\rho}\left(\sqrt{x^{2}+y^{2}}\right)$, where $\tilde{\rho}\in C^{\infty}([0,\infty))$ satisfies the following properties:
•
$0<\tilde{\rho}(r)\leq 1$ and $\tilde{\rho}^{\prime}(r)\leq 0$ for $r>0$,
•
$\displaystyle 2\pi\int_{0}^{\infty}\!r\tilde{\rho}(r)\,dr=1$,
•
$\tilde{\rho}(r)=1$ for $0\leq{r}\leq\tilde{r_{0}}\ll 1$,
•
$\tilde{\rho}(r)=\tilde{\rho}_{0}r^{-{2s}-2}$ for $r\geq r_{0}\gg 2$, where $\tilde{\rho}_{0}>0$ is chosen such that
$$\dfrac{\tilde{\rho}_{0}}{{2s}}\mathrm{B}\left(\dfrac{1}{2},\dfrac{1}{2}+s%
\right)=1$$
(4.1)
and $r_{0}$ satisfies
$${2s}(m_{*}^{2}+2)(2r_{0})^{-{2s}}<1.$$
(4.2)
Define
$$\varphi(x,y)=\rho\ast{h}(x,y).$$
(4.3)
We call $z=\varphi(x,y)$ a mollified pyramid. Define also an auxiliary function
$$\begin{split}\displaystyle S(x,y)&\displaystyle=\dfrac{c}{\sqrt{1+\left\lvert%
\nabla\varphi(x,y)\right\rvert^{2}}}-k\\
&\displaystyle=\dfrac{m_{*}^{2}-\left\lvert\nabla\varphi\right\rvert^{2}}{%
\sqrt{1+\left\lvert\nabla\varphi(x,y)\right\rvert^{2}}\left(c+k\sqrt{1+\left%
\lvert\nabla\varphi(x,y)\right\rvert^{2}}\right)}.\end{split}$$
(4.4)
By direct computation, we have
Lemma 4.1.
For any integers $i_{1}\geq 0$ and $i_{2}\geq 0$, with $i_{1}+i_{2}\leq 3$,
$$\sup_{(x,y)\in\mathbb{R}^{2}}\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2%
}}\varphi(x,y)\right\rvert<\infty.$$
For all $(x,y)\in\mathbb{R}^{2}$, we have
$$h(x,y)<\varphi(x,y)\leq{h}(x,y)+2\pi{m_{*}}\int_{0}^{\infty}\!r^{2}\tilde{\rho}(r)\,dr$$
and $\left\lvert\nabla\varphi(x,y)\right\rvert<m_{*}$. Hence, $0<S(x,y)\leq{c-k}$.
Proof.
The proof can be found in [55], with a slight variation that there is a constant $C_{\tilde{\rho}}$ depending only on $\tilde{\rho}$ such that
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\rho(x,y)\right\rvert\leq{C%
_{\tilde{\rho}}}(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}\rho(x,y),\quad\text{for%
}~{}x^{2}+y^{2}\geq{r_{0}^{2}}$$
and
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\rho(x,y)\right\rvert\leq{C%
_{\tilde{\rho}}}\quad\text{for~{}all}~{}(x,y)\in\mathbb{R}^{2}.$$
∎
In the rest of this section, we study the behavior of $\varphi(x,y)-h(x,y)$ and $S(x,y)$ as well as their derivatives. It turns out that both of them depend on the distance from the edge of the pyramid.
The behavior of these functions of interest can be expressed using a “mollified negative part”. Write $x_{+}=\max\left\{x,0\right\}$ and $x_{-}=\max\left\{-x,0\right\}$. Define $P:[0,\infty)\to(0,\infty)$ by
$$\begin{split}\displaystyle P(x)&\displaystyle=\int_{\mathbb{R}^{2}}\!\rho(x^{\prime},y^{\prime})(x-x^{\prime})_{-}\,dx^{\prime}dy^{\prime}\\
&\displaystyle=\int_{\mathbb{R}^{2}}\!\rho(x^{\prime},y^{\prime})(x^{\prime}-x)_{+}\,dx^{\prime}dy^{\prime}\\
&\displaystyle=-\int_{x}^{\infty}\!\left(\int_{-\infty}^{\infty}\!\rho(x^{\prime},y^{\prime})\,dy^{\prime}\right)(x-x^{\prime})\,dx^{\prime}.\end{split}$$
Let us state the properties of $P$.
Lemma 4.2.
$P$ is in the class ${C}^{3}([0,\infty))$. Let $0\leq{i}\leq 3$. There exists a constant $C_{P}=C_{P}(s,\tilde{\rho})$ such that $\left\|{P}\right\|_{C^{3}([0,\infty))}\leq{C_{P}}$ and
$$C_{P}^{-1}(1+x)^{-{2s}+1-i}\leq\left\lvert P^{(i)}(x)\right\rvert\leq{C_{P}}(1%
+x)^{-{2s}+1-i}.$$
Moreover, for $x>0$ we have $(-1)^{i}P^{(i)}(x)>0$ and if $x>{r_{0}}$, then
$$\begin{split}\displaystyle P(x)&\displaystyle=\dfrac{1}{({2s}-1)x^{{2s}-1}}.%
\end{split}$$
In particular, $P^{(i)}(x)\to 0$ as $x\to\infty$.
Proof.
Clearly, $P\in{C}^{3}([0,\infty))$ and if $x>0$, then
$$\begin{split}\displaystyle P(x)&\displaystyle=-\int_{x}^{\infty}\!\left(\int_{%
-\infty}^{\infty}\!\rho(x^{\prime},y)\,dy\right)(x-x^{\prime})\,dx^{\prime}>0%
\\
\displaystyle P^{\prime}(x)&\displaystyle=-\int_{x}^{\infty}\!\left(\int_{-%
\infty}^{\infty}\!\rho(x^{\prime},y)\,dy\right)\,dx^{\prime}<0\\
\displaystyle P^{\prime\prime}(x)&\displaystyle=\int_{-\infty}^{\infty}\!\rho(%
x,y)\,dy>0\\
\displaystyle P^{(3)}(x)&\displaystyle=\int_{-\infty}^{\infty}\!\dfrac{x}{%
\sqrt{x^{2}+y^{2}}}\tilde{\rho}^{\prime}\left(\sqrt{x^{2}+y^{2}}\right)\,dy<0.%
\end{split}$$
For $x\geq{r_{0}}$, we have by (4.1),
$$\begin{split}\displaystyle P(x)&\displaystyle=-\int_{x}^{\infty}\!\left(\int_{%
-\infty}^{\infty}\!\dfrac{\tilde{\rho}_{0}}{((x^{\prime})^{2}+y^{2})^{1+s}}\,%
dy\right)(x-x^{\prime})\,dx^{\prime}\\
&\displaystyle=\tilde{\rho}_{0}\mathrm{B}\left(\dfrac{1}{2},\dfrac{1+{2s}}{2}%
\right)\int_{x}^{\infty}\!\dfrac{x^{\prime}-x}{(x^{\prime})^{1+{2s}}}\,dx^{%
\prime}\\
&\displaystyle={2s}\left(\dfrac{1}{({2s}-1)x^{{2s}-1}}-\dfrac{x}{{2s}{x}^{{2s}%
}}\right)\\
&\displaystyle=\dfrac{1}{({2s}-1)x^{{2s}-1}}.\end{split}$$
The decay of the derivatives follows. Since they all have a sign, we have for any $x>0$,
$$\begin{split}\displaystyle 0<P(x)&\displaystyle<P(0)=\int_{0}^{\infty}\int_{-%
\infty}^{\infty}\!x\rho(x,y)\,dydx=2\int_{0}^{\infty}\!r^{2}\tilde{\rho}(r)\,%
dr\\
\displaystyle 0<-P^{\prime}(x)&\displaystyle<-P^{\prime}(0)=\int_{0}^{\infty}%
\int_{-\infty}^{\infty}\!\rho(x,y)\,dydx=\dfrac{1}{2}\\
\displaystyle 0<P^{\prime\prime}(x)&\displaystyle<P^{\prime\prime}(0)=\int_{-%
\infty}^{\infty}\!\rho(0,y)\,dy=2\int_{0}^{\infty}\!\tilde{\rho}(r)\,dr\end{split}$$
To get a bound for $P^{(3)}(x)$, we consider two cases. If $x>r_{0}$, then
$$\left\lvert P^{(3)}(x)\right\rvert<-P^{(3)}(r_{0})=\dfrac{{2s}({2s}+1)}{r_{0}^%
{{2s}+2}}.$$
If $x\leq{r_{0}}$, then we use the change of variable $r=\sqrt{x^{2}+y^{2}}$, to obtain
$$\begin{split}\displaystyle\left\lvert P^{(3)}(x)\right\rvert&\displaystyle=2\int_{0}^{\infty}\!\dfrac{x}{\sqrt{x^{2}+y^{2}}}\left\lvert\tilde{\rho}^{\prime}\left(\sqrt{x^{2}+y^{2}}\right)\right\rvert\,dy\\
&\displaystyle=2\int_{x}^{\infty}\!\dfrac{x}{\sqrt{r^{2}-x^{2}}}\left\lvert\tilde{\rho}^{\prime}(r)\right\rvert\,dr\\
&\displaystyle\leq\sqrt{2x}\int_{x}^{\infty}\!\dfrac{\left\lvert\tilde{\rho}^{\prime}(r)\right\rvert}{\sqrt{r-x}}\,dr\\
&\displaystyle\leq\sqrt{2r_{0}}\left(\int_{0}^{r_{0}}\!\dfrac{\left\lvert\tilde{\rho}^{\prime}(r+x)\right\rvert}{\sqrt{r}}\,dr+\int_{r_{0}}^{\infty}\!\dfrac{\tilde{\rho}_{0}({2s}+2)}{\sqrt{r}(r+x)^{3+{2s}}}\,dr\right)\\
&\displaystyle\leq\sqrt{2}\left(2r_{0}\left\|{\tilde{\rho}^{\prime}}\right\|_{L^{\infty}([0,2r_{0}])}+\tilde{\rho}_{0}r_{0}^{-2-{2s}}\right).\end{split}$$
This finishes the proof.
∎
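The explicit tail formula of Lemma 4.2 can be confirmed numerically step by step (an independent illustration, assuming NumPy; the sample values $s=3/4$ and $x_{0}=5$ are arbitrary): the inner Beta integral, the outer integral $\int_{x_{0}}^{\infty}(x^{\prime}-x_{0})(x^{\prime})^{-1-2s}\,dx^{\prime}=x_{0}^{1-2s}/(2s(2s-1))$, and the normalization (4.1) combine into $P(x_{0})=\frac{1}{(2s-1)x_{0}^{2s-1}}$.

```python
import math
import numpy as np

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

s, x0 = 0.75, 5.0   # arbitrary sample order and tail point (x0 > r_0 is assumed)

# Inner integral over y: integral of (1 + t^2)^{-(1+s)} over R equals B(1/2, 1/2 + s).
t = np.geomspace(1e-8, 1e6, 1_000_001)
beta_numeric = 2.0 * trap(1.0 / (1.0 + t ** 2) ** (1 + s), t)
beta_exact = math.gamma(0.5) * math.gamma(0.5 + s) / math.gamma(1 + s)

# Outer integral: J = integral over (x0, inf) of (x' - x0)(x')^{-1-2s} dx'.
xp = np.geomspace(x0, 1e8, 2_000_001)
J = trap((xp - x0) / xp ** (1 + 2 * s), xp)

# With rho_0 * B(1/2, 1/2 + s) = 2s from (4.1): P(x0) = 2s * J = x0^{1-2s}/(2s - 1).
P_tail = 2 * s * J
P_exact = x0 ** (1 - 2 * s) / (2 * s - 1)
print(beta_numeric, beta_exact, P_tail, P_exact)
```

The dominant numerical error is the truncation of the outer integral at $10^{8}$, which stays well below the tolerances used.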
To estimate $\varphi(x,y)-h(x,y)$, it suffices to fix $(x,y)\in\overline{\Omega_{j}}$ and study $\tilde{\varphi}_{j}(x,y)=\varphi(x,y)-h_{j}(x,y)=\rho\ast(h-h_{j})(x,y)$. By this definition, we expect $\tilde{\varphi}_{j}(x,y)$ to be controlled by the distance from $(x,y)$ to the boundary $\partial\Omega_{j}$, because this distance determines the size of a neighborhood of $(x,y)$ on which $h=h_{j}$.
To fix the notation, we observe that $\overline{\Omega_{j}}\cap\overline{\Omega_{j\pm 1}}$ is a half line on which $h_{j}(x,y)-h_{j\pm 1}(x,y)=(a_{j}-a_{j\pm 1})x+(b_{j}-b_{j\pm 1})y=0$. Let us write, for $(x,y)\in\overline{\Omega_{j}}$,
$$\begin{split}\displaystyle m_{j}^{\pm}&\displaystyle=\sqrt{(a_{j}-a_{j\pm 1})^%
{2}+(b_{j}-b_{j\pm 1})^{2}}\\
\displaystyle\lambda_{j}^{\pm}=\lambda_{j}^{\pm}(x,y)&\displaystyle={\rm dist}%
\,((x,y),\Omega_{j\pm 1})=\dfrac{(a_{j}-a_{j\pm 1})x+(b_{j}-b_{j\pm 1})y}{m_{j%
}^{\pm}}\\
\displaystyle\lambda_{j}=\lambda_{j}(x,y)&\displaystyle={\rm dist}\,((x,y),%
\partial\Omega_{j})=\min\left\{\lambda_{j}^{+},\lambda_{j}^{-}\right\},\\
\end{split}$$
where $\Omega_{j\pm 1}$ is understood to be $\Omega_{j\pm 1\pmod{N}}$. We also write
$$\lambda=\lambda(x,y)={\rm dist}\,((x,y),E)=\min_{1\leq{j}\leq{N}}\lambda_{j}.$$
By the above motivation, we express $\tilde{\varphi}_{j}$ as
$$\begin{split}\displaystyle\tilde{\varphi}_{j}(x,y)&\displaystyle=\rho\ast(h_{j%
}-h_{j+1})_{-}(x,y)+\rho\ast(h_{j}-h_{j-1})_{-}(x,y)+\rho\ast{g_{j}}(x,y)\\
&\displaystyle=m_{j}^{+}P(\lambda_{j}^{+})+m_{j}^{-}P(\lambda_{j}^{-})+\rho%
\ast{g_{j}}(x,y)\end{split}$$
where $g_{j}=h-h_{j}-(h_{j}-h_{j+1})_{-}-(h_{j}-h_{j-1})_{-}$ vanishes on $\overline{\Omega_{j-1}}\cup\overline{\Omega_{j}}\cup\overline{\Omega_{j+1}}$.
We prove that $\rho\ast{g_{j}}$ is an error term, up to three derivatives.
Lemma 4.3.
There exist constants $C_{g}=C_{g}(\tilde{\rho})$ and $\gamma>1$ such that for any $1\leq{j}\leq{N}$, any $(x,y)\in\overline{\Omega_{j}}$, and any integers $i_{1}\geq 0$, $i_{2}\geq 0$ with $i_{1}+i_{2}\leq 3$, we have
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}(\rho\ast{g_{j}})(x,y)%
\right\rvert\leq{C_{g}}\left(1+\sqrt{x^{2}+y^{2}}\right)^{-i_{1}-i_{2}}P(%
\gamma\lambda_{j})$$
and, in particular,
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}(\rho\ast{g_{j}})(x,y)%
\right\rvert\leq{C_{g}}\left\lvert P^{(i_{1}+i_{2})}(\gamma\lambda_{j})\right\rvert.$$
Proof.
We first claim that $\left\lvert g_{j}\right\rvert\leq 3\left\lvert h-h_{j}\right\rvert$ on the whole $\mathbb{R}^{2}$. Indeed, by the definition of $g_{j}$, we have $g_{j}\leq{h-h_{j}}$ and $g_{j}\geq 0$ on $(\left\{h_{j}\leq{h_{j+1}}\right\}\cap\left\{h_{j}\leq{h_{j-1}}\right\})^{c}$. On $\left\{h_{j}\leq{h_{j+1}}\right\}\cap\left\{h_{j}\leq{h_{j-1}}\right\}$, $g_{j}=h+h_{j}-h_{j+1}-h_{j-1}$ and so $g_{j}+(h-h_{j})=2h-h_{j+1}-h_{j-1}$. Thus, $0\leq{g_{j}}+(h-h_{j})\leq 2(h-h_{j})$ on $\mathbb{R}^{2}$. Therefore,
$$\left\lvert g_{j}\right\rvert\leq\left\lvert g_{j}+(h-h_{j})\right\rvert+\left\lvert h-h_{j}\right\rvert\leq 3(h-h_{j}),$$
as we claimed.
We have
$$\begin{split}\displaystyle\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}(%
\rho\ast{g_{j}})\right\rvert&\displaystyle=\left\lvert(\partial_{x}^{i_{1}}%
\partial_{y}^{i_{2}}\rho)\ast{g_{j}}\right\rvert\\
&\displaystyle=\left\lvert\sum_{k\neq{j},j\pm 1}\int_{\Omega_{k}}\!(\partial_{%
x}^{i_{1}}\partial_{y}^{i_{2}}\rho)(x-x^{\prime},y-y^{\prime})g_{j}(x^{\prime}%
,y^{\prime})\,dx^{\prime}dy^{\prime}\right\rvert,\\
\end{split}$$
where $k\neq{j\pm 1}$ is understood as $k\not\equiv{j\pm 1}\pmod{N}$.
Suppose $x^{2}+y^{2}>r_{0}^{2}$. From the proof of Lemma 4.1, we can find a constant ${C_{\tilde{\rho}}}$ such that
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\rho(x-x^{\prime},y-y^{%
\prime})\right\rvert\leq{C_{\tilde{\rho}}}((x-x^{\prime})^{2}+(y-y^{\prime})^{%
2})^{-\frac{i_{1}+i_{2}}{2}}\rho(x-x^{\prime},y-y^{\prime}).$$
For $(x^{\prime},y^{\prime})\notin\Omega_{j}$, $(x-x^{\prime})^{2}+(y-y^{\prime})^{2}\geq{x}^{2}+y^{2}$. Hence,
$$\begin{split}&\displaystyle\quad\,\,\left\lvert\partial_{x}^{i_{1}}\partial_{y%
}^{i_{2}}(\rho\ast{g_{j}})\right\rvert\\
&\displaystyle\leq 3{C_{\tilde{\rho}}}(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}%
\sum_{k\neq{j},j\pm 1}\int_{\Omega_{k}}\!\rho(x-x^{\prime},y-y^{\prime})(h_{k}%
-h_{j})(x^{\prime},y^{\prime})\,dx^{\prime}dy^{\prime}\\
&\displaystyle=3{C_{\tilde{\rho}}}(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}\sum_{%
k\neq{j},j\pm 1}\int_{(x,y)-\Omega_{k}}\!\rho(x^{\prime},y^{\prime})(h_{k}-h_{%
j})(x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime}\\
&\displaystyle\leq 3{C_{\tilde{\rho}}}(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}%
\sum_{k\neq{j},j\pm 1}\int_{(x,y)-\Omega_{k}}\!\rho(x^{\prime},y^{\prime})\Big%
{(}(h_{j}-h_{k})(x^{\prime},y^{\prime})\\
&\displaystyle\hskip 241.848425pt-(h_{j}-h_{k})(x,y)\Big{)}\,dx^{\prime}dy^{%
\prime}\\
\end{split}$$
By a simple use of the intermediate value theorem, we have
$$\left\{(x,y)\in\mathbb{R}^{2}\mid(h_{j}-h_{k})(x,y)=0\right\}\subset\left(%
\mathbb{R}^{2}\backslash\overline{\Omega_{j}}\right)\cup\left\{(0,0)\right\},$$
that is, roughly, the line $\left\{h_{j}=h_{k}\right\}$ is contained in the exterior of $\Omega_{j}$. Therefore, we can find a constant $\gamma>1$, depending only on the configuration of the $\Omega_{j}$’s, such that ${\rm dist}\,((x,y),\left\{h_{j}=h_{k}\right\})>\gamma\lambda_{j}$. But for $(x,y)\in\Omega_{j}$,
$${\rm dist}\,((x,y),\left\{h_{j}=h_{k}\right\})=\dfrac{(h_{j}-h_{k})(x,y)}{m_{%
jk}}>0,$$
where $m_{jk}=\sqrt{(a_{j}-a_{k})^{2}+(b_{j}-b_{k})^{2}}$. Hence,
$$\dfrac{(h_{j}-h_{k})(x,y)}{m_{jk}}>\gamma\lambda_{j}.$$
Note also that by a rotation, we can write
$$P(x)=\int_{\mathbb{R}^{2}}\!\rho(x^{\prime},y^{\prime})\left(\dfrac{(h_{k}-h_{%
j})(x^{\prime},y^{\prime})}{m_{jk}}-x\right)_{+}\,dx^{\prime}dy^{\prime}.$$
We now have
$$\begin{split}&\displaystyle\quad\,\,\left\lvert\partial_{x}^{i_{1}}\partial_{y%
}^{i_{2}}(\rho\ast{g_{j}})\right\rvert\\
&\displaystyle\leq 3{C_{\tilde{\rho}}}\left(\max_{1\leq{j,k}\leq{N}}m_{jk}%
\right)(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}\\
&\displaystyle\qquad\cdot\sum_{k\neq{j},j\pm 1}\int_{(x,y)-\Omega_{k}}\!\rho(x%
^{\prime},y^{\prime})\Bigg{(}\dfrac{(h_{j}-h_{k})(x^{\prime},y^{\prime})}{m_{%
jk}}-\dfrac{(h_{j}-h_{k})(x,y)}{m_{jk}}\Bigg{)}\,dx^{\prime}dy^{\prime}\\
&\displaystyle\leq 3{C_{\tilde{\rho}}}\left(\max_{1\leq{j,k}\leq{N}}m_{jk}%
\right)(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}\\
&\displaystyle\qquad\cdot\sum_{k\neq{j},j\pm 1}\int_{\mathbb{R}^{2}}\!\rho(x^{%
\prime},y^{\prime})\Bigg{(}\dfrac{(h_{j}-h_{k})(x^{\prime},y^{\prime})}{m_{jk}%
}-\dfrac{(h_{j}-h_{k})(x,y)}{m_{jk}}\Bigg{)}_{+}\,dx^{\prime}dy^{\prime}\\
&\displaystyle\leq 3{C_{\tilde{\rho}}}\left(\max_{1\leq{j,k}\leq{N}}m_{jk}%
\right)(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}\sum_{k\neq{j},j\pm 1}P\left(%
\dfrac{(h_{j}-h_{k})(x,y)}{m_{jk}}\right)\\
&\displaystyle\leq 3{C_{\tilde{\rho}}}\left(\max_{1\leq{j,k}\leq{N}}m_{jk}%
\right)(x^{2}+y^{2})^{-\frac{i_{1}+i_{2}}{2}}P(\gamma\lambda_{j})\end{split}$$
If $x^{2}+y^{2}\leq{r_{0}^{2}}$, then we use the fact
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\rho(x-x^{\prime},y-y^{%
\prime})\right\rvert\leq{C_{\tilde{\rho}}}\rho(x-x^{\prime},y-y^{\prime}),$$
which is also mentioned in Lemma 4.1. By the exact same argument, the first estimate is established.
To obtain the second estimate, note that since the angle of $\Omega_{j}$ (that is, the angle between the lines $h_{j}=h_{j+1}$ and $h_{j}=h_{j-1}$) is less than $\pi$, we have $\sqrt{x^{2}+y^{2}}\geq\lambda_{j}$. Now the right-hand sides of the two estimates are bounded for $\lambda_{j}\leq{r_{0}}$ and are $O\left(\lambda_{j}^{-{2s}+1-i_{1}-i_{2}}\right)$ for $\lambda_{j}\geq{r_{0}}$. Hence, by enlarging $C_{g}$ if necessary, the proof is complete.
∎
Lemma 4.4.
There exists a constant $C_{\varphi}=C_{\varphi}(\tilde{\rho},\gamma)$ such that for any non-negative integers $i_{1}$, $i_{2}$ with $0\leq{i_{1}+i_{2}}\leq 3$ and any $(x,y)\in\mathbb{R}^{2}\backslash{E}$, we have
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}(\varphi(x,y)-h(x,y))\right%
\rvert\leq{C_{\varphi}}\left\lvert P^{(i_{1}+i_{2})}(\lambda)\right\rvert,$$
and for any $(x,y)\in\mathbb{R}^{2}$,
$$\varphi(x,y)-h(x,y)\geq{C_{\varphi}^{-1}}P(\lambda).$$
In particular, if $2\leq{i_{1}+i_{2}}\leq 3$, then for any $(x,y)\in\mathbb{R}^{2}$, there holds
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\varphi(x,y)\right\rvert%
\leq{C_{\varphi}}\left\lvert P^{(i_{1}+i_{2})}(\lambda)\right\rvert.$$
Proof.
Without loss of generality, let $(x,y)\in\Omega_{j}$. We have
$$\varphi(x,y)-h_{j}(x,y)=m_{j}^{+}P(\lambda_{j}^{+})+m_{j}^{-}P(\lambda_{j}^{-}%
)+\rho\ast{g_{j}}(x,y)$$
and so
$$\begin{split}\displaystyle\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\tilde{%
\varphi}_{j}(x,y)&\displaystyle=m_{j}^{+}(\partial_{x}\lambda_{j}^{+})^{i_{1}}%
(\partial_{y}\lambda_{j}^{+})^{i_{2}}P^{(i_{1}+i_{2})}(\lambda_{j}^{+})\\
&\displaystyle\qquad+m_{j}^{-}(\partial_{x}\lambda_{j}^{-})^{i_{1}}(\partial_{%
y}\lambda_{j}^{-})^{i_{2}}P^{(i_{1}+i_{2})}(\lambda_{j}^{-})+\partial_{x}^{i_{%
1}}\partial_{y}^{i_{2}}(\rho\ast{g_{j}})(x,y)\\
&\displaystyle=(m_{j}^{+})^{1-i_{1}-i_{2}}(a_{j}-a_{j+1})^{i_{1}}(b_{j}-b_{j+1%
})^{i_{2}}P^{(i_{1}+i_{2})}(\lambda_{j}^{+})\\
&\displaystyle\qquad+(m_{j}^{-})^{1-i_{1}-i_{2}}(a_{j}-a_{j-1})^{i_{1}}(b_{j}-%
b_{j-1})^{i_{2}}P^{(i_{1}+i_{2})}(\lambda_{j}^{-})\\
&\displaystyle\qquad+\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}(\rho\ast{g_{j}})%
(x,y).\end{split}$$
By Lemma 4.3, we have
$$\begin{split}\displaystyle\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}%
\varphi(x,y)\right\rvert&\displaystyle\leq\tilde{C}_{1}\left\lvert P^{(i_{1}+i%
_{2})}(\lambda_{j})\right\rvert+C_{g}\left\lvert P^{(i_{1}+i_{2})}(\gamma%
\lambda_{j})\right\rvert\\
&\displaystyle\leq(\tilde{C}_{1}+C_{g})\left\lvert P^{(i_{1}+i_{2})}(\lambda_{%
j})\right\rvert.\end{split}$$
To get the lower bound, we estimate $\tilde{\varphi}_{j}$ in three regions as follows. We choose $r_{1}>r_{0}$ such that for all $\lambda_{j}\geq{r_{1}}$,
$${C_{g}}P(\gamma\lambda_{j})<\dfrac{1}{2}\min_{1\leq{j}\leq{N}}\left(\min\left%
\{m_{j}^{+},m_{j}^{-}\right\}\right)P(\lambda_{j}).$$
On $\left\{(x,y)\in\overline{\Omega_{j}}\mid\lambda_{j}\geq{r_{1}}\right\}$,
$$\tilde{\varphi}_{j}(x,y)\geq\dfrac{1}{2}\min_{1\leq{j}\leq{N}}\left(\min\left%
\{m_{j}^{+},m_{j}^{-}\right\}\right)P(\lambda_{j}).$$
It suffices to show that
$$\inf\left\{\tilde{\varphi}_{j}(x,y)\,\Big{|}\,\lambda_{j}<r_{1},\,\sqrt{x^{2}+y^{2}}\geq{r_{2}}\right\}>0$$
for some $r_{2}>0$, since $\tilde{\varphi}_{j}$ attains a positive minimum on any compact set. Without loss of generality we assume that $\lambda_{j}^{+}<r_{1}$. Let $r_{2}$ be large enough such that $B_{r_{1}+2}(x,y)\subset\overline{\Omega_{j}}\cup\overline{\Omega_{j+1}}$. Note that the outward unit normal of $\Omega_{j}$ on $\partial\Omega_{j}\cap\partial\Omega_{j+1}$ is $-\dfrac{{\bf{a}}_{j}-{\bf{a}}_{j+1}}{m_{j}^{+}}$. We estimate
$$\begin{split}\displaystyle\tilde{\varphi}_{j}(x,y)&\displaystyle=\rho\ast(h-h_%
{j})(x,y)\\
&\displaystyle=\int_{\Omega_{j}^{c}}\!\tilde{\rho}(x-x^{\prime},y-y^{\prime})(%
h-h_{j})(x^{\prime},y^{\prime})\,dx^{\prime}dy^{\prime}\\
&\displaystyle\geq\tilde{\rho}(r_{1}+2)\int_{B_{r_{1}+2}(x,y)\cap\Omega_{j+1}}%
(h_{j+1}-h_{j})(x^{\prime},y^{\prime})\,dx^{\prime}dy^{\prime}\\
&\displaystyle\geq\tilde{\rho}(r_{1}+2)\int_{B_{*}}(h_{j+1}-h_{j})(x^{\prime},%
y^{\prime})\,dx^{\prime}dy^{\prime}\\
\end{split}$$
where $B_{*}\subset{B_{r_{1}+2}(x,y)\cap\Omega_{j+1}}$ is the half ball
$$B_{*}=B_{1}\left((x,y)-(\lambda_{j}^{+}+1)\dfrac{{\bf{a}}_{j}-{\bf{a}}_{j+1}}{m_{j}^{+}}\right)\cap\left\{\dfrac{h_{j+1}-h_{j}}{m_{j}^{+}}\geq 1\right\}.$$
Now
$$\begin{split}\displaystyle\tilde{\varphi}_{j}(x,y)&\displaystyle\geq\tilde{%
\rho}(r_{1}+2)\int_{B_{*}}{m_{j}^{+}}\,dx^{\prime}dy^{\prime}\\
&\displaystyle=\dfrac{\pi{m_{j}^{+}}}{2}\tilde{\rho}(r_{1}+2).\end{split}$$
This completes the proof.
∎
Now, in terms of $P$ and $\lambda$, we state a key lemma concerning the decay properties of $S(x,y)$ and its derivatives. In the following, it is convenient to use the vector notation ${\bf{a}}_{j}=(a_{j},b_{j})$.
Lemma 4.5.
There exists a constant $C_{S}=C_{S}(s,\tilde{\rho},\gamma)$ such that for any $(x,y)\in\mathbb{R}^{2}$ and any non-negative integers $i_{1}$, $i_{2}$ with $1\leq{i_{1}+i_{2}}\leq 2$,
$$C_{S}^{-1}\left\lvert P^{\prime}(\lambda)\right\rvert\leq{S}(x,y)\leq{C_{S}}%
\left\lvert P^{\prime}(\lambda)\right\rvert,$$
$$\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}S(x,y)\right\rvert\leq{C_{S%
}}\left\lvert P^{(1+i_{1}+i_{2})}(\lambda)\right\rvert$$
and
$$\left\lvert(-\Delta)^{s}{S}(x,y)\right\rvert\leq{C_{S}}(1+\lambda)^{-2s}.$$
In particular, with a possibly larger constant $C_{S}$, we have
$$\left\lvert(-\Delta)^{s}{S}(x,y)\right\rvert\leq{C_{S}}\left\lvert P^{\prime}(%
\lambda)\right\rvert.$$
Proof.
Again, let $(x,y)\in\overline{\Omega_{j}}$. Since $\left\lvert\bf{a}_{j}\right\rvert^{2}=m_{*}^{2}$, we write
$$\begin{split}\displaystyle S&\displaystyle=\dfrac{m_{*}^{2}-\left\lvert\nabla%
\varphi\right\rvert^{2}}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}%
\left(c+k\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}\right)}\\
&\displaystyle=\dfrac{-2{\bf{a}}_{j}\cdot\nabla\tilde{\varphi}_{j}-\left\lvert%
\nabla\tilde{\varphi}_{j}\right\rvert^{2}}{\sqrt{1+\left\lvert\nabla\varphi%
\right\rvert^{2}}\left(c+k\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}%
\right)}\\
\end{split}$$
Since the denominator is bounded between two positive numbers, it suffices to show that the numerator is comparable to $\left\lvert P^{\prime}(\lambda_{j})\right\rvert$. We have
$$\nabla\tilde{\varphi}_{j}=({\bf{a}}_{j}-{\bf{a}}_{j+1})P^{\prime}(\lambda_{j}^%
{+})+({\bf{a}}_{j}-{\bf{a}}_{j-1})P^{\prime}(\lambda_{j}^{-})+\nabla\rho\ast{g%
_{j}}.$$
Hence,
$$\begin{split}&\displaystyle\quad\,\,-2{\bf{a}}_{j}\cdot\nabla\tilde{\varphi}_{%
j}\\
&\displaystyle=2(\left\lvert{\bf{a}}_{j}\right\rvert^{2}-{\bf{a}}_{j}\cdot{\bf%
{a}}_{j+1})\left\lvert P^{\prime}(\lambda_{j}^{+})\right\rvert+2(\left\lvert{%
\bf{a}}_{j}\right\rvert^{2}-{\bf{a}}_{j}\cdot{\bf{a}}_{j-1})\left\lvert P^{%
\prime}(\lambda_{j}^{-})\right\rvert-2{\bf{a}}_{j}\cdot\nabla\rho\ast{g_{j}}%
\end{split}$$
By Lemma 4.3, $\left\lvert\nabla\rho\ast{g_{j}}\right\rvert\leq{C_{g}}\left\lvert P^{\prime}(%
\gamma\lambda_{j})\right\rvert$ and so
$$\left\lvert-2{\bf{a}}_{j}\cdot\nabla\rho\ast{g_{j}}\right\rvert\leq 2m_{*}{C_{%
g}}\left\lvert P^{\prime}(\gamma\lambda_{j})\right\rvert.$$
Using the inequality $\left\lvert{\bf{a}}+{\bf{b}}+{\bf{c}}\right\rvert^{2}\leq 3(\left\lvert\bf{a}%
\right\rvert^{2}+\left\lvert\bf{b}\right\rvert^{2}+\left\lvert\bf{c}\right%
\rvert^{2})$, we have
$$\begin{split}\displaystyle\left\lvert\nabla\tilde{\varphi}_{j}\right\rvert^{2}&\displaystyle\leq 3\left(\left\lvert{\bf{a}}_{j}-{\bf{a}}_{j+1}\right\rvert^{2}\left\lvert P^{\prime}(\lambda_{j}^{+})\right\rvert^{2}+\left\lvert{\bf{a}}_{j}-{\bf{a}}_{j-1}\right\rvert^{2}\left\lvert P^{\prime}(\lambda_{j}^{-})\right\rvert^{2}+\left\lvert\nabla\rho\ast{g_{j}}\right\rvert^{2}\right)\\
&\displaystyle\leq 3\left(4\left\lvert P^{\prime}(\lambda_{j})\right\rvert^{2}+C_{g}^{2}\left\lvert P^{\prime}(\gamma\lambda_{j})\right\rvert^{2}\right)\\
&\displaystyle\leq 3(4+C_{g}^{2})\left\lvert P^{\prime}(\lambda_{j})\right\rvert^{2}.\end{split}$$
Let
$$m_{0}=\displaystyle\min_{1\leq{j,k}\leq{N},\,j\neq{k}}\left(\left\lvert{\bf{a}%
}_{j}\right\rvert^{2}-{\bf{a}}_{j}\cdot{\bf{a}}_{k}\right).$$
We can find an $r_{3}>r_{0}$ such that for all $\lambda_{j}\geq{r_{3}}$,
$$3(4+C_{g}^{2})\left\lvert P^{\prime}(\lambda_{j})\right\rvert\leq\dfrac{m_{0}}%
{2}\quad\text{and}\quad 2C_{g}\left\lvert P^{\prime}(\gamma\lambda_{j})\right%
\rvert\leq\dfrac{m_{0}}{2}\left\lvert P^{\prime}(\lambda_{j})\right\rvert.$$
This implies for all $\lambda_{j}\geq{r_{3}}$,
$$m_{0}\left\lvert P^{\prime}(\lambda_{j})\right\rvert\leq\left\lvert-2{\bf{a}}_%
{j}\cdot\nabla\tilde{\varphi}_{j}-\left\lvert\nabla\tilde{\varphi}_{j}\right%
\rvert^{2}\right\rvert\leq(4m_{*}^{2}+m_{0})\left\lvert P^{\prime}(\lambda_{j}%
)\right\rvert.$$
Thus, the lower and upper bounds for $S$ are established.
Next, we compute the derivatives of $S$ to yield
$$\begin{split}\displaystyle S_{x}&\displaystyle=-\dfrac{c\varphi_{x}\varphi_{xx}}{(1+\left\lvert\nabla\varphi\right\rvert^{2})^{\frac{3}{2}}}\\
\displaystyle S_{xx}&\displaystyle=\dfrac{c}{(1+\left\lvert\nabla\varphi\right\rvert^{2})^{\frac{3}{2}}}\left(\varphi_{x}\varphi_{xxx}+\varphi_{xx}^{2}-\dfrac{3\varphi_{x}^{2}\varphi_{xx}^{2}}{1+\left\lvert\nabla\varphi\right\rvert^{2}}\right)\\
\displaystyle S_{xy}&\displaystyle=\dfrac{3c\varphi_{x}\varphi_{y}\varphi_{xx}\varphi_{yy}}{(1+\left\lvert\nabla\varphi\right\rvert^{2})^{\frac{5}{2}}}.\end{split}$$
Using Lemmas 4.1, 4.2 and 4.4, it is clear that
$$\begin{split}\displaystyle\left\lvert S_{x}\right\rvert&\displaystyle\leq{c}{m%
_{*}}{C_{\varphi}}P^{\prime\prime}(\lambda)\\
\displaystyle\left\lvert S_{xx}\right\rvert&\displaystyle\leq{c}\left({m_{*}}{%
C_{\varphi}}\left\lvert P^{(3)}(\lambda)\right\rvert+(1+3m_{*}^{2})C_{\varphi}%
^{2}P^{\prime\prime}(\lambda)^{2}\right)\\
&\displaystyle\leq\tilde{C}_{2}(c,m_{*},\tilde{\rho},\gamma)\left\lvert P^{(3)%
}(\lambda)\right\rvert\\
\displaystyle\left\lvert S_{xy}\right\rvert&\displaystyle\leq 3c{m_{*}^{2}}C_{%
\varphi}^{2}P^{\prime\prime}(\lambda)^{2}\\
&\displaystyle\leq\tilde{C}_{2}(c,m_{*},\tilde{\rho},\gamma)\left\lvert P^{(3)}(\lambda)\right\rvert.\end{split}$$
Hence the $C^{2}$ norm of $S$ is controlled.
To estimate the $s$-Laplacian, we use Corollary A.2 to get
$$\begin{split}\displaystyle\left\lvert(-\Delta)^{s}{S}(x,y)\right\rvert&%
\displaystyle\leq{C_{\Delta}}\left(\left\|{S}\right\|_{\dot{C}^{2}(B_{1}(x,y))%
}+\left\|{S}\right\|_{L^{\infty}(\mathbb{R}^{2})}\right)\\
&\displaystyle\leq{C_{\Delta}}\tilde{C}_{3}\left(\left\lvert P^{(3)}(\lambda-1%
)\right\rvert+c-k\right)\\
&\displaystyle\leq\tilde{C}_{4}\end{split}$$
for $\lambda\leq 2r_{0}$, and
$$\begin{split}&\displaystyle\quad\,\,\left\lvert(-\Delta)^{s}{S}(x,y)\right%
\rvert\\
&\displaystyle\leq{C_{\Delta}}\left(\left\|{S}\right\|_{\dot{C}^{2}(B_{\frac{%
\left\lvert(x,y)\right\rvert}{2}}(x,y))}+\left\|{S}\right\|_{\dot{C}^{1}(B_{%
\frac{\left\lvert(x,y)\right\rvert}{2}}(x,y))}+\left\|{S}\right\|_{L^{\infty}(%
\mathbb{R}^{2})}\left\lvert(x,y)\right\rvert^{-2s}\right)\\
&\displaystyle\leq{C_{\Delta}}\tilde{C}_{3}\left(4\left\lvert P^{(3)}\left(%
\dfrac{\lambda}{2}\right)\right\rvert+2\left\lvert P^{\prime\prime}\left(%
\dfrac{\lambda}{2}\right)\right\rvert+(c-k)\lambda^{-2s}\right)\\
&\displaystyle\leq\tilde{C}_{5}\lambda^{-2s}\end{split}$$
for $\lambda\geq 2r_{0}$, by Lemma 4.2. Thus,
$$\left\lvert(-\Delta)^{s}{S}(x,y)\right\rvert\leq\tilde{C}_{6}(1+\lambda)^{-2s}.$$
The last assertion follows from Lemma 4.2.
∎
5. The super-solution
Let $\alpha,\varepsilon\in(0,1)$ be small parameters (which will be chosen according to (5.4) and (5.5)) and write
$$V(x,y,z)=\Phi\left(\hat{\mu}\right)+\varepsilon{S}(\alpha{x},\alpha{y}),$$
where $S$ is defined in (4.4) and
$$\hat{\mu}=\dfrac{z-\alpha^{-1}\varphi(\alpha{x},\alpha{y})}{\sqrt{1+\left%
\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}.$$
(5.1)
Introducing the rescaled one-dimensional profile
$$\Phi_{\alpha}(\mu)=\Phi(\alpha^{-1}\mu),$$
(5.2)
we can also write
$$V(x,y,z)=\Phi_{\alpha}\left(\bar{\mu}(\alpha{x},\alpha{y},\alpha{z})\right)+%
\varepsilon{S}(\alpha{x},\alpha{y}),$$
where
$$\bar{\mu}(x,y,z)=\dfrac{z-\varphi(x,y)}{\sqrt{1+\left\lvert\nabla\varphi(x,y)%
\right\rvert^{2}}}.$$
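The formula above admits a quick numerical sanity check (with an illustrative $\varphi$, not the $\varphi$ constructed in Section 4): $\bar{\mu}(x,y,z)$ coincides with the signed distance from $(x,y,z)$ to the tangent plane of the graph $z=\varphi(x,y)$ at the point $(x,y,\varphi(x,y))$, since the difference vector $(0,0,z-\varphi)$ projects onto the upward unit normal.

```python
import numpy as np

# Sanity check with a sample phi (NOT the phi of the paper): mu_bar(x,y,z)
# = (z - phi(x,y)) / sqrt(1 + |grad phi|^2) equals the signed distance from
# (x,y,z) to the tangent plane of the graph z = phi at (x, y, phi(x,y)).
phi  = lambda x, y: np.sin(x) * np.cos(y)
gphi = lambda x, y: np.array([np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)])

x, y, z = 0.5, -0.8, 2.0
g = gphi(x, y)
normal = np.array([-g[0], -g[1], 1.0]) / np.sqrt(1 + g @ g)  # upward unit normal
mu_bar = (z - phi(x, y)) / np.sqrt(1 + g @ g)

foot = np.array([x, y, phi(x, y)])            # point on the graph below (x,y,z)
signed_dist = normal @ (np.array([x, y, z]) - foot)
assert np.isclose(mu_bar, signed_dist)
print("mu_bar equals the signed distance to the tangent plane")
```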
We will prove that $V$ is a super-solution when $\varepsilon$ and $\alpha$ are sufficiently small.
Observing that $\bar{\mu}(x,y,z)$ is linear in $z$ and “almost linear” in $x$ and $y$ as $\lambda(x,y)\to\infty$, it is tempting to compare $(-\Delta)^{s}(\Phi_{\alpha}(\bar{\mu}))$ with $(-\partial^{2})^{s}{\Phi_{\alpha}}(\bar{\mu})$, in view of the chain rule, Lemma A.4. It turns out that their difference
$$R_{1}(x,y,z;\alpha)=(-\Delta)^{s}(\Phi_{\alpha}(\bar{\mu}(x,y,z)))-(-\partial^%
{2})^{s}\Phi_{\alpha}(\bar{\mu}(x,y,z))$$
decays at the right order as $\lambda\to\infty$. This crucial estimate is stated as follows.
Proposition 5.1 (Main estimate).
There exists a constant $C_{R_{1}}=C_{R_{1}}(s,\tilde{\rho},\gamma)$ such that for any $(x,y,z)\in\mathbb{R}^{3}$, we have
$$\left\lvert R_{1}(x,y,z;\alpha)\right\rvert\leq{C_{R_{1}}}\alpha^{-s}\left(1+%
\lambda(x,y)\right)^{-2s},$$
uniformly in $z$. We can also write
$$\left\lvert R_{1}(x,y,z;\alpha)\right\rvert\leq{C_{R_{1}}}\alpha^{-s}\left%
\lvert P^{\prime}(\lambda(x,y))\right\rvert.$$
Remark 5.1.
In fact, denoting $\varphi^{\alpha}(x,y)=\alpha^{-1}\varphi(\alpha{x},\alpha{y})$, the first term of $V$ can be expressed as
$$\Phi\left(\dfrac{z-\varphi^{\alpha}(x,y)}{\sqrt{1+\left\lvert\nabla\varphi^{%
\alpha}(x,y)\right\rvert^{2}}}\right).$$
In this way, the estimate of $R_{1}$ gives the first term of the fractional Laplacian in the Fermi coordinates, roughly as $(-\Delta)^{s}=(-\partial_{\mu}^{2})^{s}+\cdots$ if $\mu$ denotes the signed distance to the surface $z=\varphi^{\alpha}(x,y)$, as $\alpha\to 0$.
It is helpful to keep in mind that the decay in $x$ and $y$ comes from the second and third derivatives of $\varphi$. The uniformity in $z$ will follow from the facts that $z$ can be written in terms of $\bar{\mu}(x,y,z)$ and that quantities like $\bar{\mu}^{i}\Phi_{\alpha}^{(j)}(\bar{\mu})$ for $0\leq{i}\leq{j}$, $1\leq{j}\leq 2$ are uniformly bounded.
Proof of Proposition 5.1.
We will prove the estimate pointwise in $(x,y)\in\mathbb{R}^{2}$. To simplify notation, $\bar{\mu}$ and $\varphi$ and their derivatives will be evaluated at $(x,y,z)$ or $(x,y)$ unless otherwise specified.
First we write down the difference. By definition, we have
$$\begin{split}&\displaystyle\quad\;(-\Delta)^{s}(\Phi_{\alpha}(\bar{\mu}))\\
&\displaystyle=C_{3,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{3}}\dfrac{\Phi_{%
\alpha}(\bar{\mu})-\Phi_{\alpha}(\bar{\mu}(x+\xi,y+\eta,z+\zeta))}{(\xi^{2}+%
\eta^{2}+\zeta^{2})^{\frac{3}{2}+s}}\,d{\xi}d{\eta}d{\zeta}\\
&\displaystyle=C_{3,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{3}}\!\left[\Phi_{%
\alpha}(\bar{\mu})-\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi(x+\xi,y+\eta)}{%
\sqrt{1+\left\lvert\nabla\varphi(x+\xi,y+\eta)\right\rvert^{2}}}\right)\right]%
\dfrac{d{\xi}d{\eta}d{\zeta}}{(\xi^{2}+\eta^{2}+\zeta^{2})^{\frac{3}{2}+s}}%
\end{split}$$
On the other hand, by Corollary A.1, we have
$$\begin{split}\displaystyle(-\partial^{2})^{s}\Phi_{\alpha}(\bar{\mu})&%
\displaystyle=C_{1,s}\textnormal{P.V.}\,\int_{\mathbb{R}}\!\dfrac{\Phi_{\alpha%
}(\bar{\mu})-\Phi_{\alpha}(\bar{\mu}+\mu)}{\left\lvert\mu\right\rvert^{1+2s}}%
\,d\mu\\
&\displaystyle=C_{3,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{3}}\!\dfrac{\Phi_{%
\alpha}(\bar{\mu})-\Phi_{\alpha}(\bar{\mu}+\mu)}{(\mu^{2}+\nu^{2}+\theta^{2})^%
{\frac{3}{2}+s}}\,d{\mu}d{\nu}d{\theta}\end{split}$$
In order to rotate the axes to the tangent and normal directions of the graph $z=\varphi(x,y)$, we wish to find an orthogonal linear transformation
$\begin{pmatrix}\mu\\
\nu\\
\theta\end{pmatrix}=T\begin{pmatrix}\zeta\\
\xi\\
\eta\end{pmatrix}$
that sends $\mu$ to
$\dfrac{\zeta-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\varphi_{x}^{2}+\varphi_{%
y}^{2}}}$.
To this end, we apply the Gram-Schmidt process to the basis
$$\dfrac{(1,-\varphi_{x},-\varphi_{y})}{\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}%
},\quad\dfrac{(\varphi_{x},1,0)}{\sqrt{1+\varphi_{x}^{2}}},\quad\dfrac{(%
\varphi_{y},0,1)}{\sqrt{1+\varphi_{y}^{2}}}$$
of $\mathbb{R}^{3}$ to obtain the orthonormal vectors
$$\dfrac{(1,-\varphi_{x},-\varphi_{y})}{\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}%
},\quad\dfrac{(\varphi_{x},1,0)}{\sqrt{1+\varphi_{x}^{2}}},\quad\dfrac{(%
\varphi_{y},-\varphi_{x}\varphi_{y},1+\varphi_{x}^{2})}{\sqrt{1+\varphi_{x}^{2%
}}\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}}.$$
It is easy to check that
$$T=\begin{pmatrix}\dfrac{1}{\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}}&\dfrac{-%
\varphi_{x}}{\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}}&\dfrac{-\varphi_{y}}{%
\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}}\\
\dfrac{\varphi_{x}}{\sqrt{1+\varphi_{x}^{2}}}&\dfrac{1}{\sqrt{1+\varphi_{x}^{2%
}}}&0\\
\dfrac{\varphi_{y}}{\sqrt{1+\varphi_{x}^{2}}\sqrt{1+\varphi_{x}^{2}+\varphi_{y%
}^{2}}}&\dfrac{-\varphi_{x}\varphi_{y}}{\sqrt{1+\varphi_{x}^{2}}\sqrt{1+%
\varphi_{x}^{2}+\varphi_{y}^{2}}}&\sqrt{\dfrac{1+\varphi_{x}^{2}}{1+\varphi_{x%
}^{2}+\varphi_{y}^{2}}}\end{pmatrix}$$
indeed satisfies the required condition.
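As a numerical sanity check (with sample gradient values; this is only an illustration of the algebra above), one can verify that $T$ is orthogonal and that the first component of $T(\zeta,\xi,\eta)^{T}$ equals $(\zeta-\varphi_{x}\xi-\varphi_{y}\eta)/\sqrt{1+\varphi_{x}^{2}+\varphi_{y}^{2}}$:

```python
import numpy as np

# Sanity check (not part of the proof): for sample values of (phi_x, phi_y),
# build T from the Gram-Schmidt vectors above and verify orthogonality and
# that the first component of T @ (zeta, xi, eta) is the normal coordinate
# (zeta - phi_x*xi - phi_y*eta) / sqrt(1 + phi_x^2 + phi_y^2).
def make_T(px, py):
    n = np.sqrt(1 + px**2 + py**2)      # norm of (1, -phi_x, -phi_y)
    n1 = np.sqrt(1 + px**2)             # norm of (phi_x, 1, 0)
    return np.array([
        [1/n,        -px/n,         -py/n],
        [px/n1,       1/n1,          0.0],
        [py/(n1*n),  -px*py/(n1*n),  n1/n],   # (1+phi_x^2)/(n1*n) = n1/n
    ])

px, py = 0.7, -1.3
T = make_T(px, py)
assert np.allclose(T @ T.T, np.eye(3))        # T is orthogonal
v = np.array([0.4, -2.0, 1.1])                # (zeta, xi, eta)
mu = (v[0] - px*v[1] - py*v[2]) / np.sqrt(1 + px**2 + py**2)
assert np.isclose((T @ v)[0], mu)             # first row gives mu
print("T is orthogonal and maps to the normal coordinate")
```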
Under this transformation, we have
$$\begin{split}&\displaystyle\quad\;(-\partial^{2})^{s}\Phi_{\alpha}(\bar{\mu})%
\\
&\displaystyle=C_{3,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{3}}\left[\Phi_{%
\alpha}(\bar{\mu})-\Phi_{\alpha}\left(\bar{\mu}+\dfrac{\zeta-\varphi_{x}\xi-%
\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}}\right)%
\right]\dfrac{d{\zeta}d{\xi}d{\eta}}{(\zeta^{2}+\xi^{2}+\eta^{2})^{\frac{3}{2}%
+s}}\\
&\displaystyle=C_{3,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{3}}\left[\Phi_{%
\alpha}(\bar{\mu})-\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi-\varphi_{x}\xi-%
\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}}\right)%
\right]\dfrac{d{\zeta}d{\xi}d{\eta}}{(\zeta^{2}+\xi^{2}+\eta^{2})^{\frac{3}{2}%
+s}}\end{split}$$
The difference is therefore
$$\begin{split}&\displaystyle\quad\,\,(-\Delta)^{s}(\Phi_{\alpha}(\bar{\mu}))-(-%
\partial^{2})^{s}\Phi_{\alpha}(\bar{\mu})\\
&\displaystyle=C_{3,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{3}}\!\Bigg{[}-\Phi_%
{\alpha}\left(\dfrac{z+\zeta-\varphi(x+\xi,y+\eta)}{\sqrt{1+\left\lvert\nabla%
\varphi(x+\xi,y+\eta)\right\rvert^{2}}}\right)\\
&\displaystyle\qquad\qquad\qquad\qquad+\Phi_{\alpha}\left(\dfrac{z+\zeta-%
\varphi-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right%
\rvert^{2}}}\right)\Bigg{]}\,\dfrac{d{\xi}d{\eta}d{\zeta}}{(\xi^{2}+\eta^{2}+%
\zeta^{2})^{\frac{3}{2}+s}}\\
\end{split}$$
We will estimate this integral in
$$D_{\alpha}=\left\{(\xi,\eta,\zeta)\in\mathbb{R}^{3}\,\Bigg{|}\,\sqrt{\xi^{2}+%
\eta^{2}+\zeta^{2}}<\dfrac{\sqrt{\alpha}}{2}\left(C_{P}\left\lvert P^{\prime}(%
\lambda)\right\rvert\right)^{-\frac{1}{{2s}}}\right\}$$
and its complement $D_{\alpha}^{c}$ in $\mathbb{R}^{3}$
and write
$$\begin{split}\displaystyle(-\Delta)^{s}(\Phi_{\alpha}(\bar{\mu}))-(-\partial^{%
2})^{s}\Phi_{\alpha}(\bar{\mu})=C_{3,s}(J_{1}+J_{2}),\end{split}$$
where
$$\begin{split}\displaystyle J_{1}&\displaystyle=\int_{D_{\alpha}}\!\Bigg{[}-%
\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi(x+\xi,y+\eta)}{\sqrt{1+\left\lvert%
\nabla\varphi(x+\xi,y+\eta)\right\rvert^{2}}}\right)\\
&\displaystyle\qquad\qquad\quad+\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi-%
\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{%
2}}}\right)\\
&\displaystyle\qquad\qquad\quad-\bar{\mu}\Phi_{\alpha}^{\prime}(\bar{\mu})\dfrac{(\varphi_{x}\varphi_{xx}+\varphi_{y}\varphi_{xy})\xi+(\varphi_{x}\varphi_{xy}+\varphi_{y}\varphi_{yy})\eta}{1+\left\lvert\nabla\varphi\right\rvert^{2}}\Bigg{]}\,\dfrac{d{\xi}d{\eta}d{\zeta}}{(\xi^{2}+\eta^{2}+\zeta^{2})^{\frac{3}{2}+s}}\\
\end{split}$$
(5.3)
and
$$\begin{split}\displaystyle J_{2}&\displaystyle=\int_{D_{\alpha}^{c}}\!\Bigg{[}-\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi(x+\xi,y+\eta)}{\sqrt{1+\left\lvert\nabla\varphi(x+\xi,y+\eta)\right\rvert^{2}}}\right)\\
&\displaystyle\qquad\qquad+\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi-\varphi_{%
x}\xi-\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}}%
\right)\Bigg{]}\,\dfrac{d{\xi}d{\eta}d{\zeta}}{(\xi^{2}+\eta^{2}+\zeta^{2})^{%
\frac{3}{2}+s}}\\
\end{split}$$
Note that the extra term in $J_{1}$, being odd in $\xi$ and $\eta$, is inserted to get rid of the principal value.
Since $\left\lvert\Phi_{\alpha}\right\rvert\leq 1$, it is easy to see that
$$\begin{split}\displaystyle\left\lvert J_{2}\right\rvert&\displaystyle\leq 2%
\int_{D_{\alpha}^{c}}\!\,\dfrac{d{\xi}d{\eta}d{\zeta}}{(\xi^{2}+\eta^{2}+\zeta%
^{2})^{\frac{3}{2}+s}}\\
&\displaystyle=8\pi\int_{\frac{\sqrt{\alpha}}{2}\left(C_{P}\left\lvert P^{%
\prime}(\lambda)\right\rvert\right)^{-\frac{1}{{2s}}}}^{\infty}\dfrac{r^{2}\,%
dr}{r^{3+2s}}\\
&\displaystyle=\dfrac{4\pi}{s}\left(\dfrac{\sqrt{\alpha}}{2}\left(C_{P}\left%
\lvert P^{\prime}(\lambda)\right\rvert\right)^{-\frac{1}{{2s}}}\right)^{-2s}\\
&\displaystyle=\dfrac{2^{2+2s}\pi}{s}C_{P}\alpha^{-s}\left\lvert P^{\prime}(%
\lambda)\right\rvert\\
&\displaystyle\leq\dfrac{2^{2+2s}\pi}{s}C_{P}^{2}\alpha^{-s}(1+\lambda)^{-2s}%
\end{split}$$
by using Lemma 4.2.
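The closed form used in the estimate of $J_{2}$, namely $8\pi\int_{R}^{\infty}r^{-1-2s}\,dr=\frac{4\pi}{s}R^{-2s}$, can be verified numerically for sample values of $s\in(0,1)$ and $R>0$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the tail integral used to bound |J_2| (sample s and R):
#   8*pi * int_R^infty r^2 / r^(3+2s) dr = (4*pi/s) * R^(-2s).
s, R = 0.6, 3.0
numeric, _ = quad(lambda r: 8 * np.pi * r**2 / r**(3 + 2 * s), R, np.inf)
closed = (4 * np.pi / s) * R**(-2 * s)
assert np.isclose(numeric, closed)
print(numeric, closed)
```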
To estimate $J_{1}$, we let $F(t)=F(t,\xi,\eta,\zeta;x,y,z,\alpha)=-\Phi_{\alpha}(\mu^{*}(t))$, where
$$\begin{split}\displaystyle\mu^{*}(t)&\displaystyle=\mu^{*}(t,\xi,\eta,\zeta;x,%
y,z)\\
&\displaystyle=\dfrac{z+\zeta-(1-t)(\varphi+\varphi_{x}\xi+\varphi_{y}\eta)-t%
\varphi(x+\xi,y+\eta)}{\sqrt{1+\left\lvert\nabla\varphi(x+t\xi,y+t\eta)\right%
\rvert^{2}}}.\end{split}$$
Let us introduce the shorthand notation
$$\varphi^{t}=\varphi(x+t\xi,y+t\eta),\quad(\nabla\varphi)^{t}=\nabla\varphi(x+t%
\xi,y+t\eta)$$
and similarly for other derivatives of $\varphi$ for $t\in[0,1]$. So we can rewrite
$$\mu^{*}(t)=\dfrac{z+\zeta-(1-t)(\varphi+\varphi_{x}\xi+\varphi_{y}\eta)-t%
\varphi^{1}}{\sqrt{1+\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}}}.$$
We will use the second-order Taylor expansion to write the first two terms of the integrand (treating the denominator as a weight) as
$$F(1)-F(0)=F^{\prime}(0)+\int_{0}^{1}(1-t)F^{\prime\prime}(t)\,dt.$$
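This is the standard second-order Taylor formula with integral remainder; as an illustration (with a sample smooth $F$, not the $F$ defined above), it can be checked numerically:

```python
import numpy as np
from scipy.integrate import quad

# Check F(1) - F(0) = F'(0) + int_0^1 (1-t) F''(t) dt for a sample smooth F
# (a stand-in for -Phi_alpha(mu*(t)); NOT the F of the proof).
F   = lambda t: np.tanh(2 * t + 0.3)
Fp  = lambda t: 2.0 / np.cosh(2 * t + 0.3)**2
Fpp = lambda t: -8.0 * np.tanh(2 * t + 0.3) / np.cosh(2 * t + 0.3)**2

remainder, _ = quad(lambda t: (1 - t) * Fpp(t), 0.0, 1.0)
assert np.isclose(F(1) - F(0), Fp(0) + remainder)
print("Taylor identity with integral remainder verified")
```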
The derivative of $\mu^{*}(t)$ can be written as
$$\mu_{t}^{*}(t)=-A(t)-B(t)\mu^{*}(t),$$
where
$$\begin{split}\displaystyle A(t)=A(t,\xi,\eta;x,y)&\displaystyle=\dfrac{\varphi%
^{1}-\varphi-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\left\lvert(\nabla\varphi%
)^{t}\right\rvert^{2}}}\\
&\displaystyle=\dfrac{\displaystyle\int_{0}^{1}(1-t^{\prime})\left(\varphi_{xx%
}^{t^{\prime}}\xi^{2}+2\varphi_{xy}^{t^{\prime}}\xi\eta+\varphi_{yy}^{t^{%
\prime}}\eta^{2}\right)\,dt^{\prime}}{\sqrt{1+\left\lvert(\nabla\varphi)^{t}%
\right\rvert^{2}}}\\
\displaystyle B(t)=B(t,\xi,\eta;x,y)&\displaystyle=\dfrac{(\varphi_{x}^{t}%
\varphi_{xx}^{t}+\varphi_{y}^{t}\varphi_{xy}^{t})\xi+(\varphi_{x}^{t}\varphi_{%
xy}^{t}+\varphi_{y}^{t}\varphi_{yy}^{t})\eta}{1+\left\lvert(\nabla\varphi)^{t}%
\right\rvert^{2}}\\
&\displaystyle=\dfrac{\left(\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}%
\right)_{x}\xi+\left(\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}\right)_{y}%
\eta}{2\left(1+\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}\right)}.\end{split}$$
Note that $A_{t}(t)=-A(t)B(t)$ and
$$\begin{split}\displaystyle B_{t}(t)&\displaystyle=\left(\dfrac{\left(\left%
\lvert(\nabla\varphi)^{t}\right\rvert^{2}\right)_{xx}}{2\left(1+\left\lvert(%
\nabla\varphi)^{t}\right\rvert^{2}\right)}-\left(\dfrac{\left(\left\lvert(%
\nabla\varphi)^{t}\right\rvert^{2}\right)_{x}}{1+\left\lvert(\nabla\varphi)^{t%
}\right\rvert^{2}}\right)^{2}\right)\xi^{2}\\
&\displaystyle\qquad+\left(\dfrac{\left(\left\lvert(\nabla\varphi)^{t}\right%
\rvert^{2}\right)_{xy}}{1+\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}}-%
\dfrac{2\left(\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}\right)_{x}\left(%
\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}\right)_{y}}{\left(1+\left\lvert%
(\nabla\varphi)^{t}\right\rvert^{2}\right)^{2}}\right)\xi\eta\\
&\displaystyle\qquad+\left(\dfrac{\left(\left\lvert(\nabla\varphi)^{t}\right%
\rvert^{2}\right)_{yy}}{2\left(1+\left\lvert(\nabla\varphi)^{t}\right\rvert^{2%
}\right)}-\left(\dfrac{\left(\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}%
\right)_{y}}{1+\left\lvert(\nabla\varphi)^{t}\right\rvert^{2}}\right)^{2}%
\right)\eta^{2},\end{split}$$
each term of which contains at least one third derivative of $\varphi$ or a product of two second derivatives of $\varphi$, evaluated at $(x+t\xi,y+t\eta)$.
We compute
$$\begin{split}\displaystyle F^{\prime}&\displaystyle=-\Phi_{\alpha}^{\prime}(%
\mu^{*})\mu_{t}^{*},\\
\displaystyle F^{\prime\prime}&\displaystyle=-\Phi_{\alpha}^{\prime\prime}(\mu%
^{*})(\mu_{t}^{*})^{2}-\Phi_{\alpha}^{\prime}(\mu^{*})\mu_{tt}^{*},\\
\displaystyle(\mu_{t}^{*})^{2}&\displaystyle=(A+B\mu^{*})^{2}\\
&\displaystyle=A^{2}+2AB\mu^{*}+B^{2}(\mu^{*})^{2},\\
\displaystyle\mu_{tt}^{*}&\displaystyle=-A_{t}-B_{t}\mu^{*}-B\mu_{t}^{*}\\
&\displaystyle=AB-B_{t}\mu^{*}+B(A+B\mu^{*})\\
&\displaystyle=2AB+(B^{2}-B_{t})\mu^{*},\\
\displaystyle F^{\prime}(0)&\displaystyle=A(0)\Phi_{\alpha}^{\prime}(\mu^{*}(0%
))+B(0)\mu^{*}(0)\Phi_{\alpha}^{\prime}(\mu^{*}(0)).\end{split}$$
Observe that the extra term in the integrand is chosen as
$$-\bar{\mu}\Phi_{\alpha}^{\prime}(\bar{\mu})\dfrac{(\varphi_{x}\varphi_{xx}+%
\varphi_{y}\varphi_{xy})\xi+(\varphi_{x}\varphi_{xy}+\varphi_{y}\varphi_{yy})%
\eta}{1+\left\lvert\nabla\varphi\right\rvert^{2}}=-B(0)\bar{\mu}\Phi_{\alpha}^%
{\prime}(\bar{\mu}),$$
which is similar to the second term in $F^{\prime}(0)$. In fact, if we let
$$\tilde{\mu}(t)=\tilde{\mu}(t,\xi,\eta,\zeta;x,y,z)=\bar{\mu}+t(\mu^{*}(0)-\bar%
{\mu})$$
and
$$\begin{split}\displaystyle G(t)&\displaystyle=G(t,\xi,\eta,\zeta;x,y,z,\alpha)%
\\
&\displaystyle=\tilde{\mu}(t)\Phi_{\alpha}^{\prime}(\tilde{\mu}(t))\\
&\displaystyle=(\bar{\mu}+t(\mu^{*}(0)-\bar{\mu}))\Phi_{\alpha}^{\prime}(\bar{%
\mu}+t(\mu^{*}(0)-\bar{\mu})),\\
\end{split}$$
then
$$\begin{split}&\displaystyle\quad\,\,\mu^{*}(0)\Phi_{\alpha}^{\prime}(\mu^{*}(0%
))-\bar{\mu}\Phi_{\alpha}^{\prime}(\bar{\mu})\\
&\displaystyle=G(1)-G(0)\\
&\displaystyle=\int_{0}^{1}G^{\prime}(t)\,dt\\
&\displaystyle=(\mu^{*}(0)-\bar{\mu})\int_{0}^{1}\left(\Phi_{\alpha}^{\prime}(%
\tilde{\mu}(t))+\tilde{\mu}(t)\Phi_{\alpha}^{\prime\prime}(\tilde{\mu}(t))%
\right)\,dt\\
&\displaystyle=\dfrac{\zeta-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\left%
\lvert\nabla\varphi\right\rvert^{2}}}\int_{0}^{1}\left(\Phi_{\alpha}^{\prime}(%
\tilde{\mu}(t))+\tilde{\mu}(t)\Phi_{\alpha}^{\prime\prime}(\tilde{\mu}(t))%
\right)\,dt\end{split}$$
and hence
$$\begin{split}&\displaystyle\quad\,\,F^{\prime}(0)-B(0)\bar{\mu}\Phi_{\alpha}^{%
\prime}(\bar{\mu})\\
&\displaystyle=A(0)\Phi_{\alpha}^{\prime}(\mu^{*}(0))\\
&\displaystyle\qquad+B(0)\dfrac{\zeta-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+%
\left\lvert\nabla\varphi\right\rvert^{2}}}\int_{0}^{1}\!\left(\Phi_{\alpha}^{%
\prime}(\tilde{\mu}(t))+\tilde{\mu}(t)\Phi_{\alpha}^{\prime\prime}(\tilde{\mu}%
(t))\right)\,dt.\end{split}$$
The integrand in (5.3) now has the expression
$$\begin{split}&\displaystyle\quad\,\,-\Phi_{\alpha}\left(\dfrac{z+\zeta-\varphi%
^{1}}{\sqrt{1+\left\lvert(\nabla\varphi)^{1}\right\rvert^{2}}}\right)+\Phi_{%
\alpha}\left(\dfrac{z+\zeta-\varphi-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+%
\left\lvert\nabla\varphi\right\rvert^{2}}}\right)\\
&\displaystyle\qquad\,\,-\bar{\mu}\Phi_{\alpha}^{\prime}(\bar{\mu})\dfrac{(\varphi_{x}\varphi_{xx}+\varphi_{y}\varphi_{xy})\xi+(\varphi_{x}\varphi_{xy}+\varphi_{y}\varphi_{yy})\eta}{1+\left\lvert\nabla\varphi\right\rvert^{2}}\\
&\displaystyle=F(1)-F(0)-B(0)\bar{\mu}\Phi_{\alpha}^{\prime}(\bar{\mu})\\
&\displaystyle=F^{\prime}(0)-B(0)\bar{\mu}\Phi_{\alpha}^{\prime}(\bar{\mu})+%
\int_{0}^{1}\!(1-t)F^{\prime\prime}(t)\,dt\\
&\displaystyle=A(0)\Phi_{\alpha}^{\prime}(\mu^{*}(0))+B(0)\dfrac{\zeta-\varphi%
_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}}%
\int_{0}^{1}\!(\Phi_{\alpha}^{\prime}(\tilde{\mu})+\tilde{\mu}\Phi_{\alpha}^{%
\prime\prime}(\tilde{\mu}))\,dt\\
&\displaystyle\quad\,\,+\int_{0}^{1}\!(1-t)\Big{[}-\Phi_{\alpha}^{\prime\prime%
}(\mu^{*})\left(A^{2}+2AB\mu^{*}+B^{2}(\mu^{*})^{2}\right)\\
&\displaystyle\qquad\qquad\qquad\qquad-\Phi_{\alpha}^{\prime}(\mu^{*})\left(2%
AB+\left(B^{2}-B_{t}\right)\mu^{*}\right)\Big{]}\,dt\end{split}$$
In $D_{\alpha}\subset{D_{1}}$, we have
$$\sqrt{\xi^{2}+\eta^{2}+\zeta^{2}}\leq\dfrac{1}{2}\left(C_{P}\left\lvert P^{%
\prime}(\lambda)\right\rvert\right)^{-\frac{1}{{2s}}}\leq\dfrac{1+\lambda}{2}$$
and hence
$$1+\lambda(x+t\xi,y+t\eta)\geq 1+\lambda(x,y)-t\sqrt{\xi^{2}+\eta^{2}}\geq%
\dfrac{1+\lambda(x,y)}{2}.$$
Using this together with Lemma 4.4 and Lemma 4.2, we can bound
$$\begin{split}\displaystyle\left\lvert\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}%
\varphi(x+t\xi,y+t\eta)\right\rvert&\displaystyle\leq{C_{\varphi}}\left\lvert P%
^{(i_{1}+i_{2})}(\lambda(x+t\xi,y+t\eta))\right\rvert\\
&\displaystyle\leq{C_{\varphi}}{C_{P}}(1+\lambda(x+t\xi,y+t\eta))^{-{2s}+1-i_{%
1}-i_{2}}\\
&\displaystyle\leq{C_{\varphi}}{C_{P}}2^{{2s}-1+i_{1}+i_{2}}(1+\lambda(x,y))^{%
-{2s}+1-i_{1}-i_{2}}\\
&\displaystyle\leq 16{C_{\varphi}}{C_{P}}(1+\lambda)^{-{2s}+1-i_{1}-i_{2}}.%
\end{split}$$
This gives estimates for $A$, $B$ and $B_{t}$:
$$\begin{split}\displaystyle\left\lvert A(t)\right\rvert&\displaystyle\leq 8{C_{%
\varphi}}{C_{P}}\left(\left\lvert\xi\right\rvert^{2}+2\left\lvert\xi\right%
\rvert\left\lvert\eta\right\rvert+\left\lvert\eta\right\rvert^{2}\right)(1+%
\lambda)^{-{2s}-1},\\
\displaystyle\left\lvert B(t)\right\rvert&\displaystyle\leq 32{m_{*}}{C_{%
\varphi}}{C_{P}}(\left\lvert\xi\right\rvert+\left\lvert\eta\right\rvert)(1+%
\lambda)^{-{2s}-1},\\
\displaystyle\left\lvert B_{t}(t)\right\rvert&\displaystyle\leq\tilde{C}_{7}(s%
,m_{*},\tilde{\rho},\gamma)\left(\left\lvert\xi\right\rvert^{2}+\left\lvert\xi%
\right\rvert\left\lvert\eta\right\rvert+\left\lvert\eta\right\rvert^{2}\right)%
(1+\lambda)^{-{2s}-2},\end{split}$$
which are all uniform in $t\in[0,1]$. Note that for the estimate of $B_{t}$ we have used (4.2). Denoting
$$r=\sqrt{\xi^{2}+\eta^{2}+\zeta^{2}}<1\qquad\text{and}\qquad\left\|{A}\right\|_%
{\infty}=\displaystyle\sup_{t\in[0,1]}\left\lvert A(t)\right\rvert,$$
similarly for $B$ and $B_{t}$, we can find a constant $C_{AB}=C_{AB}(s,m_{*},\tilde{\rho},\gamma)$ such that
$$\begin{split}\displaystyle\left\|{A}\right\|_{\infty}&\displaystyle\leq{C}_{AB%
}r^{2}(1+\lambda)^{-{2s}-1},\\
\displaystyle 2\left\|{B}\right\|_{\infty}&\displaystyle\leq{C}_{AB}r(1+%
\lambda)^{-{2s}-1},\\
\displaystyle\left\|{A}\right\|_{\infty}^{2}&\displaystyle\leq{C}_{AB}r^{4}(1+%
\lambda)^{-{4s}-2},\\
\displaystyle 4\left\|{A}\right\|_{\infty}\left\|{B}\right\|_{\infty}&%
\displaystyle\leq{C}_{AB}r^{2}(1+\lambda)^{-{4s}-1},\\
\displaystyle 2\left\|{B}\right\|_{\infty}^{2}&\displaystyle\leq{C}_{AB}r^{2}(%
1+\lambda)^{-{4s}-2},\\
\displaystyle\left\|{B_{t}}\right\|_{\infty}&\displaystyle\leq{C}_{AB}r^{2}(1+%
\lambda)^{-{2s}-2}.\end{split}$$
Note also that by the Cauchy-Schwarz inequality,
$$\left\lvert\dfrac{\zeta-\varphi_{x}\xi-\varphi_{y}\eta}{\sqrt{1+\left\lvert\nabla\varphi\right\rvert^{2}}}\right\rvert\leq\sqrt{\xi^{2}+\eta^{2}+\zeta^{2}}=r.$$
Denoting $\left\|{\mu^{i}\Phi_{\alpha}^{(j)}}\right\|_{\infty}=\displaystyle\sup_{\mu\in%
\mathbb{R}}\,\left\lvert\mu^{i}\Phi_{\alpha}^{(j)}(\mu)\right\rvert$ for $0\leq{i}\leq{j}\leq 2$ and using Corollary B.1, we can estimate $J_{1}$ as
$$\begin{split}\displaystyle\left\lvert J_{1}\right\rvert&\displaystyle\leq\int_%
{D_{\alpha}}\!\Big{[}\left\lvert A(0)\right\rvert\left\|{\Phi_{\alpha}^{\prime%
}}\right\|_{\infty}+\left\lvert B(0)\right\rvert r\left(\left\|{\Phi_{\alpha}^%
{\prime}}\right\|_{\infty}+\left\|{\mu\Phi_{\alpha}^{\prime\prime}}\right\|_{%
\infty}\right)\\
&\displaystyle\qquad\qquad+\left\|{\Phi_{\alpha}^{\prime\prime}}\right\|_{%
\infty}\left\|{A}\right\|_{\infty}^{2}+2\left\|{A}\right\|_{\infty}\left\|{B}%
\right\|_{\infty}(\left\|{\Phi_{\alpha}^{\prime}}\right\|_{\infty}+\left\|{\mu%
\Phi_{\alpha}^{\prime\prime}}\right\|_{\infty})\\
&\displaystyle\qquad\qquad+\left\|{B}\right\|_{\infty}^{2}(\left\|{\mu\Phi_{%
\alpha}^{\prime}}\right\|_{\infty}+\left\|{\mu^{2}\Phi_{\alpha}^{\prime\prime}%
}\right\|_{\infty})\\
&\displaystyle\qquad\qquad+\left\|{B_{t}}\right\|_{\infty}\left\|{\mu\Phi_{%
\alpha}^{\prime}}\right\|_{\infty}\Big{]}\,\dfrac{d{\xi}d{\eta}d{\zeta}}{(\xi^%
{2}+\eta^{2}+\zeta^{2})^{\frac{3}{2}+s}}\\
&\displaystyle\leq 4\pi{C_{AB}}{C_{\Phi}}(1+\lambda)^{-{2s}-1}\int_{0}^{\frac{%
\sqrt{\alpha}}{2}\left(C_{P}\left\lvert P^{\prime}(\lambda)\right\rvert\right)%
^{-\frac{1}{{2s}}}}\!\Bigg{[}\alpha^{-1}(r^{2}+r^{2})\\
&\displaystyle\qquad\qquad+\alpha^{-2}r^{4}(1+\lambda)^{-{2s}-1}+\alpha^{-1}r^%
{2}(1+\lambda)^{-{2s}}+r^{2}(1+\lambda)^{-{2s}-1}\\
&\displaystyle\qquad\qquad+r^{2}(1+\lambda)^{-1}\Bigg{]}\dfrac{r^{2}\,dr}{r^{3%
+2s}}\\
&\displaystyle\leq 4\pi{C_{AB}}{C_{\Phi}}(1+\lambda)^{-{2s}-1}\int_{0}^{\frac{%
\sqrt{\alpha}}{2}\left(C_{P}\left\lvert P^{\prime}(\lambda)\right\rvert\right)%
^{-\frac{1}{{2s}}}}\!\Big{[}\alpha^{-2}(1+\lambda)^{-{2s}-1}r^{3-2s}\\
&\displaystyle\hskip 213.395669pt+5\alpha^{-1}r^{1-2s}\Big{]}\,dr\\
&\displaystyle=\dfrac{4\pi{C_{AB}}{C_{\Phi}}}{4-2s}\alpha^{-2}(1+\lambda)^{-{4%
s}-2}\left(\dfrac{\sqrt{\alpha}}{2}\right)^{4-2s}(C_{P}\left\lvert P^{\prime}(%
\lambda)\right\rvert)^{-\frac{4-2s}{{2s}}}\\
&\displaystyle\qquad+\dfrac{20\pi{C_{AB}}{C_{\Phi}}}{2-2s}\alpha^{-1}(1+%
\lambda)^{-{2s}-1}\left(\dfrac{\sqrt{\alpha}}{2}\right)^{2-2s}(C_{P}\left%
\lvert P^{\prime}(\lambda)\right\rvert)^{-\frac{2-2s}{{2s}}}\\
&\displaystyle=\dfrac{\pi{C_{AB}}{C_{\Phi}}}{2^{3-2s}(2-s)}\alpha^{-s}(1+%
\lambda)^{-{6s}+2}+\dfrac{5\pi{C_{AB}}{C_{\Phi}}}{2^{1-2s}(1-s)}\alpha^{-s}(1+%
\lambda)^{-{4s}+1}\\
&\displaystyle=\dfrac{5\pi{C_{AB}}{C_{\Phi}}}{2^{1-2s}(1-s)}\alpha^{-s}(1+%
\lambda)^{-{4s}+1}\left(1+\dfrac{1-s}{20(2-s)}(1+\lambda)^{-{2s}+1}\right)\\
&\displaystyle\leq\dfrac{2^{2s}5\pi{C_{AB}}{C_{\Phi}}}{1-s}\alpha^{-s}(1+%
\lambda)^{-{4s}+1}\end{split}$$
The proof is completed by taking
$$C_{R_{1}}=\dfrac{2^{2+2s}\pi}{s}C_{P}^{2}+\dfrac{2^{2s}5\pi}{1-s}{C_{AB}}{C_{%
\Phi}}.$$
∎
We can now prove
Proposition 5.2.
There exist small parameters $\varepsilon$ and $\alpha$ such that $V(x,y,z)$ is a super-solution of (1.4). In fact, $V\in{C}_{s}^{2}(\mathbb{R}^{3})$ and
$${\mathcal{L}}[V]>0\quad\text{and}\quad v<V\quad\text{in}~{}\mathbb{R}^{3}.$$
Proof.
Recall that the calculation in Section 2 gives
$$\mathcal{L}[V]=S(\alpha{x},\alpha{y})\left(-\Phi^{\prime}(\hat{\mu})-%
\varepsilon\int_{0}^{1}\!f^{\prime}(\Phi(\hat{\mu})+t\varepsilon{S})\,dt+%
\alpha^{2s}\dfrac{\bar{R}(\alpha{x},\alpha{y})}{S(\alpha{x},\alpha{y})}\right),$$
where $S$ is defined by (4.4) and $\bar{R}=R_{1}+R_{2}$ with
$$\begin{split}\displaystyle R_{1}&\displaystyle=(-\Delta)^{s}(\Phi_{\alpha}(%
\bar{\mu}(\alpha{x},\alpha{y},\alpha{z})))-((-\partial^{2})^{s}\Phi_{\alpha})(%
\bar{\mu}(\alpha{x},\alpha{y},\alpha{z})),\\
\displaystyle R_{2}&\displaystyle=\varepsilon((-\Delta)^{s}{S})(\alpha{x},%
\alpha{y}).\end{split}$$
By Proposition 5.1 and Lemma 4.5, there is a constant $C_{R}=C_{R}(s,c,m_{*},\tilde{\rho},\gamma)$ such that
$${\mathcal{L}}[V]\geq{S}(\alpha{x},\alpha{y})\left(-\Phi^{\prime}(\hat{\mu})-%
\varepsilon\int_{0}^{1}\!f^{\prime}(\Phi(\hat{\mu})+t\varepsilon{S})\,dt-C_{R}%
\alpha^{s}\right).$$
The rest of the proof is similar to that in [55]; we include it here for completeness.
Since $f$ is $C^{1}$ and $f^{\prime}(\pm 1)<0$, there exist constants $\delta_{*}\in(0,1/4)$ and $\kappa_{1}>0$ such that
$$-f^{\prime}(t)>\kappa_{1}\quad\text{for}\quad 1-\left\lvert t\right\rvert<2%
\delta_{*}.$$
Write $\kappa_{2}=\left\|{f^{\prime}}\right\|_{L^{\infty}(-1-\delta_{*},1+\delta_{*})}$ and
$$\Phi_{*}=\min\left\{-\Phi^{\prime}(\mu)\mid-1+\delta_{*}\leq\Phi(\mu)\leq 1-%
\delta_{*}\right\}.$$
Also, in view of Lemma 4.4 and Lemma 4.5,
$$\dfrac{\varphi(x,y)-h(x,y)}{S(x,y)}\geq\dfrac{{C_{\varphi}^{-1}}P(\lambda)}{C_%
{S}\left\lvert P^{\prime}(\lambda)\right\rvert}\geq{C_{\varphi}^{-1}}{C_{S}^{-%
1}}{C_{P}^{-2}}(1+\lambda)^{-1}$$
for all $(x,y)\in\mathbb{R}^{2}$, so we can write
$$\omega=\min_{(x,y)\in\mathbb{R}^{2}}\dfrac{\varphi(x,y)-h(x,y)}{S(x,y)}\geq{C_%
{\varphi}^{-1}}{C_{S}^{-1}}{C_{P}^{-2}}>0.$$
By the decay of $\Phi^{\prime}(\mu)$, we can find constants $\tilde{C}_{\Phi}$ and $r_{4}$ such that
$$\left\lvert\Phi^{\prime}(\mu)\right\rvert\leq\dfrac{\tilde{C}_{\Phi}}{\left%
\lvert\mu\right\rvert^{1+2s}}\quad\text{for}\quad\left\lvert\mu\right\rvert%
\geq{r_{4}}.$$
Hence
$$\left\lvert\mu\Phi^{\prime}(\mu)\right\rvert\leq\dfrac{\tilde{C}_{\Phi}}{\left%
\lvert\mu\right\rvert^{2s}}\quad\text{for}\quad\left\lvert\mu\right\rvert\geq{%
r_{4}}.$$
Choose
$$\varepsilon<\min\left\{\dfrac{1}{2},\dfrac{\delta_{*}}{c-k},\dfrac{\Phi_{*}}{3%
\kappa_{2}}\right\}$$
(5.4)
and then choose
$$\alpha<\min\left\{\dfrac{1}{2},\left(\dfrac{\varepsilon\kappa_{1}}{2C_{R}}%
\right)^{\frac{1}{s}},\left(\dfrac{\Phi_{*}}{3C_{R}}\right)^{\frac{1}{s}},%
\dfrac{k\omega}{2r_{4}},\dfrac{k\omega}{2}\left(\dfrac{k\varepsilon}{2\tilde{C%
}_{\Phi}}\right)^{\frac{1}{2s}}\right\}.$$
(5.5)
We consider two cases.
Case 1: $1-\left\lvert\Phi(\hat{\mu})\right\rvert<\delta_{*}$. Then for $0\leq{t}\leq 1$, $\left\lvert t\varepsilon{S}\right\rvert\leq\varepsilon(c-k)\leq\delta_{*}$ and so $1-\left\lvert\Phi(\hat{\mu})+t\varepsilon{S}\right\rvert<2\delta_{*}$. Since $-\Phi^{\prime}(\hat{\mu})>0$, we have
$${\mathcal{L}}[V]\geq{S}(\alpha{x},\alpha{y})(\varepsilon\kappa_{1}-C_{R}\alpha%
^{s})\geq\dfrac{\varepsilon\kappa_{1}}{2}S(\alpha{x},\alpha{y})>0.$$
Case 2: $-1+\delta_{*}\leq\Phi(\hat{\mu})\leq 1-\delta_{*}$. Then
$${\mathcal{L}}[V]\geq{S}(\alpha{x},\alpha{y})\left(\Phi_{*}-\varepsilon\kappa_{%
2}-C_{R}\alpha^{s}\right)\geq\dfrac{\Phi_{*}}{3}S(\alpha{x},\alpha{y})>0.$$
Therefore, ${\mathcal{L}}[V]>0$ on $\mathbb{R}^{3}$, that is, $V$ is a super-solution. Next we prove that $v<V$ on $\mathbb{R}^{3}$, that is, for any $j$ and any $(x,y)\in\overline{\Omega_{j}}$,
$$\Phi\left(\hat{\mu}\right)+\varepsilon{S}(\alpha{x},\alpha{y})>\Phi\left(%
\dfrac{k}{c}(z-a_{j}x-b_{j}y)\right).$$
To simplify the notation, we drop the subscript $j$ in $a_{j}$ and $b_{j}$. We consider two cases.
Case 1: $\hat{\mu}\leq\dfrac{k}{c}(z-ax-by)$. Then it is clear that
$$V(x,y,z)>\Phi(\hat{\mu})\geq\Phi\left(\dfrac{k}{c}(z-ax-by)\right).$$
Case 2: $\hat{\mu}=\dfrac{z-\alpha^{-1}\varphi(\alpha{x},\alpha{y})}{\sqrt{1+\left%
\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}>\dfrac{k}{c}(z-ax-by)$. We insert the term $ax+by$ on the left-hand side and rearrange the inequality as follows.
$$\dfrac{z-ax-by-\alpha^{-1}(\varphi(\alpha{x},\alpha{y})-a\alpha{x}-b\alpha{y})%
}{\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}>%
\dfrac{k}{c}(z-ax-by),$$
$$\left(\dfrac{c}{\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right%
\rvert^{2}}}-k\right)(z-ax-by)>\dfrac{c(\varphi(\alpha{x},\alpha{y})-a\alpha{x%
}-b\alpha{y})}{\alpha\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})%
\right\rvert^{2}}},$$
that is,
$$S(\alpha{x},\alpha{y})(z-ax-by)>\dfrac{c(\varphi-h)(\alpha{x},\alpha{y})}{%
\alpha\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}.$$
By the definition of $\omega$, we have
$$z-ax-by>\dfrac{c\omega}{\alpha\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},%
\alpha{y})\right\rvert^{2}}}\geq\dfrac{c\omega}{\alpha}.$$
On the other hand, since $\varphi>h$ and $\Phi$ is decreasing, we have
$$\hat{\mu}<\dfrac{z-ax-by}{\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}\qquad\text{and thus}\qquad\Phi(\hat{\mu})\geq\Phi\left(\dfrac{z-ax-by}{\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}\right).$$
Now
$$\begin{split}&\displaystyle\quad\,\,V(x,y,z)-\Phi\left(\dfrac{k}{c}(z-ax-by)%
\right)\\
&\displaystyle\geq\Phi\left(\dfrac{z-ax-by}{\sqrt{1+\left\lvert\nabla\varphi(%
\alpha{x},\alpha{y})\right\rvert^{2}}}\right)-\Phi\left(\dfrac{k}{c}(z-ax-by)%
\right)+\varepsilon{S}(\alpha{x},\alpha{y})\\
&\displaystyle=\dfrac{(z-ax-by)S(\alpha{x},\alpha{y})}{c}\\
&\displaystyle\qquad\cdot\int_{0}^{1}\!\Phi^{\prime}\left(\left(\dfrac{t}{%
\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}+\dfrac%
{k}{c}(1-t)\right)(z-ax-by)\right)\,dt\\
&\displaystyle\qquad+\varepsilon{S}(\alpha{x},\alpha{y})\\
\end{split}$$
Let us write, for $t\in[0,1]$,
$$\begin{split}\displaystyle\mu_{*}=\mu_{*}(t)&\displaystyle=\left(\dfrac{t}{%
\sqrt{1+\left\lvert\nabla\varphi(\alpha{x},\alpha{y})\right\rvert^{2}}}+\dfrac%
{k}{c}(1-t)\right)(z-ax-by)\\
&\displaystyle=\dfrac{k+tS(\alpha{x},\alpha{y})}{c}(z-ax-by).\end{split}$$
We know from Lemma 4.5 that $S>0$, thus $\dfrac{z-ax-by}{c}\leq\dfrac{\mu_{*}}{k}$ and $\mu_{*}\geq\dfrac{k\omega}{\alpha}$. Now
$$\begin{split}&\displaystyle\quad\,\,V(x,y,z)-\Phi\left(\dfrac{k}{c}(z-ax-by)%
\right)\\
&\displaystyle\geq{S}(\alpha{x},\alpha{y})\left(\varepsilon-\dfrac{1}{k}\int_{%
0}^{1}\!\mu_{*}(t)\Phi^{\prime}(\mu_{*}(t))\,dt\right)\\
&\displaystyle\geq{S}(\alpha{x},\alpha{y})\left(\varepsilon-\dfrac{1}{k}\sup_{%
\left\lvert\mu\right\rvert\geq\frac{k\omega}{\alpha}}\left\lvert\mu\Phi^{%
\prime}(\mu)\right\rvert\right)\\
&\displaystyle\geq\dfrac{\varepsilon}{2}S(\alpha{x},\alpha{y})\\
&\displaystyle>0\end{split}$$
This concludes the proof.
∎
6. Proof of the main result
Since we have constructed a sub-solution and a super-solution, it suffices to carry out a monotone iteration argument; see [51], [50].
Proof of Theorem 1.2.
Write $L(u)=(-\Delta)^{s}{u}-cu_{z}+Ku$ where $K>\left\|{f^{\prime}}\right\|_{L^{\infty}([-1,1])}$ is so large that $\left\lvert\xi\right\rvert^{2s}-c\xi_{3}+K>0$ for any $\xi\in\mathbb{R}^{3}$.
Using Lemma A.7, we construct a sequence $\left\{w_{m}\right\}\subset{C}^{2s+\beta}(\mathbb{R}^{3})\cap{L}^{\infty}(%
\mathbb{R}^{3})$ for some $\beta\in(0,1)$ as follows.
Let $w_{0}=v$. For any $m\geq 1$, let $w_{m}$ be the unique solution of the linear equation
$$L(w_{m})=f(w_{m-1})+Kw_{m-1}.$$
We will prove by induction that
$$-1<v=w_{0}<w_{1}<\cdots<w_{m-1}<w_{m}<\cdots<V<1$$
for all $m\geq 1$, using the strong maximum principle stated in Lemma A.6.
If we define $g(u)=f(u)+Ku$, then $g^{\prime}(u)>0$ on $[-1,1]$, so that $f(u_{1})+Ku_{1}-f(u_{2})-Ku_{2}>0$ whenever $u_{1}>{u_{2}}$.
For $m=1$, we have for each $1\leq{j}\leq{N}$, $L(v_{j})=(-\Delta)^{s}{v_{j}}-c(v_{j})_{z}-f(v_{j})+f(v_{j})+Kv_{j}=f(v_{j})+%
Kv_{j}$ and so
$$L(w_{1}-v_{j})=f(v)+Kv-f(v_{j})-Kv_{j}\geq 0$$
since $v=\max_{j}v_{j}\geq{v_{j}}$ and
$$L(V-w_{1})\geq{f}(V)+KV-f(v)-Kv>0.$$
Hence, $v<w_{1}<V$ unless $w_{1}=v$, in which case $v$ itself would be a (non-smooth) solution of ${\mathcal{L}}[w_{1}]=0$, which is impossible by the regularity theory for $\mathcal{L}$.
By the Schauder estimate (A.1), since $v\in{C}^{0,1}(\mathbb{R}^{3})\subset{C}^{0,\frac{3}{2}-s}(\mathbb{R}^{3})$, $w_{1}\in{C}^{2,s-\frac{1}{2}}(\mathbb{R}^{3})$.
In particular, $(-\Delta)^{s}{w_{1}}$ is well-defined by the singular integral.
For any $m\geq 2$, if $w_{m-2}<w_{m-1}<V$, then
$$L(w_{m}-w_{m-1})=f(w_{m-1})+Kw_{m-1}-f(w_{m-2})-Kw_{m-2}\geq 0$$
and
$$L(V-w_{m})\geq{f}(V)+KV-f(w_{m-1})-Kw_{m-1}>0.$$
Hence, $w_{m-1}<w_{m}<V$ unless $w_{m-1}=w_{m}$, in which case ${\mathcal{L}}[w_{m}]=0$.
Using the Schauder estimate (A.1) and an interpolation inequality, we have $w_{m}\in{C}^{2,s-\frac{1}{2}}(\mathbb{R}^{3})$ and
$$\begin{split}\displaystyle\left\|{w_{m}}\right\|_{C^{2,s-\frac{1}{2}}(\mathbb{%
R}^{3})}&\displaystyle\leq\left\|{f(w_{m-1})}\right\|_{C^{0,\frac{3}{2}-s}(%
\mathbb{R}^{3})}+K\left\|{w_{m-1}}\right\|_{C^{0,\frac{3}{2}-s}(\mathbb{R}^{3}%
)}\\
&\displaystyle\leq\left\|{f}\right\|_{L^{\infty}({[-1,1]})}+2K\left\|{w_{m-1}}%
\right\|_{C^{0,\frac{3}{2}-s}(\mathbb{R}^{3})}\\
&\displaystyle\leq\tilde{C}_{8}\left(s,K,\left\|{f}\right\|_{L^{\infty}({[-1,1%
]})}\right)+\dfrac{1}{2}\left\|{w_{m-1}}\right\|_{C^{2,s-\frac{1}{2}}(\mathbb{%
R}^{3})}\end{split}$$
Iterating this yields $\left\|{w_{m}}\right\|_{C^{2,s-\frac{1}{2}}(\mathbb{R}^{3})}\leq 2\tilde{C}_{8%
}+\dfrac{1}{2^{m-2}}\left\|{w_{1}}\right\|_{C^{2,s-\frac{1}{2}}(\mathbb{R}^{3})}$.
Therefore, the sequence $\left\{w_{m}\right\}_{m\geq 1}$ is monotone increasing and uniformly bounded in $C^{2,s-\frac{1}{2}}(\mathbb{R}^{3})$.
Hence the pointwise limit
$$u(x)=\lim_{m\to\infty}w_{m}(x)$$
exists and is unique.
By the Arzelà–Ascoli theorem, we can extract from each subsequence of $\left\{w_{m}\right\}$ a further subsequence converging in $C^{2}(\mathbb{R}^{3})$.
Since $u$ is unique, we conclude that
$$w_{m}\to{u}\qquad\text{in}~{}C^{2}(\mathbb{R}^{3}).$$
Clearly, $\mathcal{L}[u]=0$ in $\mathbb{R}^{3}$ and the proof is now complete.
∎
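The monotone iteration above lends itself to a simple numerical illustration. The sketch below is a toy analogue only: it replaces the fractional operator by the classical second difference on a bounded interval with Dirichlet data $\mp 1$, uses the illustrative bistable nonlinearity $f(u)=u-u^{3}$, and the grid parameters, iteration count and the constant $K=4>\sup_{[-1,1]}\lvert f'\rvert$ are all arbitrary choices, not taken from the paper. Starting from the sub-solution $w_{0}\equiv-1$, each linear solve produces a pointwise larger iterate trapped below the super-solution, exactly as in the proof.

```python
import math

def f(u):
    """Illustrative bistable nonlinearity with f'(+-1) = -2 < 0."""
    return u - u**3

def solve_tridiag(diag, off, d):
    """Thomas algorithm for a symmetric tridiagonal system with constant
    diagonal `diag` and constant off-diagonal `off`."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = off / diag, d[0] / diag
    for i in range(1, n):
        denom = diag - off * cp[i - 1]
        cp[i] = off / denom
        dp[i] = (d[i] - off * dp[i - 1]) / denom
    w = [0.0] * n
    w[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        w[i] = dp[i] - cp[i] * w[i + 1]
    return w

# grid on (-L, L) with Dirichlet data u(-L) = -1, u(L) = +1
L, n = 10.0, 399
h = 2.0 * L / (n + 1)
K = 4.0                                 # K > sup_{[-1,1]} |f'| = 2
diag, off = 2.0 / h**2 + K, -1.0 / h**2

w = [-1.0] * n                          # sub-solution w_0 = -1
for _ in range(300):
    d = [f(wi) + K * wi for wi in w]    # right-hand side f(w_{m-1}) + K w_{m-1}
    d[0] += (-1.0) / h**2               # boundary value at -L
    d[-1] += (+1.0) / h**2              # boundary value at +L
    w_new = solve_tridiag(diag, off, d)
    # monotonicity w_{m-1} <= w_m, as guaranteed by the maximum principle
    assert all(a <= b + 1e-12 for a, b in zip(w, w_new))
    w = w_new

# the iterates stay trapped between the sub- and super-solution
assert all(-1.0 - 1e-9 <= wi <= 1.0 + 1e-9 for wi in w)
```

The assertions inside the loop check exactly the two facts used in the proof: monotone increase of the iterates and confinement between $-1$ and $1$.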
Appendix A Properties of the fractional Laplacian
Here we list some elementary properties of the fractional Laplacian mentioned in [58], [12], [49], [34].
A.1. Basic properties
It is convenient to have the following
Lemma A.1.
For any $a\in\mathbb{R}$, we have
$$\int_{-\infty}^{\infty}\!\dfrac{dx}{(x^{2}+a^{2})^{1+s}}=\dfrac{\sqrt{\pi}\Gamma(\frac{1}{2}+s)}{\Gamma(1+s)}\dfrac{1}{\left\lvert a\right\rvert^{1+2s}}=\dfrac{C_{1,s}}{C_{2,s}}\dfrac{1}{\left\lvert a\right\rvert^{1+2s}}.$$
As a consequence,
$$\int_{-\infty}^{\infty}\!\dfrac{dx}{(x^{2}+a^{2})^{\frac{n+1}{2}+s}}=\dfrac{\sqrt{\pi}\Gamma(\frac{n}{2}+s)}{\Gamma(\frac{n+1}{2}+s)}\dfrac{1}{\left\lvert a\right\rvert^{n+2s}}=\dfrac{C_{n,s}}{C_{n+1,s}}\dfrac{1}{\left\lvert a\right\rvert^{n+2s}}$$
for any integer $n\geq 1$.
Proof.
Without loss of generality, we can assume that $a=1$ by a simple scaling.
Using the substitution $y=1/(x^{2}+1)$, we have $x=\sqrt{1/y-1}$ and $dx=-(1/2)y^{-3/2}(1-y)^{-1/2}\,dy$, so
$$\begin{split}\displaystyle\int_{-\infty}^{\infty}\!\dfrac{dx}{(x^{2}+1)^{1+s}}&\displaystyle=2\int_{0}^{\infty}\!\dfrac{dx}{(x^{2}+1)^{1+s}}\\
&\displaystyle=\int_{0}^{1}\!y^{s-\frac{1}{2}}(1-y)^{-\frac{1}{2}}\,dy\\
&\displaystyle=\mathrm{B}\left(\dfrac{1}{2},\dfrac{1}{2}+s\right)\\
&\displaystyle=\dfrac{\sqrt{\pi}\Gamma\left(\frac{1}{2}+s\right)}{\Gamma(1+s)}%
,\end{split}$$
where $\mathrm{B}$ denotes the Beta function.
The second identity follows from the first by replacing $s$ with $s+\frac{n-1}{2}$.
∎
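As a quick numerical sanity check of the first identity (no part of the proof; the values $s=0.6$ and $a=2$ below are arbitrary test values), one can compare a midpoint-rule quadrature, after the substitution $x=a\tan t$, with the Gamma-function expression:

```python
import math

# After x = a*tan(t), the first identity of Lemma A.1 reads
#   int_{-inf}^{inf} dx/(x^2+a^2)^{1+s} = |a|^{-1-2s} * int_{-pi/2}^{pi/2} cos(t)^{2s} dt.
s, a, N = 0.6, 2.0, 200_000
h = math.pi / N
quad = sum(math.cos(-math.pi / 2 + (k + 0.5) * h) ** (2 * s) for k in range(N)) * h
numeric = quad * abs(a) ** (-1 - 2 * s)
exact = math.sqrt(math.pi) * math.gamma(0.5 + s) / math.gamma(1 + s) * abs(a) ** (-1 - 2 * s)
assert abs(numeric - exact) / exact < 1e-4
```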
Corollary A.1.
If $u(x)=u(x_{1},\dots,x_{n})$ only depends on $x_{1}$, then
$$(-\Delta)^{s}{u}(x_{1})=(-\partial^{2})^{s}{u}(x_{1}).$$
Proof.
Clearly, by induction we have
$$\begin{split}\displaystyle(-\Delta)^{s}{u}(x_{1})&\displaystyle=C_{n,s}%
\textnormal{P.V.}\,\int_{\mathbb{R}^{n}}\!\dfrac{u(x_{1})-u(x_{1}+\xi_{1})}{(%
\xi_{1}^{2}+\cdots+\xi_{n}^{2})^{\frac{n}{2}+s}}\,d{\xi_{n}}\cdots{d}{\xi_{1}}%
\\
&\displaystyle=C_{n-1,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{n-1}}\!\dfrac{u(x%
_{1})-u(x_{1}+\xi_{1})}{(\xi_{1}^{2}+\cdots+\xi_{n-1}^{2})^{\frac{n-1}{2}+s}}%
\,d{\xi_{n-1}}\cdots{d}{\xi_{1}}\\
&\displaystyle\,\,\,\vdots\\
&\displaystyle=C_{1,s}\textnormal{P.V.}\,\int_{\mathbb{R}}\!\dfrac{u(x_{1})-u(%
x_{1}+\xi_{1})}{\left\lvert\xi_{1}\right\rvert^{1+2s}}\,d{\xi_{1}}\\
&\displaystyle=(-\partial^{2})^{s}{u}(x_{1}).\end{split}$$
∎
Lemma A.2 (Homogeneity).
For any admissible $u:\mathbb{R}^{n}\to\mathbb{R}$, $x\in\mathbb{R}^{n}$ and $a\in\mathbb{R}$,
$$(-\Delta)^{s}(u(ax))=\left\lvert a\right\rvert^{2s}(-\Delta)^{s}{u}(ax).$$
In particular, if $u$ is an even function then so is $(-\Delta)^{s}{u}$.
Proof.
This follows from the change of variables $\eta=a\xi$. Indeed,
$$\begin{split}\displaystyle(-\Delta)^{s}(u(ax))&\displaystyle=C_{n,s}%
\textnormal{P.V.}\,\int_{\mathbb{R}^{n}}\!\dfrac{u(ax)-u(ax+a\xi)}{\left\lvert%
\xi\right\rvert^{n+2s}}\,d\xi\\
&\displaystyle=C_{n,s}\left\lvert a\right\rvert^{2s}\textnormal{P.V.}\,\int_{%
\mathbb{R}^{n}}\!\dfrac{u(ax)-u(ax+a\xi)}{\left\lvert a\xi\right\rvert^{n+2s}}%
\left\lvert a\right\rvert^{n}\,d\xi\\
&\displaystyle=C_{n,s}\left\lvert a\right\rvert^{2s}\textnormal{P.V.}\,\int_{%
\mathbb{R}^{n}}\!\dfrac{u(ax)-u(ax+\eta)}{\left\lvert\eta\right\rvert^{n+2s}}%
\,d\eta\\
&\displaystyle=\left\lvert a\right\rvert^{2s}(-\Delta)^{s}{u}(ax).\end{split}$$
∎
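The scaling identity can also be checked numerically. The sketch below evaluates the unnormalized one-dimensional operator $I[u](x)=\int_{0}^{\infty}\big(2u(x)-u(x+\xi)-u(x-\xi)\big)\xi^{-1-2s}\,d\xi$ (the constant $C_{1,s}$ cancels from both sides of the lemma) for a Gaussian test function; the truncation radius, grid size and the values $s=0.75$, $a=2$, $x=0.3$ are arbitrary choices.

```python
import math

def frac_lap_1d(u, x, s, R=40.0, N=40_000):
    """Unnormalized 1-D fractional Laplacian
        I[u](x) = int_0^inf (2u(x) - u(x+xi) - u(x-xi)) xi^{-1-2s} dxi,
    computed by the midpoint rule after the substitution xi = t^2
    (which removes the singularity at xi = 0 for s < 1), plus the
    analytic tail 2u(x)R^{-2s}/(2s); the terms u(x +- xi) are
    negligible beyond xi = R for a Gaussian."""
    u0 = u(x)
    T = math.sqrt(R)
    h = T / N
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * h
        xi = t * t
        total += (2 * u0 - u(x + xi) - u(x - xi)) / xi ** (1 + 2 * s) * 2 * t
    return total * h + 2 * u0 * R ** (-2 * s) / (2 * s)

s, a, x = 0.75, 2.0, 0.3
u = lambda y: math.exp(-y * y)
lhs = frac_lap_1d(lambda y: u(a * y), x, s)    # I[u(a.)](x)
rhs = a ** (2 * s) * frac_lap_1d(u, a * x, s)  # a^{2s} I[u](ax)
assert abs(lhs - rhs) / (abs(rhs) + 1e-12) < 1e-2
```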
Lemma A.3 (Commuting with rigid motions).
Suppose $M:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is a rigid motion, that is, for $x\in\mathbb{R}^{n}$, $Mx=Ax+b$ where $A\in{O}(n)$ and $b\in\mathbb{R}^{n}$. Then
$$(-\Delta)^{s}(u(Mx))=((-\Delta)^{s}{u})(Mx).$$
Proof.
It is equivalent to showing that
$$\begin{split}&\displaystyle C_{n,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{n}}\!%
\dfrac{u(Ax+b)-u(Ax+b+A\xi)}{\left\lvert\xi\right\rvert^{n+2s}}\,d{\xi}\\
&\displaystyle\qquad=C_{n,s}\textnormal{P.V.}\,\int_{\mathbb{R}^{n}}\!\dfrac{u%
(Ax+b)-u(Ax+b+\xi)}{\left\lvert\xi\right\rvert^{n+2s}}\,d{\xi}.\end{split}$$
But since $\left\lvert A\xi\right\rvert=\left\lvert\xi\right\rvert$ and $\left\lvert\det{A}\right\rvert=1$, the result follows from a change of variable $\xi\mapsto{A}\xi$.
∎
Combining the above simple results, we obtain a useful chain rule.
Lemma A.4 (Chain rule for linear transformation).
For any admissible $u:\mathbb{R}\to\mathbb{R}$ and $x,a\in\mathbb{R}^{n}$, there holds
$$(-\Delta)^{s}(u(a\cdot{x}))=\left\lvert a\right\rvert^{2s}(-\partial^{2})^{s}{u}(a\cdot{x}).$$
Proof.
By Lemma A.2, we may assume that $\left\lvert a\right\rvert=1$. By extending $v_{1}=a\in\mathbb{R}^{n}$ to an orthonormal basis $\left\{v_{1},\dots,v_{n}\right\}$ in $\mathbb{R}^{n}$, we can construct an orthogonal matrix $A\in{O}(n)$ whose $i$-th row is the row vector $v_{i}$. In particular, $(Ax)_{1}=a\cdot{x}$. By Lemma A.3 and Corollary A.1, we have
$$\begin{split}\displaystyle(-\Delta)^{s}(u(a\cdot{x}))&\displaystyle=(-\Delta)^%
{s}(u((Ax)_{1}))\\
&\displaystyle=((-\Delta)^{s}{u})((Ax)_{1})\\
&\displaystyle=((-\partial^{2})^{s}{u})((Ax)_{1})\\
&\displaystyle=(-\partial^{2})^{s}{u}(a\cdot{x}).\end{split}$$
∎
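As an illustration of how this chain rule interacts with the body of the paper: each planar front $v_{j}(x,y,z)=\Phi\big(\frac{k}{c}(z-a_{j}x-b_{j}y)\big)$ depends only on $a\cdot(x,y,z)$ with $a=\frac{k}{c}(-a_{j},-b_{j},1)$, so Lemma A.4 gives
$$(-\Delta)^{s}v_{j}=\left(\dfrac{k}{c}\right)^{2s}\left(1+a_{j}^{2}+b_{j}^{2}\right)^{s}\,((-\partial^{2})^{s}\Phi)\left(\dfrac{k}{c}(z-a_{j}x-b_{j}y)\right).$$
In particular, under the normalization $c=k\sqrt{1+a_{j}^{2}+b_{j}^{2}}$ (implicit in the construction of the $v_{j}$), the prefactor equals $1$ and the one-dimensional profile equation for $\Phi$ transfers directly to $v_{j}$.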
A.2. Decay properties
If a function $u$ decays together with its derivatives $\nabla{u}$ and $D^{2}{u}$ at infinity, $(-\Delta)^{s}{u}$ gains a decay of order $2s$, but never better than $O\left(\left\lvert x\right\rvert^{-2s}\right)$ because of its nonlocal nature. In a more subtle case when $D^{2}{u}(x)$ does not decay in $\left\lvert x\right\rvert$, we can still get a decay of order $1$ from $\nabla{u}$. The precise statement is as follows.
Lemma A.5 (Decay of $(-\Delta)^{s}{u}$).
Suppose $u\in L^{\infty}(\mathbb{R}^{n})$ and $u$ is $C_{s}^{2}$ outside a set $E$ containing the origin. For any $x\in\mathbb{R}^{n}$ with ${\rm dist}\,(x,E)\gg{R}>r>0$, we have
$$\begin{split}&\displaystyle\quad\,\,\left\lvert(-\Delta)^{s}{u}(x)\right\rvert%
\\
&\displaystyle\leq{C_{n,s}}\Bigg{(}\dfrac{r^{2-2s}}{4(1-s)}\left\|{u}\right\|_%
{\dot{C}^{2}\left(B_{r}(x)\right)}+\\
&\displaystyle\qquad\,\,+\dfrac{1}{2s-1}\left(\dfrac{1}{r^{2s-1}}-\dfrac{1}{R^%
{2s-1}}\right)\left\|{u}\right\|_{\dot{C}^{1}\left(B_{R}\backslash{B_{r}}(x)%
\right)}+\dfrac{1}{sR^{2s}}\left\|{u}\right\|_{L^{\infty}(\mathbb{R}^{n})}%
\Bigg{)}.\end{split}$$
where
$$\left\|{u}\right\|_{\dot{C}^{1}(\Omega)}=\sum_{j=1}^{n}\sup_{x\in{\Omega}}%
\left\lvert\dfrac{\partial{u}}{\partial{x_{j}}}(x)\right\rvert,\quad\text{and}%
\quad\left\|{u}\right\|_{\dot{C}^{2}(\Omega)}=\sum_{i,j=1}^{n}\sup_{x\in{%
\Omega}}\left\lvert\dfrac{\partial^{2}{u}}{\partial{x_{i}}\partial{x_{j}}}(x)%
\right\rvert.$$
We think of $E$ as the set of all edges of the pyramid projected on $\mathbb{R}^{2}$. In particular, by taking $r=R=1$, $r=R=\dfrac{\left\lvert x\right\rvert}{2}$ and $r=1$, $R=\dfrac{\left\lvert x\right\rvert}{2}$ respectively, we have
Corollary A.2.
There exists a constant $C_{\Delta}=C_{\Delta}(n,s)$ such that
$$\left\lvert(-\Delta)^{s}{u}(x)\right\rvert\leq{C_{\Delta}}\left(\left\|{u}%
\right\|_{\dot{C}^{2}\left(B_{1}(x)\right)}+\left\|{u}\right\|_{L^{\infty}(%
\mathbb{R}^{n})}\right),$$
$$\left\lvert(-\Delta)^{s}{u}(x)\right\rvert\leq{C_{\Delta}}\left(\left\|{u}%
\right\|_{\dot{C}^{2}\left(B_{\frac{\left\lvert x\right\rvert}{2}}(x)\right)}%
\left\lvert x\right\rvert^{2}+\left\|{u}\right\|_{L^{\infty}(\mathbb{R}^{n})}%
\right)\left\lvert x\right\rvert^{-2s}$$
and
$$\left\lvert(-\Delta)^{s}{u}(x)\right\rvert\leq{C_{\Delta}}\left(\left\|{u}%
\right\|_{\dot{C}^{2}\left(B_{1}(x)\right)}+\left\|{u}\right\|_{\dot{C}^{1}%
\left(B_{\frac{\left\lvert x\right\rvert}{2}}(x)\right)}+\left\|{u}\right\|_{L%
^{\infty}(\mathbb{R}^{n})}\left\lvert x\right\rvert^{-2s}\right).$$
Proof of Lemma A.5.
Integrating in $B_{r}(0)$, $B_{R}\backslash{B_{r}}(0)$ and $B_{R}(0)^{c}$, we have
$$(-\Delta)^{s}{u}(x)=C_{n,s}(I_{1}+I_{2}+I_{3})$$
where
$$\begin{split}\displaystyle I_{1}&\displaystyle=\int_{B_{r}(0)}\!\dfrac{u(x)-u(%
x+\xi)-\nabla{u}(x)\cdot\xi}{\left\lvert\xi\right\rvert^{n+2s}}\,d\xi\\
\displaystyle I_{2}&\displaystyle=\int_{B_{R}\backslash{B_{r}}(0)}\!\dfrac{u(x%
)-u(x+\xi)}{\left\lvert\xi\right\rvert^{n+2s}}\,d\xi\\
\displaystyle I_{3}&\displaystyle=\int_{B_{R}^{c}}\!\dfrac{u(x)-u(x+\xi)}{%
\left\lvert\xi\right\rvert^{n+2s}}\,d\xi\end{split}$$
Using the second-order Taylor expansion
$$u(x+\xi)=u(x)+\nabla{u}(x)\cdot\xi+\dfrac{1}{2}\xi^{T}D^{2}{u}(\tilde{x})\xi$$
where $\tilde{x}\in B_{r}(x)$, we estimate
$$\begin{split}\displaystyle\left\lvert I_{1}\right\rvert&\displaystyle\leq\int_%
{B_{r}(0)}\!\dfrac{\left\|{D^{2}{u}(\tilde{x})}\right\|_{\infty}\left\lvert\xi%
\right\rvert^{2}}{2\left\lvert\xi\right\rvert^{n+2s}}\,d\xi\\
&\displaystyle\leq\dfrac{1}{2}\left\|{u}\right\|_{\dot{C}^{2}({B_{r}(x)})}\int%
_{B_{r}(0)}\!\dfrac{d\xi}{\left\lvert\xi\right\rvert^{n+2s-2}}\\
&\displaystyle\leq\dfrac{1}{2}\left\|{u}\right\|_{\dot{C}^{2}({B_{r}(x)})}\int%
_{0}^{r}\!\dfrac{d\rho}{\rho^{2s-1}}\\
&\displaystyle\leq\dfrac{r^{2-2s}}{4(1-s)}\left\|{u}\right\|_{\dot{C}^{2}({B_{%
r}(x)})}.\end{split}$$
For the second term, we have
$$\begin{split}\displaystyle\left\lvert I_{2}\right\rvert&\displaystyle\leq\int_%
{B_{R}\backslash{B_{r}}(0)}\!\dfrac{1}{\left\lvert\xi\right\rvert^{n+2s}}\int_%
{0}^{1}\!\left\lvert\nabla{u}(x+t\xi)\cdot\xi\right\rvert\,d{t}d{\xi}\\
&\displaystyle\leq\left\|{\nabla{u}}\right\|_{L^{\infty}(B_{R}\backslash{B_{r}%
}(x))}\int_{B_{R}\backslash{B_{r}}(0)}\!\dfrac{d\xi}{\left\lvert\xi\right%
\rvert^{n-1+2s}}\\
&\displaystyle\leq\left\|{u}\right\|_{\dot{C}^{1}(B_{R}(x))}\int_{r}^{R}\!%
\dfrac{d\rho}{\rho^{2s}}\\
&\displaystyle\leq\dfrac{1}{2s-1}\left(\dfrac{1}{r^{2s-1}}-\dfrac{1}{R^{2s-1}}%
\right)\left\|{u}\right\|_{\dot{C}^{1}(B_{R}(x))}\end{split}$$
The last term is simpler and is estimated as
$$\begin{split}\displaystyle\left\lvert I_{3}\right\rvert&\displaystyle\leq 2%
\left\|{u}\right\|_{L^{\infty}(\mathbb{R}^{n})}\int_{B_{R}(0)^{c}}\!\dfrac{d%
\xi}{\left\lvert\xi\right\rvert^{n+2s}}\\
&\displaystyle\leq 2\left\|{u}\right\|_{L^{\infty}(\mathbb{R}^{n})}\int_{R}^{%
\infty}\!\dfrac{d\rho}{\rho^{1+2s}}\\
&\displaystyle\leq\dfrac{1}{sR^{2s}}\left\|{u}\right\|_{L^{\infty}(\mathbb{R}^%
{n})}.\end{split}$$
This completes the proof.
∎
A.3. The linear operator
Consider the linear operator $L_{0}$ acting on functions $u\in{C_{s}^{2}}(\mathbb{R}^{n})$ by
$$L_{0}(u)=(-\Delta)^{s}{u}+b_{0}\cdot\nabla{u}+c_{0}u,$$
where $b_{0}\in\mathbb{R}^{n}$ and $c_{0}\in\mathbb{R}$.
Lemma A.6 (Strong maximum principle).
Suppose $u\in{C}_{s}^{2}(\mathbb{R}^{n})$, $L_{0}(u)\geq 0$, $u(x)\to 0$ as $\left\lvert x\right\rvert\to\infty$ and $c_{0}\geq 0$.
Then either $u(x)>0$, $\forall x\in\mathbb{R}^{n}$, or $u(x)\equiv 0$, $\forall x\in\mathbb{R}^{n}$.
Proof.
Since $u(x)\to 0$ as $\left\lvert x\right\rvert\to\infty$, either $u>0$ everywhere, or $\inf_{\mathbb{R}^{n}}u\leq 0$ is attained at some $x_{1}\in\mathbb{R}^{n}$. Suppose the latter holds and $u\not\equiv 0$. At the global minimum $x_{1}$ we have $(-\Delta)^{s}{u}(x_{1})<0$ and $\nabla{u}(x_{1})=0$. Since $u(x_{1})\leq 0$, we get $c_{0}u(x_{1})\leq 0$ and hence $L_{0}(u)(x_{1})\leq(-\Delta)^{s}{u}(x_{1})<0$, a contradiction.
∎
Lemma A.7 (Solvability of the linear equation).
Assume that $c_{0}$ is so large that the symbol $\left\lvert\xi\right\rvert^{2s}+b_{0}\cdot\xi+c_{0}>0$ for any $\xi\in\mathbb{R}^{n}$. Let $\beta\in(0,1)$ be such that $2<2s+\beta<3$.
Then there exists a constant $C_{L_{0}}=C_{L_{0}}(n,s,\beta,\left\lvert b_{0}\right\rvert,c_{0})$ such that for any $f_{0}\in{C}^{\beta}(\mathbb{R}^{n})$,
there is a unique solution $u\in{C}^{2,2s+\beta-2}(\mathbb{R}^{n})$ to the linear equation
$$L_{0}(u)=f_{0},\quad\text{in}~{}\mathbb{R}^{n},$$
satisfying the Schauder estimate
$$\left\|{u}\right\|_{C^{2,2s+\beta-2}(\mathbb{R}^{n})}\leq{C}_{L_{0}}\left\|{f_%
{0}}\right\|_{C^{\beta}(\mathbb{R}^{n})}.$$
(A.1)
The proof involves taking a Fourier transform and a density argument, see [43], [50].
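The Fourier-multiplier step behind the solvability can be illustrated in a discrete, periodic setting. The sketch below only illustrates the division by the symbol, not the whole-space lemma; $s$, $b$, $c_{0}$, the grid size and the manufactured solution $\cos x$ are arbitrary choices (a single Fourier mode, for which the discrete solve is exact up to rounding).

```python
import numpy as np

# Solve L0(u) = (-d^2)^s u + b u' + c0 u = f0 on the 2*pi-periodic grid by
# dividing by the Fourier symbol |k|^{2s} + i b k + c0 on integer modes k.
s, b, c0 = 0.7, 1.5, 2.0
n = 256
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)               # integer frequencies
symbol = np.abs(k) ** (2 * s) + 1j * b * k + c0

# manufactured right-hand side for u(x) = cos x: since |+-1|^{2s} = 1,
#   L0(cos x) = (1 + c0) cos x - b sin x
f0 = (1 + c0) * np.cos(x) - b * np.sin(x)
u = np.fft.ifft(np.fft.fft(f0) / symbol).real
assert np.max(np.abs(u - np.cos(x))) < 1e-10
```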
Appendix B Decay of the second order derivative of the profile
Proposition B.1.
Let $\Phi$ be as in Theorem 1.1. As $\left\lvert\mu\right\rvert\to\infty$, we have
$$\Phi^{\prime\prime}(\mu)=O\left(\left\lvert\mu\right\rvert^{-1-2s}\right).$$
Corollary B.1.
For $1/2\leq{s}<1$, there exists a constant $C_{\Phi}=C_{\Phi}(s)$ such that
$$\sup_{\mu\in\mathbb{R}}\,\left\lvert\mu^{i}\Phi^{(j)}(\mu)\right\rvert\leq{C}_%
{\Phi}$$
for $0\leq{i}\leq{j}$ and $1\leq{j}\leq 2$.
Equivalently, if $\Phi_{\alpha}(\mu)=\Phi(\alpha^{-1}\mu)$, then
$$\sup_{{\mu}\in\mathbb{R}}\,\left\lvert\mu\Phi_{\alpha}^{\prime}(\mu)\right%
\rvert,\,\sup_{{\mu}\in\mathbb{R}}\,\left\lvert\mu^{2}\Phi_{\alpha}^{\prime%
\prime}(\mu)\right\rvert\leq{C_{\Phi}},$$
$$\sup_{{\mu}\in\mathbb{R}}\,\left\lvert\Phi_{\alpha}^{\prime}(\mu)\right\rvert,%
\,\sup_{{\mu}\in\mathbb{R}}\,\left\lvert\mu\Phi_{\alpha}^{\prime\prime}(\mu)%
\right\rvert\leq{C_{\Phi}}\alpha^{-1},$$
and
$$\sup_{{\mu}\in\mathbb{R}}\,\left\lvert\Phi_{\alpha}^{\prime\prime}(\mu)\right%
\rvert\leq{C_{\Phi}}\alpha^{-2}.$$
The proof relies on a comparison with the almost-explicit layer solution [13], see also [34]. For $t>0$ and $\mu\in\mathbb{R}$, define
$$p_{t}(\mu)=\dfrac{1}{\pi}\int_{0}^{\infty}\!\cos(\mu{r})e^{-tr^{2s}}\,dr$$
and
$$v_{t}(\mu)=-1+2\int_{-\infty}^{\mu}\!p_{t}(r)\,dr.$$
They are smooth functions on $\mathbb{R}$ and $v_{t}$ is a layer solution of
$$(-\partial^{2})^{s}{v_{t}}(\mu)=f_{t}(v_{t}(\mu)),\quad\forall\mu\in\mathbb{R}$$
where $f_{t}\in C^{2}([-1,1])$ is an odd bistable nonlinearity satisfying $f_{t}^{\prime}(\pm 1)=-\dfrac{1}{t}$. Moreover, the asymptotic behaviors of $v_{t}^{\prime}$ and $v_{t}^{\prime\prime}$ are given by
$$\lim_{\left\lvert\mu\right\rvert\to\infty}\left\lvert\mu\right\rvert^{1+2s}v_{t}^{\prime}(\mu)=\dfrac{4ts\Gamma(2s)\sin(\pi{s})}{\pi}>0$$
and
$$\lim_{\mu\to\pm\infty}\left\lvert\mu\right\rvert^{2+2s}v_{t}^{\prime\prime}(\mu)=\mp\dfrac{4t^{1-\frac{1}{s}}s(1+2s)\Gamma(2s)\sin(\pi{s})}{\pi}.$$
Proof of Proposition B.1.
Differentiating (1.3) twice, we see that $\Phi^{\prime\prime}$ satisfies
$$(-\Delta)^{s}{\Phi^{\prime\prime}(\mu)}-k\Phi^{\prime\prime\prime}(\mu)=f^{%
\prime}(\Phi(\mu))\Phi^{\prime\prime}(\mu)+f^{\prime\prime}(\Phi(\mu))(\Phi^{%
\prime}(\mu))^{2}.$$
Let $w_{M,t}(\mu)=Mv_{t}^{\prime}(\mu)+\Phi^{\prime\prime}(\mu)$. Then
$$\begin{split}&\displaystyle\quad\ (-\Delta)^{s}{w_{M,t}}(\mu)-kw_{M,t}^{\prime%
}(\mu)+\dfrac{4}{t}w_{M,t}(\mu)\\
&\displaystyle=Mv_{t}^{\prime}(\mu)\left(\dfrac{2}{t}+f_{t}^{\prime}(v_{t}(\mu%
))\right)+M\left(\dfrac{v_{t}^{\prime}(\mu)}{t}-kv_{t}^{\prime\prime}(\mu)%
\right)\\
&\displaystyle\quad+\left(\dfrac{Mv_{t}^{\prime}(\mu)}{t}+f^{\prime\prime}(%
\Phi(\mu))(\Phi^{\prime}(\mu))^{2}\right)+\Phi^{\prime\prime}(\mu)\left(\dfrac%
{4}{t}+f^{\prime}(\Phi(\mu))\right)\end{split}$$
Using the facts that $f_{t}^{\prime}(\pm 1)=-\dfrac{1}{t}$, $f^{\prime}(\pm 1)<0$, $\displaystyle\lim_{\mu\to\pm\infty}v_{t}(\mu)=\pm 1$ and $\displaystyle\lim_{\mu\to\pm\infty}\Phi(\mu)=\mp 1$, we can find large $T\geq 1$ and $\tilde{r}_{1}\geq 1$ such that
$$\dfrac{2}{T}+f_{T}^{\prime}(v_{T}(\mu))>0\quad\text{ and }\quad\dfrac{4}{T}+f^{\prime}(\Phi(\mu))<0,\quad\forall\left\lvert\mu\right\rvert\geq\tilde{r}_{1}.$$
By the positivity of $v_{T}^{\prime}(\mu)$, the boundedness of $f^{\prime\prime}$ and the asymptotic behaviors
$$v_{T}^{\prime}(\mu)=O\left(\left\lvert\mu\right\rvert^{-1-2s}\right),\quad v_{%
T}^{\prime\prime}(\mu)=O\left(\left\lvert\mu\right\rvert^{-2-2s}\right)\text{ %
and }\quad\Phi^{\prime}(\mu)=O\left(\left\lvert\mu\right\rvert^{-1-2s}\right),$$
there exist large numbers $M_{1}\geq 1$ and $\tilde{r}_{2}\geq\tilde{r}_{1}$ such that if $M\geq M_{1}$, then
$$\dfrac{v_{T}^{\prime}(\mu)}{T}-kv_{T}^{\prime\prime}(\mu)>0\quad\text{ and }%
\quad\dfrac{M_{1}v_{T}^{\prime}(\mu)}{T}+f^{\prime\prime}(\Phi(\mu))(\Phi^{%
\prime}(\mu))^{2}>0,\quad\forall\left\lvert\mu\right\rvert\geq\tilde{r}_{2}.$$
Therefore, for $M\geq M_{1}$ and $\mu\in\left\{\left\lvert\mu\right\rvert\geq\tilde{r}_{2}\right\}\cap\left\{\Phi^{\prime\prime}<0\right\}$ there holds
$$(-\partial^{2})^{s}{w_{M,T}(\mu)}-kw_{M,T}^{\prime}(\mu)+\dfrac{4}{T}w_{M,T}(\mu)>0.$$
Since $v_{T}^{\prime}>0$, there exists $M_{2}\geq M_{1}$ such that
$$w_{M_{2},T}(\mu)=M_{2}v_{T}^{\prime}(\mu)\geq 1>0,\quad\forall\left\lvert\mu%
\right\rvert\leq\tilde{r}_{2}+1.$$
Now we argue by the maximum principle that $w_{M_{2},T}(\mu)\geq 0$ for all $\mu\in\mathbb{R}$. Suppose on the contrary that
$$\inf_{\mu\in\mathbb{R}}w_{M_{2},T}(\mu)<0.$$
Since $w_{M_{2},T}(\mu)$ decays as $\left\lvert\mu\right\rvert\to\infty$, the infimum is attained at some $\tilde{\mu}\in\mathbb{R}$. But then
$$(-\partial^{2})^{s}{w_{M_{2},T}}(\tilde{\mu})<0,\quad w_{M_{2},T}^{\prime}(%
\tilde{\mu})=0\quad\text{ and }\quad w_{M_{2},T}(\tilde{\mu})<0,$$
which yields
$$(-\partial^{2})^{s}{w_{M_{2},T}}(\tilde{\mu})-kw_{M_{2},T}^{\prime}(\tilde{\mu%
})+\dfrac{4}{T}w_{M_{2},T}(\tilde{\mu})<0.$$
On the other hand, we see that $\left\lvert\tilde{\mu}\right\rvert>\tilde{r}_{2}+1$ by the choice of $M_{2}$ and $\Phi^{\prime\prime}(\tilde{\mu})<0$ by the definition of $w_{M_{2},T}$, giving
$$(-\partial^{2})^{s}{w_{M_{2},T}}(\tilde{\mu})-kw_{M_{2},T}^{\prime}(\tilde{\mu%
})+\dfrac{4}{T}w_{M_{2},T}(\tilde{\mu})>0,$$
a contradiction. Hence,
$$M_{2}v_{T}^{\prime}(\mu)+\Phi^{\prime\prime}(\mu)\geq 0,\quad\forall\mu\in%
\mathbb{R}.$$
We can now repeat the whole argument, replacing $\Phi^{\prime\prime}(\mu)$ by $-\Phi^{\prime\prime}(\mu)$, to obtain
$$M_{2}v_{T}^{\prime}(\mu)-\Phi^{\prime\prime}(\mu)\geq 0,\quad\forall\mu\in%
\mathbb{R}.$$
Now, for $\mu\neq 0$,
$$\left\lvert\Phi^{\prime\prime}(\mu)\right\rvert\leq M_{2}v_{T}^{\prime}(\mu)%
\leq C\left\lvert\mu\right\rvert^{-1-2s}$$
for some constant $C$. This finishes the proof.
∎
References
[1]
G. Alberti,
Variational models for phase transitions, an approach via $\Gamma$-convergence.
Calculus of variations and partial differential equations (Pisa, 1996), 95–114, Springer, Berlin, 2000.
[2]
S. M. Allen, J. W. Cahn,
A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening.
Acta Metallurgica 27 (1979), no. 6, 1085–1095.
[3]
D. G. Aronson, H. F. Weinberger,
Multidimensional nonlinear diffusion arising in population genetics.
Adv. in Math. 30 (1978), no. 1, 33–76.
[4]
P. W. Bates, X. Chen, A. J. J. Chmaj,
Heteroclinic solutions of a van der Waals model with indefinite nonlocal interactions.
Calc. Var. Partial Differential Equations 24 (2005), no. 3, 261–281.
[5]
P. W. Bates, P. C. Fife, X. Ren, X. Wang,
Traveling waves in a convolution model for phase transitions.
Arch. Rational Mech. Anal. 138 (1997), no. 2, 105–136.
[6]
J. Bebernes, D. Eberly,
Mathematical problems from combustion theory.
Applied Mathematical Sciences, 83. Springer-Verlag, New York, 1989. viii+177 pp. ISBN: 0-387-97104-1.
[7]
H. Berestycki, F. Hamel,
Generalized traveling waves for reaction-diffusion equations.
Perspectives in nonlinear partial differential equations, 101–123, Contemp. Math., 446, Amer. Math. Soc., Providence, RI, 2007.
[8]
J. Bertoin,
Lévy processes.
Cambridge Tracts in Mathematics, 121. Cambridge University Press, Cambridge, 1996. x+265 pp. ISBN: 0-521-56243-0.
[9]
J.-M. Bony, P. Courrège, P. Priouret,
Semi-groupes de Feller sur une variété à bord compacte et problèmes aux limites intégro-différentiels du second ordre donnant lieu au principe du maximum.
Ann. Inst. Fourier (Grenoble) 18 (1968), fasc. 2, 369–521 (1969).
[10]
C. Brändle, E. Colorado, A. de Pablo, U. Sánchez,
A concave-convex elliptic problem involving the fractional Laplacian.
Proc. Roy. Soc. Edinburgh Sect. A 143 (2013), no. 1, 39–71.
[11]
X. Cabré, E. Cinti,
Sharp energy estimates for nonlinear fractional diffusion equations.
Calc. Var. Partial Differential Equations 49 (2014), no. 1-2, 233–269.
[12]
X. Cabré, Y. Sire,
Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates.
Ann. Inst. H. Poincaré Anal. Non Linéaire 31 (2014), no. 1, 23–53.
[13]
X. Cabré, Y. Sire,
Nonlinear equations for fractional Laplacians, II: Existence, uniqueness, and qualitative properties of solutions.
Trans. Amer. Math. Soc. 367 (2015), no. 2, 911–941.
[14]
L. Caffarelli, L. Silvestre,
An extension problem related to the fractional Laplacian.
Comm. Partial Differential Equations 32 (2007), no. 7–9 1245-1260.
[15]
L. Caffarelli, L. Silvestre,
Regularity theory for fully nonlinear integro-differential equations.
Comm. Pure Appl. Math. 62 (2009), no. 5, 597–638.
[16]
L. Caffarelli, L. Silvestre,
Regularity results for nonlocal equations by approximation.
Arch. Ration. Mech. Anal. 200 (2011), no. 1, 59–88.
[17]
X. Chen,
Existence, uniqueness, and asymptotic stability of traveling waves in nonlocal evolution equations.
Adv. Differential Equations 2 (1997), no. 1, 125–160.
[18]
X. Chen, J.-S. Guo, F. Hamel, H. Ninomiya, J.-M. Roquejoffre,
Traveling waves with paraboloid like interfaces for balanced bistable dynamics.
Ann. Inst. H. Poincaré Anal. Non Linéaire 24 (2007), no. 3, 369–393.
[19]
A. M. Cuitiño, M. Koslowski, M. Ortiz,
A phase-field theory of dislocation dynamics, strain hardening and hysteresis in ductile single crystals.
J. Mech. Phys. Solids 50 (2002), no. 12, 2597–2635.
[20]
A. de Masi, T. Gobron, E. Presutti,
Travelling fronts in non-local evolution equations.
Arch. Rational Mech. Anal. 132 (1995), no. 2, 143–205.
[21]
M. del Pino, M. Kowalczyk, J. Wei,
Traveling waves with multiple and nonconvex fronts for a bistable semilinear parabolic equation.
Comm. Pure Appl. Math. 66 (2013), no. 4, 481–547.
[22]
E. Di Nizza, G. Palatucci, E. Valdinoci,
Hitchhiker’s guide to the fractional Sobolev spaces.
Bull. Sci. Math. 136 (2012), no. 5, 521–573.
[23]
L. C. Evans,
Partial differential equations.
Second edition. Graduate Studies in Mathematics, 19. American Mathematical Society, Providence, RI, 2010. xxii+749 pp. ISBN: 978-0-8218-4974-3.
[24]
E. B. Fabes, C. E. Kenig, R. P. Serapioni,
The local regularity of solutions of degenerate elliptic equations.
Comm. Partial Differential Equations 7 (1982), no. 1, 77–116.
[25]
P. C. Fife,
Mathematical aspects of reacting and diffusing systems.
Lecture Notes in Biomathematics, 28. Springer-Verlag, Berlin-New York, 1979. iv+185 pp. ISBN: 3-540-09117-3.
[26]
P. C. Fife,
Dynamics of internal layers and diffusive interfaces.
CBMS-NSF Regional Conference Series in Applied Mathematics, 53. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1988. vi+93 pp. ISBN: 0-89871-225-4.
[27]
P. C. Fife, J. B. Mcleod,
The approach of solutions of nonlinear diffusion equations by travelling front solutions.
Arch. Ration. Mech. Anal. 65 (1977), no. 4, 335–361.
[28]
R. Fisher,
The wave of advance of advantageous genes.
Ann. of Eugenics (1937), no. 1, 355–369.
[29]
A. Garroni, S. Müller,
$\Gamma$-limit of a phase field model of dislocations.
SIAM J. Math. Ana. 36 (2005), no. 6, 1943–1964.
[30]
A. Garroni, S. Müller,
A variational model for dislocations in the line tension limit.
Arch. Ration. Mech. Anal. 181 (2006), no. 3, 535–578.
[31]
D. Gilbarg, N. S. Trudinger,
Elliptic partial differential equations of second order.
Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. xiv+517 pp. ISBN: 3-540-41160-7.
[32]
M. d. M. González,
Gamma convergence of an energy functional related to the fractional Laplacian.
Calc. Var. Partial Differential Equations 36 (2009), no. 2, 173–210.
[33]
C. Gui,
Symmetry of traveling wave solutions to the Allen-Cahn equation in $\mathbb{R}^{2}$.
Arch. Ration. Mech. Anal. 203 (2012), no. 3, 1037–1065.
[34]
C. Gui, M. Zhao,
Traveling wave solutions of Allen-Cahn equation with a fractional Laplacian.
Ann. I. H. Poincaré - AN (2014), http://dx.doi.org/10.1016/j.anihpc.2014.03.005.
[35]
F. Hamel, R. Monneau, J.-M. Roquejoffre,
Stability of travelling waves in a model for conical flames in two space dimensions.
Ann. Sci. École Norm. Sup. (4) 37 (2004), no. 3, 469–506.
[36]
F. Hamel, R. Monneau, J.-M. Roquejoffre,
Existence and qualitative properties of multidimensional conical bistable fronts.
Discrete Contin. Dyn. Syst. 13 (2005), no. 4, 1069–1096.
[37]
F. Hamel, R. Monneau, J.-M. Roquejoffre,
Asymptotic properties and classification of bistable fronts with Lipschitz level sets.
Discrete Contin. Dyn. Syst. 14 (2006), no. 1, 75–92.
[38]
F. Hamel, L. Roques,
Uniqueness and stability properties of monostable pulsating fronts.
J. Eur. Math. Soc. (JEMS) 13 (2011), no. 2, 345–390.
[39]
M. Hiroshi, M. Nara, M. Taniguchi,
Stability of planar waves in the Allen-Cahn equation.
Comm. Partial Differential Equations 34 (2009), no. 7-9, 976–1002.
[40]
A. Mellet, J. Nolen, J.-M. Roquejoffre, L. Ryzhik,
Stability of generalized transition fronts.
Comm. Partial Differential Equations 34 (2009), no. 4-6, 521–552.
[41]
Y.-C. Kim, K.-A. Lee,
Regularity results for fully nonlinear parabolic integro-differential operators.
Math. Ann. 357 (2013), no. 4, 1541–1576.
[42]
A. Kolmogorov, I. Petrovsky, N. Piskunow,
Etude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique.
Moscou Univ. Bull. Math., 1 (1937), 1–25.
[43]
N. V. Krylov,
Lectures on elliptic and parabolic equations in Hölder spaces.
Graduate Studies in Mathematics, 12. American Mathematical Society, Providence, RI, 1996. xii+164 pp. ISBN: 0-8218-0569-X
[44]
N. V. Krylov,
Lectures on elliptic and parabolic equations in Sobolev spaces.
Graduate Studies in Mathematics, 96. American Mathematical Society, Providence, IR, 2008. xviii+357 pp. ISBN:978-0-8218-4684-1.
[45]
Y. Kurokawa, M. Taniguchi,
Multi-dimensional pyramidal travelling fronts in the Allen-Cahn equations.
Proc. Roy. Soc. Edinburgh Sect. A 141(2011), no. 5, 1031–1054.
[46]
N. S. Landkof,
Foundations of modern potential theory.
Translated from the Russian by A. P. Doohovskoy. Die Grundlehren der mathematischen Wissenschaften, Band 180. Springer-Verlag, New York-Heidelberg, 1972. x+424 pp.
[47]
M. Nagumo,
On prinicipally linear elliptic differential equations of second order.
Osaka Math. J. 6, (1954). 207–229.
[48]
H. Ninomiya, M. Taniguchi,
Traveling curved fronts of a mean curvature flow with constant driving force.
Free boundary problems: theory and applications, I (Chiba, 1999), 206–221, GAKUTO Internat. Ser. Math. Sci. Appl., 13 Gakkōtosho, Tokyo, 2000.
[49]
G. Palatucci, O. Savin, E. Valdinoci,
Local and global minimizers for a variational energy involving a fractional nor.
Ann. Mat. Pura Appl. (4) 192 (2013), no. 4, 673–718.
[50]
A. Petrosyan, C. A. Pop,
Optimal regularity of solutions to the obstacle problem for the fractional Laplacian with drift.
J. Funct. Anal. 268 (2015), no. 2, 417–472.
[51]
D. H. Sattinger,
Monotone methods in nonlinear elliptic and parabolic boundary value problems.
Indiana Univ. Math. J. 21 (1971/72), 979–1000.
[52]
O. Savin, E. Valdinoci,
$\Gamma$-convergence for nonlocal phase transitions.
Ann. Inst. H. Poincaré Anal. Non Linéaire 29 (2012), no. 4, 479–500.
[53]
O. Savin, E. Valdinoci,
Density estimates for a variational model driven by the Gagliardo norm.
J. Math. Pures Appl. (9) 101 (2014), no. 1, 1–26.
[54]
W. Shen,
Existence, uniqueness, and stability of generalized traveling waves in time dependent monostable equations.
J. Dynam. Differential Equations 23 (2011), no. 1, 1–44.
[55]
M. Taniguchi,
Traveling fronts of pyramidal shapes in the Allen-Cahn equations.
SIAM J. Math. Anal. 39 (2007), no. 1, 319–344.
[56]
M. Taniguchi,
The uniqueness and asymptotic stability of pyramidal traveling fronts in the Allen-Cahn equations.
J. Differential Equations 246 (2009), no. 5, 2103–2130.
[57]
M. Taniguchi,
An $(N-1)$-dimensional convex compact set gives an $N$-dimensional traveling front in the Allen-Cahn equation.
SIAM J. Math. Anal. 47 (2015), no. 1, 455-476.
[58]
L. Silvestre,
Regularity of the obstacle problem for a fractional power of the Laplace operator.
Comm. Pure Appl. Math. 60 (2007), no. 1,67–112.
[59]
L. Silvestre,
Hölder continuity for integro-differential parabolic equations with polynomial growth respect to the gradient.
Discrete Contin. Dyn. Syst. 28 (2010), no. 3, 1069–1081.
[60]
L. Silvestre,
On the differentiability of the solution to an equation with drift and fractional diffusion.
arXiv:1012.2401.
[61]
X. Wang,
Metastability and stability of patterns in a convolution model for phase transitions.
J. Differential Equations 183 (2002), no. 2, 434–461. |
New Solution Approaches for the Maximum-Reliability Stochastic Network Interdiction Problem
Eli Towle (etowle@wisc.edu) and James Luedtke (jim.luedtke@wisc.edu)
Department of Industrial and Systems Engineering, University of Wisconsin – Madison
Abstract
We investigate methods to solve the maximum-reliability stochastic network interdiction problem (SNIP). In this
problem, a defender interdicts arcs on a directed graph to minimize an attacker’s probability of undetected traversal
through the network. The attacker’s origin and destination are unknown to the defender and assumed to be random. SNIP
can be formulated as a stochastic mixed-integer program via a deterministic equivalent formulation (DEF). As the size of this
DEF makes it impractical for solving large instances, current approaches to solving SNIP rely on
modifications of Benders decomposition. We present two new approaches to solve SNIP. First, we introduce a new
DEF that is significantly more compact than the standard DEF. Second, we propose
a new path-based formulation of SNIP. The number of constraints required to define this formulation grows
exponentially with the size of the network, but the model can be solved via delayed constraint generation. We
present valid inequalities for this path-based formulation which are dependent on the structure of the interdicted
arc probabilities. We propose a branch-and-cut (BC) algorithm to solve this new SNIP formulation. Computational
results demonstrate that directly solving the more compact SNIP formulation and this BC algorithm both provide an
improvement over a state-of-the-art implementation of Benders decomposition for this problem.
Keywords. Network interdiction; stochastic programming; integer programming; valid inequalities
Acknowledgments
This work was supported by National Science Foundation grants CMMI-1130266 and SES-1422768.
1 Introduction
Network interdiction problems feature a defender and an attacker with opposing goals. The defender first modifies a network using a finite budget in an attempt to diminish the attacker’s ability to perform a task. These modifications could include increasing arc traversal costs, reducing arc capacities, or removing arcs altogether. The attacker then optimizes his or her objective with respect to the newly modified network. Network interdiction problems have been studied in the context of nuclear weapons interdiction [17, 20], disruption of nuclear weapons projects [6], interdiction of drug smuggling routes [24], and general critical infrastructure defense [5].
In the maximum-reliability stochastic network interdiction problem (SNIP), formulated by Pan et al. [20], an attacker seeks to maximize the probability of avoiding detection while traveling through a directed network from an origin to a destination. There is a chance the attacker will be detected when traversing each arc. A defender may expend resources to place sensors on certain arcs, thereby increasing the probability of detecting an attacker on those arcs. The origin and destination of the attacker are unknown to the defender, and are assumed to be random. The defender’s goal is to minimize the expected value of the attacker’s maximum-reliability path through the network over all origin–destination scenarios. This can be interpreted as minimizing the overall probability of the attacker successfully attacking a critical infrastructure target or smuggling contraband to a target location undetected.
Morton et al. [17] present a deterministic equivalent formulation (DEF) of SNIP derived from the dual of the attacker’s
optimization problem. Pan and Morton [21] improve the DEF by incorporating the values of
uninterdicted maximum-reliability paths into the model constraints. They also derive SNIP-specific “step inequalities”
to strengthen the linear programming (LP) relaxation of the Benders master problem before initiating the Benders
algorithm. These step inequalities are equivalent to the mixing inequalities of Günlük and Pochet
[11]. Bodur et al. [4] investigate the strength of integrality-based cuts in conjunction with
Benders cuts for stochastic integer programs with continuous recourse, and use test instances of the SNIP problem to
demonstrate the value of integrality-based cuts within Benders decomposition.
We propose two new approaches to solve SNIP. First, we present a more compact DEF. This
formulation combines constraints of the DEF for scenarios ending at the same destination, significantly
reducing the number of constraints and variables in the DEF. Second, we present a new formulation of SNIP that includes constraints for
every origin–destination path. Although the formulation size grows exponentially with the number of arcs in the
network, the model can be solved with delayed constraint generation. A path’s reliability function is a supermodular
set function representable with a strictly convex and decreasing function. We propose a branch-and-cut
algorithm that exploits this structure for each path through the network to derive valid inequalities. We consider three cases for the probabilities of
traversing interdicted arcs. Two of these cases use inequalities derived by Ahmed and Atamtürk [1]
for the problem of maximizing a submodular utility function having similar structure.
The maximum-reliability network interdiction problem is closely related to the shortest-path variant, in which the
network defender attempts to maximize the length of the attacker’s shortest path from origin to destination. When the
probabilities of traversing interdicted arcs in a maximum-reliability problem are all strictly positive, the problem is
equivalent, via a logarithmic transformation, to a shortest-path network interdiction problem
[2, 16]. This transformation, however, is not valid when there exists an arc that reduces the attacker’s
probability of successful traversal to zero when it is interdicted by the defender. A deterministic version of the shortest-path network interdiction problem was initially explored by
Fulkerson and Harding [9], and later by Golden [10].
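The logarithmic transformation mentioned above is easy to make concrete. The sketch below uses made-up toy reliabilities (not data from any instance): taking arc lengths $-\log r_a$ turns a maximum-reliability objective into a shortest-path objective, provided every reliability is strictly positive.

```python
import math

# Toy arc reliabilities (hypothetical values); all strictly positive,
# so the logarithmic transformation is valid.
r = {("s", "v"): 0.9, ("v", "t"): 0.8}

# Arc length = -log(reliability): maximizing a product of reliabilities
# is equivalent to minimizing a sum of these nonnegative lengths.
lengths = {a: -math.log(p) for a, p in r.items()}

reliability_product = math.prod(r.values())   # 0.9 * 0.8
path_length = sum(lengths.values())           # -log of that product

# The two objectives agree after undoing the transformation.
assert abs(math.exp(-path_length) - reliability_product) < 1e-12
```

The sketch also makes the caveat concrete: an interdicted arc with $q_a=0$ would require length $-\log 0=\infty$, which is exactly why the transformation breaks down in that case.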
An alternative interdiction problem mostly unrelated to our work is the maximum-flow network interdiction problem,
introduced by Wollmer [23]. In this problem, a defender changes arc capacities in order to minimize the attacker’s maximum flow. Deterministic and stochastic versions of
this problem have been studied by many authors, e.g., [7, 12, 13, 24].
In Section 2, we review relevant existing approaches to solving SNIP. We
propose a compact DEF of SNIP in Section 3. In Section 4, we propose a path-based formulation for SNIP and describe valid inequalities for the relevant mixed-integer set. We describe our computational experiments and results in Section 5.
2 Problem statement and existing results
Let $N$ and $A$ denote the set of nodes and the set of arcs of a directed network. Let $D\subseteq A$ denote the set of arcs available for interdiction. In the first stage of SNIP, the defender may choose to install sensors on a subset of the interdictable arcs. The cost to install a sensor on arc $a\in D$ is $c_{a}>0$. The defender is constrained by installation budget $b>0$. The probability of the attacker’s undetected traversal through arc $a\in A$ without a sensor installed is $r_{a}\in(0,1]$. When a sensor is installed on arc $a\in D$, this probability reduces to $q_{a}\in[0,r_{a})$. Let $\Omega$ be the finite set of origin–destination pairs, where $\omega=(s,t)\in\Omega$ occurs with probability $p_{\omega}>0$ ($\sum_{\omega\in\Omega}p_{\omega}=1$). We assume a path exists from $s$ to $t$ for all $(s,t)\in\Omega$. If not, the defender can discard that scenario from consideration. In the second stage of SNIP, the attacker’s origin and destination are realized. The attacker traverses the network from the origin to the corresponding destination via the maximum-reliability path given the defender’s set of interdicted arcs.
For $s,t\in N$, a simple $s$-$t$ path $P$ is a set of arcs, say $(i_{0},i_{1}),(i_{1},i_{2}),\ldots,$ $(i_{k-1},i_{k})$,
where $i_{0}=s$, $i_{k}=t$, and the nodes $i_{0},i_{1},\ldots,i_{k}\in N$ are all distinct. Let $\mathcal{P}_{st}$ be the set of all simple paths from $s\in N$ to $t\in N$. Let $S\subseteq D$ be the set of interdicted arcs, as chosen by the defender. The function
$$h_{P}(S)\coloneqq\left(\prod_{a\in P}r_{a}\right)\left(\prod_{a\in P\cap S}\frac{q_{a}}{r_{a}}\right)$$
calculates the probability of the attacker traversing the set of arcs $P$ undetected given the interdicted arcs $S$.
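For concreteness, $h_P$ can be evaluated directly. The sketch below uses hypothetical toy values for $r_a$ and $q_a$ (not taken from any benchmark instance).

```python
import math

def path_reliability(P, S, r, q):
    """Evaluate h_P(S): the product of base reliabilities r_a over the
    path P, rescaled by q_a / r_a on each interdicted arc of the path."""
    base = math.prod(r[a] for a in P)
    rescale = math.prod(q[a] / r[a] for a in P if a in S)
    return base * rescale

# Hypothetical two-arc path; only the second arc is interdictable.
r = {(0, 1): 0.9, (1, 2): 0.8}
q = {(1, 2): 0.1}
P = [(0, 1), (1, 2)]

assert abs(path_reliability(P, set(), r, q) - 0.72) < 1e-12      # no sensors
assert abs(path_reliability(P, {(1, 2)}, r, q) - 0.09) < 1e-12   # sensor on (1, 2)
```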
In the stochastic network interdiction problem (SNIP), the defender selects arcs to interdict to minimize the expected
value of the attacker’s maximum-reliability path, subject to the budget restriction on the selected arcs. SNIP can be formulated as
$$\begin{aligned}
\min_{S}\quad & \sum_{\omega=(s,t)\in\Omega}p_{\omega}\max\{h_{P}(S)\colon P\in\mathcal{P}_{st}\}\\
\textrm{s.t.}\quad & \sum_{a\in S}c_{a}\leq b.
\end{aligned}$$
(1)
2.1 Deterministic equivalent formulation
A DEF of SNIP can be obtained by using first-stage binary variables $x_{a}$ to represent whether the defender installs a sensor on arc $a\in D$. That is, $x_{a}=1$ if the defender elects to install a sensor on arc $a\in D$, and $x_{a}=0$ if no sensor is installed on that arc. Second-stage continuous variables $\pi_{i}^{\omega}$ represent the maximum probability of the attacker traveling undetected from $i\in N$ to $t$ in scenario $\omega=(s,t)\in\Omega$.
The formulation uses second-stage constraints to calculate the $\pi$ variables for each scenario $\omega\in\Omega$. The $\pi$ variables are calculated using an LP formulation of the dynamic programming (DP) optimality conditions for calculating a maximum-reliability path. Using the convention $q_{a}\coloneqq r_{a}$ and $x_{a}\coloneqq 0$ for all $a\in A\setminus D$, each $\pi_{i}^{\omega}$ is calculated as
$$\pi_{i}^{\omega}=\max_{a=(i,j)\in A}\left\{\pi^{\omega}_{j}\left[r_{a}+(q_{a}-r_{a})x_{a}\right]\right\},\quad i\in N,\ \omega\in\Omega.$$
(2)
The maximum probability of the attacker reaching $t$ undetected from node $i\in N$ is the maximum over all forward adjacent nodes $j\in N$ of $\pi_{j}^{\omega}$, multiplied by $r_{a}$ if the arc is not interdicted, or $q_{a}$ if it is interdicted.
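For a fixed interdiction plan $x$, the recursion above can be solved by a Dijkstra-style sweep from the destination, since every arc factor lies in $[0,1]$ and reliabilities can only shrink along a path. The sketch below (toy data, hypothetical values) is one way to compute the $\pi$ values; it is an illustration, not the formulation the paper actually solves.

```python
import heapq

def solve_dp(nodes, arcs, t, r, q, x):
    """Fixed point of pi_i = max over a=(i,j) of pi_j * (r_a + (q_a - r_a) x_a),
    with pi_t = 1. Every arc factor is in [0, 1], so a Dijkstra-style sweep
    from t (largest reliability popped first) is valid."""
    # Effective traversal probability of each arc under plan x; arcs outside
    # D use the convention q_a = r_a, x_a = 0.
    eff = {a: r[a] + (q.get(a, r[a]) - r[a]) * x.get(a, 0) for a in arcs}
    pred = {i: [] for i in nodes}
    for i, j in arcs:
        pred[j].append(i)
    pi = {i: 0.0 for i in nodes}
    pi[t] = 1.0
    heap = [(-1.0, t)]
    while heap:
        neg, j = heapq.heappop(heap)
        if -neg < pi[j]:
            continue  # stale heap entry
        for i in pred[j]:
            cand = pi[j] * eff[(i, j)]
            if cand > pi[i]:
                pi[i] = cand
                heapq.heappush(heap, (-cand, i))
    return pi

# Toy network: direct arc s->t competes with the two-arc route s->v->t.
nodes = {"s", "v", "t"}
arcs = [("s", "v"), ("v", "t"), ("s", "t")]
r = {("s", "v"): 0.9, ("v", "t"): 0.8, ("s", "t"): 0.5}
q = {("v", "t"): 0.1}

pi_no_sensor = solve_dp(nodes, arcs, "t", r, q, {})                 # s: 0.9*0.8
pi_sensor = solve_dp(nodes, arcs, "t", r, q, {("v", "t"): 1})       # s: falls back to 0.5
```

With no sensors the attacker routes through $v$ ($0.9\cdot 0.8=0.72$); interdicting $(v,t)$ makes the direct arc ($0.5$) the best response.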
The optimality conditions (2) imply the set of inequalities
$$\pi_{i}^{\omega}\geq\pi^{\omega}_{j}\left[r_{a}+(q_{a}-r_{a})x_{a}\right],\quad a=(i,j)\in A,\ \omega\in\Omega.$$
(3)
The inequalities (3) are nonlinear. Pan and Morton [21] derive the following DEF of SNIP that uses a linear
reformulation of these inequalities:
$$\begin{aligned}
\min_{x,\pi}\quad & \sum_{\omega=(s,t)\in\Omega}p_{\omega}\pi_{s}^{\omega} & & \text{(4a)}\\
\textrm{s.t.}\quad & \sum_{a\in D}c_{a}x_{a}\leq b & & \text{(4b)}\\
& \pi_{i}^{\omega}-r_{a}\pi_{j}^{\omega}\geq 0, & a=(i,j)\in A\setminus D,\ \omega\in\Omega & \text{(4c)}\\
& \pi_{i}^{\omega}-r_{a}\pi_{j}^{\omega}\geq-(r_{a}-q_{a})u_{j}^{\omega}x_{a}, & a=(i,j)\in D,\ \omega\in\Omega & \text{(4d)}\\
& \pi_{i}^{\omega}-q_{a}\pi_{j}^{\omega}\geq 0, & a=(i,j)\in D,\ \omega\in\Omega & \text{(4e)}\\
& \pi_{t}^{\omega}=1, & \omega=(s,t)\in\Omega & \text{(4f)}\\
& x_{a}\in\{0,1\}, & a\in D. & \text{(4g)}
\end{aligned}$$
The objective function (4a) minimizes the expected value of the attacker’s maximum-reliability path.
For each scenario $\omega=(s,t)\in\Omega$, the parameter $u_{j}^{\omega}$ represents the value of the maximum-reliability path
from $j\in N$ to $t$ when no sensors are installed, and hence is an upper bound on $\pi_{j}^{\omega}$. These parameters are calculated in a model preprocessing step.
The linear constraints (4c)–(4e) formulate the nonlinear DP inequalities (3). Constraints (4c) enforce $\pi_{i}^{\omega}\geq r_{a}\pi_{j}^{\omega}$ for all $a=(i,j)\in A\setminus D,\ \omega\in\Omega$. For $\omega\in\Omega$ and $a=(i,j)\in D$, if $x_{a}=0$, then (4d) becomes $\pi_{i}^{\omega}\geq r_{a}\pi_{j}^{\omega}$, which dominates (4e). On the other hand, if $x_{a}=1$, then (4e) implies $\pi_{i}^{\omega}-r_{a}\pi_{j}^{\omega}\geq-(r_{a}-q_{a})u_{j}^{\omega}$,
so (4e) dominates (4d). Thus, in either case, (4c)–(4e) are equivalent to
(3). Since the variables $\pi_{s}^{\omega}$,
$\omega=(s,t)\in\Omega$, have positive coefficients in the objective, this implies the equations (2) will be
satisfied in an optimal solution.
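The case analysis above can be spot-checked numerically. The values below are toy numbers chosen only for illustration, with $\pi_j^{\omega}\leq u_j^{\omega}$ as guaranteed by the preprocessing step: at binary $x_a$, the tighter of the two linear bounds coincides with the nonlinear bound (3).

```python
# Toy values with pi_j <= u_j, as the preprocessing step guarantees.
r_a, q_a = 0.8, 0.1
pi_j, u_j = 0.6, 0.7

for x_a in (0, 1):
    nonlinear = pi_j * (r_a + (q_a - r_a) * x_a)      # bound from (3)
    from_4d = r_a * pi_j - (r_a - q_a) * u_j * x_a    # bound from (4d)
    from_4e = q_a * pi_j                              # bound from (4e)
    # At x_a = 0, (4d) binds with value r_a * pi_j; at x_a = 1, (4e) binds
    # with value q_a * pi_j. Either way the linear pair matches (3).
    assert abs(max(from_4d, from_4e) - nonlinear) < 1e-12
```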
2.2 Benders decomposition
Directly solving the DEF (4) with a mixed-integer programming solver may be too time-consuming due to
its large size. Benders decomposition can be used to decompose large problems like SNIP. After introducing the SNIP
formulation, Pan and Morton [21] outline a Benders decomposition algorithm for the SNIP DEF (4). For a fixed vector $x\in[0,1]^{D}$ satisfying $\sum_{a\in D}c_{a}x_{a}\leq b$, the $\pi_{j}^{\omega}$ variables in (4) can be obtained by solving
$$\begin{aligned}
E^{st}(x)\coloneqq\min_{\pi}\quad & \pi_{s} & & \text{(5a)}\\
\textrm{s.t.}\quad & \pi_{i}-r_{a}\pi_{j}\geq 0, & a=(i,j)\in A\setminus D & \text{(5b)}\\
& \pi_{i}-r_{a}\pi_{j}\geq-(r_{a}-q_{a})u_{j}^{\omega}x_{a}, & a=(i,j)\in D & \text{(5c)}\\
& \pi_{i}-q_{a}\pi_{j}\geq 0, & a=(i,j)\in D & \text{(5d)}\\
& \pi_{t}=1 & & \text{(5e)}
\end{aligned}$$
for each scenario $\omega=(s,t)\in\Omega$. The dual of (5) is
$$\begin{aligned}
\max_{y,z}\quad & y_{t}-\sum_{a=(i,j)\in D}(r_{a}-q_{a})u_{j}^{\omega}y_{ij}x_{a} & \\
\textrm{s.t.}\quad & \sum_{(s,j)\in A}(y_{sj}+z_{sj})=1 & \\
& \sum_{(i,j)\in A}(y_{ij}+z_{ij})-\sum_{a=(j,i)\in A}(r_{a}y_{ji}+q_{a}z_{ji})=0, & i\in N\setminus\{s,t\}\\
& y_{t}-\sum_{a=(j,t)\in A}(r_{a}y_{jt}+q_{a}z_{jt})=0 & \\
& y_{ij},z_{ij}\geq 0, & (i,j)\in A\\
& y_{t}\geq 0.
\end{aligned}$$
(6)
Dual variables $y_{ij}$ correspond to DEF constraints (5b) and (5c), $z_{ij}$ is the dual variable for constraints (5d), and $y_{t}$ is the dual variable for constraint (5e). We fix $z_{ij}\coloneqq 0$ for all $(i,j)\in A\setminus D$, as constraint (5d) only applies to arcs $D$.
The Benders master problem is as follows:
$$\begin{aligned}
\min_{x,\theta}\quad & \sum_{\omega\in\Omega}p_{\omega}\theta^{\omega} & & \text{(7a)}\\
\textrm{s.t.}\quad & \sum_{a\in D}c_{a}x_{a}\leq b & & \text{(7b)}\\
& \theta^{\omega}\geq\bar{y}_{t}^{\omega}-\sum_{a=(i,j)\in D}(r_{a}-q_{a})u_{j}^{\omega}\bar{y}_{ij}^{\omega}x_{a}, & (\bar{y}^{\omega},\bar{z}^{\omega})\in K^{\omega},\ \omega=(s,t)\in\Omega & \text{(7c)}\\
& \theta^{\omega}\geq 0, & \omega\in\Omega & \text{(7d)}\\
& x_{a}\in\{0,1\}, & a\in D. & \text{(7e)}
\end{aligned}$$
The Benders algorithm begins by solving the master problem with no constraints of the form (7c) to obtain a candidate solution $(\bar{x},\bar{\theta})$. At iteration $k$ of the Benders cutting plane algorithm, a dual subproblem (6) is solved for each scenario using the candidate solution $(\bar{x},\bar{\theta})$ to find a dual-feasible extreme point $(\bar{y}^{\omega},\bar{z}^{\omega})$. If a cut in the form of constraint (7c) cuts off the candidate solution $\bar{\theta}^{\omega}$, this cut is added to the Benders master problem by including the point in the set $K^{\omega}$. The updated master problem is solved to generate a new candidate solution $(\bar{x},\bar{\theta})$. This process is repeated until none of the scenario cuts constructed at an iteration of the algorithm cut off the candidate solution obtained by solving the master problem at the previous iteration.
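The control flow of this loop can be sketched generically. The toy below is a minimal illustration (a single scenario, a hand-built enumeration master, an exact linear cut, and no budget constraint), not the authors' implementation, which solves LP subproblems and a mixed-integer master.

```python
def benders(solve_master, solve_subproblem, scenarios, tol=1e-6):
    """Delayed-cut loop: solve the master over the current cut pool, ask each
    scenario subproblem for a violated cut, and stop once no candidate
    solution is cut off."""
    cuts = {w: [] for w in scenarios}
    while True:
        x, theta = solve_master(cuts)
        violated = False
        for w in scenarios:
            value, cut = solve_subproblem(w, x)
            if value > theta[w] + tol:   # theta underestimates the true value
                cuts[w].append(cut)
                violated = True
        if not violated:
            return x, theta

# Toy master: one binary decision, one scenario "w". For each x, theta is
# the best lower bound implied by the accumulated cuts (0 if none yet).
def solve_master(cuts):
    best = min(
        ((x, max((c0 + c1 * x for c0, c1 in cuts["w"]), default=0.0))
         for x in (0, 1)),
        key=lambda pair: pair[1],
    )
    return best[0], {"w": best[1]}

# Toy subproblem: true recourse value 1 - 0.7 x, returned with an exact cut.
def solve_subproblem(w, x):
    return 1.0 - 0.7 * x, (1.0, -0.7)

x_opt, theta_opt = benders(solve_master, solve_subproblem, ["w"])
# Converges in two master solves: interdicting (x = 1) is optimal here.
```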
Benders decomposition can also be used in a branch-and-cut algorithm. The integrality constraint
(7e) is relaxed and enforced within the branch-and-bound tree. At each integer-feasible
solution $(\bar{x},\bar{\theta})$ obtained at a node in the master problem branch-and-bound tree, a dual
subproblem (6) is solved for each scenario. Violated cuts are added to the LP formulation of that
node and it is re-solved. When no violated inequalities are found for an integer-feasible solution, the upper bound may be updated and
the node pruned.
Pan and Morton [21] enhance the Benders branch-and-cut algorithm by using optimal solutions to scenario subproblems from previous iterations to create step
inequalities for the relaxed Benders master problem. These additional inequalities are added to the formulation as cuts
to tighten the LP relaxation of the
master problem. This leads to a smaller branch-and-bound tree, which in turn reduces the time spent solving mixed-integer programs.
The general-purpose Benders approach tested by Bodur et al. [4] is very effective in solving SNIP. A key
implementation detail of this approach is to first solve the relaxed Benders master problem,
obtained by removing the integrality restriction on $x$. Once no more Benders inequalities can be added to the LP
relaxation of the master problem, the integrality restriction on $x$ is restored, and the model, including the
identified Benders cuts, is passed to a mixed-integer programming solver to begin the Benders branch-and-cut algorithm. The solver adds its own general-purpose
integrality-based cuts to the formulation using the initial set of Benders cuts. This results in a stronger LP
relaxation bound, and reduces the size of the branch-and-bound tree.
Bodur et al. [4] employed a smaller gap tolerance as a stopping criterion ($10^{-3}$) for their Benders
branch-and-cut implementation than Pan
and Morton [21] used in their experiments ($10^{-2}$). We implemented the Benders
branch-and-cut algorithm as in [4], and tested it with a relative optimality gap tolerance of $10^{-2}$ to compare the
results to those published in [21].
With these matching tolerances, the Benders
algorithm ran $3$–$4$ times faster than Pan and Morton’s algorithm. Although the difference in hardware used in these
experiments makes it
impossible to directly compare the performance of these methods, we conclude that the implementation of Benders
branch-and-cut described in
[4] is currently among the most efficient ways to solve instances of SNIP, and hence we compare against this approach in our numerical experiments.
3 Compact deterministic equivalent formulation
For scenarios
sharing a destination node, the DP optimality conditions (2) for SNIP are identical. Hence, the
DEF (4) contains redundancies by repeating the LP formulation of these optimality conditions,
constraints (4c)–(4f), for each destination node.
Motivated by this observation, we present a compact DEF of SNIP that groups together second-stage
constraints for scenarios with a common destination. Let $T$ be the set of unique destination nodes: $T=\{t\in N\colon(s,t)\in\Omega$ for some $s\in N\}$.
Second-stage variables $\pi_{j}^{t}$ now represent the probability of the attacker
successfully traveling from node $j\in N$ to destination $t\in T$ in any scenario having destination $t$. We
obtain the following new DEF for SNIP:
$$\begin{aligned}
\min_{x,\pi}\quad & \sum_{\omega=(s,t)\in\Omega}p_{\omega}\pi_{s}^{t} & & \text{(8a)}\\
\textrm{s.t.}\quad & \sum_{a\in D}c_{a}x_{a}\leq b & & \text{(8b)}\\
& \pi_{i}^{t}-r_{a}\pi_{j}^{t}\geq 0, & a=(i,j)\in A\setminus D,\ t\in T & \text{(8c)}\\
& \pi_{i}^{t}-r_{a}\pi_{j}^{t}\geq-(r_{a}-q_{a})u_{j}^{t}x_{a}, & a=(i,j)\in D,\ t\in T & \text{(8d)}\\
& \pi_{i}^{t}-q_{a}\pi_{j}^{t}\geq 0, & a=(i,j)\in D,\ t\in T & \text{(8e)}\\
& \pi_{t}^{t}=1, & t\in T & \text{(8f)}\\
& x_{a}\in\{0,1\}, & a\in D. & \text{(8g)}
\end{aligned}$$
Parameter $u_{j}^{t}$ is the value of the attacker’s maximum-reliability path from $j\in N$ to $t\in T$ when no sensors are installed. By definition, $u_{j}^{t}=u_{j}^{\omega}$ for all $\omega=(s,t)\in\Omega$ and $j\in N$.
The objective function (8a) weights each $\pi_{s}^{t}$ variable by its respective scenario probability. The summation of these terms represents the overall probability of a successful attack, which the defender seeks to minimize.
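The saving is easy to quantify on a toy scenario set (hypothetical origin–destination pairs): the standard DEF (4) carries one block of second-stage constraints per scenario, while the compact DEF (8) carries one per distinct destination.

```python
# Hypothetical scenario list: five origin-destination pairs, two destinations.
scenarios = [("a", "t1"), ("b", "t1"), ("c", "t1"), ("a", "t2"), ("b", "t2")]

# T, the set of distinct destinations, indexes the constraint blocks of (8).
T = {t for _, t in scenarios}

blocks_in_def4 = len(scenarios)   # one DP block per scenario
blocks_in_def8 = len(T)           # one DP block per destination
```

On this toy set the compact DEF needs 2 blocks instead of 5; on instances where many origins share each destination, the reduction in variables and constraints is correspondingly large.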
Proposition 1.
Each feasible solution of (4) admits a feasible solution of (8) of equal objective value, and vice versa.
Proof.
Let $(\bar{x},\bar{\pi})$ be a solution to (4). For all $t\in T$, select $s_{t}\in N$ such that $\omega=(s_{t},t)\in\Omega$. Let $\hat{\pi}^{t}_{i}\coloneqq\bar{\pi}^{\omega}_{i}$ for all $i\in N$ and $t\in T$, and let $\hat{x}_{ij}\coloneqq\bar{x}_{ij}$ for all $(i,j)\in D$. $(\hat{x},\hat{\pi})$ is feasible to (8) with objective value $\sum_{\omega=(s,t)\in\Omega}p_{\omega}\hat{\pi}_{s}^{t}=\sum_{\omega=(s,t)\in\Omega}p_{\omega}\bar{\pi}_{s}^{\omega}$.
Now, let $(\hat{x},\hat{\pi})$ be a solution to (8). Let $\bar{\pi}_{i}^{\omega}\coloneqq\hat{\pi}_{i}^{t}$ for all $i\in N$ and $\omega=(s,t)\in\Omega$. Let $\bar{x}_{ij}\coloneqq\hat{x}_{ij}$ for all $(i,j)\in D$. $(\bar{x},\bar{\pi})$ is a solution to (4) with objective value $\sum_{\omega\in\Omega}p_{\omega}\bar{\pi}_{s}^{\omega}=\sum_{\omega\in\Omega}p_{\omega}\hat{\pi}_{s}^{t}$.
∎
A Benders algorithm can be applied to the compact formulation by introducing variables $\theta^{t}$ to represent the
probability-weighted sum of maximum-reliability path values for scenarios ending at node $t\in T$. We experimented
with a Benders decomposition on the DEF (8) using the Benders algorithm of Bodur et al. [4]. We found that the Benders algorithm performed much worse on this formulation than on the
DEF (4). The reason for this poor performance is that the root relaxation obtained after the
mixed-integer programming solver added its general-purpose cuts was much weaker when using a Benders reformulation of
(8), as compared to the Benders reformulation (7). This is consistent with the results of
[4], which indicates that the integrality-based cuts derived in the formulation (7) can be
stronger than those derived in a “projected” Benders master problem that only uses $\theta^{t}$ variables.
4 Path-based formulation and cuts
We now derive a new mixed-integer linear programming formulation of SNIP based on the path-based formulation (1). With the introduction of binary variables $x_{a}$ to denote whether network arc $a\in D$ is interdicted, we map the set function $h_{P}$ to a vector function $\bar{h}_{P}$. In particular,
for $S\subseteq D$ we let $\chi^{S}\in\{0,1\}^{D}$ be the characteristic vector of the set $S$: $\chi^{S}_{a}=1$ if $a\in S$, and $0$ otherwise.
Then we define $\bar{h}_{P}\colon\{0,1\}^{D}\rightarrow\mathbb{R}$ as
$$\bar{h}_{P}(x)\coloneqq\left[\prod_{a\in P}r_{a}\right]\left[\prod_{a\in P\cap D}\left(\frac{q_{a}}{r_{a}}\right)^{x_{a}}\right]$$
so that $h_{P}(S)=\bar{h}_{P}(\chi^{S})$ for $S\subseteq D$.
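As a concrete illustration, $\bar{h}_{P}$ can be evaluated directly from its definition. The following sketch uses hypothetical arc data, not the paper's instances.

```python
# Evaluate h̄_P(x) from its definition. All arc data below are hypothetical.

def h_bar_P(path, D, r, q, x):
    """Reliability of `path` when the arcs a in D with x[a] = 1 are
    interdicted: prod_{a in P} r_a * prod_{a in P ∩ D} (q_a / r_a)^{x_a}."""
    value = 1.0
    for a in path:
        value *= r[a]                      # product of r_a over a in P
    for a in path:
        if a in D and x.get(a, 0) == 1:    # factor (q_a / r_a) if interdicted
            value *= q[a] / r[a]
    return value

path = [("s", "v"), ("v", "t")]
D = {("v", "t")}                           # interdictable arcs
r = {("s", "v"): 0.9, ("v", "t"): 0.8}
q = {("v", "t"): 0.2}

print(round(h_bar_P(path, D, r, q, {("v", "t"): 0}), 2))  # 0.72
print(round(h_bar_P(path, D, r, q, {("v", "t"): 1}), 2))  # 0.18
```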
Then model (1) can be formulated as
$$\begin{aligned}
\min_{x,\pi}\quad & \sum_{\omega=(s,t)\in\Omega}p_{\omega}\pi_{s}^{t} & \\
\textrm{s.t.}\quad & \sum_{a\in D}c_{a}x_{a}\leq b & \\
& \pi_{s}^{t}\geq\max\{\bar{h}_{P}(x)\colon P\in\mathcal{P}_{st}\}, & (s,t)\in\Omega\\
& x_{a}\in\{0,1\}, & a\in D.
\end{aligned}$$
(9)
We use the special structure of $h_{P}$ for each $P\in\mathcal{P}_{st}$ to build a linear formulation of model (9).
Definition 1 (E.g., [22]).
$h\colon 2^{\mathcal{N}}\rightarrow\mathbb{R}$ is a supermodular set function over a ground set $\mathcal{N}$ if $h(S_{1}\cup\{a\})-h(S_{1})\leq h(S_{2}\cup\{a\})-h(S_{2})$ for all $S_{1}\subseteq S_{2}\subseteq\mathcal{N}$ and $a\in\mathcal{N}\setminus S_{2}$.
Proposition 2.
$h_{P}(\cdot)$ is a supermodular set function.
Proof.
Let $S_{1},S_{2}\subseteq D$ with $S_{1}\subseteq S_{2}$. Let $a^{\prime}\in D\setminus S_{2}$. The result is immediate if $a^{\prime}\notin P$. Therefore, assume $a^{\prime}\in P$ to obtain
$$\begin{aligned}
h_{P}(S_{1}\cup\{a^{\prime}\})-h_{P}(S_{1})&=\left(\prod_{a\in P}r_{a}\right)\left(\prod_{a\in P\cap S_{1}}\frac{q_{a}}{r_{a}}\right)\left(\frac{q_{a^{\prime}}}{r_{a^{\prime}}}-1\right)\\
&\leq\left(\prod_{a\in P}r_{a}\right)\left(\prod_{a\in P\cap S_{2}}\frac{q_{a}}{r_{a}}\right)\left(\frac{q_{a^{\prime}}}{r_{a^{\prime}}}-1\right)\\
&=h_{P}(S_{2}\cup\{a^{\prime}\})-h_{P}(S_{2}).
\end{aligned}$$
∎
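Proposition 2 can be spot-checked numerically by enumerating nested subsets of a small path; the sketch below uses hypothetical arc probabilities.

```python
# Numeric spot-check of Proposition 2: the marginal differences of h_P grow
# as the interdicted set grows. Arc data are hypothetical.
from itertools import combinations

r = {1: 0.9, 2: 0.8, 3: 0.7}
q = {1: 0.3, 2: 0.2, 3: 0.1}
P = [1, 2, 3]                  # path arcs, all interdictable (P ∩ D = P)

def h_P(S):
    v = 1.0
    for a in P:
        v *= r[a]
    for a in S:
        v *= q[a] / r[a]
    return v

# Check h_P(S1 ∪ {a}) - h_P(S1) <= h_P(S2 ∪ {a}) - h_P(S2)
# for all S1 ⊆ S2 ⊆ P and a ∉ S2.
ok = True
for k2 in range(len(P)):
    for S2 in combinations(P, k2):
        for k1 in range(k2 + 1):
            for S1 in combinations(S2, k1):
                for a in P:
                    if a in S2:
                        continue
                    lhs = h_P(set(S1) | {a}) - h_P(set(S1))
                    rhs = h_P(set(S2) | {a}) - h_P(set(S2))
                    ok = ok and lhs <= rhs + 1e-12
print(ok)  # True
```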
The maximum of a set of supermodular functions is not in general a supermodular function, so $\max_{P\in\mathcal{P}_{st}}h_{P}(S)$ is not necessarily supermodular. To exploit the supermodular structure for each individual path, we consider an equivalent formulation of (9) that contains an inequality for every scenario $(s,t)\in\Omega$ and every path from $s$ to $t$:
$$\min_{x,\pi}\quad\sum_{\omega=(s,t)\in\Omega}p_{\omega}\pi_{s}^{t}$$
(10a)
$$\textrm{s.t.}\quad\sum_{a\in D}c_{a}x_{a}\leq b$$
(10b)
$$\pi_{s}^{t}\geq\bar{h}_{P}(x),\qquad P\in\mathcal{P}_{st},\ (s,t)\in\Omega$$
(10c)
$$x_{a}\in\{0,1\},\qquad a\in D.$$
(10d)
Each second-stage constraint includes a supermodular function on the right-hand side.
Consider the mixed-integer feasible region of (10) for a fixed scenario $(s,t)\in\Omega$ and path $P\in\mathcal{P}_{st}$:
$$H_{P}\coloneqq\left\{(x,\pi)\in\{0,1\}^{D}\times\mathbb{R}\colon\pi\geq\bar{h}_{P}(x)\right\}.$$
(11)
We are interested in a linear formulation of $H_{P}$. Let $\rho_{P}^{a}(S)\coloneqq h_{P}(S\cup\{a\})-h_{P}(S)$ be the marginal difference function of the set function $h_{P}$ with respect to arc $a\in D$. Nemhauser et al. [19] provide an exponential family of linear inequalities that can be used to define $H_{P}$. Applied to $H_{P}$, these inequalities are
$$\pi\geq h_{P}(S)-\sum_{a\in S}\rho_{P}^{a}(D\setminus\{a\})(1-x_{a})+\sum_{a\in D\setminus S}\rho_{P}^{a}(S)x_{a},\qquad S\subseteq D$$
(12)
$$\pi\geq h_{P}(S)-\sum_{a\in S}\rho_{P}^{a}(S\setminus\{a\})(1-x_{a})+\sum_{a\in D\setminus S}\rho_{P}^{a}(\emptyset)x_{a},\qquad S\subseteq D.$$
(13)
Only one of the sets of inequalities (12) and (13) is required to define $H_{P}$
[18]. Using these inequalities, the feasible region of $H_{P}$ can be formulated as
$$H_{P}=\left\{(x,\pi)\in\{0,1\}^{D}\times\mathbb{R}\colon\text{(12) or (13)}\right\}.$$
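To make the structure of inequalities (12) concrete, the following sketch computes the marginal differences $\rho_{P}^{a}(S)$ and the coefficients of one inequality (12) on a two-arc path; all data are hypothetical.

```python
# Sketch: marginal differences rho_P^a(S) and the coefficients of one
# inequality (12) for a two-arc path. All data are hypothetical.

r = {1: 0.9, 2: 0.8}          # pre-interdiction traversal probabilities
q = {1: 0.3, 2: 0.2}          # post-interdiction probabilities
P = [1, 2]                    # path arcs, here all interdictable (P ∩ D = P)
D = [1, 2]

def h_P(S):
    """Path reliability with the arcs in S interdicted."""
    v = 1.0
    for a in P:
        v *= r[a]
    for a in S:
        if a in P:
            v *= q[a] / r[a]
    return v

def rho(a, S):
    """Marginal difference rho_P^a(S) = h_P(S ∪ {a}) - h_P(S)."""
    return h_P(set(S) | {a}) - h_P(set(S))

def inequality_12(S):
    """Constant and coefficients of (12):
    pi >= h_P(S) - sum_{a in S} rho_P^a(D \ {a}) (1 - x_a)
               + sum_{a in D \ S} rho_P^a(S) x_a."""
    const = h_P(S)
    minus = {a: rho(a, set(D) - {a}) for a in S}      # multiplies (1 - x_a)
    plus = {a: rho(a, S) for a in set(D) - set(S)}    # multiplies x_a
    return const, minus, plus

const, minus, plus = inequality_12({1})
print(round(const, 4))   # h_P({1}) = 0.8 * 0.3 = 0.24
```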
The number of inequalities required to define model (10) in this manner grows exponentially with the number
of arcs in a path, and the number of $s$–$t$ paths grows exponentially with the size of the network. Enumerating all
$s$–$t$ paths for scenario $(s,t)\in\Omega$ is impractical. Nevertheless, (10) lends itself to a delayed
constraint generation algorithm. Instead of adding inequalities for all possible paths, we add violated valid
inequalities as needed (see Section 4.4).
To demonstrate the potential of this formulation, we next show that if we could obtain
the convex hull of each set $H_{P}$, this would yield a relaxation that is at least as strong as the LP relaxation of the
DEF (4) (and (8)). This implies that the path-based formulation has the potential
to yield a better LP relaxation if we can identify strong valid inequalities for $\operatorname{conv}(H_{P})$.
Theorem 1.
Consider the following continuous relaxation of (10):
$$\begin{aligned}
\min_{x,\pi}\quad & \sum_{\omega=(s,t)\in\Omega}p_{\omega}\pi_{s}^{\omega} & \\
\textrm{s.t.}\quad & \sum_{a\in D}c_{a}x_{a}\leq b & \\
& (x,\pi_{s}^{\omega})\in\operatorname{conv}(H_{P}), & P\in\mathcal{P}_{st},\ \omega=(s,t)\in\Omega\\
& x_{a}\in[0,1], & a\in D.
\end{aligned}$$
(14)
For all $(\bar{x},\bar{\pi})$ feasible to (14), there exists $\hat{\pi}$ such that
$(\bar{x},\hat{\pi})$ is feasible to (4) and has objective value in (4) not greater than the
objective value of $(\bar{x},\bar{\pi})$ in (14).
The above theorem is a consequence of Lemma 1. For a fixed $(s,t)\in\Omega$ and $x\in[0,1]^{D}$ satisfying $\sum_{a\in D}c_{a}x_{a}\leq b$, define
$$O^{st}(x)\coloneqq\min_{\pi_{s}^{t}}\ \pi_{s}^{t}\quad\textrm{s.t.}\quad(x,\pi_{s}^{t})\in\operatorname{conv}(H_{P}),\ P\in\mathcal{P}_{st}.$$
Recall that $E^{st}(x)$ is defined in (5).
Lemma 1.
Let $\omega=(s,t)\in\Omega$ and $x\in[0,1]^{D}$ satisfy $\sum_{a\in D}c_{a}x_{a}\leq b$. Then,
$$E^{st}(x)\leq O^{st}(x).$$
Proof.
The vector of all ones in $\mathbb{R}^{|N|}$ is feasible to (5). A feasible solution to
(6) can be constructed as follows. Let $P=\{(i_{1},i_{2}),(i_{2},i_{3}),\ldots,(i_{n-1},i_{n})\}\in\mathcal{P}_{st}$ be a simple $s$–$t$ path, where $i_{1}=s$ and $i_{n}=t$. Let $\bar{y}_{i_{1},i_{2}}=1$. For $k=2,\ldots,n-1$, let $\bar{y}_{i_{k},i_{k+1}}=r_{i_{k-1},i_{k}}\bar{y}_{i_{k-1},i_{k}}$. Let $\bar{y}_{ij}=0$ for all $(i,j)\in A\setminus P$, and $\bar{z}_{ij}=0$ for all $(i,j)\in A$. Then $(\bar{y},\bar{z})$ is feasible to (6). Because the feasible
regions of (5) and (6) are nonempty and the objectives are bounded below by
zero, both problems have an optimal solution.
Let $\pi^{*}$ be an optimal solution to (5), with corresponding optimal dual solution $(y^{*},z^{*})$.
Assume first there exists an $s$–$t$ path $P^{*}$ such that one of the constraints (5b)–(5d) is
satisfied at equality for all $a\in P^{*}$. Thus,
$$\pi^{*}_{i}=\max\left\{r_{a}\pi^{*}_{j}-(r_{a}-q_{a})u_{j}^{\omega}x_{a},\ q_{a}\pi^{*}_{j}\right\}$$
for all $a=(i,j)\in P^{*}$.
Removing constraints for arcs $a\in A\setminus P^{*}$ from the feasible region of (5) does not change
the problem’s optimal solution. However, the optimal objective value of the resulting formulation, $\pi^{*}_{s}=E^{st}(x)$, is a lower bound on $\min\{\pi_{s}\colon(x,\pi_{s})\in\operatorname{conv}(H_{P^{*}})\}$, because $\operatorname{conv}(H_{P^{*}})$ is a subset of the modified feasible region of (5). Therefore,
$$E^{st}(x)\leq\min\{\pi_{s}\colon(x,\pi_{s})\in\operatorname{conv}(H_{P^{*}})\}\leq O^{st}(x).$$
Finally, assume there is no $s$–$t$ path $P^{*}\in\mathcal{P}_{st}$ such that one of the constraints (5b)–(5d) is satisfied at equality for all $a\in P^{*}$.
Consider any $s$–$t$ path $P\in\mathcal{P}_{st}$. By assumption, there exists an arc $(i,j)\in P$ for which none of the constraints (5b)–(5d) is binding. By complementary slackness, the corresponding dual optimal
$y^{*}_{ij}$ and $z^{*}_{ij}$ are $0$, and the flow of that path through the network is $0$. Because there exists no
sequence of arcs through the dual network of (6) that has positive flow, the dual optimal value is
bounded above by $0$.
Thus $E^{st}(x)\leq 0\leq O^{st}(x)$. ∎
Proof of Theorem 1.
From Lemma 1, we have
$$\sum_{\omega=(s,t)\in\Omega}p_{\omega}\bar{\pi}_{s}^{\omega}\geq\sum_{\omega=(s,t)\in\Omega}p_{\omega}O^{st}(\bar{x})\geq\sum_{\omega=(s,t)\in\Omega}p_{\omega}E^{st}(\bar{x}).$$
This implies there exists $\hat{\pi}\in\mathbb{R}^{N\times\Omega}$ such that $(\bar{x},\hat{\pi})$ is feasible to (4) with objective value $\sum_{\omega=(s,t)\in\Omega}p_{\omega}\hat{\pi}_{s}^{\omega}\leq\sum_{\omega=(s,t)\in\Omega}p_{\omega}\bar{\pi}_{s}^{\omega}$. ∎
Computational experiments by Ahmed and Atamtürk [1] show that the inequalities (12) and
(13) may provide a poor approximation of the set $\operatorname{conv}(H_{P})$.
Thus, in the following sections, we describe additional inequalities that can be used to approximate $\operatorname{conv}(H_{P})$ for all
$P\in\mathcal{P}_{st}$. We consider three different classes of inequalities, based on the traversal probabilities of interdicted arcs.
4.1 Inequalities for the $q>0$ case
In this section, we assume $q_{a}>0$ for all $a\in D$. For a ground set $\mathcal{N}$, Ahmed and Atamtürk [1] study valid inequalities for the mixed-integer set
$$F=\left\{(x,w)\in\{0,1\}^{\mathcal{N}}\times\mathbb{R}\colon w\leq f\left(\textstyle\sum_{i\in\mathcal{N}}\alpha_{i}x_{i}+\beta\right)\right\},$$
(15)
where $\alpha\in\mathbb{R}^{\mathcal{N}}_{+}$, $\beta\in\mathbb{R}$, and $f$ is a strictly concave, increasing, differentiable function. They derive valid inequalities for the set $F$ and prove that they dominate the submodular inequality equivalents of (12) and (13) applied to $F$. These improved inequalities are shown empirically to yield significantly better relaxations than (12) and (13).
We translate the set $H_{P}$ to match the structure of $F$. Let $\mathcal{N}\coloneqq D$, $w\coloneqq-\pi$, $f(u)\coloneqq-\exp(-u)$, $\beta\coloneqq-\sum_{a\in P}\log(r_{a})$. For $a\in D$, let
$$\alpha_{a}\coloneqq\begin{cases}\log(r_{a})-\log(q_{a})&\textrm{if }a\in P\cap D,\\
0&\textrm{otherwise.}\end{cases}$$
Because $\log(r_{a})>\log(q_{a})$ for all $a\in D$, we have $\alpha\in\mathbb{R}^{D}_{+}$. With these definitions, $H_{P}$ is expressed in the form of $F$.
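The variable change above can be verified numerically: with $w=-\pi$, $f(u)=-\exp(-u)$, and the stated $\alpha$ and $\beta$, one has $f(\sum_{a}\alpha_{a}x_{a}+\beta)=-\bar{h}_{P}(x)$. A sketch with hypothetical arc data:

```python
# Verify the variable change that puts H_P in the form of F:
# with beta = -sum_a log r_a and alpha_a = log r_a - log q_a,
# f(sum_a alpha_a x_a + beta) = -h̄_P(x), where f(u) = -exp(-u).
# Arc data are hypothetical.
import math

r = {1: 0.9, 2: 0.8}
q = {1: 0.3, 2: 0.2}
P = [1, 2]                     # path arcs, all interdictable

beta = -sum(math.log(r[a]) for a in P)
alpha = {a: math.log(r[a]) - math.log(q[a]) for a in P}

def h_bar(x):
    v = math.prod(r[a] for a in P)
    for a in P:
        if x[a]:
            v *= q[a] / r[a]
    return v

for x in ({1: 0, 2: 0}, {1: 1, 2: 0}, {1: 1, 2: 1}):
    u = sum(alpha[a] * x[a] for a in P) + beta
    assert abs(-math.exp(-u) + h_bar(x)) < 1e-9   # f(u) == -h̄_P(x)
print("identity holds")
```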
We now describe how to calculate valid inequalities for $H_{P}$ using the results of Ahmed and Atamtürk
[1]. Without loss of generality, let $D\setminus S\coloneqq\{1,2,\ldots,m\}$ be indexed such that
$\alpha_{1}\geq\alpha_{2}\geq\ldots\geq\alpha_{m}$, and let $A_{k}=\sum_{a=1}^{k}\alpha_{a}$ for $k\in D\setminus S$, with $A_{0}=0$. Also define $\alpha(S)=\sum_{a\in S}\alpha_{a}$. The subadditive lifting inequality
$$\pi\geq h_{P}(S)-\sum_{a\in S}\phi(-\alpha_{a})(1-x_{a})+\sum_{a\in D\setminus S}\rho_{P}^{a}(S)x_{a}$$
(16)
is valid for $H_{P}$ for any set of interdicted arcs $S\subseteq D$, where $\phi$ is calculated as follows [1]. Consider the function $\zeta\colon\mathbb{R}_{-}\rightarrow\mathbb{R}$, calculated according to Algorithm 1.
For a provided value of $\eta$, Algorithm 1 returns a specific $k$. With this $k$, let $\phi\colon\mathbb{R}_{-}\rightarrow\mathbb{R}$ be defined as
$$\phi(\eta)\coloneqq\begin{cases}\zeta(\mu_{k}-A_{k-1})+\rho_{P}^{k}(S)\dfrac{b_{k}(\eta)}{\alpha_{k}}&\textrm{if }\mu_{k}-A_{k}\leq\eta\leq\mu_{k}-A_{k-1},\\
\zeta(\eta)&\textrm{otherwise,}\end{cases}$$
where $\mu_{k}=-\log(-\rho_{P}^{k}(S)/\alpha_{k})-\alpha(S)-\beta$ and $b_{k}(\eta)=\mu_{k}-A_{k-1}-\eta$. Inequality (16) dominates the general supermodular inequality (12) [1].
We now construct Ahmed and Atamtürk’s inequalities that dominate the supermodular inequalities (13). Let
$S=\{1,2,\ldots,n\}$ be indexed such that $\alpha_{1}\geq\alpha_{2}\geq\ldots\geq\alpha_{n}$. Also, let
$A_{k}=\sum_{a=1}^{k}\alpha_{a}$ for $k\in S$, with $A_{0}=0$. We define the function $\xi\colon\mathbb{R}_{+}\rightarrow\mathbb{R}$ to be calculated according to Algorithm 2.
With the $\eta$-dependent $k$ calculated in Algorithm 2, let
$$\psi(\eta)\coloneqq\begin{cases}\xi(A_{k}-\nu_{k})+\rho_{P}^{k}(S\setminus\{k\})\dfrac{b_{k}(\eta)}{\alpha_{k}}&\textrm{if }A_{k-1}-\nu_{k}\leq\eta\leq A_{k}-\nu_{k},\\
\xi(\eta)&\textrm{otherwise,}\end{cases}$$
where $\nu_{k}=\alpha(S)+\log(-\rho_{P}^{k}(S\setminus\{k\})/\alpha_{k})-\beta$ and $b_{k}(\eta)=A_{k}-\nu_{k}-\eta$ for every $k\in\{1,2,\ldots,n\}$. For any $S\subseteq D$, the inequality
$$\pi\geq h_{P}(S)-\sum_{a\in S}\rho_{P}^{a}(S\setminus\{a\})(1-x_{a})-\sum_{a\in D\setminus S}\psi(\alpha_{a})x_{a}$$
(17)
is valid for $H_{P}$. For all $a\in D\setminus P$, the coefficients in inequalities (16) and (17) are $0$.
We next discuss separation of the inequalities (16) and (17) for use in a delayed
constraint generation framework.
Given a point $(\bar{x},\bar{\pi})\in[0,1]^{D}\times\mathbb{R}$, we want to determine if there is a set $S$ such that $(\bar{x},\bar{\pi})$ violates either (16) or (17). For an integral vector $\bar{x}$, we construct the set $S=\{a\in D\colon\bar{x}_{a}=1\}$. If $\bar{\pi}<h_{P}(S)$, then inequalities (16) and (17) are violated and can be added to the formulation of $H_{P}$ to cut off $(\bar{x},\bar{\pi})$.
If $\bar{x}$ is not integral, we use Ahmed and Atamtürk’s scheme to heuristically construct a set $S$ from
$\bar{x}$. This set is then used in inequalities (16) and (17) to cut off $(\bar{x},\bar{\pi})$, if possible. To construct a set $S$ for inequality (16), we first solve the following nonlinear program:
$$\max_{z\in[0,1]^{D}}\ \frac{-\bar{\pi}+1}{-\exp\left(-\textstyle\sum_{a\in D}\alpha_{a}z_{a}-\beta\right)+1}+\sum_{a\in D}\frac{\rho_{P}^{a}(\emptyset)}{h_{P}(\emptyset)+1}\bar{x}_{a}(1-z_{a}).$$
(18)
Given a solution $\bar{z}$, we greedily round each fractional component of $\bar{z}$ to the value ($0$ or $1$) that least reduces the objective. Given the rounded solution, say $\bar{z}^{\prime}$, we set $S=\{a\in D\colon\bar{z}^{\prime}_{a}=1\}$. If $\bar{\pi}<h_{P}(S)-\sum_{a\in S}\phi(-\alpha_{a})(1-\bar{x}_{a})+\sum_{a\in D\setminus S}\rho_{P}^{a}(S)\bar{x}_{a}$, valid inequality (16) can be added to the relaxation to cut off $(\bar{x},\bar{\pi})$.
We solve a related nonlinear program to find a set $S$ to apply inequality (17):
$$\max_{z\in[0,1]^{D}}\ \frac{-\bar{\pi}+1}{-\exp\left(-\textstyle\sum_{a\in D}\alpha_{a}z_{a}-\beta\right)+1}-\sum_{a\in D}\frac{\rho_{P}^{a}(\emptyset)}{h_{P}(\{a\})+1}(1-\bar{x}_{a})z_{a}.$$
(19)
We again greedily round the solution of (19) to obtain the characteristic vector of $S$. If $\bar{\pi}<h_{P}(S)-\sum_{a\in S}\rho_{P}^{a}(S\setminus\{a\})(1-\bar{x}_{a})-\sum_{a\in D\setminus S}\psi(\alpha_{a})\bar{x}_{a}$, inequality (17) cuts off $(\bar{x},\bar{\pi})$.
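For integral $\bar{x}$, by contrast, the separation described earlier reduces to an exact membership check. A minimal sketch with hypothetical data:

```python
# Sketch of the integral-point separation: for integral x̄, build
# S = {a : x̄_a = 1} and compare π̄ with h_P(S). Data are hypothetical.

r = {1: 0.9, 2: 0.8}
q = {1: 0.3, 2: 0.2}
P = [1, 2]                     # path arcs, all interdictable

def h_P(S):
    v = 1.0
    for a in P:
        v *= r[a]
    for a in S:
        v *= q[a] / r[a]
    return v

def separate_integral(x_bar, pi_bar, tol=1e-9):
    """Return the set S defining a violated cut if π̄ < h_P(S), else None."""
    S = {a for a in P if x_bar[a] == 1}
    return S if pi_bar < h_P(S) - tol else None

print(separate_integral({1: 1, 2: 0}, 0.10))  # {1}: 0.10 < h_P({1}) = 0.24
print(separate_integral({1: 1, 2: 0}, 0.30))  # None: 0.30 >= 0.24
```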
Yu and Ahmed [25] consider deriving stronger valid inequalities by imposing a knapsack constraint on the set $F$ given in equation (15). In particular,
$$F=\left\{(x,w)\in\{0,1\}^{\mathcal{N}}\times\mathbb{R}\colon w\leq f\left(\textstyle\sum_{i\in\mathcal{N}}\alpha_{i}x_{i}+\beta\right),\ \sum_{i\in\mathcal{N}}x_{i}\leq k\right\},$$
where $k>0$. Although our test instances include this structure, we do not explore applying their
results here. In particular, when interdicting arcs along an individual path, the knapsack constraint is only
relevant if the number of arcs in the path under consideration is larger than the defender’s budget.
4.2 Inequalities for the $q=0$ case
When $q_{a}=0$ for all $a\in D$, $\log(q_{a})$ is no longer defined, and the results of Section 4.1 do not apply. We consider a different class of inequalities for this special case of $q$.
If all interdicted arc probabilities are $0$, $\bar{h}_{P}(x)$ simply reduces to
$$\bar{h}_{P}(x)=\left[\prod_{a\in P}r_{a}\right]\left[\prod_{a\in P\cap D}(1-x_{a})\right].$$
If any arc $a\in P\cap D$ is interdicted, $\bar{h}_{P}(x)$ equals $0$. By setting $\hat{\pi}\coloneqq\pi/\prod_{a\in P}r_{a}$, our relevant mixed-integer linear set is
$$H_{P}=\left\{(x,\hat{\pi})\in\{0,1\}^{D}\times\mathbb{R}\colon\hat{\pi}\geq\textstyle\prod_{a\in P\cap D}(1-x_{a})\right\}.$$
We are again interested in valid inequalities for $H_{P}$. The inequalities
$$\hat{\pi}\geq 1-\sum_{a\in P\cap D}x_{a}$$
(20)
and $\hat{\pi}\geq 0$ are valid for $H_{P}$ [8] and define its convex hull [3].
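That these two inequalities capture the product bound on binary points can be checked directly; three arcs suffice for the illustration.

```python
# Check that, on 0/1 points, inequality (20) together with π̂ >= 0 reproduces
# the product bound π̂ >= prod_{a in P ∩ D} (1 - x_a). Three hypothetical arcs.
from itertools import product

for x in product([0, 1], repeat=3):
    prod_bound = (1 - x[0]) * (1 - x[1]) * (1 - x[2])
    linear_bound = max(0, 1 - sum(x))   # tighter of (20) and π̂ >= 0
    assert linear_bound == prod_bound
print("bounds agree on all 0/1 points")
```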
4.3 Inequalities for the mixed case
We now consider the case where $q_{a}>0$ for some, but not all, $a\in D$. Let $P_{+}\coloneqq\{a\in P\cap D\colon q_{a}>0\}$ and $P_{0}\coloneqq\{a\in P\cap D\colon q_{a}=0\}$.
For $S\subseteq A$ let $r(S)\coloneqq\prod_{a\in S}r_{a}$. We write $\bar{h}_{P}(x)$ as
$$\bar{h}_{P}(x)=r(P\setminus D)\,\bar{h}_{P_{+}}(x)\,\bar{h}_{P_{0}}(x).$$
(21)
We introduce two new variables, $\pi_{+}\in\left[0,r(P_{+})\right]$ and $\pi_{0}\in[0,r(P_{0})]$ to
represent $\bar{h}_{P_{+}}(x)$ and $\bar{h}_{P_{0}}(x)$, respectively, and arrive at the formulation:
$$\pi\geq r(P\setminus D)\,\pi_{+}\pi_{0}$$
(22)
$$\pi_{+}\geq\bar{h}_{P_{+}}(x)$$
(23)
$$\pi_{0}\geq\bar{h}_{P_{0}}(x).$$
(24)
Our approach is to use results from Sections 4.1 and 4.2 to derive valid inequalities for
(23) and (24), respectively, and
relax the nonconvex constraint (22) using the McCormick inequalities [15], which in this
case reduce to
$$\pi\geq r(P\setminus D)\left[r(P_{0})\pi_{+}-r(P_{+})(r(P_{0})-\pi_{0})\right]$$
(25)
and $\pi\geq 0$.
Let $\gamma^{S}\in\mathbb{R}^{P_{+}}$ and $\delta^{S}\in\mathbb{R}$ be the coefficient vector and constant term of one inequality of the form (16) (with $P_{+}$ taking the place of $P$) for some $S\subseteq P_{+}$; an inequality of the form (17) may be used instead. We relax (23) to the constraint
$$\pi_{+}\geq\sum_{a\in P_{+}}\gamma^{S}_{a}x_{a}+\delta^{S}.$$
(26)
Inequality (24) is relaxed as $\pi_{0}\geq 0$, and, applying (20),
$$\pi_{0}\geq r(P_{0})\left(1-\sum_{a\in P_{0}}x_{a}\right).$$
(27)
Finally, we project variables $\pi_{+}$ and $\pi_{0}$ out of inequality (25) using Fourier-Motzkin elimination with constraints (26)–(27) and $\pi_{0}\geq 0$. This results in the two inequalities
$$\pi\geq r(P\setminus D)\,r(P_{0})\left[\sum_{a\in P_{+}}\gamma^{S}_{a}x_{a}+\delta^{S}-r(P_{+})\sum_{a\in P_{0}}x_{a}\right]$$
(28)
$$\pi\geq r(P\setminus D)\,r(P_{0})\left[\sum_{a\in P_{+}}\gamma^{S}_{a}x_{a}+\delta^{S}-r(P_{+})\right].$$
(29)
Inequality (29) is dominated by $\pi\geq 0$, because $\sum_{a\in P_{+}}\gamma^{S}_{a}x_{a}+\delta^{S}\leq\pi_{+}\leq r(P_{+})$. Therefore, inequality (28) is the only inequality we obtain from this
procedure for the given inequality (16) defined by $S$.
We next argue that any integer solution $(\bar{x},\bar{\pi})\in\{0,1\}^{D}\times\mathbb{R}_{+}$ that is infeasible
(i.e., $(\bar{x},\bar{\pi})\notin H_{P}$)
can be efficiently cut off by an inequality of the form (28). Indeed,
first observe that if $\bar{x}_{a}=1$ for any $a\in P_{0}$ then $\bar{h}_{P}(\bar{x})=0\leq\bar{\pi}$, by
assumption, and hence the solution is feasible. So, we may assume $\bar{x}_{a}=0$ for all $a\in P_{0}$. We set $S=\{a\in P_{+}:\bar{x}_{a}=1\}$, which yields
$\sum_{a\in P_{+}}\gamma^{S}_{a}\bar{x}_{a}+\delta^{S}=h_{P_{+}}(S)=\bar{h}_{P_{+}}(\bar{x})$.
Inequality (28) thus yields
$$\pi\geq r(P\setminus D)\,r(P_{0})\left[\bar{h}_{P_{+}}(\bar{x})-0\right]=\bar{h}_{P}(\bar{x})$$
by (21), since $\bar{h}_{P_{0}}(\bar{x})=r(P_{0})$. Because infeasibility means $\bar{\pi}<\bar{h}_{P}(\bar{x})$, this inequality cuts off $(\bar{x},\bar{\pi})$.
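The McCormick step used above can be sanity-checked numerically: the bound (25), stripped of the common factor $r(P\setminus D)$, never exceeds the bilinear term over the box. The bounds below are hypothetical.

```python
# Grid check that the McCormick bound underlying (25) underestimates the
# bilinear term pi_+ * pi_0 over [0, r(P_+)] x [0, r(P_0)].
# Box bounds are hypothetical stand-ins for r(P_+) and r(P_0).
R_plus, R_zero = 0.9, 0.7

ok = True
steps = 20
for i in range(steps + 1):
    for j in range(steps + 1):
        pp = R_plus * i / steps
        p0 = R_zero * j / steps
        # bracketed term of (25) without the r(P \ D) factor; may be negative
        mccormick = R_zero * pp - R_plus * (R_zero - p0)
        ok = ok and mccormick <= pp * p0 + 1e-12
print(ok)  # True
```

The check succeeds because $r(P_{0})\pi_{+}+r(P_{+})\pi_{0}-r(P_{+})r(P_{0})\leq\pi_{+}\pi_{0}$ is equivalent to $(r(P_{+})-\pi_{+})(r(P_{0})-\pi_{0})\geq 0$.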
4.4 Path-based branch-and-cut algorithm
We next describe how the inequalities derived in Sections 4.1–4.3 can be used within a
branch-and-cut algorithm to solve the path-based formulation (10).
Similar to the use of Benders decomposition in a branch-and-cut algorithm described in Section 2.2, the algorithm is
based on solving a master problem via branch-and-cut in which the constraints (10c) are relaxed and approximated with
cuts.
Let $(\bar{x},\bar{\pi})$ be a solution obtained in the branch-and-cut algorithm with integral $\bar{x}$.
Since constraints (10c) are relaxed, we must check whether this solution satisfies them and, if not, find a cut that it violates.
We assign arc traversal probabilities $\sigma_{a}$ for $a\in D$ according to the formula
$$\sigma_{a}=r_{a}^{1-\bar{x}_{a}}\,q_{a}^{\bar{x}_{a}}.$$
(30)
For each $\omega=(s,t)\in\Omega$, we find a maximum-reliability path $\bar{P}\in\mathcal{P}_{st}$. If $\bar{\pi}_{s}^{t}\geq\bar{h}_{\bar{P}}(\bar{x})$ then this solution is feasible to (10c) for this $\omega$, since, by
construction, $\bar{h}_{\bar{P}}(\bar{x})=\max_{P\in\mathcal{P}_{st}}\bar{h}_{P}(\bar{x})$. Otherwise, depending on whether $q_{a}=0$ for none, some, or all $a\in\bar{P}$, we derive a cut from one of Sections 4.1–4.3, using
this path $\bar{P}$ and scenario $\omega$. By construction, for the solution $\bar{x}$, this cut enforces that $\pi_{s}^{t}\geq\bar{h}_{\bar{P}}(\bar{x})$, and hence (i) cuts off the current infeasible solution $(\bar{x},\bar{\pi})$, and
(ii) implies that any solution $(x,\pi)$ of the updated master problem with $x=\bar{x}$ will satisfy
(10c) for this $\omega$. This ensures
that the branch-and-cut algorithm is finite (since at most finitely many such cuts are needed at finitely many integer solutions) and
correct (since any infeasible solution obtained in the algorithm will be cut off).
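The cut-generation subproblem for an integral $\bar{x}$ thus amounts to a maximum-reliability path computation, which can be done with Dijkstra's algorithm on arc costs $-\log\sigma_{a}$ (assuming $\sigma_{a}>0$). A minimal sketch on a hypothetical three-node network:

```python
# Sketch: assign sigma_a = r_a^{1 - x̄_a} * q_a^{x̄_a} as in (30), then find a
# maximum-reliability s-t path with Dijkstra on weights -log sigma_a.
# Assumes sigma_a > 0; network data are hypothetical.
import heapq
import math

def max_reliability_path(nodes, arcs, sigma, s, t):
    """Return (reliability, path) of the most reliable s-t path."""
    adj = {i: [] for i in nodes}
    for (i, j) in arcs:
        adj[i].append((j, -math.log(sigma[(i, j)])))   # additive costs
    dist = {i: math.inf for i in nodes}
    prev = {}
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, i = heapq.heappop(pq)
        if d > dist[i]:
            continue
        for j, w in adj[i]:
            if d + w < dist[j]:
                dist[j] = d + w
                prev[j] = i
                heapq.heappush(pq, (d + w, j))
    path, i = [], t
    while i != s:                  # reconstruct arcs backwards from t
        path.append((prev[i], i))
        i = prev[i]
    return math.exp(-dist[t]), list(reversed(path))

nodes = ["s", "u", "t"]
arcs = [("s", "u"), ("u", "t"), ("s", "t")]
r = {("s", "u"): 0.9, ("u", "t"): 0.9, ("s", "t"): 0.7}
q = {a: 0.1 for a in arcs}
x_bar = {a: 0 for a in arcs}       # no arcs interdicted
sigma = {a: r[a] ** (1 - x_bar[a]) * q[a] ** x_bar[a] for a in arcs}
rel, path = max_reliability_path(nodes, arcs, sigma, "s", "t")
print(round(rel, 2), path)         # the two-arc route beats the direct arc
```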
One may also attempt to generate cuts at solutions $(\bar{x},\bar{\pi})$ in which $\bar{x}$ is not necessarily
integer, in order to improve the LP relaxation.
To find a path for a scenario $\omega=(s,t)$ in this case, we again use formula (30) to define arc
reliabilities, and then find the most reliable $s$-$t$ path. With this path, we again apply the methods in Sections
4.1–4.3 to attempt to derive a violated cut. When $\bar{x}$ is not integral, this
approach is not guaranteed to find a violated cut, even if one exists. Thus, to increase the chances a cut is found, we
may also consider identifying another path by assigning arc costs as follows:
$$\sigma_{a}=(1-\bar{x}_{a})r_{a}+\bar{x}_{a}q_{a},$$
and then finding a maximum-reliability path using these arc reliabilities.
Our preliminary tests did not reveal any obvious benefit of using one particular arc-reliability calculation method.
In our implementation, before starting the branch-and-cut algorithm we solve a relaxation of the master problem (10) in which constraints (10c)
are dropped, and the integrality constraints are relaxed. After solving this LP relaxation,
we attempt to identify violated cuts for the relaxation solution, and if found, we add them to the master relaxation and
repeat this process until no more violated cuts are found.
We then begin the branch-and-cut process with all cuts found when solving the LP relaxation included in the mixed-integer programming
formulation. This allows the solver to use these cuts to generate new cuts in the master problem relaxation. At any point in the
branch-and-bound process where a solution $(\bar{x},\bar{\pi})$ with $\bar{x}$ integral is obtained, we identify if
there is any violated cut (as discussed above) and if so add it to the formulation via the lazy constraint callback function.
5 Computational experiments
We test the different solution methods on SNIP instances from Pan and
Morton [21], which consist of a network of $783$ nodes and $2586$ arcs, $320$ of which
can be interdicted by the defender. Each instance considers the same $456$ scenarios.
We consider three cases for the $q$ vector outlined by Pan and
Morton [21]. In particular, we consider instances with $q\coloneqq 0.5r$, $q\coloneqq 0.1r$, and $q\coloneqq 0$. Pan and Morton also considered $q_{a}$ values independently sampled from a $U(0,0.5)$ distribution.
Since the DEF for most of these instances could be solved within $30$ seconds with little or no branching, we exclude
these from our experiments. The cost of interdiction is $c_{a}=1$ for all $a\in D$. Seven different budget levels were
tested for each network and $q$ level, and there are five test instances for each budget level and $q$ level.
We consider the following algorithms in our tests.
$\bullet$
DEF: Solve DEF (4) with a MIP solver
$\bullet$
C-DEF: Solve compact DEF (8) with a MIP solver
$\bullet$
BEN: Benders branch-and-cut algorithm on DEF (4); refer to (7)
$\bullet$
PATH: Branch-and-cut algorithm on path-based decomposition (1)
DEF and C-DEF are solved using the commercial MIP solver IBM ILOG CPLEX 12.6.3. BEN is the
Benders branch-and-cut algorithm as implemented in Bodur et al. [4, Section 4]. The branch-and-cut algorithms, BEN and PATH, were
implemented within the CPLEX solver using lazy constraint callbacks.
Ipopt 3.12.1 was used to solve nonlinear models (18) and (19) in PATH.
The computational tests were run on a 12-core machine with two 2.66GHz Intel Xeon X5650 processors and
128GB RAM, and a one-hour time limit was imposed. We allowed four threads for DEF, while all other algorithms were limited
to one thread. We used the CPLEX default relative gap tolerance of $10^{-4}$ to terminate the branch-and-bound process. The algorithms were written in Julia 0.4.5 using the Julia for Mathematical Optimization framework [14].
In our implementation of the BEN and PATH algorithms, when generating cuts at the root LP relaxation, we used the following
procedure to focus the computational effort on scenarios that yield cuts. After each iteration, we make a list of
scenarios for which violated cuts were found in that iteration. Then, in the next iteration, we first only attempt to identify cuts for scenarios in this list. If
successful, the list is further reduced to the subset of scenarios that yielded a violated cut. If we
fail to find any violated cuts in the current list, we re-initialize the list with all scenarios and attempt to identify violated cuts for each scenario.
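The scenario-list bookkeeping can be sketched as follows; the separation routine here is a stub with a hypothetical interface, not the paper's implementation.

```python
# Sketch of the scenario-list procedure above. `try_separate(w)` is assumed
# to return a cut (any object) for scenario w, or None if no cut is violated.

def cutting_plane_loop(all_scenarios, try_separate, max_iters=100):
    active = list(all_scenarios)       # scenarios to try this iteration
    cuts = []
    for _ in range(max_iters):
        productive, new_cuts = [], []
        for w in active:
            cut = try_separate(w)
            if cut is not None:
                productive.append(w)
                new_cuts.append(cut)
        if new_cuts:
            cuts.extend(new_cuts)
            active = productive        # shrink list to productive scenarios
        elif len(active) < len(all_scenarios):
            active = list(all_scenarios)   # re-initialize with all scenarios
        else:
            break                      # no scenario yields a cut: done
    return cuts

# Toy separation oracle: each scenario yields a fixed number of cuts.
budget = {"w1": 2, "w2": 0, "w3": 1}

def try_separate(w):
    if budget[w] > 0:
        budget[w] -= 1
        return ("cut", w)
    return None

cuts = cutting_plane_loop(["w1", "w2", "w3"], try_separate)
print(len(cuts))  # 3
```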
Table 1 reports the average computational time, in seconds, over the five instances for each combination
of the vector $q$ and budget level $b$. The average times include only instances solved within the time limit; the number of unsolved instances is shown in parentheses. We find that PATH and C-DEF are consistently faster than both DEF and BEN.
PATH and C-DEF have comparable performance, with C-DEF being somewhat faster on the $q=0.5r$ instances
(the easiest set) and PATH being somewhat faster on the $q=0.1r$ instances (the hardest set).
Table 2 reports the root gap after adding LP cuts for the decomposition algorithms BEN and PATH,
relative to the optimal solution value, before and after the addition of cuts from CPLEX. The root gap is calculated as $(z^{*}-z^{LP})/z^{*}$, where $z^{*}$ is the
instance’s optimal value and $z^{LP}$ is the objective value obtained after running the initial cutting plane loop on
the relaxed master problem. The reported value is the arithmetic mean over the five instances for each combination of
interdicted traversal probabilities $q$ and budget level $b$. DEF, C-DEF, and BEN all have the same LP relaxation value,
so we show this gap only for BEN. The columns under the heading “LP relaxation after CPLEX cuts” refer to the gap that
is obtained at the root node, after
CPLEX finishes its process of adding general-purpose cuts. We exclude DEF from these results since in many instances the
root node has not completed processing in the time limit.
These results show that the initial LP relaxation value obtained using the cuts in the PATH method is very similar to
that obtained using the BEN (or the DEF or C-DEF formulations).
We also find that the general purpose cuts in CPLEX significantly improve the relaxation value for all formulations,
with the greatest improvement coming in the C-DEF formulation. The gaps obtained after the addition of CPLEX cuts are
quite similar between PATH and BEN, with the exception of the $q=0$ instances, where the CPLEX cuts seem to be more
effective for the BEN formulation.
Table 3 displays the average number of branch-and-bound nodes of each algorithm and the time spent generating cuts in BEN and PATH. This includes time spent generating cuts during the initial cutting plane loop as well as within the branch-and-cut process. The mean is over instances that solved within the time limit.
Consistent with the results on root relaxation gaps, we find that C-DEF requires the fewest branch-and-bound nodes, and that the PATH and BEN methods require a similar number of nodes.
On the other hand, significantly less time is spent generating cuts in the PATH method than in the BEN method,
which explains the better performance of PATH.
In particular, generating cuts in PATH requires solving a maximum-reliability path problem for each destination node, as
opposed to the Benders approach which requires solving a linear program for each scenario.
6 Conclusion
We have proposed two new methods for solving maximum-reliability SNIP. The first
method is to solve a more compact DEF, and the second is based on a reformulation derived from considering the
reliability for each path separately. Both of these methods outperform state-of-the-art methods on a class of SNIP
instances from the literature. An interesting direction for future research is to investigate whether ideas similar to the path-based
formulation might be useful for solving different variants of SNIP, such as the
maximum-flow network interdiction problem.
Leptogenesis
E.A. Paschos
Abstract
I present the theoretical basis for
Leptogenesis and its implications for the structure of
the universe. It is suggested that density fluctuations
grow during the transition period and remnants of this
effect should be sought in the universe. The relation
between theories with Majorana neutrinos and low energy
phenomena, including oscillations, advanced considerably
during the past two years with a consistent picture
developed in several models.
PRAMANA © Indian Academy of Sciences — journal of physics
University of Dortmund, Institute of Physics, 44221 Dortmund, Germany
Keywords. Leptogenesis, Baryon asymmetry, Majorana
neutrinos, density fluctuations
PACS Nos 13.15.+g, 12.15.Ff, 14.60.Gh
1. Introduction
The discovery of neutrino oscillations signals the
extension of the standard model with the introduction
of right–handed neutrinos. The evidence so far shows
the mixing of $\nu_{e}\to\nu_{\mu}$ (solar) and
$\nu_{\mu}\to\nu_{\tau}$ (atmospheric) with very
small mass differences and large mixing angles.
The mixing of $\nu_{e}\to\nu_{\tau}$ has not been observed
yet and consequently the mixing angle must be much
smaller.
The oscillations require right–handed components
for the states in order to produce neutrino masses.
Introducing the right–handed components of neutrinos
leads to several new issues to be investigated.
1)
Are the neutrinos Dirac or Majorana
particles? In other words, should we introduce only Dirac-type terms in the mass matrices, or also Majorana mass terms? If we follow the philosophy that all allowed mass terms must be included, then Majorana terms must be present.
2)
Is the mixing observed so far related
to other phenomena? An attractive possibility
is related to the generation of a lepton asymmetry
in the early universe, which was subsequently
converted into a Baryon asymmetry.
3)
Is it possible to find evidence for
the Majorana nature of neutrinos?
4)
Are the low energy phenomena related
to the high energy theory we need for Leptogenesis?
This field has become very interesting because
there are many unanswered questions, some of
which can be investigated experimentally. I will
address some of these issues in this talk.
2. Majorana Neutrinos
The neutrinos participate in their observed
interactions as
left–handed particles. The right–handed component
of the Dirac spinor was intentionally left out believing that
neutrinos were massless. The mixing phenomena
require the introduction of right–handed components.
With them we have the choice of two mass terms
$$\bar{\nu}_{L}N_{R}\qquad{\rm Dirac}\qquad\Delta L=0$$
(1)
$$\bar{N}_{R}^{C}N_{R}\qquad{\rm Majorana}\qquad\Delta L=2\,.$$
(2)
One may think of introducing also the term
$$\bar{\nu}_{L}^{C}\nu_{L}$$
(3)
but this carries weak isospin of two units and must
couple to a Higgs triplet which is absent in the
standard model. It is natural to introduce the terms
in (1) and (2) and look for solutions of the mass
matrix. In general, we obtain solutions
$$\psi=\frac{1}{\sqrt{2}}(N_{R}+N_{R}^{C})$$
(4)
which are Majorana states and even under charge
conjugation (C). Introducing bare Majorana mass
terms, like the one in eq. (2), one obtains
physical states which are C–eigenstates. In
theories with Majorana mass terms it is possible
to introduce interactions with scalar particles,
like
$$h_{ij}\,\bar{\ell}_{L}^{i}\,\phi\,\psi_{j}$$
(5)
where $\ell_{L}^{i}$ is the left–handed doublet (leptons),
$\phi$ the ordinary Higgs doublet and $h_{ij}$ the
coupling constant with $i$ and $j$ referring to
various generations.
The result is that such interactions together with
the mass terms produce physical states which are neither
C– nor CP–eigenstates.
It was emphasized in the article by Fukugita
and Yanagida [?] that the decays of heavy
Majorana states generate a lepton asymmetry. Later on,
it was observed by Flanz, Sarkar and myself [?]
that the construction of the physical states contains
an additional lepton asymmetry.
In the latter case, the situation is analogous to the
$K^{0}$ and $\bar{K}^{0}$ states, which mix through the box
diagrams and produce $K_{L}$ and $K_{S}$ as physical states.
The mixing of the neutrinos originates from fermionic
self–energies which contribute to the mass matrix of
the particles. The final result is the creation of
an asymmetry in the decays of heavy Majorana particles,
which depends on a modified Dirac term $\tilde{m}_{D}$ of
the mass matrix (the modification is discussed in
section 5). The final result is
$$\varepsilon=\frac{\Gamma(N_{R_{i}}\to\ell)-\Gamma(N_{R_{i}}\to\ell^{c})}{\Gamma(N_{R_{i}}\to\ell)+\Gamma(N_{R_{i}}\to\ell^{c})}$$
(6)
$$=\frac{1}{8\pi v^{2}}\,\frac{1}{(\tilde{m}_{D}^{\dagger}\tilde{m}_{D})_{11}}\sum_{j=2,3}{\rm Im}\left((\tilde{m}_{D}^{\dagger}\tilde{m}_{D})_{1j}^{2}\right)f(x)$$
(7)
with
$f(x)=\sqrt{x}\left\{\frac{1}{1-x}+1-(1+x)\ln\left(\frac{1+x}{x}\right)\right\}$,
$x=\left(\frac{M_{j}}{M_{1}}\right)^{2}$, $M_{1}$ the mass
of the lightest Majorana neutrino and $v$ the vacuum
expectation value of the standard Higgs. The term
$\left(\frac{1}{1-x}\right)$ comes from the mixing of
the states [?] and the rest from the interference of
vertex corrections with Born diagrams [?].
The above formula is an approximation for the case
when the two masses are far apart. In case they are
close together there is an exact formula, showing
clearly a resonance phenomenon [?] from
the overlap of the two wave functions.
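Equations (6)-(7) are straightforward to evaluate numerically. The sketch below is a minimal illustration, not a fit to data: the Dirac matrix entries, the CP phase, and the heavy-mass spectrum are invented, and the hierarchical-limit formula (7) is assumed to apply.

```python
import cmath
import math

def f(x):
    """Loop function of eq. (7): the 1/(1-x) piece comes from the
    mixing of the states, the rest from the vertex-Born interference."""
    return math.sqrt(x) * (1.0 / (1.0 - x) + 1.0
                           - (1.0 + x) * math.log((1.0 + x) / x))

def epsilon(m_D, M, v=174.0):
    """CP asymmetry of eqs. (6)-(7) for the lightest heavy state.
    m_D: 3x3 complex Dirac matrix in the basis where M_R is diagonal
    (i.e. the tilde-matrix of eq. (17)); M: (M_1, M_2, M_3) in GeV."""
    h = [[sum(m_D[k][i].conjugate() * m_D[k][j] for k in range(3))
          for j in range(3)] for i in range(3)]       # m_D^dagger m_D
    s = sum((h[0][j] ** 2).imag * f((M[j] / M[0]) ** 2) for j in (1, 2))
    return s / (8.0 * math.pi * v ** 2 * h[0][0].real)

# hypothetical inputs: Dirac entries in GeV with a single CP phase,
# hierarchical heavy Majorana masses in GeV
m_D = [[0.1, 0.05 * cmath.exp(0.3j), 0.0],
       [0.05, 1.0, 0.2],
       [0.0, 0.2, 10.0]]
e = epsilon(m_D, (1.0e10, 1.0e11, 1.0e12))
print(f"epsilon = {e:.2e}")
```

Note that $\varepsilon$ vanishes if all entries of $\tilde{m}_{D}^{\dagger}\tilde{m}_{D}$ are real: the asymmetry is driven entirely by the imaginary parts, i.e. by the CP phases.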
The origin of the asymmetry has also been studied in
field theories [?] and supersymmetric models [?,?]. The purpose of these articles,
especially [?], was to justify the formalism
and eliminate some objections.
3. (B–L)–Asymmetry
According to the scenario described above, the
(B–L) quantum number is violated easily through
Majorana neutrinos. We assign to particles lepton
and baryon quantum numbers, as follows:
$n_{B}=1/3$ for each baryon, $n_{L}=+1$ for each
lepton and the negative numbers for their antiparticles.
Then there is a combined number
$$n_{C}=3n_{B}-n_{L}=B-L$$
which is conserved. More explicitly we assign
$$n_{L}=-1,\qquad n_{C}=+1\qquad{\rm for\ antimuons\ }\mu^{+}{\rm\ and\ }\bar{\nu}_{\mu}$$
(8)
$$n_{L}=+1,\qquad n_{C}=-1\qquad{\rm for\ muons\ }\mu^{-}{\rm\ and\ }\nu_{\mu}$$
(9)
$$n_{B}=+1/3,\qquad n_{C}=+1\qquad{\rm for\ baryons:\ protons\ and\ neutrons}$$
(10)
$$n_{B}=-1/3,\qquad n_{C}=-1\qquad{\rm for\ antiprotons\ and\ antineutrons}$$
(11)
Under laboratory conditions processes involving
violation of $n_{B}$ and $n_{L}$ play a negligible
role, but they were important during the early
stage of the universe. Violation of $n_{B}$ would
produce proton decays, whose rates have so far been
shown to be very
small. It was thought that Grand Unified Theories
(GUT) could violate the (B–L) or the B quantum
number. However, there is a theorem which says
that the decays of heavy Gauge Bosons into quarks
and leptons with the conventional quantum numbers
involve operators of dimensions higher than six [?].
This suppresses proton decay and the generation
of a baryon asymmetry in GUT. The standard model
obeys this rule, but it has another property:
topological solutions of the theory (sphalerons)
conserve B–L to a high degree of accuracy but
violate B+L [?]. Thus there is the attractive
possibility to generate a net (B–L) in the decays
of heavy Majorana particles and subsequently convert
a fraction of it into a baryon asymmetry. This
scenario was named Leptogenesis and presents an
attractive possibility, perhaps the only viable
one, for Baryon generation.
4. Lepton Asymmetric Universe
As mentioned already, a lepton asymmetry is generated
either in the decays or the construction of the
physical states. The two possibilities may be
distinguished by their consequences. The C– and
CP–asymmetric states will be physical and propagating
states, if during their lifetimes they interact many
times with particles in their surroundings.
A typical interaction is shown in Fig. 1 with the
two couplings being present in the theory.
Taking the density of states in the early universe to be
$n=\frac{2}{\pi^{2}}T^{3}$
and calculating the cross section at an energy $E=T$,
we obtain
$$n\cdot\sigma\cdot u=\frac{|h_{t}|^{2}|h_{\ell}|^{2}}{8\pi^{3}}\,T$$
(12)
with $\sigma$ the cross section, $u$ the relative
velocity of the particles, and $h_{t}$, $h_{\ell}$ the
couplings of the
Higgses to quarks and leptons, respectively.
At that early time the decay width of the moving
heavy neutrinos with mass $M_{N}$ is
$$\Gamma_{N}=\frac{|h_{\ell}|^{2}}{16\pi}\frac{M_{N}^{2}}{T}\,,$$
(13)
consequently
$$\frac{n\cdot\sigma\cdot u}{\Gamma_{N}}=\frac{2}{\pi^{2}}\,|h_{t}|^{2}\left(\frac{T}{M_{N}}\right)^{2}\,.$$
(14)
Thus, at an early stage of the universe with $T\gg M_{N}$,
when the mixed states are created, they live long enough
so that
in one life–time they have many interactions with
the surroundings. They are incoherent states.
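The scaling in eq. (14) is easy to make quantitative. The short sketch below evaluates the ratio for an illustrative coupling of order the top Yukawa and $M_{N}=10^{8}$ GeV (both values are assumptions for the illustration): for $T\gg M_{N}$ a state scatters thousands of times per lifetime and is incoherent, while for $T\lesssim M_{N}$ it decays essentially freely.

```python
import math

def interactions_per_lifetime(T, M_N, h_t=1.0):
    """Eq. (14): the number of scatterings a mixed heavy state
    undergoes in one decay time at temperature T (natural units,
    T and M_N in GeV, h_t the coupling to quarks)."""
    return (2.0 / math.pi ** 2) * abs(h_t) ** 2 * (T / M_N) ** 2

# hypothetical values: top-Yukawa-sized coupling, M_N = 1e8 GeV
M_N = 1.0e8
for T in (1.0e10, 1.0e9, 1.0e8, 1.0e7):
    print(f"T = {T:.0e} GeV: n*sigma*u/Gamma_N = "
          f"{interactions_per_lifetime(T, M_N):.2e}")
```

The crossover of this ratio through unity marks the temperature range, quoted below as roughly $10\,M_{1}$ down to $M_{1}/10$, over which the incoherent description applies.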
As the universe started deviating from
thermal equilibrium, it became lepton asymmetric and
the transition period lasted for a
relatively long time [?,?]. Estimates for the decays of the
lightest Majorana neutrino [?] and the development of the lepton asymmetry
are shown in Fig. 2.
The total asymmetry is given by
$$Y=D\epsilon$$
(15)
with $D$ a dilution factor and $\epsilon$ given in eq. (6). The dilution
factor is obtained from solutions of the Boltzmann equations [?],
which depend on $\kappa=\frac{\Gamma}{H}$ with $\Gamma$ the decay width and
$H$ the Hubble constant. The asymmetry starts from zero and grows,
eventually reaching a constant asymptotic value.
The figure indicates that the heavy states are
incoherent from a temperature $T_{h}=10\,M_{1}$
down to a temperature $T_{l}=M_{1}/10$.
During this period the right–handed neutrinos are
non–relativistic and will cause fluctuations to
grow. Heavy neutrinos, $M_{N}\sim 10^{8}$ GeV, have an
interaction length and an interaction horizon.
Fluctuations within the interaction horizon will be washed out,
but fluctuations at larger scales will grow. Matter
within the
larger region will continue gathering together,
forming gravitational wells which attract more
matter.
As the temperature of the universe approached the
mass of the lightest right–handed neutrinos the
recombination of lighter particles to Majorana
states ceased. From the decays of the Majorana
states comes the generation of a lepton asymmetry.
An excess of
leptons survives all the way down to the electroweak
phase transition. During their journey from the
decay of the Majoranas to the electroweak phase
transition a part of the leptons was transformed
into a baryon excess. With the conversion of
leptons into quarks, density fluctuations which
were formed during the transition period and later
on, are transformed into fluctuations of matter.
This scenario remains an attractive possibility
for generating the baryon asymmetry in the universe.
We need now observables supporting
this scenario.
Three interesting questions come to
mind.
(1) Is the excess of baryons produced in Leptogenesis consistent with the observed amount of matter relative to photons in the universe?
(2) Are the neutrino oscillations and the CP asymmetries observed at low energies related to the CP violation in the early universe?
(3) Are there remnants of this scenario at cosmological scales that verify and support, or perhaps contradict, Leptogenesis?
The answer to the first question is positive.
Numerical studies have shown that a consistent
picture emerges provided that
(i) the dilution factor in the out–of–equilibrium decays is $D\sim 10^{-3}$, and
(ii) the asymmetry $\varepsilon$ from individual decays is of order $10^{-4}$ to $10^{-5}$; for $g_{*}=100$ degrees of freedom this gives the correct amount of matter relative to photons.
The answer to the second question is again positive.
There is now a flurry of activity with many models
proposed to relate the laboratory observations with
Leptogenesis. I will discuss some of them in the
next section.
5. Consequences of Leptogenesis
Several groups developed models with massive neutrinos
which include neutrino oscillations, CP-violation
in the leptonic sector and the generation of a
lepton asymmetry. A general approach includes the
standard model in a larger symmetry group and shows
that the parameters of the theory are consistent
with both the low energy phenomena and the
generation of a lepton–asymmetry [?]– [?]. Most of these
models
incorporate the see–saw mechanism
$$m_{\nu}=m_{D}\,\frac{1}{M_{R}}\,m_{D}^{T}$$
(16)
with $m_{D}$ the Dirac mass matrix and $M_{R}$
the mass matrix for the right–handed neutrinos.
We note that the Dirac matrices occurring here and in
eq. (6) are slightly different. When we diagonalize
the right–handed mass matrix, a unitary matrix
$U_{R}$ appears. The lepton asymmetry involves
the product
$$\tilde{m}_{D}=m_{D}U_{R}$$
(17)
and thus the structure of the right–handed sector
influences the asymmetry.
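Equations (16)-(17) can be made concrete with a toy two-generation example. The sketch below is purely illustrative: the Dirac entries and heavy Majorana masses are invented numbers (chosen so the light masses land near the 0.1 eV scale), and $M_{R}$ is taken real symmetric so the rotation $U_{R}$ can be written in closed form.

```python
import math

def seesaw_2x2(m_D, M_R):
    """Toy two-generation see-saw, eq. (16): m_nu = m_D M_R^{-1} m_D^T.
    Also returns the rotation U_R diagonalizing the real symmetric M_R,
    since the asymmetry involves m_D-tilde = m_D U_R, eq. (17)."""
    a, b, c = M_R[0][0], M_R[0][1], M_R[1][1]
    theta = 0.5 * math.atan2(2.0 * b, a - c)     # eigen-rotation angle
    U_R = [[math.cos(theta), -math.sin(theta)],
           [math.sin(theta),  math.cos(theta)]]
    det = a * c - b * b
    M_inv = [[c / det, -b / det], [-b / det, a / det]]
    m_nu = [[sum(m_D[i][k] * M_inv[k][l] * m_D[j][l]
                 for k in range(2) for l in range(2))
             for j in range(2)] for i in range(2)]
    m_tilde = [[sum(m_D[i][k] * U_R[k][j] for k in range(2))
                for j in range(2)] for i in range(2)]
    return m_nu, m_tilde, U_R

# hypothetical inputs (GeV): Dirac entries near the charged-fermion
# scale, heavy Majorana masses near 1e10 and 1e12 GeV
m_nu, m_tilde, U_R = seesaw_2x2([[1.0, 0.1], [0.1, 10.0]],
                                [[1.0e10, 1.0e9], [1.0e9, 1.0e12]])
print([f"{m_nu[i][i] * 1e9:.3f}" for i in range(2)])  # diag of m_nu, in eV
```

The point of returning `m_tilde` alongside `m_nu` is exactly the one made above: the low-energy mass matrix involves $m_{D}$ alone, while the asymmetry involves $m_{D}U_{R}$, so the right-handed sector enters the asymmetry but largely drops out of the light spectrum.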
Now many models embed the standard model into a
larger group and determine $m_{D}$ from the low energy
structure of the theory, including recent observations,
and deduce $U_{R}$ from the structure of the enlarged
theory. The effort in this approach is to identify
general aspects common in several theories, like
SO(10), SU(5), Froggatt–Nielsen or texture models [?,?].
In several models the low energy phase from $m_{D}$
does not appear in the lepton asymmetry [?],
which means
that the entire phase comes from the right–handed
sector. However, there are models where the
phases which are responsible for leptogenesis are
the ones that generate CP–violation at low energies.
A second approach enlarges the group to be
left–right symmetric [?], i.e. $SU(2)_{L}\otimes SU(2)_{R}$.
In this case a Majorana mass term is also present
in the left–handed sector. The mass relation is
now modified
$$m_{\nu}=m_{LL}-m_{D}\,\frac{1}{M_{R}}\,m_{D}^{T}$$
(18)
where $m_{LL}$ is a Majorana mass term for the
left–handed neutrinos. The introduction of this
term is possible with the introduction of Higgs
triplets for the left– and right–handed sectors
of the theory. The mass matrices are given as
$$m_{L}=Fv_{L}\quad\quad{\rm and}\quad\quad m_{R}=Fv_{R}$$
with $F$ the same matrix and $v_{L,R}$ the vacuum
expectation values of the triplet Higgses in the
left and right sectors of the theory. The fact
that both mass matrices are proportional to the
matrix $F$ simplifies the situation, because the
unitary matrices which diagonalize the mass matrices
for left–handed and right–handed fermions are now
the same.
In the case that the see–saw term in eq. (18) is important,
only the top quark contributes to the Dirac matrix,
and a consistent solution was found. Alternatively,
when $m_{D}$ is proportional to the charged
lepton mass matrix, the see–saw term in eq. (18) is
negligible, because $M_{R}$ is very heavy [?].
In this case the out–of–equilibrium condition is
automatically fulfilled for typical values of the
parameters.
In addition, the baryon asymmetry is of the correct
magnitude and the large mixing angle solution for the
solar problem is preferred.
In the laboratory, lepton number violation produces
neutrinoless double beta decay of certain nuclei
$(A,Z)\to(A,Z+2)+2e^{-}$ [?,?]. The decay width in these
extremely rare processes is proportional to the
square of an effective neutrino mass
$$\langle m_{ee}\rangle=\sum_{i}U^{2}_{ei}m_{i}$$
originating from the Majorana sector. Since the
matrix elements $U_{ei}$ are complex, cancellations
can take place. It is therefore interesting to ask
if the values of the parameters, which produce an
acceptable baryon asymmetry, also deliver a sizeable
$\langle m_{ee}\rangle$. In the simple model,
mentioned in the previous paragraph, the effective
neutrino mass $\langle m_{ee}\rangle$ can have
values which are close to the experimental
bound on $\langle m_{ee}\rangle$ so that new
experiments will be sensitive to neutrinoless
double–beta decay.
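The cancellation mechanism in $\langle m_{ee}\rangle$ is simple to demonstrate numerically. The sketch below uses invented masses and mixing-matrix entries (they are not a fit to oscillation data): the same moduli $|U_{ei}|$ and masses give a very different $\langle m_{ee}\rangle$ once a Majorana phase is switched on.

```python
import cmath

def m_ee(U_e, m):
    """Effective Majorana mass |<m_ee>| = |sum_i U_ei^2 m_i| entering
    neutrinoless double-beta decay; complex U_ei allow cancellations."""
    return abs(sum(u ** 2 * mi for u, mi in zip(U_e, m)))

m = (0.002, 0.009, 0.05)             # hypothetical masses in eV
U_aligned = (0.82, 0.55, 0.15)       # real entries, no Majorana phases
U_phased = (0.82, 0.55 * cmath.exp(1j * cmath.pi / 2), 0.15)
print(f"{m_ee(U_aligned, m):.4f} eV vs {m_ee(U_phased, m):.4f} eV")
```

With the phase set to $\pi/2$ the second term flips sign inside the coherent sum and $\langle m_{ee}\rangle$ drops by more than an order of magnitude, which is why a small experimental value does not by itself exclude sizeable individual masses.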
6. CP Asymmetry in the Leptonic Sector
In parallel to these activities there are serious
plans to measure CP violation in the leptonic sector.
The experiments will measure the difference
$$P(\nu_{\alpha}\to\nu_{\beta})-P(\bar{\nu}_{\alpha}\to\bar{\nu}_{\beta})$$
which requires measuring $\nu_{\alpha}$, $\nu_{\beta}$,
and $\bar{\nu}_{\alpha}$, $\bar{\nu}_{\beta}$
interactions with hadrons at two different places. This demands
precise knowledge of the neutrino–hadron cross sections
at low energies. Even though neutrino–hadron
interactions have been studied for more than 30 years, the low energy
cross sections are still poorly known. In addition,
since the targets are very large, detailed instrumentation
of the detectors is not always possible. For these
reasons there is an increasingly active community of
physicists concerned with these matters.
They are engaged in measuring the neutrino cross
sections carefully and at the same time developing
accurate theoretical calculations.
The reactions of interest are
quasi–elastic scattering, resonance production and
the transition region
to deep inelastic scattering. There is already an
effort in this direction with a conference organized
every year (NUINT 01 and 02).
A second aspect deals with the fact that the reactions
occur in light and medium–heavy nuclei where additional
corrections are present. Fig. 3 shows the
neutrino nucleon cross section for $E_{\nu}<6$ GeV
from a Brookhaven experiment [?].
The contributions of
quasi–elastic scattering and resonance production
are clearly evident in the data up to $\sim 2$ GeV. They
produce the structure which looks like a step. The
theoretical curve reproduces the data [?]. More work and
cross–checks will be necessary in order to obtain a
precise understanding of the data, required in order
to be able to observe CP–violation. I presented only
one figure as an example, however, more results are
available and several studies are in progress [?].
7. Summary
Leptogenesis presents a very attractive mechanism for
triggering the baryon asymmetry in the universe. It
has implications for the development of fluctuations
and inhomogeneities triggered by Majorana neutrinos
and later on by ordinary matter. This topic
is very interesting and deserves further investigation [?].
A second development studies the connection between
the theoretical framework proposed for the heavy
Majorana states and laboratory phenomena including
neutrino oscillations. Here there are many models
which try to find out common aspects and fulfill the
general conditions required by Leptogenesis.
A consistent picture has been developed in many models
and new developments are expected.
8. Acknowledgment
I wish to thank Dr. W. Rodejohann for discussions and Dr. A.
Srivastrava for correspondence concerning matter and lepton
density fluctuations during the transition period.
REFERENCES
[1]
M. Fukugita and T. Yanagida,
Phys. Lett. B174 (1986) 45, the ideas
were discussed in this reference with the detailed
analysis presented by M. Luty, Phys. Rev.
D45 (1992) 465
[2]
M. Flanz, E.A. Paschos and
U. Sarkar, Phys. Lett. B354
(1995) 248
[3]
M. Flanz, E.A. Paschos, U. Sarkar
and J. Weiss, Phys. Lett. B389
(1996) 693
[4]
A. Pilaftsis, Phys. Rev.
D56 (1997) 5431
[5]
L. Covi, E. Roulet and F. Vissani,
Phys. Lett. B384 (1996) 169
[6]
W. Buchmüller and M. Plümacher,
Phys. Lett. B431 (1998) 354
[7]
S. Weinberg, Phys. Rev. Lett.
43 (1979) 1566;
F.W. Wilczek and A. Zee,
Phys. Rev. Lett. 43 (1979) 1571;
R. Barbieri, D. Nanopoulos and A. Masiero,
Phys. Lett. B98 (1981) 191
[8]
S.Yu. Khlebnikov and M.E. Shaposhnikov,
Nucl. Phys. B308 (1988) 885
and
references therein
[9]
M. Flanz and E.A. Paschos, Phys. Rev. D58 (1998) 113009
[10]
E. Akhmedov, V.A. Rubakov and A.Y. Smirnov,
Phys. Rev. Lett. 81 (1998) 1359
[11]
J. Ellis, S. Lola and D.V. Nanopoulos,
Phys. Lett. B452 (1999) 87
[12]
G. Lazarides and N. Vlachos, Phys. Lett.
B459 (1999) 482
[13]
M.S. Berger and B. Brahmachari,
Phys. Rev. D60 (1999) 073009
[14]
K. Kang, S.K. Kang and U. Sarkar, Phys. Lett. B486 (2000) 391
[15]
H. Goldberg, Phys. Lett.
B474 (2000) 389
[16]
M. Hirsch and S.F. King,
Phys. Rev. D64 (2001) 113005 and
hep–ph/0211228
[17]
H. Nielsen and Y. Takanishi,
Phys. Lett. B507 (2001) 241
[18]
W. Buchmüller and D. Wyler, Phys. Lett.
B521 (2001) 291
[19]
D. Falcone and F. Tramontano, Phys. Lett.
B506 (2001) 1;
F. Buccella et al., Phys. Lett.
B524 (2002) 241
[20]
G.C. Branco et al., Nucl. Phys.
B617 (2001) 475 and Nucl. Phys.
B640 (2002) 202.
[21]
A.S. Joshipura, E.A. Paschos and W. Rodejohann,
Nucl. Phys. B611 (2001) 227
and JHEP 0108 (2001) 29
[22]
J.A. Casas and A. Ibarra, Nucl. Phys.
B618 (2001) 171
[23]
M.N. Rebelo, Phys. Rev.
D67 (2003) 013008
[24]
W. Rodejohann, Phys. Lett.
B542 (2002) 100;
K.R.S. Balaji and W. Rodejohann, Phys. Rev. D65 (2002) 093009
[25]
S. Davidson and A. Ibarra,
Nucl. Phys. B648 (2003) 345
[26]
S. Kaneko and M. Tanimoto, Phys. Lett. B551 (2003) 127
[27]
N.J. Baker et al., Phys. Rev.
D25 (1982) 617
[28]
E.A. Paschos,
in Proc. Suppl. of Nucl. Phys. B112 (2002) 89
[29]
Proceedings of the Int. Workshop
NUINT 01 and in slides of NUINT 02
Spectroscopic diagnostics of dust formation and evolution in classical nova ejecta
Steven N. Shore (1,2), N. Paul Kuin (3), Elena Mason (4), Ivan De Gennaro Aquino (5)
(1) Dipartimento di Fisica “Enrico Fermi”, Università di Pisa; steven.neil.shore@unipi.it
(2) INFN - Sezione Pisa, largo B. Pontecorvo 3, I-56127 Pisa, Italy
(3) Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK
(4) INAF-OATS, Via G.B. Tiepolo, 11, I-34143 Trieste, Italy
(5) Hamburger Sternwarte, Gojenbergsweg 112, D-21029 Hamburg, Germany
(First version: 10/4/2018; revised 02/06/2018; accepted —)
Key Words.:
stars: novae, cataclysmic variables; line: profiles; stars: individual …
A fraction of classical novae form dust during the early stages of their outbursts. The classical CO nova V5668 Sgr (Nova Sgr. 2015b) underwent a deep photometric minimum about 100 days after outburst that was covered across the spectrum. A similar event was observed for an earlier CO nova, V705 Cas (Nova Cas 1993) and a less optically significant event for the more recent CO nova V339 Del (Nova Del 2013). This study provides a “compare and contrast” of these events to better understand the very dynamical event of dust formation. We show the effect of dust formation on multiwavelength high resolution line profiles in the interval 1200Å - 9200Å using a biconical ballistic structure that has been applied in our previous studies of the ejecta. We find that both V5668 Sgr and V339 Del can be modeled using a grey opacity for the dust, indicating fairly large grains ($\gtrsim$ 0.1$\mu$) and that the persistent asymmetries of the line profiles in late time spectra, up to 650 days after the event for V5668 Sgr and 866 days for V339 Del, point to the survival of the dust well into the transparent, nebular stage of the ejecta evolution. This is a general method for assessing the properties of dust forming novae well after the infrared is completely transparent in the ejecta.
1 Introduction
Dust formation in expanding ejecta and stellar atmospheres has been an open question for many decades. For classical novae, the possibility of some sort of grain production was first raised in the 1930s, connected with the early observations and interpretation of the photometric development of DQ Her 1934 (see Payne-Gaposchkin 1957). In this paper we study the recent deep minimum dust former V5668 Sgr as a paradigmatic case, employing multiwavelength spectroscopy using high spectral resolution. We show that such an approach provides unique information based on line profile development on the evolution and distribution of the dust during and after the grain formation stage. Specifically, we show how it is possible to consistently simulate the line profile changes during and after the event within a self-consistent model, a single ejection ballistically expanding shell, to derive information about the dust properties and survival. We present evidence that the dust survives the interval of intense XR irradiation during the optically thin supersoft X-ray source stage following the explosion, and that the grains are therefore likely to survive until mixed with the interstellar gas at late times. Finally, using the evidence from line profile and photometry, we connect the dust forming events in several recent CO novae with that in V5668 Sgr, especially V339 Del and V1369 Cen, and revisit the dust forming episodes in several historical classical novae.
2 Modeling the effect of dust on the profile
At present there is no “first principles” theory of dust formation in nova ejecta. A number of proposals, none definitive, have been presented over the years for the formation and growth of the grains. Each depends on a specific scenario, such as chemistry (Rawlings 1988), kinetic agglomeration and photoionization processing (Shore & Gehrz 2004), and shock driven chemistry (Derdzinski et al. 2017). All have in common that the matter must be sufficiently kinetically cold to allow grain growth but when this occurs is a disputed matter. Equally contentious is the fate of the grains. The DQ Her-like events have been the optical indication of dust formation and/or growth in the post-maximum stage, but the signature of the process is written in the infrared (e.g., Evans & Gehrz 2012). This includes continuum and PAH emission features in the near IR and the broad silicate-related emission features at 10-20$\mu$. The decrease in the IR luminosity is usually interpreted as grain destruction when the ejecta become sufficiently transparent that ultraviolet and X-ray illumination by the remnant hot white dwarf reaches the part of the ejecta harboring the newly formed dust. We will show in this section that there is a completely independent means for understanding the growth and destruction process in the ejecta using spectroscopic observations of systematic changes in the emission line profiles arising from the expanding ejecta.
In light of this theoretical uncertainty, we approach the problem phenomenologically. We assume that dust forms in an intermediate part of the ejecta where the temperature has fallen below some critical value (e.g. the Debye temperature) and that has sufficient density for the process to proceed. The ejecta are assumed to be ballistic (e.g. Shore 2013, Mason et al. 2018 and references therein) with an aspherical (bipolar) geometry. While the latter is a quite general feature of classical nova ejecta, it is not essential for the argument. In contrast, however, the ballistic nature of the event is basic. It provides a unique correspondence between a location in the line profile in radial velocity, relative to the observer, and radial position within the ejecta since the expansion velocity is linear relative to distance $r$ from the white dwarf. The geometric parameters are (i) the radial thickness of the ejecta scaled to the maximum radius $\Delta r/R(t)=\Delta r/(v_{max}t)$; (ii) the maximum (ballistic) expansion velocity $v_{max}$; (iii) the opening half-angle of the cone $\theta$; and (iv) the inclination, $i$, of the axis to the line of sight. Since the mass of the ejecta is constant, the density varies as $\rho\sim r^{-3}$, $\Delta r/R(t)$ is constant from self-similarity, and the dust optical depth varies with time as $\tau_{d}\sim t^{-2}$ (assuming no changes in grain properties, see below). The emission lines are assumed to be optically thin, formed under isothermal conditions, with an emissivity law $\epsilon\sim\rho^{n}$ where $n$ is either 1 or 2, appropriate for collisional excitation and recombination. To construct a profile, a set of uniform random numbers is generated
to choose the locations of parcels of gas within the ejecta. The
observed profile is then formed by integrating through the cone
whose inclination to the line of sight is $i$, and the profile is
binned in radial velocity, in most cases $\Delta v_{rad}=$ 30 km s${}^{-1}$. In most of the
simulations, to mimic the fragmented structure of the observed
profiles, a comparatively small number of packets are formed,
usually a (few)$\times 10^{4}$, hence the jagged appearance in the
profiles we present here. It has been noted in the observations
that there is considerable fine structure on the emission lines. Since we aim to capture the range of geometric parameters (e.g.
the solid angle subtended by the ejecta) rather than fit the individual
lines, these free parameters are varied until a good match is
obtained. To emphasize, we aim at constraints on the geometry, density structures, and dynamics and not at deriving specific parameter values by fitting the line profiles. If the grains appear in some interval $[r_{min},r_{max}]$, both scaled to $R$, this corresponds to some velocity interval $\pm[v_{rad,min},v_{rad,max}]$ so the obscuration can be easily included in the previously employed Monte Carlo modeling (e.g. De Gennaro Aquino et al. 2014, Mason et al. 2018) by inserting a grey absorber in the ejecta in the selected radial interval since the emitting gas and the dust coexist.
The effect of the dust absorption is to shadow a portion of the ejecta. Both lobes must be seen through the screen (which is also situated within the line forming gas), but the receding portion is the more heavily absorbed. Whatever the obscuration of the approaching lobe, and that will be mainly at the lower velocities, the receding part of the profile will always be weaker. Two simple consequences are that the profile will be asymmetric and, even at relatively low resolution, the mean emission will be blueshifted. We note that this is contrary to the effect of line opacity, as in a P Cyg profile. There is no fine tuning here: the dust opacity, being continuous, is grey across the line profile and the extinction depends only on the line-of-sight column density of the dust. How the line profile changes because of an increasingly larger dust column density is shown in Fig. 1. (It has been known for some time, since the original realization by Lucy et al. (1989) for SN 1987A and the more general study by Pozzo et al. (2004), that supernovae also show profile asymmetries that indicate dust formation; see, for instance, Bevan & Barlow (2016) and references therein. The nova case is less ambiguous, in large part due to the relative simplicity of the geometry and dynamics, and also because the ejecta masses are so small. Another reason is that the central source remains alight after the explosion and the ejecta are passive screens, unlike supernovae.)
The dust is assumed to have no effect on the intrinsic line formation: we treat it as an absorbing screen and, for simplicity, we ignore the effects of scattering on the NLTE line formation and photometry. We will, however, return to this point in the discussion (below) since it too has a consequence for the connection between the photometric and spectroscopic signatures for nonspherical ejecta. Even in cases where the line of sight barely intersects the inner ejecta because of the inclination of the ejecta, the line profiles will be asymmetric. Dust-forming novae that do not show a deep photometric minimum can still be identified by their spectroscopic development. The dust optical depth is derived from the geometric factors alone. The change in the dust luminosity and temperature can also be derived immediately from the ballistic scaling of $\tau_{d}$, although the value of the temperature and luminosity at any stage requires additional information (see discussion, below).
An interesting feature of the models is the information provided by the profiles on the radial location of the dust within the ejecta. We show examples of changing the thickness and location of the dust containing region of the ejecta in Fig. 2. If concentrated in the inner ejecta, the line asymmetry is pronounced but with no change in the maximum velocity. The thicker the zone, or the more peripheral it is, the greater the obscuration of the outer ejecta, and the maximum observed recession velocity decreases while the blue wing of the profile remains virtually unchanged. The effect of changing the inclination of the bicone relative to the line of sight is shown in Fig. 3. In each case, the top panel shows the predicted unobscured profile, the bottom shows the effect of the dust (at fixed radial location) for the same geometry. We are neglecting the effects of scattering (see, however, Shore (2013) and the discussion, below).
We now turn to the observations. In the next sections we show how this approach can be applied to two recent classical novae that are known, from multiwavelength observations, to have formed dust during their outbursts.
3 Observations and data reduction
Nova V5668 Sgr = nova Sgr 2015b was first detected by J. Seach (CBET 4080) on 2015 March 15.634 UT (JD 2457097.134, which we take as the zero point for dating) at a visual magnitude of around +6. The optical light curve post-maximum ($m_{V}\approx 4$) evinced a slow mean decrease while showing substantial, coherent excursions in magnitude over the next couple of months. The light curve continued in this manner until Jun. 1, when it entered the first stage of a major photometric decline of the sort experienced by DQ Her. The visual band light curve is shown in Fig. 4 based on AAVSO photometry. The multiwavelength view of the outburst has been presented by Gehrz et al. (2018), to which we refer the reader for more details. Infrared observations have also been described by Banerjee et al. (2016). Here we concentrate on the implications of the high resolution line profile developments as they reveal the dust formation and evolution.
Our observational sequence covers the early stages and extends past the end of the dust event (see appendix A for the journal of observations) using HST/STIS in the UV and with VLT/UVES in the optical. The UVES optical spectra, originally taken under the DDT program 294.D-5051, were downloaded from the ESO archive. The observations consist of ten epochs between day 63 and 186 after outburst and cover the wavelength range $\sim$3000-9500 Å at a resolution R$\sim$60000 (Table A1).
The STIS spectrum was taken on day 235 after outburst (DDT program 14449). The HST visit was only a single orbit and used two different setups of the medium resolution (R$\sim$30000) echelle grating E230M to cover the wavelength range 1200-2900 Å (Table A2). All the data sets were reduced using each instrument's pipeline. The optical spectra were flux calibrated using the master response function built on a number of spectrophotometric standard stars observed in photometric conditions. Blue losses below the Balmer series head occurred occasionally, so the absolute calibration shortward of $\approx$3800Å is unreliable.
The nova was also observed with the Neil Gehrels Swift Observatory/UVOT, which monitored the nova between day 100 and day 420 after outburst in the broadband filters UVM2, UVW1, UVW2 (having central wavelengths $\lambda_{c}$ 2246, 2600, and 1928 Å, respectively). Low resolution grism spectra (R$\sim$75-100 and $\lambda$-range=1700 to 5000Å) were taken between day 92 and 848 (see Tables A3, A4). The UVOT grism data reduction was done using the software of Kuin (2014) and the calibration of Kuin et al. (2015), version 2.2, flagging bad spectral areas (see http://www.mssl.ucl.ac.uk/www_astro/uvot/) and summing spectra from individual images to improve the signal to noise. For many spectra the brightest emission line cores exceeded the maximum brightness for recovering a flux calibration; the continuum and weaker lines were in the calibrated regime. Special care needs to be taken when interpreting the UVOT grism observations, see Appendix B for details. The accuracy of the UVOT photometry depends on the source's brightness. For sources with brightness between 23rd and 13th magnitude, the photometry is properly extracted using aperture photometry on the images. For brighter sources, the readout streak, which forms while the image data are moved to the readout buffer during a fraction of the frame time, has been calibrated to give photometry up to about 9th magnitude. Even brighter sources also produce a readout streak, but the derived brightness becomes inconsistent with magnitude; see Page et al. (2013).
4 V5668 Sgr through the dust event and its clearance
The optical and UV light curves of V5668 Sgr show that the deep minimum began around 90 days after outburst when infrared continuum measurements (Fig. 4) indicated a rapid increase (ATel #8753). Until that time, the V magnitude had been decreasing only a few tenths of a magnitude per day. Subsequently, the brightness rapidly decreased, reaching a minimum around day 120, followed by a slower recovery. We (somewhat arbitrarily) take the end of the deep optical minimum to be around day 170.
4.1 V5668 Sgr: ionization dependent profile structure and extinction
An important feature of the high resolution spectroscopic data is their ability to dissect the structure of the ejecta. The line profiles provide a picture of how the ionization changes within the ejecta because of the simplicity of the ballistic kinematics. We start with an example of one of the most important sets of lines, the [O I] 6300, 6364Å doublet. These are always optically thin and are ground state transitions. Their ratio, when the density is sufficiently low for the ejecta to be considered in the nebular stage, is 3 (by the time of the displayed observations the [O I] 5577Å line was no longer visible). Thus, when the densities are low and the overlying absorption from the metallic lines is gone, the profiles trace the density structure.
We show an example of the time sequence for the [O I] 6300Å line in Fig. 5. The panels show the ESO spectral sequence during the dust events in V5668 Sgr (left) and V339 Del (right). We concentrate, for the moment, on the V5668 Sgr sequence and will return to the comparative discussion in sec. 5. The spectrum variations show clearly the disappearance of the redward part of the profile as early as around day 90, just prior to the onset of the deep visual photometric minimum. Before that day, the profile was symmetric and the inferred $\tau_{d}$ was much smaller than unity. We note that the extreme positive radial velocity of the wings does not change significantly despite the onset of line asymmetry. We interpret this as the starting epoch for grain growth. How the different ionization stages provide information about the process can be seen from a comparison of lines of two nitrogen ions, N${}^{+}$ and N${}^{3+}$, shown in Fig. 6. We over-plot the model profiles for the two transitions using the optical depths derived from simulating the [O I] lines and the optical lines before the start of the photometric descent. Resonance lines in the ultraviolet spectrum are uniquely suited to studying the structure of the ejecta and the V5668 Sgr spectra provide a well distributed set of elements manifesting a range of ionization stages. In particular, nitrogen and carbon were present in three ionization stages, from singly to four times ionized. In addition, recombination lines of a range of ions were present. Table 1 presents the UV peak ratios measured on a broad range of ionization stage transitions for V5668 Sgr. In all profiles, the blue to red peak ratio is nearly the same. The dust therefore seems to have been obscuring the ejecta without a change in the ionization structure. We return to this point in sect. 6.
One way to check whether the line profile asymmetry is caused by density differences within the ejecta is to use the same procedure for obtaining the electron density that was described in Mason et al. (2018). We assumed the H$\gamma$ profile is the same as H$\beta$ on day 186, the last day for which we have high resolution ESO spectra. Taking E(B-V)=0.3 from the Ly$\alpha$ and 21 cm H I analysis (Gehrz et al. 2018), we scaled H$\beta$ (as a proxy for H$\gamma$) to subtract the contamination of [O III] 4363Å. The nebular line ratio F(5007Å)/F(4959Å) $\approx$3, so the lines are optically thin and the Balmer lines should have the same optically thin emission profiles. Were the peaks due to density differences, we would expect the diagnostics to indicate a strong asymmetry with radial velocity. In Fig. 7, we show this ratio and, even without deriving a specific value for the density, there is no strong asymmetry within the radial velocity range of the peaks.
For V5668 Sgr we have the advantage of a high resolution infrared spectrum taken at approximately the same time as the STIS observations that includes the He II 1.96$\mu$ line. (We sincerely thank Prof. Banerjee for providing these data; see also Gehrz et al. (2018) for additional line profiles.) The IR spectrum was taken on 2016 Feb. 28 at Mt. Abu with a resolution of $\approx$1000. We compare the STIS He II 1640Å line with that profile in Fig. 8. Notice that the IR line peak ratio is 1.10$\pm$0.05, while at about the same time the UV line shows a ratio about 60% greater. This further supports our contention that such profile asymmetries are not the result of density differences in the two lobes of the ejecta.
4.2 Extinction law and dust properties
The wavelength dependence of the dust extinction is provided by comparing the asymmetries of the UV and optical profiles of the He II lines. Since the 1640Å, 4686Å and 5411Å lines arise from recombination, their ratio has been used like the Balmer lines to estimate interstellar extinction. We can, however, extend this by noting that the two sides of the emission profile arise in different parts of the ejecta and, therefore, sustain differential internal extinction. The overall effect of any intervening interstellar medium toward the source is not important for the relative asymmetries. Guided by the simulations, we can take the flux ratio between two peaks to provide the optical depth $\tau_{d}\approx\pi a^{2}C_{abs}(\lambda)N_{d}$, where $C_{abs}$ is the absorption efficiency (Draine & Lee 1984), $a$ is the grain radius, and N${}_{d}$ is the dust column density. Since the geometry is obtained independently from modeling the overall emission line profiles, the dust volumetric number density can be estimated using the maximum velocity and the time since outburst (e.g. Mason et al. 2018).
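The chain of estimates in this paragraph can be written out explicitly. The helper functions and the numbers below are illustrative placeholders under the stated formula $\tau_{d}\approx\pi a^{2}C_{abs}N_{d}$, not the adopted values of the text; in practice $\tau_{d}$ is read off from the full profile simulations rather than from a simple screen formula.

```python
import math

def grain_column(tau_d, a_cm, c_abs):
    """Dust column density N_d from tau_d ~ pi a^2 C_abs N_d,
    with C_abs the (dimensionless) absorption efficiency."""
    return tau_d / (math.pi * a_cm ** 2 * c_abs)

def grain_number_density(n_column, v_max_cms, t_s, dr_over_R):
    """Volumetric number density from the column and the shell
    thickness Delta r = (dr/R) * v_max * t for ballistic expansion."""
    return n_column / (dr_over_R * v_max_cms * t_s)

# Illustrative inputs (hypothetical, NOT the paper's adopted values):
a = 0.1e-4                      # 0.1 micron grain radius, in cm
N_d = grain_column(tau_d=1.4, a_cm=a, c_abs=1.0)      # column, cm^-2
n_d = grain_number_density(N_d, v_max_cms=1.5e8,      # volumetric, cm^-3
                           t_s=235 * 86400.0, dr_over_R=0.4)
```

Because the geometry ($v_{max}$, $\Delta r/R$, $t$) is fixed independently by the profile modeling, the step from column to volumetric density introduces no new free parameters.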
The infrared has a far lower dust optical depth than the visible or ultraviolet, an advantage well exploited during the Fe-curtain stage of the outburst: that spectral range provides a view through the ejecta at a time when the shorter wavelengths are obscured (e.g., Hayward et al. 1996; Evans & Gehrz 2012). The same profile structures that appear elsewhere appear in the infrared at an earlier epoch than at shorter wavelengths. This advantage is reversed for the post-dust forming stage. For the standard interstellar extinction law, for instance, the absorption ratio is $\sim$12 at He II 4686Å and $>$700 for He II 1640Å. The choice of wavelengths allows comparison between profiles of a single species. Thus, even when the infrared is virtually transparent, the ultraviolet can continue to show the effects of differential obscuration for a much longer time. This is what the V339 Del sequence and the V5668 Sgr single epoch data show. The UV profiles for V339 Del are much more asymmetric than the IR lines at the same date after outburst. Furthermore, the structure evident from the comparison of ionized species, such as He II and N IV, is more asymmetric than in simultaneously observed Balmer and optical [O I] lines.
A quantitative constraint on the extinction law for the dust can therefore be obtained from the profiles of ultraviolet and optical He II recombination lines that are widely spaced in wavelength. For instance, we compared the 5411Å (day 186) and 1640Å lines (day 235) (blending caused difficulties for the He II 4686Å profile so we opted for the weaker line in this case). If the dust extinction had a wavelength dependence like the interstellar law, with $\tau_{d}\approx 1$ derived from the optical line using the blue to red peak ratio (about 1.4), that on the ultraviolet line would be about 2. Instead, the two values are nearly equal, around 1.4, an indication that the dust is grey. This also agrees with the lack of a strong enhanced absorption at around 2200Å. While this latter behavior could result from silicate grains, the same extinction law (grey and no bump) results even if the dust is carbonaceous (Draine & Lee 1984) provided the grain radius is $\gtrsim$0.1$\mu$. A similar conclusion was reached from the UV variations of V705 Cas (Shore et al. 1994). We also note that besides the infrared coronal [Si VI] line, all other lines discussed in the late time IR spectra of V5668 Sgr were almost perfectly symmetric (Gehrz et al. 2018) at only 100 days after the STIS spectrum, lending further support to dust – not density – as the cause of the profile structure at shorter wavelengths.
Scattering in aspherical ejecta has a secondary effect on the visible continuum, depending on $C_{sca}$. Since the peak ratio yields information only about $C_{abs}$, the infrared emissivity rise at the onset of dust formation may be accompanied by a color tendency that makes the continuum appear relatively bluer. Even if the wavelength dependence of the absorption is relatively flat, the scattering may increase toward shorter wavelengths. This depends on the size spectrum of the grains and the relative angle of viewing the ejecta. Another effect of the scattering depends on where the dust forms in the ejecta, as outlined in Shore (2013). During the outburst of novae having the type of light curve called “C” (cusp) by Strope et al. (2010), emission lines may increase in intensity and broaden when the dust forms if the dust has a scattering coefficient comparable to that for absorption. Draine & Lee (1984) show that large grains behave this way.
The dust mass can be estimated using the radius $R(t)$, a choice of the possible grain mass density (we use the Evans et al. (2017) estimate from their study of V339 Del, $\rho_{d}=2.2$ g cm${}^{-3}$), and only geometric properties of the ejecta. At day 250, the radius was 10${}^{15}$ cm, thus giving a number density for the grains of about $10^{-6}$ cm${}^{-3}$, so the derived dust mass is a few times $10^{-9}$M${}_{\odot}$. This is smaller than the dust mass derived in Gehrz et al. (2018), but the grain sizes in that work are also quite different, by about a factor of ten (that study used 2$\mu$ grains for the maximum grain size), and it is based on an assumed distance to derive the infrared luminosity. Accounting for that difference raises our mass by a factor of three. We note, however, that the calculations of dust mass, as opposed to column density, depend on the assumed geometry and the published estimates assume sphericity. Our derived mass is actually nearly the same as the CO mass spectroscopically derived in Banerjee et al. (2016). We remark that our spectroscopic estimate is independent of any distance estimate to the nova or luminosity considerations, while any value based on intensity requires a distance. Tracing the optical depth back to the photometric minimum and assuming a constant dust to gas ratio, $\tau_{d}\sim t^{-2}$ gives an optical depth of around 6 for day 105, consistent with the value $\approx$ 7.5 cited by Gehrz et al. (2018) and with the depth of the minimum obtained from the spectroscopic sequence (Fig. 5). It thus appears that the grains survived long after the end of the photometric event.
Since the emissivity of the dust is expected to have $\beta\approx 2$ for Mie scattering in the infrared, the dust temperature at 250 days should scale as $T_{d}(t)\sim(R(t)/R(t_{0}))^{-1/3}\approx 0.7T_{d}(t_{0})$, so if at IR maximum the temperature was about 1000K, at times later than around day 250 it should be around 700 K for optically thin emission. This is consistent with the reported infrared continuum measurements (Gehrz et al. 2018).
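The two ballistic scalings used above are simple enough to verify directly. The reference epochs below (day $\approx$90 for the infrared maximum, $\tau_{d}\approx 1.4$ near day 235) are approximate readings from the text:

```python
def tau_dust(t, t_ref, tau_ref):
    """Grey dust optical depth under ballistic expansion: tau ~ t^-2
    (constant grain number and properties, Delta r/R fixed)."""
    return tau_ref * (t / t_ref) ** -2

def dust_temperature(t, t_ref, T_ref, beta=2.0):
    """Equilibrium grain temperature for emissivity index beta:
    T_d ~ R^(-2/(4+beta)) = t^(-1/3) for beta = 2 and R ~ t."""
    return T_ref * (t / t_ref) ** (-2.0 / (4.0 + beta))

# Back-extrapolate the late-time optical depth (~1.4 near day 235)
# to the photometric minimum around day 105:
tau_105 = tau_dust(105.0, t_ref=235.0, tau_ref=1.4)   # ~7

# Cooling from ~1000 K near the infrared maximum (~day 90) to day 250:
T_250 = dust_temperature(250.0, t_ref=90.0, T_ref=1000.0)  # ~700 K
```

The back-extrapolated optical depth lands in the $\approx$6-7.5 range quoted above, and the temperature drops by the stated factor of $\approx$0.7, with no free parameters beyond the reference epoch.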
4.3 Continuum variations during the dust event
Our grism coverage for V5668 Sgr is similar to the low resolution 1200-3000Å spectroscopic sequence obtained with IUE (Shore 1994) for V705 Cas. In addition, we have the UVES flux calibrated optical spectral sequence that was unavailable in the V705 Cas coverage. The two sequences are shown in Fig. 9, keyed to the optical light curve (shown at right on the two panels). The top panel shows the UV grism sequence, the bottom displays the contemporaneous sequence of binned optical spectra. At the start of the dust event, the ultraviolet was at the end of the Fe-curtain stage, which complicates the interpretation. The absorption bands were changing along with the overall continuum level as the Fe-curtain became progressively more transparent. The net decrease is, however, approximately uniform in the mean level across the entire spectrum. The uniformity of the extinction is more clearly seen in the lower panel, where we show the UVES sequence. The changes in continuum level are obvious and wavelength independent (i.e., grey). Note, for instance, that the O I lines at 7773Å and 8446Å vary only in absolute, not relative, intensity during this descent in brightness.
5 A tale of two novae: comparison of V5668 Sgr with V339 Del
In this section, we compare the STIS spectrum of V5668 Sgr with V339 Del, another CO nova observed with good cadence and very high resolution in the UV by STIS, as described in Shore et al. (2016). Our original intention was to have a baseline for comparison to see what might be the differences in their basic properties, since the spectra are of comparable quality and precision. We show some representative line profiles in Fig. 10. The N IV] 1486Å line, aside from being a recombination doublet, is also a resonance transition, while He II 1640Å is from recombination. To say they are close seems an understatement. This cannot be mere coincidence; it bespeaks a common underlying mechanism.
Since, for ballistic motion, the form of the profile is most sensitive to the overall symmetry of the ejecta, the comparison indicates either that the structures of the ejecta are nearly the same for two completely unrelated objects or that they share a “third parameter” besides abundance and geometry. That property appears to be that both formed dust. This is supported by the UV light curve of V339 Del, shown in Fig. 11, which shows a far greater decrease in the UV at about day 77 than was seen in the optical photometry. The difference in the photometric behavior of these two novae can easily be accounted for assuming different inclinations and geometries. The covering factor for V5668 Sgr was clearly the larger: the inner ejecta and central star were almost completely obscured at the maximum of the dust optical depth, while for V339 Del they were more visible. The same conclusion is drawn from the [O I] sequence in Fig. 5. Note that the onset of the profile asymmetry was a bit before day 54, before the UV descent. This could mean that the obscuration of the central source by the inner ejecta was substantially lower than for V5668 Sgr, but the lobes of the ejecta were still self-obscured as the dust began forming. Even if the cone parameters are similar they are not the same, and it suffices to incline V339 Del a bit more to the line of sight to considerably diminish the maximum obscuration. The rest of the photometric evolution, as the dust clears, should be virtually the same. The IR continues to radiate as the radiation temperature drops and, taking the simplest hypothesis that the grain properties do not change, the line profiles will also change similarly.
5.1 Infrared development compared
The infrared view of the V339 Del dust forming event was presented by Evans et al. (2017). We will concentrate on their discussion of the line profile evolution, since this was followed with sufficient resolution to compare with optical data of the event. The Paschen series line profiles were initially dominated by the P Cyg absorption component of the optically thick ejecta until around day 50. Thereafter, as Evans et al. show in their Fig. 5, once the line absorption had disappeared from the blue side of the profiles, the lines became shifted toward the blue and asymmetric with a suppressed red wing compared to the earlier stages. They briefly mention the possibility that the line shape may have been altered by dust obscuration but do not further explore its implications. The profiles then recovered symmetry, appeared double peaked, and remained the same thereafter. The authors interpret this as evidence of grain destruction by exposure to the hard radiation coming from the central source.
Based on the IR alone this is a consistent picture but, when combined with other wavelength intervals, the spectroscopic development leads to a different conclusion. In particular, our late optical and UV observations of V339 Del (from day 635 to 866, see Mason et al. 2018) systematically showed asymmetric line profiles that evince the survival of the grains well after the destruction epoch claimed by Evans et al. The spectra from day 650 (Fig. 13) require $\tau_{d}\approx 1$, corresponding to a dust mass estimate of $6\times 10^{-10}$M${}_{\odot}$ for $\approx 0.1\mu$ grains. Evans et al. (2017) derive a dust mass of about $5\times 10^{-9}$M${}_{\odot}$ but use a larger grain radius and underestimate the rate of expansion of the ejecta. The ratio of the two estimates of the optical depth scales approximately as $v_{max}^{3}a^{2}(\Omega/4\pi)$, where $\Omega$ is the total solid angle of the bicone (see Shore et al. (2016) for further discussion of the modeling). Note that the published mass estimate assumes spherical ejecta. Given the difference in assumed parameter values, our estimate should be a factor of $\approx 3$ lower than the published value.
A check on the greyness of the dust is provided by comparing our UV and optical He II lines from around day 640 with fig. 5 of Evans et al. (2017); the infrared helium and Paschen line profiles were symmetric on day 683. This is consistent with a minimal difference in density of the bicones and that the grains were still present in the expanding material, instead of being destroyed earlier by photon and collisional processes. Banerjee et al. (2016) derived a dust mass for V5668 Sgr of about $3\times 10^{-7}$M${}_{\odot}$ based on the infrared energy distribution. V5668 Sgr had a strong molecular precursor stage of CO (no other molecules were reported) with a proposed near equality for ${}^{12}$C relative to ${}^{13}$C based on the emission bandheads in the near IR. Gehrz et al. (2018) found a similar mass for the dust as Banerjee et al. (2016), assuming a mean density of about 2 g cm${}^{-3}$, and no indication of the silicate emission features. This is also consistent with the CO nova identification.
The comparison of the optical and UV He II lines for V339 Del results in dust properties that are similar to those of V5668 Sgr. Using the high resolution profiles presented in Shore et al. (2016) and Mason et al. (2018), the red to blue peak ratios show only small differences between the two wavelength intervals. For the He II lines on day 88, the peak ratio was 2.4 for 1640Å and $\geq$1.9 for 4686Å. The latter was blended, so this is a lower limit based on extrapolating the wing. For the last spectra suitable for this purpose, from 2015 May, the He II 4686Å line shows a peak ratio of 1.1$\pm$0.1, the same as He II 1640Å. These last profiles are compared in Fig. 14. This was around day 650, which corresponds to a late time IR detection reported by Evans et al. (2017). The He II profiles were more asymmetric than those of V5668 Sgr, which may indicate a difference in the internal distribution of the dust (the peak ratios depend more on $\tau_{d}$ and on the ionization structure than on the radial location of the dust within the ejecta, see Fig. 2).
The depth of the UV photometric minimum was also smaller than for V5668 Sgr by about a factor of 20, $3<\tau_{3000,max}<4$ at around day 80, as we show in Figures 11 and 12. The optical depth on day 88 is consistent with $\tau_{d,1640}\approx 2$, so that at the start of the dust forming episode the optical depth in the UV was about 2.8, which is also consistent with a factor of 10 to 20 decrease in the photometry. Ballistically extrapolating forward to around day 650, assuming no intrinsic changes, the predicted optical depth at 1640Å is about 0.3. This is about a factor of 3 smaller than the value implied by the models for that last epoch, $\tau_{d}\approx 0.9$, but in the opposite sense to what grain destruction would predict. Although the explanation for this discrepancy remains open at present, it seems that the overall development of the two novae was quite similar. The grains had a wavelength independent opacity, like those in V5668 Sgr, and the dust optical depth at maximum was also large despite the rather insignificant glitch in the visual light curve.
The implication of this last result is interesting in that the minimum for V339 Del was so much briefer in duration and shallower in magnitude than either V705 Cas or V5668 Sgr. One possibility is that the mass of dust formed was significantly lower in V339 Del than in the “deep dippers”. Interestingly, V339 Del did not show any infrared CO emission during the optically thick Fe curtain stage, although the atomic carbon lines were strong. The inferred N/C value is consistent with expectations of a TNR yield ratio for a CO nova (Mason et al. 2018) based on late stage absorption lines from the ejecta. Additionally, although we do not have absolute abundances for the individual elements, the [Ne IV] 1602Å and [Ne V] 1575Å lines were certainly absent, supporting this subclass identification. Thus, overall, there was nothing particularly deviant about the white dwarf or the abundances in V339 Del compared with other CO novae. The onset of the dust formation, signaled by both the UV photometric descent and infrared rise, was early but not exceptionally so. The shell geometries and dynamics of the two novae were not that different, yet the dust mass yields seem to differ by as much as a factor of 10. The comparison thus points to there being a range of dust yields in classical nova ejecta from the same abundance class of the progenitor white dwarf.
6 Dust signatures in other novae
For classical novae, the possibility of some sort of grain production was first raised in the 1930s, connected with the early observations and interpretation of DQ Her 1934. This nova has become the phenomenological prototype of a specific photometric behavior among novae; Stratton & Manning (1939), Payne-Gaposchkin & Whipple (1939), Payne-Gaposchkin (1957), McLaughlin (1960), and Beer (1974) provided extensive summaries of the various stages of the spectral development. In particular, Payne-Gaposchkin (1957) wrote “The bright lines, especially the forbidden lines had been saddle-shaped for some time. Although they persisted during the drop in brightness they all decreased greatly in intensity, and the redward maxima all but disappeared”. In the Stratton & Manning photographic atlas, the [O I] line shows the onset and increase of the asymmetry prior to Apr. 25, the last date presented. All reports of the DQ Her event highlighted the changes in the optical emission line profiles just before and during the deep minimum that began about 60 days after outburst. With hindsight, in light of these studies, it has been inferred that T Aur 1891 was another such event (e.g., Clerke 1903, Payne-Gaposchkin 1957). The deep dust forming event was observed when, around 1892 Mar. 7, the nova began a steep decline from a visual magnitude of about 5.6 to below 12 mag by Mar. 28. Vogel (1892) and Campbell (1892) report changes in the symmetry of the line profiles, although these are more subjective than for DQ Her and later novae.
In the CCD era, the best studied deep minimum dust former was V705 Cas 1993. This chanced to occur during the lifetime of the IUE satellite, and the sequence of low resolution spectra spanning the onset of the dust event was presented in Shore et al. (1994). From the 1200-3000Å region alone, using only low resolution ($\approx$300) spectrophotometry during the event, the derived dust absorption was grey. The event was also covered extensively in the infrared using spectrophotometry and photometry (Evans et al. 2017). The grism data for V5668 Sgr are comparable, as we noted, but because there is no published high resolution profile study of V705 Cas we cannot further compare the ejecta structure or details of the dust formation, so we pass to the next historical example.
Published line profiles for V723 Cas (Evans et al. 2003) show the usual biconical double peak at day 200, but only infrared lines were shown. Optical spectra presented by Goranskij et al. (2007) from around day 200 are symmetric and similar to the infrared profiles, indicating an optically thin, biconical structure. Based on the spectra, there appears to be no signature of dust in the ejecta. This nova, by all accounts, was exceptional in being an extremely slowly developing event. Nevertheless, it would appear that either not all CO novae form dust, or some form too little for the emission lines to be strongly affected, or the lines form over a more extended zone of the ejecta than that harboring the dust.
The peculiar case of V2362 Cyg 2006 actually fits well into this overall picture of dust formation and evolution. Lynch et al. (2008) presented a comprehensive summary of the photometric and spectroscopic development of the outburst. This nova displayed one of the anomalous types of light curves that shows a secondary maximum, at around the same time as the DQ Her-type dippers show a deep minimum, with the characteristic infrared signature of dust formation. At no time, however, did the emission lines show any asymmetry. Instead, at the maximum of the re-brightening event, they significantly broadened to about the same width as in the optically thick stage but without associated P Cyg-type absorption. Our argument regarding the dust is essentially the same for this nova except that the bicones were oriented such that they were at very large inclination to the line of sight (Shore 2013) and the broadening is the result of scattering with about the same efficiency as absorption.
Some indication of the effect may be present in the spectra shown for V809 Cep by Munari et al. (2014); in their Fig. 11 the day 220 and day 352 profiles of [O III] 5007Å show the asymmetry we have described, and this is a nova for which dust formation was also inferred from the infrared. The nova V1425 Aql is a counter-example to the notion that all CO novae are dust formers. Infrared observations were, however, obtained only 600 days after outburst, more than 200 days after the inferred XR turnoff and when the source was too faint in the UV for further observations (Lyke et al. 2001). Line profiles shown in Evans & Gehrz (2012) are consistent with the later, optically thin infrared lines of different ionization stages displayed by V339 Del (Evans et al. 2017). Had there been an infrared dust component at that time, it would have been too weak to detect.
The same spectroscopic signature was observed in the recent dust forming CO nova V1324 Sco (Finzell et al. 2018). Although the line profiles presented in that paper cover only the period before the main event, that part of the expansion when the ejecta transitioned to optically thin in the Fe-curtain, one Balmer line profile taken 800 days after the deep minimum shows precisely the same signature as we have shown here for late spectra. What had been a symmetric profile when the line absorption vanished was blue-asymmetric. Although there were no spectra taken during the deep minimum with sufficient resolution to be useful, the persistence of the dust signature nearly two years after the event supports our contention that the dust is formed during the minimum and survives thereafter. More to the point, as Sakon et al. (2016) demonstrate for V1280 Sco, the dust is still detected spectrophotometrically even 2000 days after outburst.
We finally come to V1369 Cen. In our earlier analysis (Mason et al. 2018) we concentrated on the spectroscopic indicators of geometry and dynamics. But we noted that a peculiar asymmetry appeared in the profiles even in the late time spectra that was not evident in the plasma diagnostics. The similarity of the late time profiles to those of V5668 Sgr and V339 Del suggests that, possibly, here too some dust was formed in the outburst. From the comparison of the STIS spectra from 2014 Nov. 3 and 2014 Mar. 7, the later one is more symmetric, with a peak ratio of 1.0$\pm$0.02. The N IV] 1486Å line is more indicative: the ratio in the earlier spectrum was 1.20, while in the second it is indistinguishable from unity. We also note that the AAVSO photometry for this nova shows a brief minimum of about $\Delta V\approx 1$ mag (about the same for B) around days 80 to 100, similar to most dust forming novae. We note, however, that this nova showed an anomalously large N/C excess relative to solar and, perhaps, this plays some role in dust formation. While neither the profile modeling nor the photometric variations is a “smoking gun”, both are consistent with such an event and, as such, argue that grain formation in classical novae is a more common feature than photometry alone would indicate.
7 Discussion
The post-visual-maximum increase in infrared continuum emission, rising toward longer wavelengths longward of around 2$\mu$m, is the one unambiguous signature of the appearance of grains in the ejecta (see, e.g., Evans & Gehrz 2012). This broadband emission can include the SiO and SiC emission features, but these may not be present depending on the dust properties. The decrease in optical brightness, the “deep minimum” characteristic of the DQ Her-type events, may not happen if the dust laden parts of the ejecta do not obscure a sufficient solid angle of the pseudo-photosphere. In addition, the line profiles will always have some random asymmetries (Mason et al. 2018) resulting from the explosive, single ejection nature of the outburst, but those will not be of a unique type or change in the systematic way we have discussed.
The infrared has so low a dust opacity relative to the UV that the lines can return to their early symmetric form in a comparatively short time; the continuum emission will be observable for far longer. The flux should, however, decrease in time because of dilution and, depending on the details of the grains (i.e., their composition and size), the temperature should steadily decrease. Since the structures are frozen within the ejecta, any change in IR continuum emission should be a simple power law in time. This suggests that the ejecta from dust formers may be visible in millimeter continuum photometry and imaging for far longer than would be inferred from the disappearance of the IR line profile asymmetries. Infrared emission from optically thin illuminated grains will persist as long as the grains survive and the central WD remains luminous. The turn-off times for XR emission have been derived from the cessation of the SSS stage (Schwarz et al. 2011) or from the onset of expansion-controlled recombination (e.g., Vanlandingham et al. 2001, Shore et al. 2014). Infrared dust emission will, however, still be powered by the unobservable EUV and FUV emission before the WD returns to quiescence. Thus, if we assume a constant grain radius and a wavelength dependent absorption efficiency $Q_{\rm eff}\sim\lambda^{-\beta}$, with $\beta\approx$ 2, then the grain temperature should scale as $T_{IR}\sim t^{-2/(\beta+4)}$. For V339 Del, the dust temperature at day 36 was around 1600 K; around day 650 it was inferred to be about 650 K (Evans et al. 2017), while this scaling predicts $T_{IR}\sim$ 600 K. In other words, the grains likely survived the XR phase and the emission will continue until the central source turns off. This is consistent with the persistence of the line asymmetries in the UV and optical (De Gennaro Aquino et al. 2015, Shore et al. 2016). For V5668 Sgr, the infrared emission remained detectable even after the appearance of infrared coronal lines (Banerjee et al. 2016). 
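This scaling can be checked numerically; a minimal sketch, taking the day-36 temperature of V339 Del quoted above as the reference point and $\beta=2$:

```python
# Grain equilibrium temperature vs. time for optically thin dust heated by a
# constant-luminosity central source, with Q_eff ~ lambda^-beta, which gives
# T_IR ~ t^(-2/(beta+4)).  Sketch only: the day-36 reference temperature
# (~1600 K, V339 Del) and beta = 2 are the values quoted in the text.

def grain_temperature(t_days, t_ref=36.0, T_ref=1600.0, beta=2.0):
    """Scale the grain temperature from a reference epoch as t^(-2/(beta+4))."""
    return T_ref * (t_days / t_ref) ** (-2.0 / (beta + 4.0))

if __name__ == "__main__":
    # The text quotes an inferred ~650 K at day 650 and an expected ~600 K.
    print(f"Predicted grain temperature at day 650: {grain_temperature(650.0):.0f} K")
```

With $\beta=2$ this is a $t^{-1/3}$ cooling law, which reproduces the expected $\sim$600 K at day 650 quoted in the text.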
Note that the photodestruction and collision rates decrease faster than the dust optical depth so the high ionization is not necessarily an indication that the grains are gone.
Among the ONe novae for which we have high resolution observations at sufficiently long times, around day 150 (V959 Mon, Shore et al. 2014; V382 Vel, Della Valle et al. 2002, Shore et al. 2003; LMC 2000, Shore et al. 2003; V1974 Cyg, Shore et al. 1993), none shows the asymmetries found in the CO novae. None shows any anomalous event in its light curve of the sort observed in the three CO novae. While there is little infrared data available for any of the ONe novae, the later time nebular spectra are consistent with only optically thin gas without a grain component. V838 Her was a clear outlier in having displayed a weak infrared excess that has been interpreted as a dust event, but this was by all measures an extreme outburst (e.g., Schwarz et al. 2007). It may be mere coincidence but, given the similarities within the ONe group (e.g., Shore et al. 2013), it seems likely that there is an essential difference between the two abundance classes regarding dust yields. We conjecture that dust formation is not only common in the CO novae, something that has been known for a long time, but that neither the recurrent novae (like T Pyx and U Sco) nor the ONe group produce significant amounts of dust, if any. This implies that the presolar grains attributed to novae are from the CO group, hence their isotopic abundance patterns should be a fossil record of the nucleosynthesis in that subgroup that cannot be otherwise teased out of the spectroscopy (e.g., Amari & Lodders 2007; Iliadis et al. 2018).
To continue this point, the scenario of dust survival also aligns with evidence for a nova contributor to the presolar grain samples. Recall that even “classical” novae presumably recur on long timescales. Take this timescale to be about $10^{4}$ yrs (a popular choice based on the hibernation picture). With a few dust forming novae per year in the Galaxy, each contributing about the same amount, we expect to have accumulated a few solar masses of nova enriched grains in the course of Galactic evolution. Consequently, novae are not major players in the dust budget but they have a unique nucleosynthetic signature (e.g., Casanova et al. 2016).
8 Conclusions
In summary, we find that there is no need to invoke dust destruction at late times in either V5668 Sgr or V339 Del, even under irradiation by the X-ray and EUV from the central star. It appears that there was little change in the dust properties over relatively long times after the photometric minimum. In addition, the evidence from the UV and optical spectrophotometric variations for both novae, like those of V705 Cas, is that the dust was not present before the start of the steep decline, that it grew rapidly, and that the event approximately coincided with the beginning of transparency in the ultraviolet. After the opacity maximum, the dust absorption optical depth merely decreases through expansion-driven dilution. The grains are large, with radii of at least 0.1$\mu$m. For V5668 Sgr, the optical and UV profiles should become increasingly symmetric; at around 800 days after outburst, the dust optical depth will have fallen to about $\tau_{d}\lesssim$0.1. There are no observations yet from such late stages of the expansion. The grains would, however, continue to emit even at this late time if the central source is still powering the emission. Assuming, again for argument’s sake, that the white dwarf bolometric luminosity remains constant for a very extended interval, the temperature should still be about 500 K after almost three years, which ceteris paribus is an upper limit. It would be of considerable interest to attempt an ALMA observation of the dust at late stages at sub-mm and mm wavelengths.
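The expansion-driven dilution can be sketched under the assumption that the dust column through a frozen, homologously expanding shell falls as $t^{-2}$; the day-800 optical depth quoted above sets the normalization, while the $\sim$day-100 formation epoch is an assumption for illustration only:

```python
# Dust absorption optical depth under pure expansion dilution.  For a frozen,
# homologously expanding shell the column density through the ejecta falls as
# t^-2, so tau_d ~ tau_ref * (t_ref/t)^2.  Normalization (tau ~ 0.1 at day
# 800) is from the text; the ~day-100 formation epoch is an assumption.

def tau_dust(t_days, t_ref=800.0, tau_ref=0.1):
    """Scale the dust absorption optical depth as t^-2 from a reference epoch."""
    return tau_ref * (t_ref / t_days) ** 2

if __name__ == "__main__":
    print(f"Implied optical depth near formation (day 100): {tau_dust(100.0):.1f}")
    print(f"Optical depth at day 1600: {tau_dust(1600.0):.3f}")
```

Run backward in time, the same scaling implies an optical depth of several at the assumed formation epoch, consistent with a deep photometric minimum.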
This scenario of dust survival also aligns with evidence from pre-solar grains, for which a nova contribution has been identified based on isotopic ratios (e.g., Amari & Lodders 2007; Iliadis et al. 2018). Recall that even “classical” novae presumably recur on long timescales; take this, again for argument’s sake, to be about $10^{4}$ yrs (a popular choice based on the hibernation picture). With a few dust forming novae per year in the Galaxy, each contributing about the same amount, we expect to have accumulated a few solar masses of nova enriched grains in the course of Galactic evolution. Novae are consequently not major players in the dust budget, but they have a unique nucleosynthetic signature (e.g., Casanova et al. 2016).
We remark in closing that our survey of the literature highlights a serious lacuna: the rarity of published high resolution optical and infrared line profiles for dust forming novae. Most of the discussions have been based on low resolution flux measurements and spectral energy distributions. There is also very little spectropolarimetry at sufficient resolution to study the three dimensional structure of the ejecta across line profiles. Nevertheless, as we have discussed, there are a number of comprehensive spectroscopic studies in the last few decades that provide evidence in support of our hypothesis. We contend that the similarity of the spectroscopically and photometrically derived dust masses and properties should encourage observers to put more effort into obtaining the necessary data. The reward will be a significant extension in our understanding of the dust properties and evolution in novae.
Acknowledgements.
Based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 312430 (OPTICON). Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program # 13828. Support for program #13828 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555 and ESA HST DDT proposal 14449. NPMK was supported under a grant from the UK Space Agency.
We acknowledge the useful discussions within the Swift Nova-CV community; the program was supported by the Swift PI, Neil Gehrels, with Swift resources. We thank the Swift planners and operators for the extra work required with the grism offsets. We sincerely thank Prof. D.P.K. Banerjee for providing the late time infrared spectrum of V5668 Sgr. We thank Kim Page, Jordi José, Greg Schwarz, Bob Gehrz, Nye Evans, and Bob Williams for discussions and exchanges, and SNS thanks the Astronomical Institute of the Charles University for a visiting appointment during which this work was begun.
References
Amari, S. & Lodders, K. 2007, HiA, 14, 349
Banerjee, D. P. K., Joshi, V., Srivastava, M., & Ashok, N. M. 2016, ATel 8753
Banerjee, D. P. K., Srivastava, M. K., Ashok, N. M., & Venkataraman, V. 2016, MNRAS, 455, L109
Beer, A. 1974, Vistas Astr., 16, 179
Bevan, A. & Barlow, M.J. 2016, MNRAS, 456, 1269
Campbell, W.W. 1892, PASP, 4, 231
Casanova, J., José, J., Garcia-Berro, E., & Shore, S. N. 2016, A&A, 595, A28
Clerke, A. 1903, Problems in Astrophysics (London: Adam & Charles Black)
De Gennaro Aquino, I., Schröder, K.-P., Mittag, M., Wolter, U. et al. 2015, A&A, 581, 134
De Gennaro Aquino, I., Shore, S. N., Schwarz, G. J., Mason, E. et al. 2014, A&A, 562, A28
Della Valle, M., Pasquini, L., Daou, D., & Williams, R. E. 2002, A&A, 390, 155
Derdzinski, A. M., Metzger, B. D., & Lazzati, D. 2017, MNRAS, 469, 1314
Draine, B. T. & Lee, H. M. 1984, ApJ, 285, 89
Evans, A. & Gehrz, R.D. 2012, BASI, 40, 213
Evans, A., Banerjee, D. P. K., Gehrz, R. D., Joshi, V. et al. 2017, MNRAS, 466, 4221
Finzell, T., Chomiuk, L., Metzger, B. D., Walter, F. M. et al. 2018, ApJ, 852, 108
Gehrz, R. D., Evans, A., Helton, L. A., Shenoy, D. P., Banerjee, D. P. K. et al. 2015, ApJ, 812, 132
Gehrz, R. D., Evans, N., Woodward, C. E., Helton, L. A., Banerjee, D. P. K. et al. 2018, ApJ, in press (arXiv:1804.00575)
Goranskij, V. P., Katysheva, N. A., Kusakin, A. V., Metlova, N. V. et al. 2007, AstBu, 62, 125
Hayward, T.L., Saizar, P., Gehrz, R. D., Benjamin, R. A. et al. 1996, ApJ, 469, 854
Iliadis, C., Downen, L. N., José, J., Nittler, L. R., & Starrfield, S. 2018, ApJ, 855, 76
Kuin, N. P. M., 2014, Astrophysics Source Code Library, record ascl:1410.004
Kuin N. P. M., Landsman, W., Breeveld, A. A., et al., 2015, MNRAS 449, 2514
Lucy, L. B., Danziger, I. J., Gouiffes, C., & Bouchet, P. 1989, LNP, 350, 164
Lyke, J. E., Gehrz, R. D., Woodward, C. E., Barlow, M. J. et al. 2001, AJ, 122, 3305
Lynch, D.K., Woodward, C.E., Gehrz, R.D., Helton, L.A. et al. 2008, AJ, 136, 1815
McLaughlin, D. B. 1937, POMic, 6, 107
Mason, E., Shore, S.N., De Gennaro Aquino, I., Izzo, L., Page, K., and Schwarz, G. J. 2018, ApJ, 853, 27
Munari, U., Maitan, A., Moretti, S. & Tomaselli, S. 2015, NewA, 40, 28
Munari, U., Ochner, P., Dallaporta, S., Valisa, P. et al. 2014, MNRAS, 440, 3420
Page, M. J., Kuin, N. P. M., Breeveld, A. A., et al., 2013, MNRAS, 436, 1684
Payne-Gaposchkin, C. 1957, The Galactic Novae (Amsterdam: North-Holland)
Pozzo, M., Meikle, W. P. S., Fassia, A., et al. 2004, MNRAS, 352, 457
Payne-Gaposchkin, C. & Whipple, F. 1939, Harvard Circ., 433
Rawlings, J.M.C. 1988, MNRAS, 232, 507
Sakon, I., Sako, S., Onaka, T., Nozawa, T. et al. 2016, ApJ, 817, 145
Schwarz, G. J., Ness, J-U, Osborne, J. P., Page, K. L., et al. 2011, ApJS, 197, 31
Schwarz, G. J., Shore, S. N., Starrfield, S.,& Vanlandingham, K. M. 2007, ApJ, 657, 453
Shore, S. N., Schwarz, G., Bond, H. E., Downes, R. A. et al. 2003, AJ, 125, 1507
Shore, S. N., Starrfield, S., Gonzalez-Riestra, R., Hauschildt, P. H., & Sonneborn, G. 1994, Natur, 369, 539
Shore, S. N. 2012, BASI, 40, 185
Shore, S. N. 2013, A&A, 559, L7
Shore, S. N., Mason, E., Schwarz, G. J., Teyssier, F. M., et al. 2016, A&A, 590, A123
Shore, S. N., De Gennaro Aquino, I., Schwarz, G. J., Augusteijn, T., et al. 2013, A&A, 553, A123
Shore, S. N. & Gehrz, R. D. 2004, A&A, 417, 695
Shore, S. N., Sonneborn, G., Starrfield, S., Riestra-Gonzalez, R., & Ake, T. B. 1993, AJ, 106, 2408
Starrfield, S., Iliadis, C., Timmes, F. X., Hix, W. R. et al. 2012, BASI, 40, 419
Stratton, F. J. M. & Manning, W. H. 1939, Atlas of spectra of Nova Herculis 1934 (Cambridge: Solar Physics Observatory)
Strope, R. J., Schaefer, B. E., & Henden, Arne A. 2010, AJ, 140, 34
Vanlandingham, K. M., Schwarz, G. J., Shore, S. N., & Starrfield, S. 2001, AJ, 121, 1126
Vogel, H. 1893, Abhand. Königl. Akad. Wissen. (Berlin), 1, 115, A108
Appendix A Journal of Observations: V5668 Sgr
Appendix B Description of the peculiarities in the $Swift$ UVOT Grism spectra
These notes are specific to the spectra of novae V5668 Sgr and V339 Del
presented here. A general discussion of Swift UVOT grism data can be found in
Kuin et al. (2015). The spectra displayed in Fig. 9 have not been cleaned of
the contaminating zeroth orders and other defects described below.
The early grism observations of V5668 Sgr were made using a Swift ToO which restricted
the observation to be done without an offset on the detector. Observations made this way
place the target in the center of the field of view, so the spectrum falls across the
center of the detector. We used the option where the filter wheel is rotated across the
aperture, “clocked” in Swift parlance, which masks part of the zeroth orders in the
field. Following the insight of Keith Mason, a change had been made to the way the grism
was mounted in the XMM-OM: the grism was mounted such that the first order in this
clocked configuration falls on the part of the detector where the zeroth orders are
masked away. This is the configuration we used for both the UV and visible grisms
(named UG and VG in the tables).
The nova was initially too bright for the UVOT, which uses a photon counting
detector consisting of a phosphor screen, a 3-stage MCP, a fibre-optic, and a CCD on
which centroiding of the photon splash gives the photon position. At very high flux the
MCP recharge time limits the measurement; similarly, the CCD frame time is 11 ms, so
at high fluxes the probability of multiple photons being incident in one CCD frame is high.
This coincidence loss can be corrected statistically by measuring enough frames.
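The statistical correction can be sketched with the textbook Poisson relation for a frame-based photon counter (at most one count is registered per frame per resolution element); this is a first-order illustration only, not the full empirical UVOT calibration of Kuin et al. (2015):

```python
import math

# First-order coincidence-loss correction for a frame-based photon counter:
# with frame time ft, at most one event per frame is registered, so the
# observed rate saturates as C_obs = (1 - exp(-C_true*ft))/ft.  Inverting
# gives the statistical correction below.  A Poisson sketch only; the actual
# UVOT calibration includes further empirical terms.

FRAME_TIME = 0.011  # s, the 11 ms CCD frame time quoted in the text

def corrected_rate(observed_rate, ft=FRAME_TIME):
    """Recover the incident count rate from the observed (saturated) rate."""
    if observed_rate * ft >= 1.0:
        raise ValueError("observed rate at or above the 1/ft saturation limit")
    return -math.log(1.0 - observed_rate * ft) / ft

if __name__ == "__main__":
    for obs in (10.0, 50.0, 85.0):  # counts/s
        print(f"observed {obs:5.1f} c/s -> incident {corrected_rate(obs):6.1f} c/s")
```

The correction is negligible at low rates but grows rapidly as the observed rate approaches the 1/ft limit, which is why the bright line cores cannot be reliably recovered.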
These characteristics have some effect on the spectra. The MCP threshold is not reached
for the grism first order. The early time spectra, from day 92 to 97.3, show in the
continuum evidence of coincidence loss that is too high to correct, and the loss peaks
where the grism throughput is highest, around 2700 $\AA$ in the UV grism. The continuum
from day 97.7 on is consistent with the photometry and shows no dip around the peak
sensitivity. However, the line emission is much stronger than the continuum. The centers
of the lines formed in the 2000-4000 $\AA$ region are all too bright for a good
coincidence loss correction, except during the faintest part of the minimum and at very
late times. After the dip in brightness, from day 168 to 230, the coincidence loss once
again affects the line cores.
In the first spectra, which were taken at the center of the detector, the second order
also falls across the first order, starting at 2750 $\AA$, and the second order response
peaks in the UV. However, for the V5668 Sgr spectra, the bright lines of N III] and C III]
at 1750 and 1909 $\AA$ are also very bright in the second order spectrum. These show up
as extra peaks in the first order around 2950 $\AA$ and 3220 $\AA$, respectively.
If the spectra are at a certain offset on the detector then the second order falls next
to the first order, tracing it nearly parallel but merging with it at longer wavelengths.
The N III] and C III] lines then do not fall on top of the first order spectrum, but they
are still so strong that they create a coincidence loss pattern around them, stealing
photons away from the first order. Due to the offset of the second order, the effect of
this photon loss in the first order is shifted to slightly longer wavelengths. This can
be seen in the later spectra during the brighter period as an extra pair of dips in the
first order around 3010 and 3250 $\AA$. The effect of the other second order lines is
smaller and difficult to see. The most prominent is the 2325 $\AA$ line, which in second
order falls near 4300 $\AA$. Unfortunately, mistakes get made with the offset requests,
and we therefore see a varying amount of these competing effects in the spectra.
The faintest spectra become noisy because of the background noise, which in the clocked
observing modes is mostly restricted to below 2300 $\AA$. At a flux of $1.0\times 10^{-14}$ erg
cm${}^{-2}$ s${}^{-1}$ the S/N level is close to 1.
In the V339 Del spectra, some exposures suffered from the first orders of other stars
in the field falling near the nova spectrum. When an unwanted first order falls over
the nova spectrum, just a few extra photons can affect the spectrum where the
grism sensitivity is low, since there the conversion from count rate to flux is the
largest. This leads to a flux spectrum showing an upturn in the UV or in the yellow/red
part of the spectrum.
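The amplification of a small contaminating count rate by low sensitivity can be sketched numerically; the conversion factors and spurious rate below are hypothetical, chosen only to illustrate the effect:

```python
# Why a few contaminating counts produce an apparent flux upturn where the
# grism sensitivity is low: flux = (count rate)/(conversion factor), so a
# fixed spurious rate is amplified wherever the conversion factor is small.
# All numbers below are hypothetical, for illustration only.

conversion = {          # (counts/s) per (1e-14 erg/cm2/s/A), hypothetical
    1700: 0.2,          # low sensitivity at the short-wavelength end
    2600: 2.0,          # near the UV grism sensitivity peak
    4000: 0.3,          # low sensitivity at the red end
}

SPURIOUS_RATE = 0.05    # counts/s from an overlapping first order (assumed)

def spurious_flux(wavelength):
    """Apparent extra flux (1e-14 erg/cm2/s/A) produced by the spurious counts."""
    return SPURIOUS_RATE / conversion[wavelength]

if __name__ == "__main__":
    for wl in sorted(conversion):
        print(f"{wl} A: spurious flux ~ {spurious_flux(wl):.2f}e-14 erg/cm2/s/A")
```

The same spurious rate thus produces an order of magnitude more apparent flux at the low-sensitivity ends of the bandpass than at the sensitivity peak, mimicking an upturn.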
Abstract
Power corrections to the $Q^{2}$ behaviour of the low-order moments
of both the longitudinal and transverse structure functions of proton and
deuteron have been investigated using available phenomenological fits of
existing data in the $Q^{2}$ range between $1$ and $20~{}(GeV/c)^{2}$. The
Nachtmann definition of the moments has been adopted for disentangling
properly target-mass and dynamical higher-twist effects in the data. The
leading twist has been treated at next-to-leading order in the strong
coupling constant and the effects of higher orders of the perturbative
series have been estimated using a renormalon-inspired model. The
contributions of (target-dependent) multiparton correlations to both $1/Q^{2}$ and $1/Q^{4}$ power terms have been determined in the transverse
channel, while the longitudinal one appears to be consistent with a pure
infrared renormalon picture in the whole $Q^{2}$-range between $1$ and $20~{}(GeV/c)^{2}$. Finally, the extracted twist-2 contribution in the deuteron
turns out to be compatible with the hypothesis of an enhanced $d$-quark
parton distribution at large $x$.
Sezione ROMA III
Via della Vasca Navale 84
I-00146 Roma, Italy
INFN-RM3 98/2
September 1998
Power corrections in the longitudinal and transverse structure
functions of proton and deuteronaaaTo appear in Nuclear
Physics B.
G. Ricco${}^{(a,b)}$, S. Simula${}^{(c)}$ and M. Battaglieri${}^{(b)}$
${}^{(a)}$Dipartimento di Fisica, Universitá di Genova
Via Dodecanneso 33, I-16146, Genova, Italy
${}^{(b)}$Istituto Nazionale
di Fisica Nucleare, Sezione di Genova
Via Dodecanneso 33, I-16146,
Genova, Italy
${}^{(c)}$Istituto Nazionale di Fisica Nucleare, Sezione
Roma III
Via della Vasca Navale 84, I-00146 Roma, Italy
1 Introduction
The experimental investigation of deep-inelastic lepton-hadron
scattering has provided a wealth of information on the occurrence of Bjorken
scaling and its violations, giving a decisive support to the rise of the
parton model and to the idea of asymptotic freedom. Quantum Chromodynamics
($QCD$) has been thereby proposed as the theory describing the logarithmic
violations to scaling in the asymptotic region and its predictions at
leading ($LO$) and next-to-leading ($NLO$) orders have been nicely confirmed
by the experiments. However, in the pre-asymptotic region the full
dependence of the hadron structure functions on the squared four-momentum
transfer, $Q^{2}\equiv -q\cdot q$, is affected also by power corrections,
which can originate from non-perturbative physics and are well beyond the
predictive power of perturbative $QCD$. An important tool for the
theoretical investigation of the $Q^{2}$ behaviour of the structure functions
is the Operator Product Expansion ($OPE$): the logarithmic scale dependence
is provided by the so-called leading twist operators, which in the parton
language are one-body operators whose matrix elements yield the contribution
of the individual partons to the structure functions. On the contrary, power
corrections are associated to higher-twist operators which measure the
relevance of correlations among partons (see, e.g., [1]).
In case of unpolarised inelastic electron scattering the nucleon
response is described by two independent quantities: the transverse
$F_{2}^{N}(x,Q^{2})$ and the longitudinal $F_{L}^{N}(x,Q^{2})$ structure functions,
where $x\equiv Q^{2}/2M\nu$ is the Bjorken variable, with $M$ and $\nu$
being the nucleon mass and the energy transfer in the target rest frame.
Systematic measurement of the transverse structure function of the nucleon,
$F_{2}^{N}(x,Q^{2})$, (more precisely, of the proton and the deuteron
[2, 3, 4, 5, 6]) have been carried out in the kinematical
range $10^{-4} \lesssim x \lesssim 1$ and for $Q^{2}$ values up to several hundreds
of $(GeV/c)^{2}$; thus, various phenomenological fits of the data sets are
presently available, like those of Refs. [2, 7, 8]. As for
the longitudinal to transverse ($L/T$) cross section ratio, $R_{L/T}^{N}(x,Q^{2})\equiv\sigma_{L}^{N}(x,Q^{2})/\sigma_{T}^{N}(x,Q^{2})$, the experimentally
investigated kinematical range is $0.0045 \lesssim x \lesssim 0.7$ and $1 \lesssim Q^{2}~(GeV/c)^{2} \lesssim 70$ [4, 5, 6, 9]; however, the sparse and
still fluctuating data are not yet sufficient to put significant
constraints on the phenomenological fits (see Refs.
[10, 11]), particularly in the low-$x$ region ($x \lesssim 0.3$).
Various analyses of power-suppressed terms in the world data on both
$F_{2}^{N}(x,Q^{2})$ and $F_{L}^{N}(x,Q^{2})$ structure functions have been already
carried out; they are based either on the choice of a phenomenological
ansatz [12, 13] or on renormalon-inspired models
[14, 15], adopting for the leading twist the $LO$ or $NLO$
approximations. Very recently, also the effects of the
next-to-next-to-leading order ($NNLO$) have been investigated on $F_{2}^{N}(x,Q^{2})$ and $R_{L/T}^{N}(x,Q^{2})$ [16] as well as on the
parity-violating structure function $xF_{3}^{N}(x,Q^{2})$ [17],
measured in the neutrino and antineutrino scattering off iron by the $CCFR$
collaboration [18]. Present analyses at $NNLO$ seem to indicate that
power corrections in $F_{2}^{N}(x,Q^{2})$, $R_{L/T}^{N}(x,Q^{2})$ and $xF_{3}^{N}(x,Q^{2})$ can be quite small. However, in such analyses the highest value of $x$
is typically limited at $\simeq 0.75$ and also the adopted $Q^{2}$ range is
quite restricted ($Q^{2} \gtrsim 5\div 6~(GeV/c)^{2}$ at $x\simeq 0.75$) in
order to avoid the nucleon-resonance region; therefore, in such kinematical
conditions the power corrections represent always a small fraction of the
leading-twist term (corrected for target-mass effects). Furthermore, when
power corrections are investigated in a narrow $Q^{2}$-window, variations
of the logarithmic dependence of the leading twist can easily simulate a
small power-like term, and a significant sensitivity to the higher-twist
anomalous dimensions cannot be easily achieved.
We point out that the smallness of the sum of the various dynamical
higher-twists does not imply the smallness of its individual terms and
therefore the question whether present data are compatible with an
alternate-sign twist expansion is still open. To answer this question, we
extend in this paper the range of values of $Q^{2}$ down to $Q^{2}\simeq 1~{}(GeV/c)^{2}$ in order to enhance the sensitivity to power-like terms. For such
low values of $Q^{2}$ we expect that the effects of target-dependent
higher twists (i.e., multiparton correlations) should show up, thanks also
to the contributions of the nucleon-resonance region and the nucleon elastic
peak. The inclusion of the nucleon-resonance region is clearly worthwhile
also because of parton-hadron duality arguments (see, e.g., Ref.
[19]).
In this paper we analyse the power corrections to the $Q^{2}$
behaviour of the low-order moments of both the longitudinal and transverse
structure functions of proton and deuteron using available phenomenological
fits of existing data in the $Q^{2}$ range between $1$ and $20~{}(GeV/c)^{2}$,
including also the $SLAC$ proton data of Ref. [3] covering the
region beyond $x\simeq 0.7$ up to $x\simeq 0.98$. The Nachtmann definition
of the moments is adopted for disentangling properly target-mass and
dynamical higher-twist effects in the data. The leading twist is treated at
$NLO$ in the strong coupling constant and, as far as the transverse channel
is concerned, it is extracted simultaneously with the higher-twist terms
from the data. The effects of higher orders of the perturbative series are
estimated adopting the infrared ($IR$) renormalon model of Ref.
[14], containing both $1/Q^{2}$ and $1/Q^{4}$ power-like terms. It
turns out that the longitudinal and transverse data cannot be explained
simultaneously by a renormalon contribution only; however, we will show that
in the whole $Q^{2}$ range between $1$ and $20~{}(GeV/c)^{2}$ the longitudinal
channel appears to be consistent with a pure $IR$-renormalon picture,
adopting strengths which are not inconsistent with those expected in the
naive non-abelianization ($NNA$) approximation [15] and agree well
with the results of a recent analysis [17] of the $CCFR$ data
[18] on $xF_{3}^{N}(x,Q^{2})$ . Then, after including the $1/Q^{2}$ and
$1/Q^{4}$ $IR$-renormalon terms as fixed by the longitudinal channel, the
twist-4 and twist-6 contributions arising from multiparton correlations are
phenomenologically determined in the transverse channel and found to have
non-negligible strengths with opposite signs. It is also shown that our
determination of multiparton correlation effects in $F_{2}^{N}(x,Q^{2})$ is only
marginally affected by the specific value adopted for the strong coupling
constant at the $Z$-boson mass, at least in the range between $\simeq 0.113$
and $\simeq 0.118$. Finally, an interesting outcome of our analysis is that
the extracted twist-2 contribution in the deuteron turns out to be
compatible with the enhancement of the $d$-quark parton distribution
recently proposed in Ref. [16].
The paper is organised as follows. In the next Section the main features of the $OPE$, the $NLO$ approximation of the leading twist and the usefulness of the Nachtmann definition of the moments are briefly recalled. In Section 3 our procedure for the evaluation of the experimental longitudinal and transverse moments is described. Section 4 is devoted to a phenomenological analysis of the data at $NLO$, while the inclusion of the $IR$-renormalon uncertainties is presented in Section 5. The results of our final analysis of the transverse data are collected in Section 6, and our main conclusions are summarised in Section 7.
2 The Operator Product Expansion and the Leading Twist
The complete $Q^{2}$ evolution of the structure functions can be obtained using the $OPE$ [20] of the time-ordered product of the two currents entering the virtual-photon nucleon forward Compton scattering amplitude, viz.
$$T[J(z)\,J(0)]=\sum_{n,\alpha}f_{n}^{\alpha}(-z^{2})\,z^{\mu_{1}}z^{\mu_{2}}\cdots z^{\mu_{n}}\,O_{\mu_{1}\mu_{2}\ldots\mu_{n}}^{\alpha}$$
(1)
where $O_{\mu_{1}\mu_{2}...\mu_{n}}^{\alpha}$ are symmetric traceless operators of dimension $d_{n}^{\alpha}$ and twist $\tau_{n}^{\alpha}\equiv d_{n}^{\alpha}-n$, with $\alpha$ labelling different operators of spin $n$. In Eq. (1) $f_{n}^{\alpha}(-z^{2})$ are coefficient functions, which are calculable in $pQCD$ at short distances. Since the imaginary part of the forward Compton scattering amplitude is simply the hadronic tensor containing the structure functions measured in deep inelastic scattering ($DIS$) experiments, Eq. (1) leads to the well-known twist expansion for the Cornwall-Norton ($CN$) moments of the transverse structure function, viz.
$$\tilde{M}_{n}^{T}(Q^{2})\equiv\int_{0}^{1}dx\,x^{n-2}\,F_{2}^{N}(x,Q^{2})=\sum_{\tau=2,\,even}^{\infty}E_{n\tau}(\mu,Q^{2})\,O_{n\tau}(\mu)\left({\mu^{2}\over Q^{2}}\right)^{{\tau-2\over 2}}$$
(2)
where $\mu$ is the renormalization scale, $O_{n\tau}(\mu)$ are the (reduced) matrix elements of operators with definite spin $n$ and twist $\tau$, containing the information about the non-perturbative structure of the target, and $E_{n\tau}(\mu,Q^{2})$ are dimensionless coefficient functions, which can be expressed perturbatively as a power series of the running coupling constant $\alpha_{s}(Q^{2})$.
As is well known, in the case of the leading twist ($\tau=2$) one ends up with a non-singlet quark operator $\hat{O}_{\tau=2}^{NS}$ with corresponding matrix elements $O_{n2}^{NS}\equiv a_{n}^{NS}$ and coefficients $E_{n2}^{NS}$, and with singlet quark $\hat{O}_{\tau=2}^{S}$ and gluon $\hat{O}_{\tau=2}^{G}$ operators with corresponding matrix elements $a_{n}^{\pm}$ and coefficients $E_{n2}^{\pm}$ (after their mixing under renormalization group equations); explicitly one has
$$\tilde{M}_{n}^{T}(Q^{2})=\mu_{n}^{T}(Q^{2})+\mbox{higher twists}$$
(3)
$$\mu_{n}^{T}(Q^{2})\equiv\mu_{n}^{NS}(Q^{2})+\mu_{n}^{S}(Q^{2})$$
(4)
with at $NLO$
$$\mu_{n}^{NS}(Q^{2})=a_{n}^{NS}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{NS}}{1+\alpha_{s}(Q^{2})R_{n}^{NS}/4\pi\over 1+\alpha_{s}(\mu^{2})R_{n}^{NS}/4\pi}$$
(5)
$$\mu_{n}^{S}(Q^{2})=a_{n}^{-}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{-}}{1+\alpha_{s}(Q^{2})R_{n}^{-}/4\pi\over 1+\alpha_{s}(\mu^{2})R_{n}^{-}/4\pi}+a_{n}^{+}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{+}}{1+\alpha_{s}(Q^{2})R_{n}^{+}/4\pi\over 1+\alpha_{s}(\mu^{2})R_{n}^{+}/4\pi}$$
(6)
where all the anomalous dimensions $\gamma_{n}^{i}$ ($i=NS,\pm$) and coefficients $R_{n}^{i}$ can be found in, e.g., Ref. [21] for $n\leq 10$ and for a number of active flavours, $N_{f}$, equal to $N_{f}=3,4$ and $5$. At $NLO$ the running coupling constant $\alpha_{s}(Q^{2})$ is given by
$$\alpha_{s}(Q^{2})={4\pi\over\beta_{0}\ln(Q^{2}/\Lambda_{\overline{MS}}^{2})}\left\{1-{\beta_{1}\over\beta_{0}^{2}}{\ln\ln(Q^{2}/\Lambda_{\overline{MS}}^{2})\over\ln(Q^{2}/\Lambda_{\overline{MS}}^{2})}\right\}$$
(7)
where $\Lambda_{\overline{MS}}$ is the $QCD$ scale in the $\overline{MS}$ scheme, $\beta_{0}=11-2N_{f}/3$ and $\beta_{1}=102-38N_{f}/3$. The coefficients $a_{n}^{\pm}$ can be rewritten as follows
$$a_{n}^{+}=(1-b_{n})\,\mu_{n}^{S}(\mu^{2})+c_{n}\,\mu_{n}^{G}(\mu^{2}),\qquad a_{n}^{-}=b_{n}\,\mu_{n}^{S}(\mu^{2})-c_{n}\,\mu_{n}^{G}(\mu^{2})$$
(8)
where $\mu_{n}^{G}(Q^{2})$ is the $CN$ moment of the gluon distribution function $G(x,Q^{2})$ of order $n$, viz. $\mu_{n}^{G}(Q^{2})=\int_{0}^{1}dx\,x^{n-2}\,G(x,Q^{2})$. It turns out that, since quarks and gluons are decoupled at large $x$, one has $b_{n}\sim 1$ and $c_{n}\sim 0$ already for $n\gtrsim 4$, so that only $a_{n}^{-}$ contributes to Eq. (6). Moreover, again for $n\gtrsim 4$ one has $\gamma_{n}^{-}\simeq\gamma_{n}^{NS}$ and $R_{n}^{-}\simeq R_{n}^{NS}$ [21], which implies that for $n\gtrsim 4$ the evolution of the leading-twist singlet moments $\mu_{n}^{S}(Q^{2})$ almost coincides with that of the non-singlet ones $\mu_{n}^{NS}(Q^{2})$. Therefore, when $n\geq 4$, we will adopt for the leading-twist term the following $NLO$ expression
$$\mu_{n}^{T}(Q^{2})=\mu_{n}^{NS}(Q^{2})+\mu_{n}^{S}(Q^{2})\to_{n\geq 4}a_{n}^{(2)}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{NS}}{1+\alpha_{s}(Q^{2})R_{n}^{NS}/4\pi\over 1+\alpha_{s}(\mu^{2})R_{n}^{NS}/4\pi}$$
(9)
with $a_{n}^{(2)}\equiv\mu_{n}^{NS}(\mu^{2})+\mu_{n}^{S}(\mu^{2})$. To sum up, the $Q^{2}$ evolution of the leading-twist transverse moments is completely determined by $pQCD$ for each spin $n$ and all the unknown twist-2 parameters reduce to the three matrix elements $a_{2}^{NS}$ and $a_{2}^{\pm}$ in case of the second moment, and only to one matrix element, $a_{n}^{(2)}$, for $n\geq 4$.
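Equations (7) and (9) translate directly into code. The sketch below is a minimal Python implementation, assuming $\Lambda_{\overline{MS}}=240$ MeV at $N_{f}=4$ (the value adopted in Section 3); the numerical values used for $a_{n}^{(2)}$, $\gamma_{n}^{NS}$ and $R_{n}^{NS}$ in the usage note are purely illustrative, the actual anomalous dimensions and coefficients being tabulated in Ref. [21].

```python
import math

def alpha_s_nlo(Q2, Lam=0.240, Nf=4):
    """NLO running coupling of Eq. (7); Q2 in (GeV/c)^2, Lam in GeV."""
    beta0 = 11.0 - 2.0 * Nf / 3.0
    beta1 = 102.0 - 38.0 * Nf / 3.0
    t = math.log(Q2 / Lam ** 2)
    return (4.0 * math.pi / (beta0 * t)) * (1.0 - (beta1 / beta0 ** 2) * math.log(t) / t)

def mu_n_NS(Q2, a_n2, gamma_n, R_n, mu2=1.0):
    """Leading-twist non-singlet moment of Eq. (9), evolved from its
    value a_n2 at the reference scale mu2 = 1 (GeV/c)^2."""
    aQ, amu = alpha_s_nlo(Q2), alpha_s_nlo(mu2)
    return (a_n2 * (aQ / amu) ** gamma_n
            * (1.0 + aQ * R_n / (4.0 * math.pi))
            / (1.0 + amu * R_n / (4.0 * math.pi)))
```

With $\gamma_{n}^{NS}>0$ the moment decreases logarithmically with $Q^{2}$, as expected: for illustrative inputs, `mu_n_NS(20.0, 0.1, 1.0, 5.0)` is roughly half of the reference value $a_{n}^{(2)}=0.1$ at $Q^{2}=\mu^{2}$.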
The contribution of power-like corrections to the $Q^{2}$ dependence of the moments (2) is due to higher twists corresponding to $\tau\geq 4$. Already at $\tau=4$ the set of basic operators is quite large and their number grows rapidly as $\tau$ increases; moreover, each set of operators mix under renormalization group equations. The short-distance coefficients $E_{n\tau}$ can in principle be determined perturbatively, but the calculations are cumbersome and the large number of the involved matrix elements $O_{n\tau}(\mu)$ makes the resulting expression for the moments of limited use. Therefore, following Ref. [13], in this work we will make use of effective anomalous dimensions for the higher-twist terms (see next Section).
The $OPE$ can be applied also to the longitudinal structure function $F_{L}^{N}(x,Q^{2})$, where power corrections are expected to be more important than in the transverse case, because $F_{L}^{N}(x,Q^{2})$ is vanishing at $LO$. However, $NLO$ effects generate a non-vanishing contribution; more precisely, since the longitudinal structure function is defined as
$$F_{L}^{N}(x,Q^{2})\equiv F_{2}^{N}(x,Q^{2})\left(1+{4M^{2}x^{2}\over Q^{2}}\right){R_{L/T}^{N}(x,Q^{2})\over 1+R_{L/T}^{N}(x,Q^{2})}$$
(10)
where $R_{L/T}^{N}(x,Q^{2})\equiv\sigma_{L}^{N}(x,Q^{2})/\sigma_{T}^{N}(x,Q^{2})$ is the $L/T$ cross section ratio, one has
$$\tilde{M}_{n}^{L}(Q^{2})\equiv\int_{0}^{1}dx\,x^{n-2}\,F_{L}^{N}(x,Q^{2})=\mu_{n}^{L}(Q^{2})+\mbox{higher twists}$$
(11)
with the leading-twist contribution $\mu_{n}^{L}(Q^{2})$ given at $NLO$ by
$$\mu_{n}^{L}(Q^{2})={\alpha_{s}(Q^{2})\over 4\pi}{1\over n+1}\left\{{8\over 3}\mu_{n}^{q}(Q^{2})+{2d\over n+2}\mu_{n}^{G}(Q^{2})\right\}+\mu_{n}^{c}(Q^{2})$$
(12)
where $\mu_{n}^{q}(Q^{2})$ is the light-quark flavour contribution to the $n$-th moment of the transverse structure function, $\mu_{n}^{c}(Q^{2})$ is the charm quark contribution (starting when the invariant produced mass $W$ is greater than twice the mass of the charm quark), and $d=12/9$ and $20/9$ for $N_{f}=3$ and $4$, respectively. In what follows Eq. (12) will be evaluated using sets of parton distributions available in the literature (see Section 5).
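As a minimal sketch of how Eq. (12) is evaluated, the function below takes the light-quark and gluon moments $\mu_{n}^{q}$ and $\mu_{n}^{G}$ (to be supplied from some set of parton distributions) and returns the leading-twist longitudinal moment; the charm term $\mu_{n}^{c}$ is set to zero here for simplicity, and the coupling uses the same illustrative $\Lambda_{\overline{MS}}$ as above.

```python
import math

def alpha_s_nlo(Q2, Lam=0.240, Nf=4):
    """NLO running coupling of Eq. (7)."""
    beta0, beta1 = 11.0 - 2.0 * Nf / 3.0, 102.0 - 38.0 * Nf / 3.0
    t = math.log(Q2 / Lam ** 2)
    return (4.0 * math.pi / (beta0 * t)) * (1.0 - (beta1 / beta0 ** 2) * math.log(t) / t)

def mu_n_L(n, Q2, mu_n_q, mu_n_G, Nf=4, mu_n_c=0.0):
    """Leading-twist longitudinal moment of Eq. (12).  mu_n_q and mu_n_G are
    the n-th light-quark and gluon moments from a parton set; the charm
    contribution mu_n_c is neglected in this sketch."""
    d = {3: 12.0 / 9.0, 4: 20.0 / 9.0}[Nf]
    return (alpha_s_nlo(Q2) / (4.0 * math.pi) / (n + 1)
            * (8.0 / 3.0 * mu_n_q + 2.0 * d / (n + 2) * mu_n_G)
            + mu_n_c)
```

The overall factor $\alpha_{s}(Q^{2})/4\pi$ makes the leading-twist longitudinal moment numerically small, which is why power corrections are relatively more important in this channel.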
For massless targets only operators with spin $n$ contribute to the $n$-th $CN$ moments of both the longitudinal and transverse structure functions. When the target mass is non-vanishing, operators with different spins can contribute and consequently the higher-twist terms in the expansion of the $CN$ moments $\tilde{M}_{n}^{T}(Q^{2})$ and $\tilde{M}_{n}^{L}(Q^{2})$ contain now also target-mass terms, which are of pure kinematical origin. It has been shown by Nachtmann [22] that even when $M\neq 0$ the moments can be redefined in such a way that only spin-$n$ operators contribute to the $n$-th moment, namely
$$M_{n}^{T}(Q^{2})\equiv\int_{0}^{1}dx\,{\xi^{n+1}\over x^{3}}\,F_{2}^{N}(x,Q^{2})\,{3+3(n+1)r+n(n+2)r^{2}\over(n+2)(n+3)}$$
(13)
$$M_{n}^{L}(Q^{2})\equiv\int_{0}^{1}dx\,{\xi^{n+1}\over x^{3}}\left[F_{L}^{N}(x,Q^{2})+{4M^{2}x^{2}\over Q^{2}}F_{2}^{N}(x,Q^{2})\,{(n+1)\xi/x-2(n+2)\over(n+2)(n+3)}\right]$$
(14)
where $r\equiv\sqrt{1+4M^{2}x^{2}/Q^{2}}$ and $\xi\equiv 2x/(1+r)$ is the Nachtmann variable. Using the experimental $F_{2}^{N}(x,Q^{2})$ and $F_{L}^{N}(x,Q^{2})$ in the r.h.s. of Eqs. (13-14), the target-mass corrections are cancelled out and therefore the twist expansions of the experimental Nachtmann moments $M_{n}^{T}(Q^{2})$ and $M_{n}^{L}(Q^{2})$ contain only dynamical twists, viz.
$$M_{n}^{T}(Q^{2})=\mu_{n}^{T}(Q^{2})+\mbox{dynamical higher twists}$$
(15)
$$M_{n}^{L}(Q^{2})=\mu_{n}^{L}(Q^{2})+\mbox{dynamical higher twists}$$
(16)
where the leading-twist terms $\mu_{n}^{T}(Q^{2})$ and $\mu_{n}^{L}(Q^{2})$ are given at $NLO$ by Eqs. (4-6) and (12), respectively.
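Once an interpolation of the structure function is available, the Nachtmann moments are a one-dimensional numerical integral. The sketch below evaluates Eq. (13) by midpoint integration for a caller-supplied $F_{2}^{N}(x,Q^{2})$; as a built-in consistency check, in the massless-target limit $M\to 0$ one has $r=1$, $\xi=x$ and a unit weight factor, so the Nachtmann moment reduces to the $CN$ moment of Eq. (2). The toy structure function used in the check is purely illustrative.

```python
import math

def nachtmann_T(n, Q2, F2, M=0.938, npts=2000):
    """Nachtmann transverse moment, Eq. (13), by midpoint integration of a
    caller-supplied structure function F2(x, Q2).  M is the target mass."""
    total = 0.0
    dx = 1.0 / npts
    for i in range(npts):
        x = (i + 0.5) * dx
        r = math.sqrt(1.0 + 4.0 * M ** 2 * x ** 2 / Q2)
        xi = 2.0 * x / (1.0 + r)
        w = (3.0 + 3.0 * (n + 1) * r + n * (n + 2) * r ** 2) / ((n + 2) * (n + 3))
        total += (xi ** (n + 1) / x ** 3) * F2(x, Q2) * w * dx
    return total
```

For the toy choice $F_{2}=x$ and $M=0$, the $n=2$ moment is $\int_{0}^{1}x\,dx=1/2$, which the integration reproduces.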
3 Data on Transverse and Longitudinal Moments
For the evaluation of the Nachtmann transverse $M_{n}^{T}(Q^{2})$ (Eq. (13)) as well as longitudinal $M_{n}^{L}(Q^{2})$ (Eq. (14)) moments, systematic measurements of the experimental structure functions $F_{2}^{N}(x,Q^{2})$ and $F_{L}^{N}(x,Q^{2})$ are required over the whole $x$-range at fixed values of $Q^{2}$. However, such measurements are not always available and therefore we have adopted interpolation formulae ("pseudo-data"), which fit the considerable amount of existing data on the proton and deuteron structure functions.
In case of the transverse channel we have used: i) Bodek's fit [2, 7] to the inclusive $(e,e^{\prime})$ $SLAC$ data in the resonance region, for values of the invariant produced mass $W$ smaller than $2.5~GeV$, and ii) Tulay's fit [8] to the world data in the $DIS$ region for $W>2.5~GeV$. In order to have continuity at $W=2.5~GeV$, Bodek's fit has been divided by a factor $1.03$ (at any $Q^{2}$), which is within the experimental errors. Moreover, we have also taken into account the wealth of $SLAC$ proton data [3] beyond $x\simeq 0.7$ up to $x\simeq 0.98$ through a simple interpolation fit given in Ref. [3]. All these fits cover the range of $x$ which is crucial for the evaluation of the moments considered in this work; therefore, the uncertainties on the moments are related only to the accuracy of the interpolation formulae, which are simply given by the $\pm 4\%$ total (systematic + statistical) error reported for the $SLAC$ data [2, 3] and by the upper and lower bounds of Tulay's fit quoted explicitly in Ref. [8]. Finally, since the whole set of $DIS$ (unpolarised) $SLAC$ and $BCDMS$ data (on which also Tulay's fit is based) is known to favour the value $\alpha_{s}(M_{Z}^{2})\simeq 0.113$ (see Ref. [12]), in what follows the value $\Lambda_{\overline{MS}}=290~(240)~MeV$ at $N_{f}=3~(4)$ will be adopted for the calculation of the running coupling constant at $NLO$ via Eq. (7). (The impact of a different value of $\alpha_{s}(M_{Z}^{2})$, much closer to the updated $PDG$ value $\alpha_{s}(M_{Z}^{2})=0.119\pm 0.002$ [23], will be briefly addressed at the end of Section 5.)
As for the longitudinal channel, the structure function $F_{L}^{N}(x,Q^{2})$ is reconstructed via Eq. (10), using for the $L/T$ ratio the phenomenological fit (and the interpolation uncertainties) provided in Ref. [11]. In the nucleon resonance region the ratio $R_{L/T}^{N}(x,Q^{2})$ has been taken from Ref. [24] for the lowest values of $Q^{2}$ (namely $Q^{2}=1\div 2~{}(GeV/c)^{2}$), while it has been assumed to be equal to zero for $Q^{2}>2~{}(GeV/c)^{2}$, which is consistent with the limited amount of available data [25].
Since the $OPE$ is totally inclusive, the contribution of the nucleon elastic channels has to be included in the calculation of the moments, viz.
$$F_{2}^{N}(x,Q^{2})=F_{2}^{N(inel)}(x,Q^{2})+F_{2}^{N(el)}(x,Q^{2}),\qquad F_{L}^{N}(x,Q^{2})=F_{L}^{N(inel)}(x,Q^{2})+F_{L}^{N(el)}(x,Q^{2})$$
(17)
and consequently
$$M_{n}^{T}(Q^{2})=\left[M_{n}^{T}(Q^{2})\right]_{inel}+\left[M_{n}^{T}(Q^{2})\right]_{el},\qquad M_{n}^{L}(Q^{2})=\left[M_{n}^{L}(Q^{2})\right]_{inel}+\left[M_{n}^{L}(Q^{2})\right]_{el}$$
(18)
In case of the proton one has
$$F_{2}^{p(el)}(x,Q^{2})=\delta(x-1)\,{[G_{E}^{p}(Q^{2})]^{2}+\eta\,[G_{M}^{p}(Q^{2})]^{2}\over 1+\eta}$$
(19)
$$F_{L}^{p(el)}(x,Q^{2})=\delta(x-1)\,{[G_{E}^{p}(Q^{2})]^{2}\over\eta}$$
(20)
where $G_{E}^{p}(Q^{2})$ and $G_{M}^{p}(Q^{2})$ are the electric and magnetic (Sachs) proton form factors, respectively, and $\eta\equiv Q^{2}/4M^{2}$. Thus, the contribution of the proton elastic channel is explicitly given by
$$\left[M_{n}^{T}(Q^{2})\right]_{el}\to_{proton}\left({2\over 1+r^{*}}\right)^{n+1}{3+3(n+1)r^{*}+n(n+2)r^{*2}\over(n+2)(n+3)}\,{[G_{E}^{p}(Q^{2})]^{2}+\eta\,[G_{M}^{p}(Q^{2})]^{2}\over 1+\eta}$$
(21)
$$\left[M_{n}^{L}(Q^{2})\right]_{el}\to_{proton}{1\over 1+\eta}\left({2\over 1+r^{*}}\right)^{n+1}\left\{[G_{E}^{p}(Q^{2})]^{2}-[G_{M}^{p}(Q^{2})]^{2}+{n+1\over n+3}\left[1+{2\over(n+2)(1+r^{*})}\right]{[G_{E}^{p}(Q^{2})]^{2}+\eta\,[G_{M}^{p}(Q^{2})]^{2}\over 1+\eta}\right\}$$
(22)
with $r^{*}=\sqrt{1+4M^{2}/Q^{2}}=\sqrt{1+1/\eta}$.
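Equation (21) is easy to evaluate once a parameterization of the proton form factors is chosen. The sketch below assumes, purely for illustration, the standard dipole parameterization $G_{E}^{p}=G_{D}=(1+Q^{2}/0.71)^{-2}$ and $G_{M}^{p}=\mu_{p}G_{D}$ (this choice is an assumption of the sketch, not the parameterization used in the paper):

```python
import math

MU_P = 2.793  # proton magnetic moment in nuclear magnetons

def G_dipole(Q2):
    """Illustrative dipole form factor, Q2 in (GeV/c)^2 (assumed, not from the paper)."""
    return (1.0 + Q2 / 0.71) ** -2

def Mn_T_elastic_proton(n, Q2, M=0.938):
    """Proton elastic contribution to the Nachtmann transverse moment, Eq. (21)."""
    eta = Q2 / (4.0 * M ** 2)
    rs = math.sqrt(1.0 + 1.0 / eta)
    GE2 = G_dipole(Q2) ** 2
    GM2 = (MU_P * G_dipole(Q2)) ** 2
    kin = ((2.0 / (1.0 + rs)) ** (n + 1)
           * (3.0 + 3.0 * (n + 1) * rs + n * (n + 2) * rs ** 2)
           / ((n + 2) * (n + 3)))
    return kin * (GE2 + eta * GM2) / (1.0 + eta)
```

The rapid fall-off of the form factors makes the elastic contribution drop quickly with $Q^{2}$, consistent with the behaviour seen in Figs. 2 and 3.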
In case of the deuteron the folding of the nucleon elastic channel with the momentum distribution of the nucleon in the deuteron gives rise to the so-called quasi-elastic peak. In the $Q^{2}$-range of interest the existing data are only fragmentary and therefore we have computed $F_{2}^{D(el)}(x,Q^{2})$ and $F_{L}^{D(el)}(x,Q^{2})$ using the procedure of Ref. [26], which includes the folding of the nucleon elastic peak with the nucleon momentum distribution in the deuteron as well as final-state interaction effects; such a procedure can be applied both at low ($Q^{2}\lesssim 1~(GeV/c)^{2}$) and high values of $Q^{2}$ (see Ref. [27]). The results of the calculations, performed using the deuteron wave function corresponding to the Paris nucleon-nucleon potential [28], are compared in Fig. 1 with the $SLAC$ data of Ref. [29] at $Q^{2}=1.75$ and $3.25~(GeV/c)^{2}$. It can be seen that the agreement with the data is quite good for the transverse part and still acceptable for the less accurate longitudinal response.
In Figs. 2 and 3 the $Q^{2}$ behaviours of the inelastic, elastic and total transverse moments are reported for the proton and the deuteron, respectively, in the whole $Q^{2}$-range of interest in this work, i.e. $1\lesssim Q^{2}\lesssim 20~(GeV/c)^{2}$. Note that in case of the deuteron our data refer to the moments of the structure function per nucleon. It can clearly be seen that the inelastic and elastic contributions exhibit quite different $Q^{2}$-behaviours and, in particular, the inelastic part turns out to be dominant for $Q^{2}\gtrsim 1~(GeV/c)^{2}$ in the second moment and only for $Q^{2}\gtrsim n~(GeV/c)^{2}$ in higher-order moments. The $Q^{2}$-behaviour of the longitudinal moments is illustrated in Figs. 4 and 5 in case of the proton and the deuteron, respectively. It can be seen that the elastic contribution drops quite fast and is relevant only at the lowest values of $Q^{2}$ ($Q^{2}\simeq$ few $(GeV/c)^{2}$).
We point out that our present determination of the longitudinal and transverse experimental moments is clearly limited by the use of phenomenological fits of existing data (i.e. "pseudo-data"), which are required in order to interpolate the structure functions over the whole $x$-range at fixed values of $Q^{2}$, as well as by the existing large uncertainties in the determination of the $L/T$ ratio $R_{L/T}^{N}(x,Q^{2})$. Thus, both transverse data of better quality at $x\gtrsim 0.6$ and $Q^{2}\lesssim 10~(GeV/c)^{2}$ and more precise, systematic determinations of the $L/T$ cross section ratio are still required to improve our experimental knowledge of the $Q^{2}$-behaviour of the low-order moments of the nucleon structure functions.
4 Analysis of the Transverse Data at $NLO$
In this Section we present our analysis of the transverse pseudo-data of Figs. 2 and 3, adopting for the leading twist the $NLO$ expressions (5-6) and for the power corrections a purely phenomenological ansatz. Indeed, as already pointed out, several higher-twist operators exist and mix under the renormalization-group equations; such a mixing is rather involved (see, e.g., Ref. [30] in case of twist-4 operators) and in particular the number of mixing operators increases with the spin $n$. Since complete calculations of the higher-twist anomalous dimensions are not yet available, we use the phenomenological ansatz already adopted in Ref. [13], which in case of the second moment $M_{n=2}^{T}(Q^{2})$ leads to the expansion
$$M_{2}^{T}(Q^{2})=\mu_{2}^{NS}(Q^{2})+\mu_{2}^{S}(Q^{2})+a_{2}^{(4)}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{2}^{(4)}}{\mu^{2}\over Q^{2}}$$
(23)
where the logarithmic $pQCD$ evolution of the twist-4 contribution is accounted for by the term $[\alpha_{s}(Q^{2})/\alpha_{s}(\mu^{2})]^{\gamma_{2}^{(4)}}$ with an effective anomalous dimension $\gamma_{2}^{(4)}$, and the parameter $a_{2}^{(4)}$ represents the overall strength of the twist-4 term at $Q^{2}=\mu^{2}$. For the $QCD$ renormalization scale $\mu$ we adopt hereafter the value $\mu=1~GeV/c$. We have explicitly checked that a different choice for $\mu$ does not modify any conclusion of this work, because (e.g. in Eq. (23)) the values of the parameter $a_{2}^{(4)}$ corresponding to two different choices $\mu$ and $\mu^{\prime}$ turn out to be related by the logarithmic $pQCD$ evolution, i.e. $a_{2}^{(4)}(\mu^{\prime})=a_{2}^{(4)}(\mu)\,[\alpha_{s}(\mu^{\prime 2})/\alpha_{s}(\mu^{2})]^{\gamma_{2}^{(4)}}\,\mu^{2}/\mu^{\prime 2}$.
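That this relation between $a_{2}^{(4)}(\mu)$ and $a_{2}^{(4)}(\mu^{\prime})$ leaves the twist-4 term of Eq. (23) unchanged can be verified numerically, since the transformation cancels algebraically against the change of scale in the evolution factor. In the sketch below the values of $a_{2}^{(4)}$, $\gamma_{2}^{(4)}$ and $\mu^{\prime}$ are purely illustrative:

```python
import math

def alpha_s(Q2, Lam=0.290, Nf=3):
    """NLO running coupling of Eq. (7), with the N_f = 3 scale of Section 3."""
    beta0, beta1 = 11.0 - 2.0 * Nf / 3.0, 102.0 - 38.0 * Nf / 3.0
    t = math.log(Q2 / Lam ** 2)
    return (4.0 * math.pi / (beta0 * t)) * (1.0 - (beta1 / beta0 ** 2) * math.log(t) / t)

def twist4_term(Q2, a4, gamma4, mu2):
    """Twist-4 term of Eq. (23) with strength a4 defined at scale mu2."""
    return a4 * (alpha_s(Q2) / alpha_s(mu2)) ** gamma4 * mu2 / Q2

mu2, mup2, gamma4 = 1.0, 2.0, 1.0
a4_mu = 0.012  # illustrative strength at mu^2 = 1 (GeV/c)^2
# strength at the alternative scale mu'^2, via the quoted relation
a4_mup = a4_mu * (alpha_s(mup2) / alpha_s(mu2)) ** gamma4 * mu2 / mup2
```

By construction `twist4_term(Q2, a4_mu, gamma4, mu2)` and `twist4_term(Q2, a4_mup, gamma4, mup2)` coincide for every $Q^{2}$, which is the scale-independence statement in the text.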
The unknown parameters appearing in Eq. (23) (i.e., the
three twist-2 parameters $a_{2}^{NS}\equiv\mu_{2}^{NS}(Q^{2}=\mu^{2})$, $a_{2}^{S}\equiv\mu_{2}^{S}(Q^{2}=\mu^{2})$ and $a_{2}^{G}\equiv\mu_{2}^{G}(Q^{2}=\mu^{2})$ and the
two twist-4 parameters $a_{2}^{(4)}$ and $\gamma_{2}^{(4)}$) have been
determined by fitting our data for the proton and the deuteron (see Fig.
2(a) and 3(a), respectively), adopting the least-$\chi^{2}$ procedure in the
$Q^{2}$-range between $1$ and $20~{}(GeV/c)^{2}$. It turned out that the singlet
and gluon twist-2 parameters can be directly determined from the data only
in case of the deuteron, but not in case of the proton. Thus, in fitting the
proton data we have kept fixed the parameters $a_{2}^{S}$ and $a_{2}^{G}$ at the
values obtained for the deuteron. Our results are shown in Figs. 6(a) and
7(a) in the whole $Q^{2}$-range of the analysis, while the values obtained for
the various parameters are reported in Table 1, together with the
uncertainties of the fitting procedure corresponding to one-unit increment
of the $\chi^{2}/N$ variable (where $N$ is the number of degrees of
freedom). It can clearly be seen that:
•
the twist-4 contribution to the second transverse moment $M_{2}^{T}(Q^{2})$ is
quite small in the proton and is almost vanishing in the deuteron in the
whole $Q^{2}$-range of our analysis; the latter result suggests that the
twist-4 effect in the neutron comes with a sign opposite to that in the
proton at variance with the expectations from the bag model [31];
•
the twist-4 contribution in the proton at $Q^{2}=\mu^{2}=1~{}(GeV/c)^{2}$ (i.e., $a_{2}^{(4)}=0.012\pm 0.010$) turns out to be
significantly smaller than the result $a_{2}^{(4)}=0.030\pm 0.003$ quoted
in Ref. [13], where however the twist-2 term was not simultaneously
fitted to the data, but instead it was calculated using the parton densities
of Ref. [32] evolved at $NLO$. Therefore, we have repeated our
analysis by fixing the twist-2 term at the $GRV$ prediction [32]
obtaining $a_{2}^{(4)}=0.02\pm 0.01$, which is still lower but not
inconsistent within the errors with the result of Ref. [13]. We point
out that a small variation of the twist-2 term can significantly affect the
strength of the small twist-4 term; that is why our uncertainty from a
simultaneous $a_{2}^{NS}$ and $a_{2}^{(4)}$ fit appears to be considerably larger than
the one found in Ref. [13];
•
as a cross-check, we have also fitted separately the non-singlet
parameter $a_{2}^{NS}$ to the "$(p-n)/2$" data, defined as the difference
between the proton and deuteron data, obtaining $a_{2}^{NS}=0.029\pm 0.009$, $a_{2}^{(4)}=0.012\pm 0.010$ and $\gamma_{2}^{(4)}=1\pm 1$.
Combining these results with those found in the deuteron, we expect to get
in case of the proton $a_{2}^{NS}=0.096\pm 0.017$, $a_{2}^{(4)}=0.012\pm 0.010$ and $\gamma_{2}^{(4)}=1\pm 1$, which indeed are in nice agreement
with the proton fit results of Table 1.
As described in Section 2, in case of the transverse moments $M_{n}^{T}(Q^{2})$ with $n\geq 4$ the evolution of the twist-2 contribution can be simplified and assumed to be a pure non-singlet one; therefore, we have considered the following twist expansion
$$M_{n\geq 4}^{T}(Q^{2})=\mu_{n}^{T}(Q^{2})+a_{n}^{(4)}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{(4)}}{\mu^{2}\over Q^{2}}+a_{n}^{(6)}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{(6)}}{\mu^{4}\over Q^{4}}$$
(24)
where the leading twist term $\mu_{n}^{T}(Q^{2})$ is given at $NLO$ by Eq.
(9) and $\mu=1~{}GeV/c$. All the five unknown parameters
(i.e., $a_{n}^{(2)}$, $a_{n}^{(4)}$, $\gamma_{n}^{(4)}$, $a_{n}^{(6)}$ and
$\gamma_{n}^{(6)}$) have been determined from the data through the
least-$\chi^{2}$ procedure for each value of $n$. The results are shown in
Figs. 6 and 7, while the values of the parameters are reported in Tables 2
and 3 in case of the proton and deuteron, respectively. Our main results can
be summarised as follows:
•
our twist-2 term, extracted from the proton data together with the
twist-4 and twist-6 contributions, differs only slightly from the
predictions obtained using the set of parton distributions of Ref.
[32] evolved at $NLO$ (see dashed lines in Fig. 6). We have checked
that, by repeating our analysis with the twist-2 term fixed at the $GRV$
prediction, the values of all the higher-twist parameters appearing in Eq.
(24) change for $n>2$ only within the errors reported in Tables
2 and 3;
•
the twist-4 and twist-6 contributions turn out to have opposite signs and, moreover, at $Q^{2}\sim 1~(GeV/c)^{2}$ they are approximately of the same order of magnitude. The negative sign of the twist-6 term is clearly due to the fact that a fit with a twist-4 term alone overestimates the low-$Q^{2}$ data ($Q^{2}\simeq$ few $(GeV/c)^{2}$). We stress that the opposite signs found for the twist-4 and twist-6 terms make the total higher-twist contribution smaller than its individual terms; in particular, at large $Q^{2}$ the sum of the twist-4 and twist-6 contributions turns out to be a small fraction of the twist-2 term ($\lesssim 10\%$ for $Q^{2}\gtrsim n~(GeV/c)^{2}$).
•
the values of the effective anomalous dimensions $\gamma_{n}^{(4)}$ and $\gamma_{n}^{(6)}$ for $n=4,6,8$ turn out to be around $4.0$ and $2.5$, respectively, i.e. significantly larger than the values of the corresponding twist-2 anomalous dimensions ($\gamma_{n}^{NS}\simeq 0.8\div 1.2$ for $n=4,6,8$ [21]);
•
the uncertainties on the different twist contributions due to the parameter fitting procedure are always within $\pm 15\%$ (see Fig. 7 in case of the deuteron);
•
the twist expansions (23) and (24) appear to work quite well for values of $Q^{2}$ down to $\simeq 1~{}(GeV/c)^{2}$.
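The least-$\chi^{2}$ procedure behind these fits can be sketched as follows. For fixed effective anomalous dimensions the model of Eq. (24) is linear in the strengths $(a_{n}^{(2)},a_{n}^{(4)},a_{n}^{(6)})$, so a linear least-squares solve recovers them; in the full analysis the $\gamma$'s are also fitted (a nonlinear problem) and the $NLO$ Wilson-coefficient factor of Eq. (9) is included, both of which are omitted here. All numerical values below are purely illustrative, and the "data" are noiseless synthetic moments.

```python
import math
import numpy as np

def alpha_s_nlo(Q2, Lam=0.240, Nf=4):
    """NLO running coupling of Eq. (7)."""
    beta0, beta1 = 11.0 - 2.0 * Nf / 3.0, 102.0 - 38.0 * Nf / 3.0
    t = math.log(Q2 / Lam ** 2)
    return (4.0 * math.pi / (beta0 * t)) * (1.0 - (beta1 / beta0 ** 2) * math.log(t) / t)

def basis(Q2, gNS=1.0, g4=4.0, g6=2.5, mu2=1.0):
    """Twist-2, twist-4 and twist-6 terms of Eq. (24) with unit strengths
    and fixed (illustrative) anomalous dimensions."""
    r = alpha_s_nlo(Q2) / alpha_s_nlo(mu2)
    return [r ** gNS, r ** g4 * mu2 / Q2, r ** g6 * (mu2 / Q2) ** 2]

Q2_grid = np.linspace(1.0, 20.0, 40)
A = np.array([basis(q) for q in Q2_grid])
a_true = np.array([0.04, 0.02, -0.02])  # illustrative a_n^(2), a_n^(4), a_n^(6)
moments = A @ a_true                    # synthetic "data" for the moments
a_fit, *_ = np.linalg.lstsq(A, moments, rcond=None)
```

With real pseudo-data one would weight each point by its experimental error and minimize $\chi^{2}$; the opposite signs of the twist-4 and twist-6 strengths chosen above mimic the pattern found in Tables 2 and 3.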
An interesting feature of our analysis of the transverse moments is that the leading-twist contribution is extracted from the data and not fixed by calculations based on a particular set of parton distributions. The comparison of our extracted twist-2 term with the predictions based on the $GRV$ parton distributions [32] is shown in Fig. 8 for $Q^{2}\gtrsim 5~(GeV/c)^{2}$. It can clearly be seen that our results and the $GRV$ predictions agree quite well in case of the proton, whereas they differ significantly in case of the deuteron for $n>2$. The inclusion of a new empirical determination of the nuclear effects in the deuteron, obtained in Ref. [33] from the nuclear dependence of the $SLAC$ data, only slightly increases the disagreement for $n>2$ (see dashed lines in Fig. 8). Since moments with $n>2$ are mostly sensitive to the high-$x$ behaviour of the structure function, the question arises whether our extracted twist-2 moments can be explained by an enhancement of the $d$-quark parton distribution, like the one advocated in Ref. [16], which explicitly reads $\tilde{d}(x)=d(x)+0.1x(1+x)u(x)$. The modified $GRV$ predictions, including also the empirical nuclear corrections for the deuteron, are shown by the solid lines in Fig. 8 and agree quite well with our results both for the proton and the deuteron. Therefore, our extracted twist-2 moments are clearly compatible with the hypothesis of an enhancement of the $d$-quark distribution at large $x$.
So far the power corrections appearing in Eqs.
(23-24) have been derived assuming for the leading twist
the $NLO$ in the strong coupling constant. Since our main aim is to try to
estimate the target-dependent power corrections generated by multiparton
correlations, it is necessary to estimate the possible effects of higher
orders of the perturbative series, which defines the twist-2 coefficient
functions $E_{n2}(\mu,Q^{2})$ appearing in Eq. (2). However, it is
well known that such a series is at best asymptotic and affected by the
so-called $IR$-renormalon ambiguities [34]. More precisely, the
only way to interpret consistently the large-order behaviour of the
perturbation theory leads unavoidably to ambiguities which have the general
form of power-suppressed terms. This happens because certain classes of
high-order radiative corrections to the twist-2 (the so-called fermion
bubble chains) are sensitive to large distances (i.e., to the
non-perturbative domain). Nevertheless it should be stressed that the
$IR$-renormalon contribution to the twist-2 has to cancel against the
ultraviolet quadratic divergences of twist-4 operators (see, e.g., Refs.
[34, 35, 15]). This means that each twist in the expansion
(2) is not defined unambiguously, while the entire sum (i.e., the
complete calculation) is free from ambiguities. Therefore, while the data
cannot be affected by $IR$-renormalon uncertainties, our fitting procedure,
based on the separation among various twists and on the theoretical
perturbative treatment of the leading twist only, can suffer from
$IR$-renormalon ambiguities. The latter are target-independent quantities,
being of pure perturbative nature, while genuine higher-twist effects are
related to multiparton correlations in the target (i.e., target-dependent
quantities).
The $IR$-renormalon picture has been applied to the phenomenology of
deep inelastic lepton-hadron scattering as a guide to estimate the $x$
dependence of power corrections [14, 15]. Such an estimate has
been found to be a good guess in case of the proton and the deuteron
structure functions and this fact may be understandable in terms of the
notion of a universal $IR$-finite effective strong coupling constant
[36] or in terms of the hypothesis of the dominance of the
quadratically divergent parts of the matrix elements of twist-4 operators
[37].
Before closing this Section, we point out that the $IR$-renormalon
contribution behaves as a power-like term, but it is characterised by
twist-2 anomalous dimensions. Thus, our observation that for $n=4,6,8$
the effective anomalous dimensions $\gamma_{n}^{(4)}$ and $\gamma_{n}^{(6)}$
extracted from our $NLO$ fit to the transverse data are significantly
different from the corresponding twist-2 anomalous dimensions, might be an
indication of the presence of multiparton correlation effects (at least) in
the transverse channel. In order to try to disentangle the latter from large
order perturbative effects, we will consider explicitly in the next Section
the power-like terms associated to the $IR$ renormalons as the general
uncertainty in the perturbative prediction of the twist-2 [38].
5 $IR$ Renormalons and the Analysis of the Longitudinal Channel
Within the naive non-abelianization ($NNA$) approximation the contribution of the renormalon chains (i.e. the sum of vacuum polarisation insertions on the gluon line at one-loop level) to the non-singlet parts of the nucleon structure functions $F_{1}^{N}(x,Q^{2})$ and $F_{2}^{N}(x,Q^{2})$ is given by [14]
$$\displaystyle F_{1}^{IR}(x,Q^{2})$$
$$\displaystyle=$$
$$\displaystyle\int_{x}^{1}dz~{}F_{1}^{LT}({x\over z},Q^{2})~{}\left[A_{2}^{IR}{%
D_{2}(z)\over Q^{2}}+A_{4}^{IR}{D_{4}(z)\over Q^{4}}\right]$$
(25)
$$\displaystyle F_{2}^{IR}(x,Q^{2})$$
$$\displaystyle=$$
$$\displaystyle\int_{x}^{1}dz~{}F_{2}^{LT}({x\over z},Q^{2})~{}\left[A_{2}^{IR}{%
C_{2}(z)\over Q^{2}}+A_{4}^{IR}{C_{4}(z)\over Q^{4}}\right]$$
(26)
where the constants $A_{2}^{IR}$ and $A_{4}^{IR}$ are related to the log-moments of an $IR$-finite effective strong coupling constant [36], $F_{1}^{LT}$ and $F_{2}^{LT}$ are the leading-twist structure functions and the coefficient functions $C_{2(4)}$ and $D_{2(4)}$ are given explicitly in Ref. [14]. Thus, the $IR$-renormalon contribution to the transverse and longitudinal moments is explicitly given by
$$\displaystyle\mu_{n}^{T(IR)}(Q^{2})$$
$$\displaystyle=$$
$$\displaystyle\mu_{n}^{NS}(Q^{2})\left\{{A_{2}^{IR}\over Q^{2}}\tilde{C}_{2}(n)%
+{A_{4}^{IR}\over Q^{4}}\tilde{C}_{4}(n)\right\}$$
(27)
$$\displaystyle\mu_{n}^{L(IR)}(Q^{2})$$
$$\displaystyle=$$
$$\displaystyle\mu_{n}^{NS}(Q^{2})\left\{{A_{2}^{IR}\over Q^{2}}\left[{8\alpha_{%
s}(Q^{2})\over 6\pi}{\tilde{D}_{2}(n)\over n+1}-{4n\over n+2}\right]+\right.$$
(28)
$$\displaystyle\left.{A_{4}^{IR}\over Q^{4}}\left[{8\alpha_{s}(Q^{2})\over 6\pi}%
{\tilde{D}_{4}(n)\over n+1}-4n{n+1\over n+3}\right]\right\}$$
where in the last equation we have used the non-singlet part of the $NLO$ relation (12). The coefficients $\tilde{C}_{2(4)}(n)$ and $\tilde{D}_{2(4)}(n)$ read as follows [14]
$$\displaystyle\tilde{C}_{2}(n)$$
$$\displaystyle=$$
$$\displaystyle-n-8+{4\over n}+{2\over n+1}+{12\over n+2}+4S_{n}$$
(29)
$$\displaystyle\tilde{C}_{4}(n)$$
$$\displaystyle=$$
$$\displaystyle{1\over 2}n^{2}-{3\over 2}n+16-{4\over n}-{4\over n+1}-{36\over n%
+3}-4S_{n}$$
$$\displaystyle\tilde{D}_{2}(n)$$
$$\displaystyle=$$
$$\displaystyle-n-4+{4\over n}+{2\over n+1}+{4\over n+2}+4S_{n}$$
(30)
$$\displaystyle\tilde{D}_{4}(n)$$
$$\displaystyle=$$
$$\displaystyle{1\over 2}n^{2}+{5\over 2}n+8-{4\over n}-{4\over n+1}-{12\over n+%
3}-4S_{n}$$
where $S_{n}\equiv\sum_{j=1}^{n-1}(1/j)$. As already mentioned, Eqs. (27-28) describe the contributions of $IR$ renormalons to the non-singlet structure functions, while the more involved case of the singlet parts of the $DIS$ structure functions has been recently investigated in Ref. [39]. There it has been shown that the difference between the $IR$-renormalon contributions to the singlet and non-singlet moments is not relevant for $n\geq 4$ thanks to the quark-gluon decoupling at large $x$. Therefore, for $n\geq 4$ it suffices to consider Eqs. (27-28) after substituting $\mu_{n}^{NS}(Q^{2})$ with $\mu_{n}^{T}(Q^{2})$, given at $NLO$ by Eq. (9).
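The coefficient functions above are simple rational expressions in $n$ plus the harmonic sum $S_{n}$, so their size at the moments used in the fits ($n=4,6,8$) is easy to check numerically. The following is a minimal Python transcription of Eqs. (29-31); the function names are ours, chosen for illustration.

```python
# Evaluate the IR-renormalon coefficient functions of Eqs. (29-31).
# Function names are illustrative; the formulas are transcribed from the text.

def S(n):
    """Harmonic sum S_n = sum_{j=1}^{n-1} 1/j."""
    return sum(1.0 / j for j in range(1, n))

def C2_tilde(n):
    return -n - 8 + 4.0/n + 2.0/(n + 1) + 12.0/(n + 2) + 4*S(n)

def C4_tilde(n):
    return 0.5*n**2 - 1.5*n + 16 - 4.0/n - 4.0/(n + 1) - 36.0/(n + 3) - 4*S(n)

def D2_tilde(n):
    return -n - 4 + 4.0/n + 2.0/(n + 1) + 4.0/(n + 2) + 4*S(n)

def D4_tilde(n):
    return 0.5*n**2 + 2.5*n + 8 - 4.0/n - 4.0/(n + 1) - 12.0/(n + 3) - 4*S(n)

for n in (4, 6, 8):
    print(n, C2_tilde(n), C4_tilde(n), D2_tilde(n), D4_tilde(n))
```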
An interesting feature of the $IR$-renormalon terms
(27-28) is that they are mainly governed by the
values of only two (unknown) parameters, $A_{2}^{IR}$ and $A_{4}^{IR}$, which
appear simultaneously both in the longitudinal and transverse channels at
any value of $n$. The signs of $A_{2}^{IR}$ and $A_{4}^{IR}$ are not
theoretically known, because they depend upon the prescription used to
circumvent the renormalon singularities of the Borel integrals, while within
the $NNA$ approximation their absolute values may be provided by [15]
$$\displaystyle|A_{2}^{IR}|={6C_{F}\Lambda_{\overline{MS}}^{2}\over 33-2N_{f}}e^%
{5/3}~{}~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}|A_{4}^{IR}|={3C_{F}\Lambda_{%
\overline{MS}}^{4}\over 33-2N_{f}}e^{10/3}$$
(32)
with $C_{F}=4/3$. Using for $\Lambda_{\overline{MS}}$ the same values adopted for the $NLO$ calculation of $\alpha_{s}(Q^{2})$ (see previous Section), one gets that $|A_{2}^{IR}|$ varies from $0.10$ to $0.13~{}GeV^{2}$, while $|A_{4}^{IR}|$ ranges from $0.015$ to $0.030~{}GeV^{4}$ for $N_{f}=3,4$.
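For reference, the $NNA$ magnitudes of Eq. (32) can be evaluated directly; the sketch below is a plain transcription, with $\Lambda_{\overline{MS}}$ and $N_{f}$ as inputs (the particular values used here are illustrative, not the exact ones adopted in our $NLO$ running).

```python
import math

CF = 4.0 / 3.0  # Casimir of the fundamental representation of SU(3)

def A2_abs(Lam, Nf):
    """|A_2^{IR}| of Eq. (32), in GeV^2 for Lam in GeV."""
    return 6 * CF * Lam**2 * math.exp(5.0/3.0) / (33 - 2*Nf)

def A4_abs(Lam, Nf):
    """|A_4^{IR}| of Eq. (32), in GeV^4 for Lam in GeV."""
    return 3 * CF * Lam**4 * math.exp(10.0/3.0) / (33 - 2*Nf)

# Illustrative input: Lambda = 0.25 GeV, Nf = 4 (assumed values)
print(A2_abs(0.25, 4), A4_abs(0.25, 4))
```

With these inputs one lands roughly inside the ranges quoted above ($|A_{2}^{IR}|\sim 0.1~GeV^{2}$, $|A_{4}^{IR}|\sim 0.02~GeV^{4}$).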
First of all, we have checked whether the power corrections to the
$NLO$ twist-2 contribution in the transverse channel can be explained by
pure $IR$-renormalon terms in the whole range $1\lesssim Q^{2}\lesssim 20~(GeV/c)^{2}$. It turns out that: i) the quality of the resulting fit is not as good
as the one obtained via the expansion (24) and the obtained minima
of the $\chi^{2}/N$ variable can be much larger than $1$; ii) the
extracted values of $|A_{2}^{IR}|$ and $|A_{4}^{IR}|$ turn out to be much greater
than the $NNA$ expectations (32) and to depend strongly upon the
inclusion of the data at low $Q^{2}$ ($Q^{2}\sim$ few $(GeV/c)^{2}$). Moreover,
the $IR$-renormalon contribution to the longitudinal moments (Eq.
(28)), calculated using the values of $A_{2}^{IR}$ and $A_{4}^{IR}$
determined from the analysis of the transverse channel, leads to a large
overestimation of the longitudinal data, as already noted in Ref.
[15]. These results may be viewed as an effect of the presence of
higher-twist terms generated by multiparton correlations in the transverse
data for $Q^{2}\gtrsim 1~(GeV/c)^{2}$. To make this statement more
quantitative, we start with the analysis of the longitudinal data adopting
for the power corrections a pure $IR$-renormalon picture; namely, for $n\geq 4$ we have used the following expansion
$$\displaystyle M_{n}^{L}(Q^{2})$$
$$\displaystyle=$$
$$\displaystyle\mu_{n}^{L}(Q^{2})+\mu_{n}^{L(IR)}(Q^{2})~{}\to_{n\geq 4}~{}\mu_{%
n}^{L}(Q^{2})+\mu_{n}^{T}(Q^{2})\cdot$$
(33)
$$\displaystyle\left\{{A_{2}^{IR}\over Q^{2}}\left[{8\alpha_{s}(Q^{2})\over 6\pi%
}{\tilde{D}_{2}(n)\over n+1}-{4n\over n+2}\right]+{A_{4}^{IR}\over Q^{4}}\left%
[{8\alpha_{s}(Q^{2})\over 6\pi}{\tilde{D}_{4}(n)\over n+1}-4n{n+1\over n+3}%
\right]\right\}$$
where $\mu_{n}^{L}(Q^{2})$ is given by Eq. (12) and calculated using the $GRV$ parton distributions [32], while $\mu_{n}^{T}(Q^{2})$ (see Eq. (9)) is taken from the $NLO$ analysis of the transverse data made in the previous Section (see Tables 2 and 3 for the values of the twist-2 parameters $a_{n}^{(2)}$).
For each value of $n\geq 4$ we have determined the values of $A_{2}^{IR}$ and $A_{4}^{IR}$ from the least-$\chi^{2}$ fit to the longitudinal data in the whole range $1\lesssim Q^{2}\lesssim 20~(GeV/c)^{2}$; our results are reported in Tables 4-5 and Figs. 9-10 for proton and deuteron targets, respectively. It can clearly be seen that: i) the extracted values of $A_{2}^{IR}$ are almost independent of $n$, which means that the $n$-dependence (i.e., the shape in $x$) of the $1/Q^{2}$ power correction is nicely predicted by the $NNA$ approximation; ii) the values obtained for $|A_{2}^{IR}|$ are only slightly larger than the $NNA$ expectation (32); iii) the determination of $A_{4}^{IR}$ is almost compatible with zero and affected by large uncertainties, since the $1/Q^{4}$ power corrections turn out to be quite small in the longitudinal channel; nevertheless, the extracted values are not completely inconsistent with the $NNA$ predictions (32); iv) the power corrections appear to be approximately the same in the proton and deuteron longitudinal channels (as expected from a pure $IR$-renormalon phenomenology).
As for the second moment $M_{2}^{L}(Q^{2})$, we have simplified our analysis by taking only the non-singlet $IR$-renormalon contribution (28), i.e. by totally neglecting its singlet part. This is an approximation, but the resulting fit to the data on $M_{2}^{L}(Q^{2})$ turns out to be quite good, as can be seen from Figs. 9(a) and 10(a), yielding values of $A_{2}^{IR}$ and $A_{4}^{IR}$ only slightly different from the ones previously determined by the analysis of the moments with $n\geq 4$ (see Tables 4 and 5).
To sum up, for both proton and deuteron targets a pure
$IR$-renormalon description of power corrections works quite nicely in the
longitudinal channel starting already at $Q^{2}\simeq 1~{}(GeV/c)^{2}$.
Averaging the results of Tables 4 and 5 for $n\geq 4$ only, our determination of the $IR$-renormalon strength parameters is $A_{2}^{IR}\simeq-0.132\pm 0.015~GeV^{2}$ and $A_{4}^{IR}\simeq 0.009\pm 0.003~GeV^{4}$, which, we stress, are not inconsistent with the $NNA$
expectations (32). Moreover, the value found for $A_{2}^{IR}$ nicely
agrees with the corresponding findings of Ref. [17], recently
obtained from a $NLO$ analysis of the $CCFR$ data [18] on $xF_{3}^{N}(x,Q^{2})$.
Before closing this section, we stress again that the
$IR$-renormalon power-like terms should be regarded as the general
uncertainty in the perturbative calculation of the twist-2 term. Thus, it is
worth recalling that our estimates of the $IR$-renormalon parameters have
been obtained by taking the perturbative calculation of the twist-2 term at
$NLO$. We expect that our determination of the $IR$-renormalon parameters
holds only at $NLO$ and would vary if higher orders of the perturbation
theory were included. As a matter of fact, a significant reduction of the
$IR$-renormalon terms seems to be suggested by the recent $NNLO$ results
quoted in Refs. [16] and [17].
6 Final Analysis of the Transverse Data
After having determined the strengths and signs of the twist-4 and twist-6 $IR$-renormalon contributions from the analysis of the longitudinal channel, we can now proceed to the final analysis of the transverse data, adopting the following twist expansion
$$\displaystyle M_{n}^{T}(Q^{2})=\mu_{n}^{T}(Q^{2})+\mu_{n}^{T(IR)}(Q^{2})+a_{n}%
^{(4)}~{}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^%
{(4)}}~{}{\mu^{2}\over Q^{2}}+a_{n}^{(6)}~{}\left[{\alpha_{s}(Q^{2})\over%
\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{(6)}}~{}{\mu^{4}\over Q^{4}}$$
(34)
where now the higher-twist terms involving the parameters $a_{n}^{(4)}$, $\gamma_{n}^{(4)}$, $a_{n}^{(6)}$ and $\gamma_{n}^{(6)}$ should be related to (target-dependent) multiparton correlation effects. Collecting Eqs. (9) and (27) one has for $n\geq 4$
$$\displaystyle M_{n\geq 4}^{T}(Q^{2})$$
$$\displaystyle=$$
$$\displaystyle a_{n}^{(2)}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}%
\right]^{\gamma_{n}^{NS}}~{}{1+\alpha_{s}(Q^{2})R_{n}^{NS}/4\pi\over 1+\alpha_%
{s}(\mu^{2})R_{n}^{NS}/4\pi}\left\{1+{A_{2}^{IR}\over Q^{2}}\tilde{C}_{2}(n)+{%
A_{4}^{IR}\over Q^{4}}\tilde{C}_{4}(n)\right\}+$$
(35)
$$\displaystyle a_{n}^{(4)}~{}\left[{\alpha_{s}(Q^{2})\over\alpha_{s}(\mu^{2})}%
\right]^{\gamma_{n}^{(4)}}~{}{\mu^{2}\over Q^{2}}+a_{n}^{(6)}~{}\left[{\alpha_%
{s}(Q^{2})\over\alpha_{s}(\mu^{2})}\right]^{\gamma_{n}^{(6)}}~{}{\mu^{4}\over Q%
^{4}}$$
The $IR$-renormalon parameters $A_{2}^{IR}$ and $A_{4}^{IR}$ have been kept fixed at the values $A_{2}^{IR}=-0.132~{}GeV^{2}$ and $A_{4}^{IR}=0.009~{}GeV^{4}$ found in the previous Section, while the values of the five parameters $a_{n}^{(2)}$, $a_{n}^{(4)}$, $\gamma_{n}^{(4)}$, $a_{n}^{(6)}$ and $\gamma_{n}^{(6)}$ have been determined through the least-$\chi^{2}$ procedure and reported in Tables 6 and 7 for proton and deuteron, respectively. Comparing with the results obtained without the $IR$-renormalon contributions (see Tables 2 and 3), it can be clearly seen that the values of the twist-2 parameters $a_{n}^{(2)}$ are almost unchanged, while the values of the higher-twist parameters $a_{n}^{(4)}$, $\gamma_{n}^{(4)}$, $a_{n}^{(6)}$ and $\gamma_{n}^{(6)}$ vary only within the uncertainties of the fitting procedure. Note that with the inclusion of the $IR$-renormalon contribution the sum $a_{n}^{(4)}+a_{n}^{(6)}$ is closer to zero, which implies that at $Q^{2}\simeq\mu^{2}=1~{}(GeV/c)^{2}$ the twist-4 and twist-6 terms generated by multiparton correlations almost totally compensate each other. Such an effect is clearly illustrated in Figs. 11 and 12, where the contributions of the twist-2 at $NLO$, of the $IR$ renormalons and of the multiparton correlations are separately reported. From Figs. 11 and 12 it can also be seen that, for $n\geq 4$, the $IR$-renormalon contribution increases significantly around $Q^{2}\sim 1~{}(GeV/c)^{2}$ and could become of the same order of magnitude as the twist-2 term at $NLO$ for higher-order moments. The effects from multiparton correlations appear to exceed the $IR$-renormalon term only for $Q^{2}\gtrsim 2~{}(GeV/c)^{2}$ (at $n\geq 4$).
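To illustrate how the three pieces of the twist expansion (35) combine numerically, the sketch below evaluates it with a simple one-loop form of $\alpha_{s}$; the running, the default renormalon strengths and all parameter values are our assumptions for illustration only, not the fitted entries of Tables 6 and 7.

```python
import math

def alpha_s(Q2, Lam2=0.0625, Nf=4):
    """One-loop running coupling (illustrative stand-in for the NLO one)."""
    b0 = 11.0 - 2.0*Nf/3.0
    return 4*math.pi / (b0 * math.log(Q2 / Lam2))

def M_T(Q2, a2, gNS, Rn, a4, g4, a6, g6,
        A2=-0.132, A4=0.009, C2t=0.0, C4t=0.0, mu2=1.0):
    """Twist expansion of Eq. (35): NLO twist-2 times the renormalon
    bracket, plus the two multiparton-correlation power terms."""
    r = alpha_s(Q2) / alpha_s(mu2)
    twist2 = (a2 * r**gNS
              * (1 + alpha_s(Q2)*Rn/(4*math.pi))
              / (1 + alpha_s(mu2)*Rn/(4*math.pi)))
    bracket = 1 + A2/Q2*C2t + A4/Q2**2*C4t
    return twist2*bracket + a4*r**g4*mu2/Q2 + a6*r**g6*(mu2/Q2)**2
```

A convenient sanity check: at $Q^{2}=\mu^{2}$ every ratio collapses to one, so $M_{n}^{T}$ reduces to $a_{n}^{(2)}\,(1+A_{2}^{IR}\tilde{C}_{2}+A_{4}^{IR}\tilde{C}_{4})+a_{n}^{(4)}+a_{n}^{(6)}$.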
As for the second moment $M_{2}^{T}(Q^{2})$, following our previous analyses, we apply the $IR$-renormalon correction only to the non-singlet twist-2 term; moreover, the twist-2 parameters $a_{2}^{S}$ and $a_{2}^{G}$ are kept fixed at the values given in Table 1 for the deuteron and only the twist-4 term $a_{n}^{(4)}~{}\left[\alpha_{s}(Q^{2})/\alpha_{s}(\mu^{2})\right]^{\gamma_{n}^{(4)}}~{}(\mu^{2}/Q^{2})$ is explicitly considered in the analysis. The resulting value of the twist-2 parameter $a_{2}^{NS}$ is $0.096\pm 0.006$ ($0.066\pm 0.0044$) for the proton (deuteron), which coincides within the uncertainties with the one given in Table 1. The twist-4 parameter $a_{2}^{(4)}$ turns out to be almost compatible with zero, namely $a_{2}^{(4)}=0.01\pm 0.01$ for the proton and $|a_{2}^{(4)}|\lesssim 10^{-3}$ for the deuteron. Considering also the results of the previous Section on the longitudinal channel, our analyses indicate that the smallness of multiparton correlation effects on the second moments (both transverse and longitudinal ones) is consistent with the $Q^{2}$-behaviour of the data starting already at $Q^{2}\simeq 1~{}(GeV/c)^{2}$.
Based on naive counting arguments (see, e.g., Ref. [40]), one can argue that the twist expansion for the transverse moments at $Q^{2}\simeq\mu^{2}$ can be approximately rewritten as
$$\displaystyle M_{n}^{T}(\mu^{2})\simeq A_{n}^{(2)}\left[1+n\left({\Gamma_{n}^{%
(4)}\over\mu}\right)^{2}-n^{2}\left({\Gamma_{n}^{(6)}\over\mu}\right)^{4}\right]$$
(36)
where $A_{n}^{(2)}$ is the twist-2 contribution and $\Gamma_{n}^{(4)}$ ($\Gamma_{n}^{(6)}$) represents the mass scale of the twist-4 (twist-6) term, expected to be approximately independent of $n$ for $n\gtrsim 4$. (Note that in Eq. (36) we have already taken into account the opposite signs of the twist-4 and twist-6 terms as resulting from our analyses). Thus, one gets
$$\displaystyle\Gamma_{n}^{(4)}=\mu\sqrt{{a_{n}^{(4)}\over nA_{n}^{(2)}}}~{}~{},%
~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\Gamma_{n}^{(6)}=\mu\left[{|a_%
{n}^{(6)}|\over n^{2}A_{n}^{(2)}}\right]^{1/4}.$$
(37)
Our results for $\Gamma_{n}^{(4)}$ and $\Gamma_{n}^{(6)}$ at $n=4,6,8$,
obtained taking $a_{n}^{(4)}$ and $a_{n}^{(6)}$ from Tables 6 and 7 and using
for $A_{n}^{(2)}$ the twist-2 term at $NLO$ (i.e., $a_{n}^{(2)}$) plus the
whole $IR$-renormalon contribution (as determined from our fitting procedure
and evaluated at $Q^{2}=\mu^{2}=1~{}(GeV/c)^{2}$), are collected in Fig. 13.
It can clearly be seen that the mass scales of the twist-4 and twist-6 terms
are indeed approximately independent of $n$, viz. $\Gamma_{n}^{(4)}\simeq\Gamma^{(4)}\simeq 0.76~{}GeV$ and $\Gamma_{n}^{(6)}\simeq\Gamma^{(6)}\simeq 0.55~{}GeV$. The value obtained for $\Gamma^{(4)}$ is significantly
higher than the naive expectation $\Gamma^{(4)}\simeq\sqrt{<k_{\perp}^{2}>}\simeq 0.3~{}GeV$ [40, 41], but not very far from the results of
Ref. [13]. Without including the $IR$-renormalon contribution in
$A_{n}^{(2)}$ (i.e., taking only $A_{n}^{(2)}=a_{n}^{(2)}$), the values of
$\Gamma^{(4)}$ and $\Gamma^{(6)}$ would increase by $\simeq 20\%$ and
$\simeq 10\%$, respectively (cf. also Ref. [42]).
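Once $a_{n}^{(4)}$, $a_{n}^{(6)}$ and $A_{n}^{(2)}$ are known, Eq. (37) is a one-line computation. The sketch below transcribes it with $\mu=1$ GeV; the numerical inputs are placeholders for illustration, not the actual fitted values of Tables 6 and 7.

```python
# Mass scales of Eq. (37), with mu = 1 GeV as in the text.
mu = 1.0  # GeV

def Gamma4(n, a4, A2):
    """Twist-4 mass scale Gamma_n^{(4)} in GeV (requires a4 > 0)."""
    return mu * (a4 / (n * A2)) ** 0.5

def Gamma6(n, a6, A2):
    """Twist-6 mass scale Gamma_n^{(6)} in GeV."""
    return mu * (abs(a6) / (n**2 * A2)) ** 0.25

# Placeholder inputs (n, a4 or a6, A2) for illustration only:
print(Gamma4(4, 0.09, 0.04), Gamma6(4, -0.015, 0.04))
```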
Before closing the Section, we want to address briefly the impact that the specific value adopted for the strong coupling constant at the $Z$-boson mass can have on our determination of multiparton correlation effects in the transverse channel. As already mentioned, existing analyses of the whole set of $DIS$ (unpolarised) world data favour the value $\alpha_{s}(M_{Z}^{2})\simeq 0.113$ (see, e.g., Ref. [12]), which is however well below the updated $PDG$ value $\alpha_{s}(M_{Z}^{2})=0.119\pm 0.002$ [23]. Moreover, in Ref. [16] it has been argued that an increase of $\alpha_{s}(M_{Z}^{2})$ up to $\simeq 0.120$ can give rise to a significant decrease of the relevance of the higher-twist effects in the $DIS$ data (up to a reduction by a factor $\simeq 2$). Therefore, we have repeated our analyses of longitudinal and transverse pseudo-data adopting the value $\alpha_{s}(M_{Z}^{2})=0.118$, for which a set of parton distributions is available from Ref. [43] (we use the parton distributions only for the calculation of $\mu_{n}^{L}(Q^{2})$ at $NLO$). All the results obtained at the higher value of $\alpha_{s}(M_{Z}^{2})$ have the same quality as those presented at $\alpha_{s}(M_{Z}^{2})=0.113$, with a slight but systematic increase of the minima of the $\chi^{2}/N$ variable. The $IR$-renormalon parameters $A_{2}^{IR}$ and $A_{4}^{IR}$ have been determined again by fitting the longitudinal data in the $Q^{2}$-range from $1$ to $20~{}(GeV/c)^{2}$, and their values are now given by $A_{2}^{IR}\simeq-0.103\pm 0.017$ and $A_{4}^{IR}\simeq 0.005\pm 0.004$, which are compatible within the quoted uncertainties with the corresponding results of the high-$Q^{2}$ analysis of Ref. [16]. Thus, by construction, the sum of the twist-2 term at $NLO$ and the $IR$-renormalon contribution is almost independent of the specific value of $\alpha_{s}(M_{Z}^{2})$ in the longitudinal channel. However, the same happens in the transverse channel, as is clearly illustrated in Fig. 14, where the results obtained at $\alpha_{s}(M_{Z}^{2})=0.113$ and $\alpha_{s}(M_{Z}^{2})=0.118$ are compared for the transverse moments with $n\geq 4$ and found to differ by less than $\simeq 5\%$. Therefore, though the $NLO$ twist-2 terms as well as the $IR$-renormalon contributions are separately sensitive to the specific value of $\alpha_{s}(M_{Z}^{2})$, their sum turns out to be quite independent of $\alpha_{s}(M_{Z}^{2})$. This means that our determination of the multiparton correlation effects is only marginally affected by the specific value adopted for $\alpha_{s}(M_{Z}^{2})$, at least in the range of values from $\simeq 0.113$ to $\simeq 0.118$.
7 Conclusions
We have analysed the power corrections to the $Q^{2}$ behaviour of the
low-order moments of both the longitudinal and transverse structure
functions of proton and deuteron using available phenomenological fits of
existing data in the $Q^{2}$ range between $1$ and $20~{}(GeV/c)^{2}$. The
$SLAC$ proton data of Ref. [3], which cover the region beyond $x\simeq 0.7$ up to $x\simeq 0.98$, as well as existing data in the nucleon-resonance regions, have been included in the analysis with the aim of
determining the effects of target-dependent higher-twists (i.e., multiparton
correlations).
The Nachtmann definition of the moments has been adopted for
disentangling properly kinematical target-mass and dynamical higher-twist
effects in the data. The leading twist has been treated at the $NLO$ in the
strong coupling constant and, as far as the transverse channel is concerned,
the twist-2 has been extracted simultaneously with the higher-twist terms.
The effects of higher orders of the perturbative series have been estimated
adopting the infrared renormalon model of Ref. [14], containing
both $1/Q^{2}$ and $1/Q^{4}$ power-like terms. It has been shown that the
longitudinal and transverse data cannot be explained simultaneously by the
renormalon contribution only; however, in the whole $Q^{2}$ range between $1$
and $20~{}(GeV/c)^{2}$ the longitudinal channel appears to be consistent with
a pure $IR$-renormalon picture, adopting strengths not inconsistent with
those expected in the naive non-abelianization approximation and in nice
agreement with the results of a recent analysis [17] of the $CCFR$
data [18] on $xF_{3}^{N}(x,Q^{2})$. Then, after including the $1/Q^{2}$
and $1/Q^{4}$ $IR$-renormalon terms as fixed by the longitudinal channel,
the contributions of multiparton correlations to both the twist-4 and
twist-6 terms have been phenomenologically determined in the transverse
channel and found to have non-negligible strengths with opposite signs. It
has been also checked that our determination of the multiparton correlation
effects is only marginally affected by the specific value adopted for
$\alpha_{s}(M_{Z}^{2})$ (at least in the range from $\simeq 0.113$ to $\simeq 0.118$).
An interesting outcome of our analysis is that the extracted twist-2
contribution in the deuteron turns out to be compatible with the enhancement
of the $d$-quark parton distribution recently proposed in Ref. [16].
Let us stress that our analysis is presently limited by: i) the
use of phenomenological fits of existing data (i.e., pseudo-data),
which are required in order to interpolate the structure functions in the
whole $x$-range for fixed values of $Q^{2}$, and ii) by the existing large
uncertainties in the determination of the $L/T$ ratio $R_{L/T}^{N}(x,Q^{2})$.
Therefore, both transverse data with better quality at $x\gtrsim 0.6$ and $Q^{2}\lesssim 10~(GeV/c)^{2}$ and more precise and systematic determinations of
the $L/T$ cross section ratio, which may be collected at planned
facilities like, e.g., $JLAB~{}@~{}12~{}GeV$, could help to improve our
understanding of the non-perturbative structure of the nucleon. Finally, we
want to point out that, since in inclusive data multiparton correlations
appear to generate power-like terms with opposite signs, semi-inclusive or
exclusive experiments might offer the possibility to achieve a better
sensitivity to individual non-perturbative power corrections.
Acknowledgments
One of the authors (S.S.) gratefully thanks Stefano Forte for many useful discussions on renormalons and power corrections during the preparation of this paper.
References
[1]
E.V. Shuryak and A.I. Vainshtein: Nucl. Phys. B199
(1982) 451; ib. B201 (1982) 141.
[2]
A. Bodek et al.: Phys. Rev. D20 (1979) 7. S.
Stein et al.: Phys. Rev. D12 (1975) 1884.
[3]
P. Bosted et al.: Phys. Rev. D49 (1994) 3091.
[4]
European Muon Collaboration, J.J. Aubert et al.: Nucl. Phys.
B259 (1985) 189; ib. B293 (1987) 740.
[5]
$BCDMS$ Collaboration: A.C. Benvenuti et al.: Phys. Lett.
B223 (1989) 485; ib. B237 (1990) 592.
[6]
New Muon Collaboration, M. Arneodo et al.: Nucl. Phys. B483 (1997) 3197; ib. B487 (1997) 3.
[7]
L.W. Whitlow et al.: Phys. Lett. B282 (1992) 475.
[8]
$SMC$ Collaboration, B. Adeva et al.: Phys. Rev. D58
(1998) 112001.
[9]
$CDHS$ Collaboration, A. Abramowicz et al.: Phys. Lett. B107 (1981) 141. $CDHSW$ Collaboration, P. Berge et al.: Z. Phys. C49 (1991) 187. S. Dasu et al.: Phys. Rev. D49 (1994) 5641. $E140X$ Collaboration, L.H. Tao et al.: Z. Phys. C70 (1996) 387.
[10]
L.W. Whitlow et al.: Phys. Lett. B250 (1990) 193.
[11]
J. Bartelski et al.: e-print archive hep-ph/9804415.
[12]
M. Virchaux and A. Milsztajn: Phys. Lett. B274
(1992) 221.
[13]
X. Ji and P. Unrau: Phys. Rev. D52 (1995) 72.
[14]
M. Dasgupta and B.R. Webber: Phys. Lett. B382 (1996)
273.
[15]
E. Stein et al.: Phys. Lett. B376 (1996) 177. M. Maul
et al.: Phys. Lett. B401 (1997) 100.
[16]
U.K. Yang and A. Bodek: Phys. Rev. Lett. 82 (1999)
2467.
[17]
A.V. Sidorov and M.V. Tokarev: Nuovo Cim. 110A
(1997) 1401. A.L. Kataev et al.: e-print archive hep-ph/9809500.
[18]
$CCFR-NuTev$ Collaboration, W.G. Seligman et al.: Phys. Rev.
Lett. 79 (1997) 1213.
[19]
G. Ricco et al.: Phys. Rev. C57 (1998) 356; Few-Body
Syst. Suppl. 10 (1999) 423.
[20]
H.D. Politzer: Phys. Rev. Lett. 30 (1973) 1346. D.J.
Gross and F. Wilczek: Phys. Rev. Lett. 30 (1973) 1323. See also F.J.
Yndurain: The Theory of Quark and Gluon Interactions, Springer Verlag
(New York, 1993).
[21]
G. Altarelli: Phys. Rept. 81 (1982) 1.
[22]
O. Nachtmann: Nucl. Phys. B63 (1973) 237.
[23]
Particle Data Group, C. Caso et al.: Eur. Phys. J. C3
(1998) 1.
[24]
V. Burkert: preprint CEBAF-PR-93-035.
[25]
L.M. Stuart et al.: Phys. Rev. D58 (1998) 032003.
[26]
C. Ciofi degli Atti and S. Simula: Phys. Lett. B325
(1994) 276; Phys. Rev. C53 (1996) 1689.
[27]
M. Anghinolfi et al.: Nucl. Phys. A602 (1996) 405. See
also M. Anghinolfi et al.: J. Phys. G: Nucl. Part. Phys. 21 (1995)
L9.
[28]
M. Lacombe et al.: Phys. Rev. C21 (1980) 861.
[29]
A. Lung et al.: Phys. Rev. Lett. 70 (1993) 718.
[30]
M. Okawa: Nucl. Phys. B172 (1980) 481; ibid. B187 (1981) 71.
[31]
R.L. Jaffe and M. Soldate: Phys. Lett. B105 (1981) 467.
[32]
M. Glück, E. Reya and A. Vogt: Z. Phys. C67 (1995)
433.
[33]
J. Gomez et al.: Phys. Rev. D49 (1994) 4348.
[34]
For a recent review of renormalons, see M. Beneke: preprint CERN-TH/98-233, July 1998 (e-print archive hep-ph/9807443), and references quoted therein.
[35]
I.I. Balitsky and V.M. Braun: Nucl. Phys. B311
(1988/89) 541.
[36]
Yu.L. Dokshitzer, G. Marchesini and B.R. Webber: Nucl. Phys.
B469 (1996) 93.
[37]
M. Beneke, V.M. Braun and L. Magnea: Nucl. Phys. B497
(1997) 297.
[38]
V.M. Braun: in Proc. of the $XXX^{th}$ Rencontres de Moriond
QCD and High Energy Hadron Interaction, Les Arcs (France), March
1995, e-print archive hep-ph/9505317.
[39]
E. Stein, M. Maul, L. Mankiewicz and A. Schafer: Nucl.
Phys. B536 (1998) 318.
[40]
A. De Rujula, H. Georgi and H.D. Politzer: Ann. Phys. 103 (1977) 315.
[41]
R.K. Ellis, W. Furmanski and R. Petronzio: Nucl. Phys. B212 (1983) 29.
[42]
G. Ricco and S. Simula: in Proc. of the Int’l Workshop on
JLAB: Physics and Instrumentation with 6-12 GeV Beams, Jefferson
Laboratory (Newport News, USA), June 1998, JLab Press (Newport News, 1999),
p. 313 (also e-print archive hep-ph/9809264).
[43]
A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne:
Eur. Phys. J. C4 (1998) 463. See also A.D. Martin, R.G. Roberts and
W.J. Stirling: Phys. Lett. B387 (1996) 419.
TABLES
Table 1. Values of the twist-2 parameters $a_{2}^{NS}\equiv\mu_{2}^{NS}(\mu^{2})$, $a_{2}^{S}\equiv\mu_{2}^{S}(\mu^{2})$ and $a_{2}^{G}\equiv\mu_{2}^{G}(\mu^{2})$, and of the twist-4 parameters $a_{2}^{(4)}$ and
$\gamma_{2}^{(4)}$, obtained by fitting with Eq. (23) the
pseudo-data for the proton and the deuteron (see Figs. 2(a) and 3(a),
respectively). In case of the proton the parameters $a_{2}^{S}$ and $a_{2}^{G}$ have
been kept fixed at the values obtained for the deuteron. The last row
reports the value of the $\chi^{2}$ of the fit divided by the number of
degrees of freedom.
Table 2. Values of the twist-2 parameter $a_{n}^{(2)}$ and of the higher-twist parameters $a_{n}^{(4)}$, $\gamma_{n}^{(4)}$, $a_{n}^{(6)}$ and $\gamma_{n}^{(6)}$, obtained by fitting with Eq. (24) the proton pseudo-data of Fig. 2 for $n=4,6$ and $8$. The last row reports the value of the $\chi^{2}$ of the fit divided by the number of degrees of freedom.
Table 3. The same as in Table 2, but for the deuteron.
Table 4. Values of the $IR$-renormalon parameters $A_{2}^{IR}$ and $A_{4}^{IR}$ obtained using Eq. (33) to fit the longitudinal proton data of Fig. 4. The twist-2 contribution $\mu_{n}^{L}(Q^{2})$ is calculated at $NLO$ via Eq. (12) adopting the $GRV$ set of parton distributions [32], while $\mu_{n}^{T}(Q^{2})$ (see Eq. (9)) is obtained using the values of $a_{n}^{(2)}$ reported in Table 2. The last row reports the value of the $\chi^{2}$ of the fit divided by the number of degrees of freedom.
Table 5. The same as in Table 4, but for the deuteron.
Table 6. Values of the twist-2 parameter $a_{n}^{(2)}$ and of the higher-twist parameters $a_{n}^{(4)}$, $\gamma_{n}^{(4)}$, $a_{n}^{(6)}$ and $\gamma_{n}^{(6)}$, obtained by fitting with Eq. (35) the proton pseudo-data of Fig. 2 for $n=4,6$ and $8$. The $IR$-renormalon twist-4 and twist-6 contributions are obtained using $A_{2}^{IR}=-0.132~{}GeV^{2}$ and $A_{4}^{IR}=0.009~{}GeV^{4}$ (see text). The last row reports the value of the $\chi^{2}$ of the fit divided by the number of degrees of freedom.
Table 7. The same as in Table 6, but for the deuteron.
FIGURES
An elastic model for the In–In correlations in In${}_{x}$Ga${}_{1-x}$As semiconductor alloys
A. S. Martins
Belita Koiller and R. B. Capaz
Instituto de Física, Universidade Federal do Rio de
Janeiro
Cx. Postal 68.528, Rio de Janeiro, 21945-970, Brazil
(November 21, 2020)
Abstract
Deviations from randomicity in In${}_{x}$Ga${}_{1-x}$As semiconductor alloys induced by elastic effects are investigated within the Keating potential.
Our model is based on Monte Carlo simulations on large (4096 atoms) supercells, performed
with two types of boundary conditions: Fully periodic boundary conditions represent the
bulk, while periodic boundary conditions along the $x$ and $y$ directions and a free surface in the $z$ direction simulate the epitaxial growth environment.
We show that In–In correlations identified in the bulk tend to be enhanced
in the epitaxially grown samples.
I Introduction
In recent years, order versus disorder in substitutional semiconductor
alloys has motivated several theoretical [1, 2, 3, 4, 5] and experimental [7, 8, 9, 10, 11] studies.
Special emphasis is given to the understanding of the physical mechanisms behind observed deviations from random distributions in the atomic positions. The distribution of different species over the atomic sites in such compounds is largely responsible for variations in electronic properties such as the band gap, the density of states at the Fermi energy and electron confinement levels.
Moreover, the atomic correlations in these systems are responsible for processes such as interface segregation and clustering, which are fundamental in defining the atomic scale structure and roughness of heterostructure interfaces.
In particular, In${}_{x}$Ga${}_{1-x}$As semiconductor alloys with $x\leq 20\%$
have been the subject of experimental studies indicating deviations from randomicity in their atomic configurations [7, 8, 9].
Zheng et al. [7] identified a strong correlation between In atoms in $x=20\%$ samples.
Clusters of 2 – 3 In atoms were reported to form preferentially along the [001] growth direction, in second-nearest-neighbor (2nn) positions.
Chao et al. [8] investigated $x=5\%$ samples and, in apparent contradiction with Zheng’s results, no strong correlation was found for 2nn pairs along the growth direction, whereas an important anti-correlation for first-nearest-neighbor (1nn) pairs along [110] was reported [12].
The absence of clustering of In atoms in $x=12\%$ quantum wires was
also noted by Pfister et al. [9].
Chao and Zheng emphasize that the mechanism behind the clustering/anticlustering effects in these alloys is the strain induced by In atoms, due to the $7\%$ mismatch between the GaAs and InAs lattice parameters. Both studies [7, 8] were based on In${}_{x}$Ga${}_{1-x}$As alloys grown using molecular beam epitaxy (MBE) on GaAs (001) substrates at temperatures $\sim 800K$.
The alloy samples were kept below the critical thickness on the GaAs substrate.
Cross-sectional images for cleaved $(110)$ [7] and $(1\overline{1}0)$ [8] surfaces, obtained from scanning tunneling microscopy (STM), were used to determine the In distribution in the alloy. We follow here the orientation convention
of Pashley et al. [13].
The only apparent reason for the discrepancy concerning the second-neighbor clustering tendency in these experiments is the In concentration range: Cluster formation was reported in $x=20\%$ samples [7], while no second-neighbor correlation was identified in samples with smaller values of $x$ [8, 9]. We investigate this possibility by presenting a theoretical study of In-In pair correlations within a Keating model potential [14] to describe the elastic interactions in the alloy. The Monte Carlo Metropolis algorithm is used to determine the thermodynamic equilibrium configuration at finite temperatures. Phase-diagram calculations predict that InGaAs alloys are completely miscible at the growth temperature [3], but no detailed theoretical analysis of pair correlation functions has been found in the literature. Therefore, one of the goals of the present work is to determine whether the resulting In-In correlations are a consequence of bulk thermodynamics or whether growth kinetics is important. Our simulations are then performed within two models, according to the boundary conditions: A bulk model (fully periodic) and an epitaxial growth model (periodic only in the $x$ and $y$ directions).
II Formalism
Following Ref. [8], we define the pair correlation function, $C\left({\bf r}_{12}\right)$ as
$$C\left({\bf r}_{12}\right)=\frac{P\left({\bf r}_{12}\right)}{R\left({\bf r}_{12}\right)},$$
(1)
where $P({\bf r}_{12})=P\left({\bf r}_{1},{\bf r}_{2}\right)$ is the
probability of finding two In atoms at ${\bf r}_{1}$ and ${\bf r}_{2}$,
which, for homogeneous alloys, is only a function of the
relative position ${\bf r}_{12}={\bf r}_{1}-{\bf r}_{2}$, and
$R({\bf r}_{12})$ is the equivalent quantity calculated for a random alloy
with the same composition.
If $C\left({\bf r}_{12}\right)>1,$ we have correlation
between the atoms that form the pair, and if
$C\left({\bf r}_{12}\right)<1,$ we have anticorrelation. Correlation indicates an effective attraction between the atoms, and anticorrelation implies a net repulsion.
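As an illustration, the sketch below estimates $C({\bf r}_{12})$ by counting In-In pairs on a randomly occupied cation (fcc) sublattice with periodic boundaries. The function names, grid size, and composition are our own choices, not the authors' code; for an uncorrelated alloy the ratio should be close to 1 for any separation vector.

```python
import random

def fcc_sites(n):
    """Cation (fcc) sublattice: integer triples with even coordinate sum."""
    return [(i, j, k) for i in range(n) for j in range(n) for k in range(n)
            if (i + j + k) % 2 == 0]

def pair_correlation(occ, sites, d, n, x):
    """C(r12) = P(r12)/R(r12) for separation vector d, periodic boundaries.
    occ[site] is True for In; R = x**2 for a random alloy of composition x."""
    pairs = in_pairs = 0
    for (i, j, k) in sites:
        nb = ((i + d[0]) % n, (j + d[1]) % n, (k + d[2]) % n)
        pairs += 1
        in_pairs += occ[(i, j, k)] and occ[nb]
    return (in_pairs / pairs) / (x * x)

random.seed(1)
n, x = 20, 0.3                      # n must be even to preserve fcc parity
sites = fcc_sites(n)
occ = {s: random.random() < x for s in sites}
c_1nn = pair_correlation(occ, sites, (1, 1, 0), n, x)  # [110]-type 1nn
c_2nn = pair_correlation(occ, sites, (2, 0, 0), n, x)  # [200]-type 2nn
print(round(c_1nn, 2), round(c_2nn, 2))  # both close to 1 for a random alloy
```

Deviations of this ratio from 1 in equilibrated configurations are precisely the correlation/anticorrelation signals discussed below.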
The Keating potential provides a good description for elastic energies in III-V semiconductor alloys [15, 16].
It is given by
$$U=\sum\limits_{ij}\frac{3}{8d_{ij}^{2}}\alpha_{ij}\left[{\bf r}_{ij}^{2}-d_{ij}^{2}\right]^{2}+\sum\limits_{\angle ijk}\frac{3}{8d_{ij}d_{jk}}\beta_{ijk}\left[{\bf r}_{ij}\cdot{\bf r}_{jk}+\frac{d_{ij}d_{jk}}{3}\right]^{2},$$
(2)
where $d_{ij}$ is the equilibrium bond length between atoms $i$ and $j$
in the corresponding zincblende binary compound, ${\bf r}_{ij}$ is the relative position vector between nearest neighbors in
different sublattices and $\alpha_{ij}$
and $\beta_{ijk}$ are the bond-stretching and bond-bending constants, respectively.
The elastic energy thus results from two contributions:
The first summation (performed over nearest-neighbor pairs) refers to the
excess energy due to the bond-length variations and
the second term (three-center term) gives the contribution due to the deviations of bond angles from their ideal tetrahedral values.
Values for $d_{ij},$ $\alpha_{ij}$ and $\beta_{ijk}$
were taken from Ref. [15].
In particular, the difference between $d_{\rm GaAs}=2.448\,$Å and
$d_{\rm InAs}=2.622\,$Å leads to the structural mismatch in the system.
For heterogeneous bond angles, $\beta_{ijk}$ was taken as the geometric
mean of the bond-bending constants for the binary compounds:
$$\beta_{ijk}=\sqrt{\beta_{iji}\beta_{kjk}}.$$
(3)
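A minimal sketch of Eq. (2) for a single atom and its four tetrahedral neighbors follows. The GaAs parameter values are quoted from memory of Ref. [15] and serve only as illustrative inputs; the point is that the energy vanishes for the ideal geometry and grows when a bond is distorted.

```python
import math

# Assumed illustrative Keating parameters for GaAs (cf. Ref. [15]):
d0 = 2.448        # Ga-As equilibrium bond length (Angstrom)
alpha = 41.19     # bond-stretching constant
beta = 8.95       # bond-bending constant

def keating_energy(central, neighbors, d=d0, a=alpha, b=beta):
    """Eq. (2) restricted to one atom and its 4 tetrahedral neighbors.
    `neighbors` are absolute positions; bonds point outward from `central`."""
    bonds = [[p[i] - central[i] for i in range(3)] for p in neighbors]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    # Bond-stretching term: penalizes |r|^2 deviating from d^2.
    u = sum(3.0 / (8 * d * d) * a * (dot(r, r) - d * d) ** 2 for r in bonds)
    # Bond-bending term over the 6 distinct angles at the central atom:
    # for ideal tetrahedral bonds r.r' = -d^2/3, so each term vanishes.
    for i in range(4):
        for j in range(i + 1, 4):
            u += 3.0 / (8 * d * d) * b * (dot(bonds[i], bonds[j]) + d * d / 3.0) ** 2
    return u

s = d0 / math.sqrt(3.0)
ideal = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
print(keating_energy((0.0, 0.0, 0.0), ideal))          # ~0 for the ideal tetrahedron
stretched = [tuple(1.05 * c for c in ideal[0])] + ideal[1:]
print(keating_energy((0.0, 0.0, 0.0), stretched) > 0)  # stretching one bond costs energy
```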
Calculations are carried out in supercells with $N_{x}N_{y}N_{z}$
conventional cubic unit cells of the fcc lattice along the $x$, $y$ and $z$ directions, respectively, resulting in a system with $N=8N_{x}N_{y}N_{z}$ atoms.
In${}_{x}$Ga${}_{1-x}$As semiconductor alloys are modeled by supercells in which ${x\cdot N}/2$ sites of the group-III sublattice are occupied by In atoms and the remaining sites by Ga, while the $N/2$ group-V sublattice sites are occupied by As atoms. For a given configuration, the energy is minimized by relaxing all the atomic positions in the supercell. Initially, we set the atoms to be randomly distributed, and evolution to thermodynamic equilibrium at the growth temperature ($T=800$ K) is carried out through the Monte Carlo Metropolis algorithm, via first-neighbor In $\leftrightarrow$ Ga position exchanges. The calculations are performed at constant volume. In the bulk model (Section 3) we use Vegard's law [6] to approximate the lattice parameter, while in the epitaxial growth model (Section 4) we fix the lattice parameter at the GaAs value, aiming to describe a strained growth process.
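The exchange move described above can be sketched as follows. To stay self-contained we replace the relaxed Keating energy by a toy nearest-neighbor In-In repulsion $J$ (an assumed value, not from the paper); the real simulation re-minimizes Eq. (2) over all atomic positions for each trial configuration.

```python
import math
import random

J = 0.05          # eV per In-In 1nn pair (assumed illustrative value)
KB = 8.617e-5     # Boltzmann constant, eV/K
NN = [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0),
      (1, 0, 1), (1, 0, -1), (-1, 0, 1), (-1, 0, -1),
      (0, 1, 1), (0, 1, -1), (0, -1, 1), (0, -1, -1)]  # 12 fcc 1nn vectors

def neighbors(site, n):
    return [tuple((site[i] + d[i]) % n for i in range(3)) for d in NN]

def local_energy(site, occ, n):
    """Energy carried by the atom at `site`: J per In neighbor, if it is In."""
    if not occ[site]:
        return 0.0
    return J * sum(occ[nb] for nb in neighbors(site, n))

def metropolis_step(occ, sites, n, T=800.0):
    """One first-neighbor In <-> Ga exchange attempt (Metropolis rule)."""
    s = random.choice(sites)
    t = random.choice(neighbors(s, n))
    if occ[s] == occ[t]:
        return False                       # same species: nothing to exchange
    before = local_energy(s, occ, n) + local_energy(t, occ, n)
    occ[s], occ[t] = occ[t], occ[s]
    after = local_energy(s, occ, n) + local_energy(t, occ, n)
    dU = after - before
    if dU > 0 and random.random() >= math.exp(-dU / (KB * T)):
        occ[s], occ[t] = occ[t], occ[s]    # reject: undo the swap
        return False
    return True

random.seed(0)
n = 8   # even, so periodic wrapping preserves the fcc parity of the sites
sites = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)
         if (i + j + k) % 2 == 0]
occ = {s: random.random() < 0.2 for s in sites}   # x = 20% In, at random
n_in = sum(occ.values())
for _ in range(2000):
    metropolis_step(occ, sites, n, T=800.0)
print(sum(occ.values()) == n_in)   # exchange moves conserve the In count: True
```

With a repulsive $J$ this toy dynamics drives the 1nn In-In pair count down, mimicking the anticorrelation discussed in the following sections.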
III Bulk model
For the bulk model, after $\sim 10^{3}$ MC steps the system attains thermodynamic equilibrium and statistically independent ($\sim 500$ MC steps apart) alloy configurations are collected into an ensemble to calculate the pair correlation function.
This procedure is repeated for 6 initial random alloy configurations, leading to
analyzed ensembles of at least 100 configurations for each temperature and composition.
The third column of Table 1 shows our results for $x=0.05$ and $x=0.20$
in In${}_{x}$Ga${}_{1-x}$As. Notice an anticorrelation (repulsion) between 1nn pairs and a correlation (attraction) between 2nn pairs. These results are in qualitative agreement with experiments. However, quantitative agreement is poor: Chao’s [8] experiments indicate a much stronger anticorrelation for 1nn pairs than what is predicted by bulk thermodynamics. In addition, the observed experimental discrepancy regarding the 2nn correlation cannot be explained as a composition effect within the bulk model: There is no increase in the calculated correlation upon increasing In concentration from 5% to 20%.
Although not significantly larger than one, the 2nn correlation is clearly present. It is somewhat surprising that there can be such an effective attraction (for 2nn pairs) between In atoms based on elastic effects alone. Since In atoms are larger than Ga ones, one would expect In impurities to compress the lattice locally and therefore repel each other. To see this in more detail, let us consider the interaction of two isolated In impurities in GaAs. We define a pair interaction energy as
$$\Delta E=U-2E_{0},$$
(4)
where $U$ is the Keating energy for the supercell with two impurities and $E_{0}$ the elastic energy for a single impurity, calculated to be $E_{0}=138$ meV. The interaction energy is displayed in Table 2. Notice a net attraction (negative energy) along the [001] direction and a net repulsion (positive energy) along [110], consistent with the MC results.
The interaction energy is therefore highly anisotropic, as can also be visualized in Figure 1.
These results can in fact be explained by the intricate geometry of the zincblende structure, as we can see by considering the mean Ga-As bond-length deviation induced at a nearby site by a single In impurity:
$${\Delta L_{i}}=\frac{1}{4}\sum\limits_{j=1}^{4}|{\bf r}_{ij}|-d_{\rm GaAs}~{},$$
(5)
where $i$ is a Ga site and the sum in $j$ is performed over the As sites nearest neighbors to $i$. Negative (positive) $\Delta L$ at a given site means that the lattice is locally compressed (expanded) at that site by the nearby In impurity, and therefore suggests that putting a second In impurity at that site will be energetically unfavorable (favorable). One can see from Table 2 that even a larger impurity such as In in GaAs can produce a local expansion of the zincblende structure along certain directions (for instance, the [001] direction).
An exact opposition between the signs of $\Delta L$ and $\Delta E$ can also be readily seen in the Table, thus supporting the argument given above. The highly anisotropic nature of the pair interactions is in agreement with first-principles calculations for these energies [3]. Interaction energies are determined there from a series expansion in terms of multiatom contributions, fitted to a set of ordered structures. Their results are in qualitative agreement with the present Keating model calculations, giving the most repulsive interactions between first and fourth neighbors [(110) and (220)], while the least repulsive ones correspond to second and third neighbors [(002) and (211), respectively].
Quantitatively, our results in Table 2 are lower than those in Ref. [3] by several meV and, in particular, no attractive pair interactions result from the
first-principles fits.
IV The epitaxial growth model
Aiming at a more realistic model describing the experiments,
we propose a simple description of epitaxial growth:
(i) No periodic boundary conditions are imposed in the $z$ (growth) direction;
(ii) The first two (bottom) monolayers, with $N_{x}=N_{y}=8$, represent the GaAs substrate, and the atomic species are fixed as in bulk GaAs. Atomic positions are fixed according to GaAs only in the $z=0$ plane;
(iii) Growth in the $z$ direction proceeds by adding a single alloy monolayer
at a time, until $N_{z}=8$:
The In $\leftrightarrow$ Ga exchanges in the MC algorithm are restricted to the most
recently grown monolayer (free surface) and the next monolayer is introduced only after thermal equilibrium has been attained. Atomic correlations between successive monolayers are thus induced by the In distribution in the inner layers. This is a reasonable assumption since, at typical MBE growth rates, the time scales for atomic processes at the free surface are much shorter than those in the previously grown planes. In addition, as can be seen in Table 3, there is an elastic-induced attraction between In impurities and the surface. Allowing bulk-surface In exchanges would produce In segregation at the growing surface, an effect that is known to occur but which we do not intend to describe here. In this present model, we do not treat dangling bonds, i.e., the surface is described simply as a truncated semi-infinite solid.
In the fourth column of Table 1, we present the pair correlation function for 5% and 20% In concentrations in In${}_{x}$Ga${}_{1-x}$As. There are now two inequivalent in-plane 1nn pairs directions, $[110]$ and $[1\overline{1}0]$.
For the epitaxial growth model, the configurational averages were performed over 20 different configurations for each concentration and temperature.
The statistical ensemble in the epitaxial growth model is considerably
smaller than for the bulk model, since each growth process here contributes a single configuration to the ensemble.
Moreover, the bottom monolayers are fixed in composition to be of GaAs, which decreases the total number of In pairs in each generated configuration.
This leads correspondingly to larger statistical error bars.
We note that for $x=5\%$, $C\left({\bf r}_{12}\right)$
along $[110]$ is significantly reduced with respect to the bulk model behavior
(see Table 1), while pairs along $[1\overline{1}0]$ become essentially uncorrelated.
This is an interesting feature of the epitaxial growth model:
The free surface induces important anisotropy between the $[110]$ and $[1\overline{1}0]$
directions, and the correlation function in the bulk is approximately the
average between those calculated for the two inequivalent directions in the epitaxial
growth model.
The experimental results of Ref. [8] give $C({\bf r_{12}})=0.25$
along $[110]$ for $x=5\%$ and $T=800$ K.
Our corresponding values for the bulk and epitaxial models, $C({\bf r_{12}})=0.83$
and 0.56 respectively, indicate that epitaxial growth is a decisive ingredient in the observed experimental behavior.
Although the agreement is still not quantitative, our results confirm that the observed 1nn anticorrelation has an elastic origin. For 2nn pairs with $x=20\%$, we do not observe a significant increase of the correlation function above the random value. However, the correlation enhancement in going from 5% to 20% seems to be present in the epitaxial growth model, although error bars are too large to allow a more precise statement. Therefore, elastic effects and a simple epitaxial growth model alone do not provide a clear and quantitative explanation for the cluster formation reported in Ref. [7], even though we detect a tendency for In-In attraction in the growth direction in many of the cases studied. Perhaps a more sophisticated growth model, including aspects such as surface reconstruction and step-mediated growth, is necessary to better evaluate such results.
Acknowledgments This work was partially supported by CNPq, CAPES, FAPERJ and FUJB (Brazil).
References
[1]
Zunger A. and Mahajan S., Handbook on Semiconductors,
vol 3, second edition, Elsevier, Amsterdam (1993).
[2]
Ferreira L. G., Mbaye A. A. and Zunger A., Phys. Rev. B 37, 10547 (1987).
[3]
Wei S.-H., Ferreira L. G. and Zunger A., Phys. Rev. B 41, 8240 (1989).
[4]
Mader K. A. and Zunger A., Phys. Rev. B 51, 10462 (1994).
[5]
Ipatova I. P., Malyshkin V. G., Maradudin A. A., Shchukin V. A. and Wallis R. F., Phys. Rev. B 57, 112968 (1998).
[6]
Vegard L., Z. Phys. 5, 17 (1921).
[7]
Zheng J. F., Walker J. D., Salmeron M. B. and Weber E. R., Phys. Rev. Lett. 72, 2414 (1994).
[8]
Chao K. J., Shih C. H., Gotthold D. W. and Streetman B. G., Phys. Rev. Lett. 79, 4822 (1997).
[9]
Pfister M., Johnson M. B., Alvarado S. F., Salemink H. W. M., Marti U., Martin D., Morier-Genoud F. and Reinhart F. K., Appl. Phys. Lett. 67, (1995).
[10]
Holonyak N., Laidig W. D., Camras M. D., Morkoç H., Drummond T. J., Hess K. and Burroughs M. S., J. Appl. Phys. 52, (1981).
[11]
Salemink H. W. M. and Albrektsen O., Phys. Rev. B 47, (1993).
[12]
The terms first and second-neighbors refer here to atoms in the same fcc sublattice.
[13]
Pashley M. D., Haberern K. W., Friday W., Woodall J. M. and Kirchner P. D., Phys. Rev. Lett. 60, 2176 (1988).
[14]
Keating P. N., Phys. Rev. 145, 637 (1966).
[15]
Martins J. L. and Zunger A., Phys. Rev.
B 30, 6217 (1984).
[16]
Schabel M. C. and Martins J. L.,
Phys. Rev. B 43, 11873 (1991). |
SymInfer: Inferring Program Invariants using Symbolic States
ThanhVu Nguyen
University of Nebraska-Lincoln, USA
tnguyen@cse.unl.edu
Matthew B. Dwyer
University of Nebraska-Lincoln, USA
dwyer@cse.unl.edu
Willem Visser
Stellenbosch University, South Africa
wvisser@cs.sun.ac.za
Abstract
We introduce a new technique for inferring program invariants that uses symbolic
states generated by symbolic execution. Symbolic states,
which consist of path conditions and constraints on local variables,
are a compact description of sets of concrete program states and they
can be used for both invariant inference and invariant verification.
Our technique uses a
counterexample-based algorithm that creates
concrete states from symbolic states, infers candidate
invariants from concrete states, and then verifies or refutes candidate
invariants using symbolic states. The refutation case produces concrete
counterexamples that prevent spurious results and allow the technique to
obtain more precise invariants. This process stops when the algorithm reaches
a stable set of invariants.
We present SymInfer, a tool that implements these ideas to automatically
generate invariants at arbitrary locations in a Java program. The tool
obtains symbolic states from Symbolic PathFinder and uses existing algorithms to
infer complex (potentially nonlinear) numerical invariants. Our preliminary
results show that SymInfer is effective in using symbolic states to generate
precise and useful invariants for proving program safety and analyzing program
runtime complexity.
We also show that SymInfer outperforms existing invariant generation systems.
I Introduction
Program invariants describe properties that always hold at a program location.
Examples of invariants include pre/postconditions, loop invariants, and
assertions.
Invariants are useful in many programming tasks, including documentation, testing, debugging, verification, code generation, and synthesis [1, 2, 3, 4].
Daikon [2] demonstrated that dynamic analysis is a practical approach to infer invariants from concrete program states that are observed when running the program on sample inputs.
Dynamic inference is typically efficient and supports expressive invariants, but can often produce spurious invariants that do not hold for all possible inputs.
Several invariant generation approaches (e.g., iDiscovery [5], PIE [6], ICE [7], NumInv [8]) use a hybrid approach that dynamically infers candidate invariants and then statically checks that they hold for all inputs.
For a spurious invariant, the checker produces counterexamples, which help the inference process avoid this invariant and obtain more accurate results.
This approach, called CounterExample Guided Invariant Generation (CEGIR), iterates the inference and checking processes until achieving stable results.
In this paper, we present a CEGIR technique that uses symbolic program states.
Our key insight is that symbolic states generated by a symbolic execution engine are
(a) compact encodings of large (potentially infinite) sets of concrete states,
(b) naturally diverse since they arise along different execution paths,
(c) explicit in encoding relationships between program variables,
(d) amenable to direct manipulation and optimization, such as combining sets of states into a single joint encoding, and
(e) reusable across many different reasoning tasks within CEGIR algorithms.
We define algorithms for symbolic CEGIR that can be instantiated
using different symbolic execution engines and present an
implementation SymInfer that uses Symbolic PathFinder [9] (SPF)—
a symbolic executor for Java.
SymInfer uses symbolic states in both the invariant inference and
verification processes.
For inference, SymInfer uses symbolic states to generate
concrete states to bootstrap a set of candidate invariants using
DIG [10, 11, 12]—which can infer expressive nonlinear invariants.
For verification, SymInfer formulates verification conditions from symbolic states
to confirm or refute an invariant, solves those using a SAT solver,
and produces counterexamples to refine the inference process.
Symbolic states allow SymInfer to overcome several limitations of
existing CEGIR approaches.
iDiscovery, ICE, and PIE are limited to computing relatively simple invariants and often do not consider complex programs with nonlinear arithmetic and properties such as $x=qy+r,x^{2}+y^{2}=z^{2}$.
These invariants appear in safety and security-critical software and can be leveraged to improve quality, e.g., to verify the absence of errors in Airbus avionic systems [13] and to analyze program runtime complexity to detect security threats [14, 15].
As our evaluation of SymInfer demonstrates in Sec. V,
iDiscovery, which uses Daikon for inference, does not support nonlinear properties, and both ICE and PIE time out frequently when nonlinear arithmetic is involved.
Recent work on NumInv [8] also
uses DIG to infer invariants, but it invokes KLEE [16] as a blackbox
verifier for candidate invariants. Since KLEE is unaware of the goals
of its verification it will attempt to explore the entire program state space
and must recompute that state space for each candidate invariant.
In contrast, SymInfer constructs a fragment of the state space
that yields a set of symbolic states sufficiently diverse
for invariant verification, and it reuses these symbolic states across all candidate invariants.
We evaluated SymInfer over 3 distinct benchmarks which consist of 92 programs.
The study shows that SymInfer:
(1) can generate complex nonlinear invariants required in 21/27 NLA benchmarks,
(2) is effective in finding nontrivial complexity bounds for 18/19 programs, with 4 of those improving on the best known bounds from the literature,
(3) improves on the state-of-the-art PIE tool in 41/46 programs in the HOLA benchmark, and
(4) outperforms NumInv across the benchmarks while computing similar or better invariants.
These results strongly suggest that symbolic states form a powerful basis for computing program invariants. They permit an approach that
blends the best features of dynamic inference techniques and purely symbolic
techniques, such as weakest-precondition reasoning.
The key contribution of our work lies in the identification of the value
of symbolic states in CEGIR, in developing an algorithmic framework for
adaptively computing a sufficient set of symbolic states for invariant
inference, and in demonstrating, through our evaluation of
SymInfer, that it improves on the best known techniques.
II Overview
We illustrate invariant inference using symbolic states on
the integer division algorithm in
Figure 1;
L marks the location at which we are interested
in computing invariants. This example states assumptions on the
values of the parameters, e.g., no division by zero.
The best invariant at L is $\mathtt{x2}\cdot\mathtt{y1}+\mathtt{y2}+\mathtt{y3}=\mathtt{x1}$.
This loop invariant encodes the precise semantics of
the loop computing integer division, i.e.,
the dividend $\mathtt{x1}$ equals the divisor $\mathtt{x2}$ times the quotient $\mathtt{y1}$ plus the remainder, which is the sum of the two temporary variables $\mathtt{y2}$ and $\mathtt{y3}$.
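The loop invariant can be checked dynamically on a plausible reconstruction of the program in Figure 1. The sketch below uses the classic Manna integer-division routine, whose loop-head states match the symbolic states quoted later (e.g., $\mathtt{y1}=0,\mathtt{y2}=2,\mathtt{y3}=X_{1}-2$); the actual code in the paper may differ.

```python
def mannadiv(x1, x2):
    """Manna's integer division: y1 becomes the quotient and, at exit,
    y2 + y3 the remainder.  Yields the concrete L-state at each loop head."""
    assert x1 >= 0 and x2 >= 1
    y1, y2, y3 = 0, 0, x1
    while True:
        yield (x1, x2, y1, y2, y3)          # location L
        if y3 == 0:
            return
        if y2 + 1 == x2:
            y1, y2, y3 = y1 + 1, 0, y3 - 1  # carry into the quotient
        else:
            y2, y3 = y2 + 1, y3 - 1

# The candidate invariant holds at every observed concrete L-state, while a
# spurious candidate such as x2 >= 2 is refuted as soon as x2 = 1 is sampled.
for a in range(0, 20):
    for b in range(1, 8):
        for (x1, x2, y1, y2, y3) in mannadiv(a, b):
            assert x2 * y1 + y2 + y3 == x1
print("x2*y1 + y2 + y3 == x1 at every L-state")
```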
Existing methods of dynamic invariant inference would
instrument the program at location L to record values
of the 5 local variables, and then, given
a set of input vectors, execute the program to record
a set of concrete states of the program to generate
candidate invariants. Since the focus here is on location
L, we refer to these as L-states and we
distinguish those that are observed by instrumentation
on a program run. It is these observed concrete L-states
that form the basis for all dynamic invariant inference techniques.
On a hand-selected set of eight inputs chosen to expose diverse
concrete L-states, running Daikon [2] on this example results in very
simple invariants, e.g., $\mathtt{y1}\geq 0$, $\mathtt{x2}\geq 2$.
These are clearly much weaker than the desired invariant for this
example. Moreover, the invariant on x2 is actually
spurious since clearly $1$ can be passed as the second input
which will reach L.
Applying the more powerful DIG [12] invariant generator
permits the identification of the desired invariant, but
it too yields the spurious $\mathtt{x2}\geq 2$ invariant.
Spurious invariants are a consequence of the diversity
and representativeness of the inputs used, and the L-states
that are observed. Leveraging symbolic states can help address
this weakness.
II-A Generating a symbolic state space
Figure 2 depicts a tree resulting from a depth-bounded
symbolic execution of the example. The gray region includes paths
limited to at most 5 branches; in this setting depth is
a semantic property and syntactic branches with only a single
infeasible outcome are not counted, e.g., the branches with
labels enclosed in gray boxes. We denote the unknown values of
program inputs using variables $X_{i}$ and return points with ret.
The states at location L are denoted $l_{i}$ in the figure.
An observed symbolic L-state, $l_{i}$, is defined by the conjunction of the path-condition,
i.e., the set of constraints on the tree-path to the state,
and $v_{i}$, a constraint that encodes the values of local variables
in scope at L. For example, the symbolic state $l_{2}$ is defined as
$(X_{2}=1\wedge X_{1}\not=0\wedge X_{1}\geq 0\wedge X_{2}\geq 1)\wedge(\mathtt{y1}=0\wedge\mathtt{y2}=2\wedge\mathtt{y3}=X_{1}-2)$.
As is typical in symbolic execution, it is possible to increase the
depth-bound and generate additional states, e.g., $l_{6}$, $l_{10}$,
$l_{12}$, and $l_{13}$ which all appear at a depth of 6 branches.
There are several properties of symbolic states that make them useful
as a basis for efficient inference of invariants:
Symbolic states are expressive
Dynamic analysis has to observe many concrete
L-states to obtain useful results. Many of those states may be equivalent
from a symbolic perspective.
A symbolic state, like $l_{2}$, encodes a potentially infinite set of
concrete states, e.g., $X_{1}>0\wedge X_{2}=1$. Invariant generation
algorithms can exploit this expressive power to account for
the generation and refutation of candidate invariants from a
huge set of concrete states by processing a single symbolic state.
Symbolic states are relational
Symbolic states encode the values of program variables as expressions
over free-variables capturing program inputs, i.e., $X_{i}$. This
permits relationships between variables to be gleaned from the state.
For example, state $l_{2}$ represents the fact that $\mathtt{y3}<\mathtt{x1}$ for a large set of inputs.
Symbolic states can be reused
Invariant generation has to infer or refute
candidate invariants relative to the set of observed concrete L-states.
This can grow in cost as the product of the number of candidates and
the number of observed states.
A disjunctive encoding of observed symbolic L-states, $\bigvee\limits_{i\in[1-13]}l_{i}$, can be constructed once and reused for each of
the candidate invariants, which can lead to significant performance improvements.
Symbolic states form a sufficiency test
The diversity of symbolic L-states found during
depth-bounded symbolic execution combined with the expressive power
of each of those states provides a rich basis for inferring strong
invariants. We conjecture that for many programs a sufficiently
rich set of observed L-states for invariant inference will be
found at relatively shallow depth.
For example, the invariants generated and not refuted by the
disjunction of L-states at depth 5, $L_{k=5}=\{l_{1},l_{2},l_{3},l_{4},l_{5},l_{7},l_{8},l_{9},l_{11}\}$, is the
same for those at depth 6, $\bigvee\limits_{i\in[1-13]}l_{i}$.
Consequently, we explore an adaptive and
incremental approach that increases depth only when new L-states
lead to changes in candidate invariants.
II-B SymInfer in action
SymInfer will invoke a symbolic executor to generate
a set of symbolic L-states at depth $k$, e.g., $k=5$ in our example for the gray region.
SymInfer then forms a small population of concrete L-states,
using symbolic L-states, to generate a set of candidate invariants
using DIG.
DIG produces three invariants at L for this example:
$\mathtt{y1}\cdot\mathtt{y2}\cdot\mathtt{y3}=0$,
$\mathtt{x2}\cdot\mathtt{y1}-\mathtt{x1}+\mathtt{y2}+\mathtt{y3}=0$, and
$\mathtt{x1}\cdot\mathtt{y3}-12\cdot\mathtt{y1}\cdot\mathtt{y3}-\mathtt{y2}\cdot\mathtt{y3}-\mathtt{y3}^{2}=0$.
SymInfer attempts to refute these invariants by using the full
expressive power of the observed L-states to determine if
all of the represented concrete states are consistent with the invariant.
It does this by calling a SAT solver to check implications such as
$\bigvee\limits_{l\in L_{k=5}}l\Rightarrow(\mathtt{y1}\cdot\mathtt{y2}\cdot\mathtt{y3}=0)$.
This refutes the first and third candidate invariants.
SymInfer then seeks additional L-states by running symbolic execution
with a deeper bound, $k=6$. While this process produces an additional 4
states to consider, none of those can refute the remaining
invariant candidate. Thus, SymInfer terminates and produces the desired
invariant.
III Dynamically Infer Numerical Invariants
III-A Numerical Invariants
We consider invariants describing relationships over numerical program variables such as $x\leq y,0\leq idx\leq|arr|-1,x+2y=100$.
These numerical invariants have been used to verify program correctness, detect defects, establish security properties, synthesize programs, recover formal specifications, and more [2, 17, 18, 19, 20, 13, 21, 22, 4].
A particularly useful class of numerical invariants involves nonlinear relations, e.g., $x\leq y^{2},x2\cdot y1+y2+y3=x1$.
While more complex, these arise naturally in many safety-critical applications [13, 23].
In addition to capturing program semantics (e.g., as shown in Section II), nonlinear invariants can characterize the computational complexity of a program.
Figure 3 shows a program, adapted from Figure 2 of [24], with
nontrivial runtime complexity.
At first, this program appears to take $O(NMP)$ due to the three nested loops.
But closer analysis shows a more precise bound $O(N+NM+P)$ because the innermost loop 3, which is updated each time loop 2 executes, changes the behavior of the outer loop 1.
When analyzing this program, SymInfer discovers a complex nonlinear invariant over the variables $P,M,N$ and $t$ (a temporary variable used to count the number of loop iterations) at location $L$ (program exit):
$$\displaystyle P^{2}Mt+PM^{2}t-PMNt-M^{2}Nt-PMt^{2}+MNt^{2}+PMt$$
$$\displaystyle-PNt-2MNt+Pt^{2}+Mt^{2}+Nt^{2}-t^{3}-Nt+t^{2}=0.$$
This nonlinear (degree 4) equality looks very different from the expected bound $N+NM+P$ or even $NMP$.
However, when solving this equation (finding the roots of $t$), we obtain three solutions that describe the exact bounds of this program:
$$\displaystyle t=0$$
when
$$\displaystyle N=0,$$
$$\displaystyle t=P+M+1$$
when
$$\displaystyle N\leq P,$$
$$\displaystyle t=N-M(P-N)$$
when
$$\displaystyle N>P.$$
These results give more precise bounds than the given bound $N+NM+P$ in [24].
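The three bounds can be verified directly: the degree-4 polynomial factors as $-t\,(t-(P+M+1))\,(t-(N-M(P-N)))$, so each claimed solution annihilates it for every choice of $P,M,N$. A quick numerical check (our own, not part of SymInfer):

```python
from itertools import product

def poly(P, M, N, t):
    """Left-hand side of the degree-4 invariant reported by SymInfer at L."""
    return (P*P*M*t + P*M*M*t - P*M*N*t - M*M*N*t - P*M*t*t + M*N*t*t
            + P*M*t - P*N*t - 2*M*N*t + P*t*t + M*t*t + N*t*t
            - t**3 - N*t + t*t)

# Each claimed root annihilates the polynomial for every integer (P, M, N).
for P, M, N in product(range(-4, 5), repeat=3):
    for t in (0, P + M + 1, N - M * (P - N)):
        assert poly(P, M, N, t) == 0
print("t = 0, t = P+M+1 and t = N-M(P-N) are exact roots")
```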
III-B Inferring Invariants using Concrete States
To infer numerical invariants, SymInfer uses the algorithms in DIG [12].
For numerical invariants, DIG finds (potentially nonlinear) equalities and inequalities.
Like other dynamic analysis tools, DIG generates candidate invariants that only hold over observed concrete L-states.
III-B1 Nonlinear Equalities
To generate nonlinear equality invariants, DIG uses terms to represent nonlinear information from the given variables up to a certain degree.
For example, the set of 10 terms $\{1,x,y,z,xy,xz,yz,x^{2},y^{2},z^{2}\}$ consists of all monomials up to degree $2$ over the variables $\{x,y,z\}$.
DIG then applies the steps shown in Figure 1 to generate equality invariants over these terms using concrete states observed at location L, and returns a set of possible equality relations among those terms.
First, we use the input terms to form an equation template $c_{1}t_{1}+c_{2}t_{2}+\dotsb+c_{n}t_{n}=0$, where $t_{i}$ are terms and $c_{i}$ are real-valued unknowns to be solved for (line 1).
Next, we instantiate the template with concrete states to obtain concrete equations (line 1).
Then we use a standard equation solver to solve these equations for the unknowns (line 1).
Finally we combine solutions for the unknowns (if found) with the template to obtain equality relations (line 1).
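The four steps above amount to computing a nullspace vector of the matrix of instantiated term values. A self-contained sketch using exact rational arithmetic (helper names are ours, not DIG's):

```python
from fractions import Fraction

def nullspace_vector(rows):
    """One nontrivial rational solution c of rows @ c = 0 via Gaussian
    elimination, or None if only the trivial solution exists."""
    m = [[Fraction(v) for v in r] for r in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue                      # free column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][col] for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    if not free:
        return None
    sol = [Fraction(0)] * ncols
    sol[free[0]] = Fraction(1)            # set one free unknown to 1
    for row, pc in zip(m, pivots):
        sol[pc] = -row[free[0]]
    return sol

# Terms {1, x, y}; concrete L-states all happen to satisfy x + y = 10.
states = [(3, 7), (5, 5), (0, 10), (-2, 12)]
rows = [[1, x, y] for (x, y) in states]   # instantiate the template
c = nullspace_vector(rows)
print(c)   # proportional to (-10, 1, 1), i.e. the relation x + y - 10 = 0
```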
III-B2 Octagonal Inequalities
DIG uses various algorithms to infer different forms of inequality relations.
We consider the octagonal relations of the form $c_{1}v_{1}+c_{2}v_{2}\leq k$ where $v_{1},v_{2}$ are variables and $c_{i}\in\{-1,0,1\}$ and $k$ is real-valued.
These relations represent linear inequalities among program variables, e.g., $x\leq y,-10\leq x-y\leq 20$.
To infer octagonal invariants from concrete states $\{(x_{1},y_{1}),\dots\}$, we compute the upper and lower bounds:
$$u_{1}=\max(x_{i}),\quad l_{1}=\min(x_{i}),\qquad u_{2}=\max(y_{i}),\quad l_{2}=\min(y_{i}),$$
$$u_{3}=\max(x_{i}-y_{i}),\quad l_{3}=\min(x_{i}-y_{i}),\qquad u_{4}=\max(x_{i}+y_{i}),\quad l_{4}=\min(x_{i}+y_{i}),$$
and form a set of 8 (octagonal) relations $\{u_{1}\geq x\geq l_{1},\;u_{2}\geq y\geq l_{2},\;u_{3}\geq x-y\geq l_{3},\;u_{4}\geq x+y\geq l_{4}\}$.
Although computing octagonal inequalities is very efficient (linear in the number of concrete states), the candidate results are often spurious because the true extreme values might not appear among the observed concrete states.
SymInfer deals with such spurious invariants using a CEGIR approach described in Section IV.
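A minimal sketch of the bound computation (the function name is ours, not DIG's):

```python
def octagonal_invariants(states):
    """Candidate octagonal bounds (l, u) per expression, from states (x, y)."""
    exprs = {"x": lambda x, y: x, "y": lambda x, y: y,
             "x-y": lambda x, y: x - y, "x+y": lambda x, y: x + y}
    return {name: (min(f(x, y) for x, y in states),
                   max(f(x, y) for x, y in states))
            for name, f in exprs.items()}

# Hypothetical concrete L-states from a few runs:
states = [(0, 3), (2, 5), (4, 4), (7, 10)]
print(octagonal_invariants(states))
# e.g. x-y in [-3, 0] yields the candidate x <= y; a later counterexample
# state violating it would refute the candidate, as in the CEGIR loop below.
```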
IV CEGIR Algorithms using Symbolic States
The behavior of a program at a location can be precisely represented by the set of all possible values of the variables in scope of that location.
We refer to these values as the concrete states of the program.
Figure 1 shows several concrete states observed at location $L$ when running the program on inputs $(x1=15,x2=2)$ and $(x1=4,x2=1)$.
The set of all concrete states is the most precise representation of the relationship between variables at a program location, but it is potentially infinite and thus is difficult to use or analyze.
In contrast, invariants capture program behaviors in a much more compact way.
For the program in Figure 1 invariants at location $L$ include: $0\leq x1,1\leq x2,0\leq y2+y3,x2\cdot y1+y2+y3=x1,\dots$
The most useful at $L$ is $x2\cdot y1+y2+y3=x1$, which describes the semantics of integer division.
The inequality $0\leq y2+y3$ is also useful because it asserts that the remainder is non-negative.
Dynamic invariant generation techniques, like Daikon and DIG, use concrete program
states as inputs to compute useful invariants.
We propose to compute invariants from the symbolic states of a program.
Conceptually, symbolic states serve as an intermediary representation
between a set of concrete program states and
an invariant that might be inferred from those concrete states.
We assume a fixed and known set of variables in scope at
a given location in a program. Moreover, we assume variables
are indexed and that for an index $i$, $var(i)$ is a
canonical name for that variable.
Invariants will be inferred over these named variables.
This is straightforward for locals and parameters, but permits richer
naming schemes for other memory locations.
We write a set of appropriately typed
values for those variables
as $\vec{v}\equiv\langle v_{1},v_{2},\ldots,v_{n}\rangle$,
where the indexing corresponds to that of variables.
Undefined variables
have a $\bot$ value and the $i$th value is written $\vec{v}[i]$.
A concrete state is
$(l,\vec{v})$ where control is
at location $l$ and program variables have the values given by $\vec{v}$.
Let $I$ be a set of free variables that denote the undefined input
values of a program.
A symbolic value is an expression written using constants,
elements of $I$, and the operators available for the value’s type.
We write a sequence of symbolic values as
$\vec{e}\equiv\langle e_{1},e_{2},\ldots,e_{n}\rangle$.
Definition 1.
A symbolic state is
$(l,\vec{e},c)$ where control is at location $l$, $c$ is
a logical formula written over $I$, and
each program variable takes on the concrete values that are
consistent with $c$ and its symbolic value.
The semantics of a symbolic state is:
$$\llbracket(l,\vec{e},c)\rrbracket=\{(l,\vec{v})\mid SAT((\bigwedge\limits_{i}\vec{v}[i]=\vec{e}[i])\wedge c)\}$$
The role of $c$ in a symbolic state is to define the constraints
between variables, for example, that may be established on execution
paths reaching $l$—a path condition.
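To make the semantics concrete, here is a small self-contained sketch (all names are hypothetical, and brute-force search over a small integer domain stands in for the SAT query): a concrete state $(l,\vec{v})$ belongs to a symbolic state when some input assignment satisfies the path condition and maps each symbolic value to the corresponding concrete value.

```python
# Toy check of the semantics of a symbolic state (l, e, c): a concrete
# state (l, v) is in its denotation iff some assignment to the inputs I
# satisfies c and makes each symbolic value e_i equal v_i.
from itertools import product

def in_symbolic_state(vec_v, sym_vals, path_cond, inputs, domain):
    for vals in product(domain, repeat=len(inputs)):
        env = dict(zip(inputs, vals))
        if path_cond(env) and all(e(env) == v for e, v in zip(sym_vals, vec_v)):
            return True
    return False

# Symbolic values for variables (x, y): x = a+b, y = a-b; path condition a > b.
sym_vals = (lambda env: env["a"] + env["b"], lambda env: env["a"] - env["b"])
pc = lambda env: env["a"] > env["b"]

print(in_symbolic_state((5, 1), sym_vals, pc, ["a", "b"], range(-5, 6)))   # True (a=3, b=2)
print(in_symbolic_state((3, -2), sym_vals, pc, ["a", "b"], range(-5, 6)))  # False: c forces y = a-b > 0
```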
IV-A Using Symbolic States
Symbolic states can help invariant generation in many ways.
We describe two concrete techniques using symbolic states to generate diverse concrete states and to verify candidate invariants.
IV-A1 Bootstrapping DIG with Concrete States
Our method generates candidate invariants using existing
state-of-the-art concrete-state-based invariant inference techniques like DIG.
In this application we need only use a small number of concrete states
to bootstrap the algorithms to generate a diverse set of candidate
invariants since symbolic states will be used to refute spurious invariants.
In prior work [10, 12], fuzzing was used to generate inputs; that
approach could be used here as well, but we can also exploit symbolic
states.
Figure 2 shows how we use symbolic states to
generate a diverse set of concrete states—at least one for each
symbolic state.
It first generates the set of symbolic L-states reachable at depth
less than or equal to $d$ (line 2); note that these states can be cached
and reused for a given $P$ and $L$.
The loop on line 2 considers each such state, checks the satisfiability of the
state's path condition, $c$, and then extracts the model from the solver.
We encode the model as a sequence, $\vec{i}$, indexed by the names of
the free input variables. The symbolic state is then evaluated by
binding the concrete values in the model to the input variables.
This produces a concrete state which is accumulated.
A conjunction of constraints equating the values of the model,
$\vec{i}$, and the names of inputs, $I$, is added to the
blocking clause for future state generation.
The loop on line 2 generates additional concrete states up to
the requested number, $n$. This process will randomly choose
a symbolic state and then call the SAT solver to generate a
solution that has not already been computed; here $\vec{i}$
is converted to a conjunction of equality constraints between
input variables and values from a model.
When a solution is found, we use the same processing as in lines 2-2
to create a new concrete state.
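The bootstrap just described can be sketched as follows (a toy rendition of Figure 2 with hypothetical names: brute-force enumeration over a small input domain plays the role of the SAT solver, and a set of already-used inputs plays the role of blocking clauses):

```python
# Toy version of the bootstrap: for each symbolic L-state, find one
# input assignment satisfying its path condition, block it, and evaluate
# the symbolic values to produce a concrete state.
from itertools import product

def concrete_states(sym_states, inputs, domain):
    seen, out = set(), []
    for exprs, pc in sym_states:
        for vals in product(domain, repeat=len(inputs)):
            env = dict(zip(inputs, vals))
            if pc(env) and vals not in seen:
                seen.add(vals)                      # blocking-clause analogue
                out.append(tuple(e(env) for e in exprs))
                break
    return out

# Hypothetical symbolic states for variables (x, y) over one input a:
sym_states = [
    ((lambda e: e["a"], lambda e: 0),       lambda e: e["a"] > 0),   # path a > 0
    ((lambda e: -e["a"], lambda e: e["a"]), lambda e: e["a"] <= 0),  # path a <= 0
]
print(concrete_states(sym_states, ["a"], range(-2, 3)))  # [(1, 0), (2, -2)]
```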
IV-A2 Symbolic States as a “Verifier”
Figure 3 shows how symbolic states are used to verify, or refute,
a property.
The algorithm obtains new symbolic states when it is determined that they
increase the accuracy of the verification.
Symbolic states are obtained from a symbolic execution engine.
There are potentially an infinite number of symbolic states at a location,
but most existing symbolic execution tools have the ability to perform
a depth-limited search.
We wrap the symbolic execution engine to just return the symbolic L-states
encountered during search of a given depth (getStatesAt).
The number of symbolic states varies with depth.
A low depth means few states. Few states will tend to encode a small
set of concrete L-states, which limits verification and refutation
power, but they will also tend to produce a smaller verification
condition that is faster to solve. To address this cost-effectiveness tradeoff,
rather than try to choose an optimal depth, our algorithm
computes the lowest depth that yields symbolic states that change verification outcomes.
In essence, the algorithm
adaptively computes a good cost-effectiveness tradeoff for a given
program, location of interest, and invariant.
The algorithm iterates with each iteration considering a different depth, $k$.
The body of each iteration (lines 3 – 3) works as follows.
It extracts a set of symbolic states for the current depth using
symbolic execution (line 3); note
this can be done incrementally to avoid re-exploring the program’s state space
using techniques like [25].
It then formulates a verification condition out of three components.
(1) For each symbolic state,
it constructs the conjunction of its path condition, $c$, with constraints
encoding equality constraints between variables and their symbolic values, $\vec{e}$;
these per-state conjunctions are then disjoined.
This expresses the set of
concrete L-states corresponding to all of the symbolic states.
(2) The negation of the disjunction of the set of states that
are to be blocked is formed.
These components are conjoined, which serves to eliminate the concrete
L-states that are to be blocked.
(3) If the resulting formula implies a candidate $p$ then that candidate
is consistent with the set of symbolic states.
We use a SAT solver to check the negation of this implication.
The solver can return sat which indicates that the property is
not an invariant (lines 3 – 3).
The solver is also queried for a model which is a sample
state that is inconsistent with the proposed invariant. This counterexample
state is saved so that the inference algorithm can search for
invariants that are consistent with it.
The solver can also return unsat indicating the property is a true invariant;
at least as far as the algorithm can determine given the symbolic states at the
current depth.
Finally, the solver can also return unknown, indicating it cannot determine whether the given property is true or false.
For the latter two cases, we increment the depth and explore a larger set
of symbolic states generated from a deeper symbolic execution.
Lines 3 – 3 work together to determine when increasing the depth
does not influence the verification. In essence, they check to see
whether the same result is computed at adjacent depths and if so, they
revert to the shallower depth and return.
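The adaptive-depth loop can be modeled abstractly as follows (a sketch under simplifying assumptions, not the actual verifier: each symbolic state is represented directly by the concrete L-states it denotes, and checking the candidate on those states stands in for the SMT query):

```python
# Toy model of the depth-adaptive check: each depth yields a set of
# concrete L-states (standing in for symbolic states plus SAT solving).
# We refute p if any non-blocked state violates it; otherwise we deepen
# until two adjacent depths agree, mirroring the stopping rule.

def check_at_depth(states, blocked, p):
    for s in states:
        if s not in blocked and not p(s):
            return ("refuted", s)      # counterexample state
    return ("holds", None)

def adaptive_verify(states_at, p, blocked=frozenset(), max_depth=10):
    prev = None
    for k in range(1, max_depth + 1):
        result = check_at_depth(states_at(k), blocked, p)
        if result[0] == "refuted":
            return result
        if prev == result:             # same outcome at adjacent depths: stop
            return result
        prev = result
    return prev

# Hypothetical program: at depth k the reachable L-states are {(i, i*i) | i <= k}.
states_at = lambda k: [(i, i * i) for i in range(k + 1)]
print(adaptive_verify(states_at, lambda s: s[1] == s[0] ** 2))  # ('holds', None)
print(adaptive_verify(states_at, lambda s: s[1] <= 1))          # ('refuted', (2, 4))
```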
IV-B A CEGIR approach using symbolic states
CounterExample Guided Invariant Generation (CEGIR) techniques consist of a guessing component that infers candidate invariants
and a checking component that verifies the candidate solutions.
If the candidate is invalid, the checker produces counterexamples, i.e., concrete states that are not consistent with the candidate invariant.
The guessing process incorporates the generated counterexamples so that any
new invariants account for them.
Alternation of guessing and checking repeats until no candidates can be disproved.
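A generic guess-and-check alternation of this kind, stripped of all SymInfer specifics (the guesser and checker below are hypothetical toy oracles), looks like:

```python
# Minimal CEGIR alternation: guess candidates from states, check each
# against an oracle; counterexamples feed the next guessing round.
def cegir(guess, check, states):
    states = list(states)
    while True:
        cands = guess(states)
        cexs = [cx for cx in (check(p) for p in cands) if cx is not None]
        if not cexs:
            return cands           # nothing refuted: candidates accepted
        states += cexs             # counterexamples refine the next guess

true_vals = range(10)              # hypothetical reachable values of x at L
guess = lambda sts: [("x<=", max(sts))]                   # bound from states seen so far
check = lambda p: next((v for v in true_vals if not v <= p[1]), None)  # checker oracle
print(cegir(guess, check, [3]))    # [('x<=', 9)]
```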
SymInfer integrates symbolic traces into two CEGIR algorithms to compute candidate invariants.
These algorithms use the inference techniques described in Section III for equality and inequality invariants.
IV-B1 Nonlinear Equalities
Figure 4 defines our CEGIR algorithm
for computing non-linear equality invariants.
It consists of two phases: an initial invariant candidate generation
phase and then an iterative invariant refutation and refinement phase.
Lines 4 – 4 define the initial generation phase.
As described in Section III-B1, we first create terms to represent nonlinear polynomials (line 4).
Because solving for $n$ unknowns requires at least $n$ unique equations, we need to generate a sufficient set of concrete L-states (line 4).
This can either be realized through fuzzing an instrumented
version of the program that records concrete L-states or,
as described in Figure 2, one can use symbolic
L-states to generate them.
The initial candidate set of invariants is iteratively refined
on lines 4 – 4. The algorithm then refutes or confirms them using symbolic states as described in
Figure 3.
Any property that is proven to hold is recorded in $invs$ and
counterexample states, $cexs$, are accumulated across the set of properties.
Generated counterexample states are also blocked from contributing to the verification process.
If no property generated counterexample states, then the
algorithm terminates returning the verified invariants.
The counterexamples are added to the
set of states that are used to infer new candidate
invariants; this ensures that new invariants will be consistent
with the counterexample states (line 4). These new results may include some
already proven invariants, so we remove those from the set
of candidates considered in the next round of refinement.
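The equality-inference step at the heart of this algorithm reduces to linear algebra: fixing a set of monomial terms, each concrete state yields one linear equation over the unknown coefficients, and any nontrivial nullspace vector of the resulting system is a candidate equality. A small self-contained sketch over rationals (illustrative only; SymInfer delegates the actual solving to SAGE):

```python
# Infer a candidate polynomial equality from concrete states by finding
# a nonzero rational solution c of A c = 0, where each row of A lists
# the monomial-term values at one concrete state.
from fractions import Fraction

def nullspace(rows):
    """Return one nonzero solution of A c = 0 (or None if only trivial)."""
    m = [[Fraction(v) for v in r] for r in rows]
    n = len(m[0])
    pivots, r = {}, 0
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][col] for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        pivots[col] = r
        r += 1
    free = [c for c in range(n) if c not in pivots]
    if not free:
        return None
    f, c = free[0], [Fraction(0)] * n         # set one free variable to 1
    c[f] = Fraction(1)
    for col, row in pivots.items():
        c[col] = -m[row][f]
    return c

# Concrete states of a hypothetical program where y = x*x holds at L.
states = [(0, 0), (1, 1), (2, 4), (3, 9), (5, 25)]
terms = lambda x, y: [1, x, y, x * x]         # monomial terms: 1, x, y, x^2
coeffs = nullspace([terms(x, y) for x, y in states])
print(coeffs)  # [0, 0, -1, 1]: the relation -y + x^2 = 0, i.e. y = x^2
```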
IV-B2 Octagonal Inequalities
Our next CEGIR algorithm uses a divide and conquer approach to compute octagonal inequalities.
Given a term $t$, and an interval range $[minV,maxV]$, we compute the smallest integral upperbound $k$ of $t$ by repeatedly dividing the interval into halves that could contain $k$.
The use of an interval range $[minV,maxV]$ allows us to exclude terms whose ranges are too large (or whose bounds do not exist).
For example, if we check $t>maxV$ and it holds then we will not compute the bound of $t$ (which is strictly larger than $maxV$).
We start by checking a guess that $t\leq midV$, where $midV=\lceil\frac{maxV+minV}{2}\rceil$.
These checks are performed by formulating a verification condition
from symbolic states in a manner that is analogous to Figure 4.
If this holds, then $k$ is at most $midV$ and we tighten the search to a new interval $[minV,midV]$.
Otherwise, we obtain a counterexample with $t$ having some value $c$, where $c>midV$.
We then tighten the search to a new interval $[c,maxV]$.
In either case, we repeat the guess for $k$ using an interval that is half the size of the previous one.
The search stops when $minV$ and $maxV$ are the same or their difference is one (in which case we return the smaller value if $t$ is less than or equal to both).
To find octagonal invariants over a set of variables, e.g., $\{x,y,z\}$, we apply this method to find upperbounds of the terms $\{x,-x,y,-y,\dots,y+z,-y-z\}$.
Note that we obtain both lower and upperbounds using the same algorithm, because an upperbound $k$ of $-t$ gives the lowerbound $-k$ of $t$: all computations are simply reversed for $-t$.
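The bound search can be sketched as follows (a simplified version: on a failed check it tightens to $midV$ rather than the counterexample value $c$, and the oracle `always_le` is a hypothetical stand-in for the symbolic-state verification condition):

```python
# Divide-and-conquer search for the smallest integer k in [minV, maxV]
# such that t <= k always holds, given an oracle always_le(v) that
# reports whether t <= v is valid.
def smallest_upper_bound(always_le, minV, maxV):
    if always_le(minV):
        return minV                # cannot tighten below minV
    if not always_le(maxV):
        return None                # bound exceeds maxV: term range too large
    lo, hi = minV, maxV            # invariant: not always_le(lo) and always_le(hi)
    while hi - lo > 1:
        mid = -(-(lo + hi) // 2)   # ceil((lo + hi) / 2), as in the text
        if always_le(mid):
            hi = mid               # the bound k is at most mid
        else:
            lo = mid               # counterexample: k is above mid
    return hi

vals = {-3, 0, 7}                  # toy: the possible values of the term t
le = lambda v: all(t <= v for t in vals)
print(smallest_upper_bound(le, -10, 10))  # 7
print(smallest_upper_bound(le, -10, 5))   # None
```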
SymInfer reuses the symbolic states from the inference of equalities to
formulate verification conditions for inequalities. This is another
example of how reuse speeds up inference.
V Implementation and Evaluation
We implemented SymInfer in Python/SAGE [26].
The tool takes as input a Java program with marked target locations and generates invariants at those locations.
We use Symbolic PathFinder (SPF) [9] to extract symbolic states for Java programs and the Z3 SMT Solver [20] to check and produce models representing counterexamples.
We also use Z3 to check and remove redundant invariants.
SymInfer currently supports equality and inequality relations over numerical variables.
For (nonlinear) equalities, SymInfer uses techniques from DIG to limit the number
of generated terms.
This allows us, for example, to infer equalities up to degree 5 for a program with 4 variables and up to degree 2 for a program with 12 variables.
For octagonal invariants, we consider upper and lower bounds within the range $[-10,10]$; we rarely observe inequalities with large bounds.
SymInfer can either choose random values in a range, $[-300,300]$ by default,
for bootstrapping, or use the algorithm in Figure 2.
All these parameters can be changed by SymInfer’s user; we chose these values based on our experience.
V-A Research Questions
To evaluate SymInfer, we consider three research questions:
1.
Is SymInfer effective in generating nonlinear invariants describing complex program semantics and correctness?
2.
Can SymInfer generate expressive invariants that capture program runtime complexity?
3.
How does SymInfer perform relative to PIE, a state-of-the-art invariant generation technique?
To investigate these questions, we used 3 benchmark suites consisting of 92 Java programs (described in detail in the following sections).
These programs come with known or documented invariants.
Our objective is to compare SymInfer’s inferred invariants against these documented results.
To compare invariants, we used Z3 to check if the inferred results imply the documented ones.
We use a script to run SymInfer 11 times on each program and report the median results.
The script automatically terminates any run exceeding 5 minutes.
The experiments reported here were performed on a 10-core Intel i7 CPU 3.0GHZ Linux system with 32 GB of RAM.
V-B Analyzing Program Correctness
In this experiment, we use the NLA testsuite [12] which consists of 27 programs implementing mathematical functions such as intdiv, gcd, lcm, power.
Although these programs are relatively small (under 50 LoCs) they contain nontrivial structures such as nested loops and nonlinear invariant properties.
To the best of our knowledge, NLA contains the largest number of programs containing nonlinear arithmetic.
These programs have also been used to evaluate other numerical invariant systems [27, 12, 28].
These NLA programs come with known program invariants at various program locations (e.g., mostly nonlinear equalities for loop invariants and postconditions).
For this experiment, we evaluate SymInfer by finding invariants at these locations and comparing them with known invariants.
Results
Table I shows the results of SymInfer for the 27 NLA programs.
Column Locs shows the number of locations where we obtain invariants.
Column V,T,D shows the number of variables, terms, and highest degree from these invariants.
Column Invs shows the number of discovered equality and inequality invariants.
Column Time shows the total time in seconds.
Column Correct shows if the obtained results match or imply the known invariants.
For 21/27 programs, SymInfer generates correct invariants that match or imply the known results.
In most cases, the discovered invariants match the known ones exactly.
Occasionally, we obtain results that are equivalent or imply the known results.
For example, for sqrt, for some runs we obtained the documented equalities $t=2a+1,s=(a+1)^{2}$, and for other runs we obtain $t=2a+1,t^{2}-4s+2t=-1$, which are equivalent to $s=(a+1)^{2}$ by replacing $t$ with $2a+1$.
We also obtain undocumented invariants, e.g., SymInfer generates the postconditions $x=qy+r,0\leq r,r\leq x,r\leq y-1$ for cohendiv, which computes the integer division result of two integers $q=x\div y$.
The first invariant is known and describes the precise semantics of integer division: the dividend $x$ is the divisor $y$ times the quotient $q$ plus the remainder $r$.
The other obtained inequalities were undocumented.
For example, $r\geq 0$ asserts that the remainder $r$ is non-negative and $r\leq x,r\leq y-1$ state that $r$ is at most the dividend $x$, but is strictly less than the divisor $y$.
Our experience shows that SymInfer is capable of generating many invariants that are unexpected yet correct and useful.
SymInfer did not find invariants for 6/27 programs (marked with “-” in Table I).
For egcd2 and egcd3, the equation solver used in SAGE takes an exceedingly long time for more than half of the runs.
For geo3, we obtained the documented invariants and others, but Z3 stops responding when checking these results.
freire1 and freire2 contain floating point arithmetic, which is currently not supported by SymInfer.
SPF failed to produce symbolic states for knuth for any depth we tried.
This program invokes a library function Math.sqrt and SPF does not know the semantics of this function and thus fails to provide useful symbolic information.
For egcd2, egcd3, and geo3, SymInfer times out
after 5 minutes, and for freire1, freire2, and knuth, it exits upon encountering the unsupported feature.
V-C Analyzing Computational Complexity
As shown in Section III, nonlinear invariants can represent precise program runtime complexity.
More specifically, we compute the roots of nonlinear relationships to obtain disjunctive information (e.g., $x^{2}=4\Rightarrow(x=2\vee x=-2)$), which captures different and precise complexity bounds of programs.
To further evaluate SymInfer on discovering program complexity, we collect 19 programs, adapted from existing static analysis techniques specifically designed to find runtime complexity [24, 29, 30] (we remove nondeterministic features in these programs because SymInfer assumes deterministic behaviors).
These programs, shown in Table II, are small, but contain nontrivial structures and represent examples from Microsoft’s production code [24].
For this experiment, we instrument each program with a fresh variable $t$ representing the number of loop iterations and generate postconditions over $t$ and input variables (e.g., see Figure 3).
Results
Table II shows the median results of SymInfer from 11 runs.
Column Bound contains a ✓ if we can generate invariants matching the bounds reported in the respective work, and ✓✓ if the discovered invariants represent more precise bounds than the reported ones. A ✓${}^{*}$ indicates that the program was modified slightly to help our analysis, as described below.
For 18/19 programs, SymInfer discovered runtime complexity characterizations
that match or improve on reported results.
For cav09_fig1a, we found the invariant $mt-t^{2}-100m+200t=10000$, which indicates the correct bound $t=m+100\vee t=100$.
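The disjunctive bound follows because the inferred equality factors: $mt-t^{2}-100m+200t-10000=(t-100)(m-t+100)$, so the relation holds exactly when $t=100$ or $t=m+100$. A quick numeric confirmation of the factoring:

```python
# Check that m*t - t^2 - 100*m + 200*t - 10000 == (t - 100)*(m - t + 100)
# over a grid of sample values, confirming the disjunctive bound
# t = 100 or t = m + 100 for cav09_fig1a.
lhs = lambda m, t: m * t - t * t - 100 * m + 200 * t - 10000
fac = lambda m, t: (t - 100) * (m - t + 100)
assert all(lhs(m, t) == fac(m, t) for m in range(-50, 51) for t in range(-50, 51))
print(lhs(7, 100), lhs(7, 107))  # both 0: the equality holds on each disjunct
```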
For these complexity analyses, we also see the important role of combining both inequality and equality relations to produce informative bounds.
For popl09_fig3_4, SymInfer inferred nonlinear equality showing that $t=n\vee t=m$ and inequalities asserting that $t\geq n\wedge t\geq m$, together indicating that $t=\mathsf{max}(n,m)$, which is the correct bound for this program.
In four programs, SymInfer obtains better bounds than reported results.
The pldi_fig2 program shown in Figure 3 is a concrete example where the three obtained bounds are strictly less than the given bound.
For several programs we needed some manual instrumentation or inspections to help the analysis.
For popl09_fig4_1 we added the precondition asserting the input $m$ is nonnegative.
For pldi09_fig4_5, we obtained nonlinear results giving three bounds $t=n-m$, $t=m$, and $t=0$, which establish the reported upperbound $t=max(0,n-m,m)$.
For pldi09_fig4_4, we obtained invariants that are insufficient to show the reported bound.
However, if we create a new term representing the quotient of an integer division of two other variables in the program, and obtain invariants over that term, we obtain more precise bounds than those reported.
V-D Comparing to PIE
We compare SymInfer to the recent CEGIR-based invariant tool PIE [6].
PIE aims to verify annotated relations by generating invariants based
on the given assertions.
In contrast, SymInfer generates invariants at given locations without given assertions or postconditions.
We use the HOLA benchmarks [31], adapted by the PIE developers.
These programs are annotated with various assertions representing loop invariants and postconditions.
This benchmark consists of 46 small programs, but they contain nontrivial structures including nested loops and multiple sequential loops.
These programs, shown in Table III, have been used as benchmarks for other static analysis techniques [32, 33, 34].
For this experiment, we first run PIE and record its run time on proving the annotated assertions.
Next, we removed the assertions in the programs and asked SymInfer to generate invariants at those locations.
Our objective is to compare SymInfer’s discovered invariants with the annotated assertions.
Because these HOLA programs only consist of assertions having linear relations, we ask SymInfer to only generate invariants up to degree 2 (quadratic relations can represent linear relations, e.g., $x^{2}=4\Rightarrow x=2\vee x=-2$).
Results
Table III shows these obtained results from PIE and SymInfer.
Column PIE time shows the time, in seconds, for PIE to run each program.
Column SymInfer time shows the time, in seconds, for SymInfer to generate invariants for each program (the median of 11 runs).
The “-” symbol indicates when PIE fails to prove the given assertions, e.g., because it generates invariants that are too weak.
Column Correct shows whether SymInfer’s generated invariants match or imply the annotated assertions and therefore prove these assertions.
For this experiment we manually check the result invariants and use Z3 to compare them to the given assertions.
A ✓ indicates that the generated invariants match or imply the assertions.
A $\circ$ indicates that the generated invariants are not sufficiently strong to prove the assertions.
For 40/46 programs, SymInfer's discovered invariants are sufficiently strong to prove the assertions.
In most of these cases we obtained correct invariants stronger than the given assertions.
For example, for H23, SymInfer inferred the invariants $i=n,n^{2}-n-2s=0,-i\leq n$, which imply the postcondition $s\geq 0$.
For H29, we obtained the invariants $b+1=c,a+1=d,a+b\leq 2,2\leq a$, which imply the given postcondition $a+c=b+d$.
Surprisingly, SymInfer also found invariants that are precise enough to establish conditions in forms that SymInfer does not directly support.
For example, H8 contains a postcondition $x<4\ \vee\ y>2$, which has a disjunctive form of strict inequalities.
SymInfer did not produce this invariant, but instead produced a correct and stronger relation $x\leq y$, which implies this condition.
Many HOLA programs contain disjunctive (or conditional) properties, e.g., if(c) assert (p); where the property $p$ only holds when the condition $c$ holds (written $c\Rightarrow p$).
For example, for H18, we obtained $fj=100f$, which implies the conditional assertion $f\neq 0\Rightarrow j=100$.
For H37, PIE failed to prove the postcondition if (n > 0) assert(0 <= m && m < n); which involves both conditional assertions and strict inequalities.
For this program, SymInfer inferred 2 equations and 3 inequalities ($m^{2}=nx-m-x$, $mn=x^{2}-x$, $-m\leq x$, $x\leq m+1$, $n\leq x$), which together establish the postcondition.
For 6/46 programs, SymInfer either failed to produce invariants (2 programs marked with “-”) or discovered invariants that are not strong enough to prove the given assertions (4 programs marked with $\circ$).
For both H24 and H27, Z3 stops responding when checking the inferred results, and the runs were terminated after 5 minutes.
For H01, we found the invariant $x=y$, which is not sufficient to establish the postcondition $y\leq 1$.
For H27, SymInfer found no relation involving the variable $c$ to prove the assertion $c\geq 0$.
Summary
These preliminary results show that SymInfer generates expressive, useful, and interesting invariants that describe program semantics and match documented invariants (21/27 NLA programs), discovers difficult invariants capturing precise and informative complexity bounds of programs (18/19 programs), and is competitive with PIE (40/46 HOLA programs).
We also note that PIE, ICE, and iDiscovery (other CEGIR-based tools reviewed in Section VI) cannot find any of the high-degree nonlinear invariants found by SymInfer.
V-E Threats to Validity
SymInfer’s run time is dominated by computing invariants, more specifically solving hundreds of equations for hundreds of unknowns.
The run time of DIG can be improved significantly by limiting the search to invariants of a given maximum degree rather than using the default setting.
Verifying candidate invariants, i.e., checking implication using the Z3 solver, is much faster than DIG, even when multiple checks are performed at different depths.
This shows an advantage of reusing symbolic states when checking new invariants.
SymInfer encodes all symbolic states into the Z3 verification condition.
This results in complex formulas with large disjunctions that can make Z3 timeout.
Moreover, depending on the program, SPF might not be able to
generate all possible symbolic states.
In such cases, SymInfer cannot refute candidate invariants and thus may produce unsound results.
However, our experience shows that SPF, by its nature as a symbolic executor, turns out to be very effective at producing sufficient symbolic states to remove invalid candidates.
Finally, we reuse existing analysis tools, such as DIG and SPF, which provides
a degree of assurance in the correctness of SymInfer, but our primary
means of assuring internal validity was performing both manual and automated (SMT)
checking of the invariants computed for all subject programs.
While our evaluation uses a variety of programs from different benchmarks,
these programs are small and thus do not represent large software projects.
Their use does promote comparative evaluation and reproducibility of our
results.
We believe using symbolic states will allow for the generation of
useful and complex invariants for larger software systems, in part because
of the rapid advances in symbolic execution and SMT solving technologies
and SymInfer leverages those advances.
VI Related Work and Future Work
Daikon [2] is a well-known dynamic tool that infers candidate invariants
under various templates over concrete program states.
The tool comes with a large set of templates which it tests against observed
concrete states, removing those that fail and returning the remaining ones as candidate invariants.
DIG [12] is similar to Daikon, but focuses on numerical invariants and therefore can compute more expressive numerical relations than those supported by Daikon’s templates.
PIE [6] and ICE [7] use CEGIR to infer invariants to prove a given specification.
To prove a property, PIE iteratively infers and refines invariants by constructing necessary predicates to separate (good) states satisfying the property from (bad) states violating it.
ICE uses a decision learning algorithm to guess inductive invariants over predicates separating good and bad states.
The checker produces good, bad, and “implication” counterexamples to help
learn more precise invariants.
For efficiency, they focus on octagonal predicates and only search for invariants that are boolean combinations of octagonal relations.
In general, these techniques focus on invariants that are necessary to prove a given
specification and, thus, the quality of the invariants depends on the target specification.
NumInv [8] is a recent CEGIR tool that discovers invariants for C programs.
The tool also uses DIG’s algorithms to infer equality and inequality relations.
For verification it instruments invariants into the program and runs the KLEE test-input generation tool [16].
KLEE does use a symbolic state representation internally, but this is inaccessible to
NumInv. Moreover, KLEE is unaware of its use in this context and it recomputes the
symbolic state space completely for each verification check, which is inefficient.
For the experiments in Section V, SymInfer is comparable to NumInv in
the quality of invariants produced, but SymInfer runs faster in spite of the
fact that KLEE’s symbolic execution of C programs is known to be faster than SPF’s
performance on Java programs. We credit this to the benefits of using symbolic states.
Similar to SymInfer, the CEGIR-based iDiscovery [5] tool uses SPF to check invariants.
However, iDiscovery does not exploit the internal symbolic state representation of symbolic execution but instead runs SPF as a blackbox to check program assertions encoding candidate invariants.
To speed up symbolic execution, iDiscovery applies several optimizations such as using the Green solver [35] to avoid recomputing the symbolic state space for each check.
In contrast, SymInfer precomputes the full disjunctive SMT formula encoding the paths to the location of interest once and reuses that formula to check candidate invariants.
For dynamic inference, iDiscovery uses Daikon and thus has limited support for numerical invariants.
For example, iDiscovery cannot produce the required nonlinear invariants or any relevant inequalities for the programs in Figures 1 and 3.
Note that for programs involving non-numerical variables, Daikon/iDiscovery might be able to infer more invariants than SymInfer.
SymInfer is unlike any of the above in its reliance on symbolic states to
bootstrap, verify and iteratively refine the invariant generation process.
There are clear opportunities for significantly improving the performance of SymInfer
and targeting different languages, such as C through the use of other symbolic
executors. For example, generating symbolic states can be sped up for invariant inference
by combining directed symbolic execution [36] to target locations
of interest, memoized symbolic execution [25] to store symbolic
execution trees for future extension, and parallel symbolic execution [37] to accelerate the incremental generation of the tree.
Moreover, we can apply techniques for manipulating symbolic states in symbolic execution [16, 35]
to significantly reduce the complexity of the verification conditions sent to the solver.
VII Conclusion
We present SymInfer, a method that uses symbolic encodings of program
states to efficiently discover rich invariants over numerical variables at
arbitrary program locations.
SymInfer takes a CEGIR approach that uses symbolic states to generate candidate invariants and to verify or refute, and iteratively
refine, those candidates.
Key to the success of SymInfer is its ability to directly manipulate
and reuse rich encodings of large sets of concrete program states.
Preliminary results on a set of 92 nontrivial programs show that SymInfer is effective in discovering useful invariants to describe precise program semantics, characterize the runtime complexity of programs, and verify nontrivial correctness properties.
Acknowledgment
This material is based in part upon work supported by the
National Science Foundation under Grant Number 1617916.
Broadband Optical Delay with Large Dynamic Range Using Atomic Dispersion
M. R. Vanner, R. J. McLean, P. Hannaford and A. M. Akulshin
ARC Centre of Excellence for Quantum-Atom Optics, Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne
University of Technology, Melbourne, Australia 3122
Abstract
We report on a tunable all-optical delay line for pulses with
optical frequency within the Rb $D_{2}$ absorption line. Using
frequency tuning between absorption components from different
isotopes, pulses of 10 ns duration are delayed in a 10 cm hot vapour
cell by up to 40 ns while the transmission remains above 10%. The
use of two isotopes allows the delay to be increased or decreased by
optical pumping with a second laser, producing rapid tuning over a
range of more than 40% of the initial delay at 110${}^{\circ}$C. We
investigate the frequency and intensity ranges in which this delay
line can be realised. Our observations are in good agreement with a
numerical model of the system.
pacs: 42.25.Bs, 03.67.-a
‘Slow light’ refers to the propagation of a pulse of light in a
dispersive medium at a group velocity much less than
$c$ [1]. By its use, optically encoded information can be
controllably delayed without the need for electronic transduction.
This is of great interest for telecommunications, where there is a
need for tunable all-optical delay lines for high-speed optical
signal processing, e.g., buffering optical data packets
[2]. Additionally, such a system may be included in the
growing repertoire of tools available for quantum information
processing.
To minimise pulse distortion, an optical delay line should have
approximately constant positive dispersion in a spectral region
$\Delta\nu$ of width larger than the inverse of the pulse duration,
i.e., $\Delta\nu>1/\tau$. The transmission should be high and the
fractional delay (the ratio of the delay $\delta$ to the pulse
duration), which provides a practical metric, should exceed unity,
i.e., $\delta/\tau>1$.
Narrow spectral features in the refractive index of atomic media due
to light-induced ground-state coherence can result in sub and
superluminal pulse propagation [3] and even ‘light
storage’ [4, 5]. Using atomic media to produce
optical delay has predominantly exploited the steep dispersion
associated with electromagnetically induced transparency
(EIT) [6, 7, 8]. While EIT in atomic vapour can
produce extremely low group velocities it has a severe bandwidth
limitation owing to the narrow spectral range over which the
transparency and steep dispersion occurs, making $\delta/\tau>0.3$ difficult to obtain. Because of this, it has been suggested
that ultracold atomic samples may be required to achieve large
fractional delays in EIT-based delay lines [9].
In solid-state media, attempts to obtain larger bandwidth include
methods based on spectral hole burning [10] and the
use of gain features such as stimulated Brillouin
scattering [11] and Raman
amplification [12] in optical fibres.
An attractive approach to realising a wide-bandwidth delay line
utilises the intrinsic positive dispersion and high transmission
between two absorption lines in an atomic vapour. This has allowed
Camacho et al. to observe large fractional delays for
light pulses tuned between the ${}^{85}$Rb hyperfine components of the
$D_{2}$ line [13]. In addition, this technique provides a
high degree of spatial homogeneity in both dispersion and
absorption, allowing the delay of transversely encoded optical
information (images) [14].
In this paper, we investigate the delay and transmission properties
of optical pulses tuned within the Rb $D_{2}$ line in a heated vapour
with natural isotopic abundance. Moreover, we modify the dispersion
by optical pumping to either reduce or enhance the number of
interacting atoms on one of the absorption components, allowing
rapid control of the delay.
The scheme of our experimental setup is shown in
figure 1a. Signal and optical pumping radiation are
produced using extended cavity diode lasers tuned to the rubidium
$D_{2}$ (figure 1b) and $D_{1}$ lines, respectively. The laser
frequencies are controlled with reference to Doppler-free saturation
spectroscopy performed in auxiliary Rb cells and the spectral
purities are analysed using Fabry-Perot cavities.
Optical pulses of 9.3 ns duration (FWHM) with a repetition rate of
10 MHz are generated from the cw signal laser using an electro-optic
modulator (EOM) triggered by a waveform generator
(figure 1a). The optical intensity is controlled by a
neutral density filter (ND) before propagation through a 10 cm
vapour cell heated in a thermally insulated oven. The transmitted
pulses are detected using a fast photodiode and recorded on an
oscilloscope.
For rapid tuning of the delay, optical pumping radiation at either
the ${}^{87}$Rb (F=1) or (F=2) component of the $D_{1}$ line is applied,
approximately counter-propagating to the signal beam. A lens
minimises spatial deviation of the signal beam induced by optical
pumping.
The signal laser frequency is tuned to the $D_{2}$ line at $\lambda=780$ nm between the ${}^{85}$Rb (F=2) and ${}^{87}$Rb (F=1) components,
which have a separation of $\sim$2.5 GHz (figure 1b). The
inherent positive dispersion and low absorption in this broad
spectral region allows large fractional delays with high
transmission. In figure 2a the observed optical
delays for pulses at the frequency of peak transmission are shown
relative to a non-interacting reference pulse for temperatures
between 105${}^{\circ}$C and 135${}^{\circ}$C. A fractional delay
$\delta/\tau=4.3$ was observed for a transmission of 9% with good
pulse shape preservation, where the pulse duration narrowed by less
than 10%. It should be noted that the fractional delay is limited
in these experiments by the pulse duration we are able to generate.
The observed delay and transmission are plotted against temperature
in figure 2b, along with our numerical predictions.
For our numerical predictions, we model the absorption coefficient
$\alpha(\omega)$ and the real part of the refractive index
$n(\omega)$ of the Rb $D_{2}$ line using a convolution of profiles
arising from homogeneous and inhomogeneous broadening mechanisms.
The homogeneous profile includes contributions from natural
broadening, collisional broadening [15] and power
broadening and is convolved with the thermal Gaussian Doppler
profile. The group velocity is then approximated using the first
derivative of $n(\omega)$ with respect to $\omega$,
$$v_{g}=\frac{c}{n(\omega_{0})+\omega_{0}\frac{\partial n(\omega)}{\partial\omega}}=\frac{\partial\omega}{\partial k}.$$
(1)
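As a rough numerical illustration of equation (1), the group velocity and the resulting delay can be estimated from a toy two-resonance refractive index. The line strength, width and separation below are assumed round numbers chosen only to reproduce the tens-of-nanoseconds delay scale; they are not values fitted to this experiment.

```python
import numpy as np

c = 2.998e8                 # speed of light (m/s)
L = 0.10                    # cell length (m), 10 cm as in the experiment

# Toy model: two homogeneously broadened lines separated by 2.5 GHz.
# A (line strength) and gamma (width) are assumed illustrative values.
w1 = 0.0                    # detuning of first resonance (rad/s)
w2 = 2*np.pi*2.5e9          # detuning of second resonance (rad/s)
gamma = 2*np.pi*0.3e9       # effective half-width (rad/s)
A = 4.0e6                   # strength chosen to give a ~40 ns delay
w0_abs = 2*np.pi*384.2e12   # absolute D2 optical frequency (rad/s)

def n_real(w):
    """Real refractive index as a function of detuning w (rad/s)."""
    chi = A*(w1 - w)/((w - w1)**2 + gamma**2) \
        + A*(w2 - w)/((w - w2)**2 + gamma**2)
    return 1.0 + 0.5*chi

# Evaluate equation (1) at the midpoint between the two lines.
w_mid = 0.5*(w1 + w2)
dw = 2*np.pi*1e6                                  # derivative step
dn_dw = (n_real(w_mid + dw) - n_real(w_mid - dw))/(2*dw)
v_g = c/(n_real(w_mid) + w0_abs*dn_dw)            # group velocity
delay = L/v_g - L/c                               # delay vs. vacuum
```

With these assumed parameters the delay comes out in the tens of nanoseconds; the dominant term is $\omega_{0}\,\partial n/\partial\omega$, which is why even a tiny index slope between the lines produces a large delay.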
At the frequency of peak transmission between the two absorption
lines, the probability for resonant interaction via the Doppler
shift is small, even at the temperatures used in these experiments.
Instead, interaction occurs mainly through the broad wings of the
homogeneous component of the profile. For example, at
$T=110^{\circ}$C the probability of an atom belonging to a
velocity class that is Doppler shifted to within the 110 MHz optical bandwidth
(appropriate for our pulse duration) at a detuning of 1 GHz from resonance is about $10^{-4}$. Using an
estimated number density of $10^{13}$ cm${}^{-3}$ (based on
Ref. [16] and taking into account the natural isotopic
ratio), the optical depth $\alpha L$ is 0.4 for a 10 cm cell.
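The order-of-magnitude estimate above can be checked directly from the 1D Maxwell-Boltzmann velocity distribution. The constants below (Rb atomic mass, 780 nm, 110${}^{\circ}$C) are standard values; the result agrees with the quoted $\sim 10^{-4}$ probability to within an order of magnitude.

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant (J/K)
m_Rb = 1.42e-25          # Rb atomic mass (kg)
T = 383.15               # 110 C in kelvin
lam = 780e-9             # D2 wavelength (m)

u = np.sqrt(2*kB*T/m_Rb)          # most probable speed (~270 m/s)

# Velocity Doppler-shifted 1 GHz from resonance, and the velocity
# band corresponding to the 110 MHz optical bandwidth.
v_det = 1.0e9*lam                 # ~780 m/s
dv = 110e6*lam                    # ~86 m/s

# Probability of an atom lying in that velocity class (1D Gaussian).
P = np.exp(-(v_det/u)**2)/(u*np.sqrt(np.pi))*dv
```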
Our pulse bandwidth of $110$ MHz is less than the width of the
transmission window of approximately 1 GHz which allows the
variation in $v_{g}(\omega)$ and $\alpha(\omega)$ between the
absorption components to be explored. The frequency dependence of
the pulse delay and transmission at different temperatures is shown
in figure 3. For 10% transmission, suggested in
Ref. [13] as a practical limit for a delay line, the
usable bandwidth decreases from 1.1 GHz at $110^{\circ}$C to 540 MHz
at $127^{\circ}$C. At the former temperature we expect that a
fractional delay an order of magnitude larger could be achieved with
shorter pulses that utilise the available bandwidth.
The effect of saturation is quantified in our numerical model by the
saturation parameter $S=I/I_{sat}(\omega)$, where
$I_{sat}(\omega)$ is the saturation intensity which is inversely
proportional to the frequency dependent absorption cross section.
This means saturation has little effect on the wings of a
homogeneously broadened line. In contrast, for an inhomogeneously
broadened line, saturation can occur over the entire profile. This
means that although increasing the temperature decreases the usable
bandwidth due to Doppler broadening (figure 3), at a
given temperature the bandwidth may be increased by increasing the
saturation.
Expression (1) for $v_{g}$ provides an accurate approximation
for the delay at frequencies near the point of peak transmission.
However, the agreement is found to degrade for frequencies closer to
the absorption peaks. This may be due to higher spectral derivatives
in $n(\omega)$ becoming more significant. It was observed that the
pulse shape suffers little distortion from dispersive and absorptive
mechanisms and remains preserved. This supports the finding in
Ref. [13] that these mechanisms compensate one another.
Temperature tuning provides a method for changing the delay over a
wide range, but the change is inherently slow as the cell heats or
cools. Fast control of the delay was achieved in
Ref. [17] by using two additional laser fields to modify
the dispersion by saturating both absorption lines to reduce the
atomic ground state population. While this approach gives rapid
delay tunability, it produces a relatively small tuning range. In
the present work, where the pulses are tuned between absorption
resonances from different isotopes, hyperfine optical pumping allows
the ground state population of one of the resonances to be strongly
modified with minimal modification of the other. An optical pumping
laser tuned to one of the ${}^{87}$Rb components of the $D_{1}$ line is
used to modify the population of ${}^{87}$Rb (F=1) ground state atoms
interacting with the signal. Tuning the optical pumping laser to the
$D_{1}$ ${}^{87}$Rb (F=1) or (F=2) component respectively reduces or
increases the population in the F=1 ground state
(figure 4b and c). Pumping on the $D_{1}$ line is more
efficient than on the $D_{2}$ line as it has no cycling transition.
Moreover, the optical depth is less both for this line and for the
${}^{87}$Rb component, giving greater longitudinal pumping
homogeneity. In this manner, at $110^{\circ}$C the delay was reduced
by approximately 17.5% or increased by 25% of the unmodified delay
(figure 4a). The delay reconfiguration time is mainly
limited by atomic time of flight, which is expected to be of the
order of microseconds. A range of delay tuning greater than the
pulse duration should be possible with shorter pulses.
Measurements of the delay of optical pulses with very low intensity
have previously been performed with an average of less than one
photon per pulse [14]. In this work however, we are
interested in establishing high intensity limits, which we do by
using both positive and negative pulse shapes
(figure 5a). A negative pulse shape is an interval of
reduced intensity on a relatively large optical DC background.
Negative pulse shapes are found to exhibit similar delay to positive
pulses of comparable intensity, and they may be useful for other
applications in that the signal to noise ratio can be higher.
Furthermore, such pulses may be of interest in experiments involving
atomic or optical coherence. By neutral density filtering and
adjusting the beam waist, the delay was measured for a range of
values of the intensity of the 10 MHz pulse train
(figure 5b). It is observed that the delay decreases
with increasing intensity by approximately 1.8 ps/(mW/cm${}^{2}$). We
attribute this to the increase in saturation of the atomic medium,
which has the effect of reducing the dispersion.
In conclusion, optical pulses of 9.3 ns duration (FWHM) with
frequency tuned between the ${}^{85}$Rb (F=2) and ${}^{87}$Rb (F=1)
components of the $D_{2}$ line were delayed in a 10 cm vapour cell at
$135^{\circ}$C with low distortion by more than 40 ns (fractional
delay 4.3) and with approximately 10% transmission. The delay
arises from the intrinsic positive dispersion between the two
absorption peaks. In this experiment the fractional delay was
limited by the pulse duration, but should be ultimately limited by
the $\sim$1 GHz transmission window, making a fractional delay of 40
possible.
The dependence of delay, transmission and usable bandwidth on
temperature and frequency was investigated. With increasing
temperature and atomic density the delay increases and the
transmission reduces. This trend also applies as the optical
frequency is tuned closer to one of the resonances. A reduction in
usable bandwidth was measured with increasing temperature. In
addition, the delay was found to reduce with increasing intensity.
This was observed using negative pulses, which were delayed in a
similar manner to positive pulses.
In contrast to EIT-based delay lines this technique provides the
large bandwidth necessary for delaying short optical pulses and also
operates at both low and high intensity levels.
Using the spectral region between absorption components of different
isotopes for an all-optical delay line allows optical pumping with a
single laser to modify the interacting population of one absorption
component without modifying the other. Rapid tuning of the delay was
obtained in this way over a range of more than 40% of the unmodified
pulse delay at $110^{\circ}$C.
Finally, we note that such broadband delay lines may be used to
delay many forms of optical quantum information encoding such as
weak coherent pulses or squeezed states.
References
[1]
Boyd R W and Gauthier D J 2002, Progress in Optics 43, 497 (Elsevier, New York)
[2]
Ku P C, Chang-Hasnain C J and Chuang S L 2002, Electron. Lett. 38, 1581
[3]
Akulshin A M, Cimmino A, Sidorov A I, Hannaford P and Opat G I 2003, Phys. Rev. A 67, 011801
[4]
Phillips D F, Fleischhauer A, Mair A, Walsworth R L and Lukin M D 2001, Phys. Rev. Lett. 86, 783
[5]
Liu C, Dutton Z, Behroozi C H and Hau L V 2001, Nature 409, 490
[6]
Kasapi A, Jain M, Yin G Y and Harris S E 1995, Phys. Rev. Lett. 74, 2447
[7]
Hau L V, Harris S E, Dutton Z and Behroozi C H 1999, Nature 397, 594
[8]
Kash M M, Sautenkov V A, Zibrov A S, Hollberg L, Welch G R, Lukin M D, Rostovtsev Y, Fry E S and Scully M O 1999, Phys. Rev. Lett. 82, 5229
[9]
Matsko A B, Strekalov D V and Maleki L 2005, Optics Express 13, 2210
[10]
Shakhmuratov R N, Rebane A, Mégret P and Odeurs J 2005, Phys. Rev. A 71, 053811
[11]
Okawachi Y, Bigelow M S, Sharping J E, Zhu Z, Schweinsberg A, Gauthier D J, Boyd R W and Gaeta A L 2005, Phys. Rev. Lett. 94, 153902
[12]
Sharping J E, Okawachi Y and Gaeta A L 2005, Optics Express 13, 6092
[13]
Camacho R M, Pack M V and Howell J C 2006, Phys. Rev. A 73, 063812
[14]
Camacho R M, Broadbent C J, Ali-Khan I and Howell J C 2007, Phys. Rev. Lett. 98, 043902
[15]
Akulshin A M, Velichansky V L, Zibrov A S, Nikitin V V, Sautenkov V A, Yurkin E K and Senkov N V 1982, JETP Lett. 36, 303
[16]
Steck D A 2003, Rubidium 87 D line data, http://steck.us/alkalidata/
[17]
Camacho R M, Pack M V, Howell J C, Schweinsberg A and Boyd R W 2007, Phys. Rev. Lett. 98, 153601
Hybrid stress quadrilateral finite element approximation for stochastic plane elasticity equations
This work was supported in part by
National Natural Science Foundation of China (11171239), Major Research
Plan of National Natural Science Foundation of China (91430105) and
Open Fund of Key Laboratory of Mountain Hazards and Earth Surface Processes, CAS.
Xiaojing Xu, Wenwen Fan, Xiaoping Xie
School of Mathematics, Sichuan University, Chengdu 610064, China
Emails: xuxiaojing0603@126.com, fwwen123@126.com; corresponding author: xpxie@scu.edu.cn
Abstract
This paper considers stochastic hybrid stress quadrilateral finite element analysis of plane elasticity equations with stochastic Young’s modulus and stochastic loads. First, we apply the Karhunen-Loève expansion to the stochastic Young’s modulus and the stochastic loads so as to turn the original problem into a system containing a finite number of deterministic parameters. Then we treat the stochastic field and the space field by $k$-version/$p$-version finite element methods and a hybrid stress quadrilateral finite element method, respectively. We show that the derived a priori error estimates are uniform with respect to the Lamé constant $\lambda\in(0,+\infty)$. Finally, we provide some numerical results.
Keywords.
stochastic plane elasticity; Karhunen-Loève expansion; hybrid stress finite element; $k\times h$-version; $p\times h$-version; uniform error estimate
1 Introduction
Let $D\subset R^{2}$ be a bounded, connected, convex and open set with boundary $\partial D=\partial D_{0}\cup\partial D_{1}$ and meas($\partial D_{0}$) $>$ 0, and let ($\Omega$,$\mathcal{F}$,$\mathcal{P}$) be a complete probability space, where $\Omega$, $\mathcal{F}$, $\mathcal{P}$ denote respectively the set of outcomes, the $\sigma$-algebra of subsets of $\Omega$ and the probability measure. Consider the following stochastic plane elasticity equations: for almost everywhere (a.e.) $\theta\in\Omega$
$$\left\{\begin{array}[]{ll}-{\bf div}\,{\bm{\sigma}}(\cdot,\theta)=\textbf{f}(\cdot,\theta),&\text{in}~{}D,\\ \bm{\sigma}(\cdot,\theta)=\mathcal{C}\epsilon(\textbf{u}(\cdot,\theta)),&\text{in}~{}D,\\ \textbf{u}(\cdot,\theta)|_{\partial D_{0}}=0,\ {\bm{\sigma}}(\cdot,\theta)\textbf{n}|_{\partial D_{1}}=\textbf{g}(\cdot,\theta),&\end{array}\right.$$
(1.1)
where $\bm{\sigma}:\overline{D}\times\Omega\rightarrow R_{sym}^{2\times 2}$ denotes the symmetric stress tensor field, $\textbf{u}:\overline{D}\times\Omega\rightarrow R^{2}$ the displacement field, $\epsilon(\textbf{u})=(\bigtriangledown\textbf{u}+\bigtriangledown^{T}\textbf{u})/2$ the strain with $\bigtriangledown=(\frac{\partial}{\partial x_{1}},\frac{\partial}{\partial x_{2}})^{T}$ for $\textbf{x}=(x_{1},x_{2})$, $\textbf{f}:D\times\Omega\rightarrow R^{2}$ the body loading density and $\textbf{g}:\partial D_{1}\times\Omega\rightarrow R^{2}$ the surface traction, $\textbf{n}$ the unit outward vector normal to $\partial D$, and $\mathcal{C}$ the elasticity modulus tensor with
$$\mathcal{C}\epsilon(\textbf{u})=2\mu\epsilon(\textbf{u})+\lambda(\mbox{div}\,\textbf{u})\,\textbf{I},$$
$\textbf{I}$ the $2\times 2$ identity tensor, and $\mu,\lambda$ the Lamé parameters given by $\mu=\frac{\widetilde{E}}{2(1+\nu)}$, $\lambda=\frac{\widetilde{E}\nu}{(1+\nu)(1-2\nu)}$ for plane strain problems and by $\mu=\frac{\widetilde{E}}{2(1+\nu)}$, $\lambda=\frac{\widetilde{E}\nu}{(1+\nu)(1-\nu)}$ for plane stress problems, with $\nu\in(0,0.5)$ the Poisson ratio and $\widetilde{E}:D\times\Omega\rightarrow R$ the Young’s modulus, which is stochastic with
$$0<e_{min}\leq\widetilde{E}(\textbf{x},\theta)\leq e_{max}~{}~{}~{}\text{ a.e. in }D\times\Omega$$
(1.2)
for positive constants $e_{min}$ and $e_{max}$. Since in the analysis of this paper we need to use an explicit form of $\widetilde{E}$, we rewrite the second equation of (1.1) as
$$\bm{\sigma}(\cdot,\theta)=\widetilde{E}\,{\textbf{C}}\epsilon(\textbf{u}(\cdot,\theta)),$$
(1.3)
where the tensor ${\textbf{C}}:=\frac{1}{\widetilde{E}}\mathcal{C}$ depends only on the Poisson ratio $\nu$.
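For concreteness, the Lamé parameters quoted above can be computed with a small helper (an illustration, not part of the paper's formulation). The consistency checks use the standard 3D identity $E=\mu(3\lambda+2\mu)/(\lambda+\mu)$ and the plane-stress reduction $\lambda^{*}=2\mu\lambda/(\lambda+2\mu)$.

```python
def lame_parameters(E, nu, plane_strain=True):
    """Lame parameters (mu, lam) from Young's modulus E and Poisson
    ratio nu, using the standard plane-strain / plane-stress formulas."""
    mu = E/(2.0*(1.0 + nu))
    if plane_strain:
        lam = E*nu/((1.0 + nu)*(1.0 - 2.0*nu))
    else:  # plane stress
        lam = E*nu/((1.0 + nu)*(1.0 - nu))
    return mu, lam

# Plane strain recovers the 3D Lame parameters ...
mu, lam = lame_parameters(1.0, 0.3, plane_strain=True)
# ... and plane stress matches the reduction lam* = 2*mu*lam/(lam + 2*mu).
mu_ps, lam_ps = lame_parameters(1.0, 0.3, plane_strain=False)
```

Note that as $\nu\to 0.5$ the plane-strain $\lambda$ blows up, which is exactly why error bounds uniform in $\lambda\in(0,+\infty)$ matter.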
It is well-known that the standard 4-node displacement quadrilateral element (abbr. bilinear element) yields poor results for deterministic plane elasticity equations with bending and, for deterministic plane strain problems, in
the nearly incompressible limit. To improve its performance, Wilson et al. [26, 24] developed methods of incompatible modes by enriching the standard (compatible) displacement modes with internal incompatible displacements. Pian and Sumihara [17] proposed a hybrid stress quadrilateral element (PS element) based on the Hellinger-Reissner variational principle,
where the displacement vector is approximated by isoparametric bilinear interpolations,
and the stress tensor by a piecewise-independent 5-parameter mode. Xie and Zhou [31, 32] derived robust 4-node hybrid stress quadrilateral elements by optimizing stress modes with a so-called energy-compatibility condition, i.e. the assumed stress terms are orthogonal to the enhanced strains caused by Wilson bubble displacements.
In [35] Zhou and Xie gave a unified analysis for some hybrid stress/strain quadrilateral methods,
but the upper bound in the error estimate is not uniform with respect to the Lamé parameter $\lambda$.
Yu, Xie and Carstensen [33] derived uniform convergence results for the hybrid stress methods in [17] and [31], in the sense that the error bound is independent of $\lambda$.
In the numerical analysis of stochastic partial differential equations, stochastic finite element methods, which employ finite elements in the space domain, have gained much attention in the past two decades. In the probability domain, the stochastic finite element methods use
two types of approximation methods: statistical approximation and non-statistical approximation. Monte Carlo sampling (MCs) is one of the most commonly used statistical approximation methods [22]. In MCs, one generates realizations of the stochastic terms so as to make the problem deterministic, solves the resulting deterministic problem repeatedly, and collects an ensemble of solutions, from which statistical information, such as the mean and variance, can be obtained. The disadvantage of MCs lies in the large amount of computation required and its low convergence rate. There are also some variants of MCs such as quasi Monte Carlo [6] and the stochastic collocation method [2, 14, 15, 16].
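The MCs workflow just described can be sketched minimally, with a toy scalar quantity of interest standing in for a functional of the random solution (the lognormal model below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Q(n):
    """Toy stochastic quantity of interest Q(theta) = exp(Y), Y ~ N(0, 0.25),
    standing in for, e.g., a point value of the random displacement."""
    return np.exp(rng.normal(0.0, 0.5, size=n))

# MCs: draw realizations, "solve" the deterministic problem for each
# (trivial here), and collect the ensemble statistics.
Q = sample_Q(200_000)
mean, var = Q.mean(), Q.var()

# For this toy model the exact mean is known: E[exp(Y)] = exp(sigma^2/2).
exact_mean = np.exp(0.125)
error = abs(mean - exact_mean)
```

The statistical error decays only like $N^{-1/2}$ in the number of samples $N$, which is the low convergence rate mentioned above.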
Non-statistical approximation methods initially comprised mainly perturbation methods, Neumann series expansion methods [10] and so on. But these methods are limited by the magnitude of the uncertainties of the stochastic terms and the accuracy of the calculation. Later, polynomial approximation was used for the stochastic part. For example,
polynomial chaos (PC) expansion is applied in [27, 10] to represent solutions
formally and obtain them by solving for the expansion coefficients [9, 13]. Generalized polynomial chaos (gPC) is used to express solutions in [12, 28, 29]. According to [30], one can achieve exponential convergence when the optimum gPC is chosen. Subsequently, it was further shown [1, 7] that $p$-version, $k$-version and $p$-$k$-version finite element methods can be used for the approximation of the stochastic part.
So far, there are very limited studies on the numerical solution of the stochastic plane elasticity equations (1.1). In [11] a generalized $n$th order stochastic perturbation technique is implemented in conjunction with linear finite elements to model a 1D linear elastostatic problem with a single random variable.
In [9] the numerical solution of problem (1.1) is considered with stochastic Young’s modulus $\widetilde{E}$, where PC approximation and bilinear finite elements are applied respectively to the stochastic domain and the space domain. We refer to [5, 25] for some other related studies.
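The Karhunen-Loève truncation used later in the paper can be sketched numerically: for an assumed covariance, the KL modes are the eigenpairs of the covariance operator, here discretised on a uniform grid. The exponential kernel, correlation length and grid size below are illustrative choices, not those of the examples in Section 5.

```python
import numpy as np

# Assumed exponential covariance C(x, y) = exp(-|x - y| / l) on [0, 1].
n, l = 200, 0.5
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :])/l)

# Discretise the covariance integral operator and diagonalise it.
h = x[1] - x[0]
evals, evecs = np.linalg.eigh(h*C)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Truncated KL expansion with M terms:
#   E(x, theta) ~ E_0(x) + sum_m sqrt(lambda_m) phi_m(x) xi_m(theta)
M = 5
captured = evals[:M].sum()/evals.sum()   # fraction of variance retained
```

A small $M$ already captures most of the variance when the correlation length is comparable to the domain size, which is what makes the reduction to a finite number of deterministic parameters effective.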
In this contribution, we propose and analyze stochastic $k\times h$-version and $p\times h$-version finite element methods for the problem (1.1), where
we use $k$-version/$p$-version finite element methods for the stochastic domain and the PS hybrid stress quadrilateral finite element for the space domain.
We arrange the paper as follows. In Section 2 we show stochastic mixed variational formulations of (1.1), and give the existence and uniqueness of the weak solution.
Section 3 discusses the approximation of the stochastic coefficient and stochastic loads, as well as the truncated stochastic mixed variational formulations. Section 4 analyzes the proposed stochastic $k\times h-$version and $p\times h-$version
finite element methods
and derives uniform a priori error estimates. Finally, Section 5 provides some numerical results.
2 Stochastic mixed variational formulations
2.1 Notations
For the probability space $(\Omega,\mathcal{F},\mathcal{P})$ and an integer $m$, denote
$$L^{m}_{P}(\Omega):=\left\{Y|\ Y\text{ is a random variable in }(\Omega,\mathcal{F},\mathcal{P})\text{ with }\int_{\Omega}|Y(\theta)|^{m}\mathrm{d}P(\theta)<+\infty\right\}.$$
If $Y\in L_{P}^{1}(\Omega)$, we denote its expected value by
$$E[Y]=\int_{\Omega}Y(\theta)\mathrm{d}P(\theta)=\int_{R}y\mathrm{d}F(y),$$
(2.1)
where $F$ is the distribution probability measure of $Y$, given by $F(B)=P(Y^{-1}(B))$ for any Borel set $B$ in $R$. Assume that $F$ is absolutely continuous with respect to the Lebesgue measure; then there exists a density function for $Y$, $\rho:R\rightarrow[0,+\infty)$, such that
$$E[Y]=\int_{R}y\rho(y)\mathrm{d}y.$$
(2.2)
We denote by $H^{m}(D)$ the usual Sobolev space consisting of functions defined on the domain $D$, with all derivatives of order
up to $m$ square-integrable. Let $(\cdot,\cdot)_{H^{m}(D)}$ be the usual inner product on $H^{m}(D)$. The norm $||\cdot||_{m}$ on $H^{m}(D)$ induced by $(\cdot,\cdot)_{H^{m}(D)}$ is given by
$$||v||_{m}:=(\sum_{0\leq j\leq m}|v|_{j}^{2})^{1/2}\text{ with the semi-norm }|v|_{j}:=(\sum_{|\alpha|=j}||D^{\alpha}v||_{0}^{2})^{1/2}.$$
In particular,
$L^{2}(D):=H^{0}(D)$. Denote
$$L^{\infty}(D):=\{w:\ ||w||_{\infty}:=esssup_{x\in D}|w(x)|<\infty\}.$$
We define the following stochastic Sobolev spaces:
$$L^{2}_{P}(\Omega;H^{m}(D)):=\{w:~{}w\text{ is strongly measurable with }w(\cdot,\theta)\in H^{m}(D)\text{ for $\theta\in\Omega$ and }||w||_{\widetilde{m}}<+\infty\},$$
$$L^{\infty}_{P}(\Omega;L^{\infty}(D)):=\{w:~{}w\text{ is strongly measurable with }w(\cdot,\theta)\in L^{\infty}(D)\text{ for $\theta\in\Omega$ and }||w||_{\widetilde{\infty}}<+\infty\},$$
where the norms $||\cdot||_{\widetilde{m}}$, $||\cdot||_{\widetilde{\infty}}$ are respectively defined as
$$||w||_{\widetilde{m}}:=(E[||w(\cdot,\theta)||^{2}_{m}])^{\frac{1}{2}},\quad||w||_{\widetilde{\infty}}:=esssup_{\theta\in\Omega}||w(\cdot,\theta)||_{\infty}.$$
(2.3)
On the other hand, since stochastic functions intrinsically have different structures with respect to $\theta\in\Omega$ and $\textbf{x}\in D$, we follow [1] to introduce tensor spaces for the analysis of numerical approximation. Let $X_{1}(\Omega)$, $X_{2}(D)$ be Hilbert spaces. The tensor space $X_{1}(\Omega)\otimes X_{2}(D)$ is the completion of formal sums $\phi(\theta,\textbf{x})=\Sigma_{i=1,...,n}u_{i}(\theta)v_{i}(\textbf{x})$, $u_{i}\in X_{1}(\Omega)$, $v_{i}\in X_{2}(D)$, with respect to the inner product $(\phi,\widehat{\phi})_{X_{1}\otimes X_{2}}:=\Sigma_{i,j}(u_{i},\widehat{u_{j}})_{X_{1}}(v_{i},\widehat{v_{j}})_{X_{2}}$.
Then, for the tensor space $L^{2}_{P}(\Omega)\otimes H^{m}(D)$,
we have the following isomorphism:
$$L^{2}_{P}(\Omega;H^{m}(D))\simeq L^{2}_{P}(\Omega)\otimes H^{m}(D).$$
For convenience, we use the notation $a\lesssim b$ to indicate that there exists a generic positive constant $C$ such that $a\leq Cb$, where $C$ is independent of the Lamé constant $\lambda$, the mesh parameters $h$, $k$ and the polynomial degree $p$ in the stochastic $k\times h$-version and $p\times h$-version finite element methods.
2.2 Weak formulations
Introduce the spaces
$${V_{D}}:=\{v\in H^{1}(D)^{2}:v|_{\partial D_{0}}=0\},$$
$$\small{\Sigma_{D}}:=\left\{\begin{array}[]{ll}L^{2}(D;R^{2\times 2}_{sym}):=\{\tau:D\rightarrow R^{2\times 2}|\ \tau_{ij}\in L^{2}(D),\ \tau_{ij}=\tau_{ji},\ i,j=1,2\},&\text{if}~{}~{}\text{meas}(\partial D_{1})>0,\\ \{\bm{\tau}\in L^{2}(D;R^{2\times 2}_{sym}):\int_{D}tr\bm{\tau}\mathrm{d}\textbf{x}=0\text{ with trace }tr\bm{\tau}:=\bm{\tau}_{11}+\bm{\tau}_{22}\},&\text{if}~{}~{}\partial D_{1}=\emptyset.\end{array}\right.$$
Then the weak problem for the model (1.1) reads as: Find $(\bm{\sigma},\textbf{u})\in L^{2}_{P}(\Omega;~{}\Sigma_{D})\times L^{2}_{P}(\Omega;~{}{V_{D}})$ such that
$$\left\{\begin{array}[]{ll}a(\bm{\sigma},\bm{\tau})-b(\bm{\tau},\textbf{u})=0,&\forall\bm{\tau}\in L^{2}_{P}(\Omega;~{}\Sigma_{D}),\\ b(\bm{\sigma},\textbf{v})=\ell(\textbf{v}),&\forall\textbf{v}\in L^{2}_{P}(\Omega;{V_{D}}),\end{array}\right.$$
(2.4)
where
the bilinear forms $a(\cdot,\cdot):L^{2}_{P}(\Omega;\Sigma_{D})\times L^{2}_{P}(\Omega;\Sigma_{D})\rightarrow R$, $b(\cdot,\cdot):L^{2}_{P}(\Omega;\Sigma_{D})\times L^{2}_{P}(\Omega;V_{D})\rightarrow R$ and the linear form $\ell:L^{2}_{P}(\Omega;V_{D})\rightarrow R$ are defined, respectively, by
$$a(\bm{\sigma},\bm{\tau}):=E[\int_{D}\frac{1}{\widetilde{E}}\bm{\sigma}:\textbf{C}^{-1}\bm{\tau}\,\mathrm{d}\textbf{x}]=\int_{\Omega}\int_{D}\frac{1}{\widetilde{E}}\bm{\sigma}:\textbf{C}^{-1}\bm{\tau}\,\mathrm{d}\textbf{x}\,\mathrm{d}P(\theta),$$
(2.5)
$$b(\bm{\tau},\textbf{u}):=E[\int_{D}\bm{\tau}:\epsilon(\textbf{u})\,\mathrm{d}\textbf{x}]=\int_{\Omega}\int_{D}\bm{\tau}:\epsilon(\textbf{u})\,\mathrm{d}\textbf{x}\,\mathrm{d}P(\theta),$$
(2.6)
$$\ell(\textbf{v}):=E[\int_{D}\textbf{f}\cdot\textbf{v}\,\mathrm{d}\textbf{x}+\int_{\partial D_{1}}\textbf{g}\cdot\textbf{v}\,\mathrm{d}s]=\int_{\Omega}\int_{D}\textbf{f}\cdot\textbf{v}\,\mathrm{d}\textbf{x}\,\mathrm{d}P(\theta)+\int_{\Omega}\int_{\partial D_{1}}\textbf{g}\cdot\textbf{v}\,\mathrm{d}s\,\mathrm{d}P(\theta).$$
(2.7)
Here $\bm{\sigma}:\bm{\tau}=\sum_{i,j=1}^{2}\bm{\sigma}_{ij}\bm{\tau}_{ij}$.
It is easy to see that the following continuity conditions hold: for $\bm{\sigma},\bm{\tau}\in L^{2}_{P}(\Omega;\Sigma_{D})$, $\textbf{v}\in L^{2}_{P}(\Omega;V_{D})$,
$$a(\bm{\sigma},\bm{\tau})\lesssim||\bm{\sigma}||_{\widetilde{0}}\,||\bm{\tau}||_{\widetilde{0}},\quad b(\bm{\tau},\textbf{v})\lesssim||\bm{\tau}||_{\widetilde{0}}\,|\textbf{v}|_{\widetilde{1}},\quad\ell(\textbf{v})\lesssim(||\textbf{f}||_{\widetilde{0}}+||\textbf{g}||_{\widetilde{0},\partial D_{1}})\,|\textbf{v}|_{\widetilde{1}}.$$
(2.8)
According to the theory of mixed finite element methods [3, 4], we need the following two stability conditions for the well-posedness of the weak problem (2.4):
(A) Kernel-coercivity: for any $\bm{\tau}\in Z^{0}:=\{\bm{\tau}\in L^{2}_{P}(\Omega;\Sigma_{D}):b(\bm{\tau},\textbf{v})=0,~\forall\,\textbf{v}\in L^{2}_{P}(\Omega;V_{D})\}$, it holds
$$||\bm{\tau}||^{2}_{\widetilde{0}}\lesssim a(\bm{\tau},\bm{\tau}).$$
(2.9)
(B) Inf-sup condition: for any $\textbf{v}\in L^{2}_{P}(\Omega;V_{D})$ it holds
$$|\textbf{v}|_{\widetilde{1}}\lesssim\sup_{0\neq\bm{\tau}\in L^{2}_{P}(\Omega;\Sigma_{D})}\frac{b(\bm{\tau},\textbf{v})}{||\bm{\tau}||_{\widetilde{0}}}.$$
(2.10)
Theorem 2.1.
The uniform stability conditions $(\textbf{A})$ and $(\textbf{B})$ hold.
Proof.
For any $\bm{\tau}\in Z^{0}$ we have, for a.e. $\theta\in\Omega$, $\bm{\tau}(\cdot,\theta)\in\{\bm{\tau}\in\Sigma_{D}:\int_{D}\bm{\tau}:\epsilon(\textbf{v})\,\mathrm{d}\textbf{x}=0~~\forall\,\textbf{v}\in V_{D}\}$.
According to Theorem 2.1 in [33] and the assumption (1.2), it holds
$$\int_{D}\bm{\tau}(\cdot,\theta):\bm{\tau}(\cdot,\theta)\,\mathrm{d}\textbf{x}\lesssim\int_{D}\frac{1}{\widetilde{E}}\bm{\tau}(\cdot,\theta):\textbf{C}^{-1}(\cdot,\theta)\bm{\tau}(\cdot,\theta)\,\mathrm{d}\textbf{x},$$
which leads to
$$\int_{\Omega}\int_{D}\bm{\tau}:\bm{\tau}\,\mathrm{d}\textbf{x}\,\mathrm{d}P(\theta)\lesssim\int_{\Omega}\int_{D}\frac{1}{\widetilde{E}}\,\bm{\tau}:\textbf{C}^{-1}\bm{\tau}\,\mathrm{d}\textbf{x}\,\mathrm{d}P(\theta),$$
i.e. $(\textbf{A})$ holds.
Let $\textbf{v}\in L^{2}_{P}(\Omega;~{}{V_{D}})$ and notice $\epsilon(\textbf{v})\in L^{2}_{P}(\Omega;~{}\Sigma_{D})$. Then
$$|\epsilon(\textbf{v})|_{\widetilde{0}}\leq\sup_{\bm{\tau}\in L^{2}_{P}(\Omega;\Sigma_{D})\backslash\{0\}}\frac{\int_{\Omega}\int_{D}\bm{\tau}:\epsilon(\textbf{v})\,\mathrm{d}\textbf{x}\,\mathrm{d}P(\theta)}{||\bm{\tau}||_{\widetilde{0}}}.$$
Hence $(\textbf{B})$ follows from the equivalence between the two norms $|\epsilon(\textbf{v})|_{\widetilde{0}}$ and $|\textbf{v}|_{\widetilde{1}}$ on $L^{2}_{P}(\Omega;~{}{V_{D}})$.
∎
In view of the above conditions, we immediately obtain the following well-posedness result:
Theorem 2.2.
Assume that $\textbf{f}\in L_{P}^{2}(\Omega;L^{2}(D)^{2})$, $\textbf{g}\in L_{P}^{2}(\Omega;L^{2}(\partial D_{1})^{2})$. Then the weak problem (2.4) admits a unique solution $(\bm{\sigma},\textbf{u})\in L^{2}_{P}(\Omega;\Sigma_{D})\times L^{2}_{P}(\Omega;V_{D})$ such that
$$||\bm{\sigma}||_{\widetilde{0}}+|\textbf{u}|_{\widetilde{1}}\lesssim||\textbf{f}||_{\widetilde{0}}+||\textbf{g}||_{\widetilde{0},\partial D_{1}}.$$
(2.11)
3 Truncated stochastic mixed variational formulations
In order to solve the weak problem (2.4) by deterministic numerical methods, we first approximate the stochastic coefficient $\widetilde{E}$ and the loads f, g by a finite number of random variables; we refer to [21] for several approximation approaches. Here we only consider the Karhunen-Loève (K-L) expansion.
3.1 Karhunen-Loève (K-L) expansion
Let $\phi(\textbf{x},\theta)\in L_{P}^{2}(\Omega;L^{2}(D))$ be a stochastic process whose covariance function $cov[\phi](\textbf{x}_{1},\textbf{x}_{2}):D\times D\rightarrow R$ is bounded, symmetric and positive definite. Let $\{(\lambda_{n},b_{n})\}_{n=1}^{\infty}$ be the sequence of eigenpairs satisfying
$$\int_{D}cov[\phi](\textbf{x}_{1},\textbf{x}_{2})\,b_{n}(\textbf{x}_{2})\,\mathrm{d}\textbf{x}_{2}=\lambda_{n}b_{n}(\textbf{x}_{1}),$$
(3.1)
$$\sum_{n=1}^{+\infty}\lambda_{n}=\int_{D}cov[\phi](\textbf{x},\textbf{x})\,\mathrm{d}\textbf{x},\quad\int_{D}b_{i}(\textbf{x})b_{j}(\textbf{x})\,\mathrm{d}\textbf{x}=\delta_{ij},\ i,j=1,2,\cdots,$$
(3.2)
and $\lambda_{1}\geq\lambda_{2}\geq\cdots>0$.
Then the Karhunen-Loève (K-L) expansion of $\phi(\textbf{x},\theta)$ is given by
$$\phi(\textbf{x},\theta)=E[\phi](\textbf{x})+\sum^{\infty}_{n=1}\sqrt{\lambda_{n}}b_{n}(\textbf{x})Y_{n}(\theta),$$
(3.3)
and the truncated K-L expansion of $\phi(\textbf{x},\theta)$ is
$$\phi_{N}(\textbf{x},\theta)=E[\phi](\textbf{x})+\sum^{N}_{n=1}\sqrt{\lambda_{n}}b_{n}(\textbf{x})Y_{n}(\theta).$$
(3.4)
Here $\{Y_{n}\}^{\infty}_{n=1}$ are mutually uncorrelated random variables with zero mean and unit variance, given by $Y_{n}(\theta)=\frac{1}{\sqrt{\lambda_{n}}}\int_{D}(\phi(\textbf{x},\theta)-E[\phi](\textbf{x}))b_{n}(\textbf{x})\,\mathrm{d}\textbf{x}$.
By Mercer’s theorem [20], it holds
$$\sup_{\textbf{x}\in D}E[(\phi-\phi_{N})^{2}](\textbf{x})=\sup_{\textbf{x}\in D}\sum_{n=N+1}^{+\infty}\lambda_{n}b_{n}^{2}(\textbf{x})\rightarrow 0\quad\text{as}~N\rightarrow\infty.$$
(3.5)
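The eigenproblem (3.1) and the identities (3.2), (3.6) can be checked numerically. The sketch below is a minimal Nyström-type discretization, assuming the one-dimensional domain $D=[0,1]$ and the exponential covariance $cov[\phi](x_{1},x_{2})=e^{-|x_{1}-x_{2}|}$; both choices are illustrative assumptions, not taken from this paper.

```python
import numpy as np

# Nystrom-type sketch of the K-L eigenproblem (3.1) on the assumed domain
# D = [0, 1] with the assumed covariance cov[phi](x1, x2) = exp(-|x1 - x2|).
M = 200
x = (np.arange(M) + 0.5) / M           # midpoint quadrature nodes
w = 1.0 / M                            # uniform quadrature weight
C = np.exp(-np.abs(x[:, None] - x[None, :]))
# The midpoint rule turns (3.1) into the symmetric eigenproblem (w*C) b = lambda b.
lam, B = np.linalg.eigh(w * C)         # eigh returns eigenvalues in ascending order
lam = lam[::-1]                        # reorder so that lambda_1 >= lambda_2 >= ...
# Discrete analogue of the trace identity in (3.2):
# sum_n lambda_n = int_D cov[phi](x, x) dx = 1 for this covariance.
assert abs(lam.sum() - 1.0) < 1e-8
assert np.all(np.diff(lam) <= 1e-12)   # eigenvalues decay
tail = lam[10:].sum()                  # squared truncation error (3.6) for N = 10
```

The columns of `B` (read in reverse order) are the discrete eigenfunctions $b_{n}$, and `tail` is the discrete counterpart of $||\phi-\phi_{N}||_{\widetilde{0}}^{2}$ in (3.6).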
In what follows we estimate the truncation error $\phi-\phi_{N}$ in the norms $||\cdot||_{\widetilde{0}}$ and $||\cdot||_{\widetilde{\infty}}$, respectively.
From (3.2) it follows that
$$||\phi-\phi_{N}||_{\widetilde{0}}^{2}=\sum_{n=N+1}^{+\infty}\lambda_{n}\quad\text{and}\quad||\phi-\phi_{N}||_{\widetilde{0}}\rightarrow 0\quad\text{as}~N\rightarrow+\infty.$$
(3.6)
Obviously the convergence rate of $||\phi-\phi_{N}||_{\widetilde{0}}$ depends strongly on the decay rate of the eigenvalues $\lambda_{n}$, which in turn depends on the regularity of the covariance function $cov[\phi]$. Generally, the smoother the covariance is, the faster the eigenvalues decay, and hence the faster $||\phi-\phi_{N}||_{\widetilde{0}}$ converges to zero.
Now we quote from [23] the following definition (Definition 3.1, related to the regularity of $cov[\phi]$) and lemma (Lemma 3.1, which gives the decay rate of the eigenvalues $\lambda_{n}$).
Definition 3.1.
[23]
The covariance function $cov[\phi]:D\times D\rightarrow R$ is said to be piecewise analytic/smooth on $D\times D$ if there exists a finite family $(D_{j})_{1\leq j\leq J}\subset R^{2}$ of open hypercubes such that $\overline{D}\subseteq\cup_{j=1}^{J}\overline{D}_{j}$, $D_{j}\cap D_{j^{\prime}}=\varnothing,~{}~{}\forall j\neq j^{\prime}$ and $cov[\phi]|_{D_{j}\times D_{j^{\prime}}}$ has an analytic/smooth continuation in a neighbourhood of
$\overline{D}_{j}\times\overline{D}_{j^{\prime}}$ for any pair $(j,j^{\prime})$.
Lemma 3.1.
[23]
If $cov[\phi]$ is piecewise analytic on $D\times D$, then for the eigenvalue sequence $\{\lambda_{n}\}_{n\geq 1}$, there exist constants $c_{1},c_{2}$ depending only on $cov[\phi]$ such that
$$0\leq\lambda_{n}\leq c_{1}e^{-c_{2}n^{1/2}},\quad\forall n\geq 1.$$
(3.7)
If $cov[\phi]$ is piecewise smooth on $D\times D$, then for any constant $s>0$ there exists a constant $c_{s}$ depending only on $cov[\phi]$ and $s$, such that
$$0\leq\lambda_{n}\leq c_{s}n^{-s},\quad\forall n\geq 1.$$
(3.8)
By Lemma 3.1, we immediately have the following convergence results.
Lemma 3.2.
If $cov[\phi]$ is piecewise analytic on $D\times D$, then there exist constants $c_{1},c_{2}$ depending only on $cov[\phi]$ such that
$$||\phi-\phi_{N}||_{\widetilde{0}}\leq\frac{2c_{1}}{c_{2}^{2}}(1+c_{2}N^{1/2})e^{-c_{2}N^{1/2}},\quad\forall N\geq 1.$$
(3.9)
If $cov[\phi]$ is piecewise smooth on $D\times D$, then for any $s>0$ there exists
$C_{s}$ depending only on $cov[\phi]$ and $s$, such that
$$||\phi-\phi_{N}||_{\widetilde{0}}\leq C_{s}N^{-s},\quad\forall N\geq 1.$$
(3.10)
To estimate $||\phi-\phi_{N}||_{\widetilde{\infty}}$,
we make the following assumption:
Assumption 3.1.
The random variables $\{Y_{n}(\theta)\}_{n=1}^{\infty}$ in the K-L expansion are independent and uniformly bounded with
$$||Y_{n}(\theta)||_{L^{\infty}(\Omega)}\leq C_{Y},~{}~{}~{}~{}~{}\forall n\geq 1,$$
where $C_{Y}$ is a positive constant.
Lemma 3.3.
[8, 23]
Suppose Assumption 3.1 holds.
If
$\text{cov}[\phi]$ is piecewise analytic on $D\times D$,
then there exists a constant $c>0$ such that, for any $s>0$, it holds
$$||\phi-\phi_{N}||_{\widetilde{\infty}}\leq Ce^{-c(1/2-s)N^{1/2}},\quad\forall N\geq 1,$$
(3.11)
where $C$ is a positive constant depending on $s,c,\text{cov}[\phi]$ and $J$ given in Definition 3.1.
If
$\text{cov}[\phi]$ is piecewise smooth on $D\times D$, then for any
$t>0,r>0$, it holds
$$||\phi-\phi_{N}||_{\widetilde{\infty}}\leq C^{\prime}N^{1-t(1-r)/2},\quad\forall N\geq 1,$$
(3.12)
where $C^{\prime}$ is a positive constant depending on $t,r,\text{cov}[\phi]$ and $J$.
Remark 3.1.
We note that we need to solve the integral equation
(3.1) to obtain the K-L expansion (3.3). For some special covariance functions, the equation can be solved analytically [10], but for more general cases numerical methods are required [8, 18, 23].
3.2 Finite dimensional approximations of $\widetilde{E}$, f, g
In this section, we use the K-L expansion to approximate $\widetilde{E}$, f and g.
For $\widetilde{E}$, assume its truncated K-L expansion is of the form
$$\widetilde{E}_{N}(\textbf{x},\theta)=\widetilde{E}_{N}(\textbf{x},Y_{1}(\theta),...,Y_{N}(\theta))=E[\widetilde{E}](\textbf{x})+\sum^{N}_{n=1}\sqrt{\widetilde{\lambda}_{n}}\widetilde{b}_{n}(\textbf{x})Y_{n}(\theta),$$
(3.13)
where $\{(\widetilde{\lambda}_{n},\widetilde{b}_{n}(\textbf{x}))\}_{n=1}^{N}$
and $\{Y_{n}(\theta)\}_{n=1}^{N}$ are the corresponding eigenpairs and random variables, respectively.
As for $\textbf{f}=(f_{1},f_{2})^{T}$ and $\textbf{g}=(g_{1},g_{2})^{T}$, we need to apply the K-L expansion to each of their components.
In this paper, following [1, 2] and to avoid introducing further notation, we assume the truncated K-L expansions of f and g take the following forms:
$$\textbf{f}_{N}(\textbf{x},\theta)=\textbf{f}_{N}(\textbf{x},Y_{1}(\theta),...,Y_{N}(\theta))=\left(\begin{array}{c}f_{1N}\\ f_{2N}\end{array}\right)=\left(\begin{array}{c}E[f_{1}](\textbf{x})\\ E[f_{2}](\textbf{x})\end{array}\right)+\sum^{N}_{n=1}\left(\begin{array}{c}\sqrt{\widehat{\lambda}_{1n}}\widehat{b}_{1n}(\textbf{x})\\ \sqrt{\widehat{\lambda}_{2n}}\widehat{b}_{2n}(\textbf{x})\end{array}\right)Y_{n}(\theta),$$
(3.14)
$$\textbf{g}_{N}(\textbf{x},\theta)=\textbf{g}_{N}(\textbf{x},Y_{1}(\theta),...,Y_{N}(\theta))=\left(\begin{array}{c}g_{1N}\\ g_{2N}\end{array}\right)=\left(\begin{array}{c}E[g_{1}](\textbf{x})\\ E[g_{2}](\textbf{x})\end{array}\right)+\sum^{N}_{n=1}\left(\begin{array}{c}\sqrt{\overline{\lambda}_{1n}}\overline{b}_{1n}(\textbf{x})\\ \sqrt{\overline{\lambda}_{2n}}\overline{b}_{2n}(\textbf{x})\end{array}\right)Y_{n}(\theta),$$
(3.15)
where $\{(\widehat{\lambda}_{in},\widehat{b}_{in}(\textbf{x}))\}_{n=1}^{N}$ and $\{(\overline{\lambda}_{in},\overline{b}_{in}(\textbf{x}))\}_{n=1}^{N}$, $i=1,2$, are the corresponding eigenpairs.
Remark 3.2.
In practice, the Young’s modulus $\widetilde{E}$, the body force f and the surface load g may be independent. In such cases, the random variables $\{Y_{n}(\theta)\}_{n=1}^{N}$ in the truncated K-L expansions (3.13)-(3.15) for $\widetilde{E},f_{1},f_{2},g_{1},g_{2}$ may be different from each other.
However, the analysis of this paper still applies to these cases.
3.3 Truncated mixed formulations
By replacing ${\widetilde{E}},\textbf{f},\textbf{g}$ with their truncated forms
${\widetilde{E}_{N}},\textbf{f}_{N},\textbf{g}_{N}$ in the bilinear form $a(\cdot,\cdot)$, given in (2.5), and the linear form $\ell(\cdot)$, given in (2.7),
we can obtain the following modified mixed variational formulations for the weak problem (2.4):
find $(\bm{\sigma}_{N},\textbf{u}_{N})\in L^{2}_{P}(\Omega;\Sigma_{D})\times L^{2}_{P}(\Omega;V_{D})$ such that
$$\left\{\begin{array}{ll}a_{N}(\bm{\sigma}_{N},\bm{\tau})-b(\bm{\tau},\textbf{u}_{N})=0,&\forall\bm{\tau}\in L^{2}_{P}(\Omega;\Sigma_{D}),\\ b(\bm{\sigma}_{N},\textbf{v})=\ell_{N}(\textbf{v}),&\forall\textbf{v}\in L^{2}_{P}(\Omega;V_{D}).\end{array}\right.$$
(3.16)
We recall that $\{Y_{n}(\theta)\}_{n=1}^{N}$ are the random variables used in the K-L expansions of $\widetilde{E}$, f and g, which are assumed to satisfy Assumption 3.1. In what follows we denote
$$Y:=(Y_{1},Y_{2},...,Y_{N}),\quad\Gamma_{n}:=Y_{n}(\Omega)\subset R,\quad\Gamma:=\prod_{n=1}^{N}\Gamma_{n},$$
(3.17)
and let $\rho:\Gamma\rightarrow R$ be the joint probability density function of random vector $Y$ with
$\rho\in L^{\infty}(\Gamma)$.
According to the Doob-Dynkin lemma [19], the weak solution of the modified problem (3.16) can be expressed in terms of the random vector $Y$ as
$$\textbf{u}_{N}(\textbf{x},\theta)=\textbf{u}_{N}(\textbf{x},Y),\quad\bm{\sigma}_{N}(\textbf{x},\theta)=\bm{\sigma}_{N}(\textbf{x},Y),$$
and, by denoting $\textbf{y}:=(y_{1},y_{2},\cdots,y_{N})$, the corresponding strong formulation for (3.16) is of the form
$$\left\{\begin{array}{ll}-{\bf div}\,\bm{\sigma}_{N}(\textbf{x},\textbf{y})=\textbf{f}_{N}(\textbf{x},\textbf{y}),&\forall(\textbf{x},\textbf{y})\in D\times\Gamma,\\ \bm{\sigma}_{N}(\textbf{x},\textbf{y})=\widetilde{E}_{N}\textbf{C}\,\epsilon(\textbf{u}_{N}(\textbf{x},\textbf{y})),&\forall(\textbf{x},\textbf{y})\in D\times\Gamma,\\ \textbf{u}_{N}(\textbf{x},\textbf{y})=0,&\forall(\textbf{x},\textbf{y})\in\partial D_{0}\times\Gamma,\\ \bm{\sigma}_{N}(\textbf{x},\textbf{y})\textbf{n}=\textbf{g}_{N}(\textbf{x},\textbf{y}),&\forall(\textbf{x},\textbf{y})\in\partial D_{1}\times\Gamma.\end{array}\right.$$
(3.18)
Recall that $\rho:\Gamma\rightarrow R$ is the joint probability density function of random vector $Y$. We introduce the weighted $L^{2}$-space
$$L^{2}_{\rho}(\Gamma):=\{v:\Gamma\rightarrow R~|~\int_{\Gamma}\rho v^{2}\,\mathrm{d}\textbf{y}<+\infty\}.$$
(3.19)
We note that from the norm definition (2.3) it follows
$$||w||_{\widetilde{m}}^{2}=\int_{\Gamma}\rho(\textbf{y})||w(\cdot,\textbf{y})||_{m}^{2}\,\mathrm{d}\textbf{y}=||w||_{L^{2}_{\rho}(\Gamma)\otimes H^{m}(D)}^{2},\quad\forall w\in L^{2}_{\rho}(\Gamma)\otimes H^{m}(D).$$
(3.20)
It is easy to see that the modified problem (3.16) is equivalent to the following
deterministic variational problem:
find $(\bm{\sigma}_{N},\textbf{u}_{N})\in(L^{2}_{\rho}(\Gamma)\otimes\Sigma_{D})\times(L^{2}_{\rho}(\Gamma)\otimes V_{D})$ such that
$$\left\{\begin{array}{ll}a_{N}(\bm{\sigma}_{N},\bm{\tau})-b_{N}(\bm{\tau},\textbf{u}_{N})=0,&\forall\bm{\tau}\in L^{2}_{\rho}(\Gamma)\otimes\Sigma_{D},\\ b_{N}(\bm{\sigma}_{N},\textbf{v})=\ell_{N}(\textbf{v}),&\forall\textbf{v}\in L^{2}_{\rho}(\Gamma)\otimes V_{D},\end{array}\right.$$
(3.21)
where
$$a_{N}(\bm{\sigma}_{N},\bm{\tau}):=\int_{\Gamma}\rho(\textbf{y})\int_{D}\frac{1}{\widetilde{E}_{N}}\,\bm{\sigma}_{N}:\textbf{C}^{-1}\bm{\tau}\,\mathrm{d}\textbf{x}\,\mathrm{d}\textbf{y},$$
(3.22)
$$b_{N}(\bm{\tau},\textbf{u}_{N}):=\int_{\Gamma}\rho(\textbf{y})\int_{D}\bm{\tau}:\epsilon(\textbf{u}_{N})\,\mathrm{d}\textbf{x}\,\mathrm{d}\textbf{y},$$
(3.23)
$$\ell_{N}(\textbf{v}):=\int_{\Gamma}\rho(\textbf{y})\int_{D}\textbf{f}_{N}\cdot\textbf{v}\,\mathrm{d}\textbf{x}\,\mathrm{d}\textbf{y}+\int_{\Gamma}\rho(\textbf{y})\int_{\partial D_{1}}\textbf{g}_{N}\cdot\textbf{v}\,\mathrm{d}s\,\mathrm{d}\textbf{y}.$$
(3.24)
The significance of the formulation (3.21) lies in that it turns the original problem (2.4) into a deterministic one with perturbed Young's modulus $\widetilde{E}$, body force f and surface load g. Lemma 3.4 shows that, if these perturbations, i.e. the truncation errors, are small enough, we can solve the deterministic problem (3.21) numerically so as to obtain an approximate solution of the original problem (2.4).
Remark 3.3.
In some applications it may be more efficient to solve the problem (3.21) numerically only on a subdomain $\widehat{\Gamma}\subset\Gamma$; of course, the corresponding approximate solution then carries no information on $\Gamma\setminus\widehat{\Gamma}$.
Lemma 3.4.
Suppose that Assumption 3.1 holds and the covariance function $cov[\widetilde{E}]$ of $\widetilde{E}$ is piecewise smooth (cf. Definition 3.1). Then, for sufficiently large $N$, the modified weak problem (3.16), or its equivalent problem (3.21), admits a unique solution $(\bm{\sigma}_{N},\textbf{u}_{N})\in(L^{2}_{\rho}(\Gamma)\otimes\Sigma_{D})\times(L^{2}_{\rho}(\Gamma)\otimes V_{D})$ such that
$$||\bm{\sigma}-\bm{\sigma}_{N}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{N}|_{\widetilde{1}}\lesssim||\widetilde{E}-\widetilde{E}_{N}||_{\widetilde{\infty}}\,||\bm{\sigma}||_{\widetilde{0}}+||\textbf{f}-\textbf{f}_{N}||_{\widetilde{0}}+||\textbf{g}-\textbf{g}_{N}||_{\widetilde{0},\partial D_{1}},$$
(3.25)
where $(\bm{\sigma},\textbf{u})\in L^{2}_{P}(\Omega;\Sigma_{D})\times L^{2}_{P}(\Omega;V_{D})$ is the solution of the weak problem (2.4).
Moreover, (i) if the covariance functions $cov[\widetilde{E}]$, $cov[\textbf{f}]$ and $cov[\textbf{g}]$
are piecewise analytic, then there exists a constant $r>0$, and a constant $C_{r}>0$ depending only on $cov[\widetilde{E}]$, $cov[\textbf{f}]$, $cov[\textbf{g}]$ and $r$,
such that
$$||\bm{\sigma}-\bm{\sigma}_{N}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{N}|_{\widetilde{1}}\lesssim C_{r}N^{1/2}e^{-rN^{1/2}}.$$
(3.26)
(ii) If $cov[\textbf{f}]$ and $cov[\textbf{g}]$
are piecewise smooth, then for any $s>0$, there exists
$C_{s}>0$ depending only on $cov[\widetilde{E}]$, $cov[\textbf{f}]$, $cov[\textbf{g}]$ and $s$, such that
$$||\bm{\sigma}-\bm{\sigma}_{N}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{N}|_{\widetilde{1}}\lesssim C_{s}N^{-s}.$$
(3.27)
Proof.
We first show that the modified problem (3.16) is well-posed. Since the uniform stability conditions for the bilinear form $b(\cdot,\cdot)$ and the linear form $\ell_{N}(\cdot)$ hold, it suffices to show that $\widetilde{E}_{N}$ is, for sufficiently large $N$, uniformly bounded from above and below away from zero a.e. in $D\times\Omega$. In view of Lemma 3.3 and the assumption (1.2), there exists a positive integer $N_{0}$ such that, for any $N>N_{0}$, it holds
$$e_{min}^{\prime}\leq\widetilde{E}_{N}\leq e_{max}^{\prime}\quad a.e.~\text{in}~D\times\Omega,$$
(3.28)
where $e_{min}^{\prime}$ and $e_{max}^{\prime}$ are two positive constants depending only on the bounds of $\widetilde{E}$, i.e. $e_{min}$ and $e_{max}$ in (1.2).
Thus, the corresponding uniform stability conditions for the bilinear form $a_{N}(\cdot,\cdot)$ follow from those for $a(\cdot,\cdot)$. As a result, the weak problem (3.16) admits a unique solution $(\bm{\sigma}_{N},\textbf{u}_{N})\in L^{2}_{P}(\Omega;\Sigma_{D})\times L^{2}_{P}(\Omega;V_{D})$ with the stability result
$$||\bm{\sigma}_{N}||_{\widetilde{0}}+|\textbf{u}_{N}|_{\widetilde{1}}\lesssim||\textbf{f}_{N}||_{\widetilde{0}}+||\textbf{g}_{N}||_{\widetilde{0},\partial D_{1}}$$
(3.29)
for $N>N_{0}$.
Next we turn to derive the estimate (3.25). Subtracting the corresponding equations in (2.4) and (3.16),
we have
$$\left\{\begin{array}{ll}a_{N}(\bm{\sigma}-\bm{\sigma}_{N},\bm{\tau})-b(\bm{\tau},\textbf{u}-\textbf{u}_{N})=a_{N}(\bm{\sigma},\bm{\tau})-a(\bm{\sigma},\bm{\tau}),&\forall\bm{\tau}\in L^{2}_{P}(\Omega;\Sigma_{D}),\\ b(\bm{\sigma}-\bm{\sigma}_{N},\textbf{v})=\ell(\textbf{v})-\ell_{N}(\textbf{v}),&\forall\textbf{v}\in L^{2}_{P}(\Omega;V_{D}).\end{array}\right.$$
(3.30)
Then the desired estimate (3.25) follows from the corresponding stability conditions.
By Lemmas 3.2-3.3 and the estimate (3.25), we immediately obtain the estimates (3.26)-(3.27).
∎
4 Stochastic hybrid stress finite element methods
In this section, we consider two types of stochastic finite element methods for the truncated deterministic variational problem (3.21): the $k\times h$ version and the $p\times h$ version. We use the PS hybrid stress quadrilateral finite element [17] to discretize the spatial field and $k$-version/$p$-version finite elements to discretize the stochastic field.
For convenience we assume that the spatial domain $D$ is a convex polygon and the stochastic field $\Gamma=\prod_{n=1}^{N}\Gamma_{n}$ is bounded (cf. Assumption 3.1).
4.1 Hybrid stress finite element spaces on the spatial field
Let $\mathcal{T}_{h}$ be a partition of $\bar{D}$ into conventional quadrilaterals with mesh size $h:=\max_{T\in\mathcal{T}_{h}}h_{T}$, where $h_{T}$ is the diameter of the quadrilateral $T\in\mathcal{T}_{h}$. Let $A_{i}(x^{(i)}_{1},x^{(i)}_{2})$, $1\leq i\leq 4$, be the four vertices of $T$, and $T_{i}$ the sub-triangle of $T$ with vertices $A_{i-1},A_{i},A_{i+1}$ (the index of $A_{i}$ is taken modulo 4). We assume that the partition $\mathcal{T}_{h}$ satisfies the following "shape-regularity" hypothesis: there exists a constant $\zeta>2$ independent of $h$ such that, for all $T\in\mathcal{T}_{h}$, it holds
$$h_{T}\leqslant\zeta\rho_{T},$$
(4.1)
where
$\rho_{T}:=\min_{1\leq i\leq 4}\{\text{diameter of the circle inscribed in }T_{i}\}$.
Let $\widehat{T}=[-1,1]\times[-1,1]$ be the reference square with vertices $\widehat{A}_{i}$, $1\leq i\leq 4$ (Fig. 1). Then there exists a unique invertible mapping $F_{T}$ that maps $\widehat{T}$ onto $T$ with $F_{T}(\widehat{A}_{i})=A_{i}$, $1\leq i\leq 4$.
The isoparametric bilinear mapping $(x_{1},x_{2})=F_{T}(\widehat{x}_{1},\widehat{x}_{2})$ is given by
$$x_{1}=a_{0}+a_{1}\widehat{x}_{1}+a_{2}\widehat{x}_{1}\widehat{x}_{2}+a_{3}\widehat{x}_{2},\qquad x_{2}=b_{0}+b_{1}\widehat{x}_{1}+b_{2}\widehat{x}_{1}\widehat{x}_{2}+b_{3}\widehat{x}_{2},$$
(4.2)
where $\widehat{x}_{1},\widehat{x}_{2}\in[-1,1]$ are the local isoparametric coordinates, and
$$\left(\begin{array}{cc}a_{0}&b_{0}\\ a_{1}&b_{1}\\ a_{2}&b_{2}\\ a_{3}&b_{3}\end{array}\right):=\frac{1}{4}\left(\begin{array}{cccc}1&1&1&1\\ -1&1&1&-1\\ 1&-1&1&-1\\ -1&-1&1&1\end{array}\right)\left(\begin{array}{cc}x_{1}^{(1)}&x_{2}^{(1)}\\ x_{1}^{(2)}&x_{2}^{(2)}\\ x_{1}^{(3)}&x_{2}^{(3)}\\ x_{1}^{(4)}&x_{2}^{(4)}\end{array}\right).$$
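The coefficient formula above is easy to exercise numerically. The Python sketch below (the sample quadrilateral is an arbitrary assumption) computes $(a_{i},b_{i})$ from the vertices and checks that $F_{T}$ sends the reference corners $\widehat{A}_{1}=(-1,-1)$, $\widehat{A}_{2}=(1,-1)$, $\widehat{A}_{3}=(1,1)$, $\widehat{A}_{4}=(-1,1)$ to $A_{1},\dots,A_{4}$; this corner ordering is assumed.

```python
import numpy as np

# Sketch of the coefficient formula for the isoparametric bilinear map (4.2),
# for an assumed sample quadrilateral with vertices A_1, ..., A_4.
verts = np.array([[0.0, 0.0],   # A_1 = (x1^(1), x2^(1))
                  [2.0, 0.0],   # A_2
                  [2.5, 1.5],   # A_3
                  [0.0, 1.0]])  # A_4
S = 0.25 * np.array([[ 1,  1,  1,  1],
                     [-1,  1,  1, -1],
                     [ 1, -1,  1, -1],
                     [-1, -1,  1,  1]], dtype=float)
coef = S @ verts                # rows: (a_0,b_0), (a_1,b_1), (a_2,b_2), (a_3,b_3)

def F_T(xh1, xh2):
    """Evaluate the bilinear map (4.2) at a reference point (xh1, xh2)."""
    basis = np.array([1.0, xh1, xh1 * xh2, xh2])
    return basis @ coef

# F_T must send the reference corners to the physical vertices A_1..A_4.
ref = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
for (xh1, xh2), A in zip(ref, verts):
    assert np.allclose(F_T(xh1, xh2), A)
```

Note that the first row of `coef`, that is $(a_{0},b_{0})$, is simply the vertex average, as the formula implies.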
In Pian-Sumihara's hybrid stress finite element (abbr. PS element) method for deterministic plane elasticity problems, the piecewise isoparametric bilinear interpolation is used for the displacement approximation, namely the displacement approximation space ${V_{D}}_{h}\subset V_{D}$ is chosen as
$${V_{D}}_{h}:=\{\textbf{v}\in V_{D}:\widehat{\textbf{v}}=\textbf{v}|_{T}\circ F_{T}\in span\{1,\hat{x}_{1},\hat{x}_{2},\hat{x}_{1}\hat{x}_{2}\}^{2},~~\forall\,T\in\mathcal{T}_{h}\}.$$
(4.3)
In other words, for $\textbf{v}=(\upsilon,\omega)^{T}\in{V_{D}}_{h}$ with nodal values $\textbf{v}(A_{i})=(\upsilon_{i},\omega_{i})^{T}$ on $T$, $\widehat{\textbf{v}}$ is of the form
$$\widehat{\textbf{v}}=\left(\begin{array}{c}V_{0}+V_{1}\widehat{x}_{1}+V_{2}\widehat{x}_{1}\widehat{x}_{2}+V_{3}\widehat{x}_{2}\\ W_{0}+W_{1}\widehat{x}_{1}+W_{2}\widehat{x}_{1}\widehat{x}_{2}+W_{3}\widehat{x}_{2}\end{array}\right),$$
where
$$\left(\begin{array}{cc}V_{0}&W_{0}\\ V_{1}&W_{1}\\ V_{2}&W_{2}\\ V_{3}&W_{3}\end{array}\right)=\frac{1}{4}\left(\begin{array}{cccc}1&1&1&1\\ -1&1&1&-1\\ 1&-1&1&-1\\ -1&-1&1&1\end{array}\right)\left(\begin{array}{cc}\upsilon_{1}&\omega_{1}\\ \upsilon_{2}&\omega_{2}\\ \upsilon_{3}&\omega_{3}\\ \upsilon_{4}&\omega_{4}\end{array}\right).$$
To describe the stress approximation of the PS element, we abbreviate the symmetric tensor $\bm{\tau}=\left(\begin{array}{cc}\tau_{11}&\tau_{12}\\ \tau_{12}&\tau_{22}\end{array}\right)$ to $\bm{\tau}=(\tau_{11},\tau_{22},\tau_{12})^{T}$. The 5-parameter stress mode of the PS element takes the following form on $\widehat{T}$:
$$\widehat{\bm{\tau}}=\left(\begin{array}{c}\widehat{\tau}_{11}\\ \widehat{\tau}_{22}\\ \widehat{\tau}_{12}\end{array}\right)=\left(\begin{array}{ccccc}1&0&0&\widehat{x}_{2}&\frac{a_{3}^{2}}{b_{3}^{2}}\widehat{x}_{1}\\ 0&1&0&\frac{b_{1}^{2}}{a_{1}^{2}}\widehat{x}_{2}&\widehat{x}_{1}\\ 0&0&1&\frac{b_{1}}{a_{1}}\widehat{x}_{2}&\frac{a_{3}}{b_{3}}\widehat{x}_{1}\end{array}\right)\bm{\beta}^{\tau}\quad\text{for}~\bm{\beta}^{\tau}=(\beta^{\tau}_{1},...,\beta^{\tau}_{5})^{T}\in R^{5}.$$
(4.4)
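The stress mode (4.4) can be assembled as a $3\times 5$ matrix acting on $\bm{\beta}^{\tau}$. The Python sketch below (the geometry coefficients $a_{1},a_{3},b_{1},b_{3}$ are arbitrary sample values, not taken from any particular element) also checks that at the element center the two linear modes vanish, leaving only the three constant stress modes:

```python
import numpy as np

# Sketch of the 5-parameter PS stress mode (4.4) on the reference square.
def ps_stress_basis(xh1, xh2, a1, a3, b1, b3):
    """3x5 matrix P with tau_hat = P @ beta, beta in R^5."""
    return np.array([
        [1, 0, 0, xh2,                   (a3**2 / b3**2) * xh1],
        [0, 1, 0, (b1**2 / a1**2) * xh2, xh1],
        [0, 0, 1, (b1 / a1) * xh2,       (a3 / b3) * xh1],
    ])

# At the element center (xh1, xh2) = (0, 0) the two linear modes vanish,
# so P(0, 0) = [I_3 | 0]: the three constant stress modes remain.
P0 = ps_stress_basis(0.0, 0.0, a1=1.0, a3=0.2, b1=0.1, b3=1.0)
assert np.allclose(P0[:, :3], np.eye(3)) and np.allclose(P0[:, 3:], 0.0)
```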
Then the corresponding stress approximation space for the PS finite element is
$${\Sigma_{D}}_{h}:=\{\bm{\tau}\in\Sigma_{D}:\widehat{\bm{\tau}}=\bm{\tau}|_{T}\circ F_{T}~\text{is of the form (4.4)},~\forall T\in\mathcal{T}_{h}\}.$$
(4.5)
4.2 Stochastic hybrid stress finite element method: $k\times h$-version
This subsection is devoted to the stability and a priori error analysis for the $k\times h$-version stochastic hybrid stress finite element method ($k\times h$-SHSFEM).
4.2.1 $k\times h$-SHSFEM scheme
We first use the same notations as in [1] to introduce a $k$-version tensor product finite element space on the stochastic field $\Gamma=\prod_{n=1}^{N}\Gamma_{n}\subset R^{N}$.
Consider a partition of $\Gamma$ consisting of a finite number of disjoint $R^{N}$-boxes $\gamma=\prod_{n=1}^{N}(a_{n}^{\gamma},b_{n}^{\gamma})$ with $(a_{n}^{\gamma},b_{n}^{\gamma})\subset\Gamma_{n}$, and set the mesh parameters $k_{n}:=\max_{\gamma}|b_{n}^{\gamma}-a_{n}^{\gamma}|$, $n=1,2,\cdots,N$.
Let $\textbf{q}=(q_{1},q_{2},...,q_{N})$ be a nonnegative integer multi-index. We define the $k$-version tensor product finite element space $Y_{\textbf{k}}^{\textbf{q}}$ as
$$Y_{\textbf{k}}^{\textbf{q}}:=\otimes_{n=1}^{N}Y_{k_{n}}^{q_{n}},\quad Y_{k_{n}}^{q_{n}}:=\left\{\varphi:\Gamma_{n}\rightarrow R:\varphi|_{(a_{n}^{\gamma},b_{n}^{\gamma})}\in span\{y_{n}^{\alpha}:\alpha=0,1,...,q_{n}\},\ \forall\gamma\right\}.$$
(4.6)
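Since no interelement continuity is imposed in (4.6), the dimension of $Y_{k_{n}}^{q_{n}}$ is the number of subintervals of $\Gamma_{n}$ times $(q_{n}+1)$, and dimensions multiply under the tensor product. The count below presumes the box partition of $\Gamma$ is a tensor-product grid, a common setting which we assume here; the degrees and partition sizes are arbitrary sample values.

```python
from math import prod

def dim_Yk(q, n_elems):
    """Dimension of the tensor-product space (4.6): each factor Y_{k_n}^{q_n}
    contributes (subintervals of Gamma_n) * (q_n + 1) degrees of freedom,
    and the factors multiply under the tensor product."""
    return prod(m * (qn + 1) for qn, m in zip(q, n_elems))

q = (2, 1, 3)          # assumed polynomial degrees q_1, q_2, q_3
n_elems = (4, 4, 2)    # assumed subintervals per stochastic direction
assert dim_Yk(q, n_elems) == (4 * 3) * (4 * 2) * (2 * 4)  # = 768
```

This count makes explicit how the stochastic dimension $N$ drives the cost of the $k\times h$-version: the dimension grows multiplicatively in $N$.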
The $k\times h$-SHSFEM scheme for the original weak problem (2.4), or the modified weak problem (3.21), reads: find $(\bm{\sigma}_{kh},\textbf{u}_{kh})\in(Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h})\times(Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h})$ such that
$$\left\{\begin{array}{ll}a_{N}(\bm{\sigma}_{kh},\bm{\tau}_{kh})-b_{N}(\bm{\tau}_{kh},\textbf{u}_{kh})=0,&\forall\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h},\\ b_{N}(\bm{\sigma}_{kh},\textbf{v}_{kh})=\ell_{N}(\textbf{v}_{kh}),&\forall\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}.\end{array}\right.$$
(4.7)
Here we recall that
$$Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}=span\{\varphi(\textbf{y})\bm{\tau}(\textbf{x}):\varphi\in Y_{\textbf{k}}^{\textbf{q}},\ \bm{\tau}\in{\Sigma_{D}}_{h}\},$$
$$Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}=span\{\varphi(\textbf{y})\textbf{v}(\textbf{x}):\varphi\in Y_{\textbf{k}}^{\textbf{q}},\ \textbf{v}\in{V_{D}}_{h}\},$$
and ${V_{D}}_{h}$, ${\Sigma_{D}}_{h}$ are defined in (4.3), (4.5), respectively.
4.2.2 Stability
To show the $k\times h$-SHSFEM scheme (4.7) admits a unique solution, we need some stability conditions.
We note that the continuity of $a_{N}(\cdot,\cdot)$, $b_{N}(\cdot,\cdot)$ and $\ell_{N}(\cdot)$ follows from their definitions. Then, according to the theory of mixed methods [3], it suffices to prove the following two discrete versions of the stability conditions.
$(\textbf{A}_{h})$ Discrete kernel-coercivity: for any $\bm{\tau}_{kh}\in Z_{kh}^{0}:=\{\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}:b_{N}(\bm{\tau}_{kh},\textbf{v}_{kh})=0,\ \forall\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}\}$, it holds
$$||\bm{\tau}_{kh}||^{2}_{\widetilde{0}}\lesssim a_{N}(\bm{\tau}_{kh},\bm{\tau}_{kh}).$$
(4.8)
$(\textbf{B}_{h})$ Discrete inf-sup condition: for any $\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}$, it holds
$$|\textbf{v}_{kh}|_{\widetilde{1}}\lesssim\sup_{0\neq\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}}\frac{b_{N}(\bm{\tau}_{kh},\textbf{v}_{kh})}{||\bm{\tau}_{kh}||_{\widetilde{0}}}.$$
(4.9)
To prove the stability condition $(\textbf{A}_{h})$, we need the following lemma [33]:
Lemma 4.1.
Assume that for any piecewise constant function $w$, i.e. $w\in L^{2}(D)$ with $w|_{T}=const$ for all $T\in\mathcal{T}_{h}$, there exists $\textbf{v}\in V_{Dh}$ with
$$||w||^{2}_{0}\lesssim\int_{D}w\,\text{div}\,\textbf{v}\,\mathrm{d}\textbf{x},\quad|\textbf{v}|^{2}_{1}\lesssim||w||^{2}_{0}.$$
Then, for any $\bm{\tau}_{h}\in\{\bm{\tau}_{h}\in\Sigma_{Dh}:\int_{D}\bm{\tau}_{h}:\epsilon(\textbf{v}_{h})\,\mathrm{d}\textbf{x}=0,~~\forall\textbf{v}_{h}\in V_{Dh}\}$, it holds
$$||\bm{\tau}_{h}||_{0}^{2}\lesssim\int_{D}\frac{1}{\widetilde{E}_{N}}\bm{\tau}_{h}:\textbf{C}^{-1}\bm{\tau}_{h}\,\mathrm{d}\textbf{x}.$$
We note that the assumption of this lemma, first used in [34] in the analysis of several quadrilateral nonconforming elements for incompressible elasticity, requires that the quadrilateral mesh be stable for the Stokes element Q1-P0.
As is well known, the only unstable case for Q1-P0 is the checkerboard mode. Therefore, any quadrilateral mesh subdivision of $D$ which breaks the checkerboard mode is sufficient for the uniform stability $(\textbf{A}_{h})$.
Lemma 4.2.
Under the same condition as in Lemma 4.1, the uniform discrete kernel-coercivity condition $(\textbf{A}_{h})$ holds.
Proof.
For any $\bm{\tau}_{kh}\in Z_{kh}^{0}$, by the definitions of the spaces $Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}$ and $Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}$ we easily have $\bm{\tau}_{kh}(\cdot,\textbf{y}^{\prime})\in\{\bm{\tau}_{h}\in{\Sigma_{D}}_{h}:\int_{D}\bm{\tau}_{h}:\epsilon(\textbf{v}_{h})\,\mathrm{d}\textbf{x}=0,~~\forall\textbf{v}_{h}\in{V_{D}}_{h}\}$ for any $\textbf{y}^{\prime}\in\Gamma$. From Lemma 4.1 it follows
$$\int_{D}\bm{\tau}_{kh}(\cdot,\textbf{y}^{\prime}):\bm{\tau}_{kh}(\cdot,\textbf{y}^{\prime})\,\mathrm{d}\textbf{x}\lesssim\int_{D}\frac{1}{\widetilde{E}_{N}(\cdot,\textbf{y}^{\prime})}\,\bm{\tau}_{kh}(\cdot,\textbf{y}^{\prime}):\textbf{C}^{-1}\bm{\tau}_{kh}(\cdot,\textbf{y}^{\prime})\,\mathrm{d}\textbf{x},\quad\forall\textbf{y}^{\prime}\in\Gamma,$$
(4.10)
which immediately implies $(\textbf{A}_{h})$.
∎
To prove the discrete inf-sup condition $(\textbf{B}_{h})$ we need the following lemma:
Lemma 4.3.
For any $\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}$, there exists $\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}$ such that, for any $T\in\mathcal{T}_{h}$,
$$\int_{\Gamma}\rho(\textbf{y})\int_{T}\bm{\tau}_{kh}:\epsilon(\textbf{v}_{kh})\,\mathrm{d}\textbf{x}\,\mathrm{d}\textbf{y}=||\bm{\tau}_{kh}||^{2}_{\widetilde{0},T}\gtrsim||\epsilon(\textbf{v}_{kh})||^{2}_{\widetilde{0},T}.$$
(4.11)
Proof.
The desired result is immediate from Lemma 4.4 in [33].
∎
Lemma 4.4.
The uniform discrete inf-sup condition $(\textbf{B}_{h})$ holds.
Proof.
From Lemma 4.3, for any $\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}$, there exists $\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}$ such that
$$||\bm{\tau}_{kh}||_{\widetilde{0}}|\textbf{v}_{kh}|_{\widetilde{1}}\lesssim(%
\sum_{T}\int_{\Gamma}\rho(\textbf{y})\int_{T}\bm{\tau}_{kh}:\bm{\tau}_{kh}%
\mathrm{d}\textbf{x}\mathrm{d}\textbf{y})^{\frac{1}{2}}(\sum_{T}\int_{\Gamma}%
\rho(\textbf{y})\int_{T}\epsilon(\textbf{v}_{kh}):\epsilon(\textbf{v}_{kh})%
\mathrm{d}\textbf{x}\mathrm{d}\textbf{y})^{\frac{1}{2}}$$
$$\lesssim\sum_{T}\int_{\Gamma}\rho(\textbf{y})\int_{T}\bm{\tau}_{kh}:\bm{\tau}_%
{kh}\mathrm{d}\textbf{x}\mathrm{d}\textbf{y}\lesssim\int_{\Gamma}\rho(\textbf{%
y})\int_{D}\bm{\tau}_{kh}:\epsilon(\textbf{v}_{kh})\mathrm{d}\textbf{x}\mathrm%
{d}\textbf{y},$$
where in the first inequality we used the Cauchy-Schwarz inequality together with the equivalence of $||\epsilon(\cdot)||_{\widetilde{0}}$ and the seminorm $|\cdot|_{\widetilde{1}}$ on the space $L^{2}_{P}(\Omega;~{}{V_{D}})$. Then the uniform discrete inf-sup condition $(\textbf{B}_{h})$ follows from
$$|\textbf{v}_{kh}|_{\widetilde{1}}\lesssim\frac{\int_{\Gamma}\rho(\textbf{y})%
\int_{T}\bm{\tau}_{kh}:\epsilon(\textbf{v}_{kh})\mathrm{d}\textbf{x}\mathrm{d}%
\textbf{y}}{||\bm{\tau}_{kh}||_{\widetilde{0}}}\leqslant\sup_{\bm{\tau}_{kh}^{%
\prime}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h}}\frac{\int_{%
\Gamma}\rho(\textbf{y})\int_{T}\bm{\tau}_{kh}^{\prime}:\epsilon(\textbf{v}_{kh%
})\mathrm{d}\textbf{x}\mathrm{d}\textbf{y}}{||\bm{\tau}_{kh}^{\prime}||_{%
\widetilde{0}}}.$$
∎
In light of Lemma 4.2 and Lemma 4.4, we immediately obtain the following existence and uniqueness result for the $k\times h$-SHSFEM approximation $(\bm{\sigma}_{kh},\textbf{u}_{kh})$:
Theorem 4.1.
Under the same condition as in Lemma 4.1, the discretization problem (4.7) admits a unique solution $(\bm{\sigma}_{kh},\textbf{u}_{kh})\in(Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma_{D}}_{h})\times(Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h})$.
4.2.3 Uniform error estimation
In what follows we derive a priori estimates of the errors $||\bm{\sigma}-\bm{\sigma}_{kh}||_{\widetilde{0}}$ and $|\textbf{u}-\textbf{u}_{kh}|_{\widetilde{1}}$ which are uniform with respect to the Lamé constant $\lambda\in(0,+\infty)$, where $(\bm{\sigma},\textbf{u})\in(L^{2}_{P}(\Omega;~{}\Sigma_{D}))\times(L^{2}_{P}(\Omega;~{}{V_{D}}))$ is the solution of the weak problem (2.4).
Let $(\bm{\sigma}_{N},\textbf{u}_{N})\in(L^{2}_{\rho}(\Gamma)\otimes{\Sigma_{D}})\times(L^{2}_{\rho}(\Gamma)\otimes{V_{D}})$ be the solution of the truncated weak problem (3.21). By the triangle inequality it holds
$$||\bm{\sigma}-\bm{\sigma}_{kh}||_{\widetilde{0}}\leq||\bm{\sigma}-\bm{\sigma}_%
{N}||_{\widetilde{0}}+||\bm{\sigma}_{N}-\bm{\sigma}_{kh}||_{\widetilde{0}},$$
(4.12)
$$|\textbf{u}-\textbf{u}_{kh}|_{\widetilde{1}}\leq|\textbf{u}-\textbf{u}_{N}|_{%
\widetilde{1}}+|\textbf{u}_{N}-\textbf{u}_{kh}|_{\widetilde{1}},$$
(4.13)
where the perturbation errors, $||\bm{\sigma}-\bm{\sigma}_{N}||_{\widetilde{0}}$ and $|\textbf{u}-\textbf{u}_{N}|_{\widetilde{1}}$, are estimated by Lemma 3.4. For the finite element approximation error terms $||\bm{\sigma}_{N}-\bm{\sigma}_{kh}||_{\widetilde{0}}$ and $|\textbf{u}_{N}-\textbf{u}_{kh}|_{\widetilde{1}}$, from the stability $(\textbf{A}_{h})$, $(\textbf{B}_{h})$ and the standard theory of mixed finite element methods [3] it follows
$$||\bm{\sigma}_{N}-\bm{\sigma}_{kh}||_{\widetilde{0}}+|\textbf{u}_{N}-\textbf{u%
}_{kh}|_{\widetilde{1}}\lesssim\inf_{\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf%
{q}}\otimes{\Sigma_{D}}_{h}}||\bm{\sigma}_{N}-\bm{\tau}_{kh}||_{\widetilde{0}}%
+\inf_{\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h}}|%
\textbf{u}_{N}-\textbf{v}_{kh}|_{\widetilde{1}}.$$
(4.14)
To further estimate the right-hand-side terms of the above inequality, we need some regularity of the solution $(\bm{\sigma}_{N},\textbf{u}_{N})$.
In fact, it is well-known that the following regularity holds:
$$||\bm{\sigma}_{N}(\cdot,\textbf{y})||_{1}+||\textbf{u}_{N}(\cdot,\textbf{y})||%
_{2}\lesssim||\textbf{f}_{N}(\cdot,\textbf{y})||_{0}+||\textbf{g}_{N}(\cdot,%
\textbf{y})||_{0,\partial D_{1}},\quad\forall\textbf{y}\in\Gamma.$$
(4.15)
On the other hand, in view of (3.28) and the truncated K-L expansions (3.13)-(3.15), and by taking
derivatives with respect to $y_{n}$ in (3.18), standard
inductive arguments yield
$$\frac{||\partial_{y_{n}}^{q_{n}+1}\bm{\sigma}_{N}(\cdot,\textbf{y})||_{0}}{(q_%
{n}+1)!}+\frac{|\partial_{y_{n}}^{q_{n}+1}\textbf{u}_{N}(\cdot,\textbf{y})|_{1%
}}{(q_{n}+1)!}\lesssim(2\gamma_{n})^{q_{n}+1}(||\textbf{f}_{N}(\cdot,\textbf{y%
})||_{0}+||\textbf{g}_{N}(\cdot,\textbf{y})||_{0,\partial D_{1}}+1),\quad%
\forall\textbf{y}\in\Gamma,$$
(4.16)
where
$$\gamma_{n}:=\max\{\frac{1}{{e}_{min}^{\prime}}\sqrt{\widetilde{\lambda}_{n}}||%
\widetilde{b}_{n}||_{L^{\infty}(D)},\sqrt{\widehat{\lambda}_{in}}||\widehat{b}%
_{in}||_{0}(i=1,2),\sqrt{\overline{\lambda}_{in}}||\overline{b}_{in}||_{0,%
\partial D_{1}}(i=1,2)\}.$$
(4.17)
Then, thanks to $Y_{\textbf{k}}^{\textbf{q}}=\otimes_{n=1}^{N}Y_{k_{n}}^{q_{n}}$ and the regularity (4.15)-(4.16), standard interpolation estimates yield
$$\displaystyle\inf_{\bm{\tau}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{\Sigma%
_{D}}_{h}}||\bm{\sigma}_{N}-\bm{\tau}_{kh}||_{\widetilde{0}}$$
$$\displaystyle\lesssim$$
$$\displaystyle h||\bm{\sigma}_{N}||_{\widetilde{1}}+\sum_{n=1}^{N}(\frac{k_{n}}{2})^{q_{n}+1}\frac{||\partial_{y_{n}}^{q_{n}+1}\bm{\sigma}_{N}||_{L^{2}(\Gamma)\otimes{\Sigma_{D}}}}{(q_{n}+1)!}$$
(4.18)
$$\displaystyle\lesssim$$
$$\displaystyle h+\sum_{n=1}^{N}({k_{n}\gamma_{n}})^{q_{n}+1},$$
$$\displaystyle\inf_{\textbf{v}_{kh}\in Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}%
}_{h}}|\textbf{u}_{N}-\textbf{v}_{kh}|_{\widetilde{1}}$$
$$\displaystyle\lesssim$$
$$\displaystyle h||\textbf{u}_{N}||_{\widetilde{2}}+\sum_{n=1}^{N}(\frac{k_{n}}{2})^{q_{n}+1}\frac{||\partial_{y_{n}}^{q_{n}+1}\textbf{u}_{N}||_{L^{2}(\Gamma)\otimes{V_{D}}}}{(q_{n}+1)!}$$
(4.19)
$$\displaystyle\lesssim$$
$$\displaystyle h+\sum_{n=1}^{N}({k_{n}\gamma_{n}})^{q_{n}+1}.$$
In light of the estimates (4.14) and (4.18)-(4.19), we immediately obtain the following conclusion.
Theorem 4.2.
Let $(\bm{\sigma}_{N},\textbf{u}_{N})\in(L^{2}_{\rho}(\Gamma)\otimes{\Sigma_{D}})%
\times(L^{2}_{\rho}(\Gamma)\otimes{V_{D}})$ and $(\bm{\sigma}_{kh},\textbf{u}_{kh})\in(Y_{\textbf{k}}^{\textbf{q}}\otimes{%
\Sigma_{D}}_{h})\times(Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h})$ be the solutions of (3.21) and (4.7), respectively. Then, under the same condition as in Lemma 4.1 and for sufficiently large $N$, it holds
$$||\bm{\sigma}_{N}-\bm{\sigma}_{kh}||_{\widetilde{0}}+|\textbf{u}_{N}-\textbf{u%
}_{kh}|_{\widetilde{1}}\lesssim h+\sum_{n=1}^{N}({k_{n}\gamma_{n}})^{q_{n}+1}.$$
(4.20)
Remark 4.1.
We notice that the estimate (4.20) is optimal with respect to the mesh parameters $h$ and $\textbf{k}=(k_{1},k_{2},\cdots,k_{N})$, but not optimal with respect to the polynomial degrees $\textbf{q}=(q_{1},q_{2},\cdots,q_{N})$, since it requires $k_{n}\gamma_{n}<1$.
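To make the roles of the two mesh parameters concrete, the following short Python sketch (with illustrative parameter values of our own choosing, not taken from the paper) evaluates the error bound $h+\sum_{n=1}^{N}(k_{n}\gamma_{n})^{q_{n}+1}$ of (4.20) and shows that halving the stochastic mesh sizes $k_{n}$ reduces each stochastic term by the factor $2^{-(q_{n}+1)}$, i.e. the $k$-part converges at order $q_{n}+1$, provided $k_{n}\gamma_{n}<1$:

```python
import numpy as np

# Illustrative (hypothetical) parameter values, not taken from the paper:
# N = 3 stochastic dimensions, rates gamma_n as in (4.17), degrees q_n.
gamma = np.array([0.8, 0.4, 0.2])
q = np.array([2, 2, 2])
h = 0.05  # spatial mesh size

def bound(k_scale):
    """Evaluate h + sum_n (k_n * gamma_n)^(q_n + 1) with k_n = k_scale."""
    k = np.full_like(gamma, k_scale)
    return h + np.sum((k * gamma) ** (q + 1))

# Halving every k_n shrinks each stochastic term by 2^{-(q_n+1)} = 1/8;
# the restriction k_n * gamma_n < 1 is what makes these terms decay at all.
assert bound(0.5) < bound(1.0)
```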
The above theorem, together with Lemma 3.4, implies the following a priori error estimates for the $k\times h$-SHSFEM approximation $(\bm{\sigma}_{kh},\textbf{u}_{kh})$.
Theorem 4.3.
Let $(\bm{\sigma},\textbf{u})\in(L^{2}_{P}(\Omega;~{}\Sigma_{D}))\times(L^{2}_{P}(%
\Omega;~{}{V_{D}}))$ and $(\bm{\sigma}_{kh},\textbf{u}_{kh})\in(Y_{\textbf{k}}^{\textbf{q}}\otimes{%
\Sigma_{D}}_{h})\times(Y_{\textbf{k}}^{\textbf{q}}\otimes{V_{D}}_{h})$ be the solutions of (2.4) and (4.7), respectively. Then, under the same conditions as in Theorem 4.2, it holds
$$||\bm{\sigma}-\bm{\sigma}_{kh}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{kh}|_%
{\widetilde{1}}\lesssim N^{1/2}e^{-rN^{1/2}}+h+\sum_{n=1}^{N}(k_{n}\gamma_{n})%
^{q_{n}+1}$$
(4.21)
for any $r>0$ if the covariance functions of $\widetilde{E}$, f and g
are piecewise analytic, and holds
$$||\bm{\sigma}-\bm{\sigma}_{kh}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{kh}|_%
{\widetilde{1}}\lesssim N^{-s}+h+\sum_{n=1}^{N}(k_{n}\gamma_{n})^{q_{n}+1}$$
(4.22)
for any $s>0$ if the covariance functions of $\widetilde{E}$, f and g
are piecewise smooth.
Remark 4.2.
Here we recall that "$\lesssim$" denotes "$\leq C$" with $C$ a positive constant independent of $\lambda$, $h$, $N$ and $\textbf{k}$.
4.3 Stochastic hybrid stress finite element approximation: $p\times h$ version
As shown in Section 4.2 and Remark 4.1, the $k\times h$-SHSFEM is based on the $k$-partition of the stochastic field $\Gamma$ and requires the mesh parameters $k_{n}$ ($n=1,2,\cdots,N$) to be sufficiently small so as to attain optimal error estimates.
In this subsection we introduce a $p\times h$-version stochastic hybrid stress finite element method ($p\times h$-SHSFEM), which does not require refinement of $\Gamma$. We will show that this method achieves exponential rates of convergence with respect to the degrees of the polynomials used for the approximation. To this end, we first assume
$$\widetilde{E}_{N}\in C^{0}(\Gamma,L^{\infty}(D)),\quad\textbf{f}_{N}\in C^{0}(%
\Gamma,L^{2}(D)),\quad\textbf{g}_{N}\in C^{0}(\Gamma,L^{2}(\partial D_{1})).$$
(4.23)
Here
$$C^{0}(\Gamma,B):=\{v:\Gamma\rightarrow B,v~{}\text{is continuous in}~{}\textbf%
{y}~{}\text{and}~{}\max_{\textbf{y}\in\Gamma}||v(\textbf{y})||_{B}<+\infty\}$$
(4.24)
for any Banach space, $B$, of functions defined in $D$.
The above assumptions indicate that the solution, $(\bm{\sigma}_{N},\textbf{u}_{N})$, of the problem (3.21), satisfies
$$\bm{\sigma}_{N}\in C^{0}(\Gamma,{\Sigma_{D}}),\quad\textbf{u}_{N}\in C^{0}(%
\Gamma,{V_{D}}).$$
Let $\textbf{p}:=(p_{1},p_{2},...,p_{N})$ be a nonnegative integer multi-index. We define the $p$-version tensor product finite element space $Z^{\textbf{p}}$ as
$$Z^{\textbf{p}}:=\otimes_{n=1}^{N}Z_{n}^{p_{n}},\quad Z_{n}^{p_{n}}:=\left\{%
\varphi:\Gamma_{n}\rightarrow R:\varphi\in span\{y_{n}^{\alpha}:\alpha=0,1,...%
,p_{n}\}\right\}.$$
(4.25)
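As a minimal illustration (our own sketch with a hypothetical helper, not part of the paper), the monomial multi-indices spanning the tensor product space $Z^{\textbf{p}}=\otimes_{n=1}^{N}Z_{n}^{p_{n}}$ can be enumerated directly; its dimension is $\prod_{n=1}^{N}(p_{n}+1)$:

```python
import itertools

# Our own sketch: Z^p = (x)_n Z_n^{p_n} is spanned by the monomials
# y_1^{a_1} * ... * y_N^{a_N} with 0 <= a_n <= p_n.
def tensor_basis_indices(p):
    """Return the multi-indices (a_1,...,a_N) spanning Z^p for p = (p_1,...,p_N)."""
    return list(itertools.product(*(range(pn + 1) for pn in p)))

idx = tensor_basis_indices((2, 1))
# dim Z^p = prod_n (p_n + 1) = 3 * 2 = 6
assert len(idx) == 6 and (2, 1) in idx
```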
Then the $p\times h$-SHSFEM scheme reads as:
find $(\bm{\sigma}_{ph},\textbf{u}_{ph})\in(Z^{\textbf{p}}\otimes{\Sigma_{D}}_{h})%
\times(Z^{\textbf{p}}\otimes{V_{D}}_{h})$ such that
$$\left\{\begin{array}[]{ll}a_{N}(\bm{\sigma}_{ph},\bm{\tau}_{ph})-b_{N}(\bm{%
\tau}_{ph},\textbf{u}_{ph})=0,~{}~{}~{}~{}~{}~{}\forall\bm{\tau}_{ph}\in Z^{%
\textbf{p}}\otimes{\Sigma_{D}}_{h},\\
b_{N}(\bm{\sigma}_{ph},\textbf{v}_{ph})=\ell_{N}(\textbf{v}_{ph}),~{}~{}~{}~{}%
~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\forall\textbf{v}_{ph}\in Z^{\textbf{p}}%
\otimes{V_{D}}_{h}.\end{array}\right.$$
(4.26)
We note that $Z^{\textbf{p}}$ is a special case of the $k$-version tensor product finite element space $Y_{\textbf{k}}^{\textbf{q}}$; in this sense, the $p\times h$-SHSFEM can be viewed as a special case of the $k\times h$-SHSFEM. As a result, the corresponding stability conditions and the existence and uniqueness of the solution of the $p\times h$-SHSFEM scheme (4.26) follow from those of the $k\times h$-SHSFEM (cf. Lemma 4.2, Lemma 4.4 and Theorem 4.1).
Following the same routine as in Section 4.2.3 (cf. the estimates (4.12)-(4.14)), we only need to estimate the terms $\inf\limits_{\bm{\tau}_{ph}\in Z^{\textbf{p}}\otimes{\Sigma_{D}}_{h}}||\bm{\sigma}_{N}-{\bm{\tau}}_{ph}||_{\widetilde{0}}$ and $\inf\limits_{\textbf{v}_{ph}\in Z^{\textbf{p}}\otimes{V_{D}}_{h}}|\textbf{u}_{N}-\textbf{v}_{ph}|_{\widetilde{1}}.$
Since
$$\displaystyle\inf\limits_{\bm{\tau}_{ph}\in Z^{\textbf{p}}\otimes{\Sigma_{D}}_%
{h}}||\bm{\sigma}_{N}-{\bm{\tau}}_{ph}||_{\widetilde{0}}$$
$$\displaystyle\lesssim$$
$$\displaystyle\inf\limits_{\bm{\tau}_{p}\in Z^{\textbf{p}}\otimes{\Sigma_{D}}}|%
|\bm{\sigma}_{N}-{\bm{\tau}}_{p}||_{\widetilde{0}}+\inf\limits_{\bm{\tau}_{h}%
\in L^{2}_{\rho}(\Gamma)\otimes{\Sigma_{D}}_{h}}||\bm{\sigma}_{N}-{\bm{\tau}_{%
h}}||_{\widetilde{0}}$$
(4.27)
$$\displaystyle\lesssim$$
$$\displaystyle\inf\limits_{\bm{\tau}_{p}\in Z^{\textbf{p}}\otimes{\Sigma_{D}}}|%
|\bm{\sigma}_{N}-{\bm{\tau}}_{p}||_{\widetilde{0}}+h||\bm{\sigma}_{N}||_{%
\widetilde{1}},$$
$$\displaystyle\inf\limits_{\textbf{v}_{ph}\in Z^{\textbf{p}}\otimes{V_{D}}_{h}}%
|\textbf{u}_{N}-\textbf{v}_{ph}|_{\widetilde{1}}$$
$$\displaystyle\lesssim$$
$$\displaystyle\inf\limits_{\textbf{v}_{p}\in Z^{\textbf{p}}\otimes{V_{D}}}|%
\textbf{u}_{N}-\textbf{v}_{p}|_{\widetilde{1}}+\inf\limits_{\textbf{v}_{h}\in L%
^{2}_{\rho}(\Gamma)\otimes{V_{D}}_{h}}|\textbf{u}_{N}-\textbf{v}_{h}|_{%
\widetilde{1}}$$
(4.28)
$$\displaystyle\lesssim$$
$$\displaystyle\inf\limits_{\textbf{v}_{p}\in Z^{\textbf{p}}\otimes{V_{D}}}|%
\textbf{u}_{N}-\textbf{v}_{p}|_{\widetilde{1}}+h||\textbf{u}_{N}||_{\widetilde%
{2}},$$
it remains to estimate $\inf\limits_{\bm{\tau}_{p}\in Z^{\textbf{p}}\otimes{\Sigma_{D}}}||\bm{\sigma}_{N}-{\bm{\tau}}_{p}||_{\widetilde{0}}$ and $\inf\limits_{\textbf{v}_{p}\in Z^{\textbf{p}}\otimes{V_{D}}}|\textbf{u}_{N}-\textbf{v}_{p}|_{\widetilde{1}}.$
Recalling $Z^{\textbf{p}}=\otimes_{n=1}^{N}Z_{n}^{p_{n}}$, we easily have the following estimates:
$$\inf\limits_{\bm{\tau}_{p}\in Z^{\textbf{p}}\otimes{\Sigma_{D}}}||\bm{\sigma}_%
{N}-{\bm{\tau}}_{p}||_{\widetilde{0}}\lesssim\sum_{n=1}^{N}\inf_{\bm{\tau}_{p_%
{n}}\in Z_{n}^{p_{n}}\otimes\Sigma_{D}}||\bm{\sigma}_{N}-\bm{\tau}_{p_{n}}||_{%
C^{0}(\Gamma,\Sigma_{D})},$$
(4.29)
$$\inf\limits_{\textbf{v}_{p}\in Z^{\textbf{p}}\otimes{V_{D}}}|\textbf{u}_{N}-%
\textbf{v}_{p}|_{\widetilde{1}}\lesssim\sum_{n=1}^{N}\inf_{\textbf{v}_{p_{n}}%
\in Z_{n}^{p_{n}}\otimes V_{D}}||\textbf{u}_{N}-\textbf{v}_{p_{n}}||_{C^{0}(%
\Gamma,V_{D})}.$$
(4.30)
It then remains to estimate the right-hand-side terms of the above two inequalities.
Denote $\Gamma_{n}^{*}:=\prod_{i=1,i\neq n}^{N}\Gamma_{i}$; then $\Gamma=\Gamma_{n}\times\Gamma_{n}^{*}$, and for any $\textbf{y}\in\Gamma$ we write $\textbf{y}=(y_{n},\textbf{y}_{n}^{*})$ with $y_{n}\in\Gamma_{n}$ and $\textbf{y}_{n}^{*}\in\Gamma_{n}^{*}$.
We have the following lemma.
Lemma 4.5.
Let $(\bm{\sigma}_{N},\textbf{u}_{N})\in(L^{2}_{\rho}(\Gamma)\otimes{\Sigma_{D}})%
\times(L^{2}_{\rho}(\Gamma)\otimes{V_{D}})$ be the solution of the problem (3.21).
Then, for any $\textbf{x}\in D$ and $\textbf{y}=(y_{n},\textbf{y}_{n}^{*})\in\Gamma$, the solutions $\bm{\sigma}_{N}(\textbf{x},y_{n},\textbf{y}_{n}^{*})$ and $\textbf{u}_{N}(\textbf{x},y_{n},\textbf{y}_{n}^{*})$, viewed as functions of $y_{n}$, i.e. $\bm{\sigma}_{N}:\Gamma_{n}\rightarrow C^{0}(\Gamma_{n}^{*};{\Sigma_{D}})$ and $\textbf{u}_{N}:\Gamma_{n}\rightarrow C^{0}(\Gamma_{n}^{*};{V_{D}})$, can be analytically extended to the region of the complex plane
$$\Xi(\Gamma_{n};d_{n}):=\{z\in\mathbb{C}:\mathrm{dist}(z,\Gamma_{n})\leq d_{n}\},$$
with $0<d_{n}<\frac{1}{2\gamma_{n}}$ and $\gamma_{n}$ given by (4.17).
In addition, for all $z\in\Xi(\Gamma_{n};d_{n})$, it holds
$$||\bm{\sigma}_{N}(z)||_{C^{0}(\Gamma_{n}^{*};{\Sigma_{D}})}+|\textbf{u}_{N}(z)|_{C^{0}(\Gamma_{n}^{*};{V_{D}})}\lesssim\frac{1}{1-2d_{n}\gamma_{n}}(||\textbf{f}_{N}||_{C^{0}(\Gamma;L^{2}(D))}+||\textbf{g}_{N}||_{C^{0}(\Gamma;L^{2}(\partial D_{1}))}+1).$$
(4.31)
Proof.
Similar to (4.16), for $\textbf{y}\in\Gamma$, $r\geq 0$ and $n=1,2,...,N$ it holds
$$\frac{||\partial_{y_{n}}^{r}\bm{\sigma}_{N}(\cdot,\textbf{y})||_{0}}{r!}+\frac%
{|\partial_{y_{n}}^{r}\textbf{u}_{N}(\cdot,\textbf{y})|_{1}}{r!}\lesssim(2%
\gamma_{n})^{r}(||\textbf{f}_{N}(\cdot,\textbf{y})||_{0}+||\textbf{g}_{N}(%
\cdot,\textbf{y})||_{0,\partial D_{1}}+1).$$
(4.32)
For any $y_{n}\in\Gamma_{n}$, we define the power series
$$\bm{\sigma}_{N}(\textbf{x},z,\textbf{y}_{n}^{*})=\sum_{r=0}^{\infty}\frac{(z-y_{n})^{r}}{r!}\partial_{y_{n}}^{r}\bm{\sigma}_{N}(\textbf{x},y_{n},\textbf{y}_{n}^{*}),~{}~{}~{}~{}\textbf{u}_{N}(\textbf{x},z,\textbf{y}_{n}^{*})=\sum_{r=0}^{\infty}\frac{(z-y_{n})^{r}}{r!}\partial_{y_{n}}^{r}\textbf{u}_{N}(\textbf{x},y_{n},\textbf{y}_{n}^{*}).$$
Then it follows that
$$||\bm{\sigma}_{N}(\textbf{x},z,y_{n}^{*})||_{0}\leq\sum_{r=0}^{\infty}\frac{|z%
-y_{n}|^{r}}{r!}||\partial_{y_{n}}^{r}\bm{\sigma}_{N}(\textbf{x},y_{n},y_{n}^{%
*})||_{0},$$
$$|\textbf{u}_{N}(\textbf{x},z,y_{n}^{*})|_{1}\leq\sum_{r=0}^{\infty}\frac{|z-y_%
{n}|^{r}}{r!}|\partial_{y_{n}}^{r}\textbf{u}_{N}(\textbf{x},y_{n},y_{n}^{*})|_%
{1}.$$
Due to (4.32), we easily see that the above two series converge for all $z\in\Xi(\Gamma_{n};d_{n})$. Furthermore, by a continuation argument, the functions
$\bm{\sigma}_{N}$ and $\textbf{u}_{N}$ can be extended analytically to the whole region $\Xi(\Gamma_{n};d_{n})$, and the estimate (4.31) follows.
∎
In order to estimate the right-hand-side terms of (4.29)-(4.30), we need one more lemma by Babuška et al. [2].
Lemma 4.6.
Let $B$ be a Banach space, and let $L\subset\mathbb{R}$ be a bounded set.
Given a function $v\in C^{0}(L;B)$ which admits an analytic extension in the region of the complex plane $\Xi(L;d)=\{z\in\mathbb{C}:\mathrm{dist}(z,L)\leq d\}$ for some $d>0$, it holds
$$\min_{w\in P_{p}(L)\otimes B}||v-w||_{C^{0}(L;B)}\leq\frac{2}{\varrho-1}\varrho^{-p}\max_{z\in\Xi(L;d)}||v(z)||_{B},$$
(4.33)
where $P_{p}(L):=\mathrm{span}\{y^{s}:s=0,1,...,p\}$ and $\varrho:=\frac{2d}{|L|}+\sqrt{1+\frac{4d^{2}}{|L|^{2}}}>1$.
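The exponential decay predicted by (4.33) can be checked numerically. The following sketch (our own, with an illustrative analytic function; Chebyshev interpolation serves as a near-best polynomial approximation) verifies that for $v(y)=1/(2-y)$ on $L=[-1,1]$, which is analytic in $\Xi(L;d)$ for any $d<1$, the approximation error decays at least as fast as $\varrho^{-p}$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# v(y) = 1/(2 - y) has its pole at distance 1 from L = [-1, 1], so we may
# take any d < 1; here d = 0.9 and varrho as defined in Lemma 4.6.
d, L_len = 0.9, 2.0
rho = 2 * d / L_len + np.sqrt(1 + 4 * d**2 / L_len**2)

v = lambda y: 1.0 / (2.0 - y)
grid = np.linspace(-1, 1, 2001)

def cheb_err(p):
    """Max error of degree-p Chebyshev interpolation of v on [-1, 1]."""
    nodes = np.cos((2 * np.arange(p + 1) + 1) * np.pi / (2 * (p + 1)))
    coef = C.chebfit(nodes, v(nodes), p)
    return np.max(np.abs(v(grid) - C.chebval(grid, coef)))

errs = [cheb_err(p) for p in range(2, 12)]
# Successive errors shrink by at least the factor 1/varrho, consistent
# with the bound (4.33) (in fact faster, since the pole is at distance 1).
assert all(e2 < e1 / rho for e1, e2 in zip(errs, errs[1:]))
```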
In light of (4.27)-(4.30) and Lemmas 4.5-4.6, we immediately obtain the following result.
Theorem 4.4.
Let $(\bm{\sigma}_{N},\textbf{u}_{N})\in(L^{2}_{\rho}(\Gamma)\otimes{\Sigma_{D}})%
\times(L^{2}_{\rho}(\Gamma)\otimes{V_{D}})$ and $(\bm{\sigma}_{ph},\textbf{u}_{ph})\in(Z^{\textbf{p}}\otimes{\Sigma_{D}}_{h})%
\times(Z^{\textbf{p}}\otimes{V_{D}}_{h})$ be the solutions of (3.21) and (4.26), respectively. Then, under the same condition as in Lemma 4.1 and for sufficiently large $N$, it holds
$$||\bm{\sigma}_{N}-\bm{\sigma}_{ph}||_{\widetilde{0}}+|\textbf{u}_{N}-\textbf{u}_{ph}|_{\widetilde{1}}\lesssim h+\sum_{n=1}^{N}{\varrho_{n}}^{-p_{n}},$$
(4.34)
where $\varrho_{n}=\frac{2d_{n}}{|\Gamma_{n}|}+\sqrt{1+\frac{4d_{n}^{2}}{|\Gamma_{n}|%
^{2}}}$ and $0<d_{n}<\frac{1}{2\gamma_{n}}$.
The above theorem, together with Lemma 3.4, implies the following a priori error estimates for the $p\times h$-SHSFEM approximation $(\bm{\sigma}_{ph},\textbf{u}_{ph})$.
Theorem 4.5.
Let $(\bm{\sigma},\textbf{u})\in(L^{2}_{P}(\Omega;~{}\Sigma_{D}))\times(L^{2}_{P}(\Omega;~{}{V_{D}}))$ and $(\bm{\sigma}_{ph},\textbf{u}_{ph})\in(Z^{\textbf{p}}\otimes{\Sigma_{D}}_{h})\times(Z^{\textbf{p}}\otimes{V_{D}}_{h})$ be the solutions of (2.4) and (4.26), respectively. Then, under the same conditions as in Theorem 4.4, it holds
$$||\bm{\sigma}-\bm{\sigma}_{ph}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{ph}|_{\widetilde{1}}\lesssim N^{1/2}e^{-rN^{1/2}}+h+\sum_{n=1}^{N}{\varrho_{n}}^{-p_{n}}$$
(4.35)
for any $r>0$ if the covariance functions of $\widetilde{E}$, f and g
are piecewise analytic, and holds
$$||\bm{\sigma}-\bm{\sigma}_{ph}||_{\widetilde{0}}+|\textbf{u}-\textbf{u}_{ph}|_{\widetilde{1}}\lesssim N^{-s}+h+\sum_{n=1}^{N}{\varrho_{n}}^{-p_{n}}$$
(4.36)
for any $s>0$ if the covariance functions of $\widetilde{E}$, f and g
are piecewise smooth.
Remark 4.3.
This theorem shows that the $p\times h$-SHSFEM yields exponential rates of convergence with respect to the degrees $(p_{1},p_{2},...,p_{N})$ of the polynomials used for the approximation.
5 Numerical examples
In this section we present two numerical examples to test the performance of the proposed $p\times h$-version stochastic hybrid stress finite element method. We note that the $p\times h$-SHSFEM can be viewed as a particular case of the $k\times h$ version. For convenience we denote
$$e_{u}:=\frac{|\textbf{u}-\textbf{u}_{h}|_{\widetilde{1}}}{|\textbf{u}|_{%
\widetilde{1}}},\quad e_{\sigma}:=\frac{||\bm{\sigma}-\bm{\sigma}_{h}||_{%
\widetilde{0}}}{||\bm{\sigma}||_{\widetilde{0}}},$$
where $(\textbf{u}_{h},\bm{\sigma}_{h})$ is the corresponding stochastic finite element approximation to the exact solution $(\textbf{u},\bm{\sigma})$.
Example 1 : stochastic plane stress problem
Set the spatial domain $D=(0,10)\times(-1,1)$ with meshes as in Figure 2. The body force f and the surface traction g on $\partial D_{1}=\{(x_{1},x_{2})\in[0,10]\times[-1,1]:x_{1}=10~{}\text{or}~{}x_{%
2}=\pm 1\}$ are given by
$$\textbf{f}=(0,0)^{T},\quad\textbf{g}|_{x_{1}=10}=(-2\widetilde{E}x_{2},0)^{T},%
\quad\textbf{g}|_{x_{2}=\pm 1}=(0,0)^{T}.$$
The exact solution $(\textbf{u},\bm{\sigma})$ is of the form
$$\textbf{u}=\left(\begin{array}[]{c}-2x_{1}x_{2}\\
x_{1}^{2}+\nu(x_{2}^{2}-1)\end{array}\right),\quad\bm{\sigma}=\left(\begin{%
array}[]{cc}-2\widetilde{E}x_{2}&0\\
0&0\end{array}\right),$$
where $\widetilde{E}$ is a uniform random variable on $[500,1500]$, and we set $\nu=0.25$.
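As a sanity check of Example 1 (our own verification sketch, not part of the paper's computations), one can confirm symbolically that the stated displacement field reproduces the stated stress under the plane-stress constitutive law and that the stress field is in equilibrium with $\textbf{f}=\textbf{0}$:

```python
import sympy as sp

# Symbols: spatial coordinates, Young's modulus realization, Poisson ratio.
x1, x2, E, nu = sp.symbols('x1 x2 E nu')
u1, u2 = -2 * x1 * x2, x1**2 + nu * (x2**2 - 1)

# Linearized strain tensor epsilon(u).
e11, e22 = sp.diff(u1, x1), sp.diff(u2, x2)
e12 = sp.Rational(1, 2) * (sp.diff(u1, x2) + sp.diff(u2, x1))

# Plane-stress law: sigma_11 = E/(1-nu^2) (e11 + nu e22), etc.
fac = E / (1 - nu**2)
s11 = sp.simplify(fac * (e11 + nu * e22))
s22 = sp.simplify(fac * (e22 + nu * e11))
s12 = sp.simplify(fac * (1 - nu) * e12)

# The stress matches sigma = [[-2 E x2, 0], [0, 0]] as stated.
assert sp.simplify(s11 + 2 * E * x2) == 0 and s22 == 0 and s12 == 0
# Equilibrium div(sigma) + f = 0 with f = 0 (E constant in x).
assert sp.diff(s11, x1) + sp.diff(s12, x2) == 0
assert sp.diff(s12, x1) + sp.diff(s22, x2) == 0
```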
In the computation we use the exact form of the stochastic coefficient $\widetilde{E}$ and take $N=1$, so there is no truncation error caused by the K-L expansion in the approximation.
Numerical results at different meshes and different values of $p$ are listed in Tables
1-2. For comparison we also list results computed by a stochastic finite element called $PC\times h$ method, where the polynomial chaos (PC) method [9] and the PS element method are used in the stochastic field $\Gamma$ and the space domain $D$, respectively. In the $PC\times h$ method, $p$ denotes the degree of polynomial chaos. We note that the computational costs of the $PC\times h$ method and the $p\times h$-SHSFEM are almost the same with the same $p$.
From the numerical results we can see that the solutions become more accurate as $p$ increases and as the meshes are refined. In particular, $p=1$ and $p=2$ for the $p\times h$-SHSFEM give almost the same results, which implies that the solutions are sufficiently accurate with respect to the $p$-version approximation of the stochastic field for the given spatial meshes. In these cases, the $p\times h$-SHSFEM is of first-order accuracy in the mesh size $h$ for the displacement approximation and yields quite accurate results for the stress approximation. Moreover, the $p\times h$-SHSFEM is more accurate than the $PC\times h$ method at the same $p$.
Example 2 : stochastic plane strain problem
The spatial domain $D$ and the meshes are the same as in Figure 2.
The body force $\textbf{f}=(0,0)^{T}$. The surface traction g on $\partial D_{1}=\{(x_{1},x_{2})\in[0,10]\times[-1,1]:x_{1}=10~{}or~{}x_{2}=\pm 1\}$ is given by $\textbf{g}|_{x_{1}=10}=(-2\widetilde{E}x_{2},0)^{T}$, $\textbf{g}|_{x_{2}=\pm 1}=(0,0)^{T}$, and the exact solution $(\textbf{u},\bm{\sigma})$ is of the form
$$\textbf{u}=\left(\begin{array}[]{c}-2(1-\nu^{2})x_{1}x_{2}\\
(1-\nu^{2})x_{1}^{2}+\nu(1+\nu)(x_{2}^{2}-1)\end{array}\right),~{}~{}~{}~{}~{}%
~{}~{}~{}~{}~{}~{}~{}~{}\bm{\sigma}=\left(\begin{array}[]{cc}-2\widetilde{E}x_%
{2}&0\\
0&0\end{array}\right),$$
where $\widetilde{E}=1+\xi^{2}$ and $\xi$ is a standard Gaussian random variable.
Similar to Example 1, in the computation we use the exact form of the stochastic coefficient $\widetilde{E}$ and take $N=1$.
Numerical results at different meshes, different values of $p$ and different values of Poisson ratio $\nu$ are listed in Tables 3-8. For comparison we also list results computed by a stochastic finite element called $p\times$bilinear method, where the $p$-version method and the bilinear element are used in the stochastic field $\Gamma$ and the space domain $D$, respectively. We note that the computational costs of the $p\times$bilinear method and the $p\times h$-SHSFEM are almost the same.
Tables 3-4 show that the $p\times$bilinear method deteriorates as $\nu\rightarrow 0.5$ or $\lambda\rightarrow+\infty$, while Tables 5-8 show that the $p\times h$-SHSFEM yields uniformly accurate results for the displacement and stress approximations. Moreover, $p=0$ and $p=2$ give almost the same results, which implies that the solutions are accurate enough with respect to the $p$-version approximation of the stochastic field for given spatial meshes.
References
[1]
I. Babuška, R. Tempone, G.E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations, SIAM J. Numer. Anal. 42 (2) (2004) 800-825.
[2]
I. Babuška, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Review 52 (2) (2010) 317-355.
[3]
F. Brezzi, M. Fortin, Mixed and hybrid finite element methods, Springer-Verlag, New York, 1991.
[4]
F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers, Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge 8 (R-2) (1974) 129-151.
[5]
B. Cambou, Applications of first-order uncertainty analysis in the finite elements method in linear elasticity, Proc. 2nd Int. Conf. on Applications of Statistics and Probability in Soil and Structural Engineering, Aachen, Germany (1975) 67-87.
[6]
R.E. Caflisch, Monte Carlo and quasi-Monte Carlo methods, Acta Numerica 7 (1998) 1-49.
[7]
M.K. Deb, I.M. Babuška, J.T. Oden, Solution of stochastic partial differential equations using Galerkin finite element techniques, Comput. Methods Appl. Mech. Engrg. 190 (2001) 6359-6372.
[8]
P. Frauenfelder, C. Schwab, R.A. Todor, Finite
elements for elliptic problems with stochastic coefficients, Comput. Methods Appl. Mech. Engrg. 194 (2005) 205-228.
[9]
R.G. Ghanem, P.D. Spanos, Polynomial Chaos in Stochastic Finite Elements, Journal of Applied Mechanics. 57 (1) (1990) 197-202.
[10]
R.G. Ghanem, P.D. Spanos, Stochastic finite elements: A spectral approach, Springer-Verlag, New York, 1991.
[11]
M. Kamiński, Generalized perturbation-based stochastic finite element method in elastostatics, Computers and Structures 85 (2007) 586-594.
[12]
D. Lucor, C.-H. Su and G.E. Karniadakis, Generalized polynomial chaos and random oscillators, International Journal for Numerical Methods in Engineering, 60 (2004) 571-596.
[13]
H.G. Matthies, A. Keese, Galerkin methods for linear and nonlinear elliptic stochastic partial differential equations, Technical Report, Institute of Scientific Computing, Technical University Braunschweig, July 2003.
[14]
A. Narayan, D. Xiu, Stochastic collocation methods on unstructured grids in high dimensions via interpolation, SIAM Journal on Scientific Computing 34 (3) (2012) 1729-1752.
[15]
F. Nobile, R. Tempone, C.G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM Journal on Numerical Analysis, 46 (5) (2008) 2309-2345.
[16]
F. Nobile, R. Tempone, C.G. Webster, An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data, SIAM Journal on Numerical Analysis 46 (5) (2008) 2411-2442.
[17]
T.H.H. Pian, K. Sumihara, Rational approach for assumed stress finite elements, Int. J. Numer. Methods Engrg. 20 (9) (1984) 1685-1695.
[18]
K.K. Phoon, S.P. Huang, S.T. Quek, Implementation of Karhunen-Loeve expansion for simulation using a wavelet-Galerkin scheme, Probabilistic Engineering Mechanics, 17 (2002) 293-303.
[19]
B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, 5th ed., Springer-Verlag, Berlin, 1998.
[20]
F. Riesz, B. Sz.-Nagy, Functional Analysis, Dover, New York, 1990.
[21]
B. Sudret, A.D. Kiureghian, Stochastic finite element methods and reliability: a state-of-the-art report, Department of Civil and Environmental Engineering, University of California, 2000.
[22]
M. Shinozuka, Monte Carlo solution of structural dynamics, Computers and Structures, 2 (1972) 855-874.
[23]
C. Schwab, R.A. Todor, Karhunen-Loève approximation of random fields by generalized fast multipole methods, Journal of Computational Physics 217 (2006) 100-122.
[24]
R.L. Taylor, P.J. Beresford, E.L. Wilson, A nonconforming element for stress analysis, International Journal for Numerical Methods in Engineering 10 (1976) 1211-1219.
[25]
K. Teferra, S.R. Arwade, G. Deodatis, Generalized variability response functions for two-dimensional elasticity problems, Comput. Methods Appl. Mech. Engrg. 272 (2014) 121-137.
[26]
E.L. Wilson, R.L. Taylor, W.P. Doherty, J. Ghaboussi, Incompatible displacement modes, Numer. Comput. Methods Struct. Mech. 43 (1973).
[27]
N. Wiener, The homogeneous chaos, American Journal of Mathematics, 60 (1938) 897-936.
[28]
D. Xiu, Fast numerical methods for stochastic computations : A review, Commun. Comput. Phys. February 5 (2-4) (2009) 242-272.
[29]
D. Xiu, G.E. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, J. Comput. Phys, 187 (2003) 137-167.
[30]
D. Xiu, G.E. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput. 24 (2) (2002) 619-644.
[31]
X.P. Xie, T.X. Zhou, Optimization of stress modes by energy compatibility for 4-node hybrid quadrilaterals, Int. J. Numer. Methods Engrg. 59 (2004) 293-313.
[32]
X.P. Xie, T.X. Zhou, Accurate 4-node quadrilateral elements with a new version of energy-compatible stress mode, Commun. Numer. Methods Engrg. 24 (2) (2008) 125-139.
[33]
G.Z. Yu, X.P. Xie, C. Carstensen, Uniform convergence and a posteriori error estimation for assumed stress hybrid finite element methods, Comput. Methods Appl. Mech. Engrg. 200 (2011) 2421-2433.
[34]
Z.M. Zhang, Analysis of some quadrilateral nonconforming elements for incompressible elasticity, SIAM J. Numer. Anal. 34 (2) (1997) 640-663.
[35]
T.X. Zhou, X.P. Xie, A unified analysis for stress/strain hybrid methods of high performance, Comput. Method Appl. Mech. Engrg. 191 (41-42) (2002) 4619-4640. |
Duality in the Azbel-Hofstadter problem and the two-dimensional d-wave superconductivity with a magnetic field
Y. Morita${}^{1}$ and Y. Hatsugai${}^{1,2}$
Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan${}^{1}$
PRESTO, Japan Science and Technology Corporation${}^{2}$
(November 21, 2020)
Abstract
A single-parameter family of lattice-fermion models is constructed. It is a deformation of the Azbel-Hofstadter problem by a parameter $h={\Delta}/t$ (the quantum parameter). A topological number is attached to each energy band. A duality between the classical limit ($h=+0$) and the quantum limit ($h=1$) is revealed in the energy spectrum and the topological number. The model has a close relation to the two-dimensional d-wave superconductivity with a magnetic field. Making use of the duality and a topological argument, we shed light on how the quasiparticles in a magnetic field behave, especially in the quantum limit.
PACS numbers: 73.40.Hm, 71.70.Ej, 72.15.Rn, 74.40.+k
Two-dimensional Dirac fermions with a gauge field are of current interest, e.g., in the context of the vortex state in two-dimensional d-wave superconductivity [1, 2, 3]. In our study, Dirac fermions with a gauge field are realized on a two-dimensional lattice. The model is a single-parameter deformation of the Azbel-Hofstadter problem [4, 5]. In this paper, the parameter $h$ is called the quantum parameter [6]. A topological number is assigned to each energy band [7, 8, 9]. As the quantum parameter is varied continuously from the classical limit ($h=+0$) to the quantum limit ($h=1$), the energy spectrum is reconstructed through the change of each topological number (plateau transition [9, 10, 11]). Although the two limits are not connected adiabatically, we find that there is a duality between the classical and the quantum regimes. The model has a close relation to the two-dimensional d-wave superconductivity with a magnetic field. Applying the duality and a topological argument, we provide insights into the quasiparticle spectrum. In the quantum limit, interference effects become relevant, especially at zero energy [3]. The existence of edge states is discussed as well; it reflects the non-trivial topology of each energy band [9, 12].
Let us define the key Hamiltonian of this paper, a single-parameter family of lattice-fermion models. It is a deformation of the Azbel-Hofstadter problem. The Hamiltonian is ${\cal H}=\sum_{l,m}{\bf c}_{l}^{\dagger}{\cal H}_{lm}{\bf c}_{m}$ with
$$\displaystyle{\cal H}_{lm}={e}^{iA_{lm}}\pmatrix{t_{lm}^{0}&\Delta_{lm}^{0}\cr{\Delta_{ml}^{0}}^{*}&-t_{ml}^{0}\cr},$$
(1)
${\bf c}_{n}^{\dagger}=\pmatrix{c_{n\uparrow}^{\dagger}&c_{n\downarrow}^{\dagger}\cr}$,
${\bf c}_{n}=\pmatrix{c_{n\uparrow}\cr c_{n\downarrow}\cr}$,
$A_{lm}=-A_{ml}{\in}{\bf R}$ and
${\displaystyle\sum_{\Box}}{A_{lm}}/2{\pi}={\phi}=p/q$
($p$ and $q$ are coprime integers and
the summation runs over four links around a plaquette).
Here
$l,m{\in}{\bf Z}^{2}$,
$t_{m{\pm}(1,0),m}^{0}=t_{m{\pm}(0,1),m}^{0}=t$,
${\Delta}_{m{\pm}(1,0),m}^{0}=-{\Delta}_{m{\pm}(0,1),m}^{0}={\Delta}$
$(t,{\Delta}{\in}{\bf R})$
and the other matrix elements are zero.
$t$ is set to be the unit of energy and
the relevant parameter,
the quantum parameter,
is defined by $h={\Delta}/t$.
The relation of this model to
the two-dimensional d-wave superconductivity
with a magnetic field
is discussed later.
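As a minimal numerical sketch (an editorial illustration, not part of the original analysis), the Hamiltonian (1) can be built in real space on a small torus. The Landau gauge, the lattice size and the parameter values below are assumptions; the check verifies hermiticity and the exact ${\pm}E$ symmetry of the spectrum, which follows from the purely nearest-neighbor (bipartite) structure of the model.

```python
import numpy as np

def hamiltonian(Lx, Ly, p, q, t=1.0, delta=1.0):
    """Real-space Nambu Hamiltonian of Eq. (1) on an Lx x Ly torus with
    flux phi = p/q per plaquette, Landau gauge (Lx must be a multiple of q)."""
    phi = p / q
    N = Lx * Ly
    H = np.zeros((2 * N, 2 * N), dtype=complex)

    def idx(x, y):
        return (x % Lx) * Ly + (y % Ly)

    bx = np.array([[t, delta], [delta, -t]])    # x-links: Delta^0 = +Delta
    by = np.array([[t, -delta], [-delta, -t]])  # y-links: Delta^0 = -Delta
    for x in range(Lx):
        for y in range(Ly):
            m, mx, my = idx(x, y), idx(x + 1, y), idx(x, y + 1)
            # x-hopping carries no Peierls phase in this gauge
            H[2*mx:2*mx+2, 2*m:2*m+2] += bx
            H[2*m:2*m+2, 2*mx:2*mx+2] += bx.conj().T
            # y-hopping carries the Peierls phase 2*pi*phi*x
            ph = np.exp(2j * np.pi * phi * x)
            H[2*my:2*my+2, 2*m:2*m+2] += ph * by
            H[2*m:2*m+2, 2*my:2*my+2] += (ph * by).conj().T
    return H

H = hamiltonian(6, 6, 1, 3)            # phi = 1/3, quantum limit h = 1
E = np.linalg.eigvalsh(H)
assert np.allclose(H, H.conj().T)      # hermitian by construction
assert np.allclose(E, -E[::-1])        # spectrum symmetric about zero
```

The ${\pm}E$ symmetry holds for any flux because the staggered sublattice transformation $c_{n}{\rightarrow}(-1)^{n_{x}+n_{y}}c_{n}$ maps $H{\rightarrow}-H$.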
In the classical limit $h=+0$,
this model
decouples into two essentially equivalent copies of
the Azbel-Hofstadter Hamiltonian
$\sum_{l,m}{c}_{l}^{\dagger}{e}^{iA_{lm}}t_{lm}^{0}{c}_{m}$.
The energy spectrum at ${\phi}=0$
is given by
$E={\pm}{\sqrt{A(k)^{2}+|B(k)|^{2}}}$
where
$A(k)=2t(\cos{k_{x}}+\cos{k_{y}})$,
$B(k)=2{\Delta}(\cos{k_{x}}-\cos{k_{y}})$
and $k{\in}[-\pi,\pi]{\times}[-\pi,\pi]$.
The upper and lower bands
touch at four points
$(\pm{\pi/2},\pm{\pi/2})$
in the Brillouin zone.
The low-lying excitations
around the gap-closing points
are described by
massless Dirac fermions.
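The ${\phi}=0$ dispersion quoted above is straightforward to evaluate; a minimal sketch (parameter values are illustrative assumptions):

```python
import numpy as np

t, delta = 1.0, 0.7          # any nonzero quantum parameter h = delta/t

def upper_band(kx, ky):
    """E_+(k) = +sqrt(A^2 + |B|^2) of the phi = 0 spectrum."""
    A = 2 * t * (np.cos(kx) + np.cos(ky))
    B = 2 * delta * (np.cos(kx) - np.cos(ky))
    return np.sqrt(A ** 2 + np.abs(B) ** 2)

print(upper_band(np.pi / 2, np.pi / 2))   # ~0: one of the four Dirac points
print(upper_band(0.0, 0.0))               # 4.0: bands well separated here
```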
One of the basic observables
is a topological number for the $n$-th band, ${\cal C}_{n}$
[7, 8, 9].
It is
$$\displaystyle{\cal C}_{n}=\frac{1}{2\pi i}\int d{\bf k}\;\hat{z}\cdot({\bf{\nabla}}\times{\bf A}_{n}),\;{\bf A}_{n}=\langle u_{n}({\bf{k}})|{\bf{\nabla}}|u_{n}({\bf{k}})\rangle$$
where ${\nabla}={\partial}/{\partial}{\bf k}$ and
$|u_{n}(\bf{k})\rangle$ is a Bloch vector
of the $n$-th band.
The integration $\int d{\bf k}$
runs over the Brillouin zone (torus).
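In practice this integral is often evaluated on a discretized Brillouin zone with the lattice prescription of Fukui, Hatsugai and Suzuki, in which gauge-invariant link variables are multiplied around each plaquette. The sketch below (an editorial illustration) applies that prescription to a generic gapped two-band Bloch Hamiltonian; the stand-in model is purely hypothetical and is not the model of the present paper, whose ${\phi}=0$ spectrum is gapless.

```python
import numpy as np

def chern_number(hk, Nk=40):
    """Lattice evaluation of C_n for the lower band of a 2x2 Bloch
    Hamiltonian hk(kx, ky) via plaquette products of link variables."""
    ks = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
    u = np.empty((Nk, Nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(hk(kx, ky))
            u[i, j] = v[:, 0]                     # lower-band Bloch vector
    total = 0.0
    for i in range(Nk):
        for j in range(Nk):
            ip, jp = (i + 1) % Nk, (j + 1) % Nk
            # gauge-invariant product of the four links around one plaquette
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total += np.angle(loop)
    return total / (2 * np.pi)

def two_band(m):
    """Hypothetical gapped two-band model (NOT the model of this paper)."""
    return lambda kx, ky: np.array(
        [[m + np.cos(kx) + np.cos(ky), np.sin(kx) - 1j * np.sin(ky)],
         [np.sin(kx) + 1j * np.sin(ky), -(m + np.cos(kx) + np.cos(ky))]])

print(round(chern_number(two_band(-1.0))))   # topological phase: |C| = 1
print(round(chern_number(two_band(-3.0))))   # trivial phase: C = 0
```

The plaquette sum returns an integer (up to rounding) whenever the band is gapped, independently of the gauge chosen for the Bloch vectors.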
The non-zero topological number results in
the existence of edge states.
In order to see this,
put the system on a cylinder and
introduce a fictitious flux through the cylinder
(it is equivalent to
a twist in the boundary condition)[13].
The edge states
move from one boundary to the other
as a fictitious flux quantum $hc/e$
is added adiabatically.
The number of edge states carried across
coincides with
the sum of the
topological numbers of the bands
below the Fermi energy[9, 12].
Due to the topological stability,
a singularity
necessarily accompanies
any change of a topological number
(plateau transition [9, 10, 11]).
The singularity is identified with
an energy-gap closing on some points
in the Brillouin zone.
In Figs. 1-3,
the energy spectra are shown.
As $h$ is varied
continuously
from the classical limit ($h=+0$)
to the quantum limit ($h=1$),
the energy spectrum is reconstructed
through the plateau transitions.
Although the two limits are not connected
adiabatically,
a main feature of the data
is that
there is a symmetry
between the classical and the quantum regimes.
It leads to the claim that
'there is a faithful correspondence
between
$\phi=(1/2+p/q)$ in the classical limit $h=+0$
and $\phi=p/q$ in the quantum limit $h=1$'.
We call this phenomenon duality.
As discussed above,
this model reduces to a doubled Azbel-Hofstadter problem
in the classical limit $h=+0$.
The duality is an analogue of a statistical transmutation
(composite fermion picture)
in the fractional quantum Hall effect[14].
In the composite fermion picture,
a locally attached flux to each fermion
is replaced with a global uniform flux.
In our case,
pairing effects (off-diagonal order)
play the role of shifting a flux globally.
It is reminiscent of a symmetry
between the d-wave pairing and the $\pi$-flux,
in other words, the $SU(2)$ symmetry[15].
Here some comments are in order.
At ${\phi}=0$ and $1/2$,
we proved analytically that
the claim is exact.
Moreover, as shown in Figs.1 and 3,
topological numbers
are consistent with the claim.
As an application of the duality,
let us study
the two-dimensional $d$-wave pairing model
with a magnetic field
especially in the quantum limit ($h=1$).
The pairing model is
$H=\sum_{l,m}{\bf c}_{l}^{\dagger}H_{lm}{\bf c}_{m}$
with
$$\displaystyle H_{lm}=\pmatrix{t_{lm}&\Delta_{lm}\cr\Delta_{ml}^{*}&-t_{ml}\cr}.$$
(2)
Under the
unitary transformation
$c_{n\uparrow}{\rightarrow}d_{n\uparrow}$,
$c_{n\downarrow}{\rightarrow}d_{n\downarrow}^{\dagger}$
(for ${\forall}n$),
it becomes
$H=\sum_{l,m}[d_{l\uparrow}^{\dagger}{t_{lm}}d_{m\uparrow}+d_{l\downarrow}^{\dagger}{t_{lm}}d_{m\downarrow}+d_{l\uparrow}^{\dagger}{\Delta_{lm}}d_{m\downarrow}^{\dagger}+d_{m\downarrow}{\Delta_{lm}^{*}}d_{l\uparrow}].$
It is equivalent to
the Bogoliubov-de Gennes Hamiltonian
for the singlet superconductivity.
Here
$t_{lm}^{*}=t_{ml}$ (hermiticity)
and $\Delta_{lm}=\Delta_{ml}$ (SU(2) symmetry)
are imposed as well.
It satisfies a relation
$-(\sigma_{y}H_{lm}\sigma_{y})^{*}=H_{lm}$,
which results in a particle-hole symmetry
in the energy spectrum.
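As a quick numerical sanity check (an editorial illustration; the variable names are ours), the block identity can be verified for a randomly drawn Nambu block satisfying the two stated constraints:

```python
import numpy as np

rng = np.random.default_rng(1)
t_lm = rng.normal() + 1j * rng.normal()
d_lm = rng.normal() + 1j * rng.normal()
t_ml, d_ml = np.conj(t_lm), d_lm          # hermiticity and SU(2) symmetry

H_lm = np.array([[t_lm, d_lm],
                 [np.conj(d_ml), -t_ml]])
sigma_y = np.array([[0.0, -1j], [1j, 0.0]])

# particle-hole relation: -(sigma_y H_lm sigma_y)^* = H_lm
assert np.allclose(-(sigma_y @ H_lm @ sigma_y).conj(), H_lm)
```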
The two-dimensional $d$-wave pairing model
with a magnetic field is defined
by the pairing model (2)
with
$$\displaystyle t_{lm}={\exp}(iA_{lm})t_{lm}^{0},\qquad{\Delta}_{lm}={\exp}(i{\varphi}_{lm}){\Delta}_{lm}^{0}$$
where
$A_{lm}=-A_{ml},{\varphi}_{lm}=({\varphi}_{l}+{\varphi}_{m})/2{\in}{\bf R}$
so that
$t_{lm}^{*}=t_{ml}$ (hermiticity)
and $\Delta_{lm}=\Delta_{ml}$ (SU(2) symmetry)
are satisfied[1, 3].
Performing a gauge transformation [1]
$$\displaystyle{\bf c}_{n}{\rightarrow}\pmatrix{{e}^{i{\varphi}_{n}}&0\cr 0&1\cr}{\bf c}_{n},$$
we obtain
$$\displaystyle H_{lm}={e}^{-iA_{lm}}\pmatrix{t_{lm}^{0}{e}^{-2iv_{lm}}&{\Delta}_{lm}^{0}{e}^{-iv_{lm}}\cr{{\Delta}_{ml}^{0}}^{*}{e}^{-iv_{lm}}&-t_{ml}^{0}\cr}$$
(3)
where $v_{lm}=({\varphi}_{l}-{\varphi}_{m})/2-A_{lm}$.
It is a lattice realization of
the Hamiltonian
discussed in ref.[1]
and $v_{lm}$ corresponds to
the superfluid velocity.
In the case $v_{lm}=0$,
it
reduces to the Hamiltonian (1) and
the quasiparticle spectrum consists of
'Landau levels'
$E={\pm}{\omega}_{H}\sqrt{n}$ ($n{\in}{\bf N}$)
in the continuum limit
[1].
In the following,
we set
the period of $A_{lm}$ and $v_{lm}$
to be $l_{x}{\times}l_{y}$.
Here we emphasize that,
in the context of superconductivity,
$A_{lm}$ and $v_{lm}$ are determined in
a self-consistent way.
Moreover,
the spatial variation
of $|{\Delta}_{lm}|$
plays a crucial role especially
near a vortex core.
In our study, however,
$A_{lm}$ and $v_{lm}$ are treated as adjustable parameters,
and we focus on
the duality and topological arguments,
which do not depend on the details of the potential.
In Fig.4,
an example of the density of states (DoS)
is shown
for the $d$-wave pairing model
with a magnetic field
in the quantum limit ($h=1$).
Although
the weak-field regime (${\phi}{\sim}0$)
is the experimentally relevant one,
we show the case ${\phi}=1/5$ for clarity.
In the weak-field regime,
the number of energy bands increases but
the following arguments are robust.
As the system at ${\phi}=p/q$ approaches
the quantum regime ($h{\sim}1$),
the quasiparticle spectrum
becomes close to
that of the Azbel-Hofstadter problem at ${\phi}=(1/2+p/q)$.
The Landau levels are mixed due to
lattice effects and $v_{lm}$.
It causes a singularity at zero energy [3] and
the broadening of each level ('Landau bands').
It is
due to quantum interference effects
through spatially varying potentials.
It is an analogue of
the vanishing DoS at zero energy
in random Dirac fermions [16, 17, 18].
In the case of ${\phi}=p/q$ ($q$=odd),
it is a natural consequence
of the duality.
In other words,
it can be interpreted through
the fact
that there is a singularity at zero energy,
i.e. $\rho(E=0)=0$,
in the Azbel-Hofstadter problem at ${\phi}=(1/2+p/q)$.
It is also to be noted that,
apart from the singularity and the broadening,
the energy spectrum
of the Azbel-Hofstadter problem at ${\phi}=(1/2+p/q)$
$(p=1,q{\gg}1)$
is ${\sim}{\pm}{\omega}_{H}\sqrt{n}$ near the band center
and ${\sim}{\pm}({\omega}{n}+C)$ near the band edge
($n{\in}{\bf N}$).
Now
put the system on a cylinder
(periodic in the $y$ direction and
open in the $x$ direction)
and let us study
the basic properties of edge states in
the d-wave pairing model with a magnetic field.
The Schrödinger equation
for the Hamiltonian (2)
is
$$\displaystyle\sum_{j}\pmatrix{t_{ij}&{\Delta}_{ij}\cr{\Delta}_{ji}^{*}&-t_{ji}^{*}\cr}\pmatrix{u_{j}\cr v_{j}\cr}=E\pmatrix{u_{i}\cr v_{i}\cr}.$$
Decompose
all the sites ${\cal N}$
into two sublattices $A$ and $B$
($A{\cup}B={\cal N}$
and $A{\cap}B={\emptyset}$)
where
$t_{ij}$ and ${\Delta}_{ij}$ connecting the same sublattice
are zero.
Define ${\bar{w}}_{k}$ by
${\bar{w}}_{k}=+{w}_{k}$ ($k{\in}A$) and
${\bar{w}}_{k}=-{w}_{k}$ ($k{\in}B$).
Then it follows from the SU(2) symmetry that
$$\displaystyle\sum_{j}\pmatrix{t_{ij}&{\Delta}_{ij}\cr{\Delta}_{ji}^{*}&-t_{ji}^{*}\cr}\pmatrix{-{\bar{v}}_{j}^{*}\cr{\bar{u}}_{j}^{*}\cr}=E\pmatrix{-{\bar{v}}_{i}^{*}\cr{\bar{u}}_{i}^{*}\cr}.$$
It leads to the claim that,
if an edge state exists,
a paired
(degenerate but linearly independent)
edge state can be constructed
which is localized on the same side[19].
Next we shall discuss the existence of edge states.
At first,
set ${\phi}=p/q$,
$v_{lm}=0$
and focus on
an energy gap with a non-zero topological number
(see, for example, Fig. 3; the duality implies
that the topological number of a generic gap
is a non-zero even number in the quantum regime,
consistent with the previous result [9]).
As discussed above,
a non-zero topological number
leads to the existence of edge states
in the gap.
In other words,
the existence of edge states is topologically stable.
Next consider the case when $v_{lm}$ is finite.
As $v_{lm}$ is varied from zero to a finite value,
the Landau bands are deformed and
can overlap as in a semimetal.
Owing to the topological stability,
however,
edge states survive
as long as no plateau transition occurs in the deformation
(see Fig. 5).
The existence of
edge states
can be confirmed
in each energy gap.
It reflects a non-trivial topology
of each energy band.
In the above discussion,
we have employed a cylinder.
It is possible to consider the same problem
on a geometry with 'defects' (e.g. an annulus).
In that case,
the states analogous to the edge states on a cylinder
may be bound to the defects.
In summary,
a single-parameter family of lattice-fermion models is constructed
in two dimensions.
The parameter $h={\Delta}/t$ is called
the quantum parameter.
A duality
is revealed in the model
between
the classical limit ($h=+0$) and the quantum limit ($h=1$).
Employing the duality and a topological argument,
we provide insights into
how the quasiparticles
in a magnetic field
behave,
especially in the quantum limit.
A more detailed study
including a self-consistent potential
and fluctuations
is left for future work.
It is crucial for the analysis of dynamical properties.
As discussed in the context of the integer quantum Hall effect,
when the potential is sufficiently strong,
it can bring the system to a totally different state
[20].
One of the authors (Y.M.)
thanks
H. Matsumura
for valuable discussions.
This work was supported in part by Grant-in-Aid
from the Ministry of Education, Science and Culture
of Japan.
The computation has been partly done
using the facilities of the Supercomputer Center,
ISSP, University of Tokyo.
References
[1]
P. W. Anderson,
preprint (cond-mat/9812063).
[2]
L. P. Gor’kov and J. R. Schrieffer,
Phys. Rev. Lett. 80, 3360 (1998).
[3]
L. Marinelli,
B. I. Halperin and
S. H. Simon,
preprint (cond-mat/0001406)
and references therein.
[4]
M. Ya. Azbel,
Sov. Phys. JETP 19, 634 (1964).
[5]
D. R. Hofstadter,
Phys. Rev. B 14, 2239 (1976).
[6]
Y. Morita, M. Kohmoto and K. Maki,
Int. J. of Mod. Phys. B 12, 989 (1998).
[7]
D. J. Thouless, M. Kohmoto, P. Nightingale, and M. den Nijs,
Phys. Rev. Lett. 49, 405 (1982).
[8]
Y. Hatsugai,
J. Phys.: Condens. Matter 49, 2507 (1997).
[9]
Y. Morita and Y. Hatsugai,
Phys. Rev. B 62, 99 (2000).
[10]
Y. Hatsugai and M. Kohmoto,
Phys. Rev. B 42, 8282 (1990).
[11]
M. Oshikawa,
Phys. Rev. B 50, 17357 (1994).
[12]
Y. Hatsugai,
Phys. Rev. Lett. 71, 3697 (1993);
Phys. Rev. B 48, 11851 (1993).
[13]
R. B. Laughlin,
Phys. Rev. B 23, 5632 (1981).
[14]
J. K. Jain,
Phys. Rev. B 41, 7653 (1990).
[15]
I. Affleck, Z. Zou, T. Hsu and P. W. Anderson,
Phys. Rev. B 38, 745 (1988).
[16]
A. Ludwig, M. Fisher, R. Shankar and G. Grinstein,
Phys. Rev. B 50, 7526 (1994).
[17]
A. A. Nersesyan, A. M. Tsvelik and F. Wenger,
Phys. Rev. Lett. 72, 2628 (1994).
[18]
Y. Morita and Y. Hatsugai,
Phys. Rev. Lett. 79, 3728 (1997);
Phys. Rev. B 58, 6680 (1998).
[19]
If the paired edge state were linearly dependent,
$|u_{i}|^{2}+|v_{i}|^{2}=0$ would have to hold for all sites.
Since
the state is not a null vector,
this is a contradiction.
[20]
Y. Hatsugai, K. Ishibashi and Y. Morita,
Phys. Rev. Lett. 83, 2246 (1999);
Y. Morita, K. Ishibashi and Y. Hatsugai,
Phys. Rev. B 61, 15952 (2000). |
A Morphology-System and Part-of-Speech Tagger for German
(In: D. Gibbon, ed., Natural Language Processing and Speech Technology.
Results of the 3rd KONVENS Conference, Bielefeld, October 1996. Mouton de Gruyter, Berlin, 1996.)
Wolfgang Lezius, Reinhard Rapp & Manfred Wettler
This paper presents an integrated tool for German morphology and
statistical part-of-speech tagging which aims at making some well established
methods widely available. The software is very user friendly, runs on any PC and
can be downloaded as a complete package (including lexicon and documentation) from
the World Wide Web. Compared with the performance of other tagging systems the
tagger produces similar results.
An integrated software package is presented that contains a
morphology and a tagging module for German.
The freely available software is distinguished in particular by its high
user-friendliness and can be obtained via the World Wide Web.
The quality of the results achieved corresponds to the current state
of research.
1 Introduction
Morphology systems, lemmatisers and part-of-speech taggers are some of the
basic tools in natural language processing. There are numerous applications,
including syntax parsing, machine translation, automatic indexing and semantic
clustering of words. Unfortunately, for languages other than English, such tools
are rarely available, and different research groups are often forced to
redevelop them over and over again. Considering German, quite a few morphology
systems (Hausser 1996) and taggers (see table 1) have been developed, which are described in Wothke et al. (1993) (IBM Heidelberg), Steiner (1995) (University of Münster), Feldweg (1995) (University of Tübingen), Schmid (1995) (University of Stuttgart), Armstrong et al. (1995) (ISSCO Geneva), and Lezius (1995) (University of Paderborn).
However, in most cases, the tagger is isolated from the morphology system. It relies on a lexicon of full forms which, of course, may be generated by a morphological
tool. Unfortunately, most German lexicons are not available due to copyright
restrictions and - as far as we know - none of them is public-domain.
Therefore we have decided to make our system Morphy publicly available.
It combines a morphological and tagging module in a single package
and can be downloaded from the World Wide Web
(URL: http://www-psycho.uni-paderborn.de/lezius/morpho.html).
Since it has been created not only for linguists, but also for second language
learners, it has been designed for standard PC-platforms and great effort has been put in making it as easy to use as possible.
2 The morphology module of Morphy
The morphology system is based on the Duden grammar (Drosdowski 1984).
It consists of three parts: Analysis, generation and
lexical system.
The lexical system is more sophisticated than other systems in order to allow a
user-friendly extension of the lexicon.
When entering a new word,
the user is asked the minimal number of questions
which are necessary to infer the new word’s grammatical features and
which any native speaker should
be able to answer. In most cases only the root of the word has to be typed in,
questions are answered by pressing the number of the correct alternative
(see figure 1 for the dialogue when entering the verb
telefonieren). Currently,
the lexicon comprises 21,500 words (about 100,000 word forms)
and is extended continuously.
Starting from the root of a word and the inflexion type as stored in the
lexicon, the generation system produces all inflected
forms which are shown on the screen.
Among other morphological features it considers vowel mutation, shifts
between ß and ss as well as pre- and infixation
of markers for participles.
The analysis system determines, for each word form, its root and its part of speech
and, if appropriate, its gender, case, number, tense and degree of
comparison. It also segments compound nouns using a longest-matching rule
which works from right to left. Since the system treats each word form separately,
ambiguities cannot be resolved. For ambiguous word forms every possible lemma and
morphological description is given (for some examples
see table 2). If a word form cannot be recognised,
its part of speech is predicted by an algorithm which makes use of
statistical data on German suffix frequencies.
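The longest-matching rule can be sketched as a small recursive routine. This toy version (with a three-word lexicon) is an editorial illustration only; it ignores linking elements, capitalisation rules and the other details the real system handles.

```python
def segment(word, lexicon):
    """Right-to-left longest-matching decomposition of a compound noun."""
    word = word.lower()
    if not word:
        return []
    for cut in range(len(word)):      # cut = 0 tries the longest suffix first
        suffix = word[cut:]
        if suffix in lexicon:
            rest = segment(word[:cut], lexicon)
            if rest is not None:
                return rest + [suffix]
    return None                       # no decomposition found

print(segment("Haustürschlüssel", {"haus", "tür", "schlüssel"}))
# ['haus', 'tür', 'schlüssel']
```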
Morphy’s lookup-mechanism when analyzing texts is not based on a lexicon of
full forms. Instead, there is only a lexicon of roots together with their
inflexion types. When analyzing a word form, Morphy cuts off all
possible suffixes, builds the possible roots, looks up these roots in the lexicon,
and for each root generates all possible inflected forms. Only those roots
which lead to inflected forms identical to the original word
form will be selected (for details see Lezius 1994).
Naturally, this procedure is much slower than the simple lookup-mechanism
in a full form lexicon. (Morphy's current analysis speed is about 50 word
forms per second on a fast PC, which is sufficient for many purposes. For the
processing of larger corpora we have
used Morphy to generate a full-form lexicon under UNIX. This has led to an
analysis speed of many thousand word forms per second.)
Nevertheless, there are advantages: First, the lexicon
can be kept small (only 750 KB of memory is
necessary for the current lexicon), which is an important consideration
for a PC-based system intended to be widely distributed. Secondly, the
processing of German compound nouns fits in this concept.
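The generate-and-test lookup described above can be sketched as follows. The endings table and the one-paradigm generator are deliberately simplified editorial assumptions; the real system covers all inflexion types and suffix lengths.

```python
ENDINGS = ["e", "st", "t", "en", "et"]   # toy present-tense endings, weak verbs

def generate(root):
    """All inflected forms of a weak-verb root (toy paradigm)."""
    return {root + e for e in ENDINGS}

def analyse(form, root_lexicon):
    """Cut off candidate suffixes, look the resulting roots up, and keep only
    the roots whose generated paradigm contains the original word form."""
    hits = set()
    for n in range(1, 4):                # candidate suffixes of length 1-3
        root = form[:-n]
        if root in root_lexicon and form in generate(root):
            hits.add(root)
    return hits

print(analyse("telefoniert", {"telefonier", "telefon"}))   # {'telefonier'}
```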
The performance of the morphology system has been tested at
the Morpholympics conference 1994
in Erlangen (see Hausser (1996), pp. 13-14, and Lezius (1996)) with a
specially designed test corpus
which had been unknown to the participants. This corpus comprised about 7,800
word forms and consisted of different text types (two political speeches, a fragment of
the Limas-corpus and a list of special word forms). Morphy
recognised 89.2%, 95.9%, 86.9% and 75.8% of the word forms, respectively.
3 The tagging module of Morphy
Since morphological analysis operates on isolated word forms, ambiguities
are not resolved. The task of the tagger is to resolve these ambiguities
by taking into account contextual information. When designing a tagger,
a number of decisions have to be made:
•
Selection of a tag set.
•
Selection of a tagging algorithm.
•
Selection of a training and test corpus.
3.1 Tag Set
Like the morphology system, the tagger is
based on the classification of the parts of
speech from the Duden grammar. Supplementary additions have been taken from
the system of Bergenholtz and Schaeder (1977). The resulting tag
set includes grammatical features such as gender,
case and number. This results in a very complex system, comprising about
1000 different tags (see Lezius 1995). Since only 456 tags were actually
used in the training corpus, the tag set was reduced to half.
However, most German word forms are highly
ambiguous in this system (about 5 tags per word form on average).
Although the amount of information gained by this system is very high,
tagging algorithms with such large tag sets have led to poor results in the
past (see Wothke et al. 1993; Steiner 1995). This is because different grammatical features
often have the same surface realization (e.g. nominative noun and accusative
noun are difficult to distinguish by the tagger). By grouping together parts of
speech with different grammatical features this kind of error can be
significantly reduced. This is what current small tag sets implicitly do.
However, one has to keep in mind that the gain of information provided by the tagger
is also reduced with a smaller tag set.
Since some applications do not require detailed distinctions, we also constructed
a small tag set comprising 51 tags as shown in table 3.
Both tag sets are constructed in such a way
that the large tag set can be directly mapped onto the small tag set.
3.2 Tagging algorithm
The tagger uses the Church-trigram-algorithm (Church 1988), which is still
unsurpassed in terms of simplicity, robustness and accuracy. However, since
we assumed that longer n-grams may give more information, and
since we observed that some longer n-grams are rather frequent in corpora
(see figure 2 for some statistics on the Brown-corpus),
we decided to compare
the Church algorithm with a tagging algorithm relying on variable context
widths as described by Rapp (1995).
Starting from an ambiguous word form which is to be tagged, this algorithm
considers the preceding word forms - which have already been
tagged - and the succeeding word forms still to be tagged.
For this ambiguous word form the algorithm constructs all possible tag
sequences composed of the already computed tags on the left, one of the
possible tags of the critical word form and possible tags on the right.
The choice of the tag for the critical word form is a function of the
length of the tag sequences to the left and to the right that can be
found in the training corpus. A detailed description of this algorithm
is given in Rapp (Rapp 1995, pp. 149-154).
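The Church-style search can be sketched as a trigram Viterbi over states given by the last two tags. The tag names and all probabilities below are hand-set toy values for illustration, not the probabilities Morphy estimates from its training corpus.

```python
import math
from collections import defaultdict

LEX = {                                  # toy lexical probabilities P(word | tag)
    "ich": {"PRON": 1.0},
    "meine": {"VFIN": 0.4, "POSS": 0.6},
    "frau": {"NN": 1.0},
    ".": {"$.": 1.0},
}
TRI = defaultdict(lambda: 1e-6)          # smoothed contextual P(t3 | t1, t2)
TRI.update({
    ("<s>", "<s>", "PRON"): 0.9,
    ("<s>", "PRON", "VFIN"): 0.8,
    ("PRON", "VFIN", "POSS"): 0.7,
    ("VFIN", "POSS", "NN"): 0.9,
    ("POSS", "NN", "$."): 0.9,
})

def tag(words):
    """Trigram Viterbi search for the most probable tag sequence."""
    best = {("<s>", "<s>"): (0.0, [])}   # state: last two tags -> (logprob, path)
    for w in words:
        nxt = {}
        for (t1, t2), (lp, path) in best.items():
            for t3, pw in LEX[w.lower()].items():
                s = lp + math.log(pw) + math.log(TRI[(t1, t2, t3)])
                if (t2, t3) not in nxt or nxt[(t2, t3)][0] < s:
                    nxt[(t2, t3)] = (s, path + [t3])
        best = nxt
    return max(best.values(), key=lambda sp: sp[0])[1]

print(tag("Ich meine meine Frau .".split()))
# ['PRON', 'VFIN', 'POSS', 'NN', '$.']
```

Even this toy model resolves the two readings of meine, as in the sentence Ich meine meine Frau. discussed in the conclusions.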
Although some authors (Cutting et al. 1992; Schmid 1995; Feldweg 1995) claim
that unsupervised tagging algorithms produce superior results, we chose
supervised learning. These publications pay little attention to
the fact that algorithms for unsupervised tagging require great care
(or even luck) when tuning some initial parameters.
It frequently happens that unsupervised learning with sophisticated tag sets
ends up in local minima, which can lead to poor results without any
indication to the user. Such behavior seemed unacceptable for a standard
tool.
3.3 Training and test corpus
For training and testing we took a fragment from the
"Frankfurter Rundschau" corpus, which we have been collecting since 1992.
(This corpus was generously donated by the Druck- und Verlagshaus Frankfurt
am Main and has been included in the CD-ROM of the European
Corpus Initiative. We thank Gisela Zunker for her help with the acquisition and
preparation of the corpus.)
Tables and other non-textual items
were removed manually. A segment of 20,000 word forms was used for training,
another segment of 5,000 word forms for testing. Any word forms not recognised
by the morphology system
were included in the lexicon. Using a special tagging editor which
- on the basis of the morphology module - for each word gives a choice of
possible tags, both corpora were tagged
semiautomatically with the large tag set. A recent version of the editor
additionally predicts the correct tag.
4 Results
Using the probabilities from the manually annotated training corpus,
the test corpus was tagged automatically. The results were compared with the
previous manual annotation of the test corpus. This was done for both tagging
algorithms and tag sets. For the small tag set, the Church algorithm achieved
an accuracy of 95.9%, whereas with the variable-context algorithm an accuracy
of 95.0% was obtained. For the large tag set the respective figures are 84.7%
and 81.8%.
In comparison with other research groups, the results are
similar for the small tagset and slightly better for the large tagset (see
table 1). Surprisingly, in spite of considering less context, the Church
algorithm performs better than the variable-context algorithm in both cases.
This is the reason why the current implementation
of Morphy only includes the Church algorithm. (The speed of the tagger,
including morphological analysis, is about 20 word forms per second for
the large and 100 word forms per second for the small tag set on a fast PC.)
As an example, figure 4
gives the annotation results of a few test sentences for both tag sets.
However, there are also some advantages on the side of the
variable-context algorithm. First, its potential when using larger training
corpora seems to be slightly higher (see figure 3). Secondly, when
the algorithm
is modified in such a way that sentence boundaries are not assumed to be known
beforehand, the performance degrades only minimally. This means that this
algorithm can actually contribute to finding sentence boundaries. And third,
if there are sequences of unknown word forms in the text, the algorithm takes
better guesses than the Church algorithm (examples are given in
Rapp 1995, p. 155). When about 2% of the word forms in the test corpus were
randomly replaced by unknown word forms, the quality of the results for the
Church algorithm decreased by 0.7% for the small and by 2.0% for the large
tag set. The respective figures for the variable-context algorithm are 0.9%
and 1.3%, which is better overall.
In a further experiment, the contribution of the lexical probabilities
to the quality of the results was examined. Without the lexical
probabilities, the results decreased by 0.3% (small) and
0.6% (large tag set) for the Church algorithm, the respective
figures for the variable-context algorithm were 0.9% and 0.0%.
5 Conclusions
We have compared two different tagging algorithms and two different tag sets.
The first tagging algorithm is the Church algorithm
which uses trigrams to compute contextual probabilities. The second algorithm,
the so-called variable-context algorithm, has been described in paragraph 3.
The smaller of the two tag sets contains 51 parts-of-speech, the larger tag set
includes additional grammatical features such as case, number and gender.
The small tag set is a subset of the large tag set.
In comparison with the Church algorithm, the variable-context algorithm produces
similar results for the small tag set, but significantly inferior results for the
large tag set. On the other hand, the performance of the variable-context
algorithm for the large tag set improves faster with increasing size of the
training corpus than the performance of the Church algorithm.
Thus, with tagging more training
texts manually, similar results are to be expected for the two algorithms.
Considering the two tag sets, the results for the small tag set are
significantly better. Nevertheless, with increasing size of the training corpus
an approximation of the results seems to be possible.
One of our aims for the near future is to use the output of the tagger for
lemmatization. In this way a sentence like Ich meine meine Frau. ('I mean my
wife.') could be unambiguously reduced to ich / meinen / mein / Frau.
Bibliography
S. Armstrong, G. Russell, D. Petitpierre and G. Robert (1995). An open architecture for multilingual text processing. In: Proceedings of the ACL SIGDAT Workshop. From
Texts to Tags: Issues in Multilingual Language Analysis, Dublin.
H. Bergenholtz and B. Schaeder (1977). Die
Wortarten des Deutschen. Klett, Stuttgart.
K. Church (1988). A stochastic parts program and noun phrase parser
for unrestricted text. In:
Second Conference on Applied Natural Language Processing, pp. 136-143.
Austin, Texas.
D. Cutting, J. Kupiec, J. Pedersen and P. Sibun (1992). A
practical part-of-speech tagger. In: Proceedings of the Third Conference on
Applied Language Processing, pp. 133-140. Trento, Italy.
G. Drosdowski (1984). Duden. Grammatik der deutschen
Gegenwartssprache. Dudenverlag, Mannheim.
H. Feldweg (1995). Implementation and evaluation of a German
HMM for POS disambiguation. In: Feldweg and Hinrichs, eds., Lexikon und
Text, pp. 41-46. Niemeyer, Tübingen.
R. Hausser (1996). Linguistische Verifikation.
Dokumentation zur Ersten Morpholympics. Niemeyer, Tübingen.
W. Lezius (1994). Aufbau und Funktionsweise von Morphy.
Internal report. Universität-GH Paderborn, Fachbereich
2.
W. Lezius (1995). Algorithmen zum Taggen deutscher Texte.
Internal report, Universität-GH Paderborn, Fachbereich 2.
W. Lezius (1996). Morphologiesystem MORPHY. In:
R. Hausser, ed., Linguistische Verifikation: Dokumentation zur Ersten Morpholympics 1994,
pp. 25-35. Niemeyer, Tübingen.
R. Rapp (1995). Die Berechnung von Assoziationen - Ein korpuslinguistischer Ansatz. In: Hellwig and Krause, eds.,
Sprache und Computer, vol. 16, Olms, Hildesheim.
H. Schmid (1995). Improvements in part-of-speech tagging with an
application to German. In: Feldweg and Hinrichs, eds.,
Lexikon und Text, pp. 47-50. Niemeyer, Tübingen.
P. Steiner (1995). Anforderungen und Probleme beim Taggen
deutscher Zeitungstexte. In: Feldweg and Hinrichs, eds.,
Lexikon und Text. Niemeyer, Tübingen.
K. Wothke, I. Weck-Ulm, J. Heinecke, O. Mertineit and T. Pachunke (1993).
Statistically Based Automatic Tagging of German Text Corpora
with Parts-of-Speech - Some Experiments. Technical Report 75.93.02,
IBM Germany, Heidelberg Scientific Center. |
Self-supervised Change Detection in Multi-view Remote Sensing Images
Yuxing Chen,
Lorenzo Bruzzone
Y. Chen and L. Bruzzone are with the Department of Information Engineering and Computer Science, University of Trento, 38122 Trento, Italy (e-mail:chenyuxing16@mails.ucas.ac.cn;lorenzo.bruzzone@unitn.it).Corresponding author: L. Bruzzone
Abstract
The vast amount of unlabeled multi-temporal and multi-sensor remote sensing data acquired by the many Earth Observation satellites presents a challenge for change detection.
Recently, many generative model-based methods have been proposed for remote sensing image change detection on such unlabeled data.
However, the high diversity of the learned features weakens the discrimination of the relevant change indicators in unsupervised change detection tasks.
Moreover, these methods have rarely been studied on massive archives of images.
In this work, a self-supervised learning (SSL) change detection approach based on an unlabeled multi-view setting is proposed to overcome these limitations.
This is achieved by the use of a multi-view contrastive loss and an implicit contrastive strategy in the feature alignment between multi-view images.
In this approach, a pseudo-Siamese network is trained to regress the output between its two branches pre-trained in a contrastive way on a large dataset of multi-temporal homogeneous or heterogeneous image patches.
Finally, the feature distance between the outputs of the two branches is used to define a change measure, which can be analyzed by thresholding to get the final binary change map.
Experiments are carried out on five homogeneous and heterogeneous remote sensing image datasets.
The proposed SSL approach is compared with other supervised and unsupervised state-of-the-art change detection methods.
Results demonstrate both improvements over state-of-the-art unsupervised methods and that the proposed SSL approach narrows the gap between unsupervised and supervised change detection.
Index Terms:
Change Detection, Self-supervised Learning, Multi-view, Sentinel-1/-2, Remote Sensing.
I Introduction
Change maps are one of the most important products of remote sensing and are widely used in many applications including damage assessment and environmental monitoring.
The spatial and temporal resolutions play a crucial role in obtaining accurate and timely change detection maps from multi-temporal images.
In this context, irrelevant changes, such as radiometric and atmospheric variations, seasonal changes of vegetation, and changes in the building shadows, which are typical of multi-temporal images, limit the accuracy of change maps.
In the past decades, many researchers developed techniques that directly compare the pixel values of multi-temporal images to get change maps from coarse resolution images [1, 2, 3], assuming that the spectral information of each pixel can completely characterize the various underlying land-cover types.
Image rationing and change vector analysis (CVA) [2] are early examples of such algebraic approaches.
With the development of remote sensing satellite technology, the spatial and spectral resolutions of remote sensing images have significantly increased.
In this context, the use of spectral information only is often not enough to distinguish accurately land-cover changes.
Accordingly, the joint use of spatial context and spectral information to determine the land-cover changes has gained popularity.
Many supervised [4] and unsupervised [5] techniques have been developed in this context.
Most of them are based on image transformation algorithms where the crucial point is to obtain robust spatial-temporal features from multi-temporal images.
Recently, deep learning techniques and in particular Convolutional Neural Networks (CNNs) methods [6] have been widely used in this domain.
CNNs allow one to get effective and robust features for change detection tasks, achieving state-of-the-art results in a supervised way [7].
Most of the past works are limited to the use of single modality images that are acquired by the same type of sensor with identical configurations.
Cross-domain change detection has not received sufficient attention yet.
Current Earth Observation satellite sensors provide abundant multi-sensor and multi-modal images.
On the one hand, images taken by different types of sensors can improve the time resolution thus satisfying the requirement of specific applications with tight constraints.
A possible example of this is the joint use of Sentinel-2 and Landsat-8 images for a regular and timely monitoring of burned areas [8].
However, the differences in acquisition modes and sensor parameters present a big challenge for traditional methods.
On the other hand, multimodal data are complementary to the use of single modality images and their use becomes crucial especially when only images from different sensors are available in some specific scenarios.
This could be the case of emergency management when, for example, optical and SAR images could be jointly exploited for flood change detection tasks [9].
In this scenario, methods capable of computing change maps from images of different sensors in the minimum possible time can be very useful.
This has led to the development of multi-source change detection methods, which can process either multi-sensor or multi-modal images.
Recent success of deep learning techniques in change detection is mainly focused on supervised methods [10, 11, 12], which are often limited by the availability of annotated datasets.
Especially in multi-temporal problems, it is expensive and often not possible to obtain a large amount of annotated samples for modeling change classes.
Thus, unsupervised methods are preferred to supervised ones in many operational applications.
The limited access to labeled data has driven the development of unsupervised methods, such as Generative Adversarial Network (GAN)[13] and Convolutional AutoEncoder (CAE)[14], which are currently among the most used deep learning methods in unsupervised change detection tasks.
Nevertheless, some studies have shown that such generative models overly focus on pixels rather than on abstract feature representations [15].
Recent research in contrastive self-supervised learning [16, 17, 18, 19] encourages the network to learn more interpretable and meaningful feature representations.
This results in improvements on classification and segmentation tasks, where such methods outperform their generative counterparts.
In this work, we present an approach to perform unsupervised change detection in multi-view remote sensing images, such as multi-temporal and multi-sensor images.
The proposed approach is based on two state-of-the-art self-supervised methods, i.e., multi-view contrastive learning [16] and BYOL [18], that are exploited for feature representation learning.
To this purpose, a pseudo-Siamese network (which exploits ResNet-34 as the backbone) is trained to regress the output between two branches (target and online sub-networks) that were pre-trained in a contrastive way on a large archived multi-temporal or multi-sensor image dataset.
In addition, we introduce a change score that can accurately model the feature distance between bi-temporal images.
Changes are identified when there is a significant disagreement between the feature vectors of the two branches.
The rest of this paper is organized as follows.
Section II presents the related works of unsupervised change detection in multi-view images including homogeneous and heterogeneous images.
Section III introduces the proposed approach by describing the architecture of the pseudo-Siamese network, the two considered contrastive learning strategies and the change-detection method.
The experimental results obtained on five different datasets and the related comparisons with supervised and unsupervised state-of-the-art methods are illustrated in Section IV.
Finally, Section V draws the conclusions of the paper.
II Related Works
In the literature, unsupervised change detection techniques in multi-view remote sensing images can be subdivided into two categories: homogeneous remote sensing image change detection and heterogeneous remote sensing image change detection.
Homogeneous image change detection methods are proposed to process multi-temporal images acquired by the same sensor or multi-sensor images with the same characteristics.
Heterogeneous image change detection methods focus on processing heterogeneous images, which are captured by different types of sensors with different imaging mechanisms.
CVA [2] and its object-based variants are one of the most popular unsupervised homogeneous change detection methods.
They calculate the change intensity maps and the change direction for change detection and related classification.
Another popular method is the combination of PCA and K-means (PCA-KM) [20], which transforms and compares the bi-temporal images in the feature space, and then determines the binary change map using k-means.
In [21], Nielsen et al. treated the bi-temporal images as multi-view data and proposed the multivariate alteration detection (MAD) based on canonical correlation analysis (CCA), which maximizes the correlation between the transformed features of bi-temporal images for change detection.
Wu et al. [22] proposed a novel change detection method to project the bi-temporal images into a common feature space and detected the changed pixels by extracting the invariant components based on the theory of slow feature analysis (SFA).
As for homogeneous multi-sensor images, Solano et al. integrated CVA into a general approach to perform change detection between multi-sensor very high resolution (VHR) remote sensing images [23].
In [24], Ferraris et al. introduced a CVA-based unsupervised framework for performing change detection of multi-band optical images with different spatial and spectral resolutions.
However, traditional methods are easily affected by irrelevant changes due to their weak feature representation ability in the presence of high-resolution remote sensing images [25].
To get a robust feature representation, deep learning techniques are widely used in remote sensing change detection tasks.
In [26], Liu et al. projected the bi-temporal images into a low-dimension feature space using the restricted Boltzmann machines (RBMs) and generated change maps based on the similarity of image feature vectors.
Du et al. [27] developed the slow feature analysis into deep learning methods to calculate the change intensity maps and highlight the changed components in the transformed feature space.
Then the binary change map was generated by image thresholding algorithms.
Instead of pixel-based analysis, Saha et al. [6] used a pre-trained CNNs to extract deep spatial-spectral features from multi-temporal images and analyzed the features using traditional CVA.
As unsupervised learning methods, generative models are also used in unsupervised change detection.
Lv et al. [28] adopted a contractive autoencoder to extract features from multi-temporal images automatically.
In [29], Ren et al. proposed to use GAN to generate the features of unregistered image pairs and detected the changes by comparing the generated images explicitly.
Unlike homogeneous change detection, the greatest challenge in unsupervised heterogeneous change detection is to align the inconsistent feature representation of different modality images.
This requires transforming heterogeneous representation into a common feature space where performing change detection.
There are a few traditional methods that focus on this transformation of different modalities.
Gong et al. [30] proposed an iterative coupled dictionary learning method that learns two couple dictionaries for encoding bi-temporal images.
Luppino et al. [31] proposed to perform image regression by transforming images to the domain of each other and to measure the affinity matrix distance, which indicates the change probability of each pixel.
Sun et al. [32] developed a nonlocal patch similarity-based method by constructing a graph for each patch and establishing a connection between heterogeneous images.
Because of the ability of CNNs in feature learning, more and more techniques based on deep learning were also proposed in this area.
Zhao et al. [33] proposed a symmetrical convolutional coupling network (SCCN) to map the discriminative features of heterogeneous images into a common feature space and generated the final change map by setting a threshold.
Similarly, the conditional generative adversarial network (cGAN) was also used to translate two heterogeneous images into a single domain [34].
Luppino et al. used the change probability from [31] as the change prior to guide the training of two new networks: the X-Net, with two fully convolutional networks, and the adversarial cyclic encoders network (ACE-Net), with two autoencoders whose code spaces are aligned by adversarial training [35].
In [36], they further jointly used domain-specific affinity matrices and autoencoders to align the related pixels from input images and reduce the impact of changed pixels.
These methods also work well for homogeneous multi-sensor images.
III Methodology
In this section, we present the proposed approach to multi-temporal and multi-sensor remote sensing image change detection based on self-supervised learning.
III-A Problem Statement
Change detection is the operation of distinguishing changed and unchanged pixels of multi-temporal images acquired by different sensors at different dates.
Let us consider two images $I_{1}$ and $I_{2}$ acquired at two different dates $t_{1}$ and $t_{2}$, respectively.
The aim of change detection is to create a change intensity map that contains the most salient changed pixels, from multi-view images $I_{1}$ and $I_{2}$.
As described in related works, the crucial point in this task is to align the features of unchanged pixels or patches from the different view data $T_{1}(\theta)=f_{\theta}(p_{1})$ and $T_{2}(\phi)=g_{\phi}(p_{2})$.
Here, $p_{1}$ and $p_{2}$ are unchanged patches or pixels in images $I_{1}$ and $I_{2}$, respectively.
The $f$ and $g$ functions are used to extract the features from multi-temporal images, where $\theta$ and $\phi$ denote the corresponding parameters.
The objective function of our task can be defined as:
$$\theta,\phi={\arg\min\limits_{\theta,\phi}}\{d[f_{\theta}(p_{1}),g_{\phi}(p_{2})]\}$$
(1)
where $d$ is a measure of feature distance between $T_{1}$ and $T_{2}$.
Many change detection techniques follow this formulation including CCA, canonical information analysis (CIA), and post-classification comparison (PCC).
CCA and CIA are used to calculate a linear/nonlinear relationship between features from multi-temporal images.
In classification-based approaches, $f$ and $g$ represent two classifiers trained independently or jointly [37].
While these change detection algorithms have made some contributions to the various application scenarios, they suffer some serious drawbacks, such as the variation in data acquisition parameters and the detection of unwanted irrelevant changes.
Thus, we still need the development of robust models, especially when the relevant changes are very hard to differentiate from the images.
With the development of deep learning, the multi-view contrastive loss and BYOL [38] were introduced in a multi-view setting to get robust features.
These methods are considered in this work as they can extract multi-view features by maximizing the mutual information of unchanged pixels or patches between views.
In the following subsections, we will describe the proposed approach by introducing the pseudo-Siamese network, two self-supervised methods (the multi-view contrastive loss and BYOL) as well as the change detection strategy for obtaining change maps.
III-B Pseudo-Siamese Network
Siamese networks [39] are the most used models for comparing entities.
However, the comparison of heterogeneous image pairs cannot be performed directly by Siamese networks due to the different imaging mechanisms.
Siamese networks share identical weights in two branches, while heterogeneous image pairs have dissimilar low-level features.
Hence, the pseudo-Siamese network is used as the model architecture for heterogeneous image change detection.
It has two branches that share the same architecture except for the input channel, but with different weights.
Fig. 1 (a) shows the architecture used in this work for heterogeneous change detection, where two branches are designed to extract the features of heterogeneous image pairs.
In this work, the ResNet-34 [40] is adopted as the backbone of the two branches and the input channels are changed for adapting to the heterogeneous image pairs, i.e., the polarization of SAR image patches and the spectral bands of optical images patches.
In greater detail, the heterogeneous image pairs are passed through the unshared branches, which output the related feature vectors.
The output feature vectors of two branches are normalized and then used to compute the similarity with each other and negative samples of the batch.
Finally, the model parameters are updated by maximizing a loss function.
For homogeneous images, we propose to use the mean teacher network [41] as the architecture of our model (Fig. 1 (b)).
Mean teacher is a common pseudo-Siamese network used in self-supervised learning, which uses an exponential moving average (EMA) of the weights to produce a more accurate model than using the same weights directly in the homogeneous image setting.
In this way, the target model has a better intermediate feature representation by aggregating the information of each step.
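This weight update can be sketched in a few lines (a minimal NumPy illustration; the decay value is illustrative, not taken from the paper):

```python
import numpy as np

def ema_update(target_weights, online_weights, tau=0.99):
    """Mean-teacher update: each target weight phi tracks the
    corresponding online weight theta as phi <- tau*phi + (1-tau)*theta."""
    return [tau * phi + (1.0 - tau) * theta
            for phi, theta in zip(target_weights, online_weights)]

# Toy example with a single "layer" of weights
target = [np.zeros(3)]
online = [np.ones(3)]
target = ema_update(target, online, tau=0.9)
```

After one update each target weight has moved a fraction (1 - tau) of the way toward the online weights.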
III-C Self-supervised Learning Approach
In this subsection, we present the two considered self-supervised methods that are used in our approach to heterogeneous (Fig. 1 (a)) and homogeneous (Fig. 1 (b)) remote sensing image change detection.
III-C1 Multi-view Contrastive Loss (heterogeneous images)
Contrastive learning is a popular methodology for unsupervised feature representation in the machine learning community [16, 17].
The main idea behind the contrastive loss is to find a feature representation that attributes the feature distance between different samples.
For heterogeneous change detection, let us consider a heterogeneous image pair $\{I_{1}^{i},I_{2}^{i}\}_{i=1,2,\dots,N}$ on a given scene $i$, which is considered as a positive pair sampled from the joint distribution $p(I_{1}^{i},I_{2}^{i})$.
Another image pair $\{I_{1}^{i},I_{2}^{j}\}$ taken from a different scene is considered as a negative pair sampled from the product of marginals $p(I_{1}^{i})p(I_{2}^{j})$.
The method introduces a similarity function, $h_{\theta}(\cdot)$, which is used to model the feature distance between positive and negative pairs.
The pseudo-Siamese network is trained to minimize the $\mathcal{L}_{\text{contrast}}^{S}$ defined as:
$$\mathcal{L}_{\text{contrast}}^{S}=-\underset{S}{\mathbb{E}}{\left[\log\frac{h_{\theta}(I_{1}^{1},I_{2}^{1})}{\sum_{j=1}^{N}h_{\theta}(I_{1}^{1},I_{2}^{j})}\right]}$$
(2)
where $(I_{1}^{1},I_{2}^{1})$ is a positive pair sample, $(I_{1}^{1},I_{2}^{j})$ with $j\neq 1$ are negative pair samples, and $S=\{I_{1}^{1},I_{2}^{1},I_{2}^{2},\cdots,I_{2}^{N}\}$ is a set that contains $N-1$ negative samples and one positive sample.
During the training, positive image pairs are assigned to a higher value whereas negative pairs to a lower value.
Hence, the network represents positive pairs at a close distance whereas negative pairs at a high distance.
The standard self-supervised method takes different augmentations of the same image as positive pairs, with negative pairs sampled uniformly from the rest of the training data.
However, such a uniform sampling strategy for negative pairs is no longer suitable for heterogeneous image pairs.
Robinson et al. [42] proposed an effective hard negative sampling strategy to avoid the "sampling bias" due to false-negative samples with the same context information as the anchor.
With this strategy, we address the difficulty of negatives sampling in the self-supervised heterogeneous change detection task.
For heterogeneous change detection, we can construct two modalities image sets $S_{1}$ and $S_{2}$ by fixing one modality and enumerating positives and negatives from the other modality.
This allows us to define a symmetric loss as:
$$\mathcal{L}\left(S_{1},S_{2}\right)=\mathcal{L}_{\text{contrast}}^{S_{1}}+\mathcal{L}_{\text{contrast}}^{S_{2}}$$
(3)
In practice, the noise-contrastive estimation (NCE) method is used to make the computation of (3) tractable when $N$ is extremely large.
This multi-view contrastive learning approach makes the unsupervised heterogeneous change detection possible.
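As an illustration, the loss of Eqs. (2)-(3) can be sketched in NumPy under the common assumption that the similarity function is $h_{\theta}(x,y)=\exp(\langle x,y\rangle/T)$ on l2-normalized feature vectors; the temperature $T$ is our assumption, not stated in the text:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(z1, z2, temperature=0.1):
    """Eq. (2) for a batch of N scenes: z1[i] and z2[i] are the features
    of the positive pair from scene i; the other rows of z2 act as the
    N-1 negatives of the batch."""
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    logits = z1 @ z2.T / temperature  # h_theta as exp(similarity / T)
    # log-softmax over each row; positive pairs lie on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def symmetric_loss(z1, z2, temperature=0.1):
    """Eq. (3): enumerate positives and negatives from both modalities."""
    return (contrastive_loss(z1, z2, temperature)
            + contrastive_loss(z2, z1, temperature))
```

Feature sets whose positive pairs are the most similar rows give a low loss, while misaligned ones give a high loss, which is exactly the behavior the training relies on.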
III-C2 Implicit Contrastive Learning (homogeneous images)
Recently, a self-supervised framework (BYOL) was proposed that presents an implicit contrastive learning way without the requirements to have negative samples during the network training [18].
In this method, the pseudo-Siamese network, including online and target networks, is used to regress each other’s output during the training.
The two networks are not fully identical.
The online network is followed by a predictor and the weights of the target network are updated by the EMA of the parameters of the online network.
Hence, the loss of the two networks can be written as the $l_{2}$ distance of each output:
$$\mathcal{L}\triangleq\mathbb{E}_{\left(I_{1},I_{2}\right)}\left[\left\|q_{w}\left(f_{\theta}\left(I_{1}\right)\right)-f_{\phi}\left(I_{2}\right)\right\|_{2}^{2}\right]$$
(4)
Similar to the multi-view contrastive loss, the feature vectors are $l_{2}$-normalized before output.
Here the online network $f_{\theta}$ is parameterized by $\theta$, and $q_{w}$ is the predictor network parameterized by $w$.
The target network $f_{\phi}$ has the same architecture as $f_{\theta}$ but without the final predictor and its parameters are updated by EMA controlled by $\tau$, i.e.,
$$\phi\leftarrow\tau\phi+(1-\tau)\theta$$
(5)
The most important property of BYOL is that no negative samples are used when training the two networks, and thus feature representations are learned only from positive samples.
A previous work [43] has pointed out that the architecture of the Siamese network is the key to implicit contrastive learning and that a predictor with batch normalization can avoid representation collapse during training.
In this approach, the network is identical in the two branches, and the weights of the target part are updated according to another branch.
Hence, this algorithm is very suitable to process multi-temporal remote sensing images with the same modality (i.e., homogeneous images).
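A minimal sketch of the regression loss of Eq. (4), with the network outputs replaced by precomputed feature arrays (the l2 normalization mentioned above is applied inside):

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def byol_loss(online_pred, target_proj):
    """Eq. (4): mean squared l2 distance between the normalized outputs
    of the online branch (after the predictor) and the target branch.
    For unit vectors this equals 2 - 2 * cosine similarity."""
    q = l2_normalize(online_pred)
    t = l2_normalize(target_proj)
    return np.mean(np.sum((q - t) ** 2, axis=-1))
```

Identical feature vectors give a loss of 0 and opposite ones give the maximum value of 4, so minimizing this loss drives the two branches to agree.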
III-D Change Detection
The change detection strategy described in this subsection is based on the feature learned by the previously mentioned self-supervised methods.
Let $S=\{I_{1},I_{2},I_{3},...,I_{n}\}$ be a dataset of either homogeneous or heterogeneous multi-temporal remote sensing images.
Our goal is to detect changes between satellite images from different dates.
As mentioned before, most changes of interest are those relevant to human activities, while the results are easily affected by irrelevant changes, such as seasonal changes.
Moreover, relevant changes are usually rare, whereas irrelevant changes are common over a long period.
This means that, under this assumption, the features of relevant changes can be derived from the unchanged features.
To this purpose, the models are trained to regress the features of images acquired at different dates.
As shown in Fig. 2, here we use the considered self-supervised learning algorithms to get features of either homogeneous or heterogeneous multi-temporal images.
After training, a change intensity map can be derived by assigning a score to each pixel indicating the probability of change.
During the network training, images acquired by the different sensors or at different dates are treated as two-views in our approach.
Homogeneous images are trained with BYOL, while heterogeneous images are trained by using multi-view contrastive loss.
Image patches centered at each pixel are fed in input to the network, and the output is a single feature vector for each patch-sized input.
In detail, given an input image $\mathbf{I}\in\mathbb{R}^{w\times h}$ of width $w$ and height $h$, we can get a feature vector $T(r,c)$ of a square local image region with side length $p$ for each image pixel at row $r$ and column $c$.
To get different scale feature representations, we trained an ensemble of $N\geq 1$ randomly initialized models that have an identical network architecture but use different input image sizes.
Therefore, changes of different sizes are detected by choosing one of the $N$ different side length values.
During the inference, each model provides as output a feature map that is generated by different sizes of input images.
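The patch-wise inference described above can be sketched as a sliding window over an edge-padded image; here `encode` is a stand-in for one pre-trained branch and may be any function mapping a patch to a feature vector:

```python
import numpy as np

def patch_feature_map(image, encode, p):
    """Feature vector of the p x p window centred at every pixel.
    The image (H, W, C) is edge-padded so the output feature map has
    the same spatial size as the input."""
    h, w = image.shape[:2]
    r = p // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    dim = encode(padded[0:p, 0:p]).shape[0]
    out = np.zeros((h, w, dim))
    for i in range(h):
        for j in range(w):
            out[i, j] = encode(padded[i:i + p, j:j + p])
    return out

# Toy check with a 3 x 3 mean filter standing in for the encoder
features = patch_feature_map(np.ones((5, 5, 1)),
                             lambda patch: np.array([patch.mean()]), p=3)
```

In practice the per-pixel loop would be replaced by batched network inference, but the indexing logic is the same.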
Let ${T}_{1}^{i}(r,c)$ and ${T}_{2}^{i}(r,c)$ denote the feature vectors at the row $r$ and column $c$ for the considered bi-temporal images.
The change intensity map is defined as the pair-wise regression error $e(r,c)$ between the feature vectors of bi-temporal images:
$$e(r,c)=\left\|T_{1}(r,c)-T_{2}(r,c)\right\|_{2}^{2}=\left\|\frac{1}{N}\sum_{i=1}^{N}\left({T}_{1}^{i}(r,c)-{T}_{2}^{i}(r,c)\right)\right\|_{2}^{2}$$
(6)
In order to allow all model outputs to be merged, we normalize each output by its mean value $e_{\mu}$ and standard deviation $e_{\sigma}$.
Therefore, multi-scale change detection can be simplified into sub-tasks that train multiple pseudo-Siamese ensemble networks with varying values of $p$.
At each scale, a change intensity map with the same size as the input image is computed.
Given $N$ pseudo-Siamese ensemble models with different side lengths, the normalized regression errors $\tilde{e}(r,c)$ of the models can be combined by simple averaging.
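The standardization and ensemble averaging just described can be sketched as follows (a simplified illustration operating directly on precomputed feature maps):

```python
import numpy as np

def change_intensity(feats_t1, feats_t2):
    """Eq. (6) per model, i.e. the squared l2 distance between the two
    dates' feature maps, standardized by each model's mean and standard
    deviation and then averaged over the ensemble."""
    maps = []
    for T1, T2 in zip(feats_t1, feats_t2):
        e = np.sum((T1 - T2) ** 2, axis=-1)    # pixel-wise squared error
        maps.append((e - e.mean()) / e.std())  # per-model normalization
    return np.mean(maps, axis=0)

# One toy model whose features differ only in the top-left 2 x 2 block
T1 = np.zeros((4, 4, 2))
T2 = np.zeros((4, 4, 2))
T2[:2, :2, :] = 1.0
intensity = change_intensity([T1], [T2])
```

Pixels whose features disagree across dates end up with a higher standardized intensity than unchanged pixels.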
One can see from Fig. 2 that pixels can be classified as changed and unchanged by thresholding the feature distance in the change intensity map.
In this case, two strategies are considered.
The simplest strategy is to take the opposite of the minimum value of the standardized intensity map as the threshold value.
An alternative strategy is the Rosin thresholding method [44], which is robust and suitable for long-tailed distribution curves.
In this method, the threshold value is the "corner" of the distribution curve of the intensity map, i.e., the point of maximum deviation from the straight line drawn between the endpoints of the curve.
In our technique, the threshold value is determined by the first strategy if the absolute difference of the two threshold values is smaller than half of their average value.
Otherwise, the threshold value is determined by the Rosin thresholding method.
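Both thresholding strategies can be sketched as follows; the corner threshold is implemented from the verbal description above (maximum deviation of the histogram from the chord between its endpoints), and the number of histogram bins is our assumption:

```python
import numpy as np

def corner_threshold(values, bins=256):
    """Threshold at the histogram point with the maximum perpendicular
    distance from the straight line joining the histogram endpoints."""
    hist, edges = np.histogram(values, bins=bins)
    x = 0.5 * (edges[:-1] + edges[1:])       # bin centres
    p0 = np.array([x[0], float(hist[0])])
    p1 = np.array([x[-1], float(hist[-1])])
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    pts = np.stack([x, hist.astype(float)], axis=1) - p0
    dist = np.abs(pts[:, 0] * d[1] - pts[:, 1] * d[0])
    return x[np.argmax(dist)]

def select_threshold(intensity):
    """Choose between the two strategies described in the text."""
    t_simple = -intensity.min()              # opposite of the minimum value
    t_corner = corner_threshold(intensity.ravel())
    avg = 0.5 * (t_simple + t_corner)
    if abs(t_simple - t_corner) < 0.5 * abs(avg):
        return t_simple
    return t_corner

# The binary change map is then: change_map = intensity > threshold
```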
IV Experimental Results
In this section, we first present the considered datasets, then the state-of-the-art change detection methods used in the comparison, and finally conduct a thorough analysis of the performance of different approaches and of their results.
IV-A Description of Datasets
We developed our experiments on five different datasets including three homogeneous datasets and two heterogeneous datasets.
All remote sensing images in this work are raw images from Google Earth Engine (GEE) without any specific pre-processing.
IV-A1 OSCD_S2S2/_S1S1/_S1S2/_L8S2
The Onera Satellite Change Detection (OSCD) dataset [45] was created for bi-temporal change detection using Sentinel-2 images acquired between 2015 and 2018.
These images have a total of 13 bands with a relatively high resolution (10 m) for Visible (VIS) and near-infrared (NIR) band images and 60 m resolution for other spectral channels.
The images of this dataset include urban areas and present the change type of urban growth and changes.
The dataset consists of 24 pairs of multispectral images and the corresponding pixel-wise ground truth acquired in different cities and including different landscapes.
The pixel-wise ground truth labels, which were manually annotated, were also provided for each pair but with some errors due to the relatively limited resolution of Sentinel-2 images.
In the original supervised setting, 14 pairs were selected for the training set and the remaining 10 pairs were used to evaluate the performance of the methods.
To use this dataset in self-supervised training, we downloaded additional Sentinel-2 images in the same location as the original bi-temporal images between 2016 and 2020.
We considered images from each month to augment existing image pairs.
Similarly, Landsat-8 multi-temporal images and Sentinel-1 ground range detected (GRD) image products are also provided in this dataset corresponding to the given Sentinel-2 scenes.
The Landsat-8 images have nine channels covering the spectrum from deep blue to shortwave infrared plus two long-wave infrared channels, and their resolutions range from 15 m to 100 m.
The Sentinel-1 GRD products have been terrain corrected, multi-looked, and transformed to the ground range and geographical coordinates.
They consist of two channels including Vertical-Horizontal (VH) and Vertical-Vertical (VV) polarization as well as of additional information on the incidence angle.
To use this dataset for multi-view change detection, we separate it into four sub-datasets: OSCD_S2S2, OSCD_S1S1, OSCD_S1S2 and OSCD_L8S2.
These datasets are composed of homogeneous multi-temporal optical or SAR images (OSCD_S2S2, OSCD_S1S1, OSCD_L8S2) and heterogeneous multi-temporal SAR-optical images (OSCD_S1S2).
To keep consistency with previous research, 10 image pairs of these four datasets corresponding to the OSCD test image pairs are treated as the test dataset to evaluate the performance of different methods, and image pairs acquired on other scenes and on each month of four years are used for the self-supervised pre-training.
In practice, it is impossible to acquire the test image pairs of OSCD_S1S1, OSCD_L8S2, and OSCD_S1S2 at the same time as the OSCD_S2S2.
Hence, we only obtained these image pairs at the closest time to OSCD_S2S2 test image pairs.
IV-A2 Flood in California
The California dataset is also a heterogeneous dataset that includes a Landsat-8 (multi-spectral) and a Sentinel-1 GRD (SAR) image.
The multispectral and SAR images are acquired on 5 January 2017 and 18 February 2017, respectively.
The dataset represents a flood that occurred in Sacramento County, Yuba County, and Sutter County, California.
The ground truth was extracted from a Sentinel-1 SAR image pair where the pre-event image is acquired approximately at the same time as the Landsat-8 image.
However, we realized that the ground truth in [31] contains many mistakes.
Hence, we updated the reference data with the PCC method according to bi-temporal Sentinel-1 images.
Three other pairs of Sentinel-1 and Landsat-8 images of the same scene, acquired in 2017 and 2018, were used for the self-supervised pre-training of the proposed SSL approach.
IV-B Experimental Settings
IV-B1 Literature Methods for Comparison
We considered different state-of-the-art methods for comparisons with the proposed SSL approach on the five datasets mentioned above.
On the first two homogeneous datasets (OSCD_S2S2 and OSCD_L8S2), the proposed SSL approach was compared with two unsupervised deep learning approaches (DSFA [27] and CAA [36]) and two deep supervised methods (FC-EF [10] and FC-EF-Res [46]).
Deep Slow Feature Analysis (DSFA) is a deep learning-based multi-temporal change detection method consisting of two symmetric deep networks and based on the slow feature analysis theory (SFA).
The two-stream CNNs are used to extract image features and detect changes based on SFA.
Code-Aligned Autoencoders (CAA) is a deep unsupervised methodology to align the code spaces of two autoencoders based on affinity information extracted from the multi-modal input data.
It allows achieving a latent space entanglement even when the input images contain changes by decreasing the interference of changed pixels.
However, it degrades its performance when only one input channel is considered.
It is also well suited for homogeneous change detection, as it does not depend on any prior knowledge of the data.
Fully convolutional-early fusion (FC-EF) is considered for the supervised change detection method on the OSCD dataset.
In this method, the bi-temporal image pair is stacked together as the input.
The architecture of FC-EF is based on U-Net [47], where the skip connections between encoder and decoder help to localize the spatial information more precisely and get clear change boundaries.
FC-EF-Res is an extension of FC-EF with residual blocks to improve the accuracy of change results.
In addition, it is worth noting that the first dataset (OSCD_S2S2) has previously been extensively used in other works.
Hence, we also compare our results with those of some conventional methods [45] (Log-ratio, GLRT and Image difference), an unsupervised deep learning method (ACGAN [48]) and supervised deep learning techniques (FC-Siam-conc and FC-Siam-diff [45]) reported in previous papers.
On the Sentinel-1 SAR images dataset, only unsupervised methods (DSFA, SCCN, and CAA) are used for comparison.
Note that some change information present in multi-spectral images is not detectable in SAR images, hence we did not use supervised methods on them.
On the two heterogeneous remote sensing image datasets (OSCD_S1S2 and California), two state-of-the-art methods are used for comparisons, including the symmetric convolutional coupling network (SCCN) and CAA.
Considering that only significant changes in the backscattering of SAR images can be detected, we only consider the Las Vegas site in the OSCD_S1S2 dataset.
Similar to CAA, SCCN is an unsupervised multi-modal change detection method that exploits an asymmetrical convolutional coupling network to project the heterogeneous image pairs onto the common feature space.
This method is also used in the homogeneous SAR image pairs in our experiments.
IV-B2 Implementation details
We take the ResNet-34 as the backbone of two branches of the pseudo-Siamese network to get feature vectors of corresponding image patches.
In particular, we change the stride from 2 to 1 in the third and fourth layers of the backbone to adapt the network to the relatively small input size.
In order to capture the different scales of change, we use three different patch sizes (p = 8, 16, 24 pixels) for the homogeneous image change detection task and two different patch sizes (p = 8, 16 pixels) for the heterogeneous change detection task.
During the training on OSCD_S2S2, we randomly composed all images acquired at different dates into pairs as the input, while SAR/multi-spectral image pairs acquired in the same month were used as input pairs for the rest of the multi-sensor datasets.
After the training process is finished, the test image pairs are fed into the pre-trained network and the related change intensity maps are derived.
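A minimal sketch of how a per-patch change score could be derived from the two branches' outputs (the feature extractor itself is omitted, and both the function name and the use of a mean-squared regression error are our own assumptions; the paper's exact distance measure may differ):

```python
import numpy as np

def change_score(feat_t1, feat_t2):
    """Change score for one patch pair: the regression (mean squared)
    error between the feature vectors produced by the two branches of
    the pseudo-Siamese network.  A larger error indicates a higher
    change probability for that patch."""
    feat_t1 = np.asarray(feat_t1, dtype=float)
    feat_t2 = np.asarray(feat_t2, dtype=float)
    return float(np.mean((feat_t1 - feat_t2) ** 2))

# identical features -> score 0; orthogonal unit features -> positive score
f1 = np.array([1.0, 0.0, 0.0])
f2 = np.array([0.0, 1.0, 0.0])
print(change_score(f1, f1), change_score(f1, f2))
```

Evaluating this score for every patch position yields the change intensity map, which can then be thresholded into a binary change map.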
For the supervised method (FC-EF and FC-EF-Res), we used the 14 bi-temporal training images considered in the previous work [46].
In both the self-supervised and supervised methods, we use four channels (VIS and NIR) of the Landsat-8 and Sentinel-2 images, and two polarizations (VH and VV) of the Sentinel-1 images.
CAA and SCCN methods require heterogeneous image pairs having the same number of input channels.
Accordingly, to keep consistency with the four input channels of the multi-spectral images, we augmented the Sentinel-1 images with the sum and difference of the two polarizations as the other two channels.
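The channel augmentation just described can be sketched as follows (a minimal illustration with a toy patch; the function name and values are our own, not from the authors' code):

```python
import numpy as np

def augment_sar_channels(sar):
    """Expand a 2-channel (VH, VV) Sentinel-1 patch of shape (H, W, 2)
    to 4 channels by appending the sum and difference of the two
    polarizations, matching the 4 input channels of the multi-spectral
    branch."""
    vh, vv = sar[..., 0], sar[..., 1]
    return np.stack([vh, vv, vh + vv, vh - vv], axis=-1)

# toy 2x2 patch with two polarizations
patch = np.arange(8, dtype=float).reshape(2, 2, 2)
print(augment_sar_channels(patch).shape)  # (2, 2, 4)
```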
IV-B3 Evaluation Criteria
To appraise the different methods presented above, five evaluation metrics (precision (Pre), recall (Rec), overall accuracy (OA), F1 score and Cohen’s kappa score (Kap)) are used in this paper.
We simply classify the image pixels into two classes by setting an appropriate threshold value according to the presented strategy and analyze them with reference to the ground truth map.
Then, the number of unchanged pixels incorrectly flagged as change is denoted by $FP$ (false positive) and the number of changed pixels incorrectly flagged as unchanged is denoted by $FN$ (false negative).
In addition, the number of changed pixels correctly detected as change is denoted by $TP$ (true positive) and the number of unchanged pixels correctly detected as unchanged is denoted by $TN$ (true negative).
From these four quantities, the five evaluation metrics can be defined as:
$$Pre=\frac{TP}{TP+FP}\qquad(7)$$
$$Rec=\frac{TP}{TP+FN}\qquad(8)$$
$$F_{1}=\frac{2\,Pre\cdot Rec}{Pre+Rec}\qquad(9)$$
$$OA=\frac{TP+TN}{TP+TN+FP+FN}\qquad(10)$$
$$Kap=\frac{OA-PE}{1-PE}\qquad(11)$$
$$PE=\frac{(TP+FP)\cdot(TP+FN)+(FN+TN)\cdot(FP+TN)}{(TP+TN+FP+FN)^{2}}\qquad(12)$$
Obviously, a higher value of $Pre$ corresponds to fewer false alarms, and a higher value of $Rec$ corresponds to fewer missed detections.
The overall accuracy $OA$ is the ratio between correctly detected pixels and all pixels of the image.
However, these three metrics will give a misleading over-estimate of the result when the amount of changed pixels is a small fraction of the image.
$F1$ score and $Kap$ can overcome the problem of $Pre$ and $Rec$ and better reveal the overall performance.
Note that large $F1$ and $Kap$ values represent better overall performance.
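The five metrics of Eqs. (7)-(12) can be computed directly from the confusion-matrix counts, as in the following straightforward sketch (not the authors' code):

```python
def change_metrics(tp, tn, fp, fn):
    """Compute Pre, Rec, F1, OA and Cohen's kappa, Eqs. (7)-(12),
    from the confusion-matrix counts of a binary change map."""
    n = tp + tn + fp + fn
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)
    oa = (tp + tn) / n
    # expected agreement by chance, Eq. (12)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kap = (oa - pe) / (1 - pe)
    return {"Pre": pre, "Rec": rec, "F1": f1, "OA": oa, "Kap": kap}

# toy counts for illustration
print(change_metrics(tp=40, tn=50, fp=5, fn=5))
```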
IV-C Results on Homogeneous Datasets
We first evaluate the change detection performance of the proposed approach and state-of-the-art methods (DSFA, CAA and supervised methods) applied to the homogeneous change detection scenario.
This includes bi-temporal Sentinel-2 images (OSCD_S2S2 test dataset), bi-temporal Landsat-8/Sentinel-2 images (OSCD_L8S2 test dataset) and bi-temporal Sentinel-1 images (OSCD_S1S1 test dataset).
The performance metrics obtained on the OSCD_S2S2 test dataset are reported in Table I.
As expected, the FC-EF and FC-EF-Res supervised methods applied to raw images achieved the best performance in terms of Precision, OA, F1 and Kappa, but not on Recall.
Among all unsupervised methods, the proposed SSL approach, with an OA of 92.5% and a Kappa coefficient of 0.42, obtained the best performance on all five metrics and the third-best performance among all methods (including the supervised ones) implemented in this work.
Although two supervised methods performed better than other methods on most metrics, they have a much worse performance on Recall than the proposed SSL approach.
It is also worth noting that the proposed SSL approach is effective in closing the gap with the supervised methods on Kappa, which indicates its effective overall performance.
In addition, the results of other unsupervised methods (i.e., ACGAN, Image difference, GLRT, and Log-ratio) and supervised methods (i.e., Siamese and EF) on VIS and NIR channels in [45] are reported in the table.
They are all worse than those of the proposed SSL approach.
The results of other supervised methods (i.e., FC-EF*, FC-EF-Res*, FC-Siamese-Con* and FC-Siamese-Diff*) applied to carefully processed RGB channel images are reported in the last rows of Table I.
Their accuracies on most metrics are slightly better than those of the proposed SSL approach, but these results cannot be achieved when working on raw images, as a high registration precision is required.
Indeed, in the related papers, multi-temporal images are carefully co-registered using GEFolki toolbox to improve the accuracy of change maps [45].
On the contrary, the proposed SSL approach is based on image patches, for which the registration precision of the Sentinel system is sufficient to obtain a good change map.
Besides the quantitative analysis, we also provide a visual qualitative comparison in Fig. 3, where the TP, TN, FN and FP pixels are colored in green, white, blue and red, respectively.
One can see that change maps provided by DSFA and CAA are affected by a significant salt-and-pepper noise where plenty of unchanged buildings are misclassified as changed ones.
This is due to the lack of use of spatial context information in these methods.
This issue is well addressed by the proposed SSL approach and the FC-EF-Res supervised method, which provide better maps.
Most of the changed pixels are correctly detected in the proposed SSL approach, but with more false alarms than in the supervised FC-EF-Res method.
Note that this is probably due to some small changes that are ignored in the ground truth.
Nonetheless, since these results are computed patch-wise, some small objects are misclassified and the proposed SSL approach produces false alarms on the boundaries of buildings.
A possible reason is that the small-patch-based method has a limited ability to learn spatial context information.
Instead, the change maps obtained by the FC-EF-Res method are in general more accurate and less noisy due to the use of spatial-spectral information in U-Net and the supervised learning algorithm.
However, the FC-EF-Res method failed to detect most of changed pixels in the first scenario.
This confirms that the change detection results of supervised methods heavily rely on the change type distribution and the quality of training samples.
This is not an issue for the proposed SSL approach.
The performance of each model is also validated on the OSCD_L8S2 test dataset, which was obtained by different optical sensors having different spatial resolutions, and the quantitative evaluation is reported in Table II.
In general, the supervised methods outperform DSFA and CAA considering all five metrics.
However, the performance of FC-EF-Res on Recall is much worse than those of CAA and the proposed SSL approach.
Meanwhile, the proposed SSL approach, with an overall accuracy of 92.6% and a Kappa coefficient of 0.29, obtained the best accuracy among the unsupervised methods and is very close to the supervised methods on all five metrics.
Fig. 4 presents the binary change maps obtained by all methods on the OSCD_L8S2.
One can see that the change maps contain a larger number of false alarms for all methods compared with the maps obtained on the OSCD_S2S2.
This is probably due to the relatively lower resolution of Landsat-8 VIS and NIR channel images with respect to the counterparts in Sentinel-2 images.
Consistently with the results obtained on OSCD_S2S2 (see Fig. 3), the proposed SSL approach has a better segmentation result but with lower accuracy on all metrics, which indicates that the different resolution images increase the difficulty of change detection tasks.
To complete the evaluation on homogeneous datasets, the performance of all unsupervised methods are validated on the OSCD_S1S1 test dataset.
The quantitative results are reported in Table II, which shows that the proposed SSL approach produces a better accuracy than other methods on all metrics, except for OA.
The binary change maps obtained by each unsupervised methods are shown in Fig. 5.
One can see that all results appear much noisier due to the influence of speckle in SAR images.
It is worth noting that only a new building that appeared in the post-event SAR image can be detected because minor growth of the building does not cause significant backscatter change.
Apart from this, the boundaries of the detected objects are not as accurate as those in the optical dataset due to the side-looking imaging mechanism.
In addition, the performance of the proposed SSL approach on OSCD_S1S1 is close to that obtained on OSCD_L8S2 but with fewer correct detections and more false alarms than the latter.
In general, the above three experiments based on homogeneous images demonstrate that the proposed SSL approach obtained the best quantitative and qualitative performance with respect to all the other considered unsupervised change detection techniques.
IV-D Results on Heterogeneous Datasets
In the second change detection scenario, we consider two heterogeneous datasets which consist of a Sentinel-1/Sentinel-2 image pair (OSCD_S1S2) and a Sentinel-1/Landsat-8 image pair (California).
The performance of three unsupervised methods (SCCN, CAA and SSL) on OSCD_S1S2 is reported in Table III.
One can see that the proposed SSL approach performs much better than the other two unsupervised methods on most metrics, owing to its separate pre-training on archived images.
In contrast, SCCN and CAA are both trained on the test image only, and the complicated background in the scene makes it hard for them to separate the unchanged pixels for network training, causing many false alarms in the change detection maps.
Compared with the results obtained in the homogeneous experiments, the results presented here are much worse.
This demonstrates the difficulty of heterogeneous change detection in complicated backgrounds, such as an urban area.
Fig. 6 presents the qualitative visual results in terms of binary change maps.
One can observe that the results provided by SCCN and CAA are affected by many more missed detections and false alarms than in the homogeneous case.
The result of the proposed SSL approach has fewer false alarms but with more missed detections with respect to the homogeneous setting owing to the larger domain discrepancy.
Differently from the previous dataset, the California dataset is related to a simpler background and to more significant changes resulting from the flood.
Table III presents the results of all methods on this dataset.
The three unsupervised methods (SCCN, CAA and SSL) have similar performance on overall evaluation metrics (OA, F1 and Kappa).
The SCCN achieves the best Recall, F1 score, Kappa and the second-best values on Precision and OA, while the CAA achieved the highest Precision and OA values.
The proposed SSL approach gets the second-best values on three of five metrics, thus it does not show obvious superiority.
Fig. 6 illustrates the Landsat 8 and Sentinel-1 images and the change maps from the compared methods.
Maps provided by SCCN and CAA show a clear boundary of change areas, whereas the one of the proposed SSL approach is less precise.
The map of SCCN contains more false alarms, while the map of the CAA has more missed detections.
Even if the performance of the proposed SSL approach on the California dataset is not the best, it is still no worse than that of the other two methods considering all five metrics.
In general, considering the results on the two heterogeneous test datasets, the proposed SSL approach is the most accurate, followed by CAA, which is only slightly worse.
V Conclusion
In this work, we have presented a self-supervised approach to unsupervised change detection in multi-view remote sensing images, which can be used with both multi-sensor and multi-temporal images.
The main idea of the presented framework is to extract a good feature representation space from homogeneous and heterogeneous images using contrastive learning.
Images from satellite mission archives are used to train the pseudo-Siamese network without using any label.
Under the reasonable assumption that the change event is rare in long-time archived images, the network can properly align the features learned from images obtained at different times even when they contain changes.
After completing the pre-training process, the regression error of image patches captured from bi-temporal images can be used as a change score to indicate the change probability.
If required, a binary change map can be directly calculated from change intensity maps by using a thresholding method.
Experimental results on both homogeneous and heterogeneous remote sensing image datasets proved that the proposed SSL approach is applicable in practice, and demonstrated its superiority over several state-of-the-art unsupervised methods.
Results also show that the performance declines when the resolution of the two sensors is different in a homogeneous setting.
Moreover, in the SAR-optical change detection setting, the change detection results are affected by the complexity of the background.
As a final remark, note that in this work we only considered bi-temporal images to detect changes, which limits the ability to suppress false alarms.
Our future work will be focused on the refinement of changed maps by further decreasing false alarms by combining a larger number of images from the time-series.
Acknowledgment
The authors would like to thank Yuanlong Tian and Thalles Silva for their open-source code in their work. This study was supported by the China Scholarship Council.
References
[1]
L. Bruzzone and S. Serpico, “Detection of changes in remotely-sensed images by
the selective use of multi-spectral information,” International
Journal of Remote Sensing, vol. 18, no. 18, pp. 3883–3888, 1997.
[2]
L. Bruzzone and D. F. Prieto, “Automatic analysis of the difference image for
unsupervised change detection,” IEEE Transactions on Geoscience and
Remote sensing, vol. 38, no. 3, pp. 1171–1182, 2000.
[3]
F. Bovolo and L. Bruzzone, “A theoretical framework for unsupervised change
detection based on change vector analysis in the polar domain,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 45, no. 1, pp. 218–236,
2006.
[4]
L. Bruzzone and L. Carlin, “A multilevel context-based system for
classification of very high spatial resolution images,” IEEE
transactions on Geoscience and Remote Sensing, vol. 44, no. 9, pp.
2587–2600, 2006.
[5]
S. Ghosh, L. Bruzzone, S. Patra, F. Bovolo, and A. Ghosh, “A context-sensitive
technique for unsupervised change detection based on hopfield-type neural
networks,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 45, no. 3, pp. 778–789, 2007.
[6]
S. Saha, F. Bovolo, and L. Bruzzone, “Unsupervised deep change vector analysis
for multiple-change detection in vhr images,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3677–3693, 2019.
[7]
Y. Zhan, K. Fu, M. Yan, X. Sun, H. Wang, and X. Qiu, “Change detection based
on deep siamese convolutional network for optical aerial images,” IEEE
Geoscience and Remote Sensing Letters, vol. 14, no. 10, pp. 1845–1849,
2017.
[8]
D. P. Roy, H. Huang, L. Boschetti, L. Giglio, L. Yan, H. H. Zhang, and Z. Li,
“Landsat-8 and sentinel-2 burned area mapping-a combined sensor
multi-temporal change detection approach,” Remote Sensing of
Environment, vol. 231, p. 111254, 2019.
[9]
M. Huang and S. Jin, “Rapid flood mapping and evaluation with a supervised
classifier and change detection in shouguang using sentinel-1 sar and
sentinel-2 optical data,” Remote Sensing, vol. 12, no. 13, p. 2073,
2020.
[10]
R. C. Daudt, B. Le Saux, and A. Boulch, “Fully convolutional siamese networks
for change detection,” in 2018 25th IEEE International Conference on
Image Processing (ICIP). IEEE, 2018,
pp. 4063–4067.
[11]
F. Rahman, B. Vasu, J. Van Cor, J. Kerekes, and A. Savakis, “Siamese network
with multi-level features for patch-based change detection in satellite
imagery,” in 2018 IEEE Global Conference on Signal and Information
Processing (GlobalSIP). IEEE, 2018,
pp. 958–962.
[12]
D. Peng, Y. Zhang, and H. Guan, “End-to-end change detection for high
resolution satellite images using improved unet++,” Remote Sensing,
vol. 11, no. 11, p. 1382, 2019.
[13]
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, and Y. Bengio, “Generative adversarial nets,” in
Advances in neural information processing systems, 2014, pp.
2672–2680.
[14]
J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked
convolutional auto-encoders for hierarchical feature extraction,” in
International conference on artificial neural networks. Springer, 2011, pp. 52–59.
[15]
X. Liu, F. Zhang, Z. Hou, Z. Wang, L. Mian, J. Zhang, and J. Tang,
“Self-supervised learning: Generative or contrastive,” arXiv preprint
arXiv:2006.08218, vol. 1, no. 2, 2020.
[16]
Y. Tian, D. Krishnan, and P. Isola, “Contrastive multiview coding,”
arXiv preprint arXiv:1906.05849, 2019.
[17]
A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with
contrastive predictive coding,” arXiv preprint arXiv:1807.03748,
2018.
[18]
J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond,
E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar et al.,
“Bootstrap your own latent: A new approach to self-supervised learning,”
arXiv preprint arXiv:2006.07733, 2020.
[19]
K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for
unsupervised visual representation learning,” in Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.
9729–9738.
[20]
J. Deng, K. Wang, Y. Deng, and G. Qi, “Pca-based land-use change detection and
analysis using multitemporal and multisensor satellite data,”
International Journal of Remote Sensing, vol. 29, no. 16, pp.
4823–4838, 2008.
[21]
A. A. Nielsen, K. Conradsen, and J. J. Simpson, “Multivariate alteration
detection (mad) and maf postprocessing in multispectral, bitemporal image
data: New approaches to change detection studies,” Remote Sensing of
Environment, vol. 64, no. 1, pp. 1–19, 1998.
[22]
C. Wu, B. Du, and L. Zhang, “Slow feature analysis for change detection in
multispectral imagery,” IEEE Transactions on Geoscience and Remote
Sensing, vol. 52, no. 5, pp. 2858–2874, 2013.
[23]
Y. T. Solano-Correa, F. Bovolo, and L. Bruzzone, “An approach for unsupervised
change detection in multitemporal vhr images acquired by different
multispectral sensors,” Remote Sensing, vol. 10, no. 4, p. 533, 2018.
[24]
V. Ferraris, N. Dobigeon, Q. Wei, and M. Chabert, “Detecting changes between
optical images of different spatial and spectral resolutions: a fusion-based
approach,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 56, no. 3, pp. 1566–1578, 2017.
[25]
L. Bruzzone and F. Bovolo, “A novel framework for the design of
change-detection systems for very-high-resolution remote sensing images,”
Proceedings of the IEEE, vol. 101, no. 3, pp. 609–630, 2012.
[26]
J. Liu, M. Gong, A. K. Qin, and K. C. Tan, “Bipartite differential neural
network for unsupervised image change detection,” IEEE Transactions on
Neural Networks and Learning Systems, vol. 31, no. 3, pp. 876–890, 2019.
[27]
B. Du, L. Ru, C. Wu, and L. Zhang, “Unsupervised deep slow feature analysis
for change detection in multi-temporal remote sensing images,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 57, no. 12, pp.
9976–9992, 2019.
[28]
N. Lv, C. Chen, T. Qiu, and A. K. Sangaiah, “Deep learning and superpixel
feature extraction based on contractive autoencoder for change detection in
sar images,” IEEE transactions on industrial informatics, vol. 14,
no. 12, pp. 5530–5538, 2018.
[29]
C. Ren, X. Wang, J. Gao, and H. Chen, “Unsupervised change detection in
satellite images with generative adversarial network,” arXiv preprint
arXiv:2009.03630, 2020.
[30]
M. Gong, P. Zhang, L. Su, and J. Liu, “Coupled dictionary learning for change
detection from multisource data,” IEEE Transactions on Geoscience and
Remote sensing, vol. 54, no. 12, pp. 7077–7091, 2016.
[31]
L. T. Luppino, F. M. Bianchi, G. Moser, and S. N. Anfinsen, “Unsupervised
image regression for heterogeneous change detection,” arXiv preprint
arXiv:1909.05948, 2019.
[32]
Y. Sun, L. Lei, X. Li, H. Sun, and G. Kuang, “Nonlocal patch similarity based
heterogeneous remote sensing change detection,” Pattern Recognition,
vol. 109, p. 107598.
[33]
W. Zhao, Z. Wang, M. Gong, and J. Liu, “Discriminative feature learning for
unsupervised change detection in heterogeneous images based on a coupled
neural network,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 55, no. 12, pp. 7066–7080, 2017.
[34]
X. Niu, M. Gong, T. Zhan, and Y. Yang, “A conditional adversarial network for
change detection in heterogeneous images,” IEEE Geoscience and Remote
Sensing Letters, vol. 16, no. 1, pp. 45–49, 2018.
[35]
L. T. Luppino, M. Kampffmeyer, F. M. Bianchi, G. Moser, S. B. Serpico,
R. Jenssen, and S. N. Anfinsen, “Deep image translation with an
affinity-based change prior for unsupervised multimodal change detection,”
arXiv preprint arXiv:2001.04271, 2020.
[36]
L. T. Luppino, M. A. Hansen, M. Kampffmeyer, F. M. Bianchi, G. Moser,
R. Jenssen, and S. N. Anfinsen, “Code-aligned autoencoders for unsupervised
change detection in multimodal remote sensing images,” arXiv preprint
arXiv:2004.07011, 2020.
[37]
W. Shi, M. Zhang, R. Zhang, S. Chen, and Z. Zhan, “Change detection based on
artificial intelligence: State-of-the-art and challenges,” Remote
Sensing, vol. 12, no. 10, p. 1688, 2020.
[38]
X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel,
“Infogan: Interpretable representation learning by information maximizing
generative adversarial nets,” in Advances in neural information
processing systems, 2016, pp. 2172–2180.
[39]
J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore,
E. Säckinger, and R. Shah, “Signature verification using a “siamese”
time delay neural network,” International Journal of Pattern
Recognition and Artificial Intelligence, vol. 7, no. 04, pp. 669–688, 1993.
[40]
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” in Proceedings of the IEEE conference on computer vision
and pattern recognition, 2016, pp. 770–778.
[41]
A. Tarvainen and H. Valpola, “Mean teachers are better role models:
Weight-averaged consistency targets improve semi-supervised deep learning
results,” in Advances in neural information processing systems, 2017,
pp. 1195–1204.
[42]
J. Robinson, C.-Y. Chuang, S. Sra, and S. Jegelka, “Contrastive learning with
hard negative samples,” arXiv preprint arXiv:2010.04592, 2020.
[43]
X. Chen and K. He, “Exploring simple siamese representation learning,”
arXiv preprint arXiv:2011.10566, 2020.
[44]
P. L. Rosin and J. Hervás, “Remote sensing image thresholding methods for
determining landslide activity,” International Journal of Remote
Sensing, vol. 26, no. 6, pp. 1075–1092, 2005.
[45]
R. C. Daudt, B. Le Saux, A. Boulch, and Y. Gousseau, “Urban change detection
for multispectral earth observation using convolutional neural networks,” in
IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing
Symposium. IEEE, 2018, pp.
2115–2118.
[46]
R. Daudt, B. Le Saux, A. Boulch, and Y. Gousseau, “Multitask learning for
large-scale semantic change detection,” Computer Vision and Image
Understanding, vol. 187, p. 102783, 2019.
[47]
O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for
biomedical image segmentation,” in International Conference on Medical
image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
[48]
S. Saha, Y. T. Solano-Correa, F. Bovolo, and L. Bruzzone, “Unsupervised deep
learning based change detection in sentinel-2 images,” in 2019 10th
International Workshop on the Analysis of Multitemporal Remote Sensing Images
(MultiTemp). IEEE, 2019, pp. 1–4.
Search for Deconfined Criticality: SU(2) Déjà Vu
A.B. Kuklov
Department of Engineering Science and Physics, CUNY,
Staten Island, NY 10314
M. Matsumoto
Theoretische Physik, ETH Zurich, 8093 Zurich, Switzerland
Department of Physics, University of California,
Davis, CA 95616
N.V. Prokof’ev
Theoretische Physik, ETH Zurich, 8093 Zurich, Switzerland
Department of Physics, University of
Massachusetts, Amherst, MA 01003, USA
Russian Research Center
“Kurchatov Institute”, 123182 Moscow, Russia
B.V. Svistunov
Department of Physics, University of Massachusetts,
Amherst, MA 01003, USA
Russian Research Center
“Kurchatov Institute”, 123182 Moscow, Russia
M. Troyer
Theoretische Physik, ETH Zurich, 8093 Zurich, Switzerland
(December 3, 2020)
Abstract
Monte Carlo simulations of the SU(2)-symmetric deconfined critical point action
reveal strong violations of scale invariance for the deconfinement transition. We find
compelling evidence that the generic runaway renormalization flow of the
gauge coupling is to a weak first order transition, similar to the case of U(1)$\times$U(1) symmetry. Our
results imply that recent numeric studies of the Néel antiferromagnet
to valence bond solid quantum phase transition in SU(2)-symmetric
models were not accurate enough in determining the nature of the transition.
pacs: 05.30.-d, 75.10.-b, 05.50.+q
Within the standard Ginzburg-Landau-Wilson description of critical
phenomena a direct transition between states which break different
symmetries is expected to be of first-order.
The existence of a generic line
of deconfined critical points (DCP) proposed in
Refs. Motrunich ; dcp1 ; dcp2 — an exotic second-order phase transition
between two competing orders — remains one of the most intriguing
and controversial topics in the modern theory of phase transitions.
In particular, the DCP theory makes the prediction that certain types of superfluid
to solid and the Nèel antiferromagnet to valence bond solid (VBS)
quantum phase transitions in 2D lattice systems can be continuous. Remarkably, the new criticality is in the
same universality class as a 3D system of $N=2$ identical complex-valued
classical fields coupled to a gauge vector field (referred to as the DCP action below). This makes the DCP theory
relevant also for the superfluid to normal liquid
transition in symmetric two-component superconductors Babaev .
An intrinsic difficulty in understanding properties of the $N$-component
DCP action is its runaway renormalization flow to strong coupling at large scales
and the absence of perturbative fixed points for realistic $N$ Halperin ; Sachdev .
One may only speculate that the value of $N$ might be of little importance since
the possibility of the continuous transition for $N=1$ is guaranteed by the exact duality
mapping between the inverted-XY and XY universality classes duality , while for large $N$ (of the order of a hundred) it follows from the large-$N$ expansion.
However, there are no exact analytic results either showing that in a
two-component system there exists a generic line of second-order phase transitions,
or proving that the second-order phase transition is fundamentally impossible.
The problem of deconfined criticality for the most interesting
case of $N=2$ thus has to be resolved by numerical simulations.
The initial effort was focused on models of the superfluid to solid quantum phase transitions
and U(1)$\times$U(1)-symmetric DCP actions Sandvik2002 ; Motrunich .
First claims of deconfined criticality were confronted with
the observation of weak first-order transitions in other models weak_first .
While presenting a particular model featuring a first order phase transition does not prove
the impossibility of a continuous DCP yet, it does raise a warning flag. One needs to pay special attention to any signatures of violation of the scale invariance which may be indicative of a runaway flow to a first-order transition
even when all other quantities appear to change continuously
due to limited system sizes available in simulations prog_theor .
The flowgram method flowgram was developed as a generic tool for monitoring such
runaway flows to strong
coupling and was
used to prove the generic first-order nature of the deconfinement transition in the
U(1)$\times$U(1)-symmetric DCP action. A subsequent refined analysis resulted in the
reconsideration of the original claims in favor of a discontinuous transition for all known models
Sandvik2006 ; Sudbo2006 .
Recently the SU(2)-symmetric case has been studied in a series of papers
Sandvik2007 ; Melko2008 ; MV and an exciting observation of a
continuous DCP was reported. However, the story seems to
repeat itself since renormalization flows for the $J$-$Q$ model studied in
Refs. Sandvik2007 ; Melko2008 were shown to be in violation of
scale invariance and, possibly, indicative of the first-order transition Wiese .
In this Letter we show that a runaway flow
to strong coupling and a first order transition is a generic feature
of all SU(2)-symmetric DCP models
analogous to the U(1)$\times$U(1) case su2 .
For our simulations we consider the lattice version of the SU(2)-symmetric NCCP${}^{1}$ model
dcp1 ; dcp2 and map it onto the two-component $J$-current model.
The DCP action for two spinon fields $z_{a},\,a=1,2$ on a three-dimensional simple cubic
lattice is defined as
$$S=-\sum_{\langle ij\rangle,a}t\,(z^{*}_{ai}z_{aj}e^{iA_{\langle ij\rangle}}+{\rm c.c.})+\frac{1}{8g}\sum_{\Box}(\nabla\times A)^{2}\;;\quad\sum_{a}|z_{ai}|^{2}=1\;,\qquad(1)$$
where $\langle ij\rangle$ runs over nearest-neighbor pairs of sites $i,j$,
the gauge field $A_{<ij>}$ is defined on the bonds,
and $\nabla\times A$ is a short-hand notation for the lattice curl-operator.
The mapping to the $J$-current model starts from the partition function $Z=\int DzDz^{*}DA\exp(-S)$ and a Taylor expansion of the exponentials $\exp\{tz^{*}_{ai}z_{aj}e^{iA_{<ij>}}\}$ and $\exp\{tz^{*}_{aj}z_{ai}e^{-iA_{<ij>}}\}$ on all bonds. One can then perform an
explicit Gaussian integration over $A_{<ij>},\,z_{ai}$ and arrive at a
formulation in terms of integer non-negative bond currents $J^{(a)}_{i,\mu}$.
We use $\mu=\pm 1,\pm 2,\pm 3$ to label the directions of bonds going out of a given site; the corresponding unit vectors are denoted by $\hat{\mu}$.
These $J$-currents obey the conservation laws:
$$\sum_{\mu}I^{(a)}_{i,\mu}=0,\quad\mbox{with}\quad I^{(a)}_{i,\mu}\equiv J^{(a)}_{i,\mu}-J^{(a)}_{i+\hat{\mu},-\mu}.\qquad(2)$$
The final expression for the partition function reads
$$Z=\sum_{\{J\}}{\cal Q}_{\rm site}\,{\cal Q}_{\rm bond}\,\exp(-H_{J}),\qquad(3)$$
where
$$H_{J}=\frac{g}{2}\sum_{i,j;\,a,b;\,\mu=1,2,3}I^{(a)}_{i,\mu}\,V_{ij}\,I^{(b)}_{j,\mu}\qquad(4)$$
$${\cal Q}_{\rm site}=\prod_{i}\frac{{\cal N}^{(1)}_{i}!\,{\cal N}^{(2)}_{i}!}{(1+{\cal N}^{(1)}_{i}+{\cal N}^{(2)}_{i})!},\quad{\cal N}^{(a)}_{i}=\frac{1}{2}\sum_{\mu}J^{(a)}_{i,\mu}$$
$${\cal Q}_{\rm bond}=\prod_{i,a,\mu}\frac{t^{J^{(a)}_{i,\mu}}}{J^{(a)}_{i,\mu}!}\;,$$
The long-range interaction $V_{ij}$ depends on the
distance $r_{ij}$ between the sites $i$ and $j$. Its Fourier transform
is given by $V_{\bf q}=1/\sum_{\mu=1,2,3}\sin^{2}(q_{\mu}/2)$
and implies an asymptotic behavior $V\sim 1/r_{ij}$ at large
distances.
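As an illustration of this long-range form, $V_{ij}$ can be tabulated by an inverse FFT of $V_{\bf q}$. The sketch below is ours (the lattice size and the handling of the divergent zero mode are our choices, not taken from the simulations reported here):

```python
import numpy as np

L = 32                               # linear lattice size (our choice)
q = 2 * np.pi * np.fft.fftfreq(L)    # allowed lattice momenta q_mu
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")

# V_q = 1 / sum_mu sin^2(q_mu / 2); the divergent q = 0 mode is dropped
denom = np.sin(qx / 2)**2 + np.sin(qy / 2)**2 + np.sin(qz / 2)**2
denom[0, 0, 0] = np.inf
V_q = 1.0 / denom

# Real-space potential on the periodic lattice; V ~ 1/r at large distances
V_r = np.real(np.fft.ifftn(V_q))
```

Along a lattice axis $V_r$ decreases monotonically with distance, consistent with the quoted $1/r_{ij}$ asymptotics.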
This formulation allows efficient Monte Carlo
simulations using a worm algorithm for the two-component system flowgram . For the flowgram analysis we measure the mean-square fluctuations of the winding numbers $\langle W^{2}_{a,\mu}\rangle\equiv\langle W^{2}_{a,-\mu}\rangle$ of the conserved currents $I^{(a)}_{i,\mu}$ or, equivalently, $\rho_{\pm}=\sum_{\mu}\langle(W_{1,\mu}\pm W_{2,\mu})^{2}\rangle/L\equiv\langle W_{\pm}^{2}\rangle/L$.
In particular, we focused on the gauge invariant superfluid stiffness,
$\rho_{-}$ measuring the response to a twist of the phase of the product $z^{*}_{1}z_{2}$.
Similar to the U(1)$\times$U(1) case flowgram , the NCCP${}^{1}$ model features
three phases, Fig. 1,
characterized by the following order parameters:
VBS:
an insulator with $\langle z_{ai}\rangle=0$
and, accordingly, $\langle\rho_{+}\rangle=\langle\rho_{-}\rangle=0$.
2SF:
two-component superfluid (2SF) with $\langle z_{ai}\rangle\neq 0$,
$\langle\rho_{+}\rangle\neq 0$ and $\langle\rho_{-}\rangle\neq 0$.
SFS:
supersolid (a paired phase note )
with $\langle z_{ai}\rangle=0,\,\langle z^{*}_{1i}z_{2j}\rangle\neq 0$, $\rho_{+}=0$ and $\rho_{-}\neq 0$.
The point $g=0$ and $t\approx 0.468$ features a continuous transition
in the O(4) universality class. The relevant part of the phase diagram
is the region of small $g$ close to this O(4) point, far away from the bicritical point
$g_{bc}\approx 2.0$ where the SFS phase intervenes between the
VBS and 2SF phases. The corresponding direct VBS-2SF
transition has been proposed to be a deconfined critical line (DCP line) dcp1 ; dcp2 .
The key idea of the flowgram method flowgram is to demonstrate that the universal large-scale
behavior at $g\to 0$ is identical to that at some finite coupling $g=g_{\rm coll}$
where the nature of the transition can be easily revealed. The procedure is as follows:
(i)
Introduce a definition of the critical point for a finite-size system
of linear size $L$ consistent with the thermodynamic limit and insensitive
to the order of the transition. In our model we used the same definition as in Ref. flowgram .
Specifically, for any given $g$ and $L$ we adjusted $t$ so that the ratio of statistical weights of configurations with and without windings
was equal to $7.5$.
(ii)
At the transition point, calculate a quantity $R(L,g)$ that is expected to be scale-invariant at the continuous phase transition in question, to vanish in one of the phases, and to diverge in the other.
Here we consider $R(L,g)=\langle W_{-}^{2}\rangle$.
(iii)
Perform a data collapse for flowgrams of $R(L,g)$,
by rescaling the linear system size, $L\to C(g)L$, where $C(g)$ is
a smooth and monotonically increasing function of the
coupling constant $g$. In the present case we have $C(g\to 0)\propto g$ Halperin .
A collapse of the rescaled flows within an interval $g\in[0,\,g_{\rm coll}]$
implies that the type of the transition within the interval remains the same,
and thus can be inferred by dealing with the $g=g_{\rm coll}$ point only.
Since the $g\to 0$ limit implies large spatial scales, and, therefore, model-independent
runaway renormalization flow pattern, the conclusions are universal.
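The mechanics of step (iii) can be shown on synthetic data. Everything below (the master curve $F$, the sizes, the couplings) is a toy stand-in of ours, not the actual Monte Carlo data; the point is only how $C(g)$ is extracted as a horizontal shift on a logarithmic scale:

```python
import numpy as np

def F(x):
    # hypothetical master curve, monotonically growing with the rescaled size
    return 0.5 * np.log1p(x)

sizes = np.array([8, 16, 32, 64, 128])
couplings = [0.25, 0.5, 1.0]
# synthetic flowgram data generated to collapse exactly with C(g) = g
flows = {g: F(g * sizes) for g in couplings}

# Fit C(g): shift each curve horizontally in log L until it matches the
# master curve, scanning log C on a grid
logC_grid = np.linspace(-3, 3, 2001)
C = {}
for g in couplings:
    errs = [np.sum((flows[g] - F(np.exp(lc) * sizes))**2) for lc in logC_grid]
    C[g] = np.exp(logC_grid[int(np.argmin(errs))])
```

For the synthetic data the fit recovers $C(g)\propto g$, the small-$g$ behavior quoted in the text.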
To have a reference comparison, we first simulated a short-range
analog of the NCCP${}^{1}$ model (4) with $V_{ij}=g\delta_{ij}$.
The short-range model has a similar phase diagram, but with a second order phase transition for small $g$ and a first order one at large $g$. Figure 2 clearly shows that the corresponding
flowgram cannot be collapsed onto a single master curve by rescaling the length (shifting the lines horizontally on a logarithmic scale), and the separatrix at the tricritical point (TP)
at $g\approx 0.95$ is clearly visible.
Contrary to the short-range model, we find no such separatrix for the DCP action.
As shown in Fig. 3 the flows feature a fan of lines diverging with the
system size and with the slope increasing with $g$ without any sign of a TP separatrix.
One can notice that the NCCP${}^{1}$ flows exhibit a slope change, see Fig. 3
(also observed in Ref. Wiese for the J-Q-model) that might be interpreted
as a sign of the evolution towards a scale invariant behavior
$\langle W_{-}^{2}\rangle={\rm const}$, possibly achieved at a large enough $L$. The same
feature has been observed recently in Ref. MV , and caused the
authors to speculate that the NCCP${}^{1}$ model features a line of
continuous transitions for $g<1.25$ note1 .
The crucial test, then, is to
see if the fan of the NCCP${}^{1}$ lines can be collapsed on a single
master curve $\langle W_{-}^{2}\rangle=F(C(g)L)$, where
$C(g)$ describes the length-scale renormalization set by the coupling constant $g$.
As it turns out, the NCCP${}^{1}$ flows collapse perfectly note2 in the whole region
$0.125\leq g<1.65$ below the bicritical point $g_{bc}$ (see Fig. 4).
The rescaling function $C(g)$ exhibits a linear behavior $C(g)\propto g$ at small $g$ consistent with the runaway flow in the lowest-order
renormalization group analysis Halperin .
This behavior all but rules out the existence of
the TP on the VBS-2SF line.
Though our conclusions directly contradict claims made in
Refs. Sandvik2007 ; Melko2008 ; MV , the primary data are in agreement. A data collapse
of the flowgram presented in the lower panel of Fig. 13 in Ref. MV shows
the same qualitative behavior as our Fig. 4 comment .
We are also consistent with the conclusion reached in Ref. Wiese that the
slope change is an intermediate scale phenomenon and the
Néel antiferromagnet to VBS transition in the J-Q-model violates
the scale invariance hypothesis as observed by the divergent flow of
$\langle W_{-}^{2}\rangle$.
The flow collapse within an interval $g\in[0,g_{\rm coll}]$ does
not yet imply a first-order transition. What appears to be a
diverging behavior in Fig. 3 might be just a reconstruction
of the flow from the O(4)-universality (at $g=0$) to a novel
DCP-universality at strong coupling. To complete the proof,
we have to determine the nature of the transition for $g=g_{\rm coll}$.
In this parameter range the standard technique of detecting discontinuous transitions
by the bimodal energy distribution becomes feasible. As shown in Fig. 5,
a clear bimodal distribution develops at $g=1.65$, which is below the bicritical point $g_{bc}$
and within the data collapse interval $[0,g_{\rm coll}]$.
This leaves us with the clear conclusion that the whole phase transition line for small
$g$ features a generic weak first-order transition identical to the one observed in the U(1)$\times$U(1)
case. Driven by long-range interactions, this behavior develops on
length scales $\propto 1/g\to\infty$ for small $g$ and thus is universal.
It cannot be affected by microscopic variations of the NCCP${}^{1}$ model
suggested in Ref. MV to suppress the paired (molecular) phase.
We acknowledge useful discussions with O. Motrunich,
A. Vishwanath, L. Balents and E. Babaev. We thank the
Institut Henri Poincare-Centre Emile Borel and Nordita
for hospitality and support in 2007.
This work was supported by NSF under Grants Nos. PHY-0653135,
PHY-0653183 and CUNY grants 69191-0038, 80209-0914. We also
recognize the crucial role of the (super)computer clusters at
UMass, Typhon and Athena at CSI, and Hreidar at ETH.
References
(1)
O.I. Motrunich and A. Vishwanath,
Phys. Rev. B 70, 075104 (2004).
(2)
T. Senthil, A. Vishwanath, L. Balents, S. Sachdev,
and M.P.A. Fisher, Science 303, 1490 (2004).
(3)
T. Senthil, L. Balents, S. Sachdev, A. Vishwanath,
and M.P.A. Fisher, Phys. Rev. B 70, 144407 (2004).
(4)
E. Babaev, Nucl. Phys. B 686, 397 (2004);
E. Babaev, A. Sudbø, and N.W. Ashcroft, Nature 431, 666 (2004);
J. Smiseth, E. Smørgrav, E. Babaev, and A. Sudbø,
Phys. Rev. B 71, 214509 (2005).
(5)
B.I. Halperin, T.C. Lubensky, and S.-K. Ma,
Phys. Rev. Lett. 32, 292 (1974); E. Brézin,
J.C. Le Guillou, and J. Zinn-Justin, Phys. Rev. B 10, 892
(1974); J.-H. Chen, T.C. Lubensky, and D. Nelson, Phys. Rev. B 17, 4274 (1978).
(6)
L. Balents, L. Bartosch, A. Burkov, S. Sachdev, and K. Sengupta,
Phys. Rev. B 71, 144509 (2005); ibid., 144508 (2005).
(7)
M. Peskin, Ann. Phys. (N.Y.) 113, 122 (1978);
P.R. Thomas and M. Stone, Nucl. Phys. B 144, 513 (1978);
C. Dasgupta and B.I. Halperin, Phys. Rev. Lett. 47, 1556
(1981).
(8)
A.W. Sandvik, S. Daul, R.R.P. Singh, and D.J. Scalapino, Phys.
Rev. Lett. 89, 247201 (2002).
(9)
A. Kuklov, N. Prokof’ev, and B. Svistunov, Phys.
Rev. Lett. 93, 230402 (2004).
(10)
A. Kuklov, N.V. Prokof’ev, and B.V. Svistunov,
Prog. of Theor. Phys. Suppl. 160, 337 (2005).
(11)
A. Kuklov, N. Prokof’ev, B. Svistunov, and M. Troyer, Ann. Phys. (N.Y.) 321, 1602 (2006).
(12)
A.W. Sandvik and R.G. Melko, Ann. Phys. (N.Y.) 321, 1651
(2006).
(13)
S. Kragset, E. Smørgrav, J. Hove, F.S. Nogueira, and A. Sudbø,
Phys. Rev. Lett. 97, 247201 (2006).
(14)
A.W. Sandvik, Phys. Rev. Lett. 98, 227202 (2007).
(15)
R.G. Melko and R.K. Kaul,
Phys. Rev. Lett. 100, 017203 (2008).
(16)
O.I. Motrunich and A. Vishwanath, arXiv:0805.1494.
(17)
F.-J. Jiang, M. Nyfeler, S. Chandrasekharan, and U.-J. Wiese,
arXiv:0710.3926.
(18)
Preliminary results have been
announced: A. Kuklov, M. Matsumoto, N. Prokof’ev, B. Svistunov, and
M. Troyer, Bull. Am. Phys. Soc. 53, S12.00006 (2008); and a presentation by A. Kuklov at the Quantum Fluids workshop
(Nordita, Stockholm, August 15 - September 30, 2007)
http://www.nordita.org/$\sim$qf2007/kuklov.pdf .
(19)
See Ref. Babaev for discussions of 2d as well as 3d field induced paired phases in two-component superconductors.
(20)
The interaction constant $K$
in Ref. MV is defined as $K=1/(4g)$.
(21)
A flow collapse
is meaningful even when the collapsing lines $R(L)$ are
relatively short
and reminiscent of straight lines: a straight line is described
by two independent parameters, while the rescaling procedure has only one
degree of freedom of shifting the line horizontally on a logarithmic scale. The master curve may significantly deviate from a straight line and
prove indispensable for understanding the global character of the flow
and difficulties with the finite-size scaling in specific models.
(22)
A. Kuklov, M. Matsumoto, N. Prokof’ev, B. Svistunov, and
M. Troyer, arXiv:0805.2578.
Inclusive electroproduction of light hadrons with large $p_{T}$ at
next-to-leading order
Bernd A. Kniehl
Abstract
We review recent results on the inclusive electroproduction of light hadrons
at next-to-leading order in the parton model of quantum chromodynamics
implemented with fragmentation functions and present updated predictions for
HERA experiments based on the new AKK set.
Keywords: Quantum chromodynamics, parton model, radiative corrections, inclusive hadron production, deep-inelastic scattering
PACS: 12.38.Bx, 12.39.St, 13.87.Fh, 14.40.Aq
1 Introduction
In the framework of the parton model of quantum chromodynamics (QCD), the
inclusive production of single hadrons is described by means of fragmentation
functions (FFs) $D_{a}^{h}(x,\mu)$.
At lowest order (LO), the value of $D_{a}^{h}(x,\mu)$ corresponds to the
probability for the parton $a$ produced at short distance $1/\mu$ to form a
jet that includes the hadron $h$ carrying the fraction $x$ of the longitudinal
momentum of $a$.
Analogously, incoming hadrons and resolved photons are represented by
(non-perturbative) parton density functions (PDFs) $F_{a/h}(x,\mu)$.
Unfortunately, it is not yet possible to calculate the FFs from first
principles, in particular for hadrons with masses smaller than or comparable
to the asymptotic scale parameter $\Lambda$.
However, given their $x$ dependence at some energy scale $\mu$, the evolution
with $\mu$ may be computed perturbatively in QCD using the timelike
Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equations.
Moreover, the factorization theorem guarantees that the $D_{a}^{h}(x,\mu)$
functions are independent of the process in which they have been determined
and represent a universal property of $h$.
This entitles us to transfer information on how $a$ hadronizes to $h$ in a
well-defined quantitative way from $e^{+}e^{-}$ annihilation, where the
measurements are usually most precise, to other kinds of experiments, such as
photo-, lepto-, and hadroproduction.
Recently, FFs for light charged hadrons with complete quark flavour separation
were determined through a global fit to $e^{+}e^{-}$ data from LEP, PEP, and SLC
akk thereby improving a previous analysis kkp .
The QCD-improved parton model should be particularly well applicable to the
inclusive production of light hadrons carrying large transverse momenta
($p_{T}$) in deep-inelastic lepton-hadron scattering (DIS) with large photon
virtuality ($Q^{2}$) due to the presence of two hard mass scales, with
$Q^{2},p_{T}^{2}\gg\Lambda^{2}$.
In Fig. 1, this process is represented in the parton-model picture.
The hard-scattering (HS) cross sections, which include colored quarks and/or
gluons in the initial and final states, are computed in perturbative QCD.
They were evaluated at LO more than 25 years ago Mendez:zx .
Recently, the next-to-leading-order (NLO) analysis was performed independently
by three groups Aurenche:2003by ; kkm ; Daleo:2004pn .
A comparison between Refs. kkm ; Daleo:2004pn using identical input
yielded agreement within the numerical accuracy.
The cross section of $e^{+}p\to e^{+}\pi^{0}+X$ in DIS was measured in various
distributions with high precision by the H1 Collaboration at HERA in the
forward region, close to the proton remnant Adloff:1999zx ; Aktas:2004rb .
This measurement reaches down to rather low values of Bjorken’s variable
$x_{B}=Q^{2}/(2P\!\cdot\!q)$, where $P$ and $q$ are the proton and virtual-photon
four-momenta, respectively, and $Q^{2}=-q^{2}$, so that the validity of the
DGLAP evolution might be challenged by Balitsky-Fadin-Kuraev-Lipatov (BFKL)
dynamics.
In Ref. kkm , the H1 data Adloff:1999zx ; Aktas:2004rb were
compared with NLO predictions evaluated with the KKP FFs kkp .
In Section 2, we present an update of this comparison based on the
new AKK FFs akk .
Our conclusions are summarized in Section 3.
2 Comparison with H1 data
We work in the modified minimal-subtraction ($\overline{\mathrm{MS}}$)
renormalization and factorization scheme with $n_{f}=5$ massless quark flavors
and identify the renormalization and factorization scales by choosing
$\mu^{2}=\xi[Q^{2}+(p_{T}^{\ast})^{2}]/2$, where the asterisk labels quantities in the
$\gamma^{\ast}p$ center-of-mass (c.m.) frame and $\xi$ is varied between 1/2
and 2 about the default value 1 to estimate the theoretical uncertainty.
At NLO (LO), we employ set CTEQ6M (CTEQ6L1) of proton PDFs
Pumplin:2002vw , the NLO (LO) set of AKK FFs akk , and the
two-loop (one-loop) formula for the strong-coupling constant
$\alpha_{s}^{(n_{f})}(\mu)$ with $\Lambda^{(5)}=226$ MeV (165 MeV)
Pumplin:2002vw .
The H1 data Adloff:1999zx ; Aktas:2004rb were taken in DIS of positrons
with energy $E_{e}=27.6$ GeV on protons with energy $E_{p}=820$ GeV in the
laboratory frame, yielding a c.m. energy of $\sqrt{S}=2\sqrt{E_{e}E_{p}}=301$ GeV.
The DIS phase space was restricted to $0.1<y<0.6$ and $2<Q^{2}<70$ GeV${}^{2}$,
where $y=Q^{2}/(x_{B}S)$.
The $\pi^{0}$ mesons were detected within the acceptance cuts $p_{T}^{*}>2.5$ GeV,
$5^{\circ}<\theta<25^{\circ}$, and $x_{E}>0.01$, where $\theta$ is their angle with
respect to the proton flight direction and $E=x_{E}E_{p}$ is their energy in the
laboratory frame.
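For orientation, these kinematic numbers follow from the elementary relations quoted above; the short script below only re-derives them (the helper names are ours, and $\eta=-\ln[\tan(\theta/2)]$ is the standard pseudorapidity):

```python
import math

E_e, E_p = 27.6, 820.0    # positron and proton beam energies in GeV
S = 4 * E_e * E_p         # S = (2 sqrt(E_e E_p))^2, lepton and proton masses neglected
sqrt_S = math.sqrt(S)     # c.m. energy, ~301 GeV

def x_B(Q2, y):
    # Bjorken x from y = Q^2 / (x_B * S)
    return Q2 / (y * S)

# smallest x_B inside the DIS cuts 0.1 < y < 0.6, 2 < Q^2 < 70 GeV^2
x_min = x_B(2.0, 0.6)

def eta(theta_deg):
    # pseudorapidity corresponding to a polar angle theta
    return -math.log(math.tan(math.radians(theta_deg) / 2))
```

The acceptance $5^{\circ}<\theta<25^{\circ}$ then corresponds to pseudorapidities between roughly 1.5 and 3.1 with respect to the proton direction.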
The comparisons with our updated LO and NLO predictions are displayed in
Figs. 2(a)–(d).
3 Conclusions
We calculated the cross section of $ep\to e\pi^{0}+X$ in DIS for finite values
of $p_{T}^{*}$ at LO and NLO in the parton model of QCD kkm using the new
AKK FFs akk and compared it with a precise measurement by the H1
Collaboration at HERA Adloff:1999zx ; Aktas:2004rb .
We found that our LO predictions always significantly fell short of the H1
data and often exhibited deviating shapes.
However, the situation dramatically improved as we proceeded to NLO, where our
default predictions, endowed with theoretical uncertainties estimated by
moderate unphysical-scale variations, led to a satisfactory description of the
H1 data in the preponderant part of the accessed phase space.
In other words, we encountered $K$ factors much in excess of unity, except
towards the regime of asymptotic freedom characterized by large values of
$p_{T}^{*}$ and/or $Q^{2}$.
This was unavoidably accompanied by considerable theoretical uncertainties.
Both features suggest that a reliable interpretation of the H1 data within the
QCD-improved parton model ultimately necessitates a full
next-to-next-to-leading-order analysis, which is presently out of reach,
however.
For the time being, we conclude that the successful comparison of the H1 data
with our NLO predictions provides a useful test of the universality and the
scaling violations of the FFs, which are guaranteed by the factorization
theorem and are ruled by the DGLAP evolution equations, respectively.
Significant deviations between the H1 data and our NLO predictions only
occurred in certain corners of phase space, namely in the photoproduction
limit $Q^{2}\to 0$, where resolved virtual photons are expected to contribute,
and in the limit $\eta\to\infty$ of the pseudorapidity
$\eta=-\ln[\tan(\theta/2)]$, where fracture functions are supposed to enter
the stage.
Both refinements were not included in our analysis.
Interestingly, distinctive deviations could not be observed towards the lowest
$x_{B}$ values probed, which indicates that the realm of BFKL dynamics has not
actually been accessed yet.
The author thanks G. Kramer and M. Maniatis for their collaboration.
This work was supported in part by BMBF Grant No. 05 HT1GUA/4.
References
(1)
S. Albino, B. A. Kniehl, and G. Kramer,
Report No. DESY 05-022 and hep-ph/0502188, Nucl. Phys. B (in press).
(2)
B. A. Kniehl, G. Kramer, and B. Pötter,
Nucl. Phys. B 582, 514 (2000);
Phys. Rev. Lett. 85, 5288 (2000);
Nucl. Phys. B 597, 337 (2001).
(3)
A. Mendez,
Nucl. Phys. B 145, 199 (1978).
(4)
P. Aurenche, R. Basu, M. Fontannaz, and R. M. Godbole,
Eur. Phys. J. C 34, 277 (2004).
(5)
B. A. Kniehl, G. Kramer, and M. Maniatis,
Nucl. Phys. B 711, 345 (2005);
720, 231(E) (2005).
(6)
A. Daleo, D. de Florian, and R. Sassot,
Phys. Rev. D 71, 034013 (2005);
R. Sassot, in these proceedings.
(7)
H1 Collaboration, C. Adloff et al.,
Phys. Lett. B 462, 440 (1999).
(8)
H1 Collaboration, A. Aktas et al.,
Eur. Phys. J. C 36, 441 (2004).
(9)
J. Pumplin, D. R. Stump, J. Huston, H.-L. Lai,
P. Nadolsky, and W.-K. Tung,
JHEP 0207, 012 (2002).
Upper Bounds on Systoles of High-Dimensional Expanders Using Quantum Codes
Lior Eldar
Center for Theoretical Physics, MIT
Maris Ozols
Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Kevin F. Thompson
School of Engineering and Applied Science, Harvard
Abstract
Existence of quantum LDPC codes whose minimal distance scales linearly with the number of qubits is a major open problem of both practical interest and theoretical importance. The growing field of high-dimensional expanders in recent years has had increasing interaction with quantum theory and presents a possibility of providing an affirmative resolution of the quantum LDPC conjecture. Indeed, existence of high-dimensional expanders of bounded degree and large systole and co-systole would translate into quantum codes with low-weight parity checks and large distance.
Alas, we show that trying to leverage the mere “random-like” appearance of such high-dimensional expanders to find good quantum codes may be futile: $d$-complexes that are $\varepsilon$-far from perfectly random, with $\varepsilon\ll 1$, have a systole with small Hamming weight. In particular, if the complex has $n$ vertices, we bound the size of its systole by $\tilde{O}(\varepsilon^{1/(2d)})n$.
Quantum codes aside, our work places a first upper bound on the systole of high-dimensional expanders with small discrepancy,
and specifically Ramanujan complexes [Lub14].
Contents
1 Introduction
1.1 General
1.2 Quantum error correction
1.3 High-dimensional expanders
1.4 CSS codes: connecting quantum codes to high-dimensional expanders
1.5 Discussion and previous work
1.6 Overview of the proof
1.6.1 An $\varepsilon$-PR hyper-graph spans a weakly-binomial subspace
1.6.2 Weakly-binomial subspaces with LDPC duals have large dimension
2 Preliminaries
2.1 Notation
2.2 Kravchuk polynomials
2.3 The Sloane-MacWilliams transform
2.4 Markov chains
3 An $\varepsilon$-PR complex spans a weakly-binomial subspace
3.1 Weight enumerators as stationary distributions
3.2 Stationary distributions
3.3 Proof of Lemma 28
4 Weakly-binomial spaces have bounded minimal distance
5 Proof of Theorem 1
5.1 Note on the proof
A Technical estimates
B Acknowledgments
1 Introduction
1.1 General
Quantum error-correcting codes (QECCs) and high-dimensional expanders
are two distinct fields, with somewhat disparate motivations.
Quantum error correction studies the ability to efficiently encode quantum information in a way that is robust against errors induced by environment [LB13, NC11].
The study of QECCs is thus motivated by “practical” reasons (e.g. building a quantum computer), but also
relates to our basic understanding of multi-particle entanglement.
On the other hand, the nascent study of high-dimensional expanders (see for example [Lub14]) is motivated by an attempt to replicate the enormous
success of $1$-dimensional expanders, namely expander graphs, in theoretical computer science and mathematics more broadly [HLW06a, Lub12].
In this field, one attempts to provide a “natural” generalization of expansion to hyper-graphs in a way that inherits the fundamental properties of expander graphs,
perhaps the most prominent of these
being the tight connection between spectral properties (spectral gap) and combinatorial properties (Cheeger constant) [Che69, Alo86].
We will be interested in the notion of a chain complex from homological algebra. For a given dimension $d$, a chain complex $\mathcal{X}=(X_{0},X_{1},\dotsc,X_{d})$, or simply a $d$-complex, is a sequence of linear spaces $X_{i}$ (over $\mathbb{F}_{2}$) together with linear maps $\partial_{i}:X_{i}\to X_{i-1}$ between them, known as boundary operators; each $\partial_{i}$ encodes the incidence structure of the complex, much like the incidence matrix of a graph encodes the relationships between edges and vertices.
It has been known for a while now [Kit97, BK01] that a $2$-dimensional chain complex gives rise, in a fairly natural way, to a certain QECC called a CSS code (see Section 1.4 for more details).
However, until very recently this connection was not very well articulated, partly because quantum codes were considered most prominently
on regular grids in low dimensions (see, e.g. [Bom13]).
Yet, in recent years, it has become more evident that there may be some deep connections between these two questions.
On one hand, in the context of QECCs, new constructions of quantum CSS codes have diverged
from the rigid lattice-based topology that has been pervasive thus far
to accommodate far more robust and, in a sense, “expanding” topologies [LTZ15, Has16].
These topologies, as it turns out, have provided an
added advantage in terms of decodability and minimal distance.
It thus appears that basing quantum codes on expanding topologies, which are local but not spatially local, may go a long way
in improving our QECC capability in many respects.
On the other hand, there is significant and recent progress in our understanding of specific types of high-dimensional expanders,
specifically co-boundary and co-systole expanders.
In this context, recent studies by Kaufman, Kazhdan and Lubotzky [KKL16], and subsequent work by Evra and Kaufman [EK16], have established
that certain efficiently constructible $d$-dimensional complexes, called Ramanujan complexes, possess a property called
co-systole expansion.
This property generalizes the notion of combinatorial expansion in expander graphs and, perhaps more generally, the notion
of local testability of combinatorial properties [KL14].
A construction of a co-systole expander with a further property of Poincaré duality would imply
an affirmative solution to the ${\sf{qLDPC}}$ conjecture and to the NLTS conjecture [EH15],
which are major open problems in quantum information theory [FH14].
Arguably, the most important facet in the intersection of high-dimensional expanders and quantum codes, is the question of
the minimal error-correcting distance, in QECC terminology, or systole, in topology terms.
In concrete terms, assuming a high-dimensional complex with arbitrarily good expansion parameters, can one solve (or at least approach) the quantum LDPC conjecture (qLDPC) that posits the existence of locally-constrained quantum codes with linear distance (see Conjecture 3)?
In this work, we show that simply trying to leverage the random-looking property of high-dimensional expanders to
construct quantum codes with large distance, may be futile.
Specifically, we consider $d$-dimensional simplicial complexes that satisfy a high-dimensional generalization of being pseudo-random.
First though, a $1$-dimensional simplicial complex, i.e. a graph, is denoted by $G=(V,E)$ and is said to be
$K$-regular if each vertex is adjacent to exactly $K$ others, and
$\varepsilon$-pseudo-random if, for any sets $A,B\subseteq V$,
$$\left|\frac{1}{K}|E(A,B)|-\frac{1}{n}|A|\cdot|B|\right|\leq\varepsilon\sqrt{|A|\cdot|B|},$$
where $n=|V|$ is the number of vertices. If we divide both sides by $n/2$, we can interpret this relation as follows: the total fraction of the edges crossing the cut $\frac{|E(A,B)|}{(nK/2)}$ is very close to $\frac{2|A|\cdot{}|B|}{n^{2}}\approx\frac{|A|\cdot{}|B|}{\binom{n}{2}}$, the fraction of edges crossing the cut in the complete graph, up to some constant additive error $\varepsilon$. Alternatively, if we fix the cut $(A,B)$ and we pick some uniform random edge $e_{1}$ from the complete graph, and some uniform edge from our expander graph $e_{2}$, the probability that edge $e_{1}$ crosses the cut is the same as the probability that $e_{2}$ crosses the cut, again up to some small additive error $\varepsilon$. It is well-known by a result of [AC88] that graphs with spectral gap $\varepsilon$ are $\varepsilon$-pseudo-random.
This well-known result is the expander mixing lemma; a converse, due to Bilu and Linial [BL06], establishes that pseudo-randomness in fact characterizes expander graphs.
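A minimal numerical check of this inequality (the $K$-regular circulant graph below is an arbitrary example of ours; $\varepsilon$ is read off from the spectrum of the normalized adjacency matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 64, 8
offsets = [1, 3, 7, 12]              # symmetric offsets +-s: degree 2*len = 8
Adj = np.zeros((n, n), dtype=int)
for s in offsets:
    for i in range(n):
        Adj[i, (i + s) % n] = Adj[(i + s) % n, i] = 1

# epsilon = second-largest eigenvalue modulus of Adj / K
lam = np.sort(np.abs(np.linalg.eigvalsh(Adj / K)))[::-1]
eps = lam[1]

# verify the pseudo-randomness inequality on random disjoint cuts (A, B)
for _ in range(20):
    perm = rng.permutation(n)
    A, B = perm[:20], perm[20:44]
    E_AB = Adj[np.ix_(A, B)].sum()
    lhs = abs(E_AB / K - len(A) * len(B) / n)
    assert lhs <= eps * np.sqrt(len(A) * len(B)) + 1e-9
```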
We define a generalization of this notion on hyper-graphs akin to the definition of Parzanchevski [Par13].
Let $\mathcal{X}$ be a $d$-dimensional complex on $n$ vertices. Denote the set of $d$-faces by $E$. For any subset $S\subseteq[n]$ of the vertices and any $j\leq d$, define $A(S,j)$ to be the number of $d$-faces in $E$ with $j$ members in $S$. Then $\mathcal{X}$ is said to be $\varepsilon$-pseudo-random ($\varepsilon$-PR) if for all subsets $S\subseteq[n]$ of the vertices we have:
$$\left|\frac{A(S,j)}{|E|}-\binom{d}{j}\left(\frac{|S|}{n}\right)^{j}\left(1-\frac{|S|}{n}\right)^{d-j}\right|<\varepsilon$$
(1)
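This definition is easy to probe numerically. In the sketch below the complex is just a random collection of $d$-element faces, a hypothetical toy of ours rather than any of the structured complexes discussed here; we estimate the discrepancy over a few sample sets $S$:

```python
import itertools
import random
from math import comb

random.seed(0)
n, d = 24, 3                         # n vertices, faces of size d (toy values)
all_faces = list(itertools.combinations(range(n), d))
E = random.sample(all_faces, 400)    # a hypothetical bounded-size complex

def A(S, j):
    # number of faces with exactly j vertices inside S
    S = set(S)
    return sum(1 for f in E if sum(v in S for v in f) == j)

def discrepancy(S):
    # left-hand side of the epsilon-PR condition, maximized over j
    p = len(S) / n
    return max(abs(A(S, j) / len(E) - comb(d, j) * p**j * (1 - p)**(d - j))
               for j in range(d + 1))

# epsilon-PR requires this to be < epsilon for *all* S; we only sample a few
eps_hat = max(discrepancy(random.sample(range(n), k)) for k in (6, 12, 18))
```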
In [Par13, PRT16] Parzanchevski establishes a high-dimensional analog of the expander mixing lemma by showing that for any simplicial complex, if its Laplace operator is spectrally gapped, then it is also pseudo-random.
In this paper, we refer to a natural and well-established map from $2$-complexes to quantum error correcting codes, known as homological CSS codes [BMD07].
Specifically, we consider a $2$-complex $\mathcal{X}=(X_{0},X_{1},X_{2})$ with boundary maps $\partial_{2}:X_{2}\to X_{1}$ and $\partial_{1}:X_{1}\to X_{0}$.
We will associate the stabilizers of the quantum code to the images of the maps $\partial_{2}$ and $\partial_{1}^{T}$. For instance, we will assign the rows of $\partial_{1}$ to (a generating set of) Pauli-$Z$ check operators, and the columns of $\partial_{2}$ to (a generating set of) Pauli-$X$ check operators. This can be done by mapping each binary vector $b\in\operatorname{im}(\partial_{1}^{T})\subseteq\mathbb{F}_{2}^{n}$ to $\bigotimes_{i=1}^{n}Z^{b(i)}$ (and the corresponding map for $X$ operators).
We then consider families of such $2$-complexes of bounded degree $K_{1},K_{2}=O(1)$, i.e. the Hamming weight of each column of $\partial_{2}$ and each row of $\partial_{1}$
is $K_{2}$ and $K_{1}$ respectively.
We will use the notation $d=O(1)$ to denote the maximum Hamming weight of each row of $\partial_{2}$ and column of $\partial_{1}$, i.e. the locality
of the Pauli stabilizers defining the CSS code.
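The construction can be made concrete on the smallest possible example, a single filled triangle (our toy instance). The chain-complex identity $\partial_{1}\partial_{2}=0$ over $\mathbb{F}_{2}$ is exactly the statement that the resulting $X$ and $Z$ checks commute:

```python
import numpy as np

# 2-complex of one filled triangle: vertices {0,1,2},
# edges e0=(0,1), e1=(1,2), e2=(0,2), and one 2-face bounded by all three edges
d1 = np.array([[1, 0, 1],   # vertex 0 lies on e0 and e2
               [1, 1, 0],   # vertex 1 lies on e0 and e1
               [0, 1, 1]])  # vertex 2 lies on e1 and e2
d2 = np.array([[1], [1], [1]])

# boundary of a boundary vanishes over F_2
assert np.all((d1 @ d2) % 2 == 0)

# CSS dictionary: rows of d1 -> Z checks, columns of d2 -> X checks;
# stabilizer commutation is the same identity read row-by-row
Z_checks, X_checks = d1, d2.T
assert np.all((Z_checks @ X_checks.T) % 2 == 0)
```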
Our main theorem relates to the above notion of pseudo-randomness and shows that the natural CSS quantum code
arising from an $\varepsilon$-pseudo-random $2$-complex has a stringent upper bound on its minimal distance:
Theorem 1.
Small discrepancy implies small quantum distance
Let $\{\mathcal{X}_{n}\}_{n\in\mathbb{N}}$ be a family of $2$-complexes
$$\mathcal{X}_{n}:X_{2}\overset{\partial_{2}}{\longrightarrow}X_{1}\overset{\partial_{1}}{\longrightarrow}X_{0},$$
(2)
where each column of $\partial_{2}$ and each row of $\partial_{1}$ has weight $d=O(1)$, each row of $\partial_{2}$ has
weight $K_{2}=O(1)$, and each column of $\partial_{1}$ has weight $K_{1}=O(1)$.
Denote by ${\cal C}(\mathcal{X}_{n})=[[n,k,d_{min}]]$ the quantum code associated to $\mathcal{X}_{n}$.
Let
$$\varepsilon_{0}=\min\left\{\left(H^{-1}\!\left(2^{-2\log(2d)/d}\right)\right)^{2d},\,2^{-2d}\right\}.$$
If $\mathcal{X}_{n}$ is $\varepsilon$-PR,
where $\varepsilon\leq\varepsilon_{0}$,
then $d_{min}\leq\frac{12}{d^{2}}\varepsilon^{1/(2d)}\log^{2}(1/\varepsilon)n$.
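The constants can be evaluated numerically. In the sketch below $H^{-1}$ is the inverse of the binary entropy on $(0,\tfrac{1}{2}]$, computed by bisection; the choice $d=10$ is purely illustrative, and reading the $\log$ in the exponent as base 2 while taking $\log^{2}(1/\varepsilon)$ as natural is our assumption about the theorem's conventions:

```python
import math

def H(p):
    # binary entropy in bits
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def H_inv(y):
    # inverse of H on (0, 1/2], by bisection (H is increasing there)
    lo, hi = 1e-12, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        if H(mid) < y:
            lo = mid
        else:
            hi = mid
    return lo

d = 10  # illustrative locality of the checks (our choice)
eps0 = min(H_inv(2**(-2 * math.log2(2 * d) / d))**(2 * d), 2**(-2 * d))

def distance_bound(eps, n):
    # upper bound on d_min from Theorem 1, valid for eps <= eps0
    return 12 / d**2 * eps**(1 / (2 * d)) * math.log(1 / eps)**2 * n
```

The bound shrinks as $\varepsilon\to 0$, which is the content of the theorem: the closer the complex is to perfectly random, the smaller its systole must be.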
One can also interpret this theorem as a statement about high-dimensional expanders which does not use the
language of quantum codes (see Section 1.2 for relevant definitions).
For example, using the definition of systole / co-systole expansion due to Evra and Kaufman [EK16]
we claim:
Theorem 2.
Pseudo-random complexes have bounded systoles
Let $\{\mathcal{X}_{n}\}_{n\in\mathbb{N}}$ be a family of $2$-complexes, with $\mathcal{X}_{n}$ as in Eq. 2 and $\dim(X_{1})=n$,
where each column of $\partial_{2}$ and each row of $\partial_{1}$ has weight $d=O(1)$ and each row of $\partial_{2}$ has weight $K_{2}=O(1)$
and each column of $\partial_{1}$ has weight $K_{1}=O(1)$.
If $\mathcal{X}_{n}$ is $\varepsilon$-PR then
$$\operatorname{syst}_{1}(\mathcal{X}_{n})\leq\frac{12}{d^{2}}\varepsilon^{1/(2d)}\log^{2}(1/\varepsilon)\,n.$$
Since by [EK16] Ramanujan complexes are co-systole expanders, their co-systole satisfies $\operatorname{syst}^{1}(\mathcal{X}_{n})\geq c_{0}n$ for some constant $c_{0}$.
Hence there exists $\varepsilon^{\prime}=\varepsilon^{\prime}(c_{0})$ such that these complexes have discrepancy at least $\varepsilon^{\prime}$.
1.2 Quantum error correction
The study of quantum error correcting codes is driven by the high-level goal of making quantum information robust against environment-induced errors. Generally speaking, a quantum error correcting code (QECC) for encoding $k$ qubits into $n$, with $n\geq k$, is an isometry
$$V:\mathcal{H}:=(\mathbb{C}^{2})^{\otimes k}\hookrightarrow\mathcal{H}^{\prime}:=(\mathbb{C}^{2})^{\otimes n}$$
that protects quantum states by “spreading” them out over a larger subspace.
Specifically, for any Pauli error $\mathcal{E}:=\mathcal{E}_{1}\otimes\dotsb\otimes\mathcal{E}_{n}$, where each $\mathcal{E}_{i}\in\{I,X,Y,Z\}$ is one of the four Pauli matrices, in which at most $d_{min}$ terms $\mathcal{E}_{i}$ are different from the identity matrix $I$, any two orthogonal states $|\psi_{1}\rangle,|\psi_{2}\rangle\in\mathcal{H}$ remain orthogonal under $V^{\dagger}\mathcal{E}V$:
$$\langle\psi_{1}|V^{\dagger}\mathcal{E}V|\psi_{2}\rangle=0.$$
Hence a logical state $|\psi\rangle$, after being encoded as $V|\psi\rangle$, can be recovered from any error that acts on at most $\lfloor\frac{d_{min}-1}{2}\rfloor$ qubits.
As in the study of classical error correction, one then usually looks for efficient ways to encode and decode quantum states, and to achieve optimal
parameters in terms of rate and minimal distance.
In QECCs, as in classical ECCs, one can define a code ${\cal C}$, namely a subspace of $\mathcal{H}^{\prime}$,
as the space stabilized by
a set of parity checks $P_{i}$, namely:
a quantum state $|\psi\rangle$ belongs to ${\cal C}$ if $P_{i}|\psi\rangle=0$ for each $i$. In classical ECCs, when the $P_{i}$’s are sparse, the resulting code is said to be a low-density parity-check code, or LDPC for short.
In classical ECC, LDPC codes are easy to obtain, even by randomly sampling a bi-regular graph and reading off parity checks from its connectivity; this can be shown, for example, using the Sipser–Spielman expander code construction [SS96]. Yet in the quantum setting no such constructions are known. In fact, it is not even known how to achieve a quantum code with linear distance using a sparse set of checks, at any non-zero rate:
Conjecture 3 (${\sf{qLDPC}}$ Conjecture).
There exists a quantum code ${\cal C}$ of positive rate with a set of parity checks $P_{i}$, each of which can be written as
$P_{i}=I_{a}\otimes p_{i}\otimes I_{b}$, where $p_{i}$ is a matrix of dimension at
most $2^{K}$ for $K=O(1)$, such that $d_{min}({\cal C})=\Omega(n)$.
The ${\sf{qLDPC}}$ conjecture is not a mere question about optimality of parameters.
In a sense, it goes to the core of our understanding of multi-particle entanglement since the states corresponding to code-words of a quantum code with linear distance can be shown to be highly-entangled in a well-defined sense.
The existence of a ${\sf{qLDPC}}$ code with linear distance implies that, using only “local” constraints,
we can enforce a “global” phenomenon such as entanglement that scales with the number of qubits.
In more practical terms, the existence of ${\sf{qLDPC}}$ codes could facilitate the construction of quantum computers [Got14], as such codes would allow one to stabilize a quantum-mechanical system using, say, local magnetic fields, which are much easier to implement.
1.3 High-dimensional expanders
Expander graphs are graphs with a sparse adjacency matrix that are “rapidly-mixing” in a well-defined sense [Alo86, HLW06b].
Given the enormous success of expander graphs in computer science and combinatorics, it was natural to explore
high-dimensional generalizations of expander graphs, namely, high-dimensional expanders.
There is no de facto standard definition for high-dimensional expanders and in the last few years, the nascent research
in this field has explored various definitions that are not known to be comparable.
We survey here briefly the definitions that are most relevant to us and mention that more are available in the literature [LM06, KKL16, KL14, Par13, PRT16, EK16, Lub14]:
We begin with a definition of chain complexes, boundary maps, and homology (in our case, all linear maps are defined over $\mathbb{F}_{2}$):
Definition 4.
Chain / co-chain complexes over $\mathbb{F}_{2}$
A $d$-dimensional chain complex $\mathcal{X}$, or $d$-complex, is a tuple $(X_{0},\dotsc,X_{d})$ of $d+1$ spaces over $\mathbb{F}_{2}$, with linear maps $\partial_{i}:X_{i}\to X_{i-1}$ for all
$1\leq i\leq d$, known as boundary maps, that satisfy the following boundary property:
$$\partial_{i}\circ\partial_{i+1}=0,\quad\forall i:1\leq i<d.$$
(3)
We define $Z_{i}:=\ker(\partial_{i})$, known as $i$-cycles, and $B_{i}:=\operatorname{im}(\partial_{i+1})$, known as $i$-boundaries. According to Eq. 3, $B_{i}\subseteq Z_{i}$ for all $i$, so we can define the $i$-th homology as the quotient group (space) $Z_{i}/B_{i}$.
We can also define co-boundary maps as $\partial^{i}:=\partial_{i}^{\mathsf{T}}$.
In this case,
$$\partial^{i+1}\circ\partial^{i}=0,\quad\forall i:1\leq i<d.$$
(4)
We define co-cycles and co-boundaries as $Z^{i}:=\ker(\partial^{i+1})$ and $B^{i}:=\operatorname{im}(\partial^{i})$, respectively, and hence $B^{i}\subseteq Z^{i}$ for all $i$. The $i$-th co-homology is then the quotient $Z^{i}/B^{i}$.
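The boundary property of Definition 4 is easy to check numerically. The following sketch (an illustrative toy complex, not taken from the paper) builds the $\mathbb{F}_{2}$ boundary maps of a single filled triangle and verifies Eqs. 3 and 4:

```python
import numpy as np

# A toy 2-complex over F_2: one filled triangle.
# Vertices {v0,v1,v2}, edges {e01,e02,e12}, one face bounded by all three edges.
d1 = np.array([[1, 1, 0],   # v0 lies on e01 and e02
               [1, 0, 1],   # v1 lies on e01 and e12
               [0, 1, 1]])  # v2 lies on e02 and e12
d2 = np.array([[1],         # the single face contains e01,
               [1],         # e02,
               [1]])        # and e12

# Boundary property (Eq. 3): consecutive boundary maps compose to 0 mod 2.
assert np.all((d1 @ d2) % 2 == 0)
# Transposing yields the co-boundary property (Eq. 4).
assert np.all((d2.T @ d1.T) % 2 == 0)
```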
Our working definition for expansion is a natural generalization of Cheeger’s constant to hyper-graphs:
Definition 5.
$\varepsilon$-pseudo-random complex
Let $\mathcal{X}$ be a $d$-dimensional complex on $n$ vertices in which each ($d-1$)-cell has degree $K$. Denote the set of $d$-faces as $E$. For any subset $S\subseteq[n]$ of the vertices, and for any $j\leq d$, define $A(S,j)$ to be the number of $d$-faces in $E$ with $j$ members in $S$. Then $\mathcal{X}$ is said to be $\varepsilon$-pseudo-random ($\varepsilon$-PR) if for all subsets $S\subseteq[n]$ and all $j\leq d$ we have:
$$\displaystyle\left|\frac{A(S,j)}{|E|}-\binom{d}{j}\left(\frac{|S|}{n}\right)^{j}\left(1-\frac{|S|}{n}\right)^{d-j}\right|<\varepsilon$$
(5)
Hence, a complex is $\varepsilon$-PR if the uniform distribution on its faces very nearly matches the uniform distribution over all faces of weight $d$, up to an additive error $\varepsilon$. The reader may note that the second term on the LHS is not really the probability that a uniformly chosen face of weight $d$ has $j$ vertices in $S$, but it is asymptotically close to this probability for large $n$ and $d=O(1)$.
Often, we will interpret the rows of $\partial_{d}$ as a
$d$-uniform hyper-graph
and then say that $\partial_{d}$ is $\varepsilon$-PR whenever $\mathcal{X}$ is $\varepsilon$-PR.
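As an illustration of Definition 5, the following sketch computes the discrepancy $\varepsilon$ of a small hypergraph by brute force over all vertex subsets. The complete $3$-uniform hypergraph used here is a hypothetical example, chosen only because its deviation from the binomial ideal (a hypergeometric-vs-binomial gap) is small:

```python
from itertools import combinations
from math import comb

n, d = 8, 3
# Hypothetical example: the complete 3-uniform hypergraph on 8 vertices.
edges = list(combinations(range(n), d))

eps = 0.0
for mask in range(2 ** n):          # brute force over all vertex subsets S
    S = {i for i in range(n) if (mask >> i) & 1}
    s = len(S)
    counts = [0] * (d + 1)          # counts[j] = A(S, j)
    for e in edges:
        counts[sum(1 for u in e if u in S)] += 1
    for j in range(d + 1):
        ideal = comb(d, j) * (s / n) ** j * (1 - s / n) ** (d - j)
        eps = max(eps, abs(counts[j] / len(edges) - ideal))

assert 0 < eps < 0.2                # this hypergraph is close to the ideal
```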
As a side note, our definition is very similar to the one defined in [PRT16], where a hyper-graph was said to be $\varepsilon$-PR
if for any partition of $V$ into $d+1$ parts the fraction of hyper-edges with one vertex in each part is the same
as that fraction for a uniformly random $d$-hyper-edge, up to additive error $\varepsilon$.
Under this definition, Parzanchevski then established an expander mixing lemma, showing that simplicial complexes
with a large spectral gap (referring here to the Hodge Laplacian of simplicial complexes) are also $\varepsilon$-PR.
The second definition is due to [KKL16, EK16], and considers the $\mathbb{F}_{2}$ expansion of non-cycles:
Definition 6.
Systole / co-systole
Let $\mathcal{X}$ be a $d$-complex with $\mathbb{F}_{2}$ boundary maps $\partial_{j}:X_{j}\to X_{j-1}$ for all $0<j\leq d$.
Let $Z_{j}:=\ker(\partial_{j})$ and $B_{j}:=\operatorname{im}(\partial_{j+1})$ denote the $j$-cycles and $j$-boundaries.
The $j$-th systole is then defined as:
$$\operatorname{syst}_{j}(\mathcal{X}):=\min_{w\in Z_{j}-B_{j}}|w|.$$
Likewise, the $j$-th co-systole is defined as:
$\min_{w\in Z^{j}-B^{j}}|w|$.
For cellulations of geometric manifolds, $\operatorname{syst}_{1}(\mathcal{X})$ can intuitively be understood as the length of a shortest non-trivial cycle.
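For intuition, $\operatorname{syst}_{1}$ can be computed by brute force on tiny complexes. The sketch below (illustrative only) uses the boundary of a square, whose single non-trivial $1$-cycle has length $4$:

```python
import numpy as np
from itertools import product

# The boundary of a square: 4 vertices, 4 edges forming one cycle, no faces.
# Here ker(d1) = {0, e0+e1+e2+e3} and im(d2) = {0}, so syst_1 should be 4.
d1 = np.array([[1, 0, 0, 1],
               [1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1]])  # vertices x edges

def syst1(d1, boundaries):
    """min |w| over 1-cycles w that are not 1-boundaries, by brute force."""
    m = d1.shape[1]
    weights = [sum(w) for w in product([0, 1], repeat=m)
               if np.all((d1 @ np.array(w)) % 2 == 0) and w not in boundaries]
    return min(weights) if weights else None

assert syst1(d1, {(0, 0, 0, 0)}) == 4
```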
Definition 7.
Cycle / co-cycle expansion
Let $X$ be a $d$-complex with $\mathbb{F}_{2}$ boundary maps $\partial_{j}:X_{j}\to X_{j-1}$ for all $0<j\leq d$.
$\mathcal{X}$ has $\varepsilon$ $j$-th cycle expansion if
$$\min_{x\in X_{j}-Z_{j}}\frac{|\partial_{j}x|}{\min_{w\in Z_{j}}|x+w|}\geq\varepsilon.$$
In [EK16] it is then shown that Ramanujan complexes have simultaneously $\varepsilon$ co-cycle expansion,
and a large (linear-size) co-systole.
Hence these Ramanujan complexes are said to be co-systole expanders.
1.4 CSS codes: connecting quantum codes to high-dimensional expanders
CSS codes, invented by Calderbank, Shor, and Steane (see [NC11] or the original papers [CS96, Ste96a, Ste96b]), are one of the earliest and most influential types of quantum codes.
In fact, most quantum codes we know of to date are CSS codes.
Arguably, their greatest advantage is that they can be defined using pairs of classical codes, which allows one to think of these quantum codes as classical codes with certain restrictions:
Definition 8.
CSS code
A quantum $[[n,k,d_{min}]]$ CSS code is a pair of classical codes $C_{1},C_{2}\subseteq\mathbb{F}_{2}^{n}$ such that $C_{2}\subseteq C_{1}^{\perp}$ and $C_{1}\subseteq C_{2}^{\perp}$.
The parameters of the code are given by $k:=\dim(C_{1}^{\perp}/C_{2})$ and
$$d_{min}:=\min_{w\in(C_{1}^{\perp}-C_{2})\cup(C_{2}^{\perp}-C_{1})}|w|.$$
The codes $C_{1}$ and $C_{2}$ describe the $X$ and $Z$ stabilizers of the codeword states, respectively. In the language of stabilizer quantum error correcting codes [NC11], we can specify a complete independent set of generators for the stabilizers by specifying a complete basis for the code spaces $C_{1}$ and $C_{2}$. If $C_{1}$ and $C_{2}$ are both $m$-dimensional classical codes, the resulting quantum code has $k=n-2m$, since each independent stabilizer reduces the size of the code-space by a factor of $2$.
Hence, to find a good quantum CSS code, we are required to find a pair of classical codes that contain the dual of each other, have a large quotient group (code rate), and a large value of $d_{min}$ defined above.
Furthermore, if we would like this CSS code to be sparse, each of $C_{1}$ and $C_{2}$ needs to be an LDPC code.
While finding good classical LDPC codes is today an easy task, the extra condition of pairwise duality has prevented thus far any construction of good quantum LDPC codes in this way (i.e., via the CSS construction).
One of the many elegant features of CSS codes that has been recognized a long time ago,
is that CSS codes naturally arise from chain complexes that are induced by a triangulation of a manifold.
This has been the most common way of finding CSS codes [BMD07] for a while, perhaps the most prominent example of this being the well-known toric code due to Kitaev [Kit97], which is based on a $2$-dimensional torus $\mathbb{Z}_{n}\times\mathbb{Z}_{n}$.
This gives rise to a quantum code ${\cal C}$ with parameters $[[2n^{2},2,n]]$, whose distance scales as the square root of the number of physical qubits.
Assigning a CSS code to a triangulation of a manifold is in fact a special case of assigning one to a $2$-complex.
Generally, for any $d$-complex $\mathcal{X}$, we can define a quantum CSS code on any sub-chain of length $3$ as follows:
Definition 9.
Map from $2$-complexes to CSS codes
Let $\mathcal{X}=(X_{0},X_{1},X_{2})$ be a $2$-complex, with boundary maps $\partial_{2},\partial_{1}$.
The CSS code ${\cal C}$ corresponding to $\mathcal{X}$ is defined by choosing $C_{1}:=\operatorname{im}\partial_{1}$, $C_{2}:=\operatorname{im}\partial_{2}^{\mathsf{T}}$,
and fixing ${\cal C}(\mathcal{X})=(C_{1},C_{2})$.
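As a sanity check of this construction, the following sketch builds boundary maps for Kitaev's toric code on a $3\times 3$ torus and verifies that the boundary property yields a valid CSS code with two logical qubits. The indexing conventions and the $\mathbb{F}_{2}$ rank routine are mine, for illustration:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over F_2, by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

L = 3                                   # 3x3 torus: 9 vertices, 18 edges, 9 faces
V, E = L * L, 2 * L * L
vert = lambda x, y: (x % L) * L + (y % L)
h = lambda x, y: vert(x, y)             # horizontal edge (x,y) -> (x+1,y)
v = lambda x, y: V + vert(x, y)         # vertical edge (x,y) -> (x,y+1)

d1 = np.zeros((V, E), dtype=int)        # vertices x edges; qubits live on edges
d2 = np.zeros((E, V), dtype=int)        # edges x faces; one face per coordinate
for x in range(L):
    for y in range(L):
        d1[vert(x, y), h(x, y)] = d1[vert(x + 1, y), h(x, y)] = 1
        d1[vert(x, y), v(x, y)] = d1[vert(x, y + 1), v(x, y)] = 1
        for e in (h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)):
            d2[e, vert(x, y)] = 1       # boundary of the face at (x,y)

assert np.all((d1 @ d2) % 2 == 0)       # Eq. 3, i.e. the CSS condition
k = E - gf2_rank(d1) - gf2_rank(d2)     # number of logical qubits
assert k == 2                           # matches the torus' first Betti number
```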
1.5 Discussion and previous work
In this work we have shown that high-dimensional complexes that are $\varepsilon$-pseudo-random
actually have very poor quantum code distance.
This implies that attempting to construct CSS codes in this way will probably not yield
${\sf{qLDPC}}$ codes with linear distance.
Furthermore, it implies that $d$-complexes that are $\varepsilon$-pseudo-random for small $\varepsilon$ cannot be very
good systole / co-systole expanders.
Our result is arguably surprising in the context of high-dimensional expanders:
Evra and Kaufman recently showed [EK16] that certain high-dimensional Ramanujan complexes are good co-systole expanders.
Together with our theorem, this
implies a lower bound on the discrepancy $\varepsilon$ of these complexes.
In contrast, the discrepancy $\varepsilon$ of Ramanujan graphs can be made arbitrarily low (namely $2/\sqrt{K}$ for $K$-regular graphs, which vanishes as $K$ grows).
Stepping back, our result is thus an example where the quantum perspective sheds light on questions in other fields.
In the context of quantum codes, there are no-go results for quantum codes that can be embedded into lattices. In particular, Bravyi et al. [BT09, BPT10] have shown that stabilizer codes that can be embedded into $D$-dimensional lattices, where $D=O(1)$ and each stabilizer is supported on $O(1)$ qubits within some hyper-cube, cannot have linear distance. Our result can be viewed as the “other end” of this limit: we show that codes which are in a sense “strongly” not embeddable into a lattice also have small distance, although the bound we obtain is a small linear distance rather than a sub-linear one. We note that if ${\sf{qLDPC}}$ is true, the codes witnessing it must lie somewhere between these two extreme limits.
Interestingly, under a certain conjecture in high-dimensional geometry,
Hastings [Has16] recently constructed a quantum CSS code with distance $n^{1-\varepsilon}$, for arbitrarily small $\varepsilon>0$,
whose parity-check matrices have sparsity $\log(n)$,
using a cellulation of a family of random lattices called LDA lattices.
Given the inherent embedding in a lattice, we believe that the $2$-complex of his code would actually be very far from pseudo-random.
This suggests that perhaps the “right” way to resolve the ${\sf{qLDPC}}$ conjecture is to look for high-dimensional manifolds that avoid pseudo-randomness.
Perhaps more fundamentally, our work suggests that quantum multi-particle entanglement may be inherently limited on random-looking topologies, in contrast to the fact that random-looking topologies are usually considered to be “robust”.
Interestingly enough, the notion that highly-expanding topologies are adversarial to large-scale quantum entanglement resonates with a sequence of results in a somewhat different context: in [BH13] the authors show that the ground-states of $2$-local Hamiltonians whose interaction graph is expanding can be approximated by tensor-product states. A similar result in [AE15] shows this for $k$-local commuting Hamiltonians with a bipartite form of expansion, using a more stringent criterion called local expansion.
1.6 Overview of the proof
Our main statement considers a $2$-complex
$\mathcal{X}=(X_{0},X_{1},X_{2})$, with boundary operators $\partial_{1},\partial_{2}$
such that $\mathcal{X}$ is $\varepsilon$-pseudo-random, for some small $\varepsilon>0$.
We begin by defining a linear space $C\subseteq\mathbb{F}_{2}^{n}$ to be weakly-binomial if its weight enumerator $(B_{0},\dotsc,B_{n})$, i.e. the vector of $n+1$ bins specifying the number of words of $C$ of each given weight, has an upper bound that behaves like the binomial distribution:
Definition 10.
Weakly-binomial subspace
A subspace $C\subseteq\mathbb{F}_{2}^{n}$ on $n$ bits is $(\zeta,\eta)$-weakly-binomial if for some constants $\zeta>0$ and $\eta>0$ we have:
$$\forall k\in\{0,\ldots,n\},\quad B_{k}\leq\frac{2^{\zeta n}\binom{n}{k}}{|C^{\perp}|}+2^{\eta n},$$
(6)
where $\{B_{k}\}$ is the weight enumerator of $C$ and $|C^{\perp}|$ is the size of the dual space of $C$.
Note that the normalization by $|C^{\perp}|$ is natural: setting $\zeta=\eta=0$, the first term $\binom{n}{k}/|C^{\perp}|$ is roughly the expected number of weight-$k$ words in a uniformly random subspace of the same dimension as $C$.
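A minimal illustration of Definition 10, with a hypothetical choice of code: the even-weight code on $n$ bits has $B_{k}=\binom{n}{k}$ on even $k$ and $0$ otherwise, and the brute-force check below confirms that it satisfies the weakly-binomial bound with essentially trivial constants:

```python
from itertools import product
from math import comb

n = 10
# Hypothetical example: C = the even-weight code on n bits, dim(C) = n - 1,
# so the dual C_perp = {0, all-ones} has size 2.
C = [w for w in product([0, 1], repeat=n) if sum(w) % 2 == 0]
B = [sum(1 for w in C if sum(w) == k) for k in range(n + 1)]
C_perp_size = 2

# B_k = binom(n,k) on even k and 0 on odd k, so a factor 2^{zeta n} = 2
# (and slack 2^{eta n} = 1) absorbs the even/odd oscillation in Eq. 6.
assert all(B[k] <= 2 * comb(n, k) / C_perp_size + 1 for k in range(n + 1))
```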
Our proof consists of two main steps:
1.
We show that the subspace $C\subseteq\mathbb{F}_{2}^{n}$ spanned by the generators (hyper-edges) of an $\varepsilon$-pseudorandom complex is weakly-binomial.
2.
We show that any weakly-binomial subspace $C$, for which the dual code $C^{\perp}$ also contains an LDPC code,
must have a relatively large dimension.
This second statement implies that if a CSS code ${\cal C}=(C_{1},C_{2})$ corresponds
to $\varepsilon$-PR boundaries, then the relative dimension of each of $C_{1},C_{2}$ is very close to $1/2$.
By standard distance-rate trade-offs in quantum error-correction this then implies an upper-bound on the
minimal quantum error-correcting distance of ${\cal C}$.
1.6.1 An $\varepsilon$-PR hyper-graph spans a weakly-binomial subspace
In the first step, we would like to approximate the weight enumerator of a space $C$
spanned by a set of generators that satisfy
the pseudorandom condition
in Definition 5.
The weight enumerator is approximated by considering a random walk ${\cal M}_{1}$ on the Cayley graph of the space $C$,
generated by its LDPC-sparse, $\varepsilon$-pseudorandom set of generators.
The stationary distribution of ${\cal M}_{1}$, when summed up over the separate fixed-weight shells of $\mathbb{F}_{2}^{n}$,
provides exactly the weight enumerator, up to normalization.
We would hence like to “project” ${\cal M}_{1}$ down to a random walk ${\cal M}_{2}$ defined on $n+1$ nodes
corresponding to “shells” of fixed weight in $\mathbb{F}_{2}^{n}$.
Hence, we would like to define transition probabilities between “weight-bins” that are independent of which word we choose in a fixed-weight bin. To do this, we define a coarse-graining of the chain over some fixed partition of the outcome space. We choose the shells of fixed weight as the sets in our partition.
So now, consider the line walk ${\cal M}_{2}$: it consists of $n+1$ bins, with non-zero transition probabilities
between nodes at distance at most $q$, the locality of the generators. We consider a bin $B_{k}$, i.e. the set of words in $C$ of weight $k$, and ask:
suppose that we sample a uniformly random generator $g$ from the rows of $\partial_{2}$ and add it to $w\in B_{k}$;
what is the distribution of $|w+g|$?
Generically, this might be a hard problem to solve.
However, using the $\varepsilon$-PR condition it becomes simpler: this condition, when interpreted in the appropriate way,
tells us that the probability that $|w+g|=k+j$, for $|j|\leq q$ (where $q$ is the locality of each generator), behaves
approximately as if we had sampled a word of weight $q$ uniformly at random and added it to $w$.
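The baseline behavior invoked here, adding a truly uniform weight-$q$ word, can be made concrete: the overlap between $w$ and $g$ is hypergeometric, so $|w+g|$ has an explicit law. The Monte Carlo sketch below (with illustrative parameters, not the paper's actual generators) confirms this baseline:

```python
import random
from math import comb

n, k, q = 60, 25, 5                  # illustrative parameters, not the paper's
w_support = set(range(k))            # a fixed word w of weight k

# If g is a uniformly random weight-q word, the overlap t = |supp(w) & supp(g)|
# is hypergeometric, and |w+g| = k + q - 2t.
exact = {k + q - 2 * t: comb(k, t) * comb(n - k, q - t) / comb(n, q)
         for t in range(q + 1)}

random.seed(0)
trials = 20000
counts = {}
for _ in range(trials):
    g = random.sample(range(n), q)   # a uniform weight-q word, by its support
    t = sum(1 for i in g if i in w_support)
    counts[k + q - 2 * t] = counts.get(k + q - 2 * t, 0) + 1

for wt, p in exact.items():
    assert abs(counts.get(wt, 0) / trials - p) < 0.02
```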
This implies that ${\cal M}_{2}$ assumes the form of a binomial chain, i.e. one that adds
uniformly random words of weight $q$, up to an additive error of at most $\varepsilon$.
We then analyze this chain and show that the stationary distribution of the perturbed
chain deviates from that of the pure, unperturbed chain by only a modest multiplicative exponential error, so long as the bins
we consider are “close” to the center $n/2$.
Far from the center bin $B_{n/2}$ we have no control, but this is translated into a small exponential additive error,
which together implies weak binomiality.
1.6.2 Weakly-binomial subspaces with LDPC duals have large dimension
In this part of the proof we are given a subspace $C$, such that $C^{\perp}$ contains an LDPC code,
and $C$ is weakly binomial.
We think of $C$ as being the span of parity checks $C=C_{1}=S_{x}$ of a CSS code, and hence $C^{\perp}$ also contains
the space spanned by a set of LDPC parity checks $C_{2}=S_{z}$.
We assume that the boundary operators are $d$-local, and each bit is incident on $K$ (Pauli $X$) checks,
for $d,K=O(1)$.
To place a lower bound on the dimension of $C$ we invoke the Sloane-MacWilliams transform [MS77],
which translates any weight enumerator on a code $C$, to the weight enumerator on the dual code $C^{\perp}$.
The crux of the argument is essentially a weak converse to previous results by Ashikhmin and Litsyn [AL99]
which use the transform in the context of classical codes.
Consider, for example, a classical code of large distance $d_{min}$.
Its weight enumerator $(B_{0},\dotsc,B_{n})$ is by definition such that $B_{0}=1$ (the zero word)
and $B_{i}=0$ for all $0<i<d_{min}$.
The result by Ashikhmin and Litsyn shows that if this is the case, then the weight enumerator of the dual code
$(B_{0}^{\perp},\dotsc,B_{n}^{\perp})$ has an upper bound that is very close to being binomial:
if one considers an interval of linear size around $n/2$, say $[n/3,\ldots{},n/2,\ldots,2n/3]$, then
each $B_{k}^{\perp}$ is at most ${n\choose k}/|C|$ up to a multiplicative polynomial factor.
In our case, we consider a quantum code, namely a CSS code, and argue the opposite way:
we show that if the weight enumerator (of one of the corresponding classical codes) $(B_{0},\dotsc,B_{n})$ is weakly binomial
in the sense defined above, then the weight enumerator of the dual
$(B_{0}^{\perp},\dotsc,B_{n}^{\perp})$ does not have an exact prefix of $0$ bins, as in the classical case
of a large-distance code, but the first $\alpha n$ bins are still very small, for some constant $\alpha>0$.
On the other hand, and this happens only in the quantum case, we know that the dual code $C^{\perp}$
contains an LDPC code.
Any LDPC code, whether it is $\varepsilon$-PR or not, has the property that at the appropriate scale,
the number of words in bin $B_{k}$ grows exponentially fast with $k$, at least for sufficiently small $k=\beta n$.
So, on the bins of the dual code, we collide two opposing forces: the upper bound implied by the weak binomial distribution,
which forces the lower prefix of $B^{\perp}$ to be very small, against the fact that this lower prefix
blows up exponentially fast because it contains an LDPC code.
This implies a stringent limit on the dimension of the parity-check spaces, and hence a lower bound on the
rate of the code, which in turn implies a stringent upper bound on the minimal distance of the corresponding quantum code.
2 Preliminaries
2.1 Notation
We adopt the following conventions and notation throughout the paper. Even though some of the machinery we use (e.g. MacWilliams identity) hold in a more general setting, we restrict our attention to binary linear codes in the classical case and binary (qubit) codes based on the CSS construction in the quantum case. Consequently, all linear operators in this paper are over $\mathbb{F}_{2}$. We denote a binary quantum code that encodes $k$ qubits into $n$ qubits with distance $d_{min}$ as $[[n,k,d_{min}]]$. Similarly, a classical code that encodes $k$ bits into $n$ bits and has distance $d_{min}$ will be denoted $[n,k,d_{min}]$. We will write $\rho:=k/n$ for the rate of a code, classical or quantum, and $\delta_{min}:=d_{min}/n$ for the error rate of a code.
A chain complex will be denoted by a calligraphic capital letter while the underlying spaces by the Roman shape of the same letter. For instance, a $2$-complex will be denoted as $\mathcal{X}=(X_{0},X_{1},X_{2})$ where $X_{i}$ are linear spaces over $\mathbb{F}_{2}$ with $X_{0}$ corresponding to formal linear combinations of vertices, $X_{1}$ of edges, and $X_{2}$ of faces. We denote the boundary operators by $\partial_{i}:X_{i}\to X_{i-1}$ and the co-boundary operators by $\partial^{i}:=\partial_{i}^{\mathsf{T}}:X_{i-1}\to X_{i}$, as in Definition 4.
We will use $d$ to denote the largest size of a hyper-edge in a complex. Alternatively, for a code over $\mathbb{F}_{2}$ the parameter $d$ (not to be confused with the distance $d_{min}$) represents the Hamming weight of its parity checks in the basis of minimal Hamming weight. The letter $K$ will be used to denote a degree, i.e. it will denote the number of $d$-cells incident to each ($d-1$)-cell or, alternatively, the number of bits examined by each parity check. For $x\in\mathbb{F}_{2}^{n}$, $|x|$ denotes the Hamming weight of $x$. Finally, we define $\log(x)\equiv\log_{2}(x)$ and use $H(p):=-p\log p-(1-p)\log(1-p)$ to denote the binary entropy function.
For a discrete set $E$ we use the notation $x\sim U[E]$ to denote that $x$ is a uniformly random element from $E$.
Definition 11.
Let $g(n)$ and $h(n)$ be two functions of $n$. We write $g(n)\geq_{p}h(n)$ if there exists a constant $z$ such that, for all $n\geq 1$,
$$g(n)\geq h(n)n^{z}$$
(7)
Similarly, we write $g(n)\leq_{p}h(n)$ if there exists a constant $z$ such that, for all $n\geq 1$,
$$g(n)\leq h(n)n^{z}.$$
(8)
2.2 Kravchuk polynomials
Kravchuk polynomials are a special set of orthogonal polynomials with many applications in error correction [MS77, p. 130]. They have a simple interpretation which makes their definition and many of their properties intuitive.
We fix $n$ to be some positive integer throughout. Let $m\in\{0,\dotsc,n\}$ and denote by
$$S_{m}:=\{w\in\mathbb{F}_{2}^{n}:|w|=m\}$$
(9)
the set of all length-$n$ strings of Hamming weight $m$. Let $\chi_{u}:\mathbb{F}_{2}^{n}\to\{-1,+1\}$ be a character of $\mathbb{F}_{2}^{n}$ for some $u\in\mathbb{F}_{2}^{n}$, i.e. a function of the form $\chi_{u}(v):=(-1)^{u\cdot v}$ where $u\cdot v:=\sum_{i=1}^{n}u_{i}v_{i}$ denotes the inner product modulo $2$. The $m$-th Kravchuk polynomial evaluated at $x\in\{0,\dotsc,n\}$ is then defined as
$$P_{m}(x):=\sum_{w\in S_{m}}\chi_{u}(w)=\sum_{w\in S_{m}}(-1)^{u\cdot w},$$
(10)
where $u\in\mathbb{F}_{2}^{n}$ is any vector of Hamming weight $|u|=x$. Note by symmetry that $P_{m}(x)$ does not depend on the word $u$ chosen as long as $|u|=x$. Also note that $P_{m}(x)$ implicitly depends also on the dimension $n$ of the underlying space $\mathbb{F}_{2}^{n}$, which should be clear from the context.
For any integer $l\geq 0$ and formal variable $x$, we define the binomial coefficient as the following degree-$l$ polynomial in $x$:
$$\binom{x}{l}:=\frac{x(x-1)\dotsb(x-l+1)}{l!}.$$
(11)
For integers $l<0$ the binomial coefficient is taken to be zero.
Using this, Kravchuk polynomials can be written explicitly as follows:
Definition 12.
The $m$-th Kravchuk polynomial, for $m\in\{0,\dotsc,n\}$, is a degree-$m$ polynomial in $x\in\mathbb{R}$ given by
$$P_{m}(x):=\sum_{l=0}^{m}(-1)^{l}\binom{x}{l}\binom{n-x}{m-l}.$$
(12)
It is not hard to see that Eqs. 10 and 12 agree for integer values of $x$.
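This agreement can be confirmed by brute force for small $n$; the sketch below compares the character-sum form (Eq. 10) with the explicit polynomial (Eq. 12) at all integer points:

```python
from itertools import combinations
from math import comb

n = 8

def kravchuk_char(m, x):
    """Character-sum form (Eq. 10), with u the fixed weight-x word 1^x 0^(n-x)."""
    u = set(range(x))
    return sum((-1) ** len(u & set(w)) for w in combinations(range(n), m))

def kravchuk_poly(m, x):
    """Explicit polynomial form (Eq. 12)."""
    return sum((-1) ** l * comb(x, l) * comb(n - x, m - l) for l in range(m + 1))

assert all(kravchuk_char(m, x) == kravchuk_poly(m, x)
           for m in range(n + 1) for x in range(n + 1))
```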
One of the most important properties of Kravchuk polynomials is that they are orthogonal under a particular inner product. This fact can be easily verified using the above interpretation:
Lemma 13.
For $i,j\in\{0,\dotsc,n\}$, the Kravchuk polynomials $P_{i}(k)$ and $P_{j}(k)$ satisfy
$$\sum_{k=0}^{n}\binom{n}{k}P_{i}(k)P_{j}(k)=\delta_{ij}2^{n}\binom{n}{i}$$
(13)
where $\delta_{ij}$ is the Kronecker delta.
Proof.
For each $k\in\{0,\dotsc,n\}$, let $u\in\mathbb{F}_{2}^{n}$ be some word of weight $k$. Then, by Eq. 10,
$$\sum_{k=0}^{n}\binom{n}{k}P_{i}(k)P_{j}(k)=\sum_{k=0}^{n}\binom{n}{k}\sum_{w\in S_{i}}(-1)^{u\cdot w}\sum_{w^{\prime}\in S_{j}}(-1)^{u\cdot w^{\prime}}.$$
(14)
Since we could have chosen any other word $u^{\prime}$ of the same weight,
$$\sum_{w\in S_{i}}(-1)^{u\cdot w}\sum_{w^{\prime}\in S_{j}}(-1)^{u\cdot w^{\prime}}=\sum_{w\in S_{i}}(-1)^{u^{\prime}\cdot w}\sum_{w^{\prime}\in S_{j}}(-1)^{u^{\prime}\cdot w^{\prime}},$$
(15)
so we can rewrite the right-hand side of Eq. 14 as:
$$\sum_{k=0}^{n}\sum_{x\in S_{k}}\sum_{\begin{subarray}{c}w\in S_{i}\\ w^{\prime}\in S_{j}\end{subarray}}(-1)^{x\cdot(w+w^{\prime})}.$$
(16)
Now group the first two sums together and interchange the order of summation:
$$\sum_{\begin{subarray}{c}w\in S_{i}\\ w^{\prime}\in S_{j}\end{subarray}}\sum_{x\in\mathbb{F}_{2}^{n}}(-1)^{x\cdot(w+w^{\prime})}.$$
(17)
If $w+w^{\prime}\neq 0$, the internal sum vanishes. If $i\neq j$ then $w+w^{\prime}$ is never $0$, so the whole expression vanishes. If $i=j$ then for every $w\in S_{i}$ there is exactly one word $w^{\prime}\in S_{j}$ with $w+w^{\prime}=0$, so in this case the sum is $\binom{n}{i}2^{n}$.
∎
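The orthogonality relation of Lemma 13 can likewise be verified exactly for small $n$, using the explicit polynomial form (Eq. 12):

```python
from math import comb

n = 8

def P(m, x):
    """Kravchuk polynomial (Eq. 12) for this fixed n."""
    return sum((-1) ** l * comb(x, l) * comb(n - x, m - l) for l in range(m + 1))

# Lemma 13: sum_k binom(n,k) P_i(k) P_j(k) = delta_{ij} 2^n binom(n,i).
for i in range(n + 1):
    for j in range(n + 1):
        lhs = sum(comb(n, k) * P(i, k) * P(j, k) for k in range(n + 1))
        assert lhs == (2 ** n * comb(n, i) if i == j else 0)
```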
One important application of this orthogonality relation is that any polynomial $g(x)$ with $\deg(g)\leq n$ has a unique Kravchuk decomposition. A simple way of determining such a decomposition is as follows:
Fact 14.
If $g(x)$ is a polynomial of degree at most $n$, its Kravchuk decomposition is
$$g(x)=\sum_{j=0}^{n}g_{j}P_{j}(x)\qquad\text{where}\qquad g_{j}:=\frac{1}{2^{n}\binom{n}{j}}\sum_{k=0}^{n}\binom{n}{k}P_{j}(k)g(k).$$
(18)
Following the line of argument in [KL97], we will make use of a particular decomposition for the polynomial $P_{m}(x)^{2}$:
Lemma 15.
For any $m\in\{0,\dotsc,\lfloor n/2\rfloor\}$,
$$(P_{m}(x))^{2}=\sum_{i=0}^{m}\binom{2i}{i}\binom{n-2i}{m-i}P_{2i}(x).$$
(19)
Proof.
According to Eq. 10,
$$(P_{m}(x))^{2}=\sum_{w,w^{\prime}\in S_{m}}(-1)^{u\cdot(w+w^{\prime})}$$
(20)
for any $u\in\mathbb{F}_{2}^{n}$ such that $|u|=x$. Note that $|w+w^{\prime}|=2i$ for some $i\in\{0,\dotsc,m\}$, so we can rewrite the right-hand side of Eq. 20 as
$$\sum_{i=0}^{m}\sum_{v\in S_{2i}}c_{v}(-1)^{u\cdot v},$$
(21)
where the integer $c_{v}$ accounts for the number of ways two $n$-bit strings $w$ and $w^{\prime}$ (each of Hamming weight $m$) can overlap to produce a given string $v=w+w^{\prime}$ of weight $|v|=2i$. It is not hard to see that $c_{v}$ depends only on the Hamming weight of $v$ and is given by
$$c_{v}=\binom{2i}{i}\binom{n-2i}{m-i}.$$
(22)
Indeed, we simply need to account for all ways of splitting the $2i$ ones of $v$ into two groups of size $i$ each (one of the groups is contributed by $w$ while the other by $w^{\prime}$) as well as picking $m-i$ out of the remaining $n-2i$ locations where the remaining $m-i$ ones of $w$ and $w^{\prime}$ would cancel out.
∎
We will also need the following simple upper bound on Kravchuk polynomials:
Lemma 16.
For any $k\in\{0,\dotsc,n\}$,
$$P_{m}(k)\leq\binom{n}{m}.$$
(23)
Proof.
This follows easily from Eq. 10. Let $u$ be any binary vector with $|u|=k$. Then
$$P_{m}(k)=\sum_{w\in S_{m}}(-1)^{u\cdot w}\leq\sum_{w\in S_{m}}1=\binom{n}{m}$$
(24)
as claimed.
∎
2.3 The Sloane-MacWilliams transform
We will use an important relation known as the MacWilliams identity [MS77]. Suppose we have some linear code $C$ in $\mathbb{F}_{2}^{n}$. We define the weight enumerator of the code $C$ as a set of coefficients $\{B_{k}\}$ where each $B_{k}$ denotes the number of words of weight $k$ in the code. The dual code is simply the code $C^{\perp}$, consisting of all the words in $\mathbb{F}_{2}^{n}$ that are orthogonal to all the words in $C$. Of course the code $C^{\perp}$ has its own weight enumerator $\{B_{k}^{\perp}\}$. We state these notions formally in the following definitions:
Definition 17.
Given a code $C$, and for all $k\in\{0,\dotsc,n\}$, we define the weight enumerator $B_{k}$ as:
$$\displaystyle B_{k}=\left|\{x\in C:|x|=k\}\right|$$
(25)
We define the dual code:
Definition 18.
Given a code $C$, we define the dual code $C^{\perp}$ as:
$$\displaystyle C^{\perp}=\{x\in\mathbb{F}_{2}^{n}:\forall c\in C,\,c\cdot{}x=0\}$$
(26)
Naturally the dual code has an analogously defined weight enumerator:
Definition 19.
For a code $C$ and $C^{\perp}$, we define:
$$\displaystyle B_{k}^{\perp}=\left|\{x\in C^{\perp}:|x|=k\}\right|$$
(27)
The MacWilliams identity provides a way to write the weight enumerator of the dual code in terms of the weight enumerator of $\mathcal{C}$, and the Kravchuk polynomials.
Theorem 20 ([MS77]).
Let $\mathcal{C}$ be a linear code over $\mathbb{F}_{2}^{n}$ with weight enumerator $\{B_{k}\}$. Denote the dual code $\mathcal{C}^{\perp}$ and its weight enumerator $\{B_{k}^{\perp}\}$. Then, it holds that:
$$\displaystyle B_{k}^{\perp}=\frac{1}{|\mathcal{C}|}\sum_{j=0}^{n}P_{k}(j)B_{j}$$
(28)
where $|\mathcal{C}|$ denotes the number of codewords in $\mathcal{C}$.
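The identity of Theorem 20 is easy to verify on a toy code; the sketch below uses a hypothetical code on $6$ bits spanned by three disjoint pairs (which happens to be self-dual):

```python
from itertools import product
from math import comb

n = 6
# A hypothetical toy code on 6 bits: the span of three disjoint pairs.
gens = [(1, 1, 0, 0, 0, 0), (0, 0, 1, 1, 0, 0), (0, 0, 0, 0, 1, 1)]
C = {tuple(sum(c * g[i] for c, g in zip(cs, gens)) % 2 for i in range(n))
     for cs in product([0, 1], repeat=len(gens))}
C_perp = [w for w in product([0, 1], repeat=n)
          if all(sum(a * b for a, b in zip(w, c)) % 2 == 0 for c in C)]

B = [sum(1 for c in C if sum(c) == k) for k in range(n + 1)]
B_perp = [sum(1 for w in C_perp if sum(w) == k) for k in range(n + 1)]

def P(m, x):
    """Kravchuk polynomial (Eq. 12) for this fixed n."""
    return sum((-1) ** l * comb(x, l) * comb(n - x, m - l) for l in range(m + 1))

# MacWilliams identity (Eq. 28): B_perp[k] = (1/|C|) sum_j P_k(j) B_j.
for k in range(n + 1):
    assert B_perp[k] * len(C) == sum(P(k, j) * B[j] for j in range(n + 1))
```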
There are several different ways in which the MacWilliams identity is applied to study the parameters of codes (classical and quantum). One common way to apply the transform is to define a linear program on the weight enumerator of the code [MS77].
Since $B_{k}^{\perp}\geq 0$, this provides a non-trivial constraint in the program. Using standard ideas in linear programming, the dual program provides simple bounds on the original (also referred to as the primal) problem, and therefore on the resulting code. Any feasible solution in the dual program provides a bound on the original program [MS77]. We will only require a simple identity based on the MacWilliams transform:
Lemma 21 ([KL97]).
Let $\{B_{j}\}$ be the weight enumerator of a code $\mathcal{C}$ on $n$ bits, and $\{B_{j}^{\perp}\}$ be the weight enumerator of its dual. If $\alpha(x):=\sum_{j=0}^{n}\alpha_{j}P_{j}(x)$ for some coefficients $\alpha_{j}$, then
$$|C|\sum_{j=0}^{n}\alpha_{j}B_{j}^{\perp}=\sum_{j=0}^{n}\alpha(j)B_{j}.$$
(29)
Proof.
By the MacWilliams identity,
$$\sum_{j=0}^{n}\alpha_{j}B_{j}^{\perp}=\sum_{j=0}^{n}\alpha_{j}\left[\frac{1}{|C|}\sum_{k=0}^{n}B_{k}P_{j}(k)\right]=\frac{1}{|C|}\sum_{k=0}^{n}B_{k}\sum_{j=0}^{n}\alpha_{j}P_{j}(k)=\frac{1}{|C|}\sum_{j=0}^{n}B_{j}\alpha(j).$$
(30)
∎
For obvious reasons, this identity has many applications in error correction. In particular it is a useful tool for establishing many bounds on quantum codes. One such bound, which we will need in the proof of the main theorem, is provided by Ashikhmin and Litsyn:
Proposition 22 ([AL99, Corollary 2]).
A quantum code with parameters $[[n,k,d_{min}]]$ satisfies:
$$\frac{k}{n}\leq 1-\frac{\delta_{min}}{2}\log(3)-H\left(\frac{\delta_{min}}{2}\right)-o(1).$$
(31)
2.4 Markov chains
We will use coarse-grained Markov chains in our analysis. We wish to partition the discrete state space of a Markov chain and analyze the coarse-grained dynamics.
Given a Markov chain $\mathcal{M}$ with state space $\Omega$, and some subset $A\subseteq\Omega$, for any probability distribution $\pi$ over $\Omega$ we will denote:
$$\displaystyle\pi_{A}:=\sum_{i\in A}\pi_{i}$$
(32)
We then define a coarse-graining of a Markov chain:
Definition 23.
Coarse-grained Markov chain
Let $\mathcal{M}$ be an irreducible Markov chain with state space $\Omega$. Denote the probability of transitioning from $i$ to $j$ as $\mathcal{M}_{i,j}$. Suppose we have a partition $\{S_{k}\}$ of $\Omega$ (i.e. $\cup_{k}S_{k}=\Omega$ and $S_{k}\cap S_{j}=\emptyset$ for $k\neq j$). Denote the Markov chain’s stationary distribution by $\{\pi_{j}\}$.
We denote the coarse-grained Markov chain with respect to $\{S_{k}\}$ and $\pi$ by $\mathcal{M}^{\prime}$. It has exactly one state for each set in $\{S_{i}\}$. If $A$ and $B$ are two sets in $\{S_{i}\}$ we define:
$$\displaystyle\mathcal{M}_{A,B}^{\prime}=\sum_{i\in A}\sum_{j\in B}\frac{\pi_{i}}{\pi_{A}}\mathcal{M}_{i,j}.$$
(33)
Lemma 24.
Stationary distributions of coarse-grained chains are coarse-grained stationary distributions
Let $\mathcal{M}$ be an irreducible Markov chain with stationary distribution $\{\pi_{j}\}$, and suppose we have some partition of the state space $\{S_{i}\}$. Suppose we construct the coarse-grained Markov chain $\mathcal{M}^{\prime}$ with respect to the partition $\{S_{i}\}$. Denote the stationary distribution of $\mathcal{M}^{\prime}$ as $\{\pi_{S_{i}}^{\prime}\}$. The stationary distribution of $\mathcal{M}^{\prime}$ satisfies:
$$\displaystyle\forall S_{i}\in\{S_{i}\}\,\,\,\,\pi_{S_{i}}^{\prime}=\pi_{S_{i}}=\sum_{j\in S_{i}}\pi_{j}$$
(34)
Proof.
Fix some $B\in\{S_{i}\}$; we can evaluate:
$$\displaystyle\sum_{S_{i}}\pi_{S_{i}}\mathcal{M}^{\prime}_{S_{i},B}=\sum_{S_{i}}\pi_{S_{i}}\sum_{j\in S_{i}}\sum_{k\in B}\frac{\pi_{j}}{\pi_{S_{i}}}\mathcal{M}_{j,k}$$
(35)
$$\displaystyle=\sum_{S_{i}}\sum_{j\in S_{i}}\sum_{k\in B}\pi_{j}\mathcal{M}_{j,k}=\sum_{j\in\Omega}\sum_{k\in B}\pi_{j}\mathcal{M}_{j,k}=\sum_{k\in B}\sum_{j\in\Omega}\pi_{j}\mathcal{M}_{j,k}$$
(36)
Since $\{\pi_{j}\}$ is stationary for the original chain,
$$\displaystyle=\sum_{k\in B}\pi_{k}=\pi_{B}$$
(37)
So, the distribution $\{\pi_{S_{i}}\}$ is stationary for the coarse-grained chain ${\cal M}^{\prime}$.
∎
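Lemma 24 can also be checked numerically. The following sketch (the helper `coarse_grain`, implementing Eq. (33), is ours) coarse-grains a random irreducible chain and verifies that the coarse-grained stationary distribution is stationary for the coarse-grained chain.

```python
import numpy as np

def coarse_grain(M, pi, parts):
    """Coarse-grain a row-stochastic matrix M with respect to a partition
    `parts` and stationary distribution `pi`, as in Definition 23:
    M'_{A,B} = sum_{i in A} sum_{j in B} (pi_i / pi_A) * M_{i,j}."""
    Mp = np.zeros((len(parts), len(parts)))
    for a, A in enumerate(parts):
        piA = pi[A].sum()
        for b, B in enumerate(parts):
            Mp[a, b] = sum(pi[i] / piA * M[i, j] for i in A for j in B)
    return Mp

rng = np.random.default_rng(0)
M = rng.random((6, 6))
M /= M.sum(axis=1, keepdims=True)  # random irreducible chain on 6 states
w, v = np.linalg.eig(M.T)          # stationary distribution: left eigenvector at 1
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
parts = [[0, 1], [2, 3, 4], [5]]
Mp = coarse_grain(M, pi, parts)
pi_coarse = np.array([pi[A].sum() for A in parts])
print(np.allclose(pi_coarse @ Mp, pi_coarse))  # True (Lemma 24)
```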
Definition 25.
Reversible Markov chains
Let ${\cal M}$ be a Markov chain on space $\Omega$ with stationary distribution $\pi$.
${\cal M}$ is said to be reversible if
$$\forall i,j\in\Omega,\quad\pi_{i}{\cal M}_{i,j}=\pi_{j}{\cal M}_{j,i}$$
The following fact is standard: it says that the random walk on the Cayley graph of a finite group is reversible. This follows almost immediately from the fact that this walk is invariant under left multiplication by an element of the group:
Fact 26.
Walks on finite Cayley graphs are reversible
Let $G$ be a group, and ${\cal G}(G,s)$ be the Cayley graph of $G$ w.r.t. some generating set $s\subseteq G$.
Let ${\cal M}$ denote the random walk on ${\cal G}$.
Then ${\cal M}$ is reversible.
Next, we show that under our natural definition of coarse-graining, reversible chains remain reversible:
Fact 27.
Coarse-grained reversible chains are reversible
Let ${\cal M}$ be a reversible Markov chain on space $\Omega$, with a stationary distribution $\pi$.
Let $\{S_{k}\}$ be some partition of $\Omega$.
Then the coarse-grained chain ${\cal M}^{\prime}(\{S_{k}\},\pi)$ is reversible.
Proof.
Let $\mathcal{M}$ be a reversible Markov chain. By definition, then,
$$\pi_{i}\mathcal{M}_{i,j}=\pi_{j}\mathcal{M}_{j,i}$$
(38)
We can write:
$$\pi_{S_{i}}\mathcal{M}^{\prime}_{S_{i},S_{j}}=\pi_{S_{i}}\sum_{k_{1}\in S_{i}}\sum_{k_{2}\in S_{j}}\frac{\pi_{k_{1}}}{\pi_{S_{i}}}\mathcal{M}_{k_{1},k_{2}}$$
(39)
$$=\sum_{k_{1}\in S_{i}}\sum_{k_{2}\in S_{j}}\pi_{k_{1}}\mathcal{M}_{k_{1},k_{2}}$$
(40)
by reversibility,
$$=\sum_{k_{1}\in S_{i}}\sum_{k_{2}\in S_{j}}\pi_{k_{2}}\mathcal{M}_{k_{2},k_{1}}=\pi_{S_{j}}\mathcal{M}^{\prime}_{S_{j},S_{i}}$$
(41)
∎
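Fact 26 is immediate for the walks used in this paper, i.e. walks on $\mathbb{F}_{2}^{n}$ whose generators are their own inverses: the transition matrix is symmetric, and the uniform distribution satisfies detailed balance. A minimal check (the generator set below is an arbitrary choice of ours):

```python
import numpy as np

n = 4
gens = [0b0111, 0b1011, 0b1101, 0b1110]  # the four weight-3 words on 4 bits
N = 2 ** n
M = np.zeros((N, N))
for x in range(N):
    for g in gens:
        M[x, x ^ g] += 1.0 / len(gens)  # apply a uniformly random generator
pi = np.full(N, 1.0 / N)  # uniform distribution is stationary on a Cayley graph
# detailed balance: pi_i * M_{i,j} == pi_j * M_{j,i}
D = pi[:, None] * M
print(np.allclose(D, D.T))  # True
```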
3 An $\varepsilon$-PR complex spans a weakly-binomial subspace
The goal of this section is to show that any degree-regular chain complex that is $\varepsilon$-PR
yields a set of stabilizers that span a weakly-binomial space.
The main lemma of this section shows that $\varepsilon$-pseudo-randomness (see Definition 5) implies weak binomiality:
Lemma 28.
$\varepsilon$-PR hyper-edges span weakly-binomial space
Let $G=(V,E)$ be a $d$-uniform hyper-graph, i.e. $E\subseteq\binom{V}{d}$ consists of subsets of $V$ of size exactly $d$, and suppose $\deg(v)=K$ for each $v\in V$.
If $G$ is $\varepsilon$-pseudorandom ($\varepsilon$-PR), then the span of $E$ as words over $\mathbb{F}_{2}^{n}$
is $(\zeta,\eta)$ weakly binomial with constants $\zeta=\varepsilon^{1/2}$ and $\eta=H(\varepsilon^{1/(2d)})$.
We note that despite the assumption that each vertex has a small, constant degree $K$,
the parameters of the weakly-binomial property do not depend on this value $K$.
This is crucial since typically $\varepsilon$ cannot scale better than $2/\sqrt{K}$, as in the case of Ramanujan graphs [Alo86, Lub14].
3.1 Weight enumerators as stationary distributions
We analyze the weight enumerator of $\varepsilon$-PR complexes by associating them to natural Markov chains,
and then analyzing the stationary distributions of these Markov chains.
Definition 29.
Let $G=(V,E)$ be a hyper-graph $|V|=n$, and let $\mathcal{C}\subseteq\mathbb{F}_{2}^{n}$ be the space spanned by the hyper edges of $G$ as vectors over $\mathbb{F}_{2}^{n}$.
We associate a pair of Markov chains to ${\cal C}$: ${\cal M}^{1},{\cal M}^{2}$ as follows:
•
${\cal M}^{1}$ is the Markov chain defined by a random walk on the Cayley graph of ${\cal C}$ using the set of generators $E$.
•
Let $S_{i}$ be the set of words in $\mathcal{C}$ of weight $i$, formally defined as $S_{i}=\{x\in\mathcal{C}\,\,|\,\,|x|=i\}$. $\mathcal{M}^{2}$ is defined as the coarse-grained Markov chain with respect to the partition $\{S_{i}\}$,
and some stationary distribution of ${\cal M}^{1}$.
Let us denote the Markov chains corresponding to the complete hyper-graph as $\mathcal{M}^{1}$ and $\mathcal{M}^{2}$ (in this case $E$ contains all words of weight $d$). Denote the corresponding stationary distributions as $\pi^{1}$ and $\pi^{2}$. Similarly, let us denote the Markov chains corresponding to our $\varepsilon$-pseudorandom complex as $\mathcal{M}^{1,\,\varepsilon}$ and $\mathcal{M}^{2,\,\varepsilon}$ and the corresponding stationary distributions as $\pi^{1,\,\varepsilon}$ and $\pi^{2,\,\varepsilon}$. The main observation is that to understand the weight enumerator of $\mathcal{C}={\rm span}(E)$ for our $\varepsilon$-pseudorandom complex, one can instead
look at the $1$-dimensional stationary distributions $\pi^{2,\,\varepsilon}$ of ${\cal M}^{2,\,\varepsilon}$.
It is easy to check that
$$\forall k\in[n],\quad B_{k}=2^{m}\pi^{2,\,\varepsilon}_{k}.$$
(42)
So now, if a $d$-uniform hyper-graph $G$ is $\varepsilon$-pseudorandom we would like to characterize
its corresponding ${\cal M}^{1,\,\varepsilon},{\cal M}^{2,\,\varepsilon}$ Markov chains as an approximation of the Markov chains $\mathcal{M}^{1}$ and $\mathcal{M}^{2}$.
Proposition 30.
$\varepsilon$-PR chains project down to approximately independent chains
Let $G=(V,E)$, $|V|=n$, be a $d$-uniform hyper-graph which is $\varepsilon$-PR.
Then we have in ${\cal M}^{2,\,\varepsilon}$ the following transition probabilities for all $i\in[n]$:
$$\forall j\leq d,\quad\left|\mathcal{M}^{2,\,\varepsilon}_{i,i+d-2j}-{d\choose j}(i/n)^{j}(1-i/n)^{d-j}\right|\leq\varepsilon.$$
(43)
Proof.
Let $\pi^{1,\,\varepsilon}$ denote the stationary distribution of ${\cal M}^{1,\,\varepsilon}$.
Consider some $x\in S_{i}\subseteq\mathbb{F}_{2}^{n}$ and define $A\subseteq[n]$ as the subset of bits on which $x$ equals $1$. Let us choose $S$ in Definition 5 as $A$. By the $\varepsilon$-pseudorandom condition, the probability that $x+y\in S_{i+d-2j}$
where $y\sim U[E]$ is a value in the following interval
$${\cal T}(i,j)=\left[\binom{d}{j}\left(\frac{i}{n}\right)^{j}\left(1-\frac{i}{n}\right)^{d-j}-\varepsilon,\ \binom{d}{j}\left(\frac{i}{n}\right)^{j}\left(1-\frac{i}{n}\right)^{d-j}+\varepsilon\right]$$
The Markov chain ${\cal M}^{2,\,\varepsilon}$ is derived by a coarse-graining of ${\cal M}^{1,\,\varepsilon}$ along shells $S_{i}$ with coefficients from $\pi^{1,\,\varepsilon}$.
Hence, by definition of coarse-graining Definition 23
the transition probability in ${\cal M}^{2,\,\varepsilon}$ between vertices $i$ and $i+d-2j$ is a convex combination of probability values in ${\cal T}(i,j)$.
Hence ${\cal M}^{2,\,\varepsilon}_{i,i+d-2j}\in{\cal T}(i,j)$ as required.
∎
We note that when ${\cal M}^{1}$ is the Markov chain corresponding to the span of all generators of weight $d$, the stationary distribution $\pi^{2}$ of its corresponding $1$-dimensional chain ${\cal M}^{2}$ satisfies
$$\forall k\in[n],\quad\pi^{2}_{k}=2^{-n}{n\choose k}.$$
(44)
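Eq. (44) can be confirmed numerically. For the complete weight-$d$ walk the exact shell-chain transitions are hypergeometric, $\mathcal{M}^{2}_{i,i+d-2j}=\binom{i}{j}\binom{n-i}{d-j}/\binom{n}{d}$ (a standard computation; the expression in Eq. (43) is its large-$n$ approximation), and the binomial distribution is stationary for them. A short sketch:

```python
from math import comb
import numpy as np

n, d = 8, 3  # d odd, so the walk reaches every weight shell
M = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(d + 1):
        # flip j of the i ones and d-j of the n-i zeros: weight i -> i+d-2j
        if j <= i and d - j <= n - i:
            M[i, i + d - 2 * j] = comb(i, j) * comb(n - i, d - j) / comb(n, d)
pi = np.array([comb(n, k) / 2 ** n for k in range(n + 1)])  # Eq. (44)
print(np.allclose(M.sum(axis=1), 1.0), np.allclose(pi @ M, pi))  # True True
```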
3.2 Stationary distributions
Having established that the transition probabilities of $\mathcal{M}^{2,\,\varepsilon}$ are very close to the transition probabilities of $\mathcal{M}^{2}$, we prove that the stationary distribution $\pi^{2}$ is an upper bound on $\pi^{2,\,\varepsilon}$, up to a modest exponential factor.
Proposition 31.
Let ${\cal M}^{1}$ be the Markov chain corresponding to the random walk on $\mathbb{F}_{2}^{n}$ using
all words of weight $d$.
Let ${\cal M}^{1,\,\varepsilon}$ denote the Markov chain
corresponding to the random walk on $\mathbb{F}_{2}^{n}$ using $E$.
Let ${\cal I}$ denote the interval $[n\varepsilon^{1/(2d)},n(1-\varepsilon^{1/(2d)})]$.
Let $\pi^{1}$ and $\pi^{1,\,\varepsilon}$ denote the stationary distributions of $\mathcal{M}^{1}$ and $\mathcal{M}^{1,\,\varepsilon}$,
respectively.
Let ${\cal M}^{2},{\cal M}^{2,\,\varepsilon}$ denote the coarse-graining of ${\cal M}^{1},{\cal M}^{1,\,\varepsilon}$,
to the $n+1$ shells $\{S_{k}\}_{k=0}^{n}$ using the stationary distributions $\pi^{1},\pi^{1,\,\varepsilon}$ respectively.
Let $\pi^{2},\pi^{2,\,\varepsilon}$ denote their corresponding stationary distributions.
Then
$$\forall i\in{\cal I},\quad\pi^{2,\,\varepsilon}_{i}\leq\pi^{2}_{i}\cdot 2^{2\varepsilon^{1/2}(n/2-i)}.$$
Proof.
Since both ${\cal M}^{1},{\cal M}^{1,\,\varepsilon}$ are random walks on finite Cayley graphs then by Fact 26 they are reversible.
Hence by Fact 27 the coarse-grained Markov chains ${\cal M}^{1}(\{S_{k}\},\pi^{1})$ and ${\cal M}^{1,\,\varepsilon}(\{S_{k}\},\pi^{1,\,\varepsilon})$
are also reversible.
This implies
$$\displaystyle\forall i\in[n],\ {\cal M}^{2}_{i,i+1}\pi^{2}_{i}={\cal M}^{2}_{i+1,i}\pi^{2}_{i+1}.$$
(45)
$$\displaystyle\forall i\in[n],\ {\cal M}^{2,\,\varepsilon}_{i,i+1}\pi^{2,\,\varepsilon}_{i}={\cal M}^{2,\,\varepsilon}_{i+1,i}\pi^{2,\,\varepsilon}_{i+1}.$$
(46)
By definition of the interval ${\cal I}$, for every $i\in{\cal I}$ both $i/n$ and $1-i/n$ are at least $\varepsilon^{1/(2d)}$, so every nonzero transition probability of the ideal chain ${\cal M}^{2}$ between shells at distance at most $d$ is lower bounded by $({\varepsilon^{1/(2d)}})^{d}=\varepsilon^{1/2}$.
Hence:
$$\forall i\in{\cal I},\,|j|\leq d,\ \ \mathcal{M}_{i,i+j}^{2}\geq\varepsilon^{1/2}.$$
(47)
Therefore by Definition 5
$$\displaystyle\forall i\in{\cal I},|j|\leq d\ \mathcal{M}_{i,i+j}^{2,\,\varepsilon}\leq\mathcal{M}_{i,i+j}^{2}\cdot(1+\varepsilon^{1/2}).$$
(48)
Together with Equations 45,46 above this implies
$$\forall i\in{\cal I},\quad\frac{\pi^{2,\,\varepsilon}_{i}}{\pi^{2,\,\varepsilon}_{i+1}}\leq\frac{\pi^{2}_{i}}{\pi^{2}_{i+1}}\cdot\frac{1+\varepsilon^{1/2}}{1-\varepsilon^{1/2}}\leq\frac{\pi^{2}_{i}}{\pi^{2}_{i+1}}\cdot(1+2\varepsilon^{1/2}),$$
(49)
where the last inequality follows from Taylor series expansion.
Thus,
$$\displaystyle\frac{\pi^{2,\,\varepsilon}_{i}}{\pi^{2,\,\varepsilon}_{i+1}}\cdot\ldots\cdot\frac{\pi^{2,\,\varepsilon}_{n/2-1}}{\pi^{2,\,\varepsilon}_{n/2}}\leq\frac{\pi^{2}_{i}}{\pi^{2}_{i+1}}\cdot\ldots\cdot\frac{\pi^{2}_{n/2-1}}{\pi^{2}_{n/2}}\cdot(1+2\varepsilon^{1/2})^{(n/2-i)}.$$
(50)
In addition, by monotonicity of the binomial distribution around $n/2$, we have that $\pi^{2}_{n/2}\geq n^{-k}$, for some constant $k>0$.
Hence
$$\displaystyle\forall i\in{\cal I},\quad\pi^{2,\,\varepsilon}_{i}\leq 2^{2\varepsilon^{1/2}(n/2-i)}n^{k}\pi^{2}_{i}.$$
(51)
∎
3.3 Proof of Lemma 28
To briefly sketch the proof of the lemma:
Let $\mathcal{M}^{1}$ be the random walk using all words of weight $d$, and let $\mathcal{M}^{1,\,\varepsilon}$ be the random walk using generators $E$.
In general, the stationary distribution $\pi^{1,\,\varepsilon}$ of the $n$-dimensional Markov chain ${\cal M}^{1,\,\varepsilon}$ can be very hard to analyze.
However, since we are only interested in the coarse-graining of this distribution over the $n+1$ fixed-weight Hamming shells, $\pi^{2,\,\varepsilon}$, namely the weight enumerator of $\mathcal{C}={\rm span}(E)$, we can consider instead the coarse-grained Markov chain ${\cal M}^{2,\,\varepsilon}$, which is the coarse-graining of ${\cal M}^{1,\,\varepsilon}$ using $\pi^{1,\,\varepsilon}$.
This greatly simplifies the analysis in two senses:
On one hand, ${\cal M}^{2,\,\varepsilon}$ is reversible, because it is the coarse-graining of the reversible chain ${\cal M}^{1,\,\varepsilon}$.
On the other hand, by the $\varepsilon$-PR condition, it is close to the $1$-dimensional chain defined by sampling uniformly at random a word of weight $d$ and adding it to a given word.
Together these two conditions imply a concise condition on its stationary distribution $\pi^{2,\,\varepsilon}$, namely the weakly-binomial property.
Proof.
Let $\mathcal{M}^{1}$ be the random walk using all words of weight $d$, and let $\mathcal{M}^{1,\,\varepsilon}$ be the random walk using generators of the code. Denote the stationary distributions similarly, as is described in the statement of Proposition 31. By Eq. 42 and Eq. 44, the stationary distributions have the form:
$$\displaystyle\pi^{2,\,\varepsilon}_{j}=\frac{B_{j}}{2^{m}},\qquad\pi^{2}_{j}=\frac{\binom{n}{j}}{2^{n}}$$
(52)
Proposition 31 implies that, for $j\in[n\varepsilon^{1/(2d)},n(1-\varepsilon^{1/(2d)})]$:
$$\displaystyle\frac{B_{j}}{2^{m}}\leq\frac{2^{\varepsilon^{1/2}n}\binom{n}{j}}{2^{n}}$$
(53)
or
$$\displaystyle B_{j}\leq\frac{2^{\varepsilon^{1/2}n}\binom{n}{j}}{|C^{\perp}|}$$
(54)
For $j$ outside the error interval, we know nothing. We can then write the general upper bound for all $j$ by including an additive “error floor term”. This error floor term has magnitude:
$$\displaystyle\binom{n}{n\varepsilon^{1/(2d)}}\leq 2^{nH(\varepsilon^{1/(2d)})}$$
(55)
So we can write the general upper bound as:
$$\displaystyle\forall j\in[0,\ldots,n],\quad B_{j}\leq\frac{2^{\varepsilon^{1/2}n}\binom{n}{j}}{|C^{\perp}|}+2^{nH(\varepsilon^{1/(2d)})}.$$
(56)
Hence ${\cal C}$ is weakly-binomial with parameters $\zeta=\varepsilon^{1/2}$ and $\eta=H(\varepsilon^{1/(2d)})$.
∎
4 Weakly-binomial spaces have bounded minimal distance
In this section we use the definition of weakly-binomial spaces to argue an upper-bound on the minimal distance of quantum codes.
Lemma 32.
Large QECC distance implies large deviation from binomial
Let ${\cal Q}=(C_{1},C_{2})_{n}$ be a family of quantum $[[n,k,d]]$ CSS codes as described in Definition 8,
where $\delta_{min}=d/n>0$ is a constant independent of $n$.
Put $m=\dim(C_{1})=\dim(C_{2})$, so that $m/n=1/2-k/(2n)$.
Suppose that $C_{1},C_{2}$ are spanned by generators of weight $d$, such that in the hypergraph of each of $C_{1},C_{2}$
the degree is some constant $K_{1},K_{2}=O(1)$.
If $C_{1}$ is $(\zeta,\eta)$ weakly-binomial for $\eta\leq 2^{-2\log(2d)/d}$ and $\zeta<\eta^{d}$ then $\delta_{min}\leq 4H(\eta)$.
Proof.
Define
$$f_{d,\eta}(\gamma)=2\gamma+(1-2\gamma)H\left(\frac{\eta-\gamma}{1-2\gamma}\right)+\frac{1}{d^{2}}H\left(2\gamma d\right)-H(\eta)$$
(57)
We now use Fact 34, which analyzes $f_{d,\eta}$: by this fact, whenever $\eta\leq 2^{-2\log(2d)/d}$ we have:
$$\displaystyle h_{d,\eta}\equiv f_{d,\eta}(\eta^{d})\geq\eta^{d}.$$
(58)
By assumption, $C_{1}$ is $(\zeta,\eta)$-weakly binomial, for $\zeta<\eta^{d}\leq h_{d,\eta}$.
Fix $\gamma=\eta^{d}$, $t=\eta n$, $j=\gamma n$.
Consider the polynomial $P_{t}^{2}(x)$.
Using Eq. 19 express $P_{t}^{2}(x)$ in the Kravchuk basis as:
$$P_{t}^{2}(k)=\sum_{i=0}^{t}\binom{2i}{i}\binom{n-2i}{t-i}P_{2i}(k),$$
(59)
and denote
$$\alpha_{2i}:=\binom{2i}{i}\binom{n-2i}{t-i}.$$
(60)
Let $\{B_{k}\}$ be the weight enumerator of $C_{1}$ defined in Definition 17 and $\{B_{k}^{\perp}\}$ be the weight enumerator of $C_{1}^{\perp}$ defined in Definition 19.
By the MacWilliams identity (Lemma 21) and Eq. 59:
$$\displaystyle|C_{1}|\sum_{i=0}^{t}\alpha_{2i}B_{2i}^{\perp}=\sum_{k=0}^{n}B_{k}P_{t}(k)^{2}$$
(61)
Since the RHS is a weighted average of $B_{k}$ with positive coefficients, we can apply the upper bound we have on $B_{k}$ assuming weak-binomiality:
$$\displaystyle|C_{1}|\sum_{i=0}^{t}\alpha_{2i}B_{2i}^{\perp}\leq\sum_{k=0}^{n}\left[\frac{2^{\zeta n}\binom{n}{k}}{|C_{1}^{\perp}|}+2^{\eta n}\right]P_{t}(k)^{2}$$
(62)
$$\displaystyle=\left[\sum_{k=0}^{n}\frac{2^{\zeta n}\binom{n}{k}|C_{1}|}{2^{n}}P_{t}(k)^{2}\right]+\left[\sum_{k=0}^{n}P_{t}(k)^{2}2^{\eta n}\right]$$
(63)
Now apply Eq. 59 to the first term:
$$\displaystyle=\left[\sum_{k=0}^{n}\frac{2^{\zeta n}\binom{n}{k}|C_{1}|}{2^{n}}\sum_{i=0}^{t}\alpha_{2i}P_{2i}(k)\right]+\left[\sum_{k=0}^{n}P_{t}(k)^{2}2^{\eta n}\right]$$
(64)
Reversing the order of summation yields:
$$\displaystyle=\sum_{i=0}^{t}\frac{2^{\zeta n}|C_{1}|\alpha_{2i}}{2^{n}}\sum_{k=0}^{n}\binom{n}{k}P_{2i}(k)+\sum_{k=0}^{n}P_{t}(k)^{2}2^{\eta n}$$
(65)
Now observe that we can interpret the inner sum as an inner product between $P_{2i}$ and $P_{0}$, i.e. the constant function. By Lemma 13 this inner sum equals $2^{n}\delta_{i,\,0}$, so it vanishes unless $i=0$.
Thus, we have:
$$\displaystyle=\frac{2^{\zeta n}|C_{1}|\alpha_{0}}{2^{n}}2^{n}+\sum_{k=0}^{n}P_{t}(k)^{2}2^{\eta n}$$
(66)
Now we can apply Lemma 16
$$\displaystyle\leq 2^{\zeta n}|C_{1}|\alpha_{0}+n\binom{n}{t}^{2}2^{\eta n}$$
(67)
and so by Equation 60
we derive the inequality:
$$\displaystyle|C_{1}|\sum_{i=0}^{t}\alpha_{2i}B_{2i}^{\perp}\leq 2^{\zeta n}|C_{1}|\binom{n}{t}+n\binom{n}{t}^{2}2^{\eta n}$$
(68)
Dividing by $|C_{1}|$,
$$\displaystyle\sum_{i=0}^{t}\alpha_{2i}B_{2i}^{\perp}\leq 2^{\zeta n}\binom{n}{t}+n\binom{n}{t}^{2}\frac{2^{\eta n}}{|C_{1}|}\leq_{p}2^{\zeta n}\binom{n}{t}+\binom{n}{t}^{2}\frac{2^{\eta n}}{|C_{1}|}$$
(69)
This implies for all $i\leq t$:
$$\displaystyle\alpha_{2i}B_{2i}^{\perp}\leq_{p}2^{\zeta n}\binom{n}{t}+\binom{n}{t}^{2}\frac{2^{\eta n}}{|C_{1}|}$$
(70)
and in particular for $j=\gamma n=\eta^{d}\,n$:
$$\displaystyle\alpha_{2j}B_{2j}^{\perp}\leq_{p}2^{\zeta n}\binom{n}{t}+\binom{n}{t}^{2}\frac{2^{\eta n}}{|C_{1}|}$$
(71)
Applying Eq. 60:
$$\displaystyle\binom{2j}{j}\binom{n-2j}{t-j}B_{2j}^{\perp}\leq_{p}2^{\zeta n}\binom{n}{t}+\binom{n}{t}^{2}\frac{2^{\eta n}}{|C_{1}|}$$
(72)
By the definition of CSS codes (Definition 8), $C_{2}\subseteq C_{1}^{\perp}$ so we can lower bound the weight enumerator of $C_{1}^{\perp}$ with the weight enumerator of $C_{2}$.
Since by hypothesis $C_{2}$ is also generated by a $d$-uniform hyper-graph of some fixed degree $K_{2}=O(1)$, we can apply Proposition 33:
$$\displaystyle\binom{2j}{j}\binom{n-2j}{t-j}\binom{\frac{n}{d^{2}}}{\frac{2j}{d}}\leq_{p}2^{\zeta n}\binom{n}{t}+\binom{n}{t}^{2}\frac{2^{\eta n}}{|C_{1}|}$$
(73)
Next, by the binomial coefficient approximations in Proposition 35:
$$2^{2j+(n-2j)H\left(\frac{t-j}{n-2j}\right)+\frac{n}{d^{2}}H\left(\frac{2jd}{n}\right)}\leq_{p}2^{\zeta n+nH\left(\frac{t}{n}\right)}+2^{n(2H(t/n)+\eta-m/n)}$$
(74)
Now rewrite Eq. 74 in terms of $\gamma$ and $\eta$:
$$2^{n\left(2\gamma+(1-2\gamma)H\left(\frac{\eta-\gamma}{1-2\gamma}\right)+\frac{1}{d^{2}}H\left(2\gamma d\right)\right)}\leq_{p}2^{n\left(\zeta+H(\eta)\right)}+2^{n\left(2H(\eta)+\eta-m/n\right)}$$
(75)
Now observe that by our choice of parameters the first summand of the RHS is negligible compared to the LHS, as follows.
By assumption:
$$\displaystyle f_{d,\eta}(\eta^{d})=h_{d,\eta}\geq\eta^{d}>\zeta.$$
(76)
or, by definition of $f_{d,\eta}$:
$$\displaystyle 2\gamma+(1-2\gamma)H\left(\frac{\eta-\gamma}{1-2\gamma}\right)+\frac{1}{d^{2}}H\left(2\gamma d\right)=h_{d,\eta}+H(\eta)>\zeta+H(\eta).$$
(77)
Therefore, together with Eq. 75 this implies the following upper-bound:
$$\displaystyle\frac{1}{d^{2}}H(2\gamma d)+2\gamma+(1-2\gamma)H\left(\frac{\eta-\gamma}{1-2\gamma}\right)\leq 2H(\eta)+\eta-m/n$$
(78)
or
$$\displaystyle f_{d,\eta}(\eta^{d})-H(\eta)\leq\eta-m/n$$
(79)
or
$$\displaystyle H(\eta)-f_{d,\eta}(\eta^{d})\geq m/n-\eta$$
(80)
Since $f_{d,\eta}(\gamma)>\zeta>0$,
$$\displaystyle H(\eta)+\eta\geq m/n$$
(81)
If we let $\rho=1-2\frac{m}{n}=k/n$ denote the rate of the quantum CSS code, then we can write:
$$\displaystyle\frac{1-\rho}{2}\leq H(\eta)+\eta$$
(82)
which implies
$$\displaystyle 1-2H(\eta)-2\eta\leq\rho$$
(83)
Now apply Proposition 22. We obtain:
$$\displaystyle 1-2H(\eta)-2\eta\leq 1-\frac{\delta_{min}}{2}\log(3)-H\left(\frac{\delta_{min}}{2}\right)$$
(84)
It is easy to see that the RHS is a convex function of $\delta_{min}$. Clearly the linear function $\delta_{min}$ is convex, and the binary entropy function is well known to be concave (hence $-H(x)$ is convex). So, we can upper bound the RHS with a straight line with endpoints at the beginning and end of our interval of interest. Define:
$$\displaystyle g(\delta_{min})=1-\frac{\delta_{min}}{2}\log(3)-H\left(\frac{\delta_{min}}{2}\right)$$
(85)
$g(0)=1$ and let $z>0$ be the first point such that $g(z)=0$. It can be verified computationally that this fixed value of $z$ exists in the interval $[0.35,0.4]$.
So we can upper bound:
$$\displaystyle\forall\delta_{min}\in[0,z]\,\,\,g(\delta_{min})\leq 1-\frac{1}{z}\delta_{min}$$
(86)
So, we can write:
$$\displaystyle 1-2H(\eta)-2\eta\leq 1-\frac{1}{z}\delta_{min}$$
(87)
Since $z<1$, this implies:
$$\displaystyle\delta_{min}\leq 2(H(\eta)+\eta).$$
(88)
By Proposition 36, for $\eta\leq 1/2$
$$\displaystyle\delta_{min}\leq 4H(\eta)$$
(89)
∎
Proposition 33.
Lower bound on the number of words in a sparsely-generated space
Let $C$ be an $m$-dimensional linear code over $\mathbb{F}_{2}$ on $n$ bits. Suppose each generator of the code has weight $d$, and that in the associated hyper-graph the degree of every vertex is $K$.
For $k$ divisible by $d$ and $k\leq n/d$ we have:
$$\displaystyle B_{k}\geq\binom{\frac{n}{d^{2}}}{\frac{k}{d}}$$
(90)
Proof.
The number of generators is $g=nK/d$.
We find a set of pairwise non-overlapping generators greedily: choose a generator, discard all generators sharing a vertex with it, and repeat.
Each generator overlaps at most $d(K-1)$ others, so this yields a non-overlapping set of size at least $g/(d(K-1)+1)\geq n/d^{2}$.
Consider any subset of $k/d$ such generators.
Any such set corresponds to a word in $C$ of weight exactly $k$.
Since $k\leq n/d$, we have $k/d\leq n/d^{2}$, so the number of such words is at least the binomial coefficient ${n/d^{2}\choose k/d}$.
∎
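The greedy selection in the proof can be sketched as follows; the example hyper-graph (a cyclic $3$-uniform, $3$-regular one) is an arbitrary choice of ours. Each chosen edge eliminates at most $d(K-1)$ of the $g=nK/d$ edges, which is how the $n/d^{2}$ bound arises.

```python
def greedy_disjoint(edges):
    """Greedily pick pairwise-disjoint hyper-edges: take an edge, discard
    every edge sharing a vertex with it, and repeat."""
    chosen, used = [], set()
    for e in edges:
        if used.isdisjoint(e):
            chosen.append(e)
            used.update(e)
    return chosen

# Example: the cyclic 3-uniform hyper-graph on n vertices (d = 3, K = 3).
n, d = 12, 3
edges = [tuple((i + s) % n for s in range(d)) for i in range(n)]
chosen = greedy_disjoint(edges)
print(len(chosen), len(chosen) >= n / d ** 2)  # 4 True
```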
5 Proof of Theorem 1
Proof.
Let $\mathcal{X}=(X_{0},X_{1},X_{2})$ be a $2$-complex
and let ${\cal C}(\mathcal{X})$ be the associated quantum $[[n,k,d_{min}]]$ code, writing ${\cal C}(\mathcal{X})=(C_{1},C_{2})$.
Suppose that $\mathcal{X}$ is $\varepsilon$-PR.
By Lemma 28 we have that $C_{1}$ is weakly-binomial with parameters $\zeta=\varepsilon^{1/2},\eta=H(\varepsilon^{1/(2d)})$.
Since by assumption $\partial_{1},\partial_{2}^{T}$ both have row weight $d$ we can invoke Lemma 32.
The lemma states that if $\eta\leq 2^{-2\log(2d)/d}$ and $\zeta<\eta^{d}$ then $\delta_{min}\leq 4H(\eta)$.
To check that this is indeed the case, observe that by assumption
$$\eta=H(\varepsilon^{1/(2d)})\leq 2^{-2\log(2d)/d}.$$
In addition, we have
$$\displaystyle\eta^{d}=\left[H(\varepsilon^{1/(2d)})\right]^{d}>\left[\varepsilon^{1/(2d)}\log\left(\frac{1}{\varepsilon^{1/(2d)}}\right)\right]^{d}=\varepsilon^{1/2}\left(\frac{1}{2d}\right)^{d}\log^{d}\left(\frac{1}{\varepsilon}\right)$$
(91)
Also by assumption $\varepsilon\leq 2^{-2d}$ so we can lower bound the above equation as:
$$\displaystyle\eta^{d}>\varepsilon^{1/2}\left(\frac{1}{2d}\right)^{d}\log^{d}\left(\frac{1}{\varepsilon}\right)\geq\varepsilon^{1/2}$$
(92)
The upper bound we obtain on the distance is:
$$\displaystyle\delta_{min}\leq 4H(H(\varepsilon^{1/(2d)}))$$
(93)
For $\eta\leq 1/2$, applying Proposition 36
$$\displaystyle\delta_{min}\leq 4H(2\varepsilon^{1/(2d)}\log(1/\varepsilon^{1/(2d)}))$$
(94)
Applying it once more,
$$\displaystyle\delta_{min}\leq 4\cdot 2\cdot 2\varepsilon^{1/(2d)}\log(1/\varepsilon^{1/(2d)})\log\left(\frac{1}{2\varepsilon^{1/(2d)}\log(1/\varepsilon^{1/(2d)})}\right)$$
(95)
$$\displaystyle=16\varepsilon^{1/(2d)}\log(1/\varepsilon^{1/(2d)})\left(1+\log(1/\varepsilon^{1/(2d)})-\log(\log(1/\varepsilon^{1/(2d)}))\right)$$
(96)
$$\displaystyle\leq 48\varepsilon^{1/(2d)}\log(1/\varepsilon^{1/(2d)})^{2}$$
(97)
$$\displaystyle=\frac{12}{d^{2}}\varepsilon^{1/(2d)}\log^{2}(1/\varepsilon)$$
(98)
∎
5.1 Note on the proof
The reader may have noticed that the ideal Markov chain $\mathcal{M}^{1}$ may not span all of $\mathbb{F}_{2}^{n}$, depending on the constant $d$.
If $d$ is odd, then $\mathcal{M}^{1}$ must span all of $\mathbb{F}_{2}^{n}$, so our proof follows exactly as written. This follows from a simple induction argument: adding two words of odd weight $d$ that overlap in $d-1$ positions produces any word of weight $2$; sums of weight-$2$ words produce every even-weight word; and adding a word of weight $d$ to an even-weight word produces every odd-weight word.
Only minor modifications are needed to Proposition 31 if $d$ is even. In this case the chain $\mathcal{M}^{1}$ only spans the even-weight words, so we can only telescope over nonzero stationary probabilities in Eq. 49. Additionally, the inequality $\pi^{2}_{n/2}\geq n^{-k}$ only holds up to a factor of $2$, not significantly affecting our bounds.
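The parity discussion above can be verified with a small rank computation over $\mathbb{F}_{2}$ (the rank helper below is ours): for $n=6$, the weight-$3$ words span all of $\mathbb{F}_{2}^{6}$ (rank $6$), while the weight-$4$ words span only the even-weight subspace (rank $5$).

```python
from itertools import combinations

def gf2_rank(vectors):
    """Rank over F_2 of a list of bitmask integers (Gaussian elimination)."""
    pivots = {}  # leading-bit position -> basis vector
    rank = 0
    for v in vectors:
        while v:
            top = v.bit_length() - 1
            if top not in pivots:
                pivots[top] = v
                rank += 1
                break
            v ^= pivots[top]
    return rank

n = 6
for d in (3, 4):
    words = [sum(1 << i for i in c) for c in combinations(range(n), d)]
    print(d, gf2_rank(words))  # d = 3: rank 6; d = 4: rank 5
```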
Appendix A Technical estimates
Fact 34.
Let
$$f_{d,\beta}(\gamma)=2\gamma+(1-2\gamma)H\left(\frac{\beta-\gamma}{1-2\gamma}\right)+\frac{1}{d^{2}}H\left(2\gamma d\right)-H(\beta)$$
(99)
Then whenever $\beta\leq 2^{-2\log(2d)/d}$ we have:
$$h_{d,\beta}\equiv f_{d,\beta}(\beta^{d})\geq\beta^{d}.$$
Proof.
Expanding $H\left(\frac{\beta-\beta^{d}}{1-2\beta^{d}}\right)$ to first order
$$H\left(\frac{\beta-\beta^{d}}{1-2\beta^{d}}\right)\geq H(\beta)+\beta^{d}\cdot\log(\beta).$$
$$\displaystyle f_{d,\beta}(\beta^{d})$$
$$\displaystyle\geq 2\beta^{d}+(1-2\beta^{d})\cdot(H(\beta)+\beta^{d}\cdot\log(\beta))+\frac{1}{d^{2}}H\left(2\beta^{d}d\right)-H(\beta)$$
(100)
$$\displaystyle=2\beta^{d}+H(\beta)-2\beta^{d}H(\beta)+\beta^{d}\log(\beta)-2\beta^{2d}\log(\beta)+\frac{1}{d^{2}}H\left(2\beta^{d}d\right)-H(\beta)$$
(101)
$$\displaystyle\geq 2\beta^{d}+\beta^{d}\log(\beta)+\frac{1}{d^{2}}H\left(2\beta^{d}d\right)$$
(102)
$$\displaystyle\geq\beta^{d}\log(\beta)+\frac{1}{d^{2}}H\left(2\beta^{d}d\right)$$
(103)
$$\displaystyle\geq\beta^{d}\log(\beta)-\frac{1}{d^{2}}(2\beta^{d}d)\log(2\beta^{d}d)$$
(104)
$$\displaystyle=\beta^{d}\log(\beta)-\frac{1}{d^{2}}(2\beta^{d}d)(\log(\beta^{d})+\log(2d))$$
(105)
$$\displaystyle=-\beta^{d}\log(\beta)-\beta^{d}(2\log(2d)/d)$$
(106)
$$\displaystyle\geq-\frac{1}{2}\beta^{d}\log(\beta)\geq\beta^{d}$$
(107)
where the inequality before last assumes $\beta\leq 2^{-2\log(2d)/d}$.
∎
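Fact 34 can also be spot-checked numerically; the parameter choices below are ours.

```python
from math import log2

def H(x):
    """Binary entropy, base 2."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def f(d, beta, gamma):
    """f_{d,beta}(gamma) from Eq. (99)."""
    return (2 * gamma
            + (1 - 2 * gamma) * H((beta - gamma) / (1 - 2 * gamma))
            + H(2 * gamma * d) / d ** 2
            - H(beta))

# check f_{d,beta}(beta^d) >= beta^d at and below the threshold on beta
for d in (3, 5, 7):
    beta_max = 2 ** (-2 * log2(2 * d) / d)
    for beta in (beta_max, beta_max / 2, beta_max / 10):
        assert f(d, beta, beta ** d) >= beta ** d
print("ok")
```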
The following fact appears in [FG06, p. 427] and [Ash90, p. 121].
Proposition 35.
Let $1\leq k\leq n/2$. If $\varepsilon=k/n$, then
$$\displaystyle\frac{1}{\sqrt{8n\varepsilon(1-\varepsilon)}}2^{H(\varepsilon)n}\leq\sum_{i=0}^{k}\binom{n}{i}\leq 2^{H(\varepsilon)n}.$$
(108)
This implies that:
$$\displaystyle\binom{n}{k}\leq 2^{H(\varepsilon)n}$$
(109)
and
$$\displaystyle\frac{1}{k\sqrt{8n\varepsilon(1-\varepsilon)}}2^{nH(\varepsilon)}\leq\binom{n}{k}$$
(110)
Or,
$$\displaystyle 2^{nH(\varepsilon)}\leq_{p}\binom{n}{\varepsilon n}\leq_{p}2^{nH(\varepsilon)}$$
(111)
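Proposition 35 is straightforward to verify numerically for moderate $n$; the sketch below uses $n=100$ (our choice) and $k\leq n/2$, where the cumulative upper bound applies.

```python
from math import comb, log2, sqrt

def H(x):
    """Binary entropy, base 2."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

n = 100
ok = True
for k in range(1, n // 2 + 1):
    eps = k / n
    s = sum(comb(n, i) for i in range(k + 1))  # cumulative binomial sum
    lower = 2 ** (H(eps) * n) / sqrt(8 * n * eps * (1 - eps))
    ok = ok and (lower <= s <= 2 ** (H(eps) * n))
print(ok)  # True
```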
Proposition 36.
The following holds for all $x\in[0,1/2]$,
$$\displaystyle x\leq H(x)\leq 2x\log(1/x)$$
(112)
Proof.
The first inequality $x\leq H(x)$ holds since
$$\displaystyle H(x)\geq x\log(1/x)\geq x\,\,\forall x\in[0,1/2]$$
(113)
To prove the second inequality, define the function:
$$\displaystyle g(x)=x\log(1/x)-(1-x)\log(1/(1-x))$$
(114)
By taking the derivative and simplifying, it is clear that the only local maximum of the function is at $x=1/2$. Also, we can see that the derivative is positive at $x=1/4$, and that $g(0)=0=g(1/2)$. The inequality follows.
∎
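A quick numeric check of Proposition 36 on a grid of $[0,1/2]$:

```python
from math import log2

def H(x):
    """Binary entropy, base 2."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# x <= H(x) <= 2 x log(1/x) on (0, 1/2], with equality of both sides at x = 1/2
ok = all(x <= H(x) <= 2 * x * log2(1 / x)
         for x in (i / 1000 for i in range(1, 501)))
print(ok)  # True
```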
Appendix B Acknowledgments
The authors wish to thank Tali Kaufman, Alex Lubotzky, and Peter Shor for useful conversations.
LE was inspired in part by ideas presented at the “Conference on High-Dimensional Expanders 2016”,
and thanks the organizers for inviting him.
KT wishes to acknowledge the summer school at Weizmann Institute, “High Dimensional Expanders, Inverse Theorems and PCPs” for providing valuable insight into expanders.
MO was funded by the Leverhulme Trust Early Career Fellowship (ECF-2015-256) and would like to thank MIT, where most of this work was done, for hospitality. KT is funded by the NSF through the STC
for Science of Information under grant number CCF0-939370.
References
[AC88]
Noga Alon and Fan RK Chung.
Explicit construction of linear sized tolerant networks.
Annals of Discrete Mathematics, 38:15–19, 1988.
doi:10.1016/S0167-5060(08)70766-0.
[AE15]
Dorit Aharonov and Lior Eldar.
The commuting local Hamiltonian problem on locally expanding graphs
is approximable in NP.
Quantum Information Processing, 14(1):83–101, 2015.
arXiv:1311.7378,
doi:10.1007/s11128-014-0877-9.
[AL99]
Alexei Ashikhmin and Simon Litsyn.
Upper bounds on the size of quantum codes.
IEEE Transactions on Information Theory, 45(4):1206–1215, May
1999.
arXiv:quant-ph/9709049, doi:10.1109/18.761270.
[Alo86]
Noga Alon.
Eigenvalues and expanders.
Combinatorica, 6(2):83–96, 1986.
doi:10.1007/BF02579166.
[Ash90]
Robert Ash.
Information theory.
Dover Publications, New York, 1990.
[BH13]
Fernando G.S.L. Brandao and Aram W. Harrow.
Product-state approximations to quantum ground states.
In Proceedings of the Forty-fifth Annual ACM Symposium on Theory
Scanning probe microscopy of thermally excited mechanical modes of an
optical microcavity
T. J. Kippenberg, H. Rokhsari, K.J. Vahala
Present address: Max Planck Institute of Quantum Optics, 85748 Garching,
Germany.
vahala@its.caltech.edu
California Institute of Technology, Department of Applied Physics, Pasadena,
CA 91125, USA.
Abstract
The resonant buildup of light within optical microcavities elevates the
radiation pressure, which mediates coupling of the optical modes to the
mechanical modes of the microcavity. Above a certain threshold pump power,
regenerative oscillation of certain mechanical eigenmodes occurs. Here, we
present a methodology to spatially image the micromechanical resonances of a
toroid microcavity using a scanning probe technique. The method relies on
recording the frequency shift induced in a mechanical eigenmode when it is in
contact with a scanning probe tip. The method is passive in nature and
achieves a sensitivity sufficient to spatially resolve the vibrational mode
pattern associated with the thermally agitated displacement at room
temperature. The recorded mechanical mode patterns are in good qualitative
agreement with the theoretical strain fields obtained from finite element
simulations.
PACS numbers: 42.65.Yj, 42.55.Sa, 42.65.Hw
The work of V. B. Braginsky (Braginsky et al., 2001) predicted that, due to
radiation pressure, the mechanical mirror eigenmodes of a Fabry-Pérot
(FP) resonator can couple to the optical modes, leading to a parametric
oscillation instability. This phenomenon is characterized by regenerative
mechanical oscillation of the mechanical cavity eigenmodes. Significant
theoretical studies have been devoted to this effect in the context of the
Laser Interferometer Gravitational-Wave Observatory (LIGO) (Braginsky et al., 2001; Kells and D'Ambrosio, 2002), as it potentially impedes gravitational wave detection. Whereas in
macroscopic resonators the influence of radiation pressure is weak and only
appreciable at high power levels (Dorsel et al., 1983), the mutual coupling of
optical and mechanical modes is significantly enhanced in optical
microcavities (such as silica microspheres (Braginsky et al., 1989), microdisks,
or toroids (Armani et al., 2003)), which simultaneously exhibit ultra-high-Q
optical modes and high-Q mechanical modes in the radio-frequency
range. The combination of high optical power and small mechanical mass and
dissipation leads to threshold levels in the micro-watt regime for
regenerative acoustic oscillations (i.e., the parametric oscillation
instability), making it the dominant microcavity nonlinear optical effect,
as reported previously in toroid microcavities (Kippenberg et al., 2005;
Rokhsari et al., 2005; Carmon et al., 2005).
In this letter, we report a novel scanning-probe technique, which allows
direct spatial imaging of the amplitude of the micro-mechanical modes
of a microcavity associated with their thermally driven displacement at room
temperature. The method is based on the induced resonance shift by a
scanning probe tip, whose influence on the mechanical oscillator’s resonance
frequency is detected optically via the light transmitted past the
microcavity. This technique is passive in nature, and reaches a sensitivity
which is sufficient to detect the minute amplitude of the thermally driven
mechanical modes. Initial demonstrations of this method show very good
agreement between the mechanical mode distribution obtained by
scanning-probe spectroscopy and finite-element modeling. Besides providing
insight into the spatial pattern of the mechanical modes of an optical
microcavity, this technique should provide a useful tool for the study of
other micromechanical or nanomechanical resonators (Zalalutdinov et al., 2000).
The experimental scenario is depicted schematically in Figure 1. It consists
of a standard geometry in which a pump wave is coupled from a waveguide (a
tapered optical fiber (Cai et al., 2000)) to an ultra-high-Q mode of
a toroid microcavity on a chip (Armani et al., 2003). In addition to its
excellent optical properties, this microcavity geometry possesses high-Q
micromechanical resonances, owing to its freely hanging silica membrane
supporting the toroidal periphery. The inset in Figure 3 shows the first two
($n=1,2$) mechanical modes of the structure, calculated using finite element
modeling. The modes are rotationally symmetric (i.e., their azimuthal mode
number is $m=0$). As evident from the finite element simulation, the
motion of the first- and second-order flexural modes is primarily in the
out-of-plane direction (of the toroid and disk).
On resonance the high buildup of light within the cavity leads to an
increase in radiation pressure, expanding the cavity (optical round trip)
and thereby coupling the optical mode to the mechanical eigenmodes of the
cavity, as described in (Kippenberg et al., 2005; Rokhsari et al., 2005; Carmon et al., 2005).
The mutual coupling of the mechanical and optical mode is described by the
following set of equations:
$$\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}x+\frac{\omega_{m}}{Q_{m}}\frac{\mathrm{d}}{\mathrm{d}t}x+\omega_{m}^{2}x=K_{om}\frac{{|a|}^{2}}{T}$$
(1)
$$\frac{\mathrm{d}}{\mathrm{d}t}a=-\frac{1}{\tau}a+\mathrm{i}\left(\Delta\omega+%
K_{mo}x\right)a+\mathrm{i}s\sqrt{\frac{1}{\tau_{ex}}}$$
(2)
The first equation describes the mechanical eigenmode, where $\omega_{m}$
is the mechanical frequency and ${|x_{m}|}^{2}$ is normalized to the mechanical
energy, i.e. ${|x_{m}|}^{2}=\sum_{i=r,z,\Theta}\int\epsilon_{i}\sigma_{i}\mathrm{d}V\equiv m_{eff}\cdot\omega_{m}^{2}\cdot r^{2}$, which decays
with lifetime $\tau_{m}$ (i.e. $Q_{m}=\omega_{m}\tau_{m}$).
Correspondingly $|a|^{2}$ is the energy in the optical whispering gallery
mode ($1/T\cdot|a|^{2}$ is the power, where $T=\frac{2\pi Rn_{e\!f\!f}}{c}$
is the cavity round-trip time), which is excited with a pump laser detuned
by the amount $\Delta\omega$ from line-center. The expressions $K_{om}$
and $K_{mo}$ describe the mutual coupling of the optical and mechanical
eigenmodes and depend on the respective modes. The coupling can be mapped
to a Fabry-Pérot cavity by considering the modes as a harmonic
oscillator with in-plane motion (radial amplitude $r$, which modulates the
cavity pathlength) and out-of-plane motion (amplitude $z$). Solving equations
(1)-(2) shows that the radiation pressure causes the mechanical oscillator
to experience: (1) a change in rigidity; (2) the addition of a
velocity-dependent term (providing either a viscous force or an accelerating
force), i.e.
$$\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}x+\left(\frac{\omega_{m}}{Q_{m}}-\Delta\beta_{L}\right)\frac{\mathrm{d}}{\mathrm{d}t}x+\left(\omega_{m}^{2}+\frac{\Delta k_{L}}{m}\right)x=0$$
(3)
The approximate solutions (for $\omega_{m}\ll\omega_{0}/Q$) are
$\Delta\beta_{L}=\tau\frac{\mathrm{d}F}{\mathrm{d}x}$ and
$\Delta k_{L}=\frac{\mathrm{d}F}{\mathrm{d}x}$. Consequently, detuning the
laser to the blue of the cavity resonance ($\Delta\beta_{L}>0$) leads to
mechanical gain. If the mechanical gain exceeds the mechanical loss, a
parametric oscillation instability can be observed in which regenerative
mechanical oscillations occur. This phenomenon was observed for the
first time recently in toroid microcavities and has been extensively
characterized (Kippenberg et al., 2005; Rokhsari et al., 2005; Carmon et al., 2005).
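The threshold condition can be illustrated numerically. Writing the effective damping rate as $\Gamma=\omega_{m}/Q_{m}-\Delta\beta_{L}$ (blue detuning reduces the damping), the following minimal sketch integrates the resulting oscillator. The parameter values are assumed and illustrative: only the 6.07 MHz mode frequency is taken from the device discussed below; the mechanical Q and integration settings are made up.

```python
import math

def envelope_after(omega_m, q_m, dbeta, n_periods=2000, steps_per_period=100):
    """Integrate x'' + (omega_m/q_m - dbeta) x' + omega_m**2 x = 0 with a
    semi-implicit Euler scheme; return the final oscillation envelope
    relative to the initial amplitude x(0) = 1, v(0) = 0."""
    dt = 2 * math.pi / omega_m / steps_per_period
    gamma = omega_m / q_m - dbeta          # effective damping rate
    x, v = 1.0, 0.0
    for _ in range(n_periods * steps_per_period):
        v += (-gamma * v - omega_m**2 * x) * dt
        x += v * dt
    return math.hypot(x, v / omega_m)      # instantaneous envelope

omega_m = 2 * math.pi * 6.07e6   # n = 1 flexural mode (rad/s)
q_m = 5000                        # assumed mechanical Q, illustrative only
loss = omega_m / q_m

# Gain below the loss rate: the mode rings down; above it: regenerative growth.
print(envelope_after(omega_m, q_m, 0.5 * loss) < 1.0)   # True
print(envelope_after(omega_m, q_m, 2.0 * loss) > 1.0)   # True
```

The crossover at $\Delta\beta_{L}=\omega_{m}/Q_{m}$ is the parametric oscillation threshold described in the text.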
Here we investigate the interaction of a local probe (whose dimensions are
small compared to the optical microcavity) with the mechanical modes, and
demonstrate a novel method which can spatially resolve the mechanical mode
pattern associated with the thermally agitated displacement of the toroid
microcavities. In order to spatially image the mechanical modes a scanning
probe is introduced as shown in Figure 1 which contacts the free hanging
disk connected to the toroidal cavity and supported by the silicon pillar.
The scanning probe setup consisted of a silica tapered fiber tip controlled
by a piezoelectric stage (with 80 nm resolution). The tip of the scanning
probe was fabricated by $\mathrm{CO}_{2}$ laser melting and stretching of a
single mode fiber, and had a tip-diameter ca. 3 $\mu m$. The probe was
lowered onto the silica support disk (i.e. the interior region of the
toroidal microcavity), while the taper was simultaneously coupled to the
toroid. Figure 1 a,c shows an optical micrograph of the taper-cavity system
in contact with the probe (top view and side view, respectively). The presence
of the probe couples the mechanical microcavity mode to the acoustic modes
of the probe. This coupling has two effects: (1) the mechanical quality
factor ($Q_{m}$) of the microcavity structure is reduced; and (2) the
mechanical eigenfrequency ($\omega_{m}$) is changed, due to a change in
rigidity (Zalalutdinov et al., 2000). We note the similarity of this method to the
"AC" mode of an atomic force microscope (Binnig et al., 1986; Albrecht et al., 1990), which
uses the change in mechanical frequency of an oscillating cantilever to
record the topology of a surface (induced by position-dependent near-field
forces). In the present case, however, it is not the resonant frequency shift
of the probe that is monitored, but rather the resonant frequency and $Q$ of
the micromechanical resonator itself. As the mechanical cavity motion
modulates the optical power coupled to the cavity (i.e., the cavity
transmission) and thereby creates sidebands at $\omega\pm\omega_{m}$ in the
transmitted optical spectrum, the mechanical oscillation frequency and
Q-factor can be readily measured via the pump power transmitted past the cavity.
If the optical pump power is low (compared to the threshold for mechanical
oscillations to occur) then the optical field acts purely as a probe of the
mechanical oscillation and does not modify the mechanical properties of the
structure (i.e. the light will not excite mechanical resonances, since $P\ll P_{thresh}$, equivalent to $\Delta\beta_{L}\ll\frac{\omega_{m}}{Q_{m}}$). The threshold for mechanical oscillation can be increased rapidly (and
the regime $P\ll P_{thresh}$ achieved) by biasing the coupling junction
into the overcoupled regime (Spillane et al., 2003), owing to the fact that the
threshold scales inversely with the cube of the optical $Q$ factor (in cases
where the mechanical frequency is smaller than the cavity bandwidth
(Kippenberg et al., 2005; Rokhsari et al., 2005)). The system can therefore be
described by a simplified set of equations by introducing $\omega_{m}^{\ast}$
and $\Delta\beta^{\ast}$ (which contain the radiation-pressure-induced shift
in resonance and change in rigidity):
$$\frac{\mathrm{d}}{\mathrm{d}t}a=-\frac{1}{\tau}a+\mathrm{i}\big{(}\Delta\omega%
+K_{mo}\,x_{T}\,\mathrm{cos}\left(\omega_{m}t\right)\big{)}a+\mathrm{i}s\sqrt{%
\frac{1}{\tau_{ex}}}$$
(4)
In this regime ($P\ll P_{thresh}$), the oscillator is only thermally driven
(i.e. its energy equals $k_{B}T$), causing modulation of the field
stored inside the cavity due to the change in cavity path length. The
solution of the above equation in this regime is given by:
$$a=\sum_{n}\frac{s\sqrt{\tau_{ex}^{-1}}\,J_{n}(M)\,\mathrm{e}^{\mathrm{i}n%
\omega_{m}t}}{-\tau^{-1}+\mathrm{i}\Delta\omega+\mathrm{i}n\omega_{m}}$$
(5)
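The $J_{n}(M)$ sideband weights in this solution follow from the Jacobi-Anger expansion $\mathrm{e}^{\mathrm{i}M\sin\theta}=\sum_{n}J_{n}(M)\,\mathrm{e}^{\mathrm{i}n\theta}$ of a phase-modulated field. A quick self-contained numerical check (the modulation depth $M$ below is an arbitrary, illustrative value):

```python
import numpy as np

M = 0.3                                   # illustrative modulation depth
N = 4096
theta = 2 * np.pi * np.arange(N) / N
field = np.exp(1j * M * np.sin(theta))    # phase-modulated carrier, one period

# FFT coefficient of e^{i n theta}: should reproduce J_n(M).
c = np.fft.fft(field) / N

def bessel_j(n, m, k=200000):
    # J_n(m) from its integral representation (midpoint rule, no SciPy needed)
    t = (np.arange(k) + 0.5) * np.pi / k
    return np.cos(n * t - m * np.sin(t)).mean()

for n in range(3):
    assert abs(c[n] - bessel_j(n, M)) < 1e-6
print("sideband weights match J_n(M) for n = 0, 1, 2")
```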
The appearance of sidebands (at $\omega_{m}$ and harmonics) is thus
observable at the mechanical eigenfrequencies, with a modulation depth $M$
governed by the specifics of the coupling mechanism and the amplitude of the
motion, i.e. $M=K_{mo}\cdot x_{T}$, where $x_{T}$ is the thermal displacement
noise, given by $x_{T}=\sqrt{\frac{k_{B}T}{m\omega_{m}^{2}}}$. The temperature $T$ in the presence of the optical probe field
is increased above the ambient temperature of 300 K by several degrees, due
to absorption of photons and subsequent heating (as evidenced by thermal
hysteresis) (Rokhsari et al.). The light transmitted past the
microcavity exhibits modulation components at $\omega_{m}$ and harmonics,
as the light transmitted past the tapered fiber interferes with the
modulated field emerging from the cavity, i.e.
$$T\cong P_{in}|T_{0}|^{2}+\sqrt{P_{in}}|T_{0}|\mathrm{cos}\left(\omega_{m}t%
\right)\frac{J_{0}(M)\cdot 2\,\omega_{m}}{{|\tau^{-1}-\mathrm{i}\Delta\omega-%
\mathrm{i}\omega_{m}|}^{2}}$$
(6)
The transmission is given by $T_{0}^{\,2}={\left|\frac{\tau_{ex}^{-1}-\tau_{0}^{-1}+\mathrm{i}\Delta\omega}{%
\tau_{ex}^{-1}+\tau_{0}^{-1}-\mathrm{i}\Delta\omega}\right|}^{2}$ and maximum modulation
depth occurs at critical coupling and for a detuning of approximately half the
cavity linewidth, where the slope of the Lorentzian transmission is maximum.
By spectral decomposition of the cavity transmission ($T$), the mechanical
resonance frequency and mechanical Q-factor can therefore be recorded. The
inset of Figure 2 shows the Lorentzian lineshape of the first flexural mode
as well as a theoretical fit (solid line). The transmitted pump light can
therefore be used to optically probe both the micromechanical resonance
frequency and the mechanical Q factor.
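The picometer-scale thermal displacement quoted below can be checked directly from $x_{T}=\sqrt{k_{B}T/(m\omega_{m}^{2})}$. The effective mass used here is an assumed, illustrative value (the text does not quote one), chosen only to show the order of magnitude:

```python
import math

k_B = 1.380649e-23               # Boltzmann constant (J/K)
T_amb = 300.0                    # ambient temperature (K)
omega_m = 2 * math.pi * 6.07e6   # n = 1 mode frequency (rad/s)
m_eff = 3e-12                    # assumed effective mass (~3 ng), illustrative

x_T = math.sqrt(k_B * T_amb / (m_eff * omega_m**2))
print(f"x_T = {x_T * 1e12:.2f} pm")   # on the order of 1 pm
```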
Having established a detection technique for the mechanical resonant
characteristics ($\omega_{m}$, $Q_{m}$), static probe contact measurements
were carried out. When the probe is brought into contact with the silica disk,
a reduction of the mechanical Q factor is observed, as the probe constitutes
a dissipative loss channel for the mechanical oscillator. The reduction of the
Q factor increases with the contact pressure of the probe. In addition, we note
that the observed decrease in $Q$ is concomitant with an increase in the
mechanical eigenfrequency. Figure 2 shows $Q_{m}$ as a function of frequency
$\omega_{m}$. As can be seen, a linear relationship between mechanical loss
($\propto Q_{m}^{-1}$) and induced frequency shift ($\Delta\omega_{m}$) is
obtained. This is non-intuitive at first sight, and not in agreement with a
damped harmonic oscillator model, which predicts that an increase in
damping (e.g. due to the tip) causes a red-shift of the resonance
frequency, i.e. $\omega^{\prime}=\omega\sqrt{1-\frac{1}{4Q^{2}}}$.
However, the presence of the tip causes not only a change in dissipation but
also a change in the rigidity of the oscillator ($\Delta k_{P}$), which
causes a blue-shift by $\Delta\omega_{\Delta k}=\omega_{m}\frac{\Delta k}{2k}$. This effect is well known for cantilevers used for atomic
force microscopy (Binnig et al., 1986; Albrecht et al., 1990; Zalalutdinov et al., 2003). This empirical
linear relationship between $Q_{m}$ and $\Delta\omega_{m}$ was recorded for
different modes ($n=1,2,3$) and found repeatedly in all measurements reported
in this letter.
Next, spatial probing was carried out, and the dependence of the mechanical
resonant characteristics ($\omega_{m}$, $Q_{m}$) on the probe position was
analyzed. Figure 3 shows the results of this measurement, in which scanning is
performed along the entire cross section of a toroidal cavity with a diameter
of 104 $\mu m$ (the resonant frequencies for the $n=1$ and $n=2$ modes were
6.07 MHz and 14.97 MHz). The cavity was fabricated with a strong undercut in
order to enhance the recorded amplitudes (the thermal displacement was
approximately $x_{T}\approx 1$ pm for the $n=1$ mode). As can be observed in
Fig. 3 (upper graph) for the $n=1$ mode (which in its unperturbed state has a
resonant frequency of 6.07 MHz), a continuous decrease, followed by a plateau,
and then an increase in the mechanical frequency was observed while scanning
from one outer edge of the toroid to the opposing edge (the frequency range
covered the unperturbed value of 6.07 MHz to 6.17 MHz, equating to a
fractional change of $\frac{\Delta\omega_{m}}{\omega_{m}}\approx 0.016$).
Similarly, the mechanical Q-factor was continually reduced, plateaued in the
interior region, and then increased again. The data in Fig. 3 represent the
recorded frequency shift normalized to unity. This is a first indication that
the induced frequency shifts and Q-factor changes depend critically on the
amplitude of the mechanical oscillations, and therefore probe local
information about the mechanical amplitude. To confirm that this effect is
not topological in nature (e.g. due to an irregular shape of the interior
silica disk surface), and in order to establish that the recorded shift in
frequency is indeed a measure of the mechanical amplitude, the dependencies
for the $n=2$ mode were measured. The result for the $n=2$ mode is shown in
Fig. 3b and is clearly distinct from that of the $n=1$ mode. These
measurements were obtained repeatedly on different samples and with different
tips. As is evident, the mechanical frequency is perturbed maximally at the
point of maximum mechanical amplitude and decreases to zero in the interior
of the toroid, as well as at the outer edge. This clearly indicates that the
observed frequency shift is not related to the surface topology, but in fact
provides direct information on the local mechanical oscillation amplitude.
In order to make a qualitative comparison of the recorded frequency shift and
the actual mechanical motion, the numerically calculated amplitude (in the
z-direction) is superimposed in Figure 3. The calculations employed finite
element modeling of the actual geometry (as inferred by SEM). The
theoretically modeled amplitudes were scaled to unity in the vertical
direction. We note that the position of maximum amplitude of the $n=1$ and
$n=2$ modes agrees very well with the finite element simulation, as does the
overall shape of the curves. While a detailed understanding of how the probe
changes the resonant characteristics of the oscillator is at present lacking,
the data along with the modeling strongly suggest that the recorded frequency
shifts relate directly to the strain fields in the z-direction. Note that
deviation of the recorded shift from the numerical modeling is expected, due
to the convolution of the finite probe size (approximately 3 $\mu m$ in the
experiments) with the mechanical motion.
In summary, a novel method is presented which allows direct probing of the
vibrational mode patterns of a micromechanical oscillator. The method relies
on spatially recording the frequency shift induced by a scanning probe tip,
and exhibits sufficient sensitivity to detect the mode pattern of thermally
excited acoustic modes. The present results should also be applicable to
other types of micromechanical oscillators, and could provide a useful tool
in the field of MEMS/NEMS (Zalalutdinov et al., 2000), as well as for tuning
of mechanical resonator modes (Zalalutdinov et al., 2000).
Acknowledgements.
This research was supported by the DARPA and the Caltech Lee Center for
Advanced Networking. TJK gratefully acknowledges receipt of a postdoctoral
fellowship from the IST-CPI.
References
Braginsky et al. (2001): V. B. Braginsky, S. E. Strigin, and S. P. Vyatchanin, Physics Letters A 287, 331 (2001).
Kells and D'Ambrosio (2002): W. Kells and E. D'Ambrosio, Physics Letters A 229, 326 (2002).
Dorsel et al. (1983): A. Dorsel et al., Physical Review Letters 51, 1550 (1983).
Braginsky et al. (1989): V. Braginsky, M. Gorodetsky, and V. Ilchenko, Physics Letters A 137, 393 (1989).
Armani et al. (2003): D. K. Armani, T. J. Kippenberg, S. M. Spillane, and K. J. Vahala, Nature 421, 925 (2003).
Kippenberg et al. (2005): T. J. Kippenberg, S. M. Spillane, and K. J. Vahala, Physical Review Letters 95, 033901 (2005).
Rokhsari et al. (2005): H. Rokhsari, T. J. Kippenberg, T. Carmon, and K. J. Vahala, Optics Express 13, 5293 (2005).
Carmon et al. (2005): T. Carmon, H. Rokhsari, L. Yang, T. J. Kippenberg, and K. J. Vahala, Physical Review Letters 94, 223902 (2005).
Zalalutdinov et al. (2000): M. Zalalutdinov, B. D. Illic, Czaplewski, A. Zehnder, H. Craighead, and J. Parpia, Applied Physics Letters 77, 3278 (2000).
Cai et al. (2000): M. Cai, O. Painter, and K. Vahala, Physical Review Letters 85, 74 (2000).
Binnig et al. (1986): G. Binnig, C. Quate, and C. Gerber, Physical Review Letters 56, 930 (1986).
Albrecht et al. (1990): T. Albrecht, P. Grütter, D. Horne, and D. Rugar, Journal of Applied Physics 69, 668 (1990).
Spillane et al. (2003): S. M. Spillane, T. J. Kippenberg, O. J. Painter, and K. J. Vahala, Physical Review Letters 91, art. no. (2003).
Rokhsari et al.: H. Rokhsari, S. Spillane, and K. Vahala, Applied Physics Letters (????).
Zalalutdinov et al. (2003): M. Zalalutdinov, K. Aubin, M. Pandey, A. Zehnder, R. Rand, H. Craighead, J. Parpia, and B. Houston, Applied Physics Letters 83, 3281 (2003).
Sharp bounds for variance of treatment effect estimators in the finite population in the presence of covariates
Ruoyu Wang
wangruoyu17@mails.ucas.edu.cn
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, 55 Zhongguancun East Road, Haidian District, Beijing 100190, China
Qihua Wang
qhwang@amss.ac.cn
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, 55 Zhongguancun East Road, Haidian District, Beijing 100190, China
Wang Miao
mwfy@pku.edu.cn
School of Mathematical Sciences, Peking University, 5 Summer Palace Road, Haidian District, Beijing 100871, China
Xiaohua Zhou
azhou@bicmr.pku.edu.cn
Beijing International Center for Mathematical Research and Department of Biostatistics, Peking University, 5 Summer Palace Road, Haidian District, Beijing 100871, China
Abstract
In the completely randomized experiment, the variances of treatment effect estimators in the finite population are usually not identifiable and hence not estimable. Although some estimable bounds of the variances have been established in the literature, few of them are derived in the presence of covariates.
In this paper, the difference-in-means estimator and the Wald estimator are considered in the completely randomized experiment with perfect compliance and noncompliance, respectively. Sharp bounds for the variances of these two estimators are established when covariates are available.
Furthermore, consistent estimators for these bounds are obtained, which can be used to shorten confidence intervals and improve the power of tests.
Simulations were conducted to evaluate the proposed methods, which are also illustrated with two real data analyses.
Keywords:
Finite population; Partial identification; Potential outcome; Randomized experiment
1 Introduction
Estimation and inference for the average treatment effect are extremely important in practice.
Most of the literature assumes that the observations are sampled from an infinite super population (Hirano et al., 2003; Imbens, 2004; Belloni et al., 2014; Chan et al., 2016). In some cases, the infinite super population seems contrived if we are interested in evaluating the treatment effect for a particular finite population (Li and Ding, 2017).
In randomized experiments,
the finite population framework has been widely used in data analysis since Neyman and Iwaszkiewicz (1935). This framework views all potential outcomes of a finite population as fixed; estimation and inference are based solely on randomization (Imbens and Rosenbaum, 2005; Nolen and Hudgens, 2011). However, in the completely randomized experiment, a fundamental problem in the finite population is that the variance of the widely-used difference-in-means estimator is not identifiable.
Thus, we cannot obtain a consistent variance estimator, and standard inference based on the normal approximation fails.
To mitigate this problem, Neyman (1990)
adopted an estimable upper bound for the variance, which leads to conservative inference.
Precision of the bound is crucial for the power of a test and the width of the resulting confidence interval.
Thus it is important to incorporate all the available information to make the bound as precise as possible.
Aronow et al. (2014) improved Neyman (1990)'s results by deriving a sharp bound that cannot be improved without information beyond the marginal distributions of the potential outcomes. In addition, the variance bound for binary outcomes has been well studied by Robins (1988), Ding and Dasgupta (2016), Ding and Miratrix (2018), among others.
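A small simulation (with entirely made-up data, not drawn from any of the cited papers) illustrates why such bounds are needed: it compares the true randomization variance of the difference-in-means estimator with the mean of Neyman's classical conservative estimate $s_{1}^{2}/n_{1}+s_{0}^{2}/n_{0}$, here for the case in which all $N$ units are enrolled.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed finite population with all potential outcomes known to the simulator.
N = 200
y0 = rng.normal(size=N)
y1 = 0.5 + 0.8 * y0 + rng.normal(scale=0.5, size=N)  # correlated potential outcomes

n1 = 100
reps = 20000
est = np.empty(reps)
neyman = np.empty(reps)
for r in range(reps):
    t = rng.permutation(N) < n1            # completely randomized assignment
    est[r] = y1[t].mean() - y0[~t].mean()
    neyman[r] = y1[t].var(ddof=1) / n1 + y0[~t].var(ddof=1) / (N - n1)

true_var = est.var()
# The bound is conservative whenever unit effects vary: E[neyman] >= true_var.
print(round(true_var, 4), round(neyman.mean(), 4))
```

The gap between the two quantities is exactly the unidentifiable term involving the variance of the unit-level effects, which is what the sharper bounds in this paper aim to shrink using covariates.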
In many randomized experiments, some covariates are observed in addition to the outcome.
However, few previous approaches consider how to improve the variance bound using covariate information, with the exception of Ding et al. (2019).
Surprisingly, we observe that the upper bound for variance of the treatment effect estimator given by Ding et al. (2019) can be larger than that given by Aronow et al. (2014) in some situations.
This is illustrated with Example 1 in Section 2.
In this paper, the first main contribution is to derive the sharp bound for the variance of the difference-in-means estimator in the finite population when covariates are available, and to obtain a consistent estimator of the bound.
It should be mentioned that the consistency proofs are nontrivial in the finite population due to the complicated associations between the terms in the estimator.
Based on the consistent estimator of the variance bound, a shorter confidence interval with a more accurate coverage rate is obtained.
The aforementioned results focus on completely randomized experiments where units comply with the assigned treatments.
However, noncompliance often occurs in randomized experiments,
in which case, the parameter of interest is the local average treatment effect (Angrist et al., 1996; Abadie, 2003),
and its inference is more complicated.
Recently, the problem of estimating the local average treatment effect has been considered in the finite population. Several estimators have been suggested, including the widely-used Wald estimator and those of Ding et al. (2019) and Hong et al. (2020). An identification problem also exists for the variances of these estimators.
However, to the best of our knowledge, no sharp bound for these unidentifiable variances has been obtained in the literature.
In this paper, another main contribution is to extend the aforementioned results for the completely randomized experiment to the case with noncompliance and to establish the sharp bound for the variance of the Wald estimator. It is worth mentioning that a bound without covariate information can be derived as a special case of our result, and such a bound is also new in the literature. A consistent estimator for the sharp bound is also constructed.
Simulations and applications to two real data sets from the randomized trials ACTG protocol 175 (Hammer et al., 1996) and JOBS II (Vinokur et al., 1995) demonstrate the advantages of our methods.
This paper is organized as follows. In Section 2, we establish the sharp variance bound in the presence of covariates for the difference-in-means estimator in the completely randomized experiment with perfect compliance, and obtain a consistent estimator of the bound. In Section 3, we consider the Wald estimator for the local average treatment effect in the completely randomized experiment in the presence of noncompliance; we establish the sharp variance bound for the Wald estimator in the presence of covariates and obtain a consistent estimator of the bound. Simulation studies are conducted to evaluate the empirical performance of the proposed bound estimators in Section 4, followed by applications to data from the randomized trials ACTG protocol 175 and JOBS II in Section 5. A discussion of some possible extensions of our results is provided in Section 6. Proofs are relegated to the Appendix.
2 Sharp variance bound for the difference-in-means estimator
2.1 Preliminaries
Suppose we are interested in the effect of a binary treatment on an outcome in a finite population consisting of $N$ units.
In a completely randomized experiment, $n$ out of $N$ units are sampled from the population,
with $n_{1}$ of them randomly assigned to the treatment group and the other $n_{0}=n-n_{1}$ to the control group.
Let $T_{i}=1$ if unit $i$ is assigned to the treatment group, $T_{i}=0$ if assigned to the control group, and $T_{i}=-1$ if not enrolled in the sample.
For each unit $i$ and $t=0,1$, let $y_{ti}$ denote the potential outcome that would be observed if unit $i$ is assigned to treatment $t$.
We let $w_{i}$ denote a vector of covariates with the constant $1$ as its first component.
Then the characteristics of the population can be viewed as a matrix $\mathbf{U}=(y_{1},y_{0},w)$ where $y_{1}=(y_{11},y_{12},\dots,y_{1N})^{\mathrm{\scriptscriptstyle T}}$, $y_{0}=(y_{01},y_{02},\dots,y_{0N})^{\mathrm{\scriptscriptstyle T}}$
and $w=(w_{1},\dots,w_{N})^{\mathrm{\scriptscriptstyle T}}$. Here $y_{ti}$ is observed if $T_{i}=t$ and $w_{i}$ is observed as long as $T_{i}\not=-1$, where $t=0,1$.
For any vector $a=(a_{1},\dots,a_{N})^{\mathrm{\scriptscriptstyle T}}$, we let
$$\mu(a)=\frac{1}{N}\sum\limits_{i=1}^{N}a_{i},\quad\phi^{2}(a)=\frac{1}{N}\sum%
\limits_{i=1}^{N}(a_{i}-\mu(a))^{2}.$$
Letting $\tau_{i}=y_{1i}-y_{0i}$ be the treatment effect for unit $i$ and $\tau=(\tau_{1},\dots,\tau_{N})^{\mathrm{\scriptscriptstyle T}}$, the parameter of interest is the average treatment effect,
$$\theta=\mu(\tau)=\frac{1}{N}\sum\limits_{i=1}^{N}y_{1i}-\frac{1}{N}\sum\limits%
_{i=1}^{N}y_{0i}.$$
Note that all parameters in this paper depend on $N$ unless otherwise specified; we suppress this dependence in the notation when there is no ambiguity.
The difference-in-means estimator of the average treatment effect
$$\hat{\theta}=\frac{1}{n_{1}}\sum_{T_{i}=1}y_{1i}-\frac{1}{n_{0}}\sum_{T_{i}=0}%
y_{0i},$$
is widely used. According to Freedman (2008), the variance of $\hat{\theta}$ is
$$\frac{1}{N-1}\left\{\frac{N}{n_{1}}\phi^{2}(y_{1})+\frac{N}{n_{0}}\phi^{2}(y_{%
0})-\phi^{2}(\tau)\right\},$$
and we denote this variance by $\sigma^{2}/(N-1)$.
Under certain standard regularity conditions in the finite population, previous authors (Freedman, 2008; Aronow et al., 2014; Li and Ding, 2017) have established that
$$\sqrt{N}\sigma^{-1}(\hat{\theta}-\theta)\overset{d}{\to}N(0,1),$$
(1)
and statistical inference may be made based on this asymptotic distribution.
However, it is difficult to obtain a consistent estimator for $\sigma^{2}$.
According to standard results in survey sampling (Cochran, 1977), $\phi^{2}(y_{t})$ can be consistently estimated by
$$\hat{\phi}_{t}^{2}=\frac{1}{n_{t}-1}\sum_{T_{i}=t}\Big{(}y_{ti}-\frac{1}{n_{t}%
}\sum_{T_{j}=t}y_{tj}\Big{)}^{2}$$
for $t=0,1$.
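As a concrete illustration, the estimator $\hat{\theta}$ and the within-arm variance estimates $\hat{\phi}^{2}_{t}$ can be computed in a few lines. The sketch below is ours (the function name is illustrative, not from any package):

```python
import numpy as np

def diff_in_means(y_obs, t):
    """Difference-in-means estimate with within-arm variance estimates.

    y_obs[i] is the observed outcome of unit i; t[i] is 1 (treatment),
    0 (control) or -1 (not sampled), as in the text."""
    y1, y0 = y_obs[t == 1], y_obs[t == 0]
    theta_hat = y1.mean() - y0.mean()
    # sample variances with the n_t - 1 divisor, matching hat-phi_t^2
    return theta_hat, y1.var(ddof=1), y0.var(ddof=1)
```

For example, with observed outcomes $(1,3)$ in the treatment arm and $(2,0)$ in the control arm, this returns $\hat{\theta}=1$ with $\hat{\phi}^{2}_{1}=\hat{\phi}^{2}_{0}=2$.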
However, $\phi^{2}(\tau)$, and hence $\sigma^{2}$, is not identifiable
because the potential outcomes $y_{1}$ and $y_{0}$ can never be observed simultaneously.
To make inference for $\theta$ based on (1), one can use an upper bound for $\sigma^{2}$ to construct a conservative confidence interval.
Alternatively, one may use an estimable lower bound for $\sigma^{2}$ to get a shorter confidence interval; however, the coverage rate of such an interval is not guaranteed.
An estimable upper (lower) bound for $\sigma^{2}$ is equivalent to an estimable lower (upper) bound for
the unidentifiable term $\phi^{2}(\tau)$.
We then derive the sharp bound for $\phi^{2}(\tau)$ and obtain its consistent estimator.
2.2 Sharp bound for $\phi^{2}(\tau)$
For two vectors $a=(a_{1},\dots,a_{N})^{\mathrm{\scriptscriptstyle T}}$, $b=(b_{1},\dots,b_{N})^{\mathrm{\scriptscriptstyle T}}$ and two sets $A$, $B$,
we let
$$\displaystyle P(a\in A)=\frac{1}{N}\sum_{i=1}^{N}1\{a_{i}\in A\},$$
$$\displaystyle P(a\in A\mid b\in B)=\frac{\sum_{i=1}^{N}1\{a_{i}\in A,b_{i}\in B%
\}}{\sum_{i=1}^{N}1\{b_{i}\in B\}},$$
where $1\{\cdot\}$ is the indicator function. For any function $H$, we define $H^{-1}(u)=\inf\{s:H(s)\geq u\}$.
We let $\{\xi_{1},\dots,\xi_{K}\}$ be the set of all distinct values of $w_{i}$; clearly $K\leq N$. Let $F_{t\mid k}(y)=P(y_{t}\leq y\mid w=\xi_{k})$ and $\pi_{k}=P(w=\xi_{k})$ for $t=0,1$ and $k=1,\dots,K$.
We aim to derive bounds for $\phi^{2}(\tau)$ by using covariate information efficiently.
Let $\mathcal{U}=\{\mathbf{U}^{*}=(y_{1}^{*},y_{0}^{*},w^{*}):P(y_{t}^{*}\leq y\mid w^{*}=\xi_{k})=F_{t\mid k}(y)\ \text{and}\ P(w^{*}=\xi_{k})=\pi_{k}\ \text{for all}\ y,\ t=0,1\ \text{and}\ k=1,\dots,K\}$ be the set of all populations that have the same covariate-potential outcome distributions as $\mathbf{U}$.
Then we establish the sharp bound in the following theorem.
Theorem
A bound for $\phi^{2}(\tau)$ is $[\phi^{2}_{\rm L},\phi^{2}_{\rm H}]$ where
$$\displaystyle\phi^{2}_{\rm L}=\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-%
1}(u)-F_{0\mid k}^{-1}(u))^{2}du-\theta^{2},$$
$$\displaystyle\phi^{2}_{\rm H}=\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-%
1}(u)-F_{0\mid k}^{-1}(1-u))^{2}du-\theta^{2};$$
and the bound is sharp in the sense that it can be attained by some population among $\mathcal{U}$.
${}_{\blacksquare}$
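For a finite population, the quantile integrals in the theorem can be evaluated exactly: within each stratum, $F_{t\mid k}^{-1}$ is a step function, so the integrals reduce to mean squared differences of sorted outcome vectors (this is the device used in the proof in Appendix B.1). A sketch, assuming both potential outcomes are available, as in a simulated population (the function name is ours):

```python
import numpy as np

def sharp_bounds(y1, y0, w):
    """Sharp population bounds [phi2_L, phi2_H] for phi^2(tau).

    Requires both potential outcome vectors, so it applies to simulated
    populations; w holds discrete covariate labels.  Within stratum k the
    quantile integrals reduce to mean squared differences of sorted values."""
    N = len(y1)
    theta = y1.mean() - y0.mean()
    lo = hi = 0.0
    for k in np.unique(w):
        a, b = np.sort(y1[w == k]), np.sort(y0[w == k])
        pi_k = a.size / N
        lo += pi_k * np.mean((a - b) ** 2)        # comonotone coupling
        hi += pi_k * np.mean((a - b[::-1]) ** 2)  # antimonotone coupling
    return lo - theta ** 2, hi - theta ** 2
```

In a simulation one can then check numerically that $\phi^{2}_{\rm L}\leq\phi^{2}(\tau)\leq\phi^{2}_{\rm H}$.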
Here we compare this bound to previous bounds obtained by Aronow et al. (2014) and Ding et al. (2019).
By utilizing the marginal distributions of potential outcomes, Aronow et al. (2014) derived the bound for $\phi^{2}(\tau)$:
$$\displaystyle\phi^{2}_{\rm AL}\colonequals$$
$$\displaystyle\int_{0}^{1}(F_{1}^{-1}(u)-F_{0}^{-1}(u))^{2}du-\theta^{2}\leq%
\phi^{2}(\tau)$$
$$\displaystyle\leq\int_{0}^{1}(F_{1}^{-1}(u)-F_{0}^{-1}(1-u))^{2}du-\theta^{2}%
\equalscolon\phi^{2}_{\rm AH}$$
where $F_{t}(y)=P(y_{t}\leq y)$ for $t=0,1$.
The bound of Aronow et al. (2014) is sharp given the marginal distributions of potential outcomes.
However, in the presence of covariates,
Ding et al. (2019) proposed the following regression based bound that may improve Aronow et al. (2014)’s bound in certain situations:
$$\displaystyle\phi^{2}_{\rm DL}\colonequals$$
$$\displaystyle\phi^{2}(\tau_{w})+\int_{0}^{1}(F_{e_{1}}^{-1}(u)-F_{e_{0}}^{-1}(%
u))^{2}du\leq\phi^{2}(\tau)$$
$$\displaystyle\leq\phi^{2}(\tau_{w})+\int_{0}^{1}(F_{e_{1}}^{-1}(u)-F_{e_{0}}^{%
-1}(1-u))^{2}du\equalscolon\phi^{2}_{\rm DH}.$$
where $\tau_{w}=(w_{1}^{\mathrm{\scriptscriptstyle T}}(\gamma_{1}-\gamma_{0}),\dots,w%
_{N}^{\mathrm{\scriptscriptstyle T}}(\gamma_{1}-\gamma_{0}))^{\mathrm{%
\scriptscriptstyle T}}$, $F_{e_{t}}(s)=P(e_{t}\leq s)$, $e_{t}=(y_{t1}-w_{1}^{\mathrm{\scriptscriptstyle T}}\gamma_{t},\dots,y_{tN}-w_{%
N}^{\mathrm{\scriptscriptstyle T}}\gamma_{t})^{\mathrm{\scriptscriptstyle T}}$, and $\gamma_{t}$ is the least square regression coefficient of $y_{ti}$ on $w_{i}$.
The lower bound of Ding et al. (2019) is not sharp: we observe that it can even be smaller than that of Aronow et al. (2014), and thus may lead to more conservative confidence intervals despite the available covariate information.
Nonetheless, the bound we proposed in Theorem 2.2 is sharp in the presence of covariates in the sense that it can be attained by some population among $\mathcal{U}$.
In the Appendix, we further show that
$$\phi^{2}_{\rm L}\geq\max\{\phi^{2}_{\rm AL},\phi^{2}_{\rm DL}\},\ \phi^{2}_{%
\rm H}\leq\min\{\phi^{2}_{\rm AH},\phi^{2}_{\rm DH}\}.$$
(2)
When there is no covariate, our bound reduces to $[\phi^{2}_{\rm AL},\phi^{2}_{\rm AH}]$ by letting $K=1$, $\xi_{1}=1$ and $w_{i}=1$ for $i=1,\dots,N$.
The following example illustrates the improvement of our bound as the association between covariates and potential outcomes varies.
Example
Consider a population with $N=600$ units. Suppose the potential outcomes and the covariate are binary,
with $P(w=1)=1/3$, $P(y_{1}=1)=2/3$, $P(y_{0}=1)=1/3$, $P(y_{0}=1\mid w=1)=3/4$, and $P(y_{0}=1\mid w=0)=1/8$.
Let $p=P(y_{1}=1\mid w=1)$ with $p\in\{1/200,2/200,\dots,1\}$; then $P(y_{1}=1\mid w=0)=1-p/2$.
Figure 1 presents the three bounds under different values of $p$.
${}_{\blacksquare}$
Insert Fig 1 about here
Figure 1 shows that $\phi^{2}_{\rm L}\geq\max\{\phi^{2}_{\rm AL},\phi^{2}_{\rm DL}\}$ and $\phi^{2}_{\rm H}\leq\min\{\phi^{2}_{\rm AH},\phi^{2}_{\rm DH}\}$ under all settings of $p$,
and in many situations the inequalities are strict.
For $p\leq 143/200$, Ding et al. (2019)’s bound is tighter than Aronow et al. (2014)’s bound.
However, for $p>143/200$, $\phi^{2}_{\rm DL}<\phi^{2}_{\rm AL}$, although covariate information is also used in the approach of Ding et al. (2019).
2.3 Estimation of the sharp bound
To estimate $\phi_{\rm L}^{2}$ and $\phi_{\rm H}^{2}$ and study asymptotic properties of the
proposed estimators for them, we adopt the following standard framework (Li and Ding, 2017) for theoretical development. Suppose that there is a sequence of finite populations
$\mathbf{U}_{\nu}$ of size $N_{\nu}$. A random assignment that assigns $n_{1\nu}$ units to the treatment group and $n_{0\nu}$ units to the control group is associated with each $\mathbf{U}_{\nu}$. The population size $N_{\nu}\to\infty$ and $n_{1\nu}/N_{\nu}\to\rho_{1}$, $n_{0\nu}/N_{\nu}\to\rho_{0}$ as $\nu\to\infty$, where $\rho_{1},\rho_{0}\in(0,1)$ and $\rho_{1}+\rho_{0}\leq 1$. For notational simplicity, the index $\nu$ is suppressed and the limiting process is represented by $N\to\infty$. We assume that the number of covariate levels $K$ is known and is allowed to grow with the population size $N$.
To accommodate continuous covariates, we can stratify them and increase the number of strata with the sample size.
We estimate $F_{t\mid k}(y)$ and $\pi_{k}$ by the empirical probabilities $\hat{F}_{t\mid k}(y)=\sum_{T_{i}=t}1\{y_{ti}\leq y,w_{i}=\xi_{k}\}/\sum_{T_{i}=t}1\{w_{i}=\xi_{k}\}$ and $\hat{\pi}_{k}=n^{-1}\sum_{T_{i}\not=-1}1\{w_{i}=\xi_{k}\}$, respectively.
By plugging in these estimators, we obtain the following estimators for $\phi_{\rm L}^{2}$ and $\phi_{\rm H}^{2}$:
$$\hat{\phi}^{2}_{\rm L}=\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid k%
}^{-1}(u)-\hat{F}_{0\mid k}^{-1}(u))^{2}du-\hat{\theta}^{2},$$
$$\hat{\phi}^{2}_{\rm H}=\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid k%
}^{-1}(1-u)-\hat{F}_{0\mid k}^{-1}(u))^{2}du-\hat{\theta}^{2}.$$
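These plug-in estimators can be approximated by evaluating the empirical quantile functions $\hat{F}_{t\mid k}^{-1}(u)=\inf\{s:\hat{F}_{t\mid k}(s)\geq u\}$ on a fine grid of $u$ values. A sketch (the grid size and the function name are our choices):

```python
import numpy as np

def phi2_bounds_hat(y_obs, t, w, grid=2000):
    """Plug-in estimates of (phi2_L-hat, phi2_H-hat) from one assignment.

    Empirical quantile functions are evaluated at grid midpoints,
    approximating the integrals over u in (0, 1)."""
    u = (np.arange(grid) + 0.5) / grid
    obs = t != -1
    theta_hat = y_obs[t == 1].mean() - y_obs[t == 0].mean()
    lo = hi = 0.0
    for k in np.unique(w[obs]):
        y1k = np.sort(y_obs[(t == 1) & (w == k)])
        y0k = np.sort(y_obs[(t == 0) & (w == k)])
        q1 = y1k[np.ceil(u * y1k.size).astype(int) - 1]   # F1|k^{-1}(u)
        q0 = y0k[np.ceil(u * y0k.size).astype(int) - 1]   # F0|k^{-1}(u)
        pi_k = (w[obs] == k).mean()
        lo += pi_k * np.mean((q1 - q0) ** 2)
        hi += pi_k * np.mean((q1 - q0[::-1]) ** 2)        # F0|k^{-1}(1-u)
    return lo - theta_hat ** 2, hi - theta_hat ** 2
```

Since the grid midpoints are symmetric about $1/2$, reversing `q0` evaluates $\hat{F}_{0\mid k}^{-1}(1-u)$ on the same grid.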
These estimators involve the quantile functions $\hat{F}_{t\mid k}^{-1}(u)$, which are intricately correlated with each other in the finite population. Many theoretical results for quantile functions in the super-population framework cannot be applied to the scenario we consider, so it is not trivial to analyze the asymptotic properties of these estimators. However, we observe that the first term of each estimator is actually a weighted sum of Wasserstein distances between certain distributions. By invoking a representation theorem for Wasserstein distances, we are able to prove consistency of the estimators through a careful analysis of the error terms. See Appendix B.3 for more details.
The following theorem establishes consistency of the bound estimators.
Theorem
Under the Conditions A, A and A in Appendix A,
$(\hat{\phi}_{\rm L}^{2},\hat{\phi}_{\rm H}^{2})$ is consistent for $(\phi_{\rm L}^{2},\phi_{\rm H}^{2})$.
${}_{\blacksquare}$
The lower bound $\phi^{2}_{\rm L}$ for $\phi^{2}(\tau)$ implies an upper bound for $\sigma^{2}$.
The consistency of $\hat{\phi}^{2}_{\rm L}$ is sufficient for constructing a conservative confidence interval.
A conservative $1-\alpha$ confidence interval for $\theta$ is given by
$$\left[\hat{\theta}-q_{\frac{\alpha}{2}}\hat{\sigma}N^{-\frac{1}{2}},\hat{%
\theta}+q_{\frac{\alpha}{2}}\hat{\sigma}N^{-\frac{1}{2}}\right],$$
where
$$\hat{\sigma}^{2}=\frac{N}{n_{1}}\hat{\phi}^{2}_{1}+\frac{N}{n_{0}}\hat{\phi}^{2}_{0}-\hat{\phi}^{2}_{\rm L}$$
and $q_{\alpha/2}$ is the upper $\alpha/2$ quantile of a standard normal distribution.
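The interval above can be assembled as follows (a sketch; `NormalDist` from Python's standard library supplies the normal quantile $q_{\alpha/2}$, and the function name is ours):

```python
from statistics import NormalDist

def conservative_ci(theta_hat, phi2_1, phi2_0, phi2_L, n1, n0, N, alpha=0.05):
    """Conservative (1 - alpha) confidence interval for theta.

    Replacing the unidentifiable phi^2(tau) by its lower bound phi2_L
    can only enlarge the variance estimate, hence the conservativeness."""
    sigma2_hat = (N / n1) * phi2_1 + (N / n0) * phi2_0 - phi2_L
    half = NormalDist().inv_cdf(1 - alpha / 2) * (sigma2_hat / N) ** 0.5
    return theta_hat - half, theta_hat + half
```

A sharper lower bound $\hat{\phi}^{2}_{\rm L}$ shrinks `sigma2_hat` and hence the interval, which is the practical payoff of the sharp bound.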
3 Sharp variance bound for the Wald estimator
3.1 Sharp bound for the unidentifiable term in the variance
In the previous section we discuss the variance bound in completely randomized experiments with perfect compliance. However, noncompliance often arises in randomized experiments. In the presence of noncompliance, the treatment received may differ from the treatment assigned. For each unit $i$ and $t=0,1$, we let $d_{ti}\in\{0,1\}$ denote the treatment that unit $i$ actually receives if assigned to treatment $t$ and $d_{ti}$ is observed if $T_{i}=t$. In this case, the units can be classified into four groups according to the value of $(d_{1i},d_{0i})$ (Angrist et al., 1996; Frangakis and Rubin, 2002),
$$g_{i}=\left\{\begin{array}[]{lcl}\text{Always Taker}\ (\rm a)&&\text{if}\ d_{1i%
}=1\ \text{and}\ d_{0i}=1,\\
\text{Complier}\ (\rm c)&&\text{if}\ d_{1i}=1\ \text{and}\ d_{0i}=0,\\
\text{Never Taker}\ (\rm n)&&\text{if}\ d_{1i}=0\ \text{and}\ d_{0i}=0,\\
\text{Defier}\ (\rm d)&&\text{if}\ d_{1i}=0\ \text{and}\ d_{0i}=1.\\
\end{array}\right.$$
Let $g=(g_{1},\dots,g_{N})$, then the characteristics of the population can be viewed as a matrix $\mathbf{U}_{\rm c}=(y_{1},y_{0},w,g)$ where $y_{1}$, $y_{0}$ and $w$ are defined in Section 2.
For $t=0,1$, $k=1,\dots,K$ and $h=\rm a,c,n$, let $F_{t\mid(k,h)}(y)=P(y_{t}\leq y\mid w=\xi_{k},g=h)$, $\pi_{k\mid h}=P(w=\xi_{k}\mid g=h)$ and $\pi_{h}=P(g=h)$.
In this section, we maintain the following standard assumptions for analyzing the randomized experiment with noncompliance.
Assumption
(i) Monotonicity: $d_{1i}\geq d_{0i}$; (ii) exclusion restriction: $y_{1i}=y_{0i}$ if $d_{1i}=d_{0i}$; (iii) strong instrument: $\pi_{\rm c}\geq C_{0}$ where $C_{0}$ is a positive constant.
${}_{\blacksquare}$
Assumption 3.1 (i) rules out the existence of defiers and is usually easy to assess in many situations; for example, it holds automatically if units in the control group do not have access to the treatment.
Assumption 3.1 (ii) means that the treatment assignment affects the potential outcome only through the treatment a unit actually receives. Assumption 3.1 (iii) ensures the existence of compliers. See Angrist et al. (1996) and Abadie (2003) for more detailed discussions of Assumption 3.1.
In the randomized experiment with noncompliance, the parameter of interest is the local average treatment effect (Angrist et al., 1996; Abadie, 2003),
$$\theta_{\rm c}=\frac{\sum_{i=1}^{N}\tau_{i}1\{g_{i}={\rm c}\}}{\sum_{i=1}^{N}1%
\{g_{i}={\rm c}\}},$$
which is the average effect of treatment in the compliers.
Under the monotonicity and exclusion restriction, we have $1\{g_{i}={\rm c}\}=d_{1i}-d_{0i}$ and $(d_{1i}-d_{0i})\tau_{i}=\tau_{i}=y_{1i}-y_{0i}$. Thus
$$\pi_{\rm c}=\frac{1}{N}\sum_{i=1}^{N}1\{g_{i}={\rm c}\}=\frac{1}{N}\sum_{i=1}^{N}(d_{1i}-d_{0i})=\mu(d_{1})-\mu(d_{0}),$$
$$\frac{1}{N}\sum_{i=1}^{N}(d_{1i}-d_{0i})\tau_{i}=\frac{1}{N}\sum_{i=1}^{N}\tau%
_{i}=\mu(\tau),$$
and $\theta_{\rm c}=\pi_{\rm c}^{-1}\theta$.
Hence $\theta_{\rm c}$ can be estimated by the “Wald estimator”,
$$\hat{\theta}_{\rm c}=\hat{\pi}_{\rm c}^{-1}\hat{\theta}$$
where $\hat{\theta}=\sum_{T_{i}=1}y_{1i}/n_{1}-\sum_{T_{i}=0}y_{0i}/n_{0}$ and
$\hat{\pi}_{\rm c}=\sum_{T_{i}=1}d_{1i}/n_{1}-\sum_{T_{i}=0}d_{0i}/n_{0}$.
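A sketch of the Wald estimator as the ratio of the two intention-to-treat contrasts (the function name is ours):

```python
import numpy as np

def wald(y_obs, d_obs, t):
    """Wald estimate of the local average treatment effect.

    y_obs: observed outcomes; d_obs: treatment actually received;
    t in {0, 1}: assigned arm."""
    theta_hat = y_obs[t == 1].mean() - y_obs[t == 0].mean()  # ITT on y
    pi_c_hat = d_obs[t == 1].mean() - d_obs[t == 0].mean()   # ITT on d
    return theta_hat / pi_c_hat
```

Assumption 3.1 (iii) keeps the denominator `pi_c_hat` bounded away from zero in large samples, so the ratio is well behaved.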
We show that $\hat{\theta}_{\rm c}$ has the following asymptotic normality property.
Theorem
Under Assumption 3.1 and Conditions A and A in Appendix A,
$$\sqrt{N}\sigma_{\rm c}^{-1}(\hat{\theta}_{\rm c}-\theta_{\rm c})\overset{d}{%
\to}N(0,1),$$
where
$$\sigma^{2}_{\rm c}=\frac{1}{\pi_{\rm c}^{2}}\left(\frac{N}{n_{1}}\phi^{2}(%
\tilde{y}_{1})+\frac{N}{n_{0}}\phi^{2}(\tilde{y}_{0})-\phi^{2}(\tilde{\tau})%
\right),$$
$\tilde{y}_{t}=(y_{t1}-\theta_{\rm c}d_{t1},\dots,y_{tN}-\theta_{\rm c}d_{tN})^%
{\mathrm{\scriptscriptstyle T}}$ for $t=0,1$ and $\tilde{\tau}=\tilde{y}_{1}-\tilde{y}_{0}$.
${}_{\blacksquare}$
Let $\hat{y}_{ti}=y_{ti}-\hat{\theta}_{\rm c}d_{ti}$, then under the conditions of Theorem 3.1, $\phi^{2}(\tilde{y}_{t})$ can be consistently estimated by
$$\check{\phi}^{2}_{t}=\frac{1}{n_{t}-1}\sum_{T_{i}=t}\Big{(}\hat{y}_{ti}-\frac{%
1}{n_{t}}\sum_{T_{j}=t}\hat{y}_{tj}\Big{)}^{2},$$
and $\pi_{\rm c}$ can be consistently estimated by $\hat{\pi}_{\rm c}$. However, analogous to $\phi^{2}(\tau)$, $\phi^{2}(\tilde{\tau})$ is unidentifiable. Here we construct a sharp bound for $\phi^{2}(\tilde{\tau})$ using covariate information. It should be pointed out that the sharp
bound has not been obtained even in the absence of covariate information. Let $\tilde{F}_{t\mid k}(y)=P(\tilde{y}_{t}\leq y\mid w=\xi_{k},g={\rm c})$ and let $\mathcal{U}_{\rm c}=\{\mathbf{U}^{*}=(y_{1}^{*},y_{0}^{*},w^{*},g^{*}):\mathbf{U}^{*}\text{ satisfies Assumption 3.1};P(y_{t}^{*}\leq y\mid w^{*}=\xi_{k},g^{*}=h)=F_{t\mid(k,h)}(y),\ P(w^{*}=\xi_{k}\mid g^{*}=h)=\pi_{k\mid h}\ \text{and}\ P(g^{*}=h)=\pi_{h}\ \text{for all}\ y,\ t=0,1,\ k=1,\dots,K\ \text{and}\ h={\rm a,c,n}\}$ be the set of all populations that satisfy Assumption 3.1 and have the same compliance type-covariate-potential outcome distributions as $\mathbf{U}_{\rm c}$. Then we can establish the following sharp bound for $\phi^{2}(\tilde{\tau})$.
Theorem
A bound for $\phi^{2}(\tilde{\tau})$ is $[\tilde{\phi}^{2}_{\rm L},\tilde{\phi}^{2}_{\rm H}]$ where
$$\displaystyle\tilde{\phi}^{2}_{\rm L}=\sum_{k=1}^{K}\pi_{\rm c}\pi_{k\mid\rm c%
}\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du,$$
$$\displaystyle\tilde{\phi}^{2}_{\rm H}=\sum_{k=1}^{K}\pi_{\rm c}\pi_{k\mid\rm c%
}\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(1-u))^{2}du;$$
and the bound is sharp in the sense that it can be attained by some population among $\mathcal{U}_{\rm c}$.
${}_{\blacksquare}$
If there is no covariate information, we let $K=1$, $\xi_{1}=1$ and define $w_{i}=1$ for $i=1,\dots,N$; this yields a bound that does not use covariate information, which has also not been considered in the literature.
3.2 Estimation of the sharp bound
To estimate the bounds, we need to estimate $\pi_{\rm c}$, $\pi_{k\mid\rm c}$, and
$\tilde{F}_{t\mid k}(y)$. Here $\pi_{\rm c}$ can be estimated by $\hat{\pi}_{\rm c}$.
We let
$$\displaystyle\hat{\lambda}_{1\mid k}=\frac{1}{n_{1}}\sum_{T_{i}=1}d_{1i}1\{w_{i}=\xi_{k}\}-\frac{1}{n_{0}}\sum_{T_{i}=0}d_{0i}1\{w_{i}=\xi_{k}\},$$
$$\displaystyle\hat{\lambda}_{0\mid k}=\frac{1}{n_{0}}\sum_{T_{i}=0}(1-d_{0i})1\{w_{i}=\xi_{k}\}-\frac{1}{n_{1}}\sum_{T_{i}=1}(1-d_{1i})1\{w_{i}=\xi_{k}\}.$$
Under Assumption 3.1, we estimate $\pi_{k\mid\rm c}$ and $\tilde{F}_{t\mid k}(y)$ with
$$\hat{\pi}_{k\mid{\rm c}}=\hat{\pi}_{\rm c}^{-1}\hat{\lambda}_{1\mid k},$$
$$\check{F}_{1\mid k}(y)=\hat{\lambda}_{1\mid k}^{-1}\left(\frac{1}{n_{1}}\sum_{%
T_{i}=1}d_{1i}1\{\hat{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\}-\frac{1}{n_{0}}\sum_{T%
_{i}=0}d_{0i}1\{\hat{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}\right)$$
and
$$\check{F}_{0\mid k}(y)=\hat{\lambda}_{0\mid k}^{-1}\left(\frac{1}{n_{0}}\sum_{%
T_{i}=0}(1-d_{0i})1\{\hat{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}-\frac{1}{n_{1}}%
\sum_{T_{i}=1}(1-d_{1i})1\{\hat{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\}\right)$$
where $\hat{y}_{ti}=y_{ti}-\hat{\theta}_{\rm c}d_{ti}$ for $t=0,1$, and $i=1,\dots,N$.
We then obtain estimators for $\tilde{\phi}_{\rm L}^{2}$ and $\tilde{\phi}_{\rm H}^{2}$ by plugging these estimators into the expressions of Theorem 3.1:
$$\check{\phi}^{2}_{\rm L}=\sum_{k=1}^{K}\hat{\pi}_{\rm c}\hat{\pi}_{k\mid\rm c}%
\int_{0}^{1}(\check{F}_{1\mid k}^{-1}(u)-\check{F}_{0\mid k}^{-1}(u))^{2}du,$$
$$\check{\phi}^{2}_{\rm H}=\sum_{k=1}^{K}\hat{\pi}_{\rm c}\hat{\pi}_{k\mid\rm c}%
\int_{0}^{1}(\check{F}_{1\mid k}^{-1}(u)-\check{F}_{0\mid k}^{-1}(1-u))^{2}du.$$
The estimator $\check{F}_{t\mid k}(y)$ may not be monotonic in $y$, which causes some difficulties for the theoretical analysis. However, $\check{F}_{t\mid k}^{-1}(u)$ is still well defined, and in the next theorem we show that the non-monotonicity of $\check{F}_{t\mid k}(y)$ does not affect the consistency of $\check{\phi}_{\rm L}^{2}$ and $\check{\phi}_{\rm H}^{2}$.
Theorem
Under Assumption 3.1 and Conditions A, A and A in Appendix A,
$(\check{\phi}_{\rm L}^{2},\check{\phi}_{\rm H}^{2})$ is consistent for $(\tilde{\phi}_{\rm L}^{2},\tilde{\phi}_{\rm H}^{2})$.
${}_{\blacksquare}$
The lower bound $\tilde{\phi}^{2}_{\rm L}$ for $\phi^{2}(\tilde{\tau})$ implies an upper bound for $\sigma^{2}_{\rm c}$.
By Theorems 3.1 and 3.2 we can construct a conservative $1-\alpha$ confidence interval for $\theta_{\rm c}$,
$$\left[\hat{\theta}_{\rm c}-q_{\frac{\alpha}{2}}\hat{\sigma}_{\rm c}N^{-\frac{1%
}{2}},\hat{\theta}_{\rm c}+q_{\frac{\alpha}{2}}\hat{\sigma}_{\rm c}N^{-\frac{1%
}{2}}\right],$$
where
$$\hat{\sigma}_{\rm c}^{2}=\frac{1}{\hat{\pi}_{\rm c}^{2}}\left(\frac{N}{n_{1}}\check{\phi}^{2}_{1}+\frac{N}{n_{0}}\check{\phi}^{2}_{0}-\check{\phi}^{2}_{\rm L}\right)$$
and $q_{\alpha/2}$ is the upper $\alpha/2$ quantile of a standard normal distribution.
4 Simulations
4.1 Completely randomized experiments with perfect compliance
In this subsection we evaluate the performance of the bound estimators for the variance of the difference-in-means estimator proposed in Section 2 via simulations. We first
generate a finite population of size 400 by drawing an i.i.d. sample from the following data generating process:
(i)
$W$ takes value in $\{1,2,3,4\}$ with equal probability;
(ii)
for $w=1,2,3,4$, $Y_{1}\mid W=w\sim N(\mu_{w},\phi^{2}_{w})$, $V\mid W=w\sim N(0,6-0.5\phi^{2}_{w})$, $Y_{1}\Perp V\mid W$ and $Y_{0}=0.3Y_{1}+V$ where $(\mu_{1},\mu_{2},\mu_{3},\mu_{4})=(3,0,-2,4)$ and $(\phi^{2}_{1},\phi^{2}_{2},\phi^{2}_{3},\phi^{2}_{4})=(2,1.5,5,4)$.
Then 170 units are randomly assigned to the treatment group and 130 units to the control group. The random assignment is repeated 1000 times. For each of these random assignments, we estimate the bounds of Aronow et al. (2014), of Ding et al. (2019) and of our method proposed in Section 2.
Mean values of the bound estimates are summarized in Table 1, with standard deviations in parentheses. In this simulation, $\phi^{2}(\tau)=31.56$.
It can be seen that the estimator for the lower bound proposed in Section 2 is much larger than estimators for the other two lower bounds, and the estimator for the upper bound proposed in Section 2 is much smaller than estimators for the other two upper bounds.
The average width and coverage rate of $95\%$ confidence intervals based on the naive bound zero (Neyman, 1990), $\phi^{2}_{\rm AL}$ (Aronow et al., 2014), $\phi^{2}_{\rm DL}$ (Ding et al., 2019) and $\phi^{2}_{\rm L}$ (our method in Section 2) as the lower bound for $\phi^{2}(\tau)$ are listed as follows:
The results show that the confidence intervals based on the bound we proposed in Section 2 are the shortest, and the corresponding coverage rate is closest to $95\%$ among the four methods.
4.2 Completely randomized experiments in the presence of noncompliance
Here we illustrate the performance of our bound proposed in Section 3 via simulations. First, we
generate a finite population of size 400 by drawing an i.i.d. sample from the following data generating process:
(i)
generate the compliance type $G\in\{{\rm a,c,n}\}$ with the probability that $G={\rm a}$, $\rm c$ and $\rm n$ being $0.2$, $0.7$ and $0.1$, respectively;
(ii)
for $h=\rm a,c$ and $\rm n$, the conditional distribution $W\mid G=h$ has probability mass $p_{1h}$, $p_{2h}$, $p_{3h}$ and $p_{4h}$ at $1,2,3,4$, respectively, where $(p_{1\rm a},p_{2\rm a},p_{3\rm a},p_{4\rm a})=(0.15,0.2,0.3,0.35)$, $(p_{1\rm c},p_{2\rm c},p_{3\rm c},p_{4\rm c})=(0.25,0.25,0.25,0.25)$ and $(p_{1\rm n},p_{2\rm n},p_{3\rm n},p_{4\rm n})=(0.35,0.3,0.2,0.15)$;
(iii)
for $w=1,2,3,4$, $Y_{1}\mid W=w\sim N(\mu_{w},\phi^{2}_{w})$ where $(\mu_{1},\mu_{2},\mu_{3},\mu_{4})=(3,0,-2,4)$ and $(\phi^{2}_{1},\phi^{2}_{2},\phi^{2}_{3},\phi^{2}_{4})=(2,1.5,5,4)$;
(iv)
if $G=\rm a$ or $\rm n$, $Y_{0}=Y_{1}$, and $Y_{0}\mid G={\rm c},W=w\sim N(-0.2w,6-0.5\phi^{2}_{w})$.
Then 170 units are randomly assigned to the treatment group and 130 units to the control group. The random assignment is repeated 1000 times. For each of these random assignments, we estimate the bounds proposed in Theorem 3.1. As noted after Theorem 3.1, a bound that does not use the covariate can also be obtained from this theorem. To illustrate the usefulness of the covariate, we therefore estimate the bounds constructed both with and without the covariate information.
Mean values of the bound estimates are summarized in Table 3, with standard deviations in parentheses. In this simulation, $\phi^{2}(\tilde{\tau})=35.70$.
It can be seen that the estimator for the lower bound using the covariate information is much larger than that without it, and the estimator for the upper bound using the covariate information is much smaller than that without it.
The following table summarizes the average width and coverage rate of $95\%$ confidence intervals based on the naive lower bound zero for $\phi^{2}(\tilde{\tau})$ and on the lower bounds proposed in Theorem 3.1 with and without the covariate information.
The results show that the confidence intervals based on the bound proposed in Theorem 3.1 using the covariate information are the shortest, and the corresponding coverage rate is closest to $95\%$ among the three methods. This illustrates the usefulness of covariates in constructing confidence intervals.
5 Real data applications
5.1 Application to ACTG protocol 175
In this section, we apply our approach proposed in Section 2 to a dataset from the randomized trial ACTG protocol 175 (Hammer et al., 1996) for illustration.
The ACTG 175 study evaluated four therapies in human immunodeficiency virus infected subjects whose CD4 cell counts (a measure of immunologic status) were between 200 and 500 $\rm mm^{-3}$. Here we regard the 2139 enrolled subjects as a finite population and consider two treatment arms: the standard zidovudine monotherapy (denoted by “arm 0”) and the combination therapy with zidovudine and didanosine (denoted by “arm 1”). The parameter of interest is the average treatment effect of the combination therapy, relative to the monotherapy, on the CD8 cell count measured at $20\pm 5$ weeks post baseline. In the randomized trial, 532 subjects were randomly assigned to arm 0 and 522 subjects to arm 1. We divide subjects into six groups according to age: the first group younger than 20 years, the second between 21 and 30, the third between 31 and 40, the fourth between 41 and 50, the fifth between 51 and 60, and the last older than 60. We then use the age group, gender and antiretroviral history as covariates and apply our method proposed in Section 2. The following table shows the variance bounds $\phi^{2}_{\rm AL}$, $\phi^{2}_{\rm AH}$, $\phi^{2}_{\rm DL}$, $\phi^{2}_{\rm DH}$, $\phi^{2}_{\rm L}$ and $\phi^{2}_{\rm H}$ calculated on this real data set.
The table shows that our lower bound is the largest among the three lower bounds and our upper bound is the smallest among the three upper bounds. The $95\%$ confidence interval constructed using zero as the lower bound for $\phi^{2}(\tau)$ is $[-13.27,92.79]$. Aronow et al. (2014)’s lower bound leads to the confidence interval $[-13.25,92.77]$, and Ding et al. (2019)’s lower bound leads to the confidence interval $[-13.23,92.75]$. The confidence interval $[-12.46,91.98]$ given by our lower bound proposed in Section 2 is a subset of all of these confidence intervals. Clearly,
using the three lower bounds instead of the naive lower bound zero makes the resulting confidence intervals shorter. The following table shows the improvement of the three methods in terms of the widths of the resulting confidence intervals.
It can be seen that our method yields a much greater improvement in the width of the $95\%$ confidence interval than the existing bounds.
5.2 Application to JOBS II
In this section, we apply our approach proposed in Section 3 to a dataset from the randomized trial JOBS II (Vinokur et al., 1995) for illustration. The JOBS II intervention trial studied the efficacy of a job training intervention in preventing depression caused by job loss and in promoting high-quality reemployment. The treatment consisted of five half-day training seminars that enhanced the participants’ job search strategies. The control group received a booklet with some brief tips. After some screening procedures, 1801 respondents were enrolled in this study, with 552 and 1249 respondents in the control and treatment groups, respectively. Of the respondents assigned to the treatment group, only $54\%$ participated in the treatment, so there is a large proportion of noncompliance in this study. The parameter of interest is the local average treatment effect of the treatment on the depression score (a larger score indicating more severe depression). We use gender, initial risk status and economic hardship as the covariates and apply our method proposed in Section 3. The estimates of $\tilde{\phi}^{2}_{\rm L}$ and $\tilde{\phi}^{2}_{\rm H}$ are $0.23$ and $0.81$, respectively. The $95\%$ confidence intervals constructed using the naive bound zero and $\tilde{\phi}^{2}_{\rm L}$ are $[-0.24,0.027]$ and $[-0.24,0.020]$, respectively. Our method shortens the confidence interval by $0.014$. When testing the null hypothesis that $\theta_{\rm c}=0$ against the alternative that $\theta_{\rm c}<0$, the naive method gives a $\rm p$-value of $0.059$ while our method gives a $\rm p$-value of $0.049$. Thus our method is able to detect the treatment effect at the $0.05$ significance level while the naive method is not.
6 Discussion
In this paper we establish sharp variance bounds for the widely used difference-in-means estimator and Wald estimator in the presence of covariates in completely randomized experiments. These bounds can help to improve the performance of inference procedures based on normal approximations.
We do not impose any assumption on the support of outcomes, hence our results are general and are applicable to both binary and continuous outcomes.
Variances of the difference-in-means estimator in matched-pair randomized experiments (Imai, 2008) and of the Horvitz-Thompson estimator in stratified and clustered randomized experiments (Miratrix et al., 2013; Mukerjee et al., 2018; Middleton and Aronow, 2015) involve unidentifiable terms similar to the one considered here. Moreover, the same unidentifiability appears in the asymptotic variance of regression adjustment estimators; see Lin (2013), Freedman (2008) and Bloniarz et al. (2016). The insights in this paper also apply in these settings; we do not discuss the details to avoid cumbersome notation. It is also of great interest to extend our work to randomized experiments with other randomization schemes such as the $2^{2}$ factorial design (Lu, 2017) or other complex assignment mechanisms (Mukerjee et al., 2018).
Appendix
Appendix A Regularity conditions
Condition
There is some constant $C_{M}$ which does not depend on $N$ such that $1/N\sum_{i=1}^{N}y_{ti}^{4}\leq C_{M}$ for $t=0,1$.
${}_{\blacksquare}$
Condition
There is some constant $C_{\pi}$ which does not depend on $N$ such that $\pi_{k}\geq C_{\pi}/K$ for $k=1,\dots,K$.
${}_{\blacksquare}$
Condition
There is some constant $C_{\pi,{\rm c}}$ which does not depend on $N$ such that $\pi_{k\mid{\rm c}}\pi_{\rm c}\geq C_{\pi,{\rm c}}/K$ for $k=1,\dots,K$.
${}_{\blacksquare}$
Condition
$K^{2}\log K/N\to 0$ as $N\to\infty$.
${}_{\blacksquare}$
Condition
$K^{2}\log K\max\{C_{N}^{4},1\}/N\to 0$ as $N\to\infty$ where $C_{N}=\max_{t,i}|y_{ti}|$.
${}_{\blacksquare}$
Condition
Let $z_{i}=(y_{1i},y_{0i},d_{1i},d_{0i})^{{\mathrm{\scriptscriptstyle T}}}$ and $\bar{z}=\sum_{i=1}^{N}z_{i}/N$; then
$$\frac{1}{N}\sum_{i=1}^{N}(z_{i}-\bar{z})(z_{i}-\bar{z})^{{\mathrm{%
\scriptscriptstyle T}}}\to V$$
where $V$ is a positive definite matrix.
${}_{\blacksquare}$
Appendix B Proofs
B.1 Proof of Theorem 2.2
Proof
Note that
$$\phi^{2}(\tau)=\sum_{k=1}^{K}\pi_{k}\frac{1}{N_{k}}\sum_{w_{i}=\xi_{k}}(y_{1i}-y_{0i})^{2}-\mu^{2}(\tau),$$
where $N_{k}=N\pi_{k}$.
Let $a_{k}(s)=F_{1\mid k}^{-1}(s/N_{k})$ and $b_{k}(s)=F_{0\mid k}^{-1}(s/N_{k})$ for $k=1,\dots,K$ and $s=1,\dots,N_{k}$. Then we have
$$\frac{1}{N_{k}}\sum_{w_{i}=\xi_{k}}(y_{1i}-y_{0i})^{2}=\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}a_{k}(i)^{2}+\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}b_{k}(i)^{2}-\frac{2}{N_{k}}\sum_{i=1}^{N_{k}}a_{k}(i)b_{k}(\Pi_{k}(i)),$$
where $\Pi_{k}$ is a permutation on $\{1,\dots,N_{k}\}$. By the rearrangement inequality, we have
$$\sum_{i=1}^{N_{k}}a_{k}(i)b_{k}(N_{k}+1-i)\leq\sum_{i=1}^{N_{k}}a_{k}(i)b_{k}(\Pi_{k}(i))\leq\sum_{i=1}^{N_{k}}a_{k}(i)b_{k}(i).$$
Thus
$$\displaystyle\int_{0}^{1}(F_{1\mid k}^{-1}(u)-F_{0\mid k}^{-1}(u))^{2}du=\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}(a_{k}(i)-b_{k}(i))^{2}\leq\frac{1}{N_{k}}\sum_{w_{i}=\xi_{k}}(y_{1i}-y_{0i})^{2}$$
$$\displaystyle\leq\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}(a_{k}(i)-b_{k}(N_{k}+1-i))^{2}=\int_{0}^{1}(F_{1\mid k}^{-1}(u)-F_{0\mid k}^{-1}(1-u))^{2}du.$$
This proves the bound in Theorem 2.2.
Next, we prove the sharpness of the bounds. For $k=1,\dots,K$, let $i_{k(1)}<\cdots<i_{k(N_{k})}$ be the indices in $\mathcal{I}_{k}=\{i:w_{i}=\xi_{k}\}$ in increasing order. Define the population $\mathbf{U}^{L}$ consisting of $N$ units, with two potential outcomes $y_{1i}^{L}$ and $y_{0i}^{L}$ and a vector of covariates $w_{i}^{L}$ associated with unit $i$ for $i=1,\dots,N$. Let $y_{1i_{k(j)}}^{L}=a_{k}(j)$, $y_{0i_{k(j)}}^{L}=b_{k}(j)$ and $w_{i_{k(j)}}^{L}=\xi_{k}$ for $k=1,\dots,K$ and $j=1,\dots,N_{k}$, and let $\tau_{i}^{L}=y_{1i}^{L}-y_{0i}^{L}$ for $i=1,\dots,N$. Then $P(y_{t}^{L}\leq y\mid w^{L}=\xi_{k})=F_{t\mid k}(y)$ and $P(w^{L}=\xi_{k})=\pi_{k}$ for $t=0,1$, $k=1,\dots,K$ and $y\in\mathbf{R}$, and $1/N_{k}\sum_{i\in\mathcal{I}_{k}}(y_{1i}^{L}-y_{0i}^{L})^{2}=1/N_{k}\sum_{j=1}^{N_{k}}(a_{k}(j)-b_{k}(j))^{2}=\int_{0}^{1}(F_{1\mid k}^{-1}(u)-F_{0\mid k}^{-1}(u))^{2}du$ for $k=1,\dots,K$. Thus $\phi^{2}(\tau^{L})=\phi^{2}_{L}$, which attains the lower bound for $\phi^{2}(\tau)$.
Define $\mathbf{U}^{H}$ similarly, with $y_{1i_{k(j)}}^{H}=a_{k}(j)$, $y_{0i_{k(j)}}^{H}=b_{k}(N_{k}+1-j)$ and $w_{i_{k(j)}}^{H}=\xi_{k}$ for $k=1,\dots,K$ and $j=1,\dots,N_{k}$. Let $\tau_{i}^{H}=y_{1i}^{H}-y_{0i}^{H}$; then $P(y_{t}^{H}\leq y\mid w^{H}=\xi_{k})=F_{t\mid k}(y)$ and $P(w^{H}=\xi_{k})=\pi_{k}$ for $t=0,1$, $k=1,\dots,K$ and $y\in\mathbf{R}$. Moreover, $1/N_{k}\sum_{i\in\mathcal{I}_{k}}(y_{1i}^{H}-y_{0i}^{H})^{2}=1/N_{k}\sum_{j=1}^{N_{k}}(a_{k}(j)-b_{k}(N_{k}+1-j))^{2}=\int_{0}^{1}(F_{1\mid k}^{-1}(u)-F_{0\mid k}^{-1}(1-u))^{2}du$ for $k=1,\dots,K$. Thus $\phi^{2}(\tau^{H})=\phi^{2}_{H}$, which attains the upper bound for $\phi^{2}(\tau)$.
$\square$
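The two couplings in this proof, matching within-stratum quantiles in the same order for the lower bound and in opposite order for the upper bound, can be traced numerically. The helper below is a hypothetical illustration (function name and toy data are made up, not from the paper); it bounds the within-stratum mean of $(y_{1i}-y_{0i})^{2}$ exactly as the rearrangement step does.

```python
import numpy as np

def second_moment_bounds(y1, y0, strata):
    """Bound (1/N) * sum_i (y1_i - y0_i)^2 using only the within-stratum
    marginal distributions of y1 and y0, via the rearrangement inequality:
    same-order pairing of sorted values is minimal, anti-order is maximal."""
    lo = hi = 0.0
    for k in np.unique(strata):
        idx = strata == k
        a = np.sort(y1[idx])            # a_k(i): y1 quantiles in stratum k
        b = np.sort(y0[idx])            # b_k(i): y0 quantiles in stratum k
        pi_k = idx.mean()               # stratum proportion pi_k = N_k / N
        lo += pi_k * np.mean((a - b) ** 2)          # same-order coupling
        hi += pi_k * np.mean((a - b[::-1]) ** 2)    # anti-order coupling
    return lo, hi

# Toy population with two strata (values made up):
y1 = np.array([1.0, 3.0, 2.0, 5.0])
y0 = np.array([0.0, 1.0, 4.0, 2.0])
strata = np.array([0, 0, 1, 1])
lo, hi = second_moment_bounds(y1, y0, strata)
actual = np.mean((y1 - y0) ** 2)
# lo <= actual <= hi always holds; here lo = 1.5, actual = 4.5, hi = 5.5
```

The bounds use only the within-stratum marginals, which is exactly what is identifiable from the experiment.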
B.2 Proof of the inequalities (2) in Section 2
Proof
For any $\mathbf{U}^{*}\in\mathcal{U}$, let $\tau^{*}=(y_{11}^{*}-y_{01}^{*},\dots,y_{1N}^{*}-y_{0N}^{*})$. By applying the result of Aronow et al. (2014) to the population $\mathbf{U}^{*}$, we have
$$\int_{0}^{1}(F_{1*}^{-1}(u)-F_{0*}^{-1}(u))^{2}du-\mu(\tau^{*})^{2}\leq\phi^{2}(\tau^{*})\leq\int_{0}^{1}(F_{1*}^{-1}(u)-F_{0*}^{-1}(1-u))^{2}du-\mu(\tau^{*})^{2},$$
where $F_{t*}(y)=P(y^{*}_{t}\leq y)$ for $t=0,1$.
By applying the result of Ding et al. (2019) to the population $\mathbf{U}^{*}$, we have
$$\phi^{2}(\tau^{*}_{w})+\int_{0}^{1}(F_{e^{*}_{1}}^{-1}(u)-F_{e^{*}_{0}}^{-1}(u))^{2}du\leq\phi^{2}(\tau^{*})\leq\phi^{2}(\tau^{*}_{w})+\int_{0}^{1}(F_{e^{*}_{1}}^{-1}(u)-F_{e^{*}_{0}}^{-1}(1-u))^{2}du,$$
with $\tau^{*}_{w}=(w_{1}^{*{\mathrm{\scriptscriptstyle T}}}(\gamma^{*}_{1}-\gamma^{*}_{0}),\dots,w_{N}^{*{\mathrm{\scriptscriptstyle T}}}(\gamma^{*}_{1}-\gamma^{*}_{0}))^{\mathrm{\scriptscriptstyle T}}$, $F_{e^{*}_{t}}(s)=P(e^{*}_{t}\leq s)$, $e^{*}_{t}=(y^{*}_{t1}-w_{1}^{*{\mathrm{\scriptscriptstyle T}}}\gamma^{*}_{t},\dots,y^{*}_{tN}-w_{N}^{*{\mathrm{\scriptscriptstyle T}}}\gamma^{*}_{t})^{\mathrm{\scriptscriptstyle T}}$, and $\gamma^{*}_{t}$ the least squares regression coefficient of $y^{*}_{ti}$ on $w^{*}_{i}$. Because $\mathbf{U}^{*}\in\mathcal{U}$, it is straightforward to show
$$\displaystyle F_{t*}(y)=F_{t}(y),\ \mu(\tau^{*})=\theta,\ \phi^{2}(\tau_{w}^{*%
})=\phi^{2}(\tau_{w})\ \text{and}\ F_{e_{t}^{*}}(s)=F_{e_{t}}(s),$$
for $t=0,1$. Thus $\phi^{2}_{\rm AL}\leq\phi^{2}(\tau^{*})$ and $\phi^{2}_{\rm DL}\leq\phi^{2}(\tau^{*})$ for any $\mathbf{U}^{*}\in\mathcal{U}$. Since $\phi^{2}_{\rm L}$ is attained by some population among $\mathbf{U}^{*}\in\mathcal{U}$, we have $\max\{\phi^{2}_{\rm AL},\phi^{2}_{\rm DL}\}\leq\phi^{2}_{\rm L}$. Similarly, $\min\{\phi^{2}_{\rm AH},\phi^{2}_{\rm DH}\}\geq\phi^{2}_{\rm H}$.
$\square$
B.3 Proof of Theorem 2.3
Proof
Throughout this and the following proofs, for any real numbers $a$ and $b$, we let $a\land b=\min\{a,b\}$, $a\lor b=\max\{a,b\}$, $a_{+}=a\lor 0$ and $a_{-}=-(a\land 0)$. For any function $H$ and any constant $C$, we let
$$H_{C}(y)=\left\{\begin{array}[]{lc}0&\ y<-C\\
H(y)&\ -C\leq y<C\\
1&\ y\geq C\end{array}\right.$$
and $H_{C}^{-1}(y)=(H_{C})^{-1}(y)$.
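As a small sanity check on this notation, the truncation operator $H\mapsto H_{C}$ can be written out directly. The function below is a hypothetical helper mirroring the definition (zero below $-C$, $H$ on $[-C,C)$, one from $C$ on); the example CDF is made up.

```python
def truncate_cdf(H, C):
    """Return H_C as defined in the proof: the CDF H clipped to [-C, C)."""
    def H_C(y):
        if y < -C:
            return 0.0
        if y < C:
            return H(y)
        return 1.0
    return H_C

# Example with the CDF of the uniform distribution on [-2, 2]:
H = lambda y: min(max((y + 2.0) / 4.0, 0.0), 1.0)
H_1 = truncate_cdf(H, 1.0)
# H_1(-1.5) = 0.0, H_1(0.0) = 0.5, H_1(1.0) = 1.0
```

Truncating the CDF this way corresponds to clipping the underlying values to $[-C,C]$, which is how the proof later passes to the variables $y_{ti}^{*}$.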
We prove consistency of the lower bound estimator only,
and the consistency for the upper bound estimator follows similarly.
Note that $\hat{\theta}-\theta\stackrel{{\scriptstyle P}}{{\to}}0$ under Condition A1; then by the continuous mapping theorem,
$\hat{\theta}^{2}-\theta^{2}\stackrel{{\scriptstyle P}}{{\to}}0$.
Let $\hat{\pi}_{tk}=\sum_{T_{i}=t}1\{w_{i}=\xi_{k}\}/n_{t}$ for $t=0,1$ and $k=1,\dots,K$. Then for any $C>0$,
$$\displaystyle|\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid k}^{-1}(u%
)-\hat{F}_{0\mid k}^{-1}(u))^{2}du-\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k%
}^{-1}(u)-F_{0\mid k}^{-1}(u))^{2}du|$$
$$\displaystyle\leq|\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid k}^{-%
1}(u)-\hat{F}_{0\mid k}^{-1}(u))^{2}du-\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}%
(\hat{F}_{1\mid k,C}^{-1}(u)-\hat{F}_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle\phantom{\quad}+|\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-%
1}(u)-F_{0\mid k}^{-1}(u))^{2}du-\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k,%
C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle\phantom{\quad}+|\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_%
{1\mid k,C}^{-1}(u)-\hat{F}_{0\mid k,C}^{-1}(u))^{2}du-\sum_{k=1}^{K}\pi_{k}%
\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle=:I_{1}+I_{2}+I_{3}.$$
Hence to prove Theorem 2.3, it suffices to show $I_{1}+I_{2}+I_{3}\stackrel{{\scriptstyle P}}{{\to}}0$.
Note that
$$\displaystyle I_{2}$$
$$\displaystyle\leq|\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-1}(u)-F_{1%
\mid k,C}^{-1}(u))(F_{1\mid k}^{-1}(u)+F_{1\mid k,C}^{-1}(u)-F_{0\mid k}^{-1}(%
u)-F_{0\mid k,C}^{-1}(u))du|$$
$$\displaystyle\phantom{\quad}+|\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{0\mid k}^{-%
1}(u)-F_{0\mid k,C}^{-1}(u))(F_{1\mid k}^{-1}(u)+F_{1\mid k,C}^{-1}(u)-F_{0%
\mid k}^{-1}(u)-F_{0\mid k,C}^{-1}(u))du|$$
$$\displaystyle\leq\Big{(}\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-1}(u)-%
F_{1\mid k,C}^{-1}(u))^{2}du\Big{)}^{\frac{1}{2}}\Big{(}\sum_{k=1}^{K}\pi_{k}%
\int_{0}^{1}(F_{1\mid k}^{-1}(u)+F_{1\mid k,C}^{-1}(u)-F_{0\mid k}^{-1}(u)-F_{%
0\mid k,C}^{-1}(u))^{2}du\Big{)}^{\frac{1}{2}}$$
$$\displaystyle\phantom{\quad}+\Big{(}\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{0\mid
k%
}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du\Big{)}^{\frac{1}{2}}\Big{(}\sum_{k=1}^{%
K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-1}(u)+F_{1\mid k,C}^{-1}(u)-F_{0\mid k}^{-%
1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du\Big{)}^{\frac{1}{2}},$$
where the second inequality follows from the Cauchy-Schwarz inequality.
Under Condition A1, because for $t=0,1$, $\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{t\mid k}^{-1}(u)-F_{t\mid k,C}^{-1}(u))^{2}du\leq\frac{1}{N}\sum_{|y_{ti}|\geq C}y_{ti}^{2}\leq C_{M}/C^{2}$, and
$$\displaystyle\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-1}(u)+F_{1\mid k,%
C}^{-1}(u)-F_{0\mid k}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du$$
$$\displaystyle\leq 4\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}[(F_{1\mid k}^{-1}(u))^{2}%
+(F_{1\mid k,C}^{-1}(u))^{2}+(F_{0\mid k}^{-1}(u))^{2}+(F_{0\mid k,C}^{-1}(u))%
^{2}]du$$
$$\displaystyle\leq\frac{8}{N}\sum_{i=1}^{N}y_{1i}^{2}+\frac{8}{N}\sum_{i=1}^{N}%
y_{0i}^{2}\leq 16\sqrt{C_{M}},$$
we have $I_{2}\leq 8C_{M}^{3/4}/C$.
By similar arguments, we have
$$\displaystyle I_{1}$$
$$\displaystyle\leq\Big{(}\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid
k%
}^{-1}(u)-\hat{F}_{1\mid k,C}^{-1}(u))^{2}du\Big{)}^{\frac{1}{2}}\Big{(}4\sum_%
{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}[(\hat{F}_{1\mid k}^{-1}(u))^{2}+(\hat{F}_{1%
\mid k,C}^{-1}(u))^{2}+(\hat{F}_{0\mid k}^{-1}(u))^{2}+(\hat{F}_{0\mid k,C}^{-%
1}(u))^{2}]du\Big{)}^{\frac{1}{2}}$$
$$\displaystyle\phantom{\quad}+\Big{(}\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(%
\hat{F}_{0\mid k}^{-1}(u)-\hat{F}_{0\mid k,C}^{-1}(u))^{2}du\Big{)}^{\frac{1}{%
2}}\Big{(}4\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}[(\hat{F}_{1\mid k}^{-1}(u))%
^{2}+(\hat{F}_{1\mid k,C}^{-1}(u))^{2}+(\hat{F}_{0\mid k}^{-1}(u))^{2}+(\hat{F%
}_{0\mid k,C}^{-1}(u))^{2}]du\Big{)}^{\frac{1}{2}}.$$
Because for $t=0,1$,
$$\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{t\mid k}^{-1}(u)-\hat{F}_{t%
\mid k,C}^{-1}(u))^{2}du\leq\max_{k}\frac{\hat{\pi}_{k}}{\hat{\pi}_{tk}}\frac{%
N}{n_{t}}\frac{1}{N}\sum_{|y_{ti}|\geq C}y_{ti}^{2}\leq\frac{N}{n_{t}}\max_{k}%
\frac{\hat{\pi}_{k}}{\hat{\pi}_{tk}}\frac{C_{M}}{C^{2}},$$
and
$$\begin{aligned} &\displaystyle\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}[(\hat{F}%
_{1\mid k}^{-1}(u))^{2}+(\hat{F}_{1\mid k,C}^{-1}(u))^{2}+(\hat{F}_{0\mid k}^{%
-1}(u))^{2}+(\hat{F}_{0\mid k,C}^{-1}(u))^{2}]du&\\
&\displaystyle\leq\frac{N}{n_{1}\land n_{0}}\max_{t,k}\frac{\hat{\pi}_{k}}{%
\hat{\pi}_{tk}}\Big{(}\frac{2}{N}\sum_{i=1}^{N}y_{1i}^{2}+\frac{2}{N}\sum_{i=1%
}^{N}y_{0i}^{2}\Big{)}&\\
&\displaystyle\leq\frac{4N}{n_{1}\land n_{0}}\max_{t,k}\frac{\hat{\pi}_{k}}{\hat{\pi}_{tk}}\sqrt{C_{M}},&\end{aligned}$$
we have
$$I_{1}\leq\frac{8N}{n_{1}\land n_{0}}\max_{t,k}\frac{\hat{\pi}_{k}}{\hat{\pi}_{%
tk}}\frac{C_{M}^{\frac{3}{4}}}{C}.$$
For the last term $I_{3}$, we have
$$\displaystyle I_{3}$$
$$\displaystyle=|\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid k,C}^{-1%
}(u)-\hat{F}_{0\mid k,C}^{-1}(u))^{2}du-\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1%
\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle\leq|\sum_{k=1}^{K}\hat{\pi}_{k}\Big{(}\int_{0}^{1}(\hat{F}_{1%
\mid k,C}^{-1}(u)-\hat{F}_{0\mid k,C}^{-1}(u))^{2}du-\int_{0}^{1}(F_{1\mid k,C%
}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du\Big{)}|$$
$$\displaystyle\phantom{\quad}+|\sum_{k=1}^{K}(\hat{\pi}_{k}-\pi_{k})\int_{0}^{1%
}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle\leq\max_{k}|\int_{0}^{1}(\hat{F}_{1\mid k,C}^{-1}(u)-\hat{F}_{0%
\mid k,C}^{-1}(u))^{2}du-\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}%
(u))^{2}du|\sum_{k=1}^{K}\hat{\pi}_{k}$$
$$\displaystyle\phantom{\leq}+|\sum_{k=1}^{K}(\frac{\hat{\pi}_{k}}{\pi_{k}}-1)%
\pi_{k}\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle\leq\max_{k}|\int_{0}^{1}(\hat{F}_{1\mid k,C}^{-1}(u)-\hat{F}_{0%
\mid k,C}^{-1}(u))^{2}du-\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}%
(u))^{2}du|$$
$$\displaystyle\phantom{\quad}+\max_{k}|\frac{\hat{\pi}_{k}}{\pi_{k}}-1||\sum_{k%
=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du|$$
$$\displaystyle=:I_{31}+I_{32}.$$
By Condition A1, we have
$$\displaystyle\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k%
,C}^{-1}(u))^{2}du$$
$$\displaystyle\leq 2\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F^{-1}_{1\mid k}(u))^{2}du+2\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F^{-1}_{0\mid k}(u))^{2}du$$
$$\displaystyle=2\Big{(}\frac{1}{N}\sum_{i=1}^{N}y_{1i}^{2}+\frac{1}{N}\sum_{i=1}^{N}y_{0i}^{2}\Big{)}\leq 2\sqrt{C_{M}}.$$
Thus
$$I_{32}\leq 2\max_{k}|\frac{\hat{\pi}_{k}}{\pi_{k}}-1|\sqrt{C_{M}}.$$
For $k=1,\dots,K$, $\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du$ is the squared Wasserstein distance induced by the $L_{2}$ norm between $F_{1\mid k,C}$ and $F_{0\mid k,C}$.
By the representation theorem (Bobkov and Ledoux, 2019, Theorem 2.11),
$$\int_{0}^{1}(\hat{F}_{1\mid k,C}^{-1}(u)-\hat{F}_{0\mid k,C}^{-1}(u))^{2}du=2%
\iint_{v\leq w}[(\hat{F}_{1\mid k,C}(v)-\hat{F}_{0\mid k,C}(w))_{+}+(\hat{F}_{%
0\mid k,C}(v)-\hat{F}_{1\mid k,C}(w))_{+}]dvdw.$$
Because $\hat{F}_{1\mid k,C}(v)-\hat{F}_{0\mid k,C}(w)\leq 0$ and
$\hat{F}_{0\mid k,C}(v)-\hat{F}_{1\mid k,C}(w)\leq 0$ when $v<-C$ or $w\geq C$,
the integration domain can be restricted to $\{-C\leq v\leq w<C\}$ without changing
the integral.
Similarly, we have
$$\displaystyle\int_{0}^{1}(F_{1\mid k,C}^{-1}(u)-F_{0\mid k,C}^{-1}(u))^{2}du=$$
$$\displaystyle 2\iint_{-C\leq v\leq w<C}[(F_{1\mid k,C}(v)-F_{0\mid k,C}(w))_{+}$$
$$\displaystyle\phantom{2\iint_{-C\leq v\leq w<C}}+(F_{0\mid k,C}(v)-F_{1\mid k,%
C}(w))_{+}]dvdw.$$
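The agreement between the quantile form and this double-integral form can be checked numerically on small empirical distributions. The grid-based check below is only an illustrative sketch (sample values, grid step and tolerance are made up, not from the paper).

```python
import numpy as np

# Two equal-size samples supported on [-C, C); the second is the first
# shifted by 0.5, so the quantile form gives exactly 0.5^2 = 0.25.
x1 = np.array([-1.0, 0.0, 0.5, 1.5])
x0 = np.array([-0.5, 0.5, 1.0, 2.0])
C, h = 3.0, 0.005

# Quantile form: with n atoms each, F^{-1} is piecewise constant, so the
# integral reduces to the mean squared difference of sorted values.
quantile_form = np.mean((np.sort(x1) - np.sort(x0)) ** 2)

# Double-integral form, restricted to {-C <= v <= w < C} as in the proof.
grid = np.arange(-C, C, h)
F1 = (x1[None, :] <= grid[:, None]).mean(axis=1)   # empirical CDF of x1
F0 = (x0[None, :] <= grid[:, None]).mean(axis=1)   # empirical CDF of x0
total = 0.0
for i in range(len(grid)):
    # integrate over w >= v = grid[i]
    total += h * h * np.sum(np.clip(F1[i] - F0[i:], 0.0, None)
                            + np.clip(F0[i] - F1[i:], 0.0, None))
double_form = 2.0 * total
# double_form approximates quantile_form = 0.25 up to grid error
```

The two forms agree up to the Riemann-sum discretization error of the grid.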
Because $|(u_{1})_{+}-(u_{2})_{+}|\leq|u_{1}-u_{2}|$ for any $u_{1}$, $u_{2}$,
$$\displaystyle I_{31}$$
$$\displaystyle\leq 2\max_{k}\iint_{-C\leq v\leq w<C}[|\hat{F}_{1\mid k,C}(v)-F_%
{1\mid k,C}(v)|+|\hat{F}_{0\mid k,C}(v)-F_{0\mid k,C}(v)|+|\hat{F}_{1\mid k,C}%
(w)-F_{1\mid k,C}(w)|$$
$$\displaystyle\phantom{\leq 2\max_{k}\iint_{-C\leq v\leq w<C}}+|\hat{F}_{0\mid k%
,C}(w)-F_{0\mid k,C}(w)|]dvdw$$
$$\displaystyle\leq 2C^{2}\max_{k}\{\sup_{v}|\hat{F}_{1\mid k,C}(v)-F_{1\mid k,C%
}(v)|+\sup_{v}|\hat{F}_{0\mid k,C}(v)-F_{0\mid k,C}(v)|\}$$
$$\displaystyle\leq 4C^{2}\max_{t,k}\sup_{v}|\hat{F}_{t\mid k,C}(v)-F_{t\mid k,C%
}(v)|.$$
For any given $M$ and $t=0,1$, let $s_{tk,j}=F_{t\mid k,C}^{-1}(j/M)$ for $j=1,\dots,M$. By the standard technique in the proof of the Glivenko-Cantelli theorem (van der Vaart, 2000),
$$\sup_{v}|\hat{F}_{t\mid k,C}(v)-F_{t\mid k,C}(v)|\leq\max_{j}|\hat{F}_{t\mid k%
,C}(s_{tk,j})-F_{t\mid k,C}(s_{tk,j})|+\frac{1}{M}.$$
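This grid reduction can be checked directly: for monotone CDFs, the supremum deviation exceeds the deviation on the grid $s_{j}=F^{-1}(j/M)$ by at most $1/M$. The numbers below are a made-up toy example, not data from the paper.

```python
import numpy as np

pop = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])  # "population" values
sub = pop[[0, 2, 5, 7]]                                    # a subsample of it

F = lambda v: float(np.mean(pop <= v))       # population CDF
F_hat = lambda v: float(np.mean(sub <= v))   # empirical CDF of the subsample

# sup_v |F_hat(v) - F(v)| is attained at (or just below) a jump of either CDF
breaks = np.union1d(pop, sub)
sup_dev = max(max(abs(F_hat(b) - F(b)) for b in breaks),
              max(abs(F_hat(b - 1e-9) - F(b - 1e-9)) for b in breaks))

# grid points s_j = F^{-1}(j/M); here F^{-1}(j/4) = 1, 3, 5, 7
M = 4
s = np.array([1.0, 3.0, 5.0, 7.0])
grid_dev = max(abs(F_hat(sj) - F(sj)) for sj in s)

# Glivenko-Cantelli-style bound: sup deviation <= grid deviation + 1/M.
# Here sup_dev = 0.125, grid_dev = 0.0, and 0.125 <= 0.0 + 0.25.
```

The point of the reduction is that the supremum over all $v$ is controlled by finitely many grid evaluations, which is what makes the union bound over $t$, $k$ and $j$ below possible.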
For $t=0,1$ and $i=1,\dots,N$, let $y_{ti}^{*}=y_{ti}1\{-C\leq y_{ti}<C\}+C1\{y_{ti}\geq C\}-C1\{y_{ti}<-C\}$.
Then
$$\displaystyle\hat{F}_{t\mid k,C}(s_{tk,j})-F_{t\mid k,C}(s_{tk,j})$$
$$\displaystyle=\frac{1}{\hat{\pi}_{tk}}\frac{1}{n_{t}}\sum_{T_{i}=t}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}-\frac{1}{\pi_{k}}\frac{1}{N}\sum_{i=1}^{N}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}$$
$$\displaystyle=\frac{1}{\pi_{k}}\Big{(}\frac{1}{n_{t}}\sum_{T_{i}=t}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i=1}^{N}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}\Big{)}$$
$$\displaystyle\phantom{\quad}-\frac{1}{\pi_{k}\hat{\pi}_{tk}}(\hat{\pi}_{tk}-\pi_{k})\frac{1}{n_{t}}\sum_{T_{i}=t}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}.$$
Because
$$\left|\frac{1}{\hat{\pi}_{tk}}\frac{1}{n_{t}}\sum_{T_{i}=t}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}\right|\leq 1,$$
we have
$$\displaystyle I_{31}$$
$$\displaystyle\leq 4C^{2}\max_{t,k,j}\Big{\{}\Big{|}\frac{1}{\pi_{k}}\Big{(}\frac{1}{n_{t}}\sum_{T_{i}=t}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i=1}^{N}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}\Big{)}\Big{|}+\frac{1}{\pi_{k}}|\hat{\pi}_{tk}-\pi_{k}|\Big{\}}$$
$$\displaystyle\phantom{\quad}+\frac{1}{M}.$$
For any small positive number $\epsilon$, one can choose $C$ and $M$ such that $8C_{M}^{3/4}/C\leq\epsilon$ and $1/M\leq\epsilon$. Then by the Hoeffding inequality for sampling without replacement (Bardenet and Maillard, 2015) and the Bonferroni inequality, we have
$$\displaystyle\mathbf{P}\left(4C^{2}\max_{t,k,j}\Big{\{}\frac{1}{\pi_{k}}|\frac{1}{n_{t}}\sum_{T_{i}=t}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i=1}^{N}1\{y_{ti}^{*}\leq s_{tk,j},w_{i}=\xi_{k}\}|\Big{\}}\geq\epsilon\right)$$
$$\displaystyle\leq 4MK\exp\left(-\frac{(n_{1}\land n_{0})\epsilon^{2}\min_{k}\pi_{k}^{2}}{8C^{4}}\right),$$
and
$$\mathbf{P}\left(\max_{t,k}\frac{1}{\pi_{k}}|\hat{\pi}_{tk}-\pi_{k}|\geq\epsilon\right)\leq 2K\exp\left(-2(n_{1}\land n_{0})\epsilon^{2}\min_{k}\pi_{k}^{2}\right).$$
By Conditions A2 and A4, the right-hand sides of these two inequalities converge to zero because
$$(n_{1}\land n_{0})\min_{k}\pi_{k}^{2}-C^{*}\log K\geq C_{\pi}^{2}(n_{1}\land n_{0}/N)(N/K^{2})-C^{*}\log K\to\infty,$$
with $C^{*}=8C^{4}\lor(1/2)$.
Hence, for any $\delta>0$ and sufficiently large $N$, $I_{31}\leq 3\epsilon$ and
$$\max_{t,k}\left|\frac{\hat{\pi}_{k}}{\hat{\pi}_{tk}}\right|\leq\frac{1+%
\epsilon}{1-\epsilon}$$
with probability greater than $1-\delta$.
Without loss of generality, we let $\epsilon\leq 1/3$, so that $(1+\epsilon)/(1-\epsilon)\leq 2$; then for sufficiently large $N$ we have
$$|\sum_{k=1}^{K}\hat{\pi}_{k}\int_{0}^{1}(\hat{F}_{1\mid k}^{-1}(u)-\hat{F}_{0\mid k}^{-1}(u))^{2}du-\sum_{k=1}^{K}\pi_{k}\int_{0}^{1}(F_{1\mid k}^{-1}(u)-F_{0\mid k}^{-1}(u))^{2}du|\leq\left(\frac{4}{\rho_{1}\land\rho_{0}}+2\sqrt{C_{M}}+4\right)\epsilon$$
with probability greater than $1-\delta$. This completes the proof of Theorem 2.3.
$\square$
B.4 Proof of Theorem 3.1
Proof
According to Conditions A1 and A6, the conditions of the finite population central limit theorem (Freedman, 2008, Theorem 1) are satisfied. Thus
$$\sqrt{N}\left(\frac{1}{n_{1}}\sum_{T_{i}=1}y_{1i}-\mu(y_{1}),\frac{1}{n_{1}}%
\sum_{T_{i}=1}d_{1i}-\mu(d_{1}),\frac{1}{n_{0}}\sum_{T_{i}=0}y_{0i}-\mu(y_{0})%
,\frac{1}{n_{0}}\sum_{T_{i}=0}d_{0i}-\mu(d_{0})\right)^{{\mathrm{%
\scriptscriptstyle T}}}$$
converges to a multivariate normal distribution. Thus
$$\displaystyle\hat{\theta}_{\rm c}-\theta_{\rm c}=\hat{\pi}_{\rm c}^{-1}(\hat{\theta}-\hat{\pi}_{\rm c}\theta_{\rm c})=\left(\hat{\pi}_{\rm c}^{-1}-\pi_{\rm c}^{-1}\right)\left(\hat{\theta}-\hat{\pi}_{\rm c}\theta_{\rm c}\right)+\pi_{\rm c}^{-1}(\hat{\theta}-\hat{\pi}_{\rm c}\theta_{\rm c}).$$
By the strong instrument assumption and the asymptotic normality invoked before, we have
$$\hat{\pi}_{\rm c}^{-1}-\pi_{\rm c}^{-1}=O_{p}\left(\frac{1}{\sqrt{N}}\right)$$
and
$$\hat{\theta}-\hat{\pi}_{\rm c}\theta_{\rm c}=O_{p}\left(\frac{1}{\sqrt{N}}%
\right).$$
Hence
$$\hat{\theta}_{\rm c}-\theta_{\rm c}=\pi_{\rm c}^{-1}(\hat{\theta}-\hat{\pi}_{%
\rm c}\theta_{\rm c})+o_{p}\left(\frac{1}{\sqrt{N}}\right).$$
Straightforward calculation shows that $(\hat{\theta}-\hat{\pi}_{\rm c}\theta_{\rm c})/\pi_{\rm c}$ has mean zero and variance
$$\frac{1}{\pi_{\rm c}^{2}(N-1)}\left(\frac{N}{n_{1}}\phi^{2}(\tilde{y}_{1})+%
\frac{N}{n_{0}}\phi^{2}(\tilde{y}_{0})-\phi^{2}(\tilde{\tau})\right)=\frac{%
\sigma^{2}_{\rm c}}{N-1}.$$
Again by the finite population central limit theorem we have
$$\sqrt{N}\sigma_{\rm c}^{-1}(\hat{\theta}_{\rm c}-\theta_{\rm c})=\sqrt{N}%
\sigma_{\rm c}^{-1}\pi_{\rm c}^{-1}(\hat{\mu}(\tau)-\hat{\pi}_{\rm c}\theta_{%
\rm c})+o_{p}(1)\overset{d}{\to}N(0,1).$$
$\square$
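The quantities in this proof can be traced on a toy dataset. The sketch below (all values made up, not from the paper) computes the Wald estimator $\hat{\theta}_{\rm c}=\hat{\theta}/\hat{\pi}_{\rm c}$ from a small completely randomized experiment with noncompliance.

```python
import numpy as np

# Toy completely randomized experiment: T is assignment, d is treatment
# received, y is the observed outcome (all values made up).
T = np.array([1, 1, 1, 0, 0, 0])
d = np.array([1, 1, 0, 0, 0, 0])
y = np.array([5.0, 3.0, 1.0, 2.0, 1.0, 0.0])

theta_hat = y[T == 1].mean() - y[T == 0].mean()  # ITT effect on the outcome
pi_hat_c = d[T == 1].mean() - d[T == 0].mean()   # estimated complier share
theta_hat_c = theta_hat / pi_hat_c               # Wald estimator
# theta_hat = 2.0, pi_hat_c = 2/3, theta_hat_c = 3.0
```

The decomposition in the proof linearizes exactly this ratio around $(\theta,\pi_{\rm c})$.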
B.5 Proof of Theorem 3.1
Proof
According to the exclusion restriction, if $g_{i}=\rm a$ or $\rm n$, we have
$$\tilde{\tau}_{i}=y_{1i}-y_{0i}-\theta_{\rm c}(d_{1i}-d_{0i})=0.$$
By the definition of $\theta_{\rm c}$, we have $\mu(\tilde{\tau})=0$.
Thus
$$\phi^{2}(\tilde{\tau})=\frac{1}{N}\sum_{i=1}^{N}\tilde{\tau}^{2}_{i}=\pi_{\rm c%
}\frac{1}{N_{\rm c}}\sum_{g_{i}=\rm c}\tilde{\tau}_{i}^{2}$$
where $N_{\rm c}=N\pi_{\rm c}$ is the number of compliers. Then the theorem can be proved following the same arguments as in the proof of Theorem 2.2.
$\square$
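The variance identity used above can be verified on made-up numbers: when $\tilde{\tau}_{i}$ vanishes off the complier stratum and has mean zero, the population variance equals $\pi_{\rm c}$ times the mean squared $\tilde{\tau}_{i}$ over compliers. The array `complier` below plays the role of the indicator $1\{g_{i}={\rm c}\}$ (a hypothetical toy example).

```python
import numpy as np

# tilde_tau is zero for always-takers and never-takers and sums to zero,
# as the exclusion restriction and the definition of theta_c imply.
tilde_tau = np.array([0.0, 0.0, 2.0, -1.0, -1.0, 0.0])
complier = np.array([False, False, True, True, True, False])

phi2 = tilde_tau.var()                               # mean is zero here
pi_c = complier.mean()
phi2_strat = pi_c * np.mean(tilde_tau[complier] ** 2)
# phi2 = phi2_strat = 1.0
```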
B.6 Proof of Theorem 3.2
Proof
First, we collect some relationships that are useful in the proof.
According to monotonicity and the exclusion restriction, we have $1\{g_{i}={\rm c}\}=d_{1i}-d_{0i}$, $(1-d_{1i})1\{\tilde{y}_{0i}\leq y\}=(1-d_{1i})1\{\tilde{y}_{1i}\leq y\}$ and $d_{0i}1\{\tilde{y}_{1i}\leq y\}=d_{0i}1\{\tilde{y}_{0i}\leq y\}$. Thus
$$\displaystyle\pi_{k\mid{\rm c}}$$
$$\displaystyle=\frac{\sum_{i=1}^{N}1\{g_{i}={\rm c}\}1\{w_{i}=\xi_{k}\}}{\sum_{%
i=1}^{N}1\{g_{i}={\rm c}\}}$$
$$\displaystyle=\pi_{\rm c}^{-1}\left(\frac{1}{N}\sum_{i=1}^{N}d_{1i}1\{w_{i}=%
\xi_{k}\}-\frac{1}{N}\sum_{i=1}^{N}d_{0i}1\{w_{i}=\xi_{k}\}\right)$$
$$\displaystyle=\pi_{\rm c}^{-1}\left(\frac{1}{N}\sum_{i=1}^{N}(1-d_{0i})1\{w_{i%
}=\xi_{k}\}-\frac{1}{N}\sum_{i=1}^{N}(1-d_{1i})1\{w_{i}=\xi_{k}\}\right),$$
$$\displaystyle\tilde{F}_{1\mid k}(y)$$
$$\displaystyle=\frac{\sum_{i=1}^{N}(d_{1i}-d_{0i})1\{\tilde{y}_{1i}\leq y\}1\{w%
_{i}=\xi_{k}\}}{\sum_{i=1}^{N}(d_{1i}-d_{0i})1\{w_{i}=\xi_{k}\}}$$
$$\displaystyle=\pi_{\rm c}^{-1}\pi_{k\mid{\rm c}}^{-1}\left(\frac{1}{N}\sum_{i=%
1}^{N}d_{1i}1\{\tilde{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i=1}^%
{N}d_{0i}1\{\tilde{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\}\right)$$
$$\displaystyle=\pi_{\rm c}^{-1}\pi_{k\mid{\rm c}}^{-1}\left(\frac{1}{N}\sum_{i=%
1}^{N}d_{1i}1\{\tilde{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i=1}^%
{N}d_{0i}1\{\tilde{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}\right),$$
and
$$\displaystyle\tilde{F}_{0\mid k}(y)$$
$$\displaystyle=\frac{\sum_{i=1}^{N}(d_{1i}-d_{0i})1\{\tilde{y}_{0i}\leq y\}1\{w%
_{i}=\xi_{k}\}}{\sum_{i=1}^{N}(d_{1i}-d_{0i})1\{w_{i}=\xi_{k}\}}$$
$$\displaystyle=\pi_{\rm c}^{-1}\pi_{k\mid{\rm c}}^{-1}\left(\frac{1}{N}\sum_{i=%
1}^{N}(1-d_{0i})1\{\tilde{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i%
=1}^{N}(1-d_{1i})1\{\tilde{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}\right)$$
$$\displaystyle=\pi_{\rm c}^{-1}\pi_{k\mid{\rm c}}^{-1}\left(\frac{1}{N}\sum_{i=%
1}^{N}(1-d_{0i})1\{\tilde{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}-\frac{1}{N}\sum_{i%
=1}^{N}(1-d_{1i})1\{\tilde{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\}\right).$$
Since the estimators in Theorem 3.2 have a structure similar to those in Theorem 2.3, we prove consistency in a similar way. However, there are two extra difficulties. One is that $\hat{y}_{ti}\not=\tilde{y}_{ti}$, and we need to control the error introduced by using $\hat{y}_{ti}$ in place of $\tilde{y}_{ti}$ in the estimators. The other is that the estimators $\check{F}_{t\mid k}(y)$ for $t=0,1$ and $k=1,\dots,K$ are not distribution functions, and thus the representation theorem (Bobkov and Ledoux, 2019, Theorem 2.11) cannot be used directly. To solve this problem, we define
$$\check{F}_{t\mid k}^{*}(y)=\sup_{v\leq y}\check{F}_{t\mid k}(v),$$
for $t=0,1$ and $k=1,\dots,K$. Then by definition, $\check{F}_{t\mid k}^{*}(y)$ is a distribution function and, for $u\in(0,1)$, $\check{F}_{t\mid k}^{*-1}(u)=\check{F}_{t\mid k}^{-1}(u)$. Hence we can use $\check{F}_{t\mid k}^{*}(y)$ instead of $\check{F}_{t\mid k}(y)$ in the representation theorem.
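On a grid of evaluation points, the monotone envelope $\check{F}^{*}_{t\mid k}(y)=\sup_{v\leq y}\check{F}_{t\mid k}(v)$ is just a running maximum. A minimal sketch (the grid values below are made up):

```python
import numpy as np

# check_F stands for values of a possibly non-monotone estimate (a
# difference of two empirical-CDF-type terms) evaluated on a sorted grid.
check_F = np.array([0.0, 0.3, 0.2, 0.5, 0.45, 1.0])

# The monotone envelope sup_{v <= y} check_F(v) is the cumulative maximum;
# it is non-decreasing, hence a valid CDF on this grid.
check_F_star = np.maximum.accumulate(check_F)
# check_F_star = [0.0, 0.3, 0.3, 0.5, 0.5, 1.0]
```

The envelope never decreases and agrees with the original wherever the original is already monotone, which is why the generalized inverses coincide.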
We prove the result only for the lower bound; the consistency of the upper bound estimator follows similarly. Let $\lambda_{k}=\pi_{\rm c}\pi_{k\mid\rm c}$. Because $\hat{\pi}_{\rm c}\hat{\pi}_{k\mid\rm c}=\hat{\lambda}_{1k}$, to prove $\check{\phi}^{2}_{\rm L}-\tilde{\phi}^{2}_{\rm L}\stackrel{{\scriptstyle P}}{{\to}}0$, we only need to prove
$$\sum_{k=1}^{K}\hat{\lambda}_{1k}\int_{0}^{1}(\check{F}_{1\mid k}^{-1}(u)-\check{F}_{0\mid k}^{-1}(u))^{2}du-\sum_{k=1}^{K}\lambda_{k}\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du\stackrel{{\scriptstyle P}}{{\to}}0.$$
Note that
$$\displaystyle|\sum_{k=1}^{K}\hat{\lambda}_{1k}\int_{0}^{1}(\check{F}_{1\mid k}%
^{-1}(u)-\check{F}_{0\mid k}^{-1}(u))^{2}du-\sum_{k=1}^{K}\lambda_{k}\int_{0}^%
{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du|$$
$$\displaystyle\leq|\sum_{k=1}^{K}\hat{\lambda}_{1k}\Big{(}\int_{0}^{1}(\check{F%
}_{1\mid k}^{-1}(u)-\check{F}_{0\mid k}^{-1}(u))^{2}du-\int_{0}^{1}(\tilde{F}_%
{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du\Big{)}|$$
$$\displaystyle\phantom{\quad}+|\sum_{k=1}^{K}(\hat{\lambda}_{1k}-\lambda_{k})%
\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du|$$
$$\displaystyle\leq\left|\sum_{k=1}^{K}\Big{(}\frac{1}{n_{1}}\sum_{T_{i}=1}d_{1i%
}1\{w_{i}=\xi_{k}\}\Big{)}\Big{(}\int_{0}^{1}(\check{F}_{1\mid k}^{-1}(u)-%
\check{F}_{0\mid k}^{-1}(u))^{2}du-\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-%
\tilde{F}_{0\mid k}^{-1}(u))^{2}du\Big{)}\right|$$
$$\displaystyle\phantom{\leq}+\left|\sum_{k=1}^{K}\Big{(}\frac{1}{n_{0}}\sum_{T_%
{i}=0}d_{0i}1\{w_{i}=\xi_{k}\}\Big{)}\Big{(}\int_{0}^{1}(\check{F}_{1\mid k}^{%
-1}(u)-\check{F}_{0\mid k}^{-1}(u))^{2}du-\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1%
}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du\Big{)}\right|$$
$$\displaystyle\phantom{\leq}+\left|\sum_{k=1}^{K}\left(\frac{\hat{\lambda}_{1k}}{\lambda_{k}}-1\right)\lambda_{k}\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du\right|$$
$$\displaystyle\leq 2\max_{k}\left|\int_{0}^{1}(\check{F}_{1\mid k}^{-1}(u)-\check{F}_{0\mid k}^{-1}(u))^{2}du-\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du\right|$$
$$\displaystyle\phantom{\quad}+\max_{k}\left|\frac{\hat{\lambda}_{1k}}{\lambda_{k}}-1\right|\sum_{k=1}^{K}\lambda_{k}\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du$$
$$\displaystyle=:I_{1,\rm c}+I_{2,\rm c}.$$
By Condition A1 and Assumption 3.1(iii), we have
$$|\theta|\leq\frac{1}{N}\sum_{i=1}^{N}|y_{1i}|+\frac{1}{N}\sum_{i=1}^{N}|y_{0i}|\leq 2C_{M}^{\frac{1}{4}},$$
and
$$|\theta_{\rm c}|\leq C_{0}^{-1}\left(\frac{1}{N}\sum_{i=1}^{N}|y_{1i}|+\frac{1%
}{N}\sum_{i=1}^{N}|y_{0i}|\right)\leq 2C_{0}^{-1}C_{M}^{\frac{1}{4}}$$
due to Jensen’s inequality. Note that $|\tilde{y}_{ti}|\leq|y_{ti}|+|\theta_{\rm c}|$. By Condition A1 we have
$$\displaystyle\sum_{k=1}^{K}\lambda_{k}\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0\mid k}^{-1}(u))^{2}du$$
$$\displaystyle\leq\frac{2}{N}\sum_{i=1}^{N}\tilde{y}_{1i}^{2}+\frac{2}{N}\sum_{%
i=1}^{N}\tilde{y}_{0i}^{2}$$
$$\displaystyle\leq\frac{4}{N}\sum_{i=1}^{N}y_{1i}^{2}+\frac{4}{N}\sum_{i=1}^{N}%
y_{0i}^{2}+8|\theta_{\rm c}|$$
$$\displaystyle\leq 8\sqrt{C_{M}}(1+2C_{0}^{-1}).$$
For any small positive $\epsilon$, according to the Hoeffding inequality for sampling without replacement (Bardenet and Maillard, 2015) and the Bonferroni inequality, we have
$$\mathbf{P}\left(8\sqrt{C_{M}}(1+2C_{0}^{-1})\max_{t,k}\frac{1}{\lambda_{k}}|\hat{\lambda}_{tk}-\lambda_{k}|\geq\epsilon\right)\leq 2K\exp\left(-\frac{(n_{1}\land n_{0})\epsilon^{2}\min_{k}\lambda_{k}^{2}}{32C_{M}(1+2C_{0}^{-1})^{2}}\right)\to 0$$
by Conditions A3 and A4. Without loss of generality, we assume $\epsilon\leq 1/2$ in the proof.
Thus for any $\delta>0$ and sufficiently large $N$
$$I_{2,\rm c}\leq\epsilon$$
with probability at least $1-\delta/3$.
Define $B_{N}=(C_{N}+C_{B})\lor 1$ where $C_{B}=10C_{0}^{-2}C_{M}^{1/4}$.
For $\epsilon\leq B_{N}(C_{0}\land C_{M}^{1/4})$, it is easy to verify that
$$\{B_{N}|\hat{\pi}_{\rm c}-\pi_{\rm c}|<\frac{\epsilon}{2}\}\cap\{B_{N}|\hat{%
\theta}-\theta|<\frac{\epsilon}{2}\}\subset\{B_{N}|\hat{\theta}_{\rm c}-\theta%
_{\rm c}|<\epsilon\}.$$
Thus
$$\displaystyle\mathbf{P}\left(B_{N}|\hat{\theta}_{\rm c}-\theta_{\rm c}|\geq C_%
{B}\epsilon\right)$$
$$\displaystyle\leq\mathbf{P}\left(B_{N}|\hat{\pi}_{\rm c}-\pi_{\rm c}|\geq\frac%
{\epsilon}{2}\right)+\mathbf{P}\left(B_{N}|\hat{\theta}-\theta|\geq\frac{%
\epsilon}{2}\right)$$
(3)
$$\displaystyle\leq 4\exp\left(-\frac{2n_{1}\land n_{0}\epsilon^{2}}{B_{N}^{2}}%
\right)+4\exp\left(-\frac{n_{1}\land n_{0}\epsilon^{2}}{2B_{N}^{2}C_{N}^{2}}\right)$$
where the last inequality follows from the Hoeffding inequality. By Condition A5,
$$4\exp\left(-\frac{2n_{1}\land n_{0}\epsilon^{2}}{B_{N}^{2}}\right)+4\exp\left(%
-\frac{n_{1}\land n_{0}\epsilon^{2}}{2B_{N}^{2}C_{N}^{2}}\right)\to 0.$$
Thus, for sufficiently large $N$, with probability greater than $1-\delta/3$ we have $B_{N}|\hat{\theta}_{\rm c}-\theta_{\rm c}|\leq C_{B}\epsilon$.
Note that $C_{0}\leq 1$, $|\tilde{y}_{ti}|\leq y_{ti}+|\theta_{\rm c}|\leq C_{N}+2C_{0}^{-1}C_{M}^{1/4}%
\leq B_{N}$ and $|\hat{y}_{ti}|\leq|y_{ti}|+|\hat{\theta}_{\rm c}|\leq C_{N}+2C_{0}^{-1}C_{M}^{%
1/4}+C_{B}\epsilon\leq B_{N}$ when the event $\{B_{N}|\hat{\theta}_{\rm c}-\theta_{\rm c}|<C_{B}\epsilon\}$ holds.
On the event $\{B_{N}|\hat{\theta}_{\rm c}-\theta_{\rm c}|<C_{B}\epsilon\}$, by the representation theorem (Bobkov and Ledoux, 2019, Theorem 2.11) and arguments similar to those in the proof of Theorem 2.3, we can show that
$$\displaystyle\max_{k}|\int_{0}^{1}(\check{F}_{1\mid k}^{-1}(u)-\check{F}_{0%
\mid k}^{-1}(u))^{2}du-\int_{0}^{1}(\tilde{F}_{1\mid k}^{-1}(u)-\tilde{F}_{0%
\mid k}^{-1}(u))^{2}du|$$
$$\displaystyle\leq 2\max_{k}\iint_{-B_{N}\leq v\leq w\leq B_{N}}[|\check{F}_{1%
\mid k}^{*}(v)-\tilde{F}_{1\mid k}(v)|+|\check{F}_{0\mid k}^{*}(v)-\tilde{F}_{%
0\mid k}(v)|+|\check{F}_{1\mid k}^{*}(w)-\tilde{F}_{1\mid k}(w)|$$
$$\displaystyle\phantom{\leq 2\max_{k}\iint_{-B_{N}\leq v\leq w\leq B_{N}}}+|%
\check{F}_{0\mid k}^{*}(w)-\tilde{F}_{0\mid k}(w)|]dvdw$$
$$\displaystyle\leq 4B_{N}\max_{k}\int_{-B_{N}}^{B_{N}}|\check{F}_{1\mid k}^{*}(%
v)-\tilde{F}_{1\mid k}(v)|dv+4B_{N}\max_{k}\int_{-B_{N}}^{B_{N}}|\check{F}_{0%
\mid k}^{*}(v)-\tilde{F}_{0\mid k}(v)|dv.$$
(4)
Here we only analyze the first term $B_{N}\max_{k}\int_{-B_{N}}^{B_{N}}|\check{F}_{1\mid k}^{*}(v)-\tilde{F}_{1\mid k}(v)|dv$; the same result can be proved similarly for the second term.
Define
$$\bar{F}_{11\mid k}(y)=\frac{\hat{\lambda}_{1k}^{-1}}{n_{1}}\sum_{T_{i}=1}d_{1i%
}1\{\tilde{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\},$$
$$\bar{F}_{01\mid k}(y)=\frac{\hat{\lambda}_{1k}^{-1}}{n_{0}}\sum_{T_{i}=0}d_{0i%
}1\{\tilde{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\},$$
$$\tilde{F}_{11\mid k}(y)=\frac{\lambda_{k}^{-1}}{N}\sum_{i=1}^{N}d_{1i}1\{%
\tilde{y}_{1i}\leq y\}1\{w_{i}=\xi_{k}\},$$
$$\tilde{F}_{01\mid k}(y)=\frac{\lambda_{k}^{-1}}{N}\sum_{i=1}^{N}d_{0i}1\{%
\tilde{y}_{0i}\leq y\}1\{w_{i}=\xi_{k}\}.$$
for $k=1,\dots,K$. Let $\bar{F}_{1\mid k}(y)=\bar{F}_{11\mid k}(y)-\bar{F}_{01\mid k}(y)$, then the relationship $\check{F}_{1\mid k}(y)=\bar{F}_{1\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})$ holds.
Because $|\sup_{v\leq y}\bar{F}_{1\mid k}(v)-\sup_{v\leq y}\tilde{F}_{1\mid k}(v)|\leq%
\sup_{v}|\bar{F}_{1\mid k}(v)-\tilde{F}_{1\mid k}(v)|$ and $\sup_{v\leq y}\tilde{F}_{1\mid k}(v)=\tilde{F}_{1\mid k}(y)$, we have
$$\displaystyle|\sup_{v\leq y}\bar{F}_{1\mid k}(v)-\bar{F}_{1\mid k}(y)|$$
$$\displaystyle\leq|\sup_{v\leq y}\bar{F}_{1\mid k}(v)-\sup_{v\leq y}\tilde{F}_{%
1\mid k}(v)|+|\bar{F}_{1\mid k}(y)-\tilde{F}_{1\mid k}(y)|$$
$$\displaystyle\leq 2\sup_{v}|\bar{F}_{1\mid k}(v)-\tilde{F}_{1\mid k}(v)|.$$
Hence
$$\displaystyle|\check{F}_{1\mid k}^{*}(y)-\tilde{F}_{1\mid k}(y)|$$
$$\displaystyle\leq|\sup_{v\leq y+\hat{\theta}_{\rm c}-\theta_{\rm c}}\bar{F}_{1%
\mid k}(v)-\bar{F}_{1\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})|+|\bar{F}_%
{1\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})-\tilde{F}_{1\mid k}(y)|$$
$$\displaystyle\leq 2\sup_{v}|\bar{F}_{1\mid k}(v)-\tilde{F}_{1\mid k}(v)|+|\bar%
{F}_{1\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})-\tilde{F}_{1\mid k}(y+%
\hat{\theta}_{\rm c}-\theta_{\rm c})|$$
$$\displaystyle\phantom{\leq}+|\tilde{F}_{1\mid k}(y+\hat{\theta}_{\rm c}-\theta%
_{\rm c})-\tilde{F}_{1\mid k}(y)|$$
$$\displaystyle\leq 3\sup_{v}|\bar{F}_{1\mid k}(v)-\tilde{F}_{1\mid k}(v)|+|%
\tilde{F}_{1\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})-\tilde{F}_{1\mid k}%
(y)|.$$
Moreover, because
$$\displaystyle|\tilde{F}_{1\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})-%
\tilde{F}_{1\mid k}(y)|$$
$$\displaystyle\leq|\tilde{F}_{11\mid k}(y+\hat{\theta}_{\rm c}-\theta_{\rm c})-%
\tilde{F}_{11\mid k}(y)|+|\tilde{F}_{01\mid k}(y+\hat{\theta}_{\rm c}-\theta_{%
\rm c})-\tilde{F}_{01\mid k}(y)|$$
$$\displaystyle\leq\lambda^{-1}_{k}\left(\frac{1}{N}\sum_{i=1}^{N}d_{1i}1\{|%
\tilde{y}_{1i}-y|\leq|\hat{\theta}_{\rm c}-\theta_{\rm c}|\}1\{w_{i}=\xi_{k}\}%
+\frac{1}{N}\sum_{i=1}^{N}d_{0i}1\{|\tilde{y}_{0i}-y|\leq|\hat{\theta}_{\rm c}%
-\theta_{\rm c}|\}1\{w_{i}=\xi_{k}\}\right),$$
we have
$$B_{N}\max_{k}\int_{-B_{N}}^{B_{N}}|\check{F}_{1\mid k}^{*}(v)-\tilde{F}_{1\mid k}(v)|dv\leq 3B_{N}^{2}\max_{k}\sup_{v}|\bar{F}_{1\mid k}(v)-\tilde{F}_{1\mid k}(v)|+2B_{N}|\hat{\theta}_{\rm c}-\theta_{\rm c}|.$$
By inequality (3), we have $B_{N}|\hat{\theta}_{\rm c}-\theta_{\rm c}|\leq C_{B}\epsilon$ with probability at least $1-\delta/3$ for sufficiently large $N$.
Using arguments similar to those used to analyze $I_{31}$ in the proof of Theorem 2.3, we can show
$$B_{N}^{2}\max_{k}\sup_{v}|\bar{F}_{1\mid k}(v)-\tilde{F}_{1\mid k}(v)|\leq\epsilon$$
with probability at least $1-\delta/6$ for sufficiently large $N$. Applying similar arguments to the second term of expression (4), we have $I_{1,\rm c}\leq(48+32C_{B})\epsilon$ with probability at least $1-2\delta/3$ for sufficiently large $N$. Thus we have proved that, for any small positive numbers $\epsilon$ and $\delta$, $|\check{\phi}^{2}_{\rm L}-\tilde{\phi}^{2}_{\rm L}|\leq(49+32C_{B})\epsilon$ with probability at least $1-\delta$ for sufficiently large $N$, and this implies the consistency of the estimator.
$\square$
Abstract
We present models for the chemistry in gas moving towards the
ionization front of an HII region. When it is far from
the ionization front, the gas is highly depleted of elements more
massive than helium. However, as it approaches the ionization front,
ices are destroyed and species formed on the grain surfaces are
injected into the gas phase. Photodissociation removes gas phase
molecular species as the gas flows towards the ionization front. We
identify models for which the OH column densities are comparable to
those measured in observations undertaken to study the magnetic fields
in star forming regions and give results for the column densities of
other species that should be abundant if the observed OH arises
through a combination of the liberation of H${}_{2}$O from surfaces and
photodissociation. They include CH${}_{3}$OH, H${}_{2}$CO, and H${}_{2}$S.
Observations of these other species may help
establish the nature of the OH spatial distribution in the clouds,
which is important for the interpretation of the magnetic field
results.
Keywords: ISM: HII regions, magnetic fields, molecules - masers - stars: formation
Chemical telemetry of OH observed to measure interstellar magnetic fields
Serena Viti${}^{1}$, Thomas W. Hartquist${}^{2}$ and Philip C. Myers${}^{3}$
${}^{1}$Department of Physics and Astronomy, University College London, London WC1E 6BT, UK; sv@star.ucl.ac.uk
${}^{2}$School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, UK
${}^{3}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge MA 02138, USA
1 Introduction
It is not yet clear whether clouds are generally magnetically
supercritical or subcritical in the sense that the mass-to-magnetic flux ratio
is above or below the critical value at which gravity overcomes the support
by the magnetic field (Mouschovias & Spitzer 1976). What is clear, however, is
that this distinction is very important, as the mode of star formation
depends on whether a cloud is magnetically supercritical or subcritical
(e.g. Shu et al. 1987). Measurements of interstellar magnetic field
strengths in molecular clouds are vital.
Crutcher (1999) produced a detailed compilation of reliable results,
obtained up to the time of his writing, for line-of-sight magnetic
field strengths and upper limits inferred from observations
of OH in a sample of molecular clouds. He also collected data on
column densities, line widths, number densities, temperatures, and
molecular cloud sizes. Crutcher (1999) concluded that the values of
the mass-to-magnetic flux ratios are about twice the critical
value. Shu et al. (1999) compared the data with a
model of a highly flattened cloud (Allen & Shu 2000) and inferred
that the mass-to-flux ratios have values that are approximately
equal to the critical value. Bourke et al. (2001) have obtained
further OH and column density data and performed a related analysis
and they concluded that their results are model dependent: if the model
cloud is initially a uniform sphere with a uniform field strength then
the cloud is magnetically supercritical. If instead, the model cloud
is a flattened sheet, the data imply that the system is
stable. In general, their detection rate was low and they explained
this as a result of a selection effect: most of their sources were
towards HII regions that are probably expanding; the expansion of an HII
region leads to the compression of the surrounding gas into a thin
shell-like structure, decreasing the Zeeman effect. This tends to make
clouds that are not necessarily magnetically supercritical appear as
though they are.
The analyses used to determine the dynamical significance of the measured
line-of-sight magnetic field strengths are based on the assumption
that the OH Zeeman data provide a measure of the magnetic field
strength throughout the region containing most of the material
that contributes to the column density. However, the possibility that
OH is particularly abundant in restricted shells around HII regions
has been suggested (Elitzur & de Jong 1978; Hartquist et al. 1995)
in work on maser sources. Due to the spatial coincidence of OH and
CH${}_{3}$OH maser sources in W3(OH) established by Menten et al. (1992),
Hartquist et al. (1995) argued that the release of icy mantles from
grain surfaces in gas moving towards an ionization front plays a major
role in the chemistry in the maser regions. CH${}_{3}$OH is a direct
product of the surface chemistry, and OH is generated through the
photodissociation of H${}_{2}$O injected into the gas phase from the
surfaces. Crutcher (2004, private communication) has argued that the absence of
limb brightening and other observational results demonstrate that in some
sources, for which the OH data yield clear measurements of the magnetic
field strength, the OH is not primarily concentrated in shells. However,
he does accept the possibility that in at least some of the sources
for which the studies yield only upper limits for the field strengths,
OH may be mostly in shells. This might contribute to the null detections
of the Zeeman effect in some sources.
The main motivation of this paper is to develop a means of establishing whether, in some cases, the OH used in measurements of magnetic field strengths exists in rather localized regions as a consequence of forming in photodissociation regions.
Our study shows that it is indeed possible to produce OH with a
column density
in the range of those measured by Bourke et al. (2001) in
the manner described by Hartquist et al. (1995). We have developed the
model as a first step in an attempt to determine chemically whether the
molecular gas around HII regions may indeed be swept up into thin shells,
resulting in a nonuniform magnetic field geometry that leads to measured
values of the field strength that are much lower than the true values. If the
OH is formed as gas approaches the ionization front, our model allows
the prediction of the column densities of other species that may be
observed to confirm the validity of the model. Measured column densities
for these other species that disagree with the predictions would suggest
that the OH is not primarily in gas near the ionization front.
In section 2 we describe the model assumptions. Section 3 contains our
results and discussion, and Section 4 concludes the paper.
2 Description of the Model
The basic chemical model we adopt is a modification of the model
employed in Viti et al. (2003). The chemical network is taken from the
UMIST database (Millar et al. 1997; Le Teuff et al. 2000). We follow
the chemical evolution of 168 species involved in 1777 gas-phase and
grain reactions. This paper presents the first results of such an extensive
chemical model for shells in which OH is particularly abundant. Other
potentially observable species were not considered by Elitzur & de Jong (1978),
and Hartquist et al. (1995) did not give quantitative results, even for CH${}_{3}$OH, relevant to the picture investigated here. The model calculation is carried out in two phases,
both of which are time-dependent. In Phase I, the clump of gas is formed
during the gravitationally driven collapse of an initially diffuse molecular
region from a hydrogen nucleus number density, n${}_{H}$, to a much higher fixed
final number density. Free-fall collapse is assumed. During
this phase, gas-phase chemistry and freeze out on to dust grains with
subsequent processing (mainly hydrogenation) are assumed to occur (see
Viti et al. 2003 for a full description). The initial density of the
clump is taken to be 500 cm${}^{-3}$ while the final density, n${}_{f}$, is
treated as a free parameter. Note that the initial density is
consistent with observations of the
Rosette molecular cloud translucent clumps (Williams, Blitz & Stark 1995). We have run a test model where the initial density was set to 100 cm${}^{-3}$: the final abundances of the collapsed clump do not seem to be significantly affected. Hence, we have taken 500 cm${}^{-3}$ as our initial density for all models.
The gas then remains motionless for some time and subsequently, in
Phase II, begins to move at a constant velocity, v, towards an
ionization front. The time for which the gas remains motionless is
determined by the extent to which we wish depletion to occur for the
particular run. At the time at which motion towards the front begins,
the visual extinction, A${}_{V}$, of material between the location of the
parcel of gas under consideration and the ionization front is 10 mags.
The relationship between the visual extinction and the distance to the
ionization front, z(t), at time t is taken to be
$${\rm A}_{V}={\rm n}_{H}\,{\rm z}(t)/(1.6\times 10^{21}~{\rm cm}^{-2})$$
(1)
The radiation field at the ionization front is assumed to be $\chi$
times the standard unshielded mean interstellar radiation background
of Habing (1968).
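As a concrete illustration of Eq. (1), the following short sketch (ours, not part of the paper's model code; the density and extinction values are assumed examples, not the model parameters) converts a visual extinction into a distance from the ionization front:

```python
# Illustrative sketch of Eq. (1): A_V = n_H * z(t) / (1.6e21 cm^-2).
# The example values of a_v and n_h below are assumptions for illustration.

def distance_to_front_cm(a_v, n_h):
    """Distance z (cm) to the ionization front implied by Eq. (1),
    for visual extinction a_v (mag) and hydrogen nucleus density n_h (cm^-3)."""
    return a_v * 1.6e21 / n_h

# Assumed example: 10 mag of extinction at n_H = 1e4 cm^-3
z = distance_to_front_cm(10.0, 1.0e4)   # 1.6e18 cm
pc = z / 3.086e18                        # ~0.5 pc (cm per parsec)
```

This makes explicit how, for fixed n${}_{H}$, the gas velocity maps a drop in A${}_{V}$ directly onto an elapsed time.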
At the time Phase I is over, a percentage, FR, of the nuclei of
elements more massive than helium initially in the gas phase has
frozen out. In Table 2 we show the surface abundances of
several
species at the beginning of Phase II for one of the models. During this latter phase,
as the gas moves towards the ionization
front, A${}_{V}$ decreases until the gas is unshielded enough (at
A${}_{V,evap}$) that all surface material is returned instantaneously to
the gas phase. The assumption of instantaneous evaporation is motivated
in this
context by the high radiation fields involved: the Viti et al. (2003) models
imply that the enhanced molecular
condensations ahead of Herbig-Haro objects, where the radiation field is higher than ambient
but certainly lower than the radiation fields considered here, can only come from evaporated icy mantles which are rapidly injected into the gas phase.
Thus the free parameters are n${}_{f}$, FR, $\chi$, A${}_{V,evap}$, and
v. A further free parameter is the fraction of CO converted into
methanol on the surface of the grains. However, on the basis of
CH${}_{3}$OH absorption observations (Menten et al. 1986) and OH maser models
(e.g. Gray et al. 1992) we know that CH${}_{3}$OH has an abundance comparable to
or even greater than that of OH. There is no known gas phase mechanism at low
temperatures that produces quantities of CH${}_{3}$OH comparable to those
found in CH${}_{3}$OH maser sources. Surface chemistry is almost certainly
involved in the production of CH${}_{3}$OH.
By running a model (Model 0, see Table 1) which simulates the maser
environment (n${}_{H}$ = 10${}^{7}$ cm${}^{-3}$, T = 50 K; $\chi$ =
3$\times$10${}^{5}$) we found that the CH${}_{3}$OH column density is comparable to
the OH column density only if at least 25% of the carbon monoxide sticking
on the grains is converted into methanol. Hence, for all the models presented
here we have assumed that 25% of CO is converted into CH${}_{3}$OH on grains.
The dynamical description that we have used is equally applicable
to a situation in which, in the frame of reference of the star, the
ionization front is expanding at speed v towards a gas that is
motionless, or to a situation in which the gravity of the star causes
gas to fall at a speed of v towards the star and approach an
ionization front that is not moving in the star’s frame (cf. Keto
2002). In the Keto (2002) dynamical model of an HII region, infall through a
static front would maintain the overall dynamical and chemical structures
near the front for close to the entire lifetime of an HII region.
We have explored a large parameter space resulting in 33
models. However, after a preliminary analysis, and for simplicity, we
decided to list here only the most plausible ones (see Table 1).
Note that we have retained only those models where Phase II begins with highly
depleted material, which we believe is quite realistic because some dense cores, in
regions where high mass stars have not started to form, have CO fractional abundances
that are close to two orders of magnitude below those elsewhere (e.g. Caselli et al. 2002).
Those cores almost certainly form on a timescale comparable to that of the free-fall time and may well remain quiescent for some time. The regions that we are modelling here have densities that are within a factor of a few to ten of those of the dense cores mentioned above and may well have dynamical and chemical histories similar to those of the dense cores as well.
3 Results and Discussion
Bourke et al. (2001) derived column densities for OH in the range of
4$\times$10${}^{14}$ to 3$\times$10${}^{15}$ cm${}^{-2}$. For many models
for which FR is 25% or 50% (not shown), the OH column density
between A${}_{V}$ = 10 and A${}_{V}$ = A${}_{V,evap}$ is greater than the OH column density for
A${}_{V}$ less than A${}_{V,evap}$. We wish to examine the possibility that
the absorption occurs primarily near the ionization front. Thus, we
consider here only models for which FR = 100%.
Table 1 gives values of the parameters specifying each model. Table 2
gives the column density of various species between A${}_{V}$ = 10 mags
(the beginning of Phase II) and A${}_{V}$ equal to the value given in the
final column; in Figure 1 the fractional abundances of OH for selected
models are shown as a function of A${}_{V}$, always after the grain mantles
have evaporated, up to A${}_{V}$ $\sim$ 3.5 mags, corresponding to a
post-evaporation timescale of the order of 3$\times$10${}^{4}$–3$\times$10${}^{5}$ years.
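The post-evaporation timescales quoted above follow from Eq. (1) together with the constant drift speed v. A minimal sketch, with illustrative assumed values for the extinction drop, density and velocity (not the exact parameters of any listed model):

```python
# Rough sketch: time for gas drifting at constant speed v to traverse a
# given drop in A_V, converting extinction to path length via Eq. (1).
# delta_av, n_h and v_kms below are illustrative assumptions.

def drift_time_yr(delta_av, n_h, v_kms):
    z_cm = delta_av * 1.6e21 / n_h    # path length from Eq. (1), in cm
    t_s = z_cm / (v_kms * 1.0e5)      # velocity converted to cm/s
    return t_s / 3.156e7              # seconds per year

# e.g. a 2.5 mag drop at n_H = 1e4 cm^-3 and v = 1 km/s
t = drift_time_yr(delta_av=2.5, n_h=1.0e4, v_kms=1.0)   # ~1e5 yr
```

Denser gas or a smaller extinction drop shortens the timescale proportionally, consistent with the quoted 3$\times$10${}^{4}$–3$\times$10${}^{5}$ year range.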
From Figure 1, we see that soon after evaporation, OH is very abundant
for all models: this is due to the evaporation of water, which is
efficiently photodissociated to form OH. The OH is then itself destroyed,
also by photodissociation. We note that a high radiation field (e.g. Model 6) or a
low A${}_{V,evap}$ (e.g. Model 8) accelerates the destruction of OH. Moreover,
a further consequence of a high $\chi$ is the very fast destruction of most
other gas phase species.
From Table 2, we note that Models 6, 7 and 8 are the only ones for
which the OH column densities from A${}_{V}$ = 0 to 10 mags are comparable
to those measured by Bourke et al. (2001). For all three of these
models v = 1 km s${}^{-1}$. The values of $\chi$ are 3$\times$10${}^{4}$ and
3$\times$10${}^{3}$, which are within the range expected for early B and
late O type stars at the distances that the ionization fronts of
HII regions are from the central stars. The sensitivity
of the results to the adoption of different values of A${}_{V,evap}$ is
demonstrated by a comparison of the results for Models 7 and
8. A${}_{V,evap}$ has values of 6 and 5, respectively, for those
models. A value of A${}_{V,evap}$ similar to these is compatible with
the inferences drawn by Viti et al. (2003) in their modelling of
chemistry triggered by UV radiation emitted in shocks associated with
Herbig-Haro objects.
Little contribution to the OH column density comes from the region
where A${}_{V}$ is less than 4 mags. Inspection of Figure 1 of Hartquist &
Sternberg (1991) shows that the OH in the present models is, thus, in
regions where the temperature does not greatly exceed 10 K: this
inference can be drawn solely from the high density results from Hartquist &
Sternberg (1991) because for a fixed value of $\chi$/n, the temperature, as a function of A${}_{V}$, does not vary significantly at A${}_{V}$ $>$ 2 mags. For
fixed A${}_{V}$ $<$ 1 mag and for fixed $\chi$/n, the temperature increases with density due mostly to the collisional de-excitation of radiatively pumped H${}_{2}$ becoming more important with increasing density. Nevertheless, in order to
confirm this inference, we ran two test models with the UCL PDR code
(Bell et al. 2005; Papadopoulos et al. 2002) at densities of 10${}^{4}$ and 10${}^{6}$ cm${}^{-3}$.
The results of these tests are shown in Figure 2. From this figure,
it is clear that, while the temperature is high at the outer
edge, at A${}_{V}$ $>$ 4 mags, the temperature
has values well below 100K and close enough to 10 K for this assumption to be appropriate for the chemical calculations.
Note that the UCL PDR code and the chemical code
used for the calculation of the grids reported in this paper are two versions of the same
basic code. The main difference is, of course, in the treatment of the temperature:
it is given
as an input in the chemical code, while
it is self-consistently computed in the UCL PDR code. The latter code has been recently benchmarked against many other PDR codes
(Roellig et al. in prep).
Clearly the highest fractional abundances of OH and of other species
are obtained only in a narrow A${}_{V}$ range near A${}_{V,evap}$. This
result justifies our adoption of plane parallel geometry.
If the observed OH towards the Bourke et al. sources is indeed caused
by grain evaporation in parcels of gas moving toward the ionization
front, then it would be desirable to find other tracers of this
process. In particular, we note that CH${}_{3}$OH should also be enhanced
as a direct consequence of grain evaporation. From Table 2, it is clear
that CH${}_{3}$OH is always less abundant than OH for models 6 to 8, though
in Model 0, which gives a column density of OH in the range of those
in OH masers, the CH${}_{3}$OH and OH column densities are similar. For models
6 to 8, the highest column density of CH${}_{3}$OH is reached for Model 7,
while among these three models the column densities of OH and CH${}_{3}$OH differ by the
smallest factor in Model 8.
4 Conclusions
In this paper we have put forward and explored the possibility that
the OH absorption observed toward HII regions is due to the
evaporation of grain mantles primarily near the HII regions’ ionization
fronts.
We show that it is possible for most of the detected OH to be in such
locations provided that the grain mantles evaporate $before$ the gas
becomes completely unshielded to the strong radiation fields typical
of these environments. In order to test this idea, additional absorption
observations are desirable for the lines-of-sight for which
Zeeman measurements of magnetic fields
are made. They should be
made in the lines of species that are often not abundant in cold dark
cloud material but are abundant in the models presented here that give
OH column densities near those measured by Bourke et al. (2001). In
this respect CH${}_{3}$OH and H${}_{2}$S are particularly promising species,
although a high column density of methanol is only obtained if a
substantial amount of CO is converted into CH${}_{3}$OH on the grains
before evaporation.
Methanol is routinely observed towards regions of high mass star
formation, particularly towards UCHII regions, where it is mainly detected
in maser emission (e.g. Walsh et al. 1997), and hot cores.
H${}_{2}$CO is also regularly observed in absorption,
and the model results suggest that it too would be a good candidate
for observations in appropriate directions.
In fact, Downes et al. (1980) reported low spatial resolution observations
of H${}_{2}$CO towards several galactic sources and found that, at least in some
cases, it is seen at the same velocity as the OH absorption.
The high abundances of CH${}_{3}$OH, H${}_{2}$S and H${}_{2}$CO are all a consequence of hydrogenation of simpler species on the grains during the collapse phase; once A${}_{V,evap}$ is reached,
hydrogenated species in the gas phase are enhanced. In the case of methanol, there is an additional contribution from the high radiation field which causes
a high abundance of CH${}_{3}^{+}$; the latter efficiently reacts with water (also enhanced on the grains during the collapse phase) and forms the ion CH${}_{3}$OH${}_{2}^{+}$
which then produces methanol via electronic recombination.
5 Acknowledgements
SV acknowledges individual financial support from a PPARC Advanced Fellowship.
The collaboration was supported by a PPARC Visitors Grant held in Leeds.
6 References
Allen, A., Shu, F. H. 2000, ApJ, 536, 368
Bell, T., Viti, S., Williams, D. A., Crawford, I. A., Price, R. J. 2005, MNRAS, 357, 961
Bourke, T. L., Myers, P. C., Robinson, G., Hyland, R. 2001, ApJ, 554, 916
Caselli, P., Walmsley, C. M., Zucconi, A., Tafalla, M., Dore, L., Myers, P. C., 2002, ApJ, 565, 344
Crutcher, R. M. 1999, ApJ, 520, 706
Downes, A. J. B., Wilson, T. L., Bieging J., Wink, J., 1980, A&AS, 40, 379
Elitzur, M., de Jong, T. 1978, A&A, 67, 323
Gray, M. D, Field, D., Doel, R. C. 1992, A&A, 262, 555
Habing, H. J. 1968, BAN, 20, 177
Hartquist, T. W., Sternberg, A. 1991, MNRAS, 248, 48
Hartquist, T. W., Menten, K. M., Lepp, S., Dalgarno, A. 1995, MNRAS, 272,
184
Keto, E. 2002, ApJ, 580, 980
Le Teuff, Y. H., Millar, T. J., Markwick, A. J. 2000, A&AS, 146, 157
Menten, K. M., Walmsley, C. M., Henkel, C., Wilson, T. L. 1986, A&A, 157, 318
Menten, K. M., Reid, M. J., Pratap, P., Moran, J. M., Wilson, T. L. 1992,
ApJ, 401, L39
Millar, T. J., Farquhar, P. R. A., Willacy, K. 1997, A&AS, 121, 139
Mouschovias, T. Ch., Spitzer, L. Jr. 1976, ApJ, 210, 236
Papadopoulos, P, P, Thi, W.-F.; Viti, S, 2002, ApJ, 579, 270
Shu, F. H., Adams, F. C., Lizano, S. 1987, ARA&A, 25, 23
Shu, F. H., Allen, A., Shang, H., Ostriker, E. C., Li, Z.-Y. 1999, in The
Origin of Stars and Planetary Systems, ed. C. J. Lada & N. D. Kylafis
(Dordrecht: Kluwer), 193
Stantcheva, T., Caselli, P., Herbst, E. 2001, A&A, 375, 673
Viti, S., Girart, J. M., Garrod, R., Williams, D. A., Estalella, R. 2003,
A&A, 399, 187
Walsh, A. J., Hyland A. R., Robinson G., Burton M. G., 1997, MNRAS, 291, 261
Williams, J. P., Blitz, L., Stark, A. A., 1995, ApJ, 451, 252
THE HUBBLE CONSTANT
Wendy L. Freedman and Barry F. Madore
Carnegie Observatories, 813 Santa Barbara St.,
Pasadena, CA 91101, USA; email: wendy@obs.carnegiescience.edu, barry@obs.carnegiescience.edu
Abstract
Considerable progress has been made in determining the Hubble constant
over the past two decades. We discuss the cosmological context and
importance of an accurate measurement of the Hubble constant, and
focus on six high-precision distance-determination methods: Cepheids,
tip of the red giant branch, maser galaxies, surface-brightness
fluctuations, the Tully-Fisher relation and Type Ia supernovae. We
discuss in detail known systematic errors in the measurement of galaxy
distances and how to minimize them. Our best current estimate of the
Hubble constant is 73 $\pm$2 (random) $\pm$4
(systematic) km s${}^{-1}$ Mpc${}^{-1}$. The importance of improved
accuracy in the Hubble constant will increase over the next decade
with new missions and experiments designed to increase the precision
in other cosmological parameters. We outline the steps that will be
required to deliver a value of the Hubble constant to 2% systematic
uncertainty and discuss the constraints on other cosmological
parameters that will then be possible with such accuracy.
keywords:
Cosmology, Distance Scale, Cepheids, Supernovae, Age of Universe
Annu. Rev. Astron. Astrophys. 2010, Vol. 48
1 INTRODUCTION
In 1929 the Carnegie astronomer Edwin Hubble published a linear
correlation between the apparent distances to galaxies and their
recessional velocities. This simple plot provided evidence that our
Universe is in a state of expansion, a discovery that still stands as
one of the most profound of the twentieth century (Hubble 1929a). This
result had been anticipated earlier by Lemaître (1927), who first
provided a mathematical solution for an expanding universe, and noted
that it provided a natural explanation for the observed receding
velocities of galaxies. These results were published in the Annals
of the Scientific Society of Brussels (in French), and were not widely
known.
Using photographic data obtained at the 100-inch Hooker telescope
situated at Mount Wilson, CA, Hubble measured the distances to six
galaxies in the Local Group using the Period-Luminosity relation
(hereafter, the Leavitt Law) for Cepheid variables. He then extended
the sample to an additional 18 galaxies reaching as far as the Virgo
cluster, assuming a constant upper limit to the brightest blue stars
(HII regions) in these galaxies. Combining these distances with
published radial velocity measurements (corrected for solar motion)
Hubble constructed Figure 1. The slope of the velocity versus distance
relation yields the Hubble constant, which parameterizes the current
expansion rate of the Universe.
The Hubble constant is usually expressed in units of kilometers per
second per megaparsec, and sets the cosmic distance scale for the
present Universe. The inverse of the Hubble constant has dimensions
of time. Locally, the Hubble law relates the distance to an object
and its redshift: cz = H${}_{0}$d, where d is the distance to the object
and z is its redshift. The Hubble law relating the distance and the
redshift holds in any Friedmann-Lemaître-Robertson-Walker cosmology
(see §2) for redshifts less than unity. At
greater redshifts, the distance-redshift relationship for such a
cosmology also depends on the energy densities of matter and dark
energy. The exact relation between the expansion age and the Hubble
constant depends on the nature of the mass-energy content of the
Universe, as discussed further in §2 and §6. In a uniformly expanding universe, the Hubble
parameter, H(t), changes as a function of time; H${}_{0}$, referred to
as the Hubble constant, is the value at the current time, $t_{0}$.
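As a quick numerical illustration of the dimensional statement above (our sketch, not from the review; the function name is ours and the conversion constants are the standard cm-g-s values), inverting the review's best estimate H${}_{0}$ = 73 km s${}^{-1}$ Mpc${}^{-1}$ yields a characteristic "Hubble time" of roughly 13 Gyr:

```python
# Sketch: the inverse of the Hubble constant has dimensions of time.
# Standard conversion factors; h0 value taken from the review's abstract.

KM_PER_MPC = 3.086e19    # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

def hubble_time_gyr(h0_kms_mpc):
    """Characteristic time 1/H0 in Gyr for H0 in km/s/Mpc."""
    h0_per_s = h0_kms_mpc / KM_PER_MPC   # H0 expressed in s^-1
    return 1.0 / h0_per_s / SEC_PER_GYR

t_h = hubble_time_gyr(73.0)   # ~13.4 Gyr
```

The actual expansion age differs from 1/H${}_{0}$ by a factor set by the mass-energy content, as discussed in §2 and §6.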
Measurement of the Hubble constant has been an active subject since
Hubble’s original measurements of the distances to galaxies: the
deceptively simple correlation between galaxy distance and recession
velocity discovered eighty years ago did not foreshadow how much of a
challenge large systematic uncertainties would pose in obtaining an
accurate value for the Hubble constant. Only recently have
improvements in linear, solid-state detectors, the launch of the
Hubble Space Telescope (HST), and the development of several different
methods for measuring distances led to a convergence on its current
value.
Determining an accurate value for H${}_{0}$ was one of the primary
motivations for building HST. In the early 1980’s, the first director
of the Space Telescope Science Institute, Riccardo Giacconi, convened
a series of panels to propose observational programs of significant
impact requiring large amounts of HST observing time. He was
concerned that in the course of any regular time allocation process
there would be reluctance to set aside sufficient time to complete
such large projects in a timely manner. For decades a ‘factor-of-two’
controversy persisted, with values of the Hubble constant falling
between 50 and 100 km s${}^{-1}$ Mpc${}^{-1}$. A goal of 10% accuracy
for H${}_{o}$ was designated as one of HST’s three “Key Projects”. (The
other two were a study of the intergalactic medium using quasar
absorption lines, and a “medium-deep” survey of galaxies.)
This review is organized as follows: We first give a brief overview of
the cosmological context for measurements of the Hubble constant. We
discuss in some detail methods for measuring distances to galaxies,
specifically Cepheids, the tip of the red giant branch (TRGB), masers,
the Tully-Fisher relation and Type Ia supernovae (SNe Ia). We then
turn to a discussion of H${}_{o}$, its systematic uncertainties, other
methods for measuring H${}_{o}$, and future measurements of the Hubble
constant. Our goal is to describe the recent developments that have
resulted in a convergence to better than 10% accuracy in measurements
of the Hubble constant, and to outline how future data can improve
this accuracy. For wide-ranging previous reviews of this subject,
readers are referred to those of Hodge (1982), Huchra (1992), Jacoby
et al. (1992), van den Bergh (1992), Jackson (2007), and Tammann,
Sandage & Reindl (2008). An extensive monograph by Rowan-Robinson
(1985) details the history of the subject as it stood twenty-five
years ago.
2 EXPANSION OF THE UNIVERSE: THE COSMOLOGICAL CONTEXT
Excellent introductions to the subject of cosmology can be found in
Kolb & Turner (1990) and Dodelson (2003). We give a brief
description here to provide the basis for the nomenclature used
throughout this review. The expansion of a homogeneous and isotropic
universe can be described by a Friedmann-Lemaitre-Robertson-Walker
(FLRW) cosmology, which is characterized by parameters that describe
the expansion, the global geometry, and the general composition of the
universe. These parameters are all related via the Friedmann
equation, derived from the Einstein general relativity field
equations:
$$\displaystyle H^{2}(t)=\left({\dot{a}\over a}\right)^{2}={8\pi G\over 3}\sum_{i}\rho_{i}(t)-{k\over a^{2}}$$
(1)
where $H(t)$ is the expansion rate, G is the Newtonian gravitational
constant, $a(t)$ is the cosmic scale factor characterizing the
relative size of the universe at time $t$ to the present scale,
$\rho_{i}(t)$ are the individual components of the matter-energy
density, and $k$ (with values of +1, 0, or -1) describes the global
geometry of the universe. The density $\rho_{i}$ characterizes the
matter-energy composition of the universe: the sum of the densities of
baryons, cold dark matter, and hot dark matter, and the contribution
from dark energy. Dividing by H${}^{2}$, we may rewrite the Friedmann
equation as $\Omega_{total}-1=\Omega_{k}=k/(a^{2}H^{2})$. For
the case of a spatially flat universe ($k=0$), $\Omega_{total}=1$.
In a matter-dominated universe, the expansion velocity of the Universe
slows down over time owing to the attractive force of
gravity. However, a decade ago two independent groups (Perlmutter et
al. 1999; Riess et al. 1998) found that supernovae at z$\sim$0.5
appear to be about 10% fainter than those observed locally,
consistent instead with models in which the expansion velocity is
increasing; i.e., a universe that is accelerating in its
expansion. Combined with independent estimates of the matter density,
these results are consistent with a universe in which one third of the
overall density is in the form of matter (ordinary plus dark), and two
thirds is in a form having a large, negative pressure, termed dark
energy. In this current standard model the expansion rate of the
Universe is given by
$$\displaystyle H^{2}(z)/H_{\circ}^{2}=\Omega_{matter}(1+z)^{3}+\Omega_{DE}(1+z)^{3(1+w)}$$
(2)
where $\Omega_{matter}$ and $\Omega_{DE}$ refer to the densities of
(ordinary, cold and hot dark) matter and dark energy, respectively,
and $w=p/\rho$ is the equation of state of the dark energy, the
ratio of pressure to energy density. Recent observations by the
Wilkinson Microwave Anisotropy Probe (WMAP), based on entirely
independent physics, give results consistent with the supernova data
(Komatsu et al. 2009; Dunkley et al. 2009). Under the assumption of a
flat universe, the current observations of distant supernovae and
measurements by the WMAP satellite are consistent with a cosmological
model where $\Omega_{matter}$ = 0.3, $\Omega_{DE}$ = 0.7, and $w=-1$. The observations are inconsistent with cosmological models
without dark energy.
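Equation 2 is straightforward to evaluate numerically. The following sketch assumes the flat concordance parameters quoted above ($\Omega_{matter}$ = 0.3, $\Omega_{DE}$ = 0.7, $w=-1$); the function and defaults are illustrative, not part of the analyses discussed here.

```python
# Sketch of Eq. 2: H^2(z)/H0^2 for a flat universe with matter plus
# dark energy of constant equation of state w.
def e_squared(z, omega_m=0.3, omega_de=0.7, w=-1.0):
    """Return H^2(z)/H0^2 = Om*(1+z)^3 + Ode*(1+z)^(3(1+w))."""
    return omega_m * (1.0 + z)**3 + omega_de * (1.0 + z)**(3.0 * (1.0 + w))

# For w = -1 the dark-energy term is constant, so the expansion is
# matter-dominated at high z and dark-energy-dominated at low z.
```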
Another critical equation from general relativity involving the second
derivative of the scale factor is:
$$\displaystyle\ddot{a}/a=-{4\pi G\over 3}\sum_{i}(\rho_{i}+3p_{i})$$
(3)
where the sum is over the different contributions to the mass-energy
density of the Universe. According to this equation, both energy and
pressure govern the dynamics of the Universe, unlike the case of
Newtonian gravity where there is no pressure term. It also allows the
possibility of negative pressure, resulting in an effective repulsive
gravity, consistent with the observations of the acceleration.
Any component of the mass-energy density can be parameterized by its
ratio of pressure to energy density, $w$. For ordinary matter $w$ =
0, for radiation $w$ = 1/3, and for the cosmological constant $w$ =
-1. The effect on $\ddot{a}/a$ of an individual component is $-{4\pi G\over 3}\rho_{i}(1+3w_{i})$. If $w<-1/3$ that component will drive an
acceleration (positive $\ddot{a}$) of the Universe. The time evolution
of the equation of state is unknown; a convenient, simple
parameterization is $w(a)=w_{o}+(1-a)w_{a}$, where $w_{o}$ characterizes
the current value of $w$ and $w_{a}$ its derivative.
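The sign rule stated above (a component accelerates the expansion when $w<-1/3$) can be encoded directly. This is a sketch of the bookkeeping only, not a dynamical calculation:

```python
# Each component contributes a term proportional to -(1 + 3w) * rho_i
# to the acceleration a''/a (Eq. 3); the contribution is positive
# (accelerating) only when w < -1/3.
def accelerates(w):
    """True if a component with equation of state w drives a'' > 0."""
    return -(1.0 + 3.0 * w) > 0.0

# matter (w = 0) and radiation (w = 1/3) decelerate;
# a cosmological constant (w = -1) accelerates.
```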
3 MEASUREMENT OF DISTANCES
In making measurements of extragalactic distances, objects are being
observed at a time when the scale factor of the Universe, $a$, was
smaller, and the age of the Universe, $t$, was younger than at
present. Measuring the cosmic expansion generally involves use of one
of two types of cosmological distances: the luminosity distance,
$$\displaystyle d_{\mathrm{L}}=\sqrt{L\over{4\pi F}}$$
(4)
which relates the observed flux (integrated over all frequencies), $F$, of an object to its intrinsic
luminosity, $L$, emitted in its rest frame; and the angular
diameter distance,
$$\displaystyle d_{\mathrm{A}}={D\over\theta}$$
(5)
which relates the apparent angular size of an object in radians,
$\theta$, to its proper size, $D$. The luminosity and angular
diameter distances are related by:
$$\displaystyle d_{\mathrm{L}}=(1+z)^{2}d_{\mathrm{A}}.$$
(6)
The distance modulus, $\mu$, is related to the luminosity distance as
follows:
$$\displaystyle\mu\equiv m-M=5~{}log~{}d_{\mathrm{L}}-5$$
(7)
where m and M are the apparent and absolute magnitudes of the objects,
respectively, and d${}_{\mathrm{L}}$ is in units of parsecs.
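Equations 6 and 7 amount to a few lines of code. This is a minimal sketch of the conversions, with $d_{\mathrm{L}}$ in parsecs:

```python
import math

# Sketch of Eqs. 6-7: distance modulus and the relation between
# luminosity and angular diameter distance.
def distance_modulus(d_l_pc):
    """mu = m - M = 5 log10(d_L) - 5, with d_L in parsecs."""
    return 5.0 * math.log10(d_l_pc) - 5.0

def angular_diameter_distance(d_l, z):
    """d_A = d_L / (1 + z)^2 (Eq. 6 inverted)."""
    return d_l / (1.0 + z)**2

# By construction mu = 0 at d_L = 10 pc, and mu = 25 at 1 Mpc.
```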
The requirements for measuring an accurate value of H${}_{o}$ are simple
to list in principle, but are extremely difficult to meet in
practice. The measurement of radial velocities from the displacement
of spectral lines is straightforward; the challenge is to measure
accurate distances. Distance measurements must be obtained far enough
away to probe the smooth Hubble expansion (i.e., where the random
velocities induced by gravitational interactions with neighboring
galaxies are small relative to the Hubble velocity), and nearby enough
to calibrate the absolute, not simply the relative distance scale.
The objects under study also need to be sufficiently abundant that
their statistical uncertainties do not dominate the error budget.
Ideally the method has a solid physical underpinning, and is
established to have high internal accuracy, amenable to empirical
tests for systematic errors.
We discuss in detail here three high-precision methods for measuring
distances to nearby galaxies: Cepheids, the tip of the red giant
branch (TRGB) method, and maser galaxies. For more distant galaxies,
we will additionally discuss three methods in detail: the Tully-Fisher
(TF) relation for spiral galaxies, the surface brightness fluctuation
(SBF) method and the maximum luminosities of Type Ia supernovae
(SNe Ia). Although maser distances have so far only been published
for two galaxies (NGC 4258 and UGC 3789), this method has considerable
potential, perhaps even at distances that probe the Hubble flow
directly.
Over the preceding decades a large number of other “distance
indicators” have been explored and applied with varying degrees of
success, often over relatively restricted ranges of distance. Main
sequence fitting, red giant “clump” stars, RR Lyrae stars, the level
of the horizontal branch, Mira variables, novae and planetary nebula
luminosity functions (PNLF), globular cluster luminosity functions
(GCLF), as well as red and blue supergiant stars all fall into this
class. Some, like the RR Lyrae stars, have provided crucial tests for
consistency of zero points but cannot themselves reach very far beyond
the Local Group because of their relatively faint intrinsic
luminosities. The reader is referred to recent papers on the SN II
distance scale (Dessart & Hiller 2005); PNLF (Ciardullo et al. 2002);
and Fundamental Plane (FP; Blakeslee et al. 2002) and references
therein.
Our goal here is not to provide an exhaustive review of all distance
determination methods, but rather to focus on a few methods with
demonstrably low dispersion, some currently understood physical basis,
and with sufficient overlap with other methods to test quantitatively
the accuracy of the calibration, and level of systematic errors for
the determination of H${}_{o}$. Before turning to a discussion of methods
for measuring distances, we discuss the general issue of interstellar
extinction.
3.0.1 Interstellar Extinction
Interstellar extinction will systematically decrease a star or
galaxy’s apparent luminosity. Thus, if extinction is not corrected
for it will result in a derivation of distance that is systematically
too large. Dust may be present along the line of sight either within
our Milky Way galaxy and/or along the same extended line of sight
within the galaxy under study.
Two main observational paths to correct for interstellar extinction
have been pursued: (1) make observations in at least two wavelength
bands and, using the fact that extinction is a known function of
wavelength, solve explicitly for the distance and
color-excess/extinction effects; or (2) observe at the longest
wavelengths practical so as to implicitly minimize the extinction
effects. The former assumes prior knowledge of the interstellar
extinction law and carries the implicit assumption that the extinction
law is universal. The latter path is conceptually more robust, given
that it simply makes use of the (empirically-established) fact that
extinction decreases with increasing wavelength. However, working at
longer and longer wavelengths has been technically more challenging so
this path has taken longer in coming to fruition.
From studies of Galactic O & B stars it is well-established that
interstellar extinction is wavelength dependent, and from the optical
to mid-infrared wavelengths it is a generally decreasing function of
increasing wavelength (see Cardelli, Clayton & Mathis 1989; Draine et
al. 2003 and references therein for empirical and theoretical
considerations). Limited studies of stars in external galaxies
(primarily the LMC and SMC) support this view, with major departures
being confined to the ultraviolet region of the spectrum (particularly
near 2200Å). Both for practical reasons (that is, detector
sensitivity) and because of the nature of interstellar extinction, the
majority of distance-scale applications have avoided the ultraviolet,
so the most blatant changes in the interstellar extinction curve have
been of little practical concern. At another extreme, the
universality of the longer-wavelength (optical through infrared)
portion of the extinction curve appears to break down in regions of
intense star formation and extremely high optical depths within the
Milky Way. However, the general (diffuse) interstellar extinction
curve, as parameterized by ratios of total-to-selective absorption,
such as $R_{V}$ = $A_{V}/E(B-V)$, appears to be much more stable from
region to region. Fortunately Cepheids, TRGB stars, and supernovae are
generally not found deeply embedded in very high optical-depth dust,
but are sufficiently displaced from their original sites of star
formation that they are dimmed mostly by the general, diffuse
interstellar extinction.
3.1 Cepheid Distance Scale
Since the discovery of the Leavitt Law (Leavitt, 1908; Leavitt &
Pickering 1912) and its use by Hubble to measure the distances to the
Local Group galaxies, NGC 6822 (Hubble 1925), M33 (Hubble 1926) and
M31 (Hubble 1929b), Cepheid variables have remained a widely
applicable and powerful method for measuring distances to nearby
galaxies. In 2009, the American Astronomical Society Council passed a
resolution recognizing the 100th anniversary of Henrietta Leavitt’s
first presentation of the Cepheid Period-Luminosity relation (Leavitt
1908). The Council noted that it was pleased to learn of a resolution
adopted by the organizers of the Leavitt symposium, held in November,
2008 at the Harvard-Smithsonian Center for Astrophysics, Cambridge,
MA. There, it was suggested that the Cepheid Period-Luminosity
relation be referred to as the Leavitt Law in recognition of Leavitt’s
fundamental discovery, and we do so here.
Cepheids are observed to pulsate with periods ranging from
2 to over 100 days, and their intrinsic brightnesses correlate with
those periods, spanning $-6<M_{V}<-2$ mag. The ease of
discovery and identification of these bright, variable supergiants
make them powerful distance indicators. Detailed reviews of the
Cepheid distance scale and its calibration can be found in Madore &
Freedman (1991), Sandage & Tammann (2006), Fouque et al. (2007) and
Barnes (2009). A review of the history of the subject is given
by Fernie (1969).
There are many steps that must be taken in applying Cepheids to the
extragalactic distance scale. The Cepheids must be identified against
the background of fainter, resolved and unresolved stars that
contribute to the surrounding light of the host galaxy. Overcoming
crowding and confusion is the key to the successful discovery,
measurement and use of Cepheids in galaxies beyond the Local Group.
From the ground, atmospheric turbulence degrades the image resolution,
decreasing the contrast of point sources against the background. In
space the resolution limit is set by the aperture of the telescope and
the operating wavelengths of the detectors. HST gives a factor of ten
increased resolution over most groundbased telescopes of comparable
and larger aperture.
As higher precision data have been accumulated for Cepheids in greater
numbers and in different physical environments, it has become possible
to search for and investigate a variety of lower level, but
increasingly important, systematics affecting the Leavitt Law. Below
we briefly discuss these complicating effects (reddening and
metallicity, in specific) and their uncertainties, and quantify their
impact on the extragalactic distance scale. We then elaborate on
methods for correcting for and/or mitigating their impact on distance
determinations. But first we give an overview of the physical basis
for the Cepheid period-luminosity relation in general terms.
3.1.1 Underlying Physics
The basic physics connecting the luminosity and color of a Cepheid to
its period is well understood. Using the Stefan-Boltzmann law
$$\displaystyle L=4\pi R^{2}\sigma T_{e}^{4}$$
(8)
the bolometric luminosities, $L$, of all stars, including Cepheids,
can be derived. Expressed in magnitudes, the Stefan-Boltzmann Law
becomes
$$\displaystyle M_{BOL}=-5~{}log~{}R-10~{}log~{}T_{e}+C.$$
(9)
Hydrostatic equilibrium can be achieved for long periods of time along
the core-helium-burning main sequence. As a result stars are
constrained to reside there most of the time, thereby bounding the
permitted values of the independent radius and temperature variables
for stars in the M${}_{BOL}$ - $logT_{e}$ plane.
If $log~{}T_{e}$ is mapped into an observable intrinsic color (i.e.,
$(B-V)_{o}$ or $(V-I)_{o}$) and radius is mapped into an observable period
(through a period-mean-density relation), the period-luminosity-color
(PLC) relation for Cepheids can be determined (e.g., Sandage 1958;
Sandage & Gratton 1963; and Sandage & Tammann 1968). In its
linearized form for pulsating variables, the Stefan-Boltzmann law
takes on the following form of the PLC: $M_{V}=\alpha~{}log~{}P+\beta(B-V)_{o}+\gamma$.
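Schematically, the mapping from radius to period proceeds through the period-mean-density relation. The following is an illustrative sketch (with $Q$ the pulsation constant and the mass term left explicit), not the calibration used later in this review:

```latex
% period--mean-density relation (Q = pulsation constant):
P\sqrt{\bar{\rho}} = Q, \qquad \bar{\rho} \propto M R^{-3}
\;\Longrightarrow\;
\log R = \tfrac{2}{3}\log P + \tfrac{1}{3}\log M + \mathrm{const}.
% substituting into Eq. 9, M_{BOL} = -5 log R - 10 log T_e + C:
M_{BOL} = -\tfrac{10}{3}\log P - \tfrac{5}{3}\log M - 10\log T_{e} + C'.
```

With the mass term absorbed via a mass-luminosity relation and $log~{}T_{e}$ mapped to an intrinsic color, this reproduces the PLC form quoted above.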
Cepheid pulsation occurs because of the changing atmospheric opacity
with temperature in the doubly-ionized helium zone. This zone acts
like a heat engine and valve mechanism. During the portion of the
cycle when the ionization layer is opaque to radiation that layer
traps energy resulting in an increase in its internal pressure. This
added pressure acts to elevate the layers of gas above it resulting
in the observed radial expansion. As the star expands it does work
against gravity and the gas cools. As it does so its temperature falls
back to a point where the doubly-ionized helium layer recombines and
becomes transparent again, thereby allowing more radiation to
pass. Without that added source of heating the local pressure drops,
the expansion stops, the star recollapses, and the cycle
repeats. The alternate trapping and releasing of energy in the helium
ionization layer ultimately gives rise to the periodic change in
radius, temperature and luminosity seen at the surface. Not all stars
are unstable to this mechanism. The cool (red) edge of the Cepheid
instability strip is thought to be controlled by the onset of
convection, which then prevents the helium ionization zone from
driving the pulsation. For hotter temperatures, the helium ionization
zone is located too far out in the atmosphere for significant
pulsations to occur. Further details can be found in the classic
stellar pulsation text book by Cox (1980).
Cepheids have been intensively modeled numerically, with increasingly
sophisticated hydrodynamical codes (for a recent review see Buchler
2009). While continuing progress is being made, the challenges remain
formidable in following a dynamical atmosphere, and in modeling
convection with a time-dependent mixing length approximation. In
general, observational and theoretical period-luminosity-color
relations are in reasonable agreement (e.g., Caputo 2008). However, as
discussed in §3.1.3, subtle effects (for example the
effect of metallicity on Cepheid luminosities and colors) remain
difficult to predict from first principles.
3.1.2 Cepheids and Interstellar Extinction
If one adopts a mean extinction law and applies it universally to all
Cepheids, regardless of their parent galaxy’s metallicity, then one
can use the observed colors and magnitudes of the Cepheids to correct
for the total line-of-sight extinction. If, for example, observations
are made at $V$ and $I$ wavelengths (as is commonly done with HST),
and the ratio of total-to-selective absorption $R_{VI}=A_{V}/E(V-I)$
is adopted a priori (e.g., Cardelli, Clayton & Mathis
1989), then one can form from the observed colors and magnitudes an
extinction-free, Wesenheit magnitude, W, (Madore 1982), defined by
$$\displaystyle W=V-R_{VI}\times(V-I)$$
(10)
as well as an intrinsic Wesenheit magnitude, W${}_{o}$
$$\displaystyle W_{o}=V_{o}-R_{VI}\times(V-I)_{o}.$$
(11)
By construction
$$\displaystyle W=V_{o}+A_{V}-R_{VI}\times(V-I)_{o}-R_{VI}\times E(V-I)$$
(12)
$$\displaystyle=V_{o}-R_{VI}(V-I)_{o}+A_{V}-R_{VI}\times E(V-I)$$
(13)
where V = $V_{o}$ + $A_{V}$ and (V-I) = $(V-I)_{o}$ + E(V-I), and $A_{V}=R_{VI}\times E(V-I)$, thereby reducing the last two terms to zero,
leaving $W=V_{o}-R_{VI}\times(V-I)_{o}$ which is equivalent to the
definition of $W_{o}$.
The value of W as constructed from observed data points is
numerically identical to the intrinsic (unreddened) value of the
Wesenheit function, $W_{o}$. Thus, W, for any given star, is dimmed
only by distance and (by its definition) it is unaffected by
extinction, again only to the degree that R is known and is
universal. W can be formed for any combination of
optical/near-infrared bandpasses.
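As a concrete check of the construction above, the sketch below forms W for a star before and after reddening. The photometry and the value $R_{VI}=2.45$ are assumed, illustrative numbers, not values taken from the text:

```python
# Sketch of Eq. 10: the Wesenheit magnitude W = V - R_VI * (V - I).
# R_VI = 2.45 and the magnitudes below are assumed, illustrative values.
R_VI = 2.45

def wesenheit(v, i, r=R_VI):
    """W is dimmed by distance but, by construction, not by reddening."""
    return v - r * (v - i)

# Redden a star by E(V-I) = 0.2 mag: A_V = R_VI * E(V-I) and
# A_I = A_V - E(V-I), so V and I dim by different amounts.
v0, i0 = 20.0, 19.0
ev_i = 0.2
v_red = v0 + R_VI * ev_i
i_red = i0 + (R_VI - 1.0) * ev_i
# wesenheit(v_red, i_red) equals wesenheit(v0, i0).
```

The cancellation holds exactly to the degree that the adopted R${}_{VI}$ matches the true reddening law along the line of sight.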
3.1.3 Metallicity
The atmospheres of stars like Cepheids, having effective temperatures
typical of G and K supergiants, are affected by changes in the
atmospheric metal abundance. There are additionally changes in the
overall stellar structure (the mass-radius relation) due to changes in
chemical composition. Thus, it is expected that the colors and
magnitudes of Cepheids, and their corresponding PL relations, should
be a function of metallicity. However, predicting the magnitude (and
even simply the sign of the effect) at either optical or even longer
wavelengths, has proven to be challenging: different theoretical
studies have led to a range of conclusions. We review below the
empirical evidence. For a comparison with recent theoretical studies
we refer the interested readers to papers by Sandage, Bell & Tripicco
(1999), Bono et al. (2008), Caputo (2008) and Romaniello et al. (2008,
2009).
Two tests of the metallicity sensitivity of the Cepheid PL relation
have been proposed. The first test uses measured radial metallicity
gradients within individual galaxies to provide a differential test in
which observed changes in the Cepheid zero point with radius are
ascribed to changes in metallicity. This test assumes that the
Cepheids and the HII regions (which calibrate the measured [O/H]
abundances) share the same metallicity at a given radius, and that
other factors are not contributing to a zero-point shift, such as
radially dependent crowding or changes of the extinction law with
radius, etc. The second test uses the difference between Cepheid and
TRGB distances for galaxies with both measurements and seeks a
correlation of these differences as a function of the Cepheid (i.e.,
HII region) metallicity.
The first test, leveraging metallicity gradients in individual
galaxies, has been undertaken for M31 (Freedman & Madore 1990), M101
(Kennicutt et al. 1998), NGC 4258 (Macri et al. 2006) and M33
(Scowcroft et al. 2009). The second test, comparing TRGB and Cepheid
distances, was first made by Lee, Freedman & Madore (1993). Udalski
et al. (2001) used a newly observed sample of Cepheids in IC 1613 in
comparison to a TRGB distance to that same galaxy, and concluded that,
in comparison with the SMC, LMC and NGC 6822 there was no metallicity
effect over a factor of two in metallicity at low mean metallicity.
An extensive cross comparison of Cepheid and TRGB distances including
high-metallicity systems is well summarized by Sakai et al. (2004).
Individual datasets and metallicity calibrations are still being
debated, but the general consensus is that for the reddening-free
W(VI) calibration of the Cepheid distance scale there is a metallicity
dependence that, once corrected for, increases the distance moduli of
higher metallicity Cepheids if their distances are first determined
using a lower metallicity (e.g., LMC) PL calibration. However, in a
different approach, Romaniello et al. (2008) have obtained direct
spectroscopic [Fe/H] abundances for a sample of Galactic, LMC and SMC
Cepheids.
They compare the Leavitt Law for samples of stars
with different mean metallicities and find a dependence of the V-band
residuals with [Fe/H] abundance that is in the opposite sense to these
previous determinations. Clearly the effect of metallicity on the
observed properties of Cepheids is still an active and on-going area
of research.
A remaining uncertainty at the end of the H${}_{o}$ Key Project
(described further in §4) was due to the fact that the majority of
Key Project galaxies have metallicities more comparable to the Milky
Way than to the LMC, which was used for the calibration. Below, in §3.1.4 we ameliorate this systematic error by
adopting a Galactic calibration provided by new trigonometric
parallaxes of Milky Way Cepheids, not available at the time of the Key
Project. By renormalizing to a high-metallicity (Galactic)
calibration for the Cepheids, metallicity effects are no longer a
major systematic, but rather a random error, whose size will decrease
with time as the sample size increases.
Based on the Cepheid
metallicity calibration of Sakai et al. (2004) (with adopted LMC and
Solar values for 12 + log (O/H) of 8.50 and 8.70, respectively; and a
metallicity slope of 0.25 mag/dex), we estimate the metallicity
correction in transforming from an LMC to a Galactic-based Cepheid
zero point to be 0.25 $\times$ 0.2 = 0.05 mag, with a residual scatter of
about $\pm$ 0.07 mag.
3.1.4 Galactic Cepheids with Trigonometic Parallaxes
An accurate trigonometric parallax calibration for Galactic Cepheids
has been long sought, but very difficult to achieve in practice. All
known classical (Galactic) Cepheids are more than 250 pc away:
therefore for direct distance estimates good to 10%, parallax
accuracies of $\pm$0.2 milliarcsec are required, necessitating space
observations. The Hipparcos satellite reported parallaxes for 200 of
the nearest Cepheids, but (with the exception of Polaris) even the
best of these were of very low signal-to-noise (Feast & Catchpole
1997).
Recent progress has come with the use of the Fine Guidance Sensor on
HST (Benedict et al. 2007), whereby parallaxes, in many cases accurate
to better than $\pm$10% for individual stars were obtained for 10
Cepheids, spanning the period range 3.7 to 35.6 days. We list the
distance moduli, errors, and distances for these Cepheids in Table
1. These nearby Cepheids span a range of distances from about 300 to
560 pc.
A significant systematic at this time is the calibration zero point.
Its value depends on only ten stars, each of which has a distance
uncertainty at the individual 10% level. Given the small sample size
of the Galactic calibrators, the error on their mean can be no better
than 3% (or $\pm$0.06 mag), which we adopt here as the newly revised
systematic error on the distance to the LMC discussed below, and on
the Cepheid zero point in general. In what follows, we adopt the zero
point based on the Galactic calibration, but retain the slope based on
the LMC, because the LMC sample is still much larger and therefore
statistically better defined. There has recently been discussion in
the literature about possible variations in the slope of the Leavitt
Law occurring around 10 days (see Ngeow, Kanbur & Nanthakumar (2008)
and references therein); however, Riess et al. (2009a) and Madore &
Freedman (2009) both find that when using W, the differences are not
statistically significant. Improvement of this calibration (both in
the slope and zero point) awaits a larger sample of (long-period)
Cepheids from the Global Astrometric Interferometer for Astrophysics
satellite (GAIA). We have adopted a zero-point calibration based both
on these HST parallax data and on the maser galaxy, NGC 4258
(§3.3), and present a revised value of H${}_{o}$ in §4.
3.1.5 The Distance to the Large Magellanic Cloud
Because of the abundance of known Cepheids in the Large Magellanic
Cloud this galaxy has historically played a central role in the
calibration of the Cepheid extragalactic distance scale. Several
thousand Cepheids have been identified and cataloged in the LMC
(Leavitt 1908; Alcock et al. 2000; Soszynski et al. 2008), all at
essentially the same distance. Specifically, the slope of the Leavitt
Law is both statistically and systematically better determined in the
LMC than it is for Cepheids in our own Galaxy. This is especially true
for the long-period end of the calibration where the extragalactic
samples in general are far better populated than the more restricted
Milky Way subset available in close proximity to the Sun. In Figure 2
we show the range of values of LMC distance moduli based on
non-Cepheid methods, published up to 2008. The median value of the
non-Cepheid distance moduli is 18.44$\pm$0.16 mag.
Based on the new results for direct geometric parallaxes to Galactic
Cepheids (Benedict et al. 2007) discussed in §3.1.4, we calibrate the sample of LMC Cepheids used
as fiducial for the HST Key Project. The new Galactic parallaxes now
allow a zero point to be obtained for the Leavitt Law. In Figure 3, we
show BVIJHK Leavitt Laws for the Galaxy and LMC calibrated with the
new parallaxes. As can be seen, the slope of the Leavitt Law
increases with increasing wavelength, with a corresponding decrease in
dispersion. In the past, because of the uncertainty in the Galactic
Cepheid calibration, a distance modulus to the LMC and the mean
Cepheid extinction were obtained using a combination of several
independent methods. Multi-wavelength Leavitt Laws were then used to
obtain differential extragalactic distances and reddenings for
galaxies beyond the LMC. We can show here for the first time the
multiwavelength solution for the distance to the LMC itself based on
the apparent BVIJHK Cepheid distance moduli, fit to a Cardelli et
al. (1989) extinction curve, and adopting a Galactic calibration for
the zero point, and the slope from the LMC data. The LMC
apparent moduli, scaled to the Galactic calibration are shown as a
function of inverse wavelength in Figure 4. The data are well fit by a
Galactic extinction law having a scale factor corresponding to E(B-V)
= 0.10 mag, and an intercept at 1/$\lambda$ = 0.00, corresponding to a
true modulus of $\mu(LMC)_{o}$ = 18.40 $\pm$0.01 mag.
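The fit just described reduces to linear least squares in the pair $(\mu_{o}, E(B-V))$. The sketch below uses rough, assumed $A_{\lambda}/E(B-V)$ coefficients for BVIJHK and synthetic apparent moduli (not the actual data behind Figure 4) to show the mechanics:

```python
import numpy as np

# Sketch of the multiwavelength fit: mu_lambda = mu_0 + R_lambda * E(B-V).
# The R_lambda values are rough, assumed approximations to an optical/
# near-infrared extinction curve; the apparent moduli are synthetic.
R_lam = np.array([4.1, 3.1, 1.9, 0.9, 0.6, 0.35])  # assumed A_lambda/E(B-V)
mu_true, ebv_true = 18.40, 0.10
mu_app = mu_true + R_lam * ebv_true                # synthetic apparent moduli

# Design matrix [1, R_lambda]; the intercept (extrapolation to R = 0,
# i.e. 1/lambda -> 0) is the true, extinction-free modulus.
A = np.column_stack([np.ones_like(R_lam), R_lam])
(mu0_fit, ebv_fit), *_ = np.linalg.lstsq(A, mu_app, rcond=None)
```

With real photometry the apparent moduli carry measurement errors, and the scatter about the fitted line propagates into the quoted uncertainty on the true modulus.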
The composite (Galactic + LMC) VI Wesenheit function is shown in
Figure 5. The correspondence between the two independent Cepheid
samples is good, and the dispersion in W remains very small. The
Wesenheit function uses fewer wavelengths, but it employs the two
bandpasses directly associated with the HST Key Project and most
extragalactic Cepheid distances, and so we adopt it here.
The W(V,VI) Wesenheit function gives a minimized fit between the
Galactic and the LMC Cepheids corresponding to a true distance modulus
of $\mu(LMC)_{o}$ = 18.44 $\pm$0.03 mag. Correcting for metallicity
(see §3.1.3) would decrease this to 18.39 mag.
Because of the large numbers of Cepheids involved over numerous
wavelengths, the statistical errors on this value are small; and once
again systematic errors dominate the error budget. As discussed in §3.1.4, we adopt a newly revised systematic error on
the distance to the LMC, of 3% (or $\pm$0.06 mag).
As noted above, the main drawback to using the LMC as the fundamental
calibrator of the Leavitt Law is the fact that the LMC Cepheids are of
lower metallicity than many of the more distant spiral galaxies
useful for measuring the Hubble constant. This systematic is largely
eliminated by adopting the higher-metallicity Galactic calibration as
discussed in §3.1.3, or the NGC 4258 calibration
discussed in §3.3.
3.2 Tip of the Red Giant Branch (TRGB) Method
As discussed briefly in §3.1.3 a completely
independent method for determining distances to nearby galaxies that
has comparable precision to Cepheids is the tip of the red giant
branch (TRGB). The TRGB method uses the theoretically well-understood
and observationally well-defined discontinuity in the luminosity
function of stars evolving up the red giant branch in old, metal-poor
stellar populations. This feature has been calibrated using Galactic
globular clusters, and because of its simplicity and straightforward
application it has been widely used to determine distances to nearby
galaxies. A recent and excellent review of the topic is given by
Rizzi et al. (2007) and Bellazzini (2008).
Using the brightest stars in globular clusters to estimate distances
has a long history (ultimately dating back to Shapley 1930 and later
discussed again by Baade 1944). The method gained widespread
application in a modern context in two papers, one by Da Costa &
Armandroff (1990) (for Galactic globular clusters), and the other by
Lee, Freedman & Madore (1993) (where the use of a quantitative
digital filter to measure the tip location was first introduced in an
extragalactic context).
Approximately 250 galaxies have had their distances measured by the
TRGB method, compared to a total of 57 galaxies with Cepheid
distances. (A comprehensive compilation of direct distance
determinations is available at the following web site:
http://nedwww.ipac.caltech.edu/level5/NED1D/ned1d.html). In practice,
the TRGB method is observationally much more efficient: unlike
Cepheid variables, the stars need not be followed through a light
cycle, and a single-epoch observation made at two wavelengths (to
provide color information) is sufficient. A recent
example of applying the TRGB technique to the maser galaxy, NGC 4258,
is shown in Figure 6.
3.2.1 Theory
The evolution of a post-main-sequence low-mass star up the red giant
branch is one of the best-understood phases of stellar evolution
(e.g., Iben & Renzini 1983). For the stars of interest in the context
of the TRGB, a helium core forms at the center, supported by electron
degeneracy pressure. Surrounding the core, and providing the entire
luminosity of the star, is a hydrogen-burning shell. The “helium ash”
from the shell increases the mass of the core systematically with
time. In analogy with the white dwarf equation of state and the
consequent scaling relations that interrelate core mass, M${}_{c}$, and
core radius, R${}_{c}$, for degenerate electron support, the core (=
shell) temperature, T${}_{c}$, and the resulting shell luminosity are
simple functions of M${}_{c}$ and R${}_{c}$ alone: $T_{c}\sim M_{c}/R_{c}$ and $L_{c}\sim M_{c}^{7}/R_{c}^{5}$. As a result, as the core mass increases, the
radius simultaneously shrinks and the luminosity rises due to both
effects. The star ascends the red giant branch with increasing
luminosity and higher core temperatures. When $T_{c}$ exceeds a
physically well-defined temperature, helium ignites throughout the
core. The helium core ignition does not make the star brighter, but
rather it eliminates the shell source by explosively heating and
thereby lifting the electron degeneracy within the core. This dramatic
change in the equation of state is such that the core flash (which
generates the equivalent instantaneous luminosity of an entire galaxy)
is internally quenched in a matter of seconds, inflating the core and
settling down to a lower-luminosity, helium core-burning main
sequence. The transition from the red giant to the horizontal branch
occurs rapidly (within a few million years) so that observationally
the TRGB can be treated as a physical discontinuity. A stellar
evolutionary “phase change” marks the TRGB. The underlying power of
the TRGB is that it is a physically-based and theoretically
well-understood method for determining distance. Nuclear physics
fundamentally controls the stellar luminosity at which the RGB is
truncated, essentially independent of the chemical composition and/or
residual mass of the envelope sitting above the core.
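Combining these shell scalings with the mass-radius relation for a degenerate core, $R_{c}\propto M_{c}^{-1/3}$ (a standard white-dwarf result, added here as a sketch rather than part of the original discussion), makes the sharpness of the tip explicit:

$$T_{c}\sim M_{c}/R_{c}\propto M_{c}^{4/3},\qquad L\sim M_{c}^{7}/R_{c}^{5}\propto M_{c}^{26/3}.$$

Helium ignition occurs at a nearly fixed $T_{c}$, hence at a nearly fixed core mass, and the extremely steep dependence of $L$ on $M_{c}$ then confines the tip luminosity to a very narrow range.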
The radiation from stars at the TRGB is redistributed with wavelength
as a function of the metallicity and mass of the envelope.
Empirically it is found that the bolometric corrections are smallest
in the $I$-band, and most recent measurements have been made at this
wavelength. The small residual metallicity effect on the TRGB
luminosity is well documented, and can be empirically calibrated out
(see Madore, Mager & Freedman 2009).
3.2.2 Recent TRGB Results and Calibration of H${}_{o}$
In the context of measuring the Hubble constant, RGB stars are not as
bright as Cepheids, and therefore cannot be seen as far, but they can
still be seen to significant distances ($\sim$20 Mpc and including
Virgo, e.g., Durrell et al. 2007; Caldwell 2006) and, as we have
seen, they can serve an extremely important function as an independent
test of the Cepheid distance scale and check on systematic effects.
Mould & Sakai (2008) have used the TRGB as an alternate calibration
to the Cepheid distance scale for the determination of H${}_{o}$.
They use 14 galaxies for which TRGB distances can be measured
to calibrate the Tully-Fisher relation, and determine a value of $H_{\circ}$ = 73 $\pm$ 5 (statistical only) km s${}^{-1}$ Mpc${}^{-1}$, a value
about 10% higher than found earlier by Sakai et al. (2000) based on a
Cepheid calibration of 23 spiral galaxies with Tully-Fisher
measurements. In subsequent papers they calibrate the SBF method
(Mould & Sakai 2009a) and then go on to check the calibration of the
FP for early-type galaxies and the luminosity scale of Type Ia
supernovae (Mould & Sakai 2009b). They conclude that the TRGB and
Cepheid distance scales are all consistent using SBF, FP, SNe Ia and
the TF relation.
3.3 Maser Galaxies
$H_{2}O$ mega-masers have recently been demonstrated to be a powerful
new geometric tool for accurately measuring extragalactic
distances. An extensive review of both the physical nature and the
application of mega-masers to the extragalactic distance scale can be
found in Lo (2005). The technique utilizes the mapping of 22.2 GHz
water maser sources in the accretion disks of massive black holes
located in spiral galaxies with active galactic nuclei, through
modeling of a rotating disk ideally in pure Keplerian motion. In the
simplest version of the technique, a rotation curve is measured along
the major axis of the disk; proper motions are measured on the near
side of the disk minor axis, and a comparison of the angular
velocities in the latter measurement with the absolute velocities in
km s${}^{-1}$ in the former measurements yields the distance.
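As a rough numerical sketch of the proper-motion version of this comparison: the distance is the linear rotation speed divided by the angular rate. The rotation speed and proper motion below are round, illustrative values of the same order as those measured for NGC 4258, not the published fit:

```python
# Illustrative maser proper-motion distance: D = v_t / mu, where v_t is
# the transverse rotation speed from the rotation curve and mu is the
# proper motion from VLBI imaging.  Round, order-of-magnitude inputs.
KM_S_PER_AU_YR = 4.74   # 1 AU/yr expressed in km/s

v_t = 1000.0            # km/s, rotation speed at the masing radius
mu = 30e-6              # arcsec/yr, observed proper motion

# D [pc] = v_t [AU/yr] / mu [arcsec/yr]
d_pc = (v_t / KM_S_PER_AU_YR) / mu
print(round(d_pc / 1e6, 1))  # ~7.0 Mpc
```

The acceleration version works the same way: $a=V^{2}/r$ gives the physical radius $r$, and dividing by the angular radius of the maser orbit yields the distance.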
The method requires a sample of accretion disks that are relatively
edge on (so that a rotation curve can be obtained from radial-velocity
measurements) and a heating source such as x-rays or shocks to produce
maser emission. The basic assumption is that the maser emission arises
from trace amounts of water vapor ($<$10${}^{-5}$ in number density) in
very small density enhancements in the accretion disk and that they
act as perfect dynamical test particles. The maser sources appear as
discrete peaks in the spectrum or as unresolved spots in the images
constructed from Very Long Baseline Interferometry
(VLBI). Measurements of the acceleration ($a=V^{2}/r$) are obtained
directly by monitoring the change of maser radial velocities over time
from single-dish observations. Proper motions are obtained from
observed changes in angular position in interferometer images. The
approximately Keplerian rotation curve for the disk is modeled,
allowing for warps and radial structure. The best studied galaxy, NGC
4258, at a distance of about 7 Mpc, is too close to provide an
independent measurement of the Hubble constant (i.e., free from local
velocity-field perturbations) but it serves as an invaluable
independent check of the Cepheid zero-point calibration.
3.3.1 A Maser Distance to NGC 4258
VLBI observations of $H_{2}O$ maser sources surrounding the active
galactic nucleus of NGC 4258 reveal them to be in a very thin,
differentially rotating, slightly warped disk. The Keplerian velocity
curve has deviations of less than one percent. The disk has a
rotational velocity in excess of 1,000 km/s at distances on the order
of 0.1 pc from the inferred super-massive (10${}^{7}$M${}_{\odot}$) nuclear
black hole. Detailed analyses of the structure of the accretion disk
as traced by the masers have been published (e.g., Herrnstein et
al. 1999; Humphreys, et al, 2008 and references therein). Over time it
has been possible to measure both proper motions and accelerations of
these sources and thereby allow for the derivation of two independent
distance estimates to this galaxy. The excellent agreement of these
two estimates supports the a priori adoption of the Keplerian
disk model and gives distances of 7.2 $\pm$ 0.2 and 7.1 $\pm$ 0.2 Mpc,
respectively.
Because of the simplicity of the structure of the maser system in NGC
4258 and its relative strength, NGC 4258 will remain a primary test
bed for studying systematic effects that may influence distance
estimates. Several problems may limit the ultimate accuracy of this
technique, however. For example, because the masers are only
distributed over a small angular part of the accretion disk, it is
difficult to assess the importance of non-circular orbits. Of
possible concern, eccentric disks of stars have been observed in a
number of galactic nuclei where the potential is dominated by the black
hole, as is the case for NGC 4258. In addition, even if the disk is
circular, it is not a given that the masers along the minor axis are
at the same radii as the masers along the major axis. The
self-gravity of the disk may also need to be investigated and modeled since
the maser distribution suggests the existence of spiral arms
(Humphreys et al., 2008). Finally, radiative transfer effects may
cause non-physical motions in the maser images. Although the current
agreement of distances using several techniques is comforting, having
only one sole calibrating galaxy for this technique remains a concern,
and further galaxies will be required to ascertain the limiting
uncertainty in this method.
3.3.2 Other Distance Determinations to NGC 4258
The first Cepheid distance to NGC 4258 was published by Maoz et
al. (1999), who found a distance of 8.1$\pm$0.4 Mpc, based on an
LMC-calibrated distance modulus of 18.50 mag. Newman et al. (2001)
found a distance modulus of 29.47 $\pm$ 0.09 (random) $\pm$ 0.15
(systematic) giving a distance of 7.83 $\pm$ 0.3 $\pm$0.5 Mpc. Macri
et al. (2006) reobserved NGC 4258 in two radially (and chemically)
distinct fields discovering 281 Cepheids at BV and I
wavelengths. Their analysis gives a distance modulus of 29.38 $\pm$
0.04 $\pm$0.05 mag (7.52 $\pm$ 0.16 Mpc), if one adopts $\mu(LMC)=18.50$ mag. Several more recent determinations of resolved-star
(Cepheid and TRGB) distance moduli to NGC 4258 are in remarkably
coincident agreement with the maser distance modulus. di Benedetto
(2008) measures a Cepheid distance modulus of 29.28 $\pm$0.03
$\pm$0.03 for NGC 4258, corresponding to a distance of 7.18 Mpc;
Benedict et al. (2007) also find a distance modulus of 29.28 $\pm$0.08
mag; and Mager, Madore & Freedman (2008) also find a value of 29.28
$\pm$0.04 $\pm$0.12 mag both from Cepheids and from the TRGB
method. These latter studies are in exact agreement with the current
maser distance. Higher accuracy has come from larger samples with
higher signal-to-noise data, and improved treatment of metallicity.
An alternative approach to utilizing the maser galaxy in the distance
scale is to adopt the geometric distance to NGC 4258 as foundational,
use it to calibrate the Leavitt Law, and from there determine the
distance to the LMC. Macri et al. (2006) adopted this approach and
conclude that the true distance modulus to the LMC is
18.41$\pm$0.10 mag. This value agrees well with the new Galactic
Cepheid calibration of the LMC Leavitt law, as discussed in §3.1.5.
3.3.3 NGC 4258 and the Calibration of H${}_{o}$
The distance to NGC 4258 can be used to leapfrog over the LMC
altogether to calibrate the Cepheid PL relation and then secondary
methods. Macri et al. (2006) and Riess et al. (2009a,b) have adopted
the distance to NGC 4258 as a calibration of the supernova distance
scale, as discussed further in §3.6.2.
Attempts to measure distances to other megamasers have proved to be
difficult. About 2000 galaxies have been surveyed for masers and more
than 100 masers discovered to date. The detection rate of about 5% is
likely due to detection sensitivity and the geometric constraint that
the maser disk be viewed nearly edge on because the maser emission is
expected to be highly beamed in the plane of the disk. About 30 of
these masers have spectral profiles indicative of emission from thin
disks: i.e., masers at the galactic systemic velocity and groups of
masers symmetrically spaced in velocity. About a dozen maser galaxies
are sufficiently strong that they can be imaged with phase-referenced
VLBI techniques. Only about five have been found to have sufficiently
simple structure so that they can be fit to dynamical models and have
their distances determined. The most promising example of these
galaxies is UGC 3789, which has a recessional velocity of greater than
3000 km/s, and is being pursued by the Megamaser Cosmology Project
(Reid et al. 2009).
If a significant number of maser galaxies can be found and precisely
observed well into the Hubble flow, this method can, in principle,
compete with methods such as SNe Ia for measuring H${}_{\circ}$. The
challenge will be to obtain large enough sample sizes of hundreds of
objects, in order to average over large-scale flows. Unfortunately,
this likely will not be accomplished in the upcoming decade. It is
also hoped that nearby objects will be found where this technique can
be applied, in addition to NGC 4258, and strengthen the zero-point
calibration of the extragalactic distance scale. The future for this
technique (beyond 2020) looks promising, given a high-frequency
capability for the Square Kilometer Array.
3.4 Surface Brightness Fluctuation (SBF) Method
For distances to elliptical galaxies and early-type spirals with large
bulge populations the Surface Brightness Fluctuation (SBF) method,
first introduced by Tonry and Schneider (1988), overlaps with and
substantially exceeds the current reach of the TRGB method. Both
methods use properties of the red giant branch luminosity function to
estimate distances. The SBF method quantifies the effect of distance
on an over-all measure of resolution of the Population II red giant
stars, naturally weighted both by their intrinsic luminosities and
relative numbers. What is measured is the pixel-to-pixel variance in
the photon statistics (scaled by the surface brightness) as derived
from an image of a pure population of red giant branch stars. For
fixed surface brightness, the variance in a pixel (of fixed angular
size) is a function of distance, simply because the total number of
discrete sources contributing to any given pixel increases with the
square of the distance. While the TRGB method relies entirely on the
very brightest red giant stars, the SBF method uses a
luminosity-weighted integral over the entire RGB population in order
to define a typical “fluctuation star” whose mean magnitude,
$\overline{M_{I}}$, is assumed to be universal and can therefore be used
to derive distances. For recent discussions of the SBF method, the
reader is referred to Biscardi et al. (2008) and Blakeslee et
al. (2009).
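The distance dependence can be illustrated with a toy Poisson model (a sketch only; the real method measures the fluctuation power in calibrated images): doubling the distance quadruples the number of giants contributing to each fixed-angular-size pixel, so the fractional pixel-to-pixel variance drops by a factor of four.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SBF measurement: each pixel collects light from N discrete giants.
# At fixed surface brightness, N grows as distance squared, so the
# fractional pixel-to-pixel variance falls as 1/N, i.e. as 1/d^2.
def fractional_variance(n_stars_per_pixel, n_pixels=100_000):
    counts = rng.poisson(n_stars_per_pixel, n_pixels)
    return counts.var() / counts.mean() ** 2  # ~ 1/N for Poisson counts

near = fractional_variance(100)   # galaxy at distance d
far = fractional_variance(400)    # same galaxy at 2d: 4x stars per pixel
print(round(near / far))          # ~4: variance ratio = (2d/d)^2
```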
Aside from the removal of obvious sources of contamination such as
foreground stars, dust patches and globular clusters, the SBF method
does require some additional corrections. It is well known that the
slope of the red giant branch in the color-magnitude diagram is a
function of metallicity, and so the magnitude of the fluctuation star
is both expected and empirically found to be a function of metallicity. A
(fairly steep) correction for metallicity has been derived and can be
applied using the mean color of the underlying stellar population:
$\overline{M_{I}}=-1.74+4.5\,[(V-I)_{o}-1.15]$ (Tonry et al. 2002).
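Evaluating this calibration is straightforward; the sketch below assumes the bracketed form of the Tonry et al. (2002) relation, with the pivot at $(V-I)_{o}=1.15$:

```python
# Fluctuation-star absolute magnitude as a function of population color,
# assuming the bracketed form of the Tonry et al. (2002) calibration:
#   M_bar_I = -1.74 + 4.5 * [(V-I)_0 - 1.15]
def m_bar_I(v_minus_i_0):
    return -1.74 + 4.5 * (v_minus_i_0 - 1.15)

print(round(m_bar_I(1.15), 2))  # -1.74 at the pivot color
print(round(m_bar_I(1.25), 2))  # -1.29: redder (metal-richer) is fainter
```

The steep 4.5 mag/mag slope is why an accurate population color is essential for an SBF distance.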
A recent and comprehensive review of the application of the SBF method
to determining cosmic distances, and its comparison to the
Fundamental-Plane (FP) method is given in Blakeslee et al. (2002).
Over 170 galaxies enter into the comparison; this analysis leads to
the conclusion that $H_{o}=72\pm 4~{\rm(random)}\pm 11~{\rm(systematic)}$
km/s/Mpc.
3.5 Tully-Fisher Relation
The total luminosity of a spiral galaxy (corrected to face-on
inclination to account for extinction) is strongly correlated with the
galaxy’s maximum (corrected to edge-on inclination) rotation
velocity. This relation, calibrated via the Leavitt Law or TRGB,
becomes a powerful means of determining extragalactic distances (Tully
& Fisher 1977; Aaronson et al. 1986; Pierce & Tully 1988; Giovanelli
et al. 1997). The Tully-Fisher relation at present is one of the most
widely applied methods for distance measurements, providing distances
to thousands of galaxies both in the general field and in groups and
clusters. The scatter in this relation is wavelength-dependent and
approximately $\pm$0.3-0.4 mag or 15-20% in distance (Giovanelli et
al. 1997; Sakai et al. 2000; Tully & Pierce 2000).
In a general sense, the Tully-Fisher relation can be understood in terms
of the virial relation applied to rotationally supported disk
galaxies, under the assumption of a constant mass-to-light ratio
(Aaronson, Mould & Huchra 1979). However, a detailed self-consistent
physical picture that reproduces the Tully-Fisher relation (e.g.,
Steinmetz & Navarro 1999), and the role of dark matter in producing
almost universal spiral galaxy rotation curves (McGaugh et al. 2000)
still remain a challenge.
Spitzer archival data have recently yielded an unexpected and
exciting discovery. Of the 23 nearby galaxies with HST Cepheid
distances that can be used to independently calibrate the Tully-Fisher
relation, there are eight that currently also have 3.6$\mu$m published
total magnitudes (Dale et al. 2007). In Figure 7 (left three panels)
we show the B, I and H-band TF relations for the entire sample of
currently available calibrating galaxies from Sakai et
al. (2000). Their magnitudes have been corrected for
inclination-induced extinction effects and their line widths have been
corrected to edge-on. The scatter is $\pm$0.43, 0.36 and 0.36 mag for
the B, I and H-band relations, respectively; the outer lines follow
the mean regression at $\pm$2-sigma. Given this intrinsic scatter, to
measure a distance good to 5%, say, using even the best of these TF
relations one would need to average over a grouping of 16 galaxies
in order to beat down the intrinsic rms
scatter. In the right panel of Figure 7 we show the mid-IR TF relation
for the eight galaxies with Cepheid distances and published IRAC
observations, measured here at 3.6$\mu$m. The gains are
impressive. With the magnitudes not even corrected for any inclination
effects, the scatter within this sample is found to be only
$\pm$0.12 mag. Each of these galaxies entered the calibration with its
own independently determined Cepheid-calibrated distance. If this
correlation stands the test of time as additional calibrators enter
the regression, using the mid-IR TF relation a single galaxy could
potentially yield a distance good to $\pm$5%. All TF galaxies, when
observed in the mid-IR, would then individually become precision
probes of large-scale structure, large-scale flows and the Hubble
expansion.
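The sample-size arithmetic in this section is simple $1/\sqrt{N}$ averaging of independent distance moduli; a quick sketch consistent with the scatter values quoted above:

```python
import math

# Averaging N independent TF distances reduces the modulus scatter as
# 1/sqrt(N).  Convert the averaged scatter (in mag) to a fractional
# distance error via d ~ 10^(mu/5).
def distance_error_pct(scatter_mag, n):
    sigma_mu = scatter_mag / math.sqrt(n)
    return 100 * (10 ** (sigma_mu / 5) - 1)

# +/-0.36 mag optical scatter, averaged over 16 galaxies: ~4% distance.
print(round(distance_error_pct(0.36, 16), 1))  # ~4.2
# +/-0.12 mag mid-IR scatter: near single-galaxy 5% precision.
print(round(distance_error_pct(0.12, 1), 1))   # ~5.7
```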
3.6 Type Ia Supernovae
One of the most accurate means of measuring cosmological distances out
into the Hubble flow utilizes the peak brightness of Type Ia
supernovae (SNe Ia). The potential of supernovae for measuring
distances was clear to early researchers (e.g., Baade, Minkowski,
Zwicky) but it was the Hubble diagram of Kowal (1968) that set the
modern course for this field, followed by decades of work by Sandage,
Tammann and collaborators (e.g., Sandage & Tammann 1982; Sandage &
Tammann 1990); see also the review by Branch (1998). Analysis by
Pskovskii (1984), followed by Phillips (1993), established a
correlation between the magnitude of a SN Ia at peak brightness and
the rate at which it declines, thus allowing supernova luminosities to
be “standardized”. This method currently probes farthest into the
unperturbed Hubble flow, and it possesses very low intrinsic scatter:
in recent studies, the decline-rate corrected SN Ia Hubble diagram is
found to have a dispersion of $\pm$7-10% in distance (e.g., Folatelli
et al. 2009, Hicken et al. 2009). A simple lack of Cepheid calibrators
prevented the accurate calibration of Type Ia supernovae for
determination of H${}_{\circ}$ prior to HST. Substantial improvements to
the supernova distance scale have resulted from recent dedicated,
ground-based supernova search and follow-up programs yielding CCD
light curves for nearby supernovae (e.g., Hamuy et al. 1996;
Jha et al. 2006; Contreras et al. 2010). Sandage
and collaborators undertook a major program with HST to find Cepheids
in nearby galaxies that have been host to Type Ia supernovae (Sandage
et al. 1996; Saha et al. 1999), and thereby provided the first
Cepheid zero-point calibration, which has recently been followed up by
Macri et al. (2006) and Riess et al. (2009a,b).
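The decline-rate standardization described above can be sketched as a linear correction of the observed peak magnitude to a fiducial decline rate. The linear form and the slope and pivot values below are illustrative placeholders, not a published fit:

```python
# Decline-rate ("Phillips relation") standardization sketch: brighter
# SNe Ia decline more slowly, so observed peak magnitudes are corrected
# to a fiducial decline rate dm15 (the B-band decline in the 15 days
# after maximum light).  Slope and pivot here are hypothetical values
# chosen for illustration only.
def standardized_peak(m_peak, dm15, slope=0.7, pivot=1.1):
    return m_peak - slope * (dm15 - pivot)

print(standardized_peak(15.00, 1.1))  # 15.0 (at the pivot, no correction)
```

Applied to a whole sample, this correction is what reduces the Hubble-diagram dispersion to the $\pm$7-10% level quoted above.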
For Hubble constant determinations, the challenge in using SNe Ia
remains that few galaxies in which SN Ia events have been observed are
also close enough for Cepheid distances to be measured. Hence, the
calibration of the SN Ia distance scale is still subject to
small-number statistical uncertainties. At present, the numbers of
galaxies for which there are high-quality Cepheid and SN Ia
measurements (in most cases made with the same telescopes and
instruments as the Hubble flow set) is limited to six objects (Riess
et al. 2009a).
3.6.1 Underlying Theory
SNe Ia result from the thermonuclear runaway explosions of stars. From
observations alone, the presence of SNe Ia in elliptical galaxies
suggests that they do not come from massive stars. Many details of
the explosion are not yet well understood, but the generally accepted
view is that of a carbon-oxygen, electron-degenerate,
nearly-Chandrasekhar-mass white dwarf orbiting in a binary system with
a close companion (Whelan & Iben 1973). As material from the Roche
lobe of the companion is deposited onto the white dwarf, the pressure
and temperature of the core of the white dwarf increases until
explosive burning of carbon and oxygen is triggered. An alternative
model is that of a “double degenerate” system (merger with another
white dwarf). Although on observational grounds, there appear to be
too few white dwarf pairs, this issue has not been conclusively
resolved. A review of the physical nature of SNe Ia can be found in
Hillebrandt & Niemeyer (2000).
A defining characteristic of observed SNe Ia is the lack of hydrogen
and helium in their spectra. It is presumed that the orbiting
companion is transferring hydrogen- and helium-rich material onto the
white dwarf; however, despite extensive searches this hydrogen or
helium has never been detected, and it remains a mystery as to how
such mass transfer could take place with no visible signature. It is
not yet established whether this is a problem of observational
detection, or whether these elements are lost from the system before
the explosion occurs.
Various models for SN Ia explosions have been investigated. The most
favored model is one in which a subsonic deflagration flame is
ignited, which subsequently results in a supersonic detonation wave (a
delayed detonation). The actual mechanism that triggers a SN Ia
explosion is not well understood: successfully initiating a detonation
in a CO white dwarf remains extremely challenging. In recent years,
modeling in 3D has begun, given indications from spectropolarimetry
that the explosions are not spherically symmetric. The radiative
transport calculations for exploding white dwarf stars are
complex. However, there is general consensus that the observed
(exponential shape of the) light curves of SNe Ia are powered by the
radioactive decay of ${}^{56}$Co to ${}^{56}$Fe. The range of observed
supernova peak brightnesses appears to be due to a range in ${}^{56}$Ni
produced. However, the origin of the relation between peak magnitude
and decline rate is still not well understood.
Despite the lack of a solid theoretical understanding of SNe Ia,
empirically they remain one of the best-tested, lowest-dispersion, and
highest-precision means of measuring relative distances out into the
smooth Hubble flow.
3.6.2 Recent Results for SNe Ia and H${}_{\circ}$
The most recent calibration of SNe Ia has come from Riess et
al. 2009a,b from a new calibration of six Cepheid distances to
nearby well-observed supernovae using the Advanced Camera for Surveys
(ACS) and the Near-Infrared Camera and Multi-Object Spectrometer
(NICMOS) on HST. Riess et al. have just completed a program to
discover Cepheids in nearby galaxies known to have been hosts to
relatively recent Type Ia supernovae and then re-observed them in the
near infrared. In so doing, the number of high-quality calibrators
for the supernova distance scale more than doubled, putting the
calibration for SNe Ia on a far more secure foundation. The six
Cepheid-calibrated supernovae include SN1981B in NGC 4536, SN 1990N in
NGC 4639, SN 1998aq in NGC 3982, SN 1994ae in NGC 3370, SN 1995al in
NGC 3021 and finally SN 2002fk in NGC 1309. A comparison of Cepheid
and SNe Ia distances from Riess et al. (2009a) is shown in Figure
8. The supernovae were chosen to meet rather stringent criteria,
requiring, for example that they all were observed with modern
detectors, that they were observed before maximum light, their spectra
were not atypical and that their estimated reddenings were low. Each
galaxy had between 13 and 26 Cepheids observed at random phases in the
H-band (F160W filter) (and were transformed to mean light using
optical data) using NICMOS onboard HST. Extinction in the H-band is
down by a factor of five relative to the optical. The program avoids
issues of cross-instrumental calibration by observing with a single
telescope for the calibration galaxy, NGC 4258, out to the SNe Ia
galaxies. By extending to the near-infrared, these observations of
the newly discovered Cepheids directly address the systematic effects
of metallicity and reddening.
We show in Figure 9 the Hubble diagram for 240 supernovae at z $<$
0.1 from Hicken et al. (2009), which have been calibrated by Riess et
al. (2009a) based on the distance to the maser galaxy, NGC 4258. Riess
et al. quote a value of H${}_{o}$ = 74.2 $\pm$ 3.6 km s${}^{-1}$ Mpc${}^{-1}$
combining systematic and statistical errors into one number, a value
in excellent agreement with that from the Key Project (see next
section), which is calibrated using the Galactic Cepheid parallax
sample. At the current time, there is not much need for larger,
low-redshift samples, since the dominant remaining uncertainties are
systematic, rather than statistical. Recent studies (e.g., Wood-Vasey
et al. 2008; Folatelli et al. 2009) confirm that supernovae are better
standard candles at near-infrared (JHK) wavelengths and minimize the
uncertainties due to reddening.
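Once the supernova luminosity scale is absolutely calibrated, the Hubble constant follows directly from dividing Hubble-flow velocities by calibrated distances. A minimal sketch, with made-up (velocity, distance) pairs chosen only to illustrate the arithmetic:

```python
# Minimal Hubble-constant estimate from a calibrated Hubble diagram:
# H0 = v / d for objects in the smooth Hubble flow.  The two
# (velocity, distance) pairs below are illustrative values only,
# not measurements from any published sample.
C_KM_S = 299792.458  # speed of light in km/s

samples = [(0.02 * C_KM_S, 81.0),   # (v [km/s], d [Mpc]) at z = 0.02
           (0.05 * C_KM_S, 202.0)]  # at z = 0.05

h0_values = [v / d for v, d in samples]
h0 = sum(h0_values) / len(h0_values)
print(round(h0))  # ~74 km/s/Mpc for these illustrative points
```

In practice the fit is done over hundreds of objects with full error propagation, which is why the remaining uncertainty is dominated by the zero-point calibration rather than by the Hubble-flow sample.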
Tammann, Sandage & Reindl (2008) have undertaken a recent
re-calibration of supernovae, as well as a comparison of the Cepheid,
RR Lyrae and TRGB distance scales. In contrast, they find a value of
$H_{\circ}=62.3\pm 4.0$ km/s/Mpc, where the quoted (systematic)
error includes their estimated uncertainties in both the Cepheid and
TRGB calibration zero points. Their quoted error is dominated by the
systematic uncertainties in the Cepheid zero point and the small
number of supernova calibrators, both of which are estimated by them
to be at the 3-4% level; however, the H${}_{o}$ values differ by more
than 2-$\sigma$. A discussion of the reason for the differences in
these analyses can be found in Riess et al. (2009a,b): these include
the use of more heavily reddened Galactic Cepheids, the use of less
accurate photographic data and a calibration involving multiple
telescope/instruments for supernovae by Tammann, Sandage & Reindl.
4 THE HUBBLE SPACE TELESCOPE (HST) KEY PROJECT
We briefly summarize below the results from the HST Key Project, and
provide an updated calibration for these data. The primary goals of
the HST Key Project were to discover and measure the distances to
nearby galaxies containing Cepheid variables, calibrate a range of
methods for measuring distances beyond the reach of Cepheids to test
for and minimize sources of systematic uncertainty, and ultimately to
measure $H_{\circ}$ to an accuracy of $\pm$10%. HST provided the
opportunity to measure Cepheid distances a factor of 10 more distant
than could be routinely obtained on the ground. It also presented a
practical advantage in that, for the first time, observations could be
scheduled in a way that optimized the discovery of Cepheids with a
range of periods independent of phase of the moon or weather (Madore
& Freedman 2005).
Cepheid distances to 18 galaxies with distances in the range of 3 to
25 Mpc were measured using WF/PC and (primarily) WFPC2 on
HST. Observations at two wavelengths ($V$- and $I$-band) were made,
chosen to allow corrections for dust. The spacing of observations was
optimized to allow for the discovery of Cepheids with periods in the
range of 10 to 50 days. In addition, 13 galaxies with
published Cepheid photometry were analyzed, for a total of 31 galaxies.
These Cepheid distances were then used to calibrate the Tully-Fisher
relation for spiral galaxies, the peak brightness of Type Ia SNe, the
D${}_{n}-\sigma$ relation for elliptical galaxies, the Surface Brightness
Fluctuation (SBF) method, and Type II supernovae (Freedman 2001 and
references therein). These methods allowed a calibration of distances
spanning the range of about 70 Mpc (for SBF) out to about 400 Mpc for
Type Ia SNe. These results are summarized in Figure 10. Combining
these results using both Bayesian and frequentist methods yielded a
consistent value of $H_{\circ}$ = 72 $\pm$ 3 (statistical) $\pm$ 7
(systematic) km s${}^{-1}$ Mpc${}^{-1}$.
We update this analysis using the new HST-parallax Galactic
calibration of the Cepheid zero point (Benedict et al. 2007), and the
new supernova data from Hicken et al. (2009). We find a similar value
of H${}_{o}$, but with reduced systematic uncertainty, of H${}_{o}$ = 73
$\pm$2 (random) $\pm$4 (systematic) km s${}^{-1}$ Mpc${}^{-1}$. The
reduced systematic uncertainty, discussed further in §4.1 below, results from having a more robust
zero-point calibration based on the Milky Way galaxy with comparable
metallicity to the spiral galaxies in the HST Key Project
sample. Although the new parallax calibration results in a shorter
distance to the LMC (which is no longer used here as a calibrator),
the effect on H${}_{o}$ is nearly offset by the fact that no
correction is now needed for the difference in metallicity between the
LMC and the calibrating galaxies.
4.1 Systematics on $H_{\circ}$ at the End of the Key Project and a
Decade Later
A primary goal of the HST Key Project was the explicit propagation of
statistical errors, combined with the detailed enumeration of and
accounting for known and potential systematic errors. In Table 2 we
recall the systematics error budget given in Freedman et al. (2001).
The purpose of the original tabulation was to clearly identify the
most influential paths to greater accuracy in future efforts to refine
$H_{\circ}$. Here we now discuss what progress has been made, and what we
can expect in the very near future using primarily space-based
facilities, utilizing instruments operating mainly at mid-infrared and
near-infrared wavelengths.
Identified systematic uncertainties in the HST Key Project
determination of the extragalactic distance scale limited its stated
accuracy to $\pm$10%. The dominant systematics were: (a) the zero
point of the Cepheid PL relation, which was tied directly to the
(independently adopted) distance to the LMC; (b) the differential
metallicity corrections to the PL zero point in going from the
relatively low-metallicity (LMC) calibration to target galaxies of
different (and often larger) metallicities; (c) reddening corrections
that required adopting a wavelength dependence of the extinction curve
that is assumed to be universal; and (d) zero-point drift, offsets,
and transformation uncertainties between various cameras on HST and on
the ground. Table 2 compares these uncertainties to what is now being
achieved with HST parallaxes and new HST SNe Ia distances (Table 2,
Column 3 “Revisions”), and then what is expected to be realized by
extending to a largely space-based near and mid-infrared Cepheid
calibration using the combined power of HST, Spitzer and
eventually the James Webb Space Telescope (JWST) and GAIA
(Table 2, Column 4, “Anticipated”).
In 2001 the uncertainty on the zero point of the Leavitt Law was the
largest on the list of known systematic uncertainties. Recall that the
Key Project zero point was tied directly to an LMC true distance
modulus of 18.50 mag. As we have seen in §3.1.4
improvement to the zero point has come from new HST parallax
measurements of Galactic Cepheids, improved distance measurements to
the LMC from near-infrared photometry, and measurement of a maser
distance to NGC 4258. We adopt a current zero-point uncertainty of
3%.
We next turn to the issue of metallicity. As discussed in §3.1.3, in the optical, metallicity corrections remain
controversial. However, by shifting the calibration from the
low-metallicity Cepheids in the LMC to the more representative,
higher-metallicity Milky Way Cepheids (or alternatively to the NGC
4258 Cepheids), the character of the metallicity uncertainty has
changed from being a systematic to a random uncertainty. We conservatively
estimate that the systematic component of the uncertainty on the
metallicity calibration should now drop to $\pm$0.05 mag. Including
the recent results from Benedict et al. (2007) and Riess et
al. (2009a,b), our estimate for the current total uncertainty on
H${}_{\circ}$ is $\pm$ 5%.
In terms of future improvements, as discussed further in §7, with the Global Astrometric Interferometer for
Astrophysics (GAIA), and possibly the Space Interferometry Mission
(SIM), the sample of Cepheids with high precision trigonometric
(SIM), the sample of Cepheids with high precision trignometric
parallaxes will be increased, and as more long-period Cepheids enter
the calibration both the slope and the zero point of the
high-metallicity Galactic Leavitt Law will be improved. By
extending both the calibration of the Leavitt Law and its application
to increasingly longer wavelengths the effects of metallicity and the
impact of total line-of-sight reddening, each drop below the
statistical significance threshold. At mid-infrared wavelengths the
extinction is reduced by a factor of $\sim$20 compared to optical
wavelengths, and line blanketing in the mid and near infrared is
predicted theoretically to be small compared to the blue portion of
the spectrum. Direct tests are now being undertaken to establish
whether this is indeed the case and/or calibrate out any residual
impact (§7.3).
In principle, a value of $H_{\circ}$ having a well determined systematic
error budget of only 2-3% is within reach over the next decade. This
is the goal of the new Carnegie Hubble Program, described
briefly in §7.3, which is based on a mid-infrared
calibration of the extragalactic distance scale using the Spitzer satellite, GAIA and JWST.
5 OTHER METHODS FOR DETERMINING H${}_{o}$
Although the focus of this review is the determination of H${}_{o}$ and
the extragalactic distance scale, we briefly mention two indirect
techniques that probe great cosmological distances independently:
gravitational lensing and the Sunyaev-Zel’dovich effect. We also
discuss measurements of anisotropies in the cosmic microwave
background, which offer a measurement of H${}_{0}$, in combination with
other data.
5.1 Gravitational Lens Time Delays and H${}_{o}$
As first realized by Refsdal (1964), measurements of the differences
in arrival time, coupled with measurements of the angular separation
of strongly lensed images of a time-variable object (such as a quasar
or supernova) can be used to measure H${}_{o}$. The time delay observed
between multiple images is proportional to H${}_{o}^{-1}$, and is less
dependent on other cosmological parameters such as $\Omega_{matter}$
and $\Omega_{\Lambda}$. An extensive review of the physics of lensing
can be found in Blandford & Narayan (1992); the observational issues
have been summarized nicely by Myers (1999) and Schechter (2005).
Initially, the practical implementation of this method suffered from a
number of difficulties. Time delays have proven difficult to measure
accurately, the amplitude of quasar variability is generally small,
and relatively few lens systems that can be modeled simply and cleanly
have been found. Dust obscuration is an issue at optical wavelengths.
A great challenge of this method is that astronomical lenses are
galaxies whose underlying mass distributions are not known, and a
strong physical degeneracy exists between the mass distribution of the
lens and the value of H${}_{o}$. As emphasized by Gorenstein, Shapiro &
Falco (1988), the deflections and distortions do not uniquely
determine the mass distribution: a lens may be located in one or more
groups or clusters, which will affect the predicted time delays, an effect
termed the mass sheet degeneracy. Measurements of velocity dispersion
as a function of position can be used to constrain the mass
distribution of the lens, but generally only central velocity
dispersion measurements are feasible. An advantage of the method is
that it offers a probe directly at cosmological distances; the
concomitant disadvantage is that the cosmological model must be
assumed in order to determine H${}_{o}$. Earlier estimates of H${}_{o}$ using
this technique yielded values about 10% lower (analyzing the same
data), assuming what was then the standard cosmological model with
$\Omega_{matter}$ = 1.0, in comparison to the current standard model
with $\Omega_{matter}$ = 0.3 and $\Omega_{\Lambda}$ = 0.7.
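The inverse proportionality between the observed delay and H${}_{o}$ can be sketched in a few lines. In this toy sketch, for a fixed lens mass model, the predicted delay scales as H${}_{o}^{-1}$; the fiducial numbers in the usage below are purely illustrative and are not drawn from any actual lens system:

```python
def h0_from_delay(observed_delay_days, model_delay_at_h0_70_days):
    """For a fixed lens mass model and assumed cosmology, the predicted
    time delay scales as 1/H0.  Given the delay the model predicts when
    H0 = 70 km/s/Mpc, the observed delay fixes H0 by inverse
    proportionality (a toy sketch; real analyses fit the full model)."""
    return 70.0 * model_delay_at_h0_70_days / observed_delay_days

# illustrative: a model predicting 36 days (at H0 = 70) against an
# observed 35-day delay implies H0 = 72 km/s/Mpc
```

This makes explicit why the time-delay route is only as good as the lens model: any rescaling of the assumed mass distribution rescales the model delay, and hence the inferred H${}_{o}$, by the same factor.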
The precision and accuracy of this technique have continued to improve
over time. A brief survey of results from gravitational lensing over the
past five years can be found in Suyu et al. (2009), with estimates of
H${}_{o}$ in the range 50 to 85 km s${}^{-1}$ Mpc${}^{-1}$. There is a wide
range in types of modeling and treatment of errors for these different
systems (e.g., assumed isothermal profiles, assumptions about the
density distribution of the environment, and how well the models are
constrained by the data).
A recent extensive analysis of the quadruple lens system B1608+656 has
been carried out by Suyu et al. (2009). This analysis is based on deep
F606W and F814W ACS data, a more accurate measurement of the velocity
dispersion using the Low-Resolution Imaging Spectrometer (LRIS) on
Keck, and a more detailed treatment of the lens environment, using a
combination of ray tracing through cosmological N-body simulations
(the Millennium Simulation) along with number counts in the field of
B1608+656 to help break the mass sheet
degeneracy. Adopting the standard cosmological model with $\Omega_{matter}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, and w = -1, they find H${}_{o}$ =
71 $\pm$ 3 km s${}^{-1}$ Mpc${}^{-1}$, a factor of two improvement over
the previous estimate for this lens.
5.2 The Sunyaev-Zel’dovich (SZ) Effect and H${}_{o}$
Sunyaev & Zel’dovich (1969) described the inverse-Compton scattering
of photons from the cosmic microwave background (CMB) off of hot
electrons in the X-ray gas of rich clusters of galaxies. This
scattering leads to a redistribution of the CMB photons so that a
fraction of the photons move from the Rayleigh-Jeans to the Wien side
of the blackbody spectrum, referred to as the Sunyaev-Zel’dovich (SZ)
effect. The measured effect amounts to about 1 mK. The Hubble
constant is obtained using the fact that the measured X-ray flux from
a cluster is distance-dependent, whereas the SZ decrement is
essentially independent of distance. Observations of this effect have
improved enormously in recent years, with high signal-to-noise,
high-angular-resolution SZ images obtained with ground-based
interferometric arrays and high-resolution X-ray spectra. The theory
of the SZ effect is covered at length by Birkinshaw (1999); a nice
summary of observational techniques and interferometry results is
given in Carlstrom et al. (2002).
The SZ effect is proportional to the first power of the electron
density, n${}_{e}$: $\Delta T_{SZ}\sim\int dl\,n_{e}T_{e}$, where T${}_{e}$ is
the electron temperature, and d$l$ is the path length along the
line-of-sight, related to the angular diameter distance. The X-ray
emission is proportional to the second power of the density: $S_{x}\sim\int dl\Lambda n_{e}^{2}$, where $\Lambda$ is the cooling function for
the X-ray gas. The angular diameter distance is solved for by
eliminating the electron density (see Carlstrom et al. 2002;
Birkinshaw 1999).
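The elimination of the electron density described above can be illustrated with a toy calculation. The normalizations below are arbitrary (the real method carries the cooling function, temperature profiles, and geometry); only the scalings of the two integrals, $\Delta T\sim n_{e}T_{e}L$ and $S_{x}\sim n_{e}^{2}L$ with $L=\theta\,d_{A}$, are kept:

```python
def mock_observables(n_e, t_e, d_a, theta):
    """Toy cluster observables in arbitrary units: the SZ decrement
    scales as n_e * T_e * L and the X-ray flux as n_e**2 * L, where
    L = theta * d_A is the path length through the cluster."""
    path_len = theta * d_a
    return n_e * t_e * path_len, n_e ** 2 * path_len  # (delta_t, s_x)

def inferred_d_a(delta_t, s_x, theta, t_e):
    """Eliminate n_e between the two scalings:
    L ~ delta_t**2 / (s_x * T_e**2), then d_A = L / theta."""
    path_len = delta_t ** 2 / (s_x * t_e ** 2)
    return path_len / theta
```

The point of the construction is that the unknown density cancels: doubling `n_e` changes both mock observables, but the inferred angular diameter distance, and hence H${}_{o}$, is unchanged.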
An advantage of this method is that it can be applied at cosmological
distances, well into the Hubble flow. The main uncertainties result
from potential substructure in the gas of the cluster (which has the
effect of reducing H${}_{o}$), projection effects (if the clusters
observed are prolate, the sense of the effect is to increase H${}_{o}$),
the assumption of hydrostatic equilibrium, details of the models for
the gas and electron densities, and potential contamination from point
sources.
The accuracy of this technique has continued to improve as
interferometric radio observations (e.g., Berkeley-Illinois-Maryland
Association, BIMA and Owens Valley Radio Observatory, OVRO) and ROSAT
and now Chandra X-ray data have become available. In a recent study by
Bonamente et al. (2006), new Chandra X-ray measurements for 38
clusters in the redshift range 0.14 $<$ z $<$ 0.89 have been
obtained. Combining these data with BIMA and OVRO data for these same
clusters, and performing a Markov Chain Monte Carlo analysis, these
authors find a value of H${}_{o}$ = 76.9${}^{+3.9~{}+10.0}_{-3.4~{}~{}-8.0}$ km
s${}^{-1}$ Mpc${}^{-1}$, under the assumption of hydrostatic
equilibrium. Relaxing the assumption of hydrostatic equilibrium, and
adopting an isothermal $\beta$ model, they find H${}_{o}$ = 73.7${}^{+4.6~{}+9.5}_{-3.8~{}-7.6}$ km s${}^{-1}$ Mpc${}^{-1}$.
5.3 Measurements of Anisotropies in the Cosmic Microwave
Background
The prediction of acoustic oscillations in the cosmic microwave
background radiation (Peebles & Yu 1970; Sunyaev & Zel’dovich 1970)
and the subsequent measurement of these peaks (culminating most
recently in the five-year (Dunkley et al. 2009) and seven-year (Bennett et
al. 2010) measurements of WMAP, the Wilkinson Microwave Anisotropy
Probe) is one of the most successful chapters in the history of
cosmology. A recent detailed review of the cosmic microwave
background is given in Hu & Dodelson (2002). The aim in this section
is simply to elucidate the importance of accurate measurements of the
Hubble constant in the context of measurements of the angular power
spectrum of CMB anisotropies, and the complementary nature of the
constraints provided.
The temperature correlations in the maps of the CMB can be described
by a set of spherical harmonics. A plot of the angular power spectrum
as a function of multipole moment, $l$, is shown in Figure 11. This
spectrum can be naturally explained as a result of the tight coupling
between photons and baryons before recombination (where electrons and
protons combine to form neutral hydrogen), and a series of
oscillations are set up as gravity and radiation pressure act on the
cold dark matter and baryons. After recombination, photons free-stream
toward us. The position of the first peak in this diagram is a
projection of the sound horizon at the time of recombination, and
occurs at a scale of about 1 degree.
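As a rough rule of thumb, an angular scale $\theta$ maps to multipole $l\approx\pi/\theta$, so a feature at a scale of about 1 degree appears near $l\approx 180$, consistent with the measured first peak near $l\approx 220$. A minimal sketch of the conversion:

```python
import math

def multipole_from_angle(theta_deg):
    """Approximate multipole corresponding to an angular scale on the
    sky: l ~ pi / theta, with theta in radians (rule of thumb only)."""
    return math.pi / math.radians(theta_deg)
```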
Although measurements of the CMB power spectrum can be
made to very high statistical precision, there are some nearly exact
degeneracies that limit the accuracy with which cosmological
parameters can be estimated (e.g., Efstathiou & Bond 1999). These
degeneracies impose severe limitations on estimates of curvature
and the Hubble constant derived from CMB anisotropy alone, and are
sometimes overlooked. Specifically, the value of H${}_{o}$ is degenerate
with the value of $\Omega_{\Lambda}$ and $w$. Different combinations of
the matter and energy densities and H${}_{o}$ can produce identical CMB
anisotropy spectra. Alternatively, an accurate independent
measurement of H${}_{o}$ provides a means of constraining the values of
other cosmological parameters based on CMB anisotropy data.
The WMAP data provide strong evidence for the current standard
cosmological model with $\Omega_{matter}$ = 0.27, $\Omega_{\Lambda}$ =
0.73 (Spergel et al. 2003; Komatsu et al. 2010). A prior on H${}_{\circ}$
can help to break some of the degeneracies in the CMB data. The WMAP
data measure $\Omega_{matter}h^{2}$; adding an H${}_{\circ}$ prior and
assuming a flat universe yields a stronger constraint on the equation
of state, -0.47 $<$ w $<$ 0.42
(95% CL) (Komatsu et al. 2009), than the WMAP data alone. Alternatively,
combining the WMAP-5 data with SNe Ia and BAO data yields a value of
H${}_{0}$ = 70.5 $\pm$ 1.3 km s${}^{-1}$ Mpc${}^{-1}$ (Komatsu et al. 2009),
still in excellent agreement with other methods.
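The degeneracy-breaking role of an H${}_{\circ}$ prior can be made concrete: the CMB pins down the physical density $\Omega_{matter}h^{2}$, which by itself is consistent with a one-parameter family of $(\Omega_{matter},h)$ pairs; an independent value of $h$ selects a single point on that line. The value 0.133 below is illustrative, not a quoted WMAP result:

```python
def omega_m_given_h(omega_m_h2, h):
    """Along the CMB geometric degeneracy the physical matter density
    omega_m * h**2 is held fixed; each choice of h then implies a
    different omega_m (and, for a flat universe, a different Lambda)."""
    return omega_m_h2 / h ** 2

# illustrative: omega_m * h^2 = 0.133 admits (h=0.705, omega_m~0.27)
# just as well as (h=0.60, omega_m~0.37) without an external H0 prior
```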
5.3.1 Measurements of Baryon Acoustic Oscillations in the
Matter Power Spectrum
Baryon acoustic oscillations (BAO) arise for the same underlying
physical reason as the peaks and valleys in the cosmic microwave
background spectrum: the sound waves that are excited in the hot
plasma owing to the competing effects of radiation pressure and
gravity at the surface of last scattering also leave an imprint on the
galaxy matter power spectrum. The two-point correlation function has a
peak on scales of 100 h${}^{-1}$ Mpc (Eisenstein et al. 2005), which
provides a “standard ruler” for measuring the ratio of distances
between the surface of last scattering of the CMB (at z=1089) and a
given redshift. Measurement of BAO in the matter power spectrum can
also help to break degeneracies in the CMB measurements. For example,
Percival et al. (2009) have combined the Sloan Digital Sky Survey
(SDSS) 7th data release with the Two-degree Field Galaxy Redshift
Survey (2dFGRS) to measure fluctuations in the matter power spectrum
at six redshift slices. For $\Lambda$CDM models, combining these
results with constraints for the baryon and cold dark matter
densities, $\Omega_{b}h^{2}$, and $\Omega_{CDM}h^{2}$ from WMAP 5, and data
for SNe Ia, yields $\Omega_{matter}$ = 0.29 $\pm$ 0.02 and H${}_{o}$ = 68
$\pm$ 2 km s${}^{-1}$ Mpc${}^{-1}$.
6 AGE OF THE UNIVERSE
There are three independent ways of determining the age of the
Universe. The first is based on an assumed cosmological model and the
current expansion rate of the Universe. The second is based on models
of stellar evolution applied to the oldest stars in the Universe. The
third is based on measurements of the angular power spectrum of
temperature fluctuations in the CMB. All three methods are completely
independent of each other, and so offer an important consistency
check. The kinematic age of the Universe is governed by the rate at
which the Universe is currently expanding, modified by the combined
extent to which gravity slows the expansion and dark energy causes it
to accelerate.
The time back to the Big Bang singularity depends upon $H_{\circ}$ and the
expansion history, which itself depends upon the composition of the
universe:
$$t_{o}=\int_{0}^{\infty}{dz\over(1+z)H(z)}=H_{\circ}^{-1}\int_{0}^{\infty}{dz\over(1+z)\left[\Omega_{matter}(1+z)^{3}+\Omega_{DE}(1+z)^{3(1+w)}\right]^{1/2}}$$
(14)
For a matter-dominated flat universe with no dark energy
($\Omega_{matter}$ = 1.0, $\Omega_{vacuum}$ = 0.0), the age is simply
2/3 of the Hubble time, or only 9.3 billion years for h = 0.7.
Not accounting for the presence of dark energy in the Universe leads
to an underestimate of its age. Before the discovery of dark energy,
an “age controversy” persisted for several decades: values of the
Hubble constant any larger than 40-50 km s${}^{-1}$ Mpc${}^{-1}$ appeared
to yield ages for the universe as a whole that were smaller than
stellar evolution calibrated ages of the oldest stars in the Milky
Way. For a universe with a Hubble constant of $73\,{\rm km\,sec^{-1}\,Mpc^{-1}}$, with $\Omega_{matter}$ = 0.27 and $\Omega_{vacuum}$ =
0.73, the age is 13.3 Gyr. Taking account of the systematic
uncertainties in $H_{\circ}$ alone, the uncertainty in the age of the
Universe is estimated to be about $\pm 0.8$ Gyr.
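Equation 14 is straightforward to evaluate numerically. The sketch below (using `scipy`) reproduces the two ages quoted above, 9.3 Gyr for the matter-dominated case with h = 0.7 and 13.3 Gyr for the concordance parameters:

```python
from scipy.integrate import quad

KM_PER_MPC = 3.0857e19    # km in one megaparsec
SEC_PER_GYR = 3.156e16    # seconds in one gigayear

def age_gyr(h0, omega_m, omega_de, w=-1.0):
    """Age of a flat FRW universe in Gyr, from Eq. 14, for a constant
    dark-energy equation of state w.  h0 is in km/s/Mpc."""
    h0_per_sec = h0 / KM_PER_MPC
    def integrand(z):
        e_z = (omega_m * (1 + z) ** 3
               + omega_de * (1 + z) ** (3 * (1 + w))) ** 0.5
        return 1.0 / ((1 + z) * e_z)
    dimensionless_age, _ = quad(integrand, 0.0, float('inf'))
    return dimensionless_age / h0_per_sec / SEC_PER_GYR

# age_gyr(70.0, 1.0, 0.0)  -> ~9.3  (Einstein-de Sitter, h = 0.7)
# age_gyr(73.0, 0.27, 0.73) -> ~13.3 (concordance model)
```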
The most well-developed of the stellar chronometers employs the oldest
stars in globular clusters in the Milky Way (Krauss & Chaboyer
2003). The largest uncertainty for this technique comes from
determination of the distances to the globular clusters. Recent
detailed stellar evolution models, when compared to observations of
globular cluster stars, yield a lower limit to their ages of
10.4 billion years (at the 95% confidence level) with a best-fit age
of 12.6 Gyr. Deriving the age for the Universe from the lower limit
requires allowing for additional time to form the globular clusters:
from theoretical considerations this is estimated to be about 0.8
billion years. This age estimate for the Universe agrees well with
the expansion age. Two other stellar chronometers, the cooling of
the oldest white dwarf stars (for a recent review see Moehler & Bono
2008) and nucleocosmochronology, the decay of radioactive isotopes
(Sneden et al. 2001), yield similar ages.
The expansion age can also be determined from measurements of the CMB
anisotropy. $H_{\circ}$ cannot be measured directly from the CMB alone,
but the heights of the peaks in the CMB spectrum provide a constraint
on the product $\Omega_{matter}H_{\circ}^{2}$, and the position of the
peaks constrain the distance to the last-scattering surface. Assuming
a flat universe yields a consistent age, $t_{o}=13.7\pm 0.13\,$Gyr
(Spergel et al. 2003; Komatsu et al. 2009), again in good agreement
with the other two techniques.
In summary, several methods of estimating the age of the universe are
now in good agreement, to within their quoted uncertainties, with a
value $t_{o}=13.7\pm 0.5\,$Gyr.
7 WHY MEASURE H${}_{o}$ TO HIGHER ACCURACY?
The importance of pinning down $H_{\circ}$ has only grown with time: not
only does it set the scale for all cosmological distances and times,
but its accurate determination is also needed to take full advantage
of the increasingly precise measurements of other cosmological
quantities. The prospects for improving the accuracy of H${}_{o}$ within
the next decade appear to be as exciting as those within the past
couple of decades. We highlight here near-term improvements to the
Cepheid-based extragalactic distance scale that will come from new
measurements of Cepheid parallaxes with GAIA and perhaps the Space
Interferometry Mission (SIM), Spitzer measurements of Cepheids
in the Milky Way, LMC, and other nearby galaxies, including NGC 4258,
Spitzer measurements of the Tully-Fisher relation and a new
calibration of the Type Ia supernova distance scale; and future
measurements of Cepheids with JWST. We describe how a more accurate
value of H${}_{o}$, combined with other future measurements of large-scale
structure and CMB anisotropies (e.g., Planck), can be used to break
degeneracies and place stronger constraints on other cosmological
parameters including the equation of state for dark energy, the energy
density in cold dark matter, and the mass of neutrinos.
While measurements of CMB anisotropies have provided dramatic
confirmation of the standard concordance model, it is important to
keep in mind that the values for many quantities (e.g., $w$, H${}_{o}$,
neutrino masses) are highly model-dependent, owing to the strong
degeneracies. A more accurate, independent measurement of H${}_{o}$ is
critical for providing stronger limits on quantities such as $w$ and
neutrino masses.
7.1 Constraints on Dark Energy
As summarized by Hu (2005), a measurement of H${}_{o}$ to the percent
level, in combination with CMB measurements with the statistical
precision of the Planck satellite, offers one of the most precise
measurements of the equation of state at z $\sim$ 0.5. At first this
result appears counter-intuitive, since the CMB anisotropies result
from physical processes imprinted on the surface of last scattering at
z $\sim$ 1100. Alone they give very little information on dark energy,
which contributes most to the expansion at lower redshifts. However,
the sound horizon provides a reference standard ruler that can be used
to provide constraints on a number of parameters including dark energy
and neutrinos. The main deviations in the Hubble parameter, the
angular diameter distance, and the growth factor due to the dark
energy equation of state manifest themselves as variations in the
local Hubble constant. In Figure 12, we show the strong degeneracy
between the equation of state and the value of H${}_{o}$. This figure is
based on a forecast of the precision that will be available with
measurements of CMB fluctuations from the Planck satellite. Improved
accuracy in the measurement of H${}_{o}$ will be critical for constraining
the equation of state for dark energy from CMB data.
7.2 Constraints on the Neutrino Mass
Improved accuracy in the measurement of H${}_{o}$ will have a significant
effect in placing constraints on the neutrino mass from measurements
of CMB anisotropies. Detailed reviews of the subject can be found in
Dolgov (1996), Crotty, Lesgourges & Pastor (2004) and Hannestad
(2006). Briefly, massive neutrinos contribute to the overall matter
density of the Universe through which they have an impact on the
growth of structure; the larger the neutrino mass, the more
free-streaming effects dampen the growth of structure on small
scales. The density in neutrinos is related to the number of massive
neutrinos, $N_{eff}$, and the neutrino mass, $m_{\nu}$, by: $\Omega_{\nu}h^{2}=N_{eff}\,m_{\nu}$ / 94 eV. From neutrino oscillation experiments,
a measurement of the difference in mass squared, $\Delta m^{2}\sim$
0.002 (eV)${}^{2}$ is obtained.
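The density relation and the oscillation result above can be put together in a short sketch, writing the neutrino density as $\Omega_{\nu}h^{2}$ and, for simplicity, assuming $N_{eff}$ degenerate masses:

```python
def omega_nu_h2(n_eff, m_nu_ev):
    """Neutrino density parameter times h**2, for n_eff species of
    (assumed degenerate) mass m_nu_ev in eV: N_eff * m_nu / 94 eV."""
    return n_eff * m_nu_ev / 94.0

# oscillation experiments fix only a mass-squared difference, so they
# set a lower bound on the heaviest mass: sqrt(0.002 eV^2) ~ 0.045 eV
min_mass_ev = 0.002 ** 0.5
```

Even the cosmological upper bound of $\sum m_{\nu}<$ 0.58 eV corresponds to $\Omega_{\nu}h^{2}\lesssim 0.006$, a small but measurable contribution to the matter budget.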
In the context of the standard cosmological model, cosmological
observations can constrain the number of neutrino species and the
absolute mass scale. Massive neutrinos have a measurable effect on the
cosmic microwave background spectrum: the relative heights of the
acoustic peaks decrease with increasing $m_{\nu}$ and the positions of
the peaks shift to higher multipole values. The WMAP 5-year data
provided evidence, for the first time, for a non-zero neutrino
background from CMB data alone, with $\sum m_{\nu}<$ 1.3 eV (95% CL)
(Dunkley et al. 2009). Combining the CMB data with results from SNe Ia
and baryon acoustic oscillations, results in a bound of $\sum m_{\nu}<$ 0.58 eV (95% CL) (Komatsu et al. 2010), reaching close to the range
implied by the neutrino oscillation experiments. Future forecasts with
Planck data suggest that an order of magnitude increase in accuracy
may be feasible. One of the biggest limitations to determining the
neutrino mass from the CMB power spectrum results from a strong
degeneracy between the neutrino mass and the Hubble constant (Komatsu
et al. 2009). As $H_{\circ}$ increases, the neutrino mass becomes
smaller (see Figure 13). An accuracy in H${}_{o}$ of 2-3%,
combined with Planck data (for the standard cosmological model) will
provide an order of magnitude improved precision on the neutrino mass.
7.3 Measuring H${}_{o}$ to $\pm$2%
Accuracy in measurement of H${}_{\circ}$ has improved significantly with
the measurement of HST Galactic Cepheid parallaxes and HST measurement
of Cepheid distances to SNe Ia hosts, as described in §3.1.4 and §3.6.2, respectively. Future
improvements will come with further HST WFC3 and Spitzer
observations of Cepheids. At 3.6 and 4.5$\mu$m the effects of
extinction are a factor of $\sim$20 smaller in comparison to optical
wavelengths. In addition, in the mid-infrared, the surface brightness
of Cepheids is insensitive to temperature. The amplitudes of the
Cepheids are therefore smaller and due to radius variations alone.
The Leavitt Law in the mid-IR then becomes almost equal to the
Period-Radius relation. From archival Spitzer data, the
mid-infrared Leavitt Law has been shown to have very small dispersion
(Freedman et al. 2008; Madore et al. 2009). Furthermore, metallicity
effects are expected to be small in the mid infrared, and Spitzer offers an opportunity to test this expectation
empirically. The calibration can be carried out using Spitzer alone,
once again eliminating cross-calibration uncertainties. A new program
aimed at addressing remaining systematic errors in the Cepheid
distance scale is the Carnegie Hubble Program (CHP: Freedman 2009).
The CHP will measure the distances to 39 Galactic Cepheids (15 of them
in anticipation of the GAIA satellite), 92 well-observed Cepheids in
the LMC, several Local Group galaxies containing known Cepheids (M31,
M33, IC 1613, NGC 6822, NGC 3109, Sextans A, Sextans B and WLM), more
distant galaxies with known Cepheids including NGC 2403 (2.5 Mpc),
Sculptor Group galaxies NGC 300, NGC 247 (3.7 Mpc), Cen A (3.5 Mpc)
and M83 (4.5 Mpc), as well as the maser galaxy NGC 4258 (at 7.2 Mpc).
It will measure the distances to 545 galaxies in 35 clusters with
measured Tully-Fisher distances, which can then be calibrated with
Cepheids as shown in Figure 7. Over 50 galaxies with SNe Ia distances
measured by Folatelli et al. (2009) will also be observed as part of
this program, allowing a determination of H${}_{o}$ with this calibration
well into the far-field Hubble flow.
As discussed earlier, the expected uncertainties from the CHP are
shown in Table 2. Re-observing the known Cepheids in more distant
galaxies will require the aperture, sensitivity and resolution of
JWST. With Spitzer, it will be possible to decrease the
uncertainties in the Cepheid distance scale to the 3-4% level, with
an application of a new mid-IR Tully-Fisher relation and a Spitzer Cepheid calibration of Type Ia SNe. It is expected that
future JWST measurements will bring the uncertainties to $\pm$2% with
a more firm calibration of SNe Ia.
8 FUTURE IMPROVEMENTS
We summarize here the steps toward measuring the Hubble constant to a
few percent accuracy. Most of these measurements should be feasible
within the next decade.
1. Mid-infrared Galactic Cepheid parallax calibration with Spitzer and GAIA.
2. Mid-infrared calibrations of Galactic and nearby Cepheid galaxies
and the infrared Tully-Fisher relation with Spitzer and JWST.
3. Increased numbers of maser distances.
4. Larger samples and improved systematics and modeling of strong
gravitational lenses and Sunyaev-Zel’dovich clusters.
5. Higher-frequency, greater sensitivity, higher angular resolution
measurements of the CMB angular power spectrum with Planck.
6. Measurements of baryon acoustic oscillations at a range of
redshifts (e.g.,
BOSS [http://cosmology.lbl.gov/BOSS/],
HETDEX [http://hetdex.org/],
WiggleZ [http://wigglez.swin.edu.au/site/],
JDEM [http://jdem.gsfc.nasa.gov/],
SKA [http://www.skatelescope.org/],
DES [http://www.darkenergysurvey.org/],
PanStarrs [http://pan-starrs.ifa.hawaii.edu/public/],
LSST [http://www.lsst.org/lsst]).
7. Beyond 2020, detection of gravitational radiation from inspiraling
massive black holes with LISA. Coupled with identification with an
electromagnetic source and therefore a redshift, this method offers,
in principle, a 1% measurement of H${}_{0}$.
9 SUMMARY POINTS
(1) Several nearby distance determination methods are now available
that are of high precision, having independent systematics. These
include Cepheid variables, the tip of the red giant branch (TRGB)
stars, and the geometrically determined distances to maser galaxies.
(2) The Cepheid Period-Luminosity relation (Leavitt Law) now has an
absolute calibration based on HST trigonometric parallaxes for
Galactic Cepheids. This calibration and its application at
near-infrared wavelengths significantly reduces two of the four
leading systematic errors previously limiting the accuracy of the
Cepheid-based distance scale: zero-point calibration and metallicity
effects.
(3) The maser galaxy distances, TRGB distances and Cepheid distances
agree to high precision at the one common point of contact where they
can each be simultaneously intercompared, the maser galaxy NGC 4258,
at a distance of 7.2 Mpc.
(4) Galactic Cepheid parallax and NGC 4258 maser calibrations of the
distance to the LMC agree very well. Based on these and other
independent measurements, we adopt a true, metallicity-corrected
distance modulus to the LMC of 18.39 $\pm$ 0.06 mag.
(5) HST optical and near-infrared observations of Cepheids in SNe Ia
galaxies calibrated by the maser galaxy, NGC 4258, have decreased
systematics due to calibration, metallicity and reddening in the SNe
Ia distance scale, and increased the number of well-observed SN
calibrators to six.
(6) The current calibration of the Cepheid and maser extragalactic
distance scales agree to within their quoted errors, yielding a value
of $H_{\circ}$ = 73 $\pm$ 2 (random) $\pm$ 4 (systematic)
km s${}^{-1}$ Mpc${}^{-1}$.
(7) Within a concordance cosmology (that is, $\Omega_{matter}$ = 0.27 and
$\Omega_{vacuum}$ = 0.73) the current value of the Hubble constant
gives an age for the Universe of 13.3 $\pm$ 0.8 Gyr. Several
independent methods (globular cluster ages, white dwarf cooling ages,
CMB anisotropies within a concordance model) all yield values in good
agreement with the expansion age.
(8) Further reductions of the known systematics in the extragalactic
distance scale are anticipated using HST, Spitzer, GAIA and
JWST. A factor of two decrease in the currently identified systematic
errors is within reach, and an uncertainty of 2% in the Hubble
constant is a realistic goal for the next decade.
(9) A Hubble constant measurement to a few percent accuracy, in
combination with measurements of anisotropies in the cosmic microwave
background from Planck, will yield valuable constraints on many other
cosmological parameters, including the equation of state for dark
energy, the mass of neutrinos, and the curvature of the universe.
10 DISCLOSURE STATEMENT
The authors are not aware of any potential biases that might be
perceived as affecting the objectivity of this review.
11 ACKNOWLEDGEMENTS
We thank the Carnegie Institution for Science, which, through its
support of astronomical facilities and research for over a century,
enabled the original discovery of the expansion of the Universe, as
well as continued efforts to measure accurately the expansion of the
universe over cosmic time. Many dedicated individuals have made
contributions to solving the problems encountered in accurately
calibrating the extragalactic distance scale; it has been a community
effort spanning the better part of a century, but it remained a
problem that could not have been solved without NASA and its vision in
supporting space astrophysics. We thank Chris Burns for producing
Figure 12. We also thank Fritz Benedict, Laura Ferrarese, Robert
Kirshner, James Moran, Jeremy Mould, and Adam Riess for their many
constructive suggestions on an earlier draft of this article.
And finally, our sincere thanks to John Kormendy, our ARAA Editor, for his
patience and his insightful and helpful comments on the final version of
this review as it went to press.
Modeling Molecular Magnets with Large Exchange and On-Site Anisotropies
Sumit Haldar
E-mail: sumithaldar@iisc.ac.in
Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore - 560012, India.
Rajamani Raghunathan
E-mail: rajamani@csr.res.in
UGC-DAE Consortium for Scientific Research, Indore - 452017, India.
Jean-Pascal Sutter
E-mail: jean-pascal.sutter@lcc-toulouse.fr
LCC-CNRS, Université de Toulouse, UPS, INPT, CNRS, Toulouse, France.
S. Ramasesha
E-mail: ramasesh@iisc.ac.in
Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore - 560012, India.
Abstract
Spins in molecular magnets can experience both anisotropic exchange interactions and on-site magnetic anisotropy. In this paper we study the effect of exchange anisotropy on the molecular magnetic anisotropy, both with and without on-site anisotropy. When both anisotropies are small, we find that the axial anisotropy parameter $D_{M}$ in the effective spin Hamiltonian is the sum of the individual contributions due to exchange and on-site anisotropies. We find that even for an axial anisotropy of about $15\%$, the low energy spectrum does not correspond to a single parent spin manifold but has intruder states arising from other parent spins. In this case, the low energy spectrum cannot be described by an effective Hamiltonian spanning the parent spin space. We study the magnetic susceptibility and specific heat as functions of temperature, and the magnetization as a function of applied field, to characterize the system in this limit. We find that there is synergy between the two anisotropies, particularly for large systems with higher site spins.
Keywords: Single Chain Magnets; Anisotropic spin Hamiltonians; On-Site Anisotropy; Low Energy Eigenvalues; Thermodynamic Properties
1 Introduction
Molecular spin clusters such as single molecule magnets (SMMs) and single chain magnets (SCMs) have been studied extensively over the last few decades [1, 2, 3, 4, 5, 6]. These spin clusters have attracted huge interest from both theoretical and experimental standpoints because of the promise they hold for applications such as memory storage devices, quantum computation, and information technologies in general [7, 8, 9, 10, 11, 12]. The main bottleneck for these applications appears to be the fast relaxation of the magnetization from the fully magnetized to the non-magnetized state. This is due to the low blocking temperature of the presently known SMMs and SCMs [13, 14]; the blocking temperature, measured as the temperature at which the relaxation time for magnetization, $\tau_{R}$, is $100\,s$, depends on the energy barrier between the two fully and oppositely magnetized states. Current research in this field is focused on enhancing the blocking temperature [15, 16].
The energy barrier $\Delta$ between the two fully and oppositely magnetized states of an anisotropic spin cluster of spin $S$ is given by $\Delta=|D_{M}|S^{2}$ for an integer spin cluster and $|D_{M}|(S^{2}-1/4)$ for a half-integer spin cluster. Therefore, there are two routes to enhancing $\Delta$: $(i)$ increasing $D_{M}$ and $(ii)$ increasing $S$. Increasing $D_{M}$ can be achieved by using magnetic building blocks with unusual coordination numbers and geometries; indeed, this has been demonstrated for hepta-coordinated complexes [17, 18, 19, 20, 21]. Increasing $S$ can be achieved by using rare earth ions in high spin states as the building blocks. However, it has been shown by Waldmann [22] that the magnetic anisotropy of a ferromagnetic assembly of spins is smaller than the anisotropy of the individual spins, as each spin center with spin $s_{i}$ contributes only a fraction
$$\displaystyle\frac{s_{i}(2s_{i}-1)}{S(2S-1)}$$
(1)
of its site anisotropy to the anisotropy of the SMM or SCM with total spin $S$. This result assumes that all the individual magnetic ions have nonzero axial anisotropy $d_{i}$ and zero planar anisotropy $e_{i}$, and that all the spin centers have the same magnetic axes. Notwithstanding this nuance, the result is illustrative of the fact that the anisotropy of the cluster is smaller than that of the individual ions.
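The barrier formula and Waldmann's fraction (eqn. 1) are easy to evaluate numerically. The following is a minimal sketch (the helper names are ours, not from the paper), using exact rational arithmetic so half-integer spins are handled cleanly:

```python
# Sketch (illustrative, not the authors' code): the relaxation barrier
# Delta = |D_M| S^2 for integer S, |D_M|(S^2 - 1/4) for half-integer S,
# and Waldmann's projection fraction s_i(2 s_i - 1) / (S(2S - 1)).

from fractions import Fraction

def barrier(D_M, S):
    """Relaxation barrier of a spin-S cluster with axial anisotropy D_M."""
    S = Fraction(S)
    if S.denominator == 1:                    # integer total spin
        return abs(D_M) * float(S**2)
    return abs(D_M) * float(S**2 - Fraction(1, 4))   # half-integer spin

def waldmann_fraction(s_i, S):
    """Fraction of a single site's anisotropy inherited by the cluster."""
    s_i, S = Fraction(s_i), Fraction(S)
    return s_i * (2*s_i - 1) / (S * (2*S - 1))

# Five ferromagnetically coupled s = 2 ions (S = 10): each site contributes
print(waldmann_fraction(2, 10))   # 3/95
```

For five $s=2$ sites each contributing $3/95$ of its anisotropy, the cluster anisotropy is $5\times 3/95\,d\simeq 0.16\,d$, illustrating the dilution the text describes.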
With $3d$ transition metal complexes, the highest blocking temperature reported is $4.5\,K$, although the energy barrier $\Delta$ is $62\,cm^{-1}$ [23]. This could be due to large off-diagonal anisotropy terms that lead to quantum tunneling of the magnetization. The anisotropy can be enhanced by choosing ions of $4d$, $5d$ or $4f$ metals, wherein relativistic effects are large, leading to large spin-orbit interactions [24, 15, 25, 26, 27, 28]. For example, in the $Dy_{4}$ systems the energy barrier is $\sim 692\,cm^{-1}$ [29]. However, large quantum tunneling of the magnetization leads to small hysteresis loops. In our previous studies [30], we have shown that large magnetic anisotropy of the building blocks leads to breaking of the spin symmetry. In that event, associating a parent spin state with a cluster to define its $D_{M}$ and $E_{M}$ parameters is not possible, due to the intrusion of states from different parent spins into the given spin manifold. In these cases, the Waldmann conclusion that the contribution of the individual anisotropies decreases with increasing total spin of the cluster is no longer valid. The properties of the system then have to be computed from the eigenstates of the full Hamiltonian.
The origin of single-ion anisotropy, as well as of anisotropic exchange interactions, lies in spin-orbit interactions. Indeed, it is difficult to assume isotropic (simple Heisenberg) exchange interactions between spin sites that are highly anisotropic. High nuclearity complexes with large anisotropic interactions are known in a few cases: the $[Mn^{III}_{6}Os^{III}]^{3+}$ cluster has $J_{x}=-9\,cm^{-1}$, $J_{y}=+17\,cm^{-1}$ and $J_{z}=-16.5\,cm^{-1}$ [31, 32, 33], and the $[Mn^{II}Mo^{III}]$ complex has $J_{z}=-34\,cm^{-1}$ and $J_{x}=J_{y}=-11\,cm^{-1}$ [34, 35]. In this study, we employ a generalized ferromagnetic XYZ model for the nearest neighbor spin-spin interactions together with on-site anisotropy. Using the full Fock space of the Hamiltonian, we follow properties such as the magnetization, susceptibility and specific heat of spin chains with ferromagnetic interactions and different site spins. In the next section we briefly discuss the spin Hamiltonian we have studied and present the numerical approach for obtaining the properties of the model. In the third section, we present the results for a purely anisotropic exchange model. This is followed in section four by results for a model with both exchange and site anisotropies. We end the paper with a discussion of all the results.
2 Methodology
The basic starting Hamiltonian for studying most magnetic materials is the isotropic Heisenberg exchange model given by
$$\hat{\mathcal{H}}_{Heis}=\sum_{\langle i,j\rangle}J_{ij}\hat{S}_{i}\cdot\hat{S}_{j}$$
(2)
where the summation is over nearest neighbors. This model assumes that spin-orbit interactions are weak, and hence the exchange constants associated with the three components of the spin are equal ($J_{ij}^{x}=J_{ij}^{y}=J_{ij}^{z}$). The isotropic model conserves both total $M_{s}$ and total $S$, and hence we can choose a spin-adapted basis such as the valence bond (VB) basis to set up the Hamiltonian matrix. The Rumer-Pauling VB basis is nonorthogonal, and hence the Hamiltonian matrix is nonsymmetric. While computing eigenstates of a nonsymmetric matrix is reasonably straightforward, computing properties of the eigenstates in the VB basis is nontrivial. However, the VB eigenstates can be transformed to eigenstates in the constant $M_{s}$ basis, and the latter, being orthonormal, is easily amenable to computing properties of the eigenstates.
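For small chains, eqn. 2 can also be diagonalized directly in the constant-$M_{s}$ product basis rather than the VB basis. The following sketch (illustrative, not the authors' implementation; all function names are ours) builds the spin matrices for an arbitrary site spin $s$ from the ladder operators and assembles the open-chain Hamiltonian with Kronecker products:

```python
# Exact diagonalization of H = J sum_i S_i . S_{i+1} in the product basis.
# Generic numpy sketch for arbitrary site spin s; not the paper's VB code.

import numpy as np

def spin_matrices(s):
    """Return (sx, sy, sz) for site spin s as (2s+1) x (2s+1) matrices."""
    m = np.arange(s, -s - 1, -1)                 # m = s, s-1, ..., -s
    sz = np.diag(m)
    # <m+1| s+ |m> = sqrt(s(s+1) - m(m+1)), placed on the superdiagonal
    sp = np.diag(np.sqrt(s*(s + 1) - m[1:]*(m[1:] + 1)), k=1)
    sx = (sp + sp.T) / 2
    sy = (sp - sp.T) / 2j
    return sx, sy, sz

def site_op(op, i, N):
    """Embed a single-site operator op at site i of an N-site chain."""
    d = op.shape[0]
    mats = [np.eye(d)] * N
    mats[i] = op
    out = mats[0]
    for mt in mats[1:]:
        out = np.kron(out, mt)
    return out

def heisenberg(N, s, J):
    """Isotropic open chain (eqn. 2); J < 0 is ferromagnetic."""
    sx, sy, sz = spin_matrices(s)
    dim = int(round((2*s + 1)**N))
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N - 1):
        for op in (sx, sy, sz):
            H += J * site_op(op, i, N) @ site_op(op, i + 1, N)
    return H

# Ferromagnetic s = 1 chain of 4 sites: ground energy is J*s^2*(N-1) = -3
E = np.linalg.eigvalsh(heisenberg(4, 1, -1.0))
print(round(E[0], 6))   # -3.0
```

The fully polarized state is an exact eigenstate of the ferromagnetic chain, so the ground energy $Js^{2}(N-1)$ provides a quick correctness check.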
When the spin-orbit interactions are weak, we can include the anisotropy arising from it by adding the site anisotropy term,
$$\hat{\mathcal{H}}_{aniso}=\sum_{i}[d_{i,z}\hat{s}_{i,z}^{2}+d_{i,x}\hat{s}_{i,x}^{2}+d_{i,y}\hat{s}_{i,y}^{2}]$$
(3)
($d_{i,x}$, $d_{i,y}$ and $d_{i,z}$ are local ion anisotropies) and treating it as a perturbation. Usually it is sufficient to deal with just the diagonal site anisotropy and set $d_{i,x}=d_{i,y}=0$. However, if the local anisotropy axis is not aligned with the global spin axis, then we need to include the off-diagonal site anisotropy terms. For weak on-site anisotropy ($\frac{d}{J}\ll 1$), we can obtain the splitting of a given total spin state perturbatively by determining the molecular anisotropy parameters $D_{M}$ and $E_{M}$ from the eigenstates of the Hamiltonian in a given spin state $S$ [36],
$$\hat{\mathcal{H}}_{mol}=D_{M}\left(\hat{S}_{z}^{2}-\frac{1}{3}S(S+1)\right)+E_{M}(\hat{S}_{x}^{2}-\hat{S}_{y}^{2})$$
(4)
Spin-orbit interaction can also lead to anisotropy in the exchange Hamiltonian leading to a general $XYZ$ model whose Hamiltonian is given by
$$\hat{\mathcal{H}}_{XYZ}=\sum_{\langle ij\rangle}[J_{ij}^{x}\hat{s}_{i}^{x}\cdot\hat{s}_{j}^{x}+J_{ij}^{y}\hat{s}_{i}^{y}\cdot\hat{s}_{j}^{y}+J_{ij}^{z}\hat{s}_{i}^{z}\cdot\hat{s}_{j}^{z}]$$
(5)
for $J_{ij}^{x}\neq J_{ij}^{y}\neq J_{ij}^{z}$. In this model there is no spin symmetry, and we need to solve for the eigenstates of the Hamiltonian in the full Fock basis, with no restrictions on total $S$ or $M_{s}$. When a system has the same exchange constant along the x and y directions but a different one along the z-direction, we obtain the XXZ model, with the Hamiltonian given by
$$\hat{\mathcal{H}}_{XXZ}=\sum_{\langle ij\rangle}J_{ij}^{x}[\hat{s}_{i}^{x}\cdot\hat{s}_{j}^{x}+\hat{s}_{i}^{y}\cdot\hat{s}_{j}^{y}]+J_{ij}^{z}\hat{s}_{i}^{z}\cdot\hat{s}_{j}^{z}$$
(6)
For convenience we write the general XYZ Hamiltonian in eqn. 5 as
$$\hat{\mathcal{H}}=\sum_{\langle ij\rangle}J[\hat{s}_{i}^{z}\cdot\hat{s}_{j}^{z}+(\gamma+\delta)\hat{s}_{i}^{x}\cdot\hat{s}_{j}^{x}+(\gamma-\delta)\hat{s}_{i}^{y}\cdot\hat{s}_{j}^{y}]$$
(7)
where $J_{ij}^{z}=J$, $\gamma=\frac{J_{ij}^{x}+J_{ij}^{y}}{2J}$ and $\delta=\frac{J_{ij}^{x}-J_{ij}^{y}}{2J}$. The deviation of $\frac{J_{ij}^{x}+J_{ij}^{y}}{2}$ from $J_{ij}^{z}$ is then represented by the parameter $\epsilon=1-\gamma$, and the difference between the exchange along the x and y directions, in normalized units, is $\delta$. This model can be solved in the constant $M_{s}$ basis. Besides exchange anisotropy, a system can also have site anisotropy, in which case $\hat{\mathcal{H}}_{aniso}$ should be considered together with the respective Hamiltonian, either perturbatively (for weak on-site anisotropy) or in the zeroth order Hamiltonian itself.
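The $(J,\epsilon,\delta)$ parametrization of eqn. 7 and the bare exchange constants $(J^{x},J^{y},J^{z})$ are related by simple arithmetic; a small illustrative helper (names are ours) makes the mapping explicit:

```python
# Mapping between (Jx, Jy, Jz) and the (J, epsilon, delta) parameters of
# eqn. 7, with J = Jz, gamma = (Jx+Jy)/(2Jz), delta = (Jx-Jy)/(2Jz),
# epsilon = 1 - gamma. Illustrative helper, not from the paper.

def to_xyz(J, epsilon, delta):
    """Recover (Jx, Jy, Jz) from the normalized parameters."""
    gamma = 1.0 - epsilon
    return (J * (gamma + delta), J * (gamma - delta), J)

def from_xyz(Jx, Jy, Jz):
    """Return (J, epsilon, delta) from the bare exchange constants."""
    gamma = (Jx + Jy) / (2 * Jz)
    delta = (Jx - Jy) / (2 * Jz)
    return (Jz, 1.0 - gamma, delta)

# XXZ limit: delta = 0, so Jx = Jy = (1 - epsilon) * Jz
print(tuple(round(v, 6) for v in to_xyz(-1.0, 0.1, 0.0)))   # (-0.9, -0.9, -1.0)
```

So for a ferromagnetic $J=-1$ chain, $\epsilon=0.1$ with $\delta=0$ simply weakens the transverse exchange by 10% relative to $J^{z}$.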
The effect of large anisotropic exchange or large site anisotropy is to mix states with different total spin $S$. Thus the conventional approach of defining molecular anisotropy constants through the effective Hamiltonian (eqn. 4) fails, as the low-lying multiplet states cannot be identified as arising from a unique total spin state, the total spin of a state not being conserved. In such situations, the approach we have taken is to obtain the thermodynamic properties such as the susceptibility $\chi(T)$, magnetization $M(T)$ and specific heat $C_{v}(T)$ of the system as a function of the Hamiltonian parameters. These are computed from the canonical partition function obtained from the full spectrum of the Hamiltonian. The dimension of the full Fock space of the Hamiltonian is $(2s_{i}+1)^{N}$, where $N$ is the number of sites in the spin chain. The largest system we have studied corresponds to $s_{i}=2$ and $N=5$, which spans a Fock space of dimension 3,125. For the magnetic properties we need to calculate $\langle\langle M_{s}\rangle\rangle$, the thermodynamic average of the expectation values in the eigenstates. To obtain the spin expectation value $\langle\hat{S}^{2}\rangle$ in an eigenstate we have computed the spin-spin correlation functions $\langle\hat{s}_{i}^{z}\hat{s}_{j}^{z}\rangle$, $\langle\hat{s}_{i}^{x}\hat{s}_{j}^{x}\rangle$ and $\langle\hat{s}_{i}^{y}\hat{s}_{j}^{y}\rangle$.
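Once the full spectrum is in hand, the thermodynamic averages follow from the canonical partition function. The sketch below (a generic recipe in reduced units $k_{B}=g=\mu_{B}=1$, with illustrative names; it is not the authors' code) computes the zero-field $\chi_{zz}T$ from the fluctuation of $M_{s}$ and $C_{v}$ from the energy fluctuation:

```python
# Thermodynamic averages from the full spectrum: given all eigenvalues E_n
# and per-eigenstate expectation values <M_s^2>_n, compute chi_zz*T and C_v
# from the canonical partition function. Reduced units (k_B = g = mu_B = 1).

import numpy as np

def thermal_averages(E, msq, T):
    """E: eigenvalues; msq: <M_s^2> per eigenstate; T: temperature."""
    w = np.exp(-(E - E.min()) / T)          # shift by E_min for stability
    Z = w.sum()
    # zero-field chi_zz*T = <M_s^2> - <M_s>^2; <M_s> = 0 by symmetry
    chiT = (msq * w).sum() / Z
    Emean = (E * w).sum() / Z
    E2mean = (E**2 * w).sum() / Z
    Cv = (E2mean - Emean**2) / T**2          # energy-fluctuation formula
    return chiT, Cv

# Toy check: a free spin-1 ion (H = 0), states M_s = -1, 0, +1
E = np.zeros(3)
msq = np.array([1.0, 0.0, 1.0])
chiT, Cv = thermal_averages(E, msq, T=1.0)
print(round(chiT, 6), round(Cv, 6))   # 0.666667 0.0
```

The free-ion value $\chi T = \frac{1}{3}s(s+1)(2s+1)/(2s+1)\cdot 2 = 2/3$ for $s=1$ is the expected Curie limit in these units, and $C_{v}=0$ because all levels are degenerate.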
3 Anisotropic Exchange Models
Here, we discuss the magnetic anisotropy arising only from the exchange anisotropy. In the small exchange anisotropy limit, we first consider the XXZ model and XYZ model with small $\delta$. We will end this section with a discussion of the XYZ models with large anisotropy parameters $\epsilon$ and $\delta$. All the exchange interactions are taken to be ferromagnetic.
3.1 Small Anisotropy models
In this model we set $\delta$ to zero in eqn. 7 and study spin chains with site spins $1$, $3/2$ and $2$, in chains of 4 and 5 sites with open boundary conditions. We have not considered the spin-1/2 system since we wish to study the synergistic effect of anisotropic exchange and on-site anisotropy, and the latter exists only for site spins greater than one half. The ground state in each case corresponds to $M_{s}=\pm Ns$, where $N$ is the number of sites and $s$ is the site spin. The total spin of the states is calculated from the eigenstates as the expectation value of $\hat{S}^{2}$.
In table 1 we present the energy gaps from the ground state of the low-lying states, up to the first $M_{s}=0$ state, for short spin chains of up to five spins for different $\epsilon$ values. The corresponding table for spin chains of four spins is given in the supporting material. We notice from the table that for $\epsilon=0.1$ the lowest energy states of the $s=1$ chain satisfy $E(|M_{s}|=Ns)<E(|M_{s}|=Ns-1)<...<E(|M_{s}|=0)$, and the total spin of these states is also very close to $Ns$. In this case we can fit the energy gaps to the Hamiltonian $D_{M}S_{z}^{2}$. The diagonal anisotropy of these states is shown in fig. 1.
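When the low-lying levels follow $E(|M_{s}|)=E_{0}+D_{M}M_{s}^{2}$ (eqn. 4 with $E_{M}=0$), $D_{M}$ can be extracted by a linear least-squares fit against $M_{s}^{2}$. A short illustrative sketch with synthetic gaps (not the paper's actual fit code or data):

```python
# Extracting D_M by fitting level energies to E0 + D_M * M_s^2.
# Illustrative sketch with a synthetic spectrum; names are ours.

import numpy as np

def fit_DM(ms, energies):
    """Least-squares fit of energies to E0 + D_M * ms^2; returns D_M."""
    A = np.vstack([np.ones_like(ms, dtype=float), ms.astype(float)**2]).T
    coef, *_ = np.linalg.lstsq(A, energies, rcond=None)
    return coef[1]

# Synthetic manifold with D_M = -0.2 (ground state at the largest |M_s|)
ms = np.arange(0, 6)
E = -0.2 * ms**2
print(round(fit_DM(ms, E), 6))   # -0.2
```

In practice the fit is done on the computed gaps of table 1; deviations of the levels from the quadratic form signal the intruder states discussed below.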
In the XXZ model we do not have off-diagonal anisotropy, i.e., $E_{M}=0$ in the anisotropic Hamiltonian given by eqn. 4. We note in table 1 that for spin chains with $s=3/2$ and $s=2$, there are intruder states within the manifolds of $S\simeq 7.5$ and $S\simeq 10$ respectively. We also find that as $\epsilon$ is increased to $0.15$, even the $s=1$ spin chain has intruders. Furthermore, for site spin $2$, the intruders within the $S=10$ manifold come from progressively lower total spin states, namely $S=9$, $8$ and $7$. Thus, it is no longer meaningful to define molecular $D_{M}$ and $E_{M}$ parameters. For the $N=4$ chains the intruder states occur in the $s=1$ chain for $\epsilon=0.25$ and in the $s=3/2$ and $s=2$ chains for $\epsilon=0.20$ (see supporting material). Thus, intruders arise at smaller $\epsilon$ values for longer chains and higher site spins. $|D_{M}|$ increases linearly with increasing anisotropy (fig. 1).
We have obtained the thermodynamic properties of these spin chains as a function of temperature, and the magnetization as a function of magnetic field at a fixed temperature. We show in fig. 2 the temperature dependence of $\chi_{{}_{xx}}T(=\chi_{{}_{yy}}T)$ and $\chi_{{}_{zz}}T$ for spin chains of five spins for different values of the site spin. As expected, the susceptibility increases with site spin in all cases. The $\chi_{{}_{zz}}T$ component is much larger than the $\chi_{{}_{xx}}T$ component, and both show a maximum. The maximum is at a higher temperature for $\chi_{{}_{xx}}T$ than for $\chi_{{}_{zz}}T$, and the $\chi_{{}_{xx}}T$ maximum is also broader. We also note that $\chi_{{}_{zz}}T$ is larger than $\chi_{{}_{xx}}T$ by a factor of between 2 and 3, even though the maximum anisotropy $\epsilon$ is only 0.25. Moreover, the temperature of the maximum also increases with site spin. The $ZZ$ component is larger for large anisotropy while the $XX$ component is smaller: as $\epsilon$ increases it becomes easier to magnetize along the z-axis, and harder to magnetize in the x-y plane. This trend is also seen in the magnetization plots as a function of magnetic field shown in fig. 3. We note that the magnetization $\langle M_{z}\rangle$ increases with $\epsilon$ while $\langle M_{x}\rangle$ decreases with $\epsilon$ for the same applied field.
The dependence of specific heat, $C_{v}$, on temperature for different $\epsilon$ values is shown in fig. 4.
We find that for small $\epsilon$ the specific heat shows two peaks: the first peak is narrow and the second peak is broad. This is seen for all site spins. It can be understood from the nature of the full energy spectrum of the Hamiltonian for different $\epsilon$ values (fig. 5). We see that there are two successive small gaps in the spectrum below $0.13J$ for small anisotropy, but these gaps shift to much higher energies for large anisotropy. This implies that at small anisotropy the specific heat first increases with temperature and then drops, as the thermal energy cannot access the higher energy states. As the temperature increases further, the higher energy states become populated, leading to an increase in specific heat. Thus, the temperature dependence of the magnetic specific heat can be used as a tool to estimate the anisotropy of the chain.
Introducing a small planar anisotropy, $\delta$, does not significantly change the low energy spectrum (table 2), and consequently there is no discernible change in the thermodynamic properties. The main difference is that $M_{s}$ is no longer conserved, even for small values of $\delta$.
3.2 Large Anisotropy models
To explore the properties of the spin chains in the large anisotropy limit, we have studied $s=1$, $3/2$ and $2$ models with $\epsilon$ up to $0.75$ and $\delta$ up to $0.15$. In this limit, there are no conserved spin quantities, hence we have studied only thermodynamic properties by computing thermodynamic averages from expectation values in the eigenstates of the Hamiltonian.
All three diagonal components of the susceptibility as a function of temperature are shown in fig. 6. We find that for large anisotropy $\chi_{{}_{zz}}T$ increases with $\epsilon$ and $\delta$, while $\chi_{{}_{xx}}T$ and $\chi_{{}_{yy}}T$ decrease with $\epsilon$ and $\delta$. $\chi_{{}_{zz}}T$ shows a smooth maximum in all the cases we have studied, but $\chi_{{}_{xx}}T$ and $\chi_{{}_{yy}}T$ do not show a discernible maximum. The $\chi_{{}_{zz}}T$ maximum occurs at a lower temperature than the $\chi_{{}_{xx}}T$ and $\chi_{{}_{yy}}T$ maxima (when they exist). More significantly, $\chi_{{}_{zz}}T$ is higher for higher anisotropy, while $\chi_{{}_{xx}}T$ and $\chi_{{}_{yy}}T$ are higher for lower anisotropy.
In fig. 7 we show the behaviour of magnetization as a function of the field at $k_{B}T/J=1$. We find very different behaviour for $M_{z}$ compared to $M_{x}$ or $M_{y}$. The $M_{z}$ component shows saturation at low magnetic fields. The saturation field decreases with increasing site spin. On the other hand, the $M_{x}$ and $M_{y}$ components show saturation only for small anisotropy. For large anisotropy they do not show saturation and show a nearly linear increase in magnetization component over the full range of the applied magnetic field. Furthermore, the magnitude of the magnetization decreases with increasing anisotropy at a given field strength.
The specific heat behaviour is similar to the weak anisotropy case: we find a sharp peak at low temperature followed by a broad peak at higher temperatures. At higher anisotropies we find a single peak in the $C_{v}$ vs $T$ plot (fig. 8), and the temperature of the peak maximum is higher for higher anisotropy. For a fixed anisotropy, the peak maximum shifts to higher temperature as the site spin increases from $s=1$ to $s=2$.
4 Systems with Exchange and On-Site Anisotropies
In an earlier paper [30] we discussed the role of on-site single-ion anisotropy in the anisotropy of a spin chain. In this section we discuss the effect of both exchange and on-site anisotropy on the magnetic properties of a spin chain.
We have introduced the on-site anisotropy ($d/J$) in eqn. 7 and studied spin chains of five sites with site spins $s=1$, $3/2$ and $2$. We set $\delta=0$ and study only XXZ models in the presence of site anisotropy, taking the same on-site anisotropy, aligned along the z-axis, for all the spins. When the on-site anisotropy is weak, we find that the resultant molecular magnetic anisotropy is nearly the sum of the molecular anisotropy due to on-site anisotropy alone and that due to exchange anisotropy alone. Thus, the two anisotropies are additive, as seen in fig. 1. This holds up to $\epsilon=0.1$ for all the site spins.
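Since eqn. 7 is not reproduced in this excerpt, the following sketch assumes a generic ferromagnetic XXZ chain with z-aligned single-ion anisotropy, $H=-J\sum_{i}[S^{x}_{i}S^{x}_{i+1}+S^{y}_{i}S^{y}_{i+1}+(1+\epsilon)S^{z}_{i}S^{z}_{i+1}]+d\sum_{i}(S^{z}_{i})^{2}$ on an open chain; the form, signs and parameter values are illustrative assumptions, not the authors' exact Hamiltonian. It shows how such a chain can be diagonalized exactly for small sizes:

```python
import numpy as np

# Hedged sketch (not the authors' code): assumed anisotropic chain
#   H = -J sum_i [Sx_i Sx_{i+1} + Sy_i Sy_{i+1} + (1+eps) Sz_i Sz_{i+1}]
#       + d_ion sum_i (Sz_i)^2        (open boundary conditions)
# diagonalized exactly for a short chain of site spin s.

def spin_ops(s):
    """Spin matrices (Sx, Sy, Sz) for site spin s, basis m = s, ..., -s."""
    dim = int(round(2 * s + 1))
    m = s - np.arange(dim)
    Sz = np.diag(m)
    sp = np.zeros((dim, dim))            # raising operator S+
    for k in range(1, dim):
        sp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    Sx = 0.5 * (sp + sp.T)
    Sy = -0.5j * (sp - sp.T)
    return Sx, Sy, Sz

def xxz_chain(n, s, J=1.0, eps=0.0, d_ion=0.0):
    """Dense Hamiltonian of the assumed anisotropic chain (open ends)."""
    Sx, Sy, Sz = spin_ops(s)
    dim = Sx.shape[0]

    def site_op(op, i):
        out = np.array([[1.0]])
        for j in range(n):
            out = np.kron(out, op if j == i else np.eye(dim))
        return out

    H = np.zeros((dim ** n, dim ** n), dtype=complex)
    for i in range(n - 1):               # nearest-neighbour exchange
        H -= J * (site_op(Sx, i) @ site_op(Sx, i + 1)
                  + site_op(Sy, i) @ site_op(Sy, i + 1)
                  + (1 + eps) * site_op(Sz, i) @ site_op(Sz, i + 1))
    for i in range(n):                   # on-site anisotropy, z-aligned
        H += d_ion * site_op(Sz, i) @ site_op(Sz, i)
    return H

# Low-lying spectrum of a 3-site s=1 chain with eps = 0.3, d/J = 0.1:
E = np.linalg.eigvalsh(xxz_chain(3, 1.0, J=1.0, eps=0.3, d_ion=0.1))
print(E[:4].round(4))
```

For the isotropic limit ($\epsilon=d=0$) the ground manifold of this 3-site $s=1$ ferromagnet is the maximal-spin multiplet ($S=3$, degeneracy 7, energy $-2J$), a useful sanity check on the construction.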
In table 3, we show the low-energy spectrum of the $N=5$ spin chain for $s=1$, $3/2$ and $2$, where both exchange and on-site anisotropies are large. In cases where we cannot define the molecular magnetic anisotropy in terms of the parameter $D_{M}$ of the effective spin Hamiltonian, we follow the system by computing the magnetic susceptibilities, magnetization and specific heat. In fig. 9 we show the differences $\Delta\chi_{{}_{xx}}T=\chi_{{}_{xx}}T(\epsilon,d\neq 0)-\chi_{{}_{xx}}T(\epsilon,d=0)$ and $\Delta\chi_{{}_{zz}}T=\chi_{{}_{zz}}T(\epsilon,d\neq 0)-\chi_{{}_{zz}}T(\epsilon,d=0)$ in the magnetic susceptibility as a function of $d/J$ at $k_{B}T/J=1$ for different $\epsilon$ values. We find that nonzero $d$ enhances $\Delta\chi_{{}_{zz}}T$ but decreases $\Delta\chi_{{}_{xx}}T$. For site spin $s=1$, the dependence of $\Delta\chi_{{}_{xx}}T$ and $\Delta\chi_{{}_{zz}}T$ on the site anisotropy is weak and linear. For $s=3/2$ and $s=2$, the difference $\Delta\chi_{{}_{zz}}T$ increases sharply as $d/J$ is increased and tends to saturate at higher $d/J$; the saturation is more apparent in the $s=2$ case. $\Delta\chi_{{}_{xx}}T$, on the other hand, decreases with increasing $d/J$, because the on-site anisotropy is oriented along the z-axis. This is also why $\Delta\chi_{{}_{xx}}T$ shows a sharper drop with $d/J$ for larger $\epsilon$, while $\Delta\chi_{{}_{zz}}T$ shows a sharper rise for larger $\epsilon$.
Similarly, in fig. 10 we plot $\Delta M_{x}$ and $\Delta M_{z}$ as a function of $d/J$ for different $s$ and $\epsilon$, at a field strength $g\mu_{B}H=J/2$. We note that $\Delta M_{x}$ decreases sharply with $d/J$ for $s=2$ and large $\epsilon$, while $\Delta M_{z}$ increases with $d/J$ and saturates in the $s=2$ case; in the $s=3/2$ and $s=1$ cases, saturation does not occur even for $d/J=1.0$. Again, $\Delta M_{z}$ is larger when $\epsilon$ is small, while $\Delta M_{x}$ is larger for large $\epsilon$.
The specific heat behaviour is shown in fig. 11. The two-peak structure persists for small $d/J$ at $\epsilon=0.1$. However, increasing $d/J$ leads to a single peak, which shifts to higher temperatures and becomes sharper as $d/J$ increases. This holds for all site spins.
5 Conclusions
Our study of anisotropic ferromagnetic exchange models with site anisotropy shows that for small exchange and site anisotropies, the energy-level splitting of the total spin states can be characterized by the axial anisotropy parameter $D_{M}$, which is the sum of the exchange-only and ion-only $D_{M}$ parameters. For large anisotropic exchange, neither the total spin nor its z-component is conserved, and it is not possible to define the molecular anisotropy parameters $D_{M}$ and $E_{M}$. The effect of anisotropy is then studied by following thermodynamic properties such as $\chi$, $C_{v}$ and $M$. The same is true when the on-site anisotropy is large, even in the absence of exchange anisotropy. We find a two-peak structure in $C_{v}$ vs $T$ when the exchange is weakly anisotropic, and this feature prevails for weak on-site anisotropy as well; the dual-peak structure is more pronounced for smaller site spins. In general, the effect of anisotropy, as seen from the presence of intruder states from different parent spin states, is more pronounced for higher site spins and longer chain lengths. The interplay between site anisotropy and exchange anisotropy becomes complicated when both are strong: the differences in susceptibilities, as well as in magnetization, as a function of the site-anisotropy strength become highly nonlinear for large exchange anisotropy, particularly for systems with higher site spin.
6 Acknowledgements
SR and JPS acknowledge support through IFCPAR/CEFIPRA projects. SR also thanks DST for support through various projects and a fellowship, and the Indian Science Academy for a Senior Scientist position. RR thanks TUE-DST for support.
References
[1]
Roberta Sessoli, Dante Gatteschi, Hui Lien Tsai, David N. Hendrickson, Ann R.
Schake, Sheyi Wang, John B. Vincent, George Christou, and Kirsten Folting.
High-Spin Molecules: [Mn12O12(O2CR)16(H2O)4].
Journal of the American Chemical Society, 115(5):1804–1816,
mar 1993.
[2]
Dante Gatteschi and Roberta Sessoli.
Quantum tunneling of magnetization and related phenomena in
molecular materials.
In Angewandte Chemie - International Edition, volume 42, pages
268–297. WILEY - VCH Verlag, jan 2003.
[3]
Minoru Takahashi.
Analytical and Numerical Investigations of Spin Chains.
Progress of Theoretical Physics, 91(1):1–15, 1994.
[4]
L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, and B. Barbara.
Macroscopic quantum tunnelling of magnetization in a single crystal
of nanomagnets.
Nature, 383(6596):145–147, sep 1996.
[5]
R. Sessoli, D. Gatteschi, A. Caneschi, and M. A. Novak.
Magnetic bistability in a metal-ion cluster.
Nature, 365(6442):141–143, sep 1993.
[6]
George Christou, Dante Gatteschi, David N. Hendrickson, and Roberta Sessoli.
Single-molecule magnets.
MRS Bulletin, 25(11):66–71, nov 2000.
[7]
J Tejada, E M Chudnovsky, E Del Barco, J M Hernandez, and T P Spiller.
Magnetic qubits as hardware for quantum computers.
Nanotechnology, 12(01):181–186, 2001.
[8]
Richard E. P. Winpenny.
Quantum Information Processing Using Molecular Nanomagnets As
Qubits.
Angewandte Chemie International Edition, 47(42):7992–7994,
2008.
[9]
Jörg Lehmann, Alejandro Gaita-Ariño, Eugenio Coronado, and Daniel
Loss.
Quantum computing with molecular spin systems.
J. Mater. Chem., 19(12):1672–1677, mar 2009.
[10]
Romain Vincent, Svetlana Klyatskaya, Mario Ruben, Wolfgang Wernsdorfer, and
Franck Balestro.
Electronic read-out of a single nuclear spin using a molecular
spin transistor, volume 488.
Nature Publishing Group, aug 2012.
[11]
Filippo Troiani and Marco Affronte.
Molecular spins for quantum information technologies,
volume 40.
The Royal Society of Chemistry, may 2011.
[12]
Michael N. Leuenberger and Daniel Loss.
Quantum computing in molecular magnets.
Nature, 410(6830):789–793, apr 2001.
[13]
Dante Gatteschi, Roberta Sessoli, and Jacques Villain.
Molecular Nanomagnets, volume 9780198567.
Oxford University Press, mar 2007.
[14]
Selvan Demir, Miguel I. Gonzalez, Lucy E. Darago, William J. Evans, and
Jeffrey R. Long.
Giant coercivity and high magnetic blocking temperatures for
$N_{2}^{3-}$-radical-bridged dilanthanide complexes upon ligand dissociation.
Nature Communications, 8(1):2144, dec 2017.
[15]
Daniel N. Woodruff, Richard E.P. Winpenny, and Richard A. Layfield.
Lanthanide single-molecule magnets, volume 113.
American Chemical Society, jul 2013.
[16]
Stuart K. Langley, Daniel P. Wielechowski, Boujemaa Moubaraki, and Keith S.
Murray.
Enhancing the magnetic blocking temperature and magnetic coercivity
of ${Cr_{2}^{III}Ln_{2}^{III}}$ single-molecule magnets via bridging ligand
modification.
Chemical Communications, 52(73):10976–10979, sep 2016.
[17]
Renaud Ruamps, Luke J. Batchelor, Rémi Maurice, Nayanmoni Gogoi, Pablo
Jiménez-Lozano, Nathalie Guihéry, Coen Degraaf, Anne Laure Barra,
Jean Pascal Sutter, and Talal Mallah.
Origin of the magnetic anisotropy in heptacoordinate $Ni^{II}$ and
$Co^{II}$ complexes.
Chemistry - A European Journal, 19(3):950–956, jan 2013.
[18]
Nayanmoni Gogoi, Mehrez Thlijeni, Carine Duhayon, and Jean Pascal Sutter.
Heptacoordinated nickel(II) as an ising-type anisotropic building
unit: Illustration with a pentanuclear [(NiL)3{W(CN)8} 2] complex.
Inorganic Chemistry, 52(5):2283–2285, mar 2013.
[19]
Thengarai S. Venkatakrishnan, Shaon Sahoo, Nicolas Bréfuel, Carine
Duhayon, Carley Paulsen, Anne Laure Barra, S. Ramasesha, and Jean Pascal
Sutter.
Enhanced ion anisotropy by nonconventional coordination geometry:
Single-chain magnet behavior for a [{FeIIL}2{Nb IV(CN)8}] helical
chain compound designed with heptacoordinate FeII.
Journal of the American Chemical Society, 132(17):6047–6056,
may 2010.
[20]
Arun Kumar Bar, Céline Pichon, Nayanmoni Gogoi, Carine Duhayon,
S. Ramasesha, and Jean Pascal Sutter.
Single-ion magnet behaviour of heptacoordinated Fe(ii) complexes: On
the importance of supramolecular organization.
Chemical Communications, 51(17):3616–3619, feb 2015.
[21]
Arun Kumar Bar, Nayanmoni Gogoi, Céline Pichon, V. M. L. Durga Prasad Goli,
Mehrez Thlijeni, Carine Duhayon, Nicolas Suaud, Nathalie Guihéry,
Anne Laure Barra, S. Ramasesha, and Jean Pascal Sutter.
Pentagonal Bipyramid FeII Complexes: Robust Ising-Spin Units towards
Heteropolynuclear Nanomagnets.
Chemistry - A European Journal, 23(18):4380–4396, mar 2017.
[22]
Oliver Waldmann.
A criterion for the anisotropy barrier in single-molecule magnets.
Inorganic Chemistry, 46(24):10035–10037, 2007.
[23]
Constantinos J. Milios, Alina Vinslava, Wolfgang Wernsdorfer, Stephen Moggach,
Simon Parsons, Spyros P. Perlepes, George Christou, and Euan K. Brechin.
A record anisotropy barrier for a single-molecule magnet.
Journal of the American Chemical Society, 129(10):2754–2755,
2007.
[24]
Jinkui Tang, Ian Hewitt, N. T. Madhu, Guillaume Chastanet, Wolfgang
Wernsdorfer, Christopher E. Anson, Cristiano Benelli, Roberta Sessoli, and
Annie K. Powell.
Dysprosium triangles showing single-molecule magnet behavior of
thermally excited spin states.
Angewandte Chemie - International Edition, 45(11):1729–1733,
mar 2006.
[25]
Jeffrey D. Rinehart and Jeffrey R. Long.
Exploiting single-ion anisotropy in the design of f-element
single-molecule magnets.
Chemical Science, 2(11):2078, oct 2011.
[26]
Jan Dreiser, Kasper S. Pedersen, Alexander Schnegg, Karsten Holldack, Joscha
Nehrkorn, Marc Sigrist, Philip Tregenna-Piggott, Hannu Mutka, Høgni Weihe,
Vladimir S. Mironov, Jesper Bendix, and Oliver Waldmann.
Three-axis anisotropic exchange coupling in the single-molecule
magnets NEt4[MnIII2(5-Brsalen)2(MeOH) 2MIII(CN)6] (M=Ru, Os).
Chemistry - A European Journal, 19(11):3693–3701, mar 2013.
[27]
Vladimir S. Mironov, Liviu F. Chibotaru, and Arnout Ceulemans.
Mechanism of a strongly anisotropic MoIII-CN-MnII spin-spin coupling
in molecular magnets based on the [Mo(CN)7] 4- heptacyanometalate: A new
strategy for single-molecule magnets with high blocking temperatures.
Journal of the American Chemical Society, 125(32):9750–9760,
2003.
[28]
Miriam V. Bennett and Jeffrey R. Long.
New cyanometalate building units: Synthesis and characterization of
[Re(CN)7]3- and [Re(CN)8]3-.
Journal of the American Chemical Society, 125(9):2394–2395,
2003.
[29]
Robin J. Blagg, Liviu Ungur, Floriana Tuna, James Speak, Priyanka Comar, David
Collison, Wolfgang Wernsdorfer, Eric J.L. McInnes, Liviu F. Chibotaru, and
Richard E.P. Winpenny.
Magnetic relaxation pathways in lanthanide single-molecule magnets.
Nature Chemistry, 5(8):673–678, aug 2013.
[30]
Sumit Haldar, Rajamani Raghunathan, Jean Pascal Sutter, and S. Ramasesha.
Modelling magnetic anisotropy of single-chain magnets in |d/J| $\geq$
1 regime.
Molecular Physics, 115(21-22):2849–2859, nov 2017.
[31]
Veronika Hoeke, Anja Stammler, Hartmut Bögge, Jürgen Schnack, and
Thorsten Glaser.
Strong and anisotropic superexchange in the single-molecule magnet
(SMM) [MnIII6OsIII]3+: Promoting SMM behavior through 3d-5d transition metal
substitution.
Inorganic Chemistry, 53(1):257–268, jan 2014.
[32]
Jan Dreiser, Kasper S. Pedersen, Alexander Schnegg, Karsten Holldack, Joscha
Nehrkorn, Marc Sigrist, Philip Tregenna-Piggott, Hannu Mutka, Høgni Weihe,
Vladimir S. Mironov, Jesper Bendix, and Oliver Waldmann.
Three-axis anisotropic exchange coupling in the single-molecule
magnets NEt4[MnIII2(5-Brsalen)2(MeOH) 2MIII(CN)6] (M=Ru, Os).
Chemistry - A European Journal, 19(11):3693–3701, mar 2013.
[33]
Vladimir S. Mironov, Liviu F. Chibotaru, and Arnout Ceulemans.
Mechanism of a strongly anisotropic $Mo^{III}-CN-Mn^{II}$ spin-spin
coupling in molecular magnets based on the $[Mo(CN)_{7}]^{4-}$ heptacyanometalate:
A new strategy for single-molecule magnets with high blocking temperatures.
Journal of the American Chemical Society, 125(32):9750–9760,
2003.
[34]
Vladimir S. Mironov.
Origin of Dissimilar Single-Molecule Magnet Behavior of Three
MnII2MoIII Complexes Based on [MoIII(CN)7]4- Heptacyanomolybdate: Interplay
of MoIII-CN-MnII Anisotropic Exchange Interactions.
Inorganic Chemistry, 54(23):11339–11355, dec 2015.
[35]
Kun Qian, Xing-Cai Huang, Chun Zhou, Xiao-Zeng You, Xin-Yi Wang, and Kim R.
Dunbar.
A Single-Molecule Magnet Based on Heptacyanomolybdate with the
Highest Energy Barrier for a Cyanide Compound.
Journal of the American Chemical Society, 135(36):13302–13305,
sep 2013.
[36]
Rajamani Raghunathan, S. Ramasesha, and Diptiman Sen.
Theoretical approach for computing magnetic anisotropy in single
molecule magnets.
Physical Review B, 78(10):104408, sep 2008. |
Interpreting chest X-rays via CNNs that exploit
disease dependencies and uncertainty labels
Hieu H. Pham${}^{*}$, Tung T. Le, Dat Q. Tran, Dat T. Ngo, Ha Q. Nguyen
Medical Imaging Group, Vingroup Big Data Institute (VinBDI)
458 Minh Khai street, Hai Ba Trung, Hanoi, Vietnam
Abstract
Chest radiography is one of the most common types of diagnostic radiology exams and is critical for the screening and diagnosis of many different thoracic diseases. Specialized algorithms have been developed to detect several specific pathologies such as lung nodules or lung cancer. However, accurately detecting the presence of multiple diseases from chest X-rays (CXRs) is still a challenging task. This paper presents a supervised multi-label classification framework based on deep convolutional neural networks (CNNs) for predicting the risk of 14 common thoracic diseases. We tackle this problem by training state-of-the-art CNNs that exploit dependencies among abnormality labels. We also propose to use the label smoothing technique for a better handling of uncertain samples, which occupy a significant portion of almost every CXR dataset. Our model is trained on over 200,000 CXRs of the recently released CheXpert dataset and achieves a mean area under the curve (AUC) of 0.940 in predicting 5 selected pathologies from the validation set; to date, this is the highest reported AUC score on this benchmark. The proposed method is also evaluated on the independent test set of the CheXpert competition, which is composed of 500 CXR studies annotated by a panel of 5 experienced radiologists. The performance is on average better than that of 2.6 out of 3 other individual radiologists, with a mean AUC of 0.930, which ranks first on the CheXpert leaderboard at the time of writing this paper.
keywords:
Chest X-ray, CheXpert, Multi-label classification, Uncertainty label, Label smoothing, Label dependency, Hierarchical learning
††journal: Neurocomputing
1 Introduction
Chest X-ray (CXR) is one of the most common radiological exams for diagnosing many different diseases related to the lungs and heart, with millions of scans performed globally every year NHS_England . Many of these diseases can be deadly if not diagnosed quickly and accurately.
A computer-aided diagnosis (CAD) system that is able to interpret CXRs at a performance level comparable to practicing radiologists could provide substantial benefits for many realistic clinical contexts. In this work, we investigate the problem of multi-label classification for CXRs using deep convolutional neural networks (CNNs).
There has been a recent effort to harness advances in machine learning, especially deep learning, to build a new generation of CAD systems for the classification and localization of common thoracic diseases from CXR images qin2018computer . Several motivations are behind this transformation. First, interpreting CXRs to accurately diagnose pathologies is difficult. Even the best radiologists are prone to misdiagnoses due to challenges in distinguishing different kinds of pathologies, many of which often have similar visual features delrue2011difficulties . Therefore, a high-precision method for the classification and localization of common thoracic diseases can be used as a second reader to support the decision making of radiologists and to help reduce diagnostic error. It also addresses the lack of diagnostic expertise in areas where radiologists are scarce or unavailable crisp2014global ; theatlantic . Second, such a system can be used as a screening tool that helps reduce the waiting time of patients in hospitals and allows care providers to respond to emergency situations sooner or to speed up a diagnostic imaging workflow annarumma2019automated . Third, deep neural networks, in particular deep CNNs, have shown remarkable performance in various applications in medical imaging analysis Litjens-et-al:2017 , including the CXR interpretation task.
Many deep learning-based approaches have been proposed for classifying lung diseases and have shown that they can help radiologists overcome the limitations of human perception and bias, as well as reduce errors in diagnosis. Almost all of these approaches, however, aim to detect specific diseases such as pneumonia jaiswal2019identifying , tuberculosis lakhani2017deep ; pasa2019efficient , or lung cancer ausawalaithong2018automatic . Meanwhile, building a unified deep learning framework that accurately detects the presence of multiple common thoracic diseases from CXRs remains a difficult task that requires much research effort. In particular, we observe that standard multi-label classifiers often ignore domain knowledge. For example, in the case of CXR data, how to leverage clinical taxonomies of disease patterns and how to handle uncertainty labels are still open questions that have not received much research attention. This observation motivates us to build and optimize a predictive model based on deep CNNs for CXR interpretation in which dependencies among labels and uncertainty information are taken into account during both the training and inference stages. Specifically, we develop a deep learning-based approach that combines the ideas of conditional training Chen2019DeepHM and label smoothing Muller2019WhenDL into a novel training procedure for classifying 14 common lung diseases and observations. We trained our system on more than 200,000 CXRs of the CheXpert dataset Irvin2019CheXpertAL , one of the largest CXR datasets currently available, and evaluated it on a validation set containing 200 studies, which were manually annotated by 3 board-certified radiologists. The proposed method is also tested against the majority vote of 5 radiologists on the hidden test set of the CheXpert competition, which contains 500 studies.
This study makes several contributions. First, we propose a novel training strategy for multi-label CXR classification that incorporates (1) a conditional training process based on a predefined disease hierarchy and (2) a smoothing policy for uncertainty labels. The benefits of these two key factors are empirically demonstrated through our ablation studies. Second, we train a series of state-of-the-art CNNs on frontal-view CXRs of the CheXpert dataset for classifying 14 common thoracic diseases. Our best model, an ensemble of various CNN architectures, achieves the highest area under the ROC curve (AUC) score on both the validation and test sets of CheXpert at the time of writing. Specifically, on the validation set it yields an average AUC of 0.940 in predicting 5 selected lung diseases: Atelectasis (0.909), Cardiomegaly (0.910), Edema (0.958), Consolidation (0.957) and Pleural Effusion (0.964). This model improves on the baseline method reported in Irvin2019CheXpertAL by a large margin of 5%. On the independent test set, we obtain a mean AUC of 0.930. More importantly, the proposed deep learning model is on average more accurate than 2.6 out of 3 individual radiologists in predicting the 5 selected thoracic diseases when presented with the same data. (Our model, Hierarchical-Learning-V1, currently takes the first place in the CheXpert competition; more information can be found at https://stanfordmlgroup.github.io/competitions/chexpert/. Updated on January 12, 2021.)
The rest of the paper is organized as follows. Related works on CNNs in medical imaging and the problem of multi-label classification in CXR images are reviewed in Section 2. In Section 3, we present the details of the proposed method with a focus on how to deal with dependencies among diseases and uncertainty labels. Section 4 provides comprehensive experiments on the CheXpert dataset. Section 5 discusses the experimental results, some key findings and limitations of this research. Finally, Section 6 concludes the paper.
2 Related works
2.1 Deep learning in medical imaging
Recently, thanks to the increased availability of large scale, high-quality labeled datasets wang2017chestx ; Irvin2019CheXpertAL ; johnson2019mimic and high-performing deep network architectures he2016deep ; huang2017densely ; szegedy2017inception ; zoph2018learning , deep learning-based approaches have been able to reach, even outperform expert-level performance for many medical image interpretation tasks rajpurkar2017chexnet ; rajpurkar2018deep ; guan2018diagnose ; shen2019deep . Most successful applications of deep neural networks in medical imaging rely on CNNs, which were introduced in 1998 by LeCun et al. lecun1998gradient and revolutionized in 2012 by Krizhevsky et al. NIPS2012_4824 . State-of-the-art CNN models are rapidly becoming the standard for a wide range of applications in medical imaging such as detection, classification, and segmentation. They were applied successfully for lung cancer detection huang2017added , pulmonary tuberculosis detection lakhani2017deep , skin cancer classification esteva2017dermatologist and many others Litjens-et-al:2017 ; qin2018computer .
2.2 Multi-label classification of CXRs
Multi-label classification is a common setting in CXR interpretation in which each input sample can be associated with one or several labels zhang2013review ; tsoumakas2007multi . Due to its important role in medical imaging, a variety of approaches have been proposed in the literature. For instance, Rajpurkar et al. rajpurkar2017chexnet introduced CheXNet, a DenseNet-121 model that was trained on the ChestX-ray14 dataset wang2017chestx , which achieved state-of-the-art performance on the 14 disease classes and exceeded radiologist performance on pneumonia using the F1 metric. Rajpurkar et al. rajpurkar2018deep subsequently developed CheXNeXt, an improved version of CheXNet, whose performance is on par with radiologists on a total of 10 pathologies of ChestX-ray14. Another notable work based on ChestX-ray14 was by Kumar et al. kumar2018boosted , who presented a cascaded deep neural network to improve the performance of the multi-label classification task. Closely related to our paper is the work of Chen et al. Chen2019DeepHM , who proposed to use the conditional training strategy to exploit the hierarchy of lung abnormalities in the PLCO dataset GOHAGAN2000251S . In this method, a DenseNet-121 was first trained on a restricted subset of the data in which all parent nodes in the label hierarchy are positive, and then finetuned on the whole data.
Recently, the availability of very large-scale CXR datasets such as CheXpert Irvin2019CheXpertAL and MIMIC-CXR johnson2019mimic provides researchers with an ideal volume of data (224,316 scans in CheXpert and more than 350,000 in MIMIC-CXR) for developing better and more robust supervised learning algorithms. Both of these datasets were automatically labeled with 14 common findings by the same report-mining tool. Irvin et al. Irvin2019CheXpertAL proposed to train a 121-layer DenseNet on CheXpert with various policies for handling the uncertainty labels. In particular, uncertainty labels were either ignored (U-Ignore policy) or mapped to positive (U-Ones policy) or negative (U-Zeros policy). On average, this baseline model outperformed 1.8 out of 3 individual radiologists with an AUC of 0.907 when predicting 5 selected pathologies on a test set of 500 studies. In another work, Rubin et al. rubin2018large introduced DualNet, a pair of convolutional networks jointly trained on both the frontal and lateral CXRs of MIMIC-CXR. Experiments showed that DualNet improves the classification of findings in CXR images compared to separate baseline (i.e. frontal and lateral) classifiers.
2.3 Key differences
Standard multi-label learning methods (e.g. one-versus-all or one-versus-one zhang2013review ) treat all labels independently. In many clinical contexts, however, there are significant dependencies between disease labels, and lung and cardiovascular pathologies are no exception van2012relationship . We believe that such dependencies should be exploited in order to improve the performance of predictive models. In this paper, we adapt the conditional training approach of Chen2019DeepHM to extensively train a series of CNN architectures on the hierarchy of the 14 CheXpert pathologies Irvin2019CheXpertAL , which is totally different from that of PLCO GOHAGAN2000251S . Unlike previous studies, we also propose the use of label smoothing regularization (LSR) Muller2019WhenDL to leverage uncertainty labels, which, as our experiments will show, significantly improves on the uncertainty policies originally proposed in Irvin2019CheXpertAL .
3 Proposed Method
In this section, we present details of the proposed method. We first give a formulation of the multi-label classification for CXRs and the evaluation protocol used in this study (Section 3.1). We then describe a new training procedure that exploits the relationship among diseases for improving model performance (Section 3.2). This section also introduces the way we use the LSR to deal with uncertain samples in the training data (Section 3.3).
3.1 Problem formulation
Our focus in this paper is to develop and evaluate a deep learning-based approach that could learn from hundreds of thousands of CXR images and make accurate diagnoses of 14 common thoracic diseases from unseen samples. These 14 diseases include Enlarged Cardiomediastinum, Cardiomegaly, Lung Opacity, Lung Lesion, Edema, Consolidation, Pneumonia, Atelectasis, Pneumothorax, Pleural Effusion, Pleural Other, Fracture, Support Devices, and No Finding. In this multi-label learning scenario, we are given a training set $\mathcal{D}=\left\{\left(\textbf{x}^{(i)},\textbf{y}^{(i)}\right);i=1,\ldots,N\right\}$ that contains $N$ CXRs; each input image $\textbf{x}^{(i)}$ is associated with label $\textbf{y}^{(i)}\in\{0,1\}^{14}$, where 0 and 1 correspond to negative and positive, respectively. During the training stage, our goal is to train a CNN, parameterized by weights $\boldsymbol{\theta}$, that maps $\textbf{x}^{(i)}$ to a prediction $\hat{\textbf{y}}^{(i)}$ such that the cross-entropy loss function is minimized over the training set $\mathcal{D}$. Note that, instead of the softmax function, in the multi-label classification, the sigmoid activation function
$$\displaystyle\hat{y}_{k}=\dfrac{1}{1+\exp(-z_{k})},\quad k=1,\ldots,14,$$
(1)
is applied to the logits $z_{k}$ at the last layer of the CNN in order to output each of the 14 labels. The loss function is then given by
$$\displaystyle\ell(\boldsymbol{\theta})=-\sum_{i=1}^{N}\sum_{k=1}^{14}\left[y_{k}^{(i)}\log\hat{y}_{k}^{(i)}+\left(1-y_{k}^{(i)}\right)\log\left(1-\hat{y}_{k}^{(i)}\right)\right].$$
(2)
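The per-label sigmoid of Eq. (1) and the binary cross-entropy summed over the 14 labels can be sketched in a few lines of numpy. The shapes and toy values below are illustrative, not the actual training code; the loss is the negated log-likelihood, so minimizing it maximizes the likelihood of the labels:

```python
import numpy as np

# Minimal sketch of the multi-label objective: independent sigmoids over
# 14 logits and the binary cross-entropy summed over labels, averaged
# over the batch. Toy data only.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_bce(y_true, logits, eps=1e-12):
    """BCE loss; y_true in {0,1}, logits unbounded reals, shape (B, 14)."""
    y_hat = sigmoid(logits)
    ll = (y_true * np.log(y_hat + eps)
          + (1 - y_true) * np.log(1 - y_hat + eps))
    return -ll.sum(axis=1).mean()        # sum over labels, mean over batch

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=(8, 14)).astype(float)   # toy batch of 8 CXRs
z = rng.normal(size=(8, 14))                          # toy CNN logits
print(multilabel_bce(y, z))
```

With all-zero logits every label probability is 0.5, so the loss reduces to $14\ln 2\approx 9.70$ regardless of the labels, a quick sanity check on the implementation.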
A validation set $\mathcal{V}=\left\{\left(\textbf{x}^{(j)},\textbf{y}^{(j)}\right);j=1,\ldots,M\right\}$ containing $M$ CXRs, annotated by a panel of radiologists, is used to evaluate the effectiveness of the proposed method. More specifically, model performance is measured by the AUC scores over 5 diseases from the validation set of the CheXpert dataset Irvin2019CheXpertAL : Atelectasis, Cardiomegaly, Consolidation, Edema, and Pleural Effusion, which were selected based on clinical importance and prevalence. Figure 1 shows an illustration of the task we investigate in this paper.
3.2 Conditional training to learn dependencies among labels
In medical imaging, labels are often organized into hierarchies in the form of a tree or a directed acyclic graph (DAG). These hierarchies are constructed by domain experts, e.g. radiologists in the case of CXR data. Diagnoses or observations in CXRs are often conditioned upon their parent labels van2012relationship . This important fact should be leveraged during model training and prediction. Most existing CXR classification approaches, however, treat each label independently and do not take the label structure into account. This group of algorithms is known as flat classification methods alaydie2012exploiting . A flat learning model reveals some limitations when applied to hierarchical data, as it fails to model the dependencies between diseases. For example, from Figure 1, the Cardiomegaly label is positive only if its parent, Enlarged Cardiomediastinum, is positive too. Additionally, some labels at lower levels in the hierarchy, in particular at leaf nodes, have very few positive samples, which makes the flat learning model more vulnerable to overfitting.
Another group of algorithms called hierarchical multi-label classification methods has been proposed for leveraging the hierarchical relationships among labels in making predictions, which has been successfully exploited for text processing aly-etal-2019-hierarchical , visual recognition bi2012mandatory ; yan2015hd and genomic analysis bi2015bayes . One common approach is to train classifiers on conditional data with all parent-level labels being positive and then to finetune them with the whole dataset Chen2019DeepHM , which contains both the positive and negative samples.
We adapt the idea of Chen et al. Chen2019DeepHM to the lung disease hierarchy in Figure 1, which was initially introduced in Irvin2019CheXpertAL . Presuming the medical validity of the hierarchy, we break the training procedure into two steps. The first step, called conditional training, aims to learn the dependent relationships between parent and child labels and to concentrate on distinguishing lower-level labels, in particular the leaf labels. In this step, a CNN is pretrained on a partial training set containing all positive parent labels to classify the child labels; this procedure is illustrated in Figure 2.
In the second step, transfer learning will be exploited. Specifically, we freeze all the layers of the pretrained network except the last fully connected layer and then retrain it on the full dataset. This training stage aims at improving the capacity of the network in predicting parent-level labels, which could also be either positive or negative.
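The data selection for the first (conditional-training) step can be sketched as follows. The `PARENT` map below is an assumed fragment of the CheXpert label tree for illustration, not the authors' exact hierarchy definition:

```python
# Sketch of conditional-training data selection: keep only samples whose
# parent labels are all positive, so the network learns P(child | parent
# positive). PARENT is an assumed fragment of the CheXpert label tree.
PARENT = {
    "Cardiomegaly": "Enlarged Cardiomediastinum",
    "Edema": "Lung Opacity",
    "Consolidation": "Lung Opacity",
    "Atelectasis": "Lung Opacity",
    "Pneumonia": "Consolidation",
}

def ancestors(label):
    """Labels on the path from `label` (exclusive) up to the root."""
    out = []
    while label in PARENT:
        label = PARENT[label]
        out.append(label)
    return out

def conditional_subset(samples, leaf_labels):
    """Keep samples where every ancestor of every trained leaf label
    is positive (value 1)."""
    needed = sorted({a for leaf in leaf_labels for a in ancestors(leaf)})
    return [s for s in samples if all(s.get(a) == 1 for a in needed)]

samples = [
    {"Lung Opacity": 1, "Consolidation": 1, "Pneumonia": 1},
    {"Lung Opacity": 1, "Consolidation": 0, "Pneumonia": 0},
    {"Lung Opacity": 0, "Consolidation": 0, "Pneumonia": 0},
]
print(len(conditional_subset(samples, ["Pneumonia"])))  # → 1
```

In the second step described above, only the retraining target changes: the same network is finetuned on the full, unfiltered dataset with all but the last fully connected layer frozen.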
According to the above training strategy, the output of the network for each label can be viewed as the conditional probability that this label is positive given that its parent is positive. During the inference phase, however, all labels should be predicted unconditionally. Thus, as a simple application of the Bayes rule, the unconditional probability of each label being positive is computed by multiplying all conditional probabilities produced by the CNN along the path from the root node to the current label. For example, let $\mathcal{C}$ and $\mathcal{D}$ be disease labels at the leaf nodes of a tree $\mathcal{T}$ that also contains their parent labels $\mathcal{A}$ and $\mathcal{B}$, as drawn in Figure 3. Suppose the tuple of conditional predictions $\left(p(\mathcal{A}),p(\mathcal{B}|\mathcal{A}),p(\mathcal{C}|\mathcal{B}),p(\mathcal{D}|\mathcal{B})\right)$ is already provided by the network. Then, the unconditional predictions for the presence of $\mathcal{C}$ and $\mathcal{D}$ are computed as
$$\displaystyle p(\mathcal{C})$$
$$\displaystyle=p(\mathcal{A})p(\mathcal{B}|\mathcal{A})p(\mathcal{C}|\mathcal{B}),$$
(3)
$$\displaystyle p(\mathcal{D})$$
$$\displaystyle=p(\mathcal{A})p(\mathcal{B}|\mathcal{A})p(\mathcal{D}|\mathcal{B}).$$
(4)
It is important to note that this unconditional inference ensures that the probability of the presence of a child disease is always smaller than that of its parent, which is consistent with clinical taxonomies in practice.
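The chain-rule inference described above can be sketched as follows; the tree layout mirrors the $\mathcal{A}\to\mathcal{B}\to\{\mathcal{C},\mathcal{D}\}$ example in the text, and the probability values are made up for illustration:

```python
# Parent map for the tree in Figure 3: A -> B -> {C, D}.
PARENTS = {"A": None, "B": "A", "C": "B", "D": "B"}

def unconditional(cond_probs, label):
    """Multiply the network's conditional outputs along the path
    from the root to the given label (equations (3) and (4))."""
    p = 1.0
    while label is not None:
        p *= cond_probs[label]
        label = PARENTS[label]
    return p

# Illustrative conditional predictions (p(A), p(B|A), p(C|B), p(D|B)).
cond = {"A": 0.9, "B": 0.8, "C": 0.5, "D": 0.25}
print(round(unconditional(cond, "C"), 6))  # -> 0.36, i.e. 0.9 * 0.8 * 0.5
```

By construction each unconditional probability is bounded above by its parent's, matching the consistency property noted in the text.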
3.3 Leveraging uncertainty in CXRs with label smoothing regularization
Another challenging issue in multi-label classification of CXRs is that we may not have full access to the true labels of all images in the training dataset. Considerable effort has been devoted to creating large-scale CXR datasets with more reliable ground truth, such as CheXpert Irvin2019CheXpertAL and MIMIC-CXR johnson2019mimic . The labeling of these datasets, however, depends heavily on expert systems (i.e., keyword matching with hard-coded rules), which leave many CXR images with uncertain labels.
Several policies have been proposed in Irvin2019CheXpertAL to deal with these uncertain samples. For example, they can all be ignored (U-Ignore), all mapped to positive (U-Ones), or all mapped to negative (U-Zeros). While U-Ignore cannot make use of the whole dataset, both U-Ones and U-Zeros yielded only minimal improvements on CheXpert, as reported in Irvin2019CheXpertAL . This is because setting all uncertain labels to either 1 or 0 inevitably produces many wrong labels, which misguide the model training.
In this paper, we propose to apply a recent advance in machine learning called label smoothing regularization (LSR) szegedy2016rethinking ; Muller2019WhenDL ; pereyra2017regularizing for a better handling of uncertain samples. Our main goal is to exploit the large number of uncertain CXRs and, at the same time, to prevent the model from making overconfident predictions on training examples that might contain mislabeled data. Specifically, the U-Ones policy is softened by mapping each uncertainty label ($-1$) to a random number close to 1. The proposed U-Ones+LSR policy maps the original label $y_{k}^{(i)}$ to
$$\displaystyle\bar{y}_{k}^{(i)}=\begin{cases}u,&\text{if }y_{k}^{(i)}=-1\\
y_{k}^{(i)},&\text{otherwise},\end{cases}$$
(5)
where $u\sim U(a_{1},b_{1})$ is a uniformly distributed random variable between $a_{1}$ and $b_{1}$, the hyper-parameters of this policy. Similarly, we propose the U-Zeros+LSR policy, which softens U-Zeros by setting each uncertainty label to a random number $u\sim U(a_{0},b_{0})$ that is close to 0.
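A minimal sketch of the U-Ones+LSR mapping in equation (5), assuming NumPy and the interval $[0.55,\ 0.85]$ used later in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_uncertain(labels, low=0.55, high=0.85):
    """U-Ones+LSR: map each uncertain label (-1) to u ~ U(low, high);
    certain labels (0 or 1) pass through unchanged."""
    labels = labels.astype(float)          # copy; original is untouched
    mask = labels == -1
    labels[mask] = rng.uniform(low, high, size=mask.sum())
    return labels

y = np.array([1, 0, -1, -1, 1])
print(smooth_uncertain(y))                 # uncertain entries land in [0.55, 0.85]
```

U-Zeros+LSR is the same mapping with an interval near 0, e.g. `low=0.0, high=0.3`.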
4 Experiments
4.1 CXR dataset and settings
The CheXpert dataset Irvin2019CheXpertAL was used to develop and evaluate the proposed method. It is one of the largest and most challenging public CXR datasets currently available, containing 224,316 scans of 65,240 unique patients, labeled for the presence of 14 common chest radiographic observations. Each observation can be assigned a positive (1), negative (0), or uncertain (-1) label.
The main task on CheXpert is to predict the probability of multiple observations from an input CXR. The predictive models take as input a single-view CXR and output the probability of each of the 14 observations, as shown in Figure 1. The whole dataset is divided into a training set of 223,414 studies, a validation set of 200 studies, and a test set of 500 studies. In the validation set, each study is annotated by 3 board-certified radiologists, and the majority vote of these annotations serves as the ground-truth label; each study in the test set is labeled by the consensus of 5 board-certified radiologists. The authors of CheXpert proposed an evaluation protocol over 5 observations: Atelectasis, Cardiomegaly, Consolidation, Edema, and Pleural Effusion, selected based on their clinical importance and prevalence in the validation set. The effectiveness of the predictive models is measured by the AUC metric.
4.2 Data cleaning and normalization
The learning performance of deep neural networks on raw CXRs may be affected by irrelevant noisy areas, such as texts, or by irregular borders. Moreover, we observed a high proportion of CXRs with poor alignment. We therefore applied a series of preprocessing steps to reduce the effect of these irrelevant factors and to focus on the lung area. Specifically, all CXRs were first rescaled to $256\times 256$ pixels. A template matching algorithm brunelli2009template was then used to find the location of a template chest image ($224\times 224$ pixels) in each rescaled image. Finally, the matched crops were normalized using the mean and standard deviation of images from the ImageNet training set NIPS2012_4824 in order to reduce source-dependent variation.
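The matching step can be illustrated with a brute-force sum-of-squared-differences search. This is a simplified stand-in for the template matching algorithm of brunelli2009template (small toy sizes are used here, and the normalisation constants at the end are the commonly used ImageNet channel statistics, an assumption rather than values stated in the paper):

```python
import numpy as np

def best_match(image, template):
    """Exhaustive template matching: return the top-left position
    minimising the sum of squared differences."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            ssd = np.sum((image[i:i + h, j:j + w] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))            # toy stand-in for a 256x256 CXR
tmpl = img[5:29, 3:27].copy()              # a 24x24 crop acts as the template
print(best_match(img, tmpl))               # -> (5, 3)

crop = img[5:5 + 24, 3:3 + 24]
# Illustrative ImageNet-style normalisation of the matched crop
# (single channel; 0.485/0.229 are the usual ImageNet mean/std).
norm = (crop - 0.485) / 0.229
```

Production code would use an optimised matcher (e.g. normalized cross-correlation) rather than this quadratic loop.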
4.3 Network architecture and training methodology
We used DenseNet-121 huang2017densely as the baseline network architecture for verifying our hypotheses on the conditional training procedure (Section 3.2) and the effect of LSR (Section 3.3). In the training stage, all images were fed into the network at a standard size of $224\times 224$ pixels. The final fully connected layer is a 14-dimensional dense layer, followed by sigmoid activations applied to each output to obtain the predicted probabilities of the presence of the 14 pathology classes. We used the Adam optimizer kingma2014adam with default parameters $\beta_{1}=0.9$, $\beta_{2}=0.999$ and a batch size of 32 to find the optimal weights. The learning rate was initially set to $10^{-4}$ and reduced by a factor of 10 after each epoch. The network was initialized with a model pretrained on ImageNet NIPS2012_4824 and then trained for 5 epochs, equivalent to 50,000 iterations. The training objective was to minimize the binary cross-entropy loss between the ground-truth labels and the predicted probabilities over the training samples. The proposed network was implemented in Python using Keras with TensorFlow as the backend. All experiments were conducted on a Windows 10 machine with a single NVIDIA GeForce RTX 2080 Ti GPU with 11 GB of memory.
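The binary cross-entropy objective described above can be written in plain NumPy. This is a minimal stand-in for the framework's built-in loss; note that the soft targets produced by LSR fit into it directly:

```python
import numpy as np

def multilabel_bce(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over all (sample, label) pairs.
    Predictions are clipped away from 0 and 1 for numerical safety."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

# Two samples, two labels; 0.7 is a soft target as produced by LSR.
y_true = np.array([[1.0, 0.0], [0.7, 1.0]])
y_pred = np.array([[0.9, 0.2], [0.6, 0.8]])
print(round(multilabel_bce(y_true, y_pred), 4))
```

In the Keras implementation this corresponds to the standard `binary_crossentropy` loss averaged over the 14 sigmoid outputs.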
We conducted extensive ablation studies to verify the impact of the proposed conditional training (CT) procedure and LSR. Specifically, we first trained the baseline network independently with 3 label policies: U-Ignore, U-Ones, and U-Zeros. We then fixed the hyperparameter settings of these runs and performed the conditional training procedure on top of them, resulting in 3 further networks: U-Ignore+CT, U-Ones+CT, and U-Zeros+CT, respectively. Next, the LSR technique was applied to the two label policies U-Ones and U-Zeros. For U-Ones, all uncertainty labels were mapped to random numbers uniformly distributed in the interval $[0.55,\ 0.85]$; for U-Zeros, uncertain samples were labeled with random numbers in $[0,\ 0.3]$. Both intervals were empirically chosen. Finally, CT and LSR were combined with U-Ones and U-Zeros using the same hyperparameters, resulting in U-Ones+CT+LSR and U-Zeros+CT+LSR, respectively.
4.4 Model ensembling
In a multi-label classification setting, it is hard for a single CNN model to obtain high and consistent AUC scores across all disease labels; in fact, the AUC score for each label often varies with the choice of network architecture. To achieve a highly accurate classifier, an ensemble technique should therefore be explored. The key idea of ensembling is to exploit the diversity of a set of possibly weak classifiers that can be combined into a stronger one. To that end, we trained and evaluated a set of strong, state-of-the-art CNN models on CheXpert. The following architectures were investigated: DenseNet-121, DenseNet-169, DenseNet-201 huang2017densely , Inception-ResNet-v2 szegedy2017inception , Xception chollet2017xception , and NASNetLarge zoph2018learning . The ensemble model was obtained by simply averaging the outputs of all trained networks. In the inference stage, test-time augmentation (TTA) simonyan2014very was also applied. Specifically, for each test CXR, we applied a random transformation (among horizontal flipping, rotation by $\pm$7 degrees, scaling by $\pm$2%, and shearing by $\pm$5 pixels) 10 times and then averaged the outputs of the model on the 10 transformed samples to obtain the final prediction.
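The ensembling and TTA logic reduces to averaging predictions over models and augmented copies of the input. A toy sketch with stand-in "models" (fixed probability vectors) and an identity "augmentation", just to show the averaging:

```python
import numpy as np

def ensemble_predict(models, augment, x, n_tta=10):
    """Average predictions over all models and n_tta random
    augmentations of the input, as in the ensembling/TTA step."""
    preds = [m(augment(x)) for m in models for _ in range(n_tta)]
    return np.mean(preds, axis=0)

# Toy stand-ins: real usage would pass trained networks and a random
# flip/rotate/scale/shear transform in place of the identity.
models = [lambda x: np.array([0.8, 0.1]),
          lambda x: np.array([0.6, 0.3])]
identity = lambda x: x
print(ensemble_predict(models, identity, None, n_tta=2))
```

With probabilistic outputs, plain averaging keeps each ensembled score in $[0,1]$, so no renormalisation is needed.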
4.5 Quantitative results
Table 1 provides the AUC scores for all experimental settings on the CheXpert validation set. We found that the best performing DenseNet-121 model was the one trained with the U-Ones+CT+LSR policy, which obtained a mean AUC of 0.894 on the validation set, a 4% improvement over the baseline trained with the U-Ones policy (mean AUC = 0.860). The experimental results also show that both the proposed conditional training and LSR help boost model performance.
Our final model, which is an ensemble of six single models, achieved an average AUC of 0.940. As shown in Table 2, this score outperforms all previous state-of-the-art results. Figure 4 plots the ROC curves of the ensemble model for 5 pathologies on the validation set. Figure 5 illustrates some example predictions by the model during the inference stage.
4.6 Independent evaluation and comparison to radiologists
A crucial evaluation of any machine learning-based medical diagnosis system (ML-MDS) is how well it performs on an independent test set in comparison to human expert-level performance. To this end, we evaluated the proposed method on the hidden test set of CheXpert, which contains 500 CXRs labeled by 8 board-certified radiologists. The annotations of 3 of them were used to benchmark radiologist performance, and the majority vote of the other 5 served as the ground truth. For each of the 3 individual radiologists, the AUC scores for the 5 selected diseases (Atelectasis, Cardiomegaly, Consolidation, Edema, and Pleural Effusion) were computed against the ground truth to evaluate radiologist performance. We then evaluated our ensemble model on the test set and performed ROC analysis to compare the model to the radiologists. Specifically, the ROC curves produced by the prediction model and the three radiologists' operating points were plotted together. For each disease, whether the model is superior to the radiologists was determined by counting the number of radiologists' operating points lying below the ROC curve. (This test was conducted independently with the support of the Stanford Machine Learning Group, as the test set is not released to the public.) The result shows that our deep learning model, when averaged over the 5 diseases, outperforms 2.6 out of 3 radiologists with an AUC of 0.930, the best performance on the CheXpert leaderboard to date. The attained AUC score validates the generalization capability of the trained model on an unseen dataset, while the number of radiologist operating points lying below the ROC curves indicates that the proposed method reaches human expert-level performance, an important step towards the application of an ML-MDS in real-world scenarios.
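The per-disease AUC used throughout can be computed with the Mann-Whitney rank formulation: the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A self-contained sketch on toy labels and scores (not the CheXpert data):

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: fraction of
    (positive, negative) pairs ranked correctly, with ties
    counted as half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([1, 1, 0, 0, 1])
s = np.array([0.9, 0.4, 0.35, 0.8, 0.7])
print(auc(y, s))   # 4 of the 6 positive/negative pairs ranked correctly
```

This pairwise form is quadratic in the number of samples; library implementations sort once and use ranks, but give the same value.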
5 Discussions
5.1 Key findings and meaning
By training a set of strong CNNs on a large-scale dataset, we built a deep learning model that accurately predicts multiple thoracic diseases from CXRs. In particular, we empirically showed a major improvement in AUC score from exploiting the dependencies among diseases and from applying the label smoothing technique to uncertain samples. We found it especially difficult to obtain good AUC scores for all diseases with a single CNN. We also observed that classification performance varies with the network architecture, the ratio of positive to negative samples, and the visual features of the lung disease being detected.
In this case, an ensemble of multiple deep learning models plays a key role in boosting the generalization of the final model and its performance. Our findings, along with recent publications rajpurkar2017chexnet ; guan2018diagnose ; rajpurkar2018deep ; kumar2018boosted , continue to assert that deep learning algorithms can accurately identify the risk of many thoracic diseases and can assist in patient screening, diagnosis, and physician training.
5.2 Limitations
Although highly accurate performance has been achieved, we acknowledge that the proposed method has some limitations. First, the deep learning algorithm was trained and evaluated on CXR data collected from a single tertiary care academic institution; it may therefore not yield the same level of performance on data from other sources, such as other institutions with different scanners, a phenomenon known as geographic variation. To overcome this, the learning algorithm should be trained on data that are diverse in terms of regions, races, imaging protocols, etc. Second, to make a diagnosis from a CXR, doctors often rely on a broad range of additional data such as patient age, gender, medical history, clinical symptoms, and possibly CXRs from different views; this additional information should also be incorporated into the model training. Third, a finer resolution such as $512\times 512$ or $1024\times 1024$ could be beneficial for detecting diseases with small and complex structures on CXRs; this investigation, however, requires much more computational power for training and inference. Fourth, CXR image quality is another problem. Taking a deeper look at CheXpert, we found a considerable proportion of low-quality samples (e.g., rotated images, low-resolution images, samples with texts or noise) that certainly hurt model performance. In this case, the template matching-based method proposed in this work may be insufficient to remove all undesired samples; a more robust preprocessing technique, such as that proposed in ccalli2019frodo , should be applied to reject almost all out-of-distribution samples.
6 Conclusion
We presented in this paper a comprehensive approach for building a high-precision computer-aided diagnosis system for common thoracic diseases classification from CXRs. We investigated almost every aspect of the task including data cleaning, network design, training, and ensembling. In particular, we introduced a new training procedure in which dependencies among diseases and uncertainty labels are effectively exploited and integrated in training advanced CNNs. Extensive experiments demonstrated that the proposed method outperforms the previous state-of-the-art by a large margin on the CheXpert dataset. More importantly, our deep learning algorithm exhibited a performance on par with specialists in an independent test. There are several possible mechanisms to improve the current method. The most promising direction is to increase the size and quality of the dataset. A larger and high-quality labeled dataset can help deep neural networks generalize better and reduce the need for transfer learning from ImageNet. For instance, extra training data from MIMIC-CXR johnson2019mimic , which uses the same labeling tool as CheXpert, should be considered. We are currently expanding this research by collecting a new large-scale CXR dataset with radiologist-labeled reference from several hospitals and medical centers in Vietnam. The new dataset is needed to validate the proposed method and to confirm its usefulness in different clinical settings.
We believe the cooperation between a machine learning-based medical diagnosis system and radiologists will improve the outcomes of thoracic disease diagnosis and bring benefits to clinicians and their patients.
7 Acknowledgements
This research was supported by the Vingroup Big Data Institute (VinBDI). The authors gratefully acknowledge Jeremy Irvin from the Machine Learning Group, Stanford University for helping us evaluate the proposed method on the hidden test set of CheXpert.
References
(1)
NHS, NHS England: Diagnostic imaging dataset statistical release. February
2019, https://www.england.nhs.uk/, (accessed 30 July 2019).
(2)
C. Qin, D. Yao, Y. Shi, Z. Song, Computer-aided detection in chest radiography
based on artificial intelligence: A survey, Biomedical Engineering Online
17 (1) (2018) 113.
doi:https://doi.org/10.1186/s12938-018-0544-y.
(3)
L. Delrue, R. Gosselin, B. Ilsen, A. Van Landeghem, J. de Mey, P. Duyck,
Difficulties in the interpretation of chest radiography, in: Comparative
Interpretation of CT and Standard Radiography of the Chest, Springer, 2011,
pp. 27–49.
doi:https://doi.org/10.1007/978-3-540-79942-9_2.
(4)
N. Crisp, L. Chen, Global supply of health professionals, New England Journal
of Medicine 370 (10) (2014) 950–957.
doi:https://doi.org/10.1056/NEJMra1111610.
(5)
The Atlantic, Most of the world doesn’t have access to X-rays,
https://www.theatlantic.com/health/archive/2016/09/radiology-gap/501803/,
(accessed 30 July 2019).
(6)
M. Annarumma, S. J. Withey, R. J. Bakewell, E. Pesce, V. Goh, G. Montana,
Automated triaging of adult chest radiographs with deep artificial neural
networks, Radiology 291 (1) (2019) 196–202.
doi:https://doi.org/10.1148/radiol.2018180921.
(7)
G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian,
J. A. W. M. van der Laak, B. van Ginneken, C. I. Sánchez, A survey on
deep learning in medical image analysis, Medical Image Analysis 42 (2017)
60–88.
doi:https://doi.org/10.1016/j.media.2017.07.005.
(8)
A. K. Jaiswal, P. Tiwari, S. Kumar, D. Gupta, A. Khanna, J. J. Rodrigues,
Identifying pneumonia in chest X-rays: A deep learning approach,
Measurement 145 (2019) 511–518.
doi:https://doi.org/10.1016/j.measurement.2019.05.076.
(9)
P. Lakhani, B. Sundaram, Deep learning at chest radiography: Automated
classification of pulmonary tuberculosis by using convolutional neural
networks, Radiology 284 (2) (2017) 574–582.
doi:https://doi.org/10.1148/radiol.2017162326.
(10)
F. Pasa, V. Golkov, F. Pfeiffer, D. Cremers, D. Pfeiffer, Efficient deep
network architectures for fast chest X-ray tuberculosis screening and
visualization, Scientific reports 9 (1) (2019) 6268.
doi:https://doi.org/10.1038/s41598-019-42557-4.
(11)
W. Ausawalaithong, A. Thirach, S. Marukatat, T. Wilaiprasitporn, Automatic lung
cancer prediction from chest X-ray images using the deep learning approach,
in: BMEiCON, IEEE, 2018, pp. 1–5.
doi:https://doi.org/10.1109/bmeicon.2018.8609997.
(12)
H. Chen, S. Miao, D. Xu, G. D. Hager, A. P. Harrison, Deep hierarchical
multi-label classification of chest X-ray images, in: MIDL, 2019, pp.
109–120.
(13)
R. J. Muller, S. Kornblith, G. E. Hinton, When does label smoothing help?,
arXiv preprint arXiv:1906.02629.
(14)
J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund,
B. Haghgoo, R. L. Ball, K. Shpanskaya, J. Seekins, D. A. Mong, S. S. Halabi,
J. K. Sandberg, R. Jones, D. B. Larson, C. P. Langlotz, B. N. Patel, M. P.
Lungren, A. Y. Ng, CheXpert: A large chest radiograph dataset with
uncertainty labels and expert comparison, in: AAAI, 2019.
doi:https://doi.org/10.1609/aaai.v33i01.3301590.
(15)
X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, R. M. Summers, ChestX-ray8:
Hospital-scale chest X-ray database and benchmarks on weakly-supervised
classification and localization of common thorax diseases, in: IEEE CVPR,
2017, pp. 2097–2106.
doi:https://doi.org/10.1109/CVPR.2017.369.
(16)
A. E. Johnson, T. J. Pollard, S. Berkowitz, N. R. Greenbaum, M. P. Lungren,
C.-y. Deng, R. G. Mark, S. Horng, MIMIC-CXR: A large publicly available
database of labeled chest radiographs, arXiv preprint arXiv:1901.07042.
(17)
K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition,
in: IEEE CVPR, 2016, pp. 770–778.
doi:https://doi.org/10.1109/CVPR.2016.90.
(18)
G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected
convolutional networks, in: IEEE CVPR, 2017, pp. 4700–4708.
doi:https://doi.org/10.1109/CVPR.2017.243.
(19)
C. Szegedy, S. Ioffe, V. Vanhoucke, A. A. Alemi,
Inception-v4,
Inception-ResNet and the impact of residual connections on learning, in:
AAAI, 2017.
URL http://dl.acm.org/citation.cfm?id=3298023.3298188
(20)
B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, Learning transferable architectures
for scalable image recognition, in: IEEE CVPR, 2018, pp. 8697–8710.
doi:https://doi.org/10.1109/CVPR.2018.00907.
(21)
P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul,
C. Langlotz, K. Shpanskaya, et al., CheXNet: Radiologist-level pneumonia
detection on chest X-rays with deep learning, arXiv preprint
arXiv:1711.05225.
(22)
P. Rajpurkar, J. Irvin, R. L. Ball, K. Zhu, B. Yang, H. Mehta, T. Duan,
D. Ding, A. Bagul, C. P. Langlotz, et al., Deep learning for chest
radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm
to practicing radiologists, PLoS Medicine 15 (11) (2018) e1002686.
doi:https://doi.org/10.1371/journal.pmed.1002686.
(23)
Q. Guan, Y. Huang, Z. Zhong, Z. Zheng, L. Zheng, Y. Yang, Diagnose like a
radiologist: Attention guided convolutional neural network for thorax
disease classification, arXiv preprint arXiv:1801.09927.
(24)
L. Shen, L. R. Margolies, J. H. Rothstein, E. Fluder, R. McBride, W. Sieh,
Deep learning to improve breast cancer detection on screening mammography,
Scientific Reports 9.
doi:https://doi.org/10.1038/s41598-019-48995-4.
(25)
Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al., Gradient-based learning
applied to document recognition, Proceedings of the IEEE 86 (11) (1998)
2278–2324.
doi:https://doi.org/10.1109/5.726791.
(26)
A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep
convolutional neural networks, in: F. Pereira, C. J. C. Burges, L. Bottou,
K. Q. Weinberger (Eds.), NIPS, 2012, pp. 1097–1105.
(27)
P. Huang, S. Park, R. Yan, J. Lee, L. C. Chu, C. T. Lin, A. Hussien,
J. Rathmell, B. Thomas, C. Chen, et al., Added value of computer-aided CT
image features for early lung cancer diagnosis with small pulmonary nodules:
A matched case-control study, Radiology 286 (1) (2017) 286–295.
doi:https://doi.org/10.1148/radiol.2017162725.
(28)
A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, S. Thrun,
Dermatologist-level classification of skin cancer with deep neural networks,
Nature 542 (7639) (2017) 115.
doi:https://doi.org/10.1038/nature21056.
(29)
M.-L. Zhang, Z.-H. Zhou, A review on multi-label learning algorithms, IEEE
Transactions on Knowledge and Data Engineering 26 (8) (2013) 1819–1837.
doi:https://doi.org/10.1109/TKDE.2013.39.
(30)
G. Tsoumakas, I. Katakis, Multi-label classification: An overview,
International Journal of Data Warehousing and Mining 3 (3) (2007) 1–13.
doi:https://doi.org/10.4018/jdwm.2007070101.
(31)
P. Kumar, M. Grewal, M. M. Srivastava, Boosted cascaded ConvNets for
multilabel classification of thoracic diseases in chest radiographs, in:
ICIAR, 2018, pp. 546–552.
doi:https://doi.org/10.1007/978-3-319-93000-8_62.
(32)
J. K. Gohagan, P. C. Prorok, R. B. Hayes, B.-S. Kramer, The prostate, lung,
colorectal and ovarian (plco) cancer screening trial of the national cancer
institute: History, organization, and status, Controlled Clinical Trials
21 (6, Supplement 1) (2000) 251S – 272S.
doi:https://doi.org/10.1016/S0197-2456(00)00097-0.
(33)
J. Rubin, D. Sanghavi, C. Zhao, K. Lee, A. Qadir, M. Xu-Wilson, Large scale
automated reading of frontal and lateral chest X-rays using dual
convolutional neural networks, arXiv preprint arXiv:1804.07839.
(34)
S. Van Eeden, J. Leipsic, S. Paul Man, D. D. Sin, The relationship between lung
inflammation and cardiovascular disease, American Journal of Respiratory and
Critical Care Medicine 186 (1) (2012) 11–16.
doi:https://doi.org/10.1164/rccm.201203-0455PP.
(35)
N. Alaydie, C. K. Reddy, F. Fotouhi, Exploiting label dependency for
hierarchical multi-label classification, in: PAKDD, Springer, 2012, pp.
294–305.
doi:https://doi.org/10.1007/978-3-642-30217-6_25.
(36)
R. Aly, S. Remus, C. Biemann, Hierarchical multi-label classification of text
with capsule networks, in: Proceedings of the 57th Annual Meeting of the
Association for Computational Linguistics: Student Research Workshop,
Association for Computational Linguistics, 2019, pp. 323–330.
doi:http://dx.doi.org/10.18653/v1/P19-2045.
(37)
W. Bi, J. T. Kwok, Mandatory leaf node prediction in hierarchical multilabel
classification, in: NIPS, 2012, pp. 153–161.
doi:https://doi.org/10.1109/tnnls.2014.2309437.
(38)
Z. Yan, H. Zhang, R. Piramuthu, V. Jagadeesh, D. DeCoste, W. Di, Y. Yu,
HD-CNN: Hierarchical deep convolutional neural networks for large scale
visual recognition, in: IEEE ICCV, 2015, pp. 2740–2748.
doi:https://doi.org/10.1109/ICCV.2015.314.
(39)
W. Bi, J. T. Kwok, Bayes-optimal hierarchical multilabel classification, IEEE
Transactions on Knowledge and Data Engineering 27 (11) (2015) 2907–2918.
doi:https://doi.org/10.1109/TKDE.2015.2441707.
(40)
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the
Inception architecture for computer vision, in: IEEE CVPR, 2016, pp.
2818–2826.
doi:https://doi.org/10.1109/CVPR.2016.308.
(41)
G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, G. Hinton, Regularizing
neural networks by penalizing confident output distributions, arXiv preprint
arXiv:1701.06548.
(42)
R. Brunelli, Template Matching Techniques in Computer Vision: Theory and
Practice, Wiley Publishing, ISBN: 978-0-470-51706-2, 2009.
(43)
D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv
preprint arXiv:1412.6980.
(44)
F. Chollet, Xception: Deep learning with depthwise separable convolutions,
in: IEEE CVPR, 2017, pp. 1251–1258.
doi:https://doi.org/10.1109/CVPR.2017.195.
(45)
K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale
image recognition, arXiv preprint arXiv:1409.1556.
(46)
I. Allaouzi, M. B. Ahmed, A novel approach for multi-label chest X-ray
classification of common thorax diseases, IEEE Access 7 (2019) 64279–64288.
doi:https://doi.org/10.1109/ACCESS.2019.2916849.
(47)
E. Çalli, K. Murphy, E. Sogancioglu, B. van Ginneken, FRODO: Free rejection
of out-of-distribution samples, application to chest X-ray analysis, arXiv
preprint arXiv:1907.01253.
Importance of magnetic fields in highly eccentric discs with applications to tidal disruption events
Elliot M. Lynch and Gordon I. Ogilvie
Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Centre for Mathematical Sciences,
Wilberforce Road, Cambridge CB3 0WA, UK
E-mail: eml52@cam.ac.uk
(Accepted XXX. Received YYY; in original form ZZZ)
Abstract
Whether tidal disruption events (TDEs) circularise or accrete directly as a highly eccentric disc is the subject of current research and appears to depend sensitively on the disc thermodynamics. In a previous paper we applied the theory of eccentric discs to TDE discs using an $\alpha$-prescription for the disc stress, which leads to solutions exhibiting extreme, potentially unphysical, behaviour. In this paper we further explore the dynamical vertical structure of highly eccentric discs using alternative stress models that are better motivated by the behaviour of magnetic fields in eccentric discs. We find that the presence of a coherent magnetic field has a stabilising effect on the dynamics and can significantly alter the behaviour of highly eccentric, radiation dominated discs. We conclude that magnetic fields are important for the evolution of TDE discs.
keywords:
accretion, accretion discs – hydrodynamics – black hole physics – MHD
††pubyear: 2019††pagerange: Importance of magnetic fields in highly eccentric discs with applications to tidal disruption events–E
1 Introduction
Tidal disruption events (TDEs) are transient phenomena in which an object on a nearly parabolic orbit passes within the tidal radius and is disrupted by tidal forces, typically a star being disrupted by a supermassive black hole (SMBH). Bound material from the disruption forms a highly eccentric disc, which in the classic TDE model of Rees (1988) is rapidly circularised as the material returns to pericentre. It has, however, been proposed that circularisation in TDEs may be inefficient, resulting in the disc remaining highly eccentric (Guillochon et al., 2014; Piran et al., 2015; Krolik et al., 2016; Svirski et al., 2017). In Lynch & Ogilvie (2020) (henceforth Paper I) we presented a hydrodynamical model of these highly eccentric discs applied to TDEs where circularisation is inefficient.
Two issues were highlighted in Paper I. The first was the confirmation that radiation pressure dominated discs are thermally unstable when the viscous stress scales with the total pressure in highly eccentric discs, a result long known for circular discs (Shakura & Sunyaev, 1976; Pringle, 1976; Piran, 1978). Circular radiation pressure dominated discs can be stabilised by assuming that the stress scales with the gas pressure (Meier, 1979; Sakimoto & Coroniti, 1981). For highly eccentric discs it appears that the thermal instability is still present when the stress scales with the gas pressure; however, there exists a stable radiation pressure dominated branch which is the outcome of the thermal instability. For typical TDE parameters, this branch is very hot and often violates the thin-disc assumptions.
The second issue was the extreme behaviour that can occur during pericentre passage. For the radiation pressure dominated disc where stress scales with gas pressure the solution is nearly adiabatic and undergoes extreme compression near pericentre. In models where the viscous stresses contribute to the dynamics we typically found that the vertical viscous stress is comparable to or exceeds the (total) pressure, which is possibly problematic for the $\alpha$-model as it would indicate transonic turbulence. In some of the solutions the vertical viscous stress can exceed the total pressure by an order of magnitude, strongly violating the assumptions of the $\alpha$-model.
In this paper we focus on the second of these two issues by considering alternative turbulent stress models, better motivated by the physics of the underlying magnetic field, to see whether this resolves some of the extreme behaviour seen in the $\alpha$-models. We will also examine whether alternative stress models are more thermally stable than the $\alpha$-model, although it is possible that the solution to this issue lies outside the scope of a thin-disc model.
Two additional physical effects, not present in an $\alpha$-model, may be important for regulating the extreme behaviour at pericentre. One is the finite response time of the magnetorotational instability (MRI) (see for instance the viscoelastic model of Ogilvie (2001) and discussion therein) which means the viscous stress cannot respond instantly to the rapid increase in pressure and velocity gradients during pericentre passage, potentially weakening the viscous stresses so they no longer exceed the pressure. Another is the relative incompressibility of the magnetic field, compared with the radiation or the gas, with the magnetic pressure providing additional support during pericentre passage which could prevent the extreme compression seen in some models.
Various attempts have been made to rectify some of the deficiencies of the $\alpha$-prescription using alternative closure models for the turbulent stress. Ogilvie (2000, 2001) proposed a viscoelastic model for the dyadic part of the Maxwell stress (i.e. the contribution from magnetic tension, $\frac{B^{i}B^{j}}{\mu_{0}}$) to account for the finite response time of the MRI. It was shown in Ogilvie & Proctor (2003) that for incompressible fluids there is an exact asymptotic correspondence between MHD in the limit of large magnetic Reynolds number and viscoelastic fluids (specifically an Oldroyd-B fluid) in the limit of large relaxation time. Ogilvie (2002) improved upon the compressible viscoelastic model of Ogilvie (2000, 2001) by including an isotropic part of the stress to model the effects of magnetic pressure and by correcting the heating rate so that total energy is conserved. Ogilvie (2003) proposed solving for both the Maxwell and Reynolds stresses and suggested a nonlinear closure model based on requiring the turbulent stresses to exhibit certain properties (such as positive definiteness and relaxation towards equipartition and isotropy) known from simulations and experiments.
Simulations of MRI in circular discs typically find that the magnetic pressure tends to saturate at about 10% of the gas pressure. However, in the local linear stability analysis of Pessah & Psaltis (2005) the toroidal magnetic field only stabilises the MRI when it is highly suprathermal (specifically when the Alfvén speed is greater than the geometric mean of the sound speed and the Keplerian speed). Das et al. (2018) confirmed that this result persists in a global linear eigenmode calculation. In light of this, Begelman & Pringle (2007) have suggested that, for a strongly magnetised disc, the viscous stress may scale with the magnetic pressure, and showed that such a disc would be thermally stable even when radiation pressure dominates over gas pressure. Such a disc was simulated by Sądowski (2016), who indeed found thermal stability.
Throughout this paper we will make use of certain conventions from tensor calculus, such as the Einstein summation convention and the distinction between covariant and contravariant indices, along with the notation for symmetrising indices,
$$X^{(ij)}:=\frac{1}{2}\left(X^{ij}+X^{ji}\right).$$
(1)
This paper is structured as follows. In Section 2 we discuss the geometry of eccentric discs and restate the coordinate system of Ogilvie & Lynch (2019). In Section 3 we derive the equations for the dynamical vertical structure, including the effects of a Maxwell stress, in this coordinate system. In Section 4 we consider a model with an $\alpha$-viscosity and a coherent magnetic field which obeys the ideal induction equation. In Section 5 we consider a nonlinear constitutive model for the magnetic field. In Section 6 we discuss the stability of our solutions (Section 6.1) and the possibility of dynamo action in the disc (Section 6.2). We present our conclusions in Section 7; additional mathematical details are given in the appendices.
2 Orbital Coordinates
As in Paper I, we assume that the dominant motion in a TDE disc consists of elliptical Keplerian orbits, subject to relatively weak perturbations from relativistic precessional effects, pressure and Maxwell stresses. This model is unlikely to be applicable to TDEs where the tidal radius is comparable to the gravitational radius, owing to the strong relativistic precession.
Let $(r,\phi)$ be polar coordinates in the disc plane. The polar equation for an elliptical Keplerian orbit of semimajor axis $a$, eccentricity $e$ and longitude of periapsis $\varpi$ is
$$r=\frac{a(1-e^{2})}{1+e\cos f}\quad,$$
(2)
where $f=\phi-\varpi$ is the true anomaly. A planar eccentric disc involves a continuous set of nested elliptical orbits. The shape of the disc can be described by considering $e$ and $\varpi$ to be functions of $a$. The derivatives of these functions are written as $e_{a}$ and $\varpi_{a}$, which can be thought of as the eccentricity gradient and the twist, respectively. The disc evolution is then described by the slow variation in time of the orbital elements $e$ and $\varpi$ due to secular forces such as pressure gradients in the disc and departures from the gravitational field of a Newtonian point mass.
In this work we adopt the (semimajor axis $a$, eccentric anomaly $E$) orbital coordinate system described in Ogilvie & Lynch (2019). The eccentric anomaly is related to the true anomaly by
$$\cos f=\frac{\cos E-e}{1-e\cos E},\quad\sin f=\frac{\sqrt{1-e^{2}}\sin E}{1-e\cos E}$$
(3)
and the radius can be written as
$$r=a(1-e\cos E)\quad.$$
(4)
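As an illustrative cross-check (a minimal Python/NumPy sketch; the function name is ours), equations (3) and (4) can be evaluated together and compared against the polar equation (2):

```python
import numpy as np

def orbit_from_eccentric_anomaly(a, e, E):
    """Radius and true anomaly on a Keplerian ellipse from the
    eccentric anomaly E, using equations (3) and (4)."""
    r = a * (1.0 - e * np.cos(E))                                # eq. (4)
    cos_f = (np.cos(E) - e) / (1.0 - e * np.cos(E))              # eq. (3)
    sin_f = np.sqrt(1.0 - e**2) * np.sin(E) / (1.0 - e * np.cos(E))
    return r, np.arctan2(sin_f, cos_f)
```

The returned radius agrees with the polar form $r=a(1-e^{2})/(1+e\cos f)$, and $E=0$ recovers the pericentre $r=a(1-e)$, $f=0$.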
The area element in the orbital plane is given by $dA=(an/2)J\,da\,dE$ where $J$ is given by
$$J=\frac{2}{n}\left[\frac{1-e(e+ae_{a})}{\sqrt{1-e^{2}}}-\frac{ae_{a}}{\sqrt{1-e^{2}}}\cos E-ae\varpi_{a}\sin E\right],$$
(5)
which corresponds to the Jacobian of the $(\Lambda,\lambda)$ coordinate system of Ogilvie & Lynch (2019). Here $n=\sqrt{\frac{GM_{\bullet}}{a^{3}}}$ is the mean motion, with $M_{\bullet}$ the mass of the black hole. The Jacobian can be written in terms of the orbital intersection parameter $q$ of Ogilvie & Lynch (2019):
$$J=(2/n)\frac{1-e(e+ae_{a})}{\sqrt{1-e^{2}}}(1-q\cos(E-E_{0}))$$
(6)
where $q$ is given by
$$q^{2}=\frac{(ae_{a})^{2}+(1-e^{2})(ae\varpi_{a})^{2}}{[1-e(e+ae_{a})]^{2}}\quad,$$
(7)
and we require $|q|<1$ to avoid an orbital intersection (Ogilvie & Lynch, 2019). The angle $E_{0}$, which determines the location of maximum horizontal compression around the orbit, is determined by the relative contributions of the eccentricity gradient and twist to $q$:
$$\frac{ae_{a}}{1-e(e+ae_{a})}=q\cos E_{0}\quad.$$
(8)
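The equivalence of the two forms (5) and (6) of the Jacobian, given the definitions (7) and (8), can be verified numerically; the following Python sketch (function names and parameter values are ours, and we assume $1-e(e+ae_{a})>0$) does so:

```python
import numpy as np

def jacobian_direct(e, aea, aewa, E, n=1.0):
    """Equation (5); aea = a*e_a (eccentricity gradient),
    aewa = a*e*varpi_a (twist)."""
    s = np.sqrt(1.0 - e**2)
    return (2.0 / n) * ((1.0 - e * (e + aea)) / s
                        - (aea / s) * np.cos(E) - aewa * np.sin(E))

def q_E0(e, aea, aewa):
    """Equations (7)-(8): intersection parameter q and phase E0
    (assuming 1 - e(e + ae_a) > 0 so q >= 0)."""
    denom = 1.0 - e * (e + aea)
    q = np.sqrt(aea**2 + (1.0 - e**2) * aewa**2) / denom
    E0 = np.arctan2(np.sqrt(1.0 - e**2) * aewa, aea)
    return q, E0

def jacobian_q_form(e, aea, aewa, E, n=1.0):
    """Equation (6)."""
    q, E0 = q_E0(e, aea, aewa)
    return (2.0 / n) * (1.0 - e * (e + aea)) / np.sqrt(1.0 - e**2) \
        * (1.0 - q * np.cos(E - E0))
```

Expanding $q\cos(E-E_{0})$ with equations (7)-(8) reproduces the $\cos E$ and $\sin E$ coefficients of equation (5) exactly.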
Additionally it can be useful to rewrite time derivatives, following the orbital motion, in terms of the eccentric anomaly:
$$\frac{\partial}{\partial t}=\frac{n}{(1-e\cos E)}\frac{\partial}{\partial E}\quad.$$
(9)
3 Derivation of the equations of vertical structure, including thermal effects
A local model of a thin, Keplerian eccentric disc was developed in Ogilvie & Barker (2014). In Paper I we developed a purely hydrodynamic model in the $(a,E)$ coordinate system of Ogilvie & Lynch (2019), which included an $\alpha$-viscosity prescription along with radiative cooling and allowed for contributions to the pressure from both radiation and the gas. In a similar vein we here develop a local model that allows for a more general treatment of the turbulent/magnetic stress.
The equations, formulated in a frame of reference that follows the elliptical orbital motion, are the vertical equation of motion,
$$\frac{Dv_{z}}{Dt}=-\frac{GM_{\bullet}z}{r^{3}}-\frac{1}{\rho}\frac{\partial}{\partial z}\left(p+\frac{1}{2}M-M_{zz}\right),$$
(10)
the continuity equation,
$$\frac{D\rho}{Dt}=-\rho\left(\Delta+\frac{\partial v_{z}}{\partial z}\right),$$
(11)
and the thermal energy equation,
$$\frac{Dp}{Dt}=-\Gamma_{1}p\left(\Delta+\frac{\partial v_{z}}{\partial z}\right)+(\Gamma_{3}-1)\left(\mathcal{H}-\frac{\partial F}{\partial z}\right),$$
(12)
where, for horizontally invariant “laminar flows”,
$$\frac{D}{Dt}=\frac{\partial}{\partial t}+v_{z}\frac{\partial}{\partial z}$$
(13)
is the Lagrangian time-derivative,
$$\Delta=\frac{1}{J}\frac{dJ}{dt}$$
(14)
is the divergence of the orbital velocity field, which is a known function of $E$ that depends on $e$, $q$ and $E_{0}$. $F=F_{\rm rad}+F_{\rm ext}$ is the total vertical heat flux, with
$$F_{\rm rad}=-\frac{16\sigma T^{3}}{3\kappa\rho}\frac{\partial T}{\partial z}$$
(15)
being the vertical radiative heat flux and $F_{\rm ext}$ containing any additional contributions to the heat flux (such as from convection or turbulent heat transport). The tensor
$$M^{ij}:=\frac{B^{i}B^{j}}{\mu_{0}}$$
(16)
is the part of the Maxwell stress tensor arising from magnetic tension. This can include contributions from a large scale mean field and from the disc turbulence. Its trace is denoted $M=M^{i}_{\,\,\,i}$, which corresponds to twice the magnetic pressure. In this paper we shall explore two different closure models for the time-evolution of $M^{ij}$.
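For instance, the tension part of the Maxwell stress and its trace can be formed as follows (a small NumPy sketch with an arbitrary field vector; SI units assumed):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability (SI)

def maxwell_tension_stress(B):
    """Equation (16): M^ij = B^i B^j / mu0 for a magnetic field vector B."""
    return np.outer(B, B) / MU0

B = np.array([0.0, 1.0e-3, 2.0e-3])     # illustrative field (tesla)
M = maxwell_tension_stress(B)
p_mag = np.dot(B, B) / (2.0 * MU0)      # magnetic pressure
# the trace M^i_i equals twice the magnetic pressure
```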
Following Paper I, we write the heating rate per unit volume, resulting from the dissipation of magnetic/turbulent energy, as
$$\mathcal{H}=f_{\mathcal{H}}np_{v},$$
(17)
where $f_{\mathcal{H}}$ is a dimensionless expression that depends on the closure model and $p_{v}$ is a pressure to be specified in the Maxwell stress closure model.
In addition to the magnetic pressure, which is included through the $\frac{1}{2}M$ term in equation (10), the pressure includes contributions from radiation and a perfect gas with a ratio of specific heats $\gamma$. We define the hydrodynamic pressure to be the sum of the gas and radiation pressure,
$$p=p_{r}+p_{g}=\frac{4\sigma}{3c}T^{4}+\frac{\mathcal{R}\rho T}{\mu},$$
(18)
and $\beta_{r}$ to be the ratio of radiation to gas pressure:
$$\beta_{r}:=\frac{p_{r}}{p_{g}}=\frac{4\sigma\mu}{3c\mathcal{R}}\frac{T^{3}}{\rho}\quad.$$
(19)
We assume a constant opacity law, applicable to the electron-scattering opacity expected in a TDE, with the opacity denoted by $\kappa$.
We consider a radiation+gas mixture where $F_{\rm ext}$ is assumed to be from convective or turbulent mixing and the first and third adiabatic exponents are given by (Chandrasekhar, 1967)
$$\Gamma_{1}=\frac{1+12(\gamma-1)\beta_{r}+(1+4\beta_{r})^{2}(\gamma-1)}{(1+\beta_{r})(1+12(\gamma-1)\beta_{r})}\quad,$$
(20)
$$\Gamma_{3}=1+\frac{(1+4\beta_{r})(\gamma-1)}{1+12(\gamma-1)\beta_{r}}\quad.$$
(21)
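These exponents interpolate between the pure-gas value $\gamma$ (for $\beta_{r}\to 0$) and the pure-radiation value $4/3$ (for $\beta_{r}\to\infty$), which a short Python sketch of equations (20)-(21) confirms:

```python
def Gamma1(beta_r, gamma):
    """First adiabatic exponent of a gas+radiation mixture, equation (20)."""
    return ((1 + 12 * (gamma - 1) * beta_r + (1 + 4 * beta_r)**2 * (gamma - 1))
            / ((1 + beta_r) * (1 + 12 * (gamma - 1) * beta_r)))

def Gamma3(beta_r, gamma):
    """Third adiabatic exponent, equation (21)."""
    return 1 + (1 + 4 * beta_r) * (gamma - 1) / (1 + 12 * (gamma - 1) * beta_r)
```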
As in Paper I, we propose a separable solution of the form
$$\displaystyle\begin{split}\rho&=\hat{\rho}(t)\tilde{\rho}(\tilde{z}),\\
p&=\hat{p}(t)\tilde{p}(\tilde{z}),\\
M&=\hat{M}_{ij}(t)\tilde{M}(\tilde{z}),\\
F&=\hat{F}(t)\tilde{F}(\tilde{z}),\\
v_{z}&=\frac{dH}{dt}\tilde{z},\\
\end{split}$$
(22)
where
$$\tilde{z}=\frac{z}{H(t)}$$
(23)
is a Lagrangian variable that follows the vertical expansion of the disc, $H(t)$ is the dynamical vertical scaleheight of the disc, and the quantities with tildes are normalized variables that satisfy a standard dimensionless form of the equations of vertical structure.
In order to preserve separability the (modified) Maxwell stress $M^{ij}$ must have the same vertical structure as the pressure ($\tilde{M}=\tilde{p}$). (We could also allow an additional height-independent contribution to $M^{ij}$, e.g. from a height-independent vertical magnetic field, but this has no effect on the dynamics.) This assumption has a couple of important consequences. It corresponds to a plasma-$\beta$, defined as the ratio of hydrodynamic to magnetic pressure $\beta_{m}:=p/p_{m}$, that is independent of height. Additionally it has implications for the realisability of $M^{ij}$: for a large-scale field we require $M^{zz}=0$ in order that the underlying magnetic field obeys the solenoidal condition. For small-scale/turbulent fields the solenoidal condition instead implies that the mean of $B^{z}$ is independent of height; the fluctuating part of $B^{z}$, however, allows $M^{zz}$ to have the same vertical structure as the pressure.
The separated solution works provided that
$$\frac{d^{2}H}{dt^{2}}=-\frac{GM_{\bullet}}{r^{3}}H+\frac{\hat{p}}{\hat{\rho}H}\left(1+\frac{\hat{M}}{2\hat{p}}-\frac{\hat{M}_{zz}}{\hat{p}}\right),$$
(24)
$$\frac{d\hat{\rho}}{dt}=-\hat{\rho}\left(\Delta+\frac{1}{H}\frac{dH}{dt}\right),$$
(25)
$$\frac{d\hat{p}}{dt}=-\Gamma_{1}\hat{p}\left(\Delta+\frac{1}{H}\frac{dH}{dt}\right)+(\Gamma_{3}-1)\left(f_{\mathcal{H}}n\hat{p}_{v}-\lambda\frac{\hat{F}}{H}\right),$$
(26)
$$\hat{F}=\frac{16\sigma\hat{T}^{4}}{3\kappa\hat{\rho}H}\quad,$$
(27)
$$\hat{p}=(1+\beta_{r})\frac{\mathcal{R}\hat{\rho}\hat{T}}{\mu}\quad,$$
(28)
where the positive constant $\lambda$ is a dimensionless cooling rate that depends on the equations of vertical structure (further details can be found in Paper I) and
$$\beta_{r}=\frac{4\sigma\mu}{3c\mathcal{R}}\frac{\hat{T}^{3}}{\hat{\rho}}\quad.$$
(29)
We must supplement these equations with a closure model for $M^{ij}$ and $f_{\mathcal{H}}$.
Note that the surface density and vertically integrated pressures are (owing to the definitions of the scaleheight and the dimensionless variables)
$$\Sigma=\hat{\rho}H,\quad P=\hat{p}H,\quad P_{v}=\hat{p}_{v}H.$$
(30)
The vertically integrated heating and cooling rates are
$$f_{\mathcal{H}}nP_{v},\quad\lambda\hat{F}\quad.$$
(31)
The cooling rate can also be written as
$$\lambda\hat{F}=2\sigma\hat{T}_{s}^{4}$$
(32)
where $\hat{T}_{s}(t)$ is a representative surface temperature defined by
$$\hat{T}^{4}_{s}=\frac{8\lambda}{3}\frac{\hat{T}^{4}}{\hat{\tau}}$$
(33)
and
$$\hat{\tau}=\kappa\Sigma$$
(34)
is a representative optical thickness.
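Equations (32)-(34) combine into a single expression for the representative surface temperature; a minimal Python sketch (function name and parameter values are ours) makes the relation explicit:

```python
def surface_temperature(T_hat, Sigma, kappa, lam):
    """Representative surface temperature from equations (32)-(34):
    2*sigma*T_s^4 = lambda*F_hat, with tau_hat = kappa*Sigma."""
    tau_hat = kappa * Sigma                                  # eq. (34)
    return (8.0 * lam / 3.0 * T_hat**4 / tau_hat) ** 0.25    # eq. (33)
```

Note that the Stefan-Boltzmann constant cancels between equations (27) and (32), so only $\lambda$, $\hat{T}$ and $\hat{\tau}$ enter.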
We then have
$$\frac{1}{H}\frac{d^{2}H}{dt^{2}}=-\frac{GM_{\bullet}}{r^{3}}+\frac{P}{\Sigma H^{2}}\Biggl{(}1+\frac{\hat{M}}{2\hat{p}}-\frac{\hat{M}_{zz}}{\hat{p}}\Biggr{)},$$
(35)
$$J\Sigma=\mathrm{constant},$$
(36)
$$\left(\frac{1}{\Gamma_{3}-1}\right)\frac{dP}{dt}=-\frac{\Gamma_{1}P}{\Gamma_{3}-1}\left(\Delta+\frac{1}{H}\frac{dH}{dt}\right)+f_{\mathcal{H}}nP_{v}-\lambda\hat{F},$$
(37)
with
$$\frac{\lambda\hat{F}}{Pn}=\lambda\frac{16\sigma(\mu/\mathcal{R})^{4}}{3\kappa n}P^{3}\Sigma^{-5}(1+\beta_{r})^{-4}\quad.$$
(38)
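The closed form (38) can be checked against a step-by-step evaluation through equations (27)-(28); the following Python sketch (function names and parameter values are ours) does this:

```python
SIGMA_SB = 5.670374419e-8     # Stefan-Boltzmann constant (SI)

def cooling_rate_direct(P, Sigma, beta_r, kappa, n, lam, mu_over_R):
    """lambda*F_hat/(P*n) built step by step: T_hat from the vertically
    integrated equation (28), then F_hat from equation (27)."""
    T_hat = P * mu_over_R / ((1.0 + beta_r) * Sigma)
    F_hat = 16.0 * SIGMA_SB * T_hat**4 / (3.0 * kappa * Sigma)
    return lam * F_hat / (P * n)

def cooling_rate_closed(P, Sigma, beta_r, kappa, n, lam, mu_over_R):
    """Equation (38)."""
    return (lam * 16.0 * SIGMA_SB * mu_over_R**4 / (3.0 * kappa * n)
            * P**3 * Sigma**-5 * (1.0 + beta_r)**-4)
```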
We assume for a given $\beta_{m}^{\circ}$, $\beta_{r}^{\circ}$ and $n$ there exists an equilibrium solution for a circular disc and use this solution to nondimensionalise the equations. As in the hydrodynamical models considered in Paper I, we use ${}^{\circ}$ to denote the equilibrium values in the reference circular disc (e.g. $H^{\circ}$, $T^{\circ}$ etc). Depending on the closure model there can be multiple equilibrium solutions, some of which can be unstable (particularly in the radiation dominated limit). Our choices of solution branch for our two closure models are specified in Appendices B and D.
Scaling $H$ by $H^{\circ}$, $\hat{T}$ by $T^{\circ}$, $M^{ij}$ by $p^{\circ}$, $t$ by $1/n$ and $J$ by $2/n$ we obtain the dimensionless version
$$\frac{\ddot{H}}{H}=-(1-e\cos E)^{-3}+\frac{T}{H^{2}}\frac{1+\beta_{r}}{1+\beta_{r}^{\circ}}\frac{\Biggl{(}1+\frac{1}{2}\frac{M}{p}-\frac{M^{zz}}{p}\Biggr{)}}{\left[1+\frac{1}{2}\frac{M^{\circ}}{p^{\circ}}-\frac{(M^{zz})^{\circ}}{p^{\circ}}\right]},$$
(39)
$$\displaystyle\begin{split}\dot{T}&=-(\Gamma_{3}-1)T\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)\\
&+(\Gamma_{3}-1)\frac{1+\beta_{r}}{1+4\beta_{r}}T\left(f_{\mathcal{H}}\frac{P_{v}}{P}-\mathcal{C}^{\circ}\frac{1+\beta_{r}^{\circ}}{1+\beta_{r}}J^{2}T^{3}\right),\end{split}$$
(40)
where a dot over a letter indicates a derivative with respect to the rescaled time. We have written the thermal energy equation in terms of the temperature; the factor $\frac{\Gamma_{3}-1}{1+4\beta_{r}}\propto\frac{1}{c_{V}}$, where $c_{V}$ is the specific heat capacity at constant volume. $\beta_{r}$ can be obtained through
$$\beta_{r}=\beta_{r}^{\circ}JHT^{3}\quad,$$
(41)
where $\beta_{r}^{\circ}$ is the $\beta_{r}$ of the reference circular disc. The equilibrium values of the reference circular disc, $H^{\circ}$, $\hat{T}^{\circ}$, etc., are determined by $\beta_{r}^{\circ}$ and $n$. The reference cooling rate is obtained by setting it equal to the reference heating rate: $\mathcal{C}^{\circ}=f_{\mathcal{H}}^{\circ}\frac{P_{v}^{\circ}}{P^{\circ}}$.
Additionally we introduce the (nondimensional) entropy,
$$s:=4\beta_{r}+\ln(JHT^{1/(\gamma-1)})\quad,$$
(42)
which has contributions from the radiation and the gas.
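To make the structure of this system concrete, the following Python sketch integrates the adiabatic, unmagnetised limit of equations (39)-(41) with a fixed-step RK4 scheme, modelling the dimensionless Jacobian as $J(E)=1-q\cos(E-E_{0})$; the parameter values are ours and milder than those used later in the figures:

```python
import numpy as np

e, q, E0 = 0.5, 0.5, 0.0          # illustrative orbit parameters
beta_r0, gamma = 1.0e-4, 5.0 / 3.0

def jac(E):
    """Dimensionless Jacobian modelled as 1 - q*cos(E - E0)."""
    return 1.0 - q * np.cos(E - E0)

def rhs(y):
    H, Hdot, T, E = y
    Edot = 1.0 / (1.0 - e * np.cos(E))          # rescaled equation (9)
    Jdot = q * np.sin(E - E0) * Edot
    beta_r = beta_r0 * jac(E) * H * T**3        # equation (41)
    Gamma3 = 1.0 + (1.0 + 4.0 * beta_r) * (gamma - 1.0) \
        / (1.0 + 12.0 * (gamma - 1.0) * beta_r)
    # equation (39) with M = 0:
    Hddot = H * (-(1.0 - e * np.cos(E))**-3.0
                 + (T / H**2) * (1.0 + beta_r) / (1.0 + beta_r0))
    # adiabatic limit of equation (40) (no heating or cooling):
    Tdot = -(Gamma3 - 1.0) * T * (Jdot / jac(E) + Hdot / H)
    return np.array([Hdot, Hddot, Tdot, Edot])

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

h, nsteps = 2.0 * np.pi / 5000.0, 10000   # two radial periods of rescaled time
y = np.array([1.0, 0.0, 1.0, 0.0])        # H, dH/dt, T, E at pericentre
history = [y.copy()]
for _ in range(nsteps):
    y = rk4_step(y, h)
    history.append(y.copy())
history = np.asarray(history)
```

Since $t=E-e\sin E$ in rescaled time, the integration covers exactly two radial periods; the vertical oscillator is forced by the $(1-e\cos E)^{-3}$ gravity term and restored by the pressure term.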
4 Effect of Magnetic Fields
4.1 Magnetic fields in eccentric discs
In Paper I we found that (when $p_{v}=p_{g}$) our radiation dominated solutions exhibit extreme compression at pericentre, similar to the extreme behaviour of the adiabatic solutions of Ogilvie & Barker (2014). Many of our solutions with more moderate behaviour have strong viscous stresses at pericentre, which call into question the validity of the $\alpha$-prescription.
What additional physical processes could reverse the collapse of the fluid column and prevent the extreme compression seen in the radiation dominated model? Can the collapse be reversed without encountering unphysically strong viscous stresses? An obvious possibility is the presence of a large scale horizontal magnetic field within the disc which will resist vertical compression. Such a field could be weak for the majority of the orbit but, owing to the relative incompressibility of magnetic fields, become dynamically important during the maximum compression at pericentre. In Appendix E we show that in an eccentric disc, a solution to the steady ideal induction equation in an inertial frame is
$$B^{a}=0,\quad B^{E}=\frac{\Omega B^{E}_{0}(a,\tilde{z})}{nJH},\quad B^{z}=\frac{B^{z}_{0}(a)}{J}\quad.$$
(43)
Here $B^{E}$ is the component parallel to the orbital motion (quasi-toroidal) and $B^{z}$ is the vertical component. We use quasi-poloidal to indicate the components $B^{a}$ and $B^{z}$.
The magnetic field of a star undergoing tidal disruption has been studied by Guillochon & McCourt (2017) and Bonnerot et al. (2017). In these papers it was found that the stretching of the field during the disruption causes an increase in the magnetic pressure from the component aligned with the orbital direction, while the gas pressure and the magnetic pressure from the field perpendicular to the orbit drop. Guillochon & McCourt (2017) found that this tends to result in the magnetic pressure from the parallel field becoming comparable to the gas pressure. Similar results were found in Bonnerot et al. (2017), although with a dependence on the initial field direction. This supports our adopted field configuration, with the vertical field set to zero; as the vertical field does not contribute to the dynamics of the vertical oscillator, we can set it to zero without loss of generality.
In addition to the large scale magnetic field, we assume that the effects of the small-scale/turbulent magnetic field can be modelled by an $\alpha$-viscosity,
$$\mu_{s,b}=\alpha_{s,b}\frac{p_{v}}{\omega_{\rm orb}},$$
(44)
where $\mu_{s,b}$ are the dynamic shear and bulk viscosities, $\alpha_{s,b}$ are dimensionless coefficients, $\omega_{\rm orb}$ is some characteristic frequency of the orbital motion (here taken to be $n$) and $p_{v}$ is some choice of pressure. As in Paper I we set the bulk viscosity to zero ($\alpha_{b}=0$).
As discussed in Section 3, in order to preserve separability of the equations we require $B^{E}_{0}(a,\tilde{z})$ to depend on $\tilde{z}$ in such a way as to make $\beta_{m}$ independent of height. The dimensionless equations for the variation of the dimensionless scale height $H$ and temperature $T$ around the orbit (derived in Appendix E) are then
$$\displaystyle\begin{split}\frac{\ddot{H}}{H}&=-(1-e\cos E)^{-3}+\frac{T}{H^{2}}\frac{1+\beta_{r}}{1+\beta_{r}^{\circ}}\left(1+\frac{1}{\beta_{m}^{\circ}}\right)^{-1}\\
&\times\Biggl{[}1+\frac{1}{\beta_{m}}-2\alpha_{s}\frac{P_{v}}{P}\frac{\dot{H}}{H}-\left(\alpha_{b}-\frac{2}{3}\alpha_{s}\right)\frac{P_{v}}{P}\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)\Biggr{]},\end{split}$$
(45)
$$\displaystyle\begin{split}\dot{T}&=-(\Gamma_{3}-1)T\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)\\
&+(\Gamma_{3}-1)\frac{1+\beta_{r}}{1+4\beta_{r}}T\left(f_{\mathcal{H}}\frac{P_{v}}{P}-\frac{9}{4}\alpha_{s}\frac{P_{v}^{\circ}}{P^{\circ}}\frac{1+\beta_{r}^{\circ}}{1+\beta_{r}}J^{2}T^{3}\right),\end{split}$$
(46)
and the plasma-$\beta$ is given by
$$\beta_{m}=\beta_{m}^{\circ}JHT\frac{1+\beta_{r}}{1+\beta_{r}^{\circ}}\frac{1-e\cos E}{1+e\cos E}$$
(47)
where $\beta_{m}^{\circ}$ is the plasma beta in the reference circular disc.
These equations can be solved using the same relaxation method used to solve the purely hydrodynamic equations in Paper I. However, care must be taken when solving the equations with low $\beta_{m}^{\circ}$ (i.e. strong magnetic fields throughout the disc), as the method does not always converge to a periodic solution (or at least takes an excessively long time to do so). This is most likely due to the absence of dissipative effects acting on the magnetic field, so that free oscillations in the magnetic field are not easily damped out. We believe that the quasiperiodic solutions we find for low $\beta_{m}^{\circ}$ are the superposition of the forced solution and a free fast magnetosonic mode. For now we only consider values of $\beta_{m}^{\circ}$ for which the method successfully converges to a periodic solution.
4.2 Viscous stress independent of the magnetic field ($p_{v}=p_{g}$)
Figures 1-3 show the variations of the scale height, $\beta_{r}$ and $\beta_{m}$ around the orbit for a disc with $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$. The magnetic field has a weak effect on the gas pressure dominated ($\beta_{r}^{\circ}=10^{-4}$) solutions. For the radiation pressure dominated ($\beta_{r}^{\circ}=10^{-3}$) case, a strong enough magnetic field stabilises the solution against the thermal instability and, instead of the nearly adiabatic radiation dominated solutions seen in the hydrodynamic case, the solution is only moderately radiation pressure dominated and maintains significant entropy variation around the orbit. This solution is similar to the moderately radiation pressure dominated hydrodynamic solutions. If the field is too weak (e.g. the $\beta_{m}^{\circ}=100$ case considered here), the magnetic field is not capable of stabilising against the thermal instability and the solution tends to the nearly adiabatic radiation dominated solution.
Most of the solutions in Figures 1-3 are not sufficiently radiation pressure dominated to represent most TDEs. Figures 4-6 show solutions which attain much higher $\beta_{r}$. We see that it is possible to attain significantly radiation pressure dominated solutions which do not possess the extreme variation of the scale height around the orbit present in the hydrodynamic case. In particular, consider the green curve with $\beta_{r}^{\circ}=1$, $\beta_{m}^{\circ}=0.005$. Like the radiation dominated hydrodynamic solutions, this solution is nearly adiabatic; however, magnetic pressure dominates over radiation pressure during pericentre passage. This additional support at pericentre prevents the extreme compression, and resultant heating, seen in the hydrodynamic model, resulting in more moderate variation of the scale height around the orbit. Unlike the similarly radiation dominated, unmagnetised solutions considered in Paper I, this solution remains consistent with the thin disc assumptions for typical TDE parameters.
It should be cautioned that the grey solution (with $\beta_{r}^{\circ}=1$, $\beta_{m}^{\circ}=1$) in Figures 4-6 has not converged. The magnetic field is unimportant for this solution. Based on the radiation dominated hydrodynamic models of Paper I, the disc with $\beta_{r}^{\circ}=1$ will converge on a solution with $\beta_{r}$ much larger than the $\beta_{r}\sim 10^{5}-10^{6}$ of the most radiation dominated converged solutions obtained in Paper I. As the entropy gained per orbit is small compared to the entropy in the disc, this will take a large number of orbits ($>10000$) to converge, so the converged solution is not of much interest when considering transient phenomena like a TDE.
Figure 7 shows the magnitude of different terms in the momentum equation for a disc with $p_{v}=p_{g}$, $\beta_{r}^{\circ}=1$, $\beta_{m}^{\circ}=0.005$, $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$ (i.e. the green solution from Figures 4-6). This shows that the dominant balance in this solution is between the vertical acceleration, gravity and the magnetic force, suggesting that the dynamics of radiation pressure dominated TDEs may be controlled by the magnetic field. Being the least compressible pressure term, the magnetic pressure tends to dominate at pericentre, even if it is fairly weak throughout the rest of the disc. For radiation dominated TDEs, pressure is only important near pericentre, so even a weak magnetic field will make a disproportionate contribution to the dynamics. This suggests that ignoring even subdominant magnetic fields in TDE discs can lead to fundamental changes to the TDE dynamics.
While it is possible to find a combination of $\beta_{r}^{\circ}$ and $\beta_{m}^{\circ}$ which yields a solution with the desired $\beta_{r}$ exhibiting “reasonable” behaviour, it is not clear that the magnetic field in the disc will always be strong enough to produce the desired behaviour. It is possible that this represents a tuning problem for $\beta_{m}^{\circ}$.
To explore this we look at what happens if $\beta_{m}^{\circ}$ is initially too weak to stabilise against the thermal instability but is gradually raised over several thermal times. Figure 8 shows what happens when the magnetic field is increased gradually from $\beta_{m}^{\circ}=100$ to $\beta_{m}^{\circ}=10$ for a disc with $p_{v}=p_{g}$, $\beta_{r}^{\circ}=10^{-3}$, $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$. This corresponds to moving from the grey to the green solution in Figures 1-3, and is done by periodically stopping the calculation and restarting it with a larger $\beta_{m}^{\circ}$. The resulting $\beta_{r}$ in fact increases with time and remains close to that of the grey solution in Figures 1-3 even as we increase the magnetic field strength, and does not transition to a value consistent with the green solution. This suggests that the solution is sensitive to the path taken and that a magnetic field which grows (from an initially weak seed field) in a nearly adiabatic, radiation pressure dominated disc may not cause the disc to collapse to the gas pressure dominated branch. This is likely because the disc is very expanded, meaning that the magnetic field is still quite weak and incapable of influencing the dynamics.
We carried out a similar calculation for a disc with $p_{v}=p_{g}$, $\beta_{r}^{\circ}=1$, $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$ moving from $\beta_{m}^{\circ}=1$ to $\beta_{m}^{\circ}=5\times 10^{-3}$ (corresponding to the grey and green solutions of Figures 4-6). In this case $\beta_{r}$ steadily increases with time (apart from a small variation over the orbital period) with the magnetic field having no appreciable influence on the solution. Owing to the relatively large $\beta_{r}^{\circ}$ this solution never reached steady state, as discussed previously. The implication of these two tests is that radiation pressure dominated, magnetised, discs can have two stable solution branches, with the choice of branch determined by the magnetic field history.
Figure 9 shows the pericentre passage for a magnetised disc with $p_{v}=p_{g}$, $\beta_{r}^{\circ}=10^{-3}$, $\beta_{m}^{\circ}=10$, $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$. The magnetic pressure is extremely concentrated within the nozzle and near the midplane. Like the hydrodynamic nozzle structure considered in Paper I, the nozzle is asymmetric and located prior to pericentre, which appears to be characteristic of dissipative highly eccentric discs.
4.3 Viscous stress dependent on the magnetic field
Begelman & Pringle (2007) have suggested that discs with strong toroidal fields may be stable to the thermal instability if the stress depends on the magnetic pressure. In this subsection we explore this possibility for a highly eccentric disc.
Figures 10-12 show the variation of the scale height, $\beta_{r}$ and plasma beta for a disc with $p_{v}=p_{m}$, $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$. These have essentially the same behaviour as the nearly adiabatic radiation pressure dominated discs in the hydrodynamic case. This is not surprising, as in this limit the gas and magnetic pressures are essentially negligible, which also results in negligible viscous stress/heating when the stress scales with either of these pressures. Increasing the magnetic field strength stabilises the “gas pressure dominated” branch, where the magnetic field and viscous dissipation become important. This branch can have $p_{r}\gg p_{g}$ around the entire orbit; this is similar to the behaviour of the radiation pressure dominated hydrodynamic discs considered in Paper I with large $\alpha_{s}$.
Figures 13-15 show the variation of the scale height, $\beta_{r}$ and $\beta_{m}$ for a disc with $p_{v}=p+p_{m}$, $\alpha_{s}=0.1$, $\alpha_{b}=0$, $e=q=0.9$ and $E_{0}=0$. Here we find that, with a strong enough magnetic field, we can obtain thermally stable solutions despite the dependence of the stress on the radiation pressure. Generally, for thermal stability the magnetic field needs to dominate (over radiation pressure) over part of the orbit. Having such a strong horizontal magnetic field over a sizable fraction of the orbit may lead to flux expulsion through magnetic buoyancy, an effect we do not treat here. If the magnetic field is too weak, however, we encounter a thermal instability similar to that of the hydrodynamic radiation pressure dominated discs with $p_{v}=p$.
Part of the motivation for introducing the magnetic field was to regularise some of the extreme behaviour encountered at pericentre. Unfortunately, while the prescriptions $p_{v}=p_{m}$ and $p_{v}=p+p_{m}$ are promising as a way of taming the thermal instability, they exhibit the same extreme behaviour that the hydrodynamic models possess. In particular, when $p_{v}=p_{m}$ the solutions exhibit extreme compression at pericentre, while for the more magnetised discs (with either $p_{v}=p_{m}$ or $p_{v}=p+p_{m}$) we again encounter the issue of the viscous stresses being comparable to or exceeding the pressure (including the magnetic pressure). See, for example, Figure 16, which shows that the viscous stresses exceed the magnetic, gas and radiation pressures during pericentre passage.
5 Nonlinear constitutive model for the magnetic field
The model considered in Section 4 has a number of drawbacks. The first is that the viscous stress and the coherent magnetic field are treated as separate physical effects when they are in fact intrinsically linked (although subsection 4.3 partially addresses this issue). Secondly, the turbulent magnetic field, responsible for the effective viscosity, cannot store energy. Lastly, the model neglects resistive effects; while nonideal MHD effects would be weak if the flow were strictly laminar, the turbulent cascade should always move magnetic energy to scales on which nonideal effects become important, so the coherent magnetic field should be affected by some dissipative process.
To address these issues we consider a model of the (modified) Maxwell stress where the magnetic field is forced by a turbulent emf and relaxes to an isotropic field proportional to some pressure $p_{v}$ on a timescale $\tau$. While the “turbulence” in this model acts to isotropise the magnetic field, the background shear flow stretches the quasi-radial field component and produces a highly anisotropic field that is predominantly quasi-toroidal. A possible justification for this model based on a stochastically forced induction equation is given in Appendix E. This model has much in common with Ogilvie (2003), but does not solve for the Reynolds stress explicitly.
The Maxwell stress in this model evolves according to
$$\mathcal{D}M^{ij}=-(M^{ij}-\mathcal{B}p_{v}g^{ij})/\tau\quad,$$
(48)
where $g^{ij}$ is the metric tensor and $\mathcal{B}$ is a nondimensional parameter controlling the strength of the forcing relative to $p_{v}$. $\mathcal{B}$ can be taken to be constant by absorbing any variation into the definition of $p_{v}$. $\mathcal{D}$ is the operator from Ogilvie (2001) (a type of weighted Lie derivative), which acts on a rank (2,0) tensor by
$$\mathcal{D}M^{ij}=DM^{ij}-2M^{k(i}\nabla_{k}u^{j)}+2M^{ij}\nabla_{k}u^{k}\quad.$$
(49)
As noted in Ogilvie (2001), $\mathcal{D}M^{ij}=0$ is the equation for the evolution of the (modified) Maxwell stress for a magnetic field which satisfies the ideal induction equation; it states that the magnetic stress is frozen into the fluid.
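The algebraic part of this operator is straightforward to implement and test. The following NumPy sketch (function names are ours, and $\mu_{0}$ is absorbed into the units of $M$) checks that the stretching terms of equation (49) reproduce the frozen-in evolution of $M^{ij}=B^{i}B^{j}$ implied by the induction equation $DB^{i}=B^{k}\nabla_{k}u^{i}-B^{i}\nabla_{k}u^{k}$:

```python
import numpy as np

def stress_stretching(M, gradu):
    """Stretching/compression terms of equation (49), i.e. the value of
    D M^{ij} when the calligraphic-D of M vanishes (frozen-in stress):
    2 M^{k(i} grad_k u^{j)} - 2 M^{ij} grad_k u^k,
    with gradu[k, j] = partial_k u^j and M symmetric."""
    A = M @ gradu                  # M^{ik} grad_k u^j
    return A + A.T - 2.0 * M * np.trace(gradu)

rng = np.random.default_rng(0)
B = rng.normal(size=3)             # arbitrary field vector (mu0 absorbed)
G = rng.normal(size=(3, 3))        # arbitrary velocity gradient
M = np.outer(B, B)
dB = G.T @ B - B * np.trace(G)     # frozen-in induction: D B^i
```

The product rule then gives $DM^{ij}=(DB^{i})B^{j}+B^{i}(DB^{j})$, which matches `stress_stretching(M, G)` term by term.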
We adopt the following prescription for the relaxation time:
$$\tau=\mathrm{De}_{0}\frac{1}{\Omega_{z}}\sqrt{\frac{p_{v}}{M}}\quad,$$
(50)
where $\Omega_{z}=\sqrt{GM_{\bullet}/r^{3}}$ is the vertical oscillation frequency and $\mathrm{De}_{0}$ is a dimensionless constant; this matches the functional form for the relaxation time $\tau$ given in the compressible version of Ogilvie (2003). In subsequent equations it will be useful to express this relaxation time as a Deborah number $\mathrm{De}=n\tau$, a dimensionless number used for viscoelastic fluids that is the ratio of the relaxation time to some characteristic timescale of the flow. When $\tau$ is given by equation (50), equation (48) corresponds to the equation for the (modified) Maxwell stress given in Ogilvie (2003) if the Reynolds stress is isotropic and proportional to some pressure $p_{v}$. One emergent property of such a stress model is that the stress naturally scales with the magnetic pressure, as the latter is half the trace of the former (see Appendix A).
From this stress model, we have a nondimensional heating rate,
$$f_{\mathcal{H}}=\frac{1}{2\mathrm{De}}\left(\frac{M}{p_{v}}-3\mathcal{B}\right)\quad,$$
(51)
which ensures that magnetic energy lost/gained via the relaxation terms in equation (48) is converted to/from thermal energy (this is shown in Appendix B). Thus energy is conserved within the disc, although it can be lost radiatively from the disc surface.
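A minimal sketch of the prescriptions (50)-(51) (pure Python; function names are ours): the heating rate is positive exactly when the magnetic energy exceeds the forced isotropic level $3\mathcal{B}p_{v}$, so relaxation transfers the excess to heat:

```python
import math

def deborah_number(p_v, M_trace, n, Omega_z, De0):
    """De = n*tau, with tau from equation (50)."""
    return De0 * (n / Omega_z) * math.sqrt(p_v / M_trace)

def heating_factor(M_trace, p_v, De, B_param):
    """Dimensionless heating rate f_H, equation (51)."""
    return (M_trace / p_v - 3.0 * B_param) / (2.0 * De)
```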
In Appendix B we obtain the hydrostatic solutions for a circular disc. If $p_{v}$ is independent of $M$ the vertical equation of motion, rescaled by this reference circular disc, is
$$\frac{\ddot{H}}{H}=-(1-e\cos E)^{-3}+\frac{T}{H^{2}}\frac{1+\beta_{r}}{1+\beta_{r}^{\circ}}\frac{\Biggl{(}1+\frac{1}{2}\frac{M}{p}-\frac{M^{zz}}{p}\Biggr{)}}{\left[1+\mathcal{B}\frac{P_{v}^{\circ}}{P^{\circ}}\left(\frac{1}{2}+\frac{9}{4}\mathrm{De}_{0}^{2}\frac{P_{v}^{\circ}}{P^{\circ}}\right)\right]},$$
(52)
while the thermal energy equation is
$$\displaystyle\begin{split}\dot{T}&=-(\Gamma_{3}-1)T\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)\\
&+(\Gamma_{3}-1)\frac{1+\beta_{r}}{1+4\beta_{r}}T\left[\frac{1}{2\mathrm{De}}\left(\frac{M}{p}-3\mathcal{B}\frac{p_{v}}{p}\right)-\mathcal{C}^{\circ}\frac{1+\beta_{r}^{\circ}}{1+\beta_{r}}J^{2}T^{3}\right],\end{split}$$
(53)
where we have introduced a reference cooling rate,
$$\displaystyle\begin{split}\mathcal{C}^{\circ}&=\frac{9}{4}\mathcal{B}\mathrm{De}^{\circ}\frac{P^{\circ}_{v}}{P^{\circ}}\\
&=\left(\frac{3}{2}\right)^{3/2}\mathcal{B}^{1/2}\mathrm{De}_{0}\left(1+\sqrt{1+2\frac{\mathrm{De}_{0}^{2}}{\mathcal{B}}}\right)^{-1/2}\frac{P^{\circ}_{v}}{P^{\circ}}\quad.\end{split}$$
(54)
Here $\mathrm{De}^{\circ}$ is the equilibrium Deborah number in the reference circular disc, which is in general different from $\mathrm{De}_{0}$.
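As a sanity check on these reference-disc relations, the closed form for the equilibrium $M$ (Equation 77) and the two expressions for $\mathcal{C}^{\circ}$ in Equation 54 can be verified numerically. The following sketch uses illustrative parameter values not tied to any run in the paper, and takes $P_{v}^{\circ}/P^{\circ}=1$ for simplicity:

```python
import math

# Illustrative parameter values (not taken from any run in the paper).
De0 = 0.5   # dimensionless relaxation-time constant De_0
B = 0.1     # forcing-strength parameter (script B in the text)
p_v = 1.0   # fluctuation pressure, arbitrary units

# Equation 77: closed form for the equilibrium M when p_v is independent of M.
s = math.sqrt(1.0 + 2.0 * De0**2 / B)
M = 1.5 * (1.0 + s) * B * p_v

# It must solve the implicit relation M = (3 + (9/2) De^2) B p_v
# with De = De0 sqrt(p_v/M) (Equations 76 and 82).
De_eq = De0 * math.sqrt(p_v / M)
assert abs(M - (3.0 + 4.5 * De_eq**2) * B * p_v) < 1e-12

# Equation 54: both expressions for the reference cooling rate agree.
C1 = 2.25 * B * De_eq                                   # (9/4) B De°
C2 = 1.5**1.5 * math.sqrt(B) * De0 / math.sqrt(1.0 + s)
assert abs(C1 - C2) < 1e-12
```

Both assertions pass for any $\mathrm{De}_{0},\mathcal{B},p_{v}>0$, confirming that the second line of Equation 54 follows from the first together with Equations 77 and 82.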
We solve these equations along with the equations for the evolution of the stress components,
$$\displaystyle\dot{M}^{\lambda\lambda}$$
$$\displaystyle+2\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)M^{\lambda\lambda}=-(M^{\lambda\lambda}-\mathcal{B}p_{v}g^{\lambda\lambda})/\mathrm{De},$$
(55)
$$\displaystyle\dot{M}^{\lambda\phi}$$
$$\displaystyle-M^{\lambda\lambda}\Omega_{\lambda}-M^{\lambda\phi}\Omega_{\phi}+2\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)M^{\lambda\phi}$$
$$\displaystyle=-(M^{\lambda\phi}-\mathcal{B}p_{v}g^{\lambda\phi})/\mathrm{De},$$
(56)
$$\displaystyle\dot{M}^{\phi\phi}$$
$$\displaystyle-2M^{\lambda\phi}\Omega_{\lambda}-2M^{\phi\phi}\Omega_{\phi}+2\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)M^{\phi\phi}$$
$$\displaystyle=-(M^{\phi\phi}-\mathcal{B}p_{v}g^{\phi\phi})/\mathrm{De},$$
(57)
$$\displaystyle\dot{M}^{zz}$$
$$\displaystyle+2\frac{\dot{J}}{J}M^{zz}=-(M^{zz}-\mathcal{B}p_{v}g^{zz})/\mathrm{De}\quad.$$
(58)
We solve for these stress components in the $(\lambda,\phi)$ coordinate system of Ogilvie (2001) as this simplifies the metric tensor. We can do this as, apart from $M^{zz}$ (which is the same in both coordinate systems), our equations only depend on $M^{ij}$ through scalar quantities.
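The relaxation of Equations 55-57 towards the circular-disc equilibrium of Appendix B can be illustrated with a small integration. The sketch below holds $\mathrm{De}$ fixed, an assumption made purely to simplify the illustration (in our model $\mathrm{De}$ depends on $M$ through Equation 50), and works in units with $n=1$, $\mathcal{B}p_{v}=1$ and $R=1$:

```python
import numpy as np

# Circular orbit: Jdot = Hdot = 0, Keplerian shear, De held fixed
# (simplifying assumption for this sketch only).
De, Bpv, R = 0.5, 1.0, 1.0
Om_lam, Om_phi = -1.5 / R, 0.0            # dOmega/dlambda, dOmega/dphi
g_ll, g_lp, g_pp = 1.0, 0.0, 1.0 / R**2   # metric components

def rhs(M):
    """Right-hand sides of Equations 55-57 with Jdot = Hdot = 0."""
    Mll, Mlp, Mpp = M
    return np.array([
        -(Mll - Bpv * g_ll) / De,
        Mll * Om_lam + Mlp * Om_phi - (Mlp - Bpv * g_lp) / De,
        2 * Mlp * Om_lam + 2 * Mpp * Om_phi - (Mpp - Bpv * g_pp) / De,
    ])

# RK4 integration from an isotropic initial stress.
M, dt = np.array([1.0, 0.0, 1.0 / R**2]), 1e-3
for _ in range(20000):
    k1 = rhs(M); k2 = rhs(M + 0.5 * dt * k1)
    k3 = rhs(M + 0.5 * dt * k2); k4 = rhs(M + dt * k3)
    M = M + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# The stress relaxes to the circular-disc fixed point, Equations 73-75.
expected = np.array([Bpv, -1.5 * De * Bpv / R, (1 + 4.5 * De**2) * Bpv / R**2])
assert np.allclose(M, expected, atol=1e-8)
```

The stress converges on a few relaxation times to the anisotropic equilibrium of Equations 73-75, with the off-diagonal component generated by the Keplerian shear.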
Figures 17-19 show the variations of the scale height, $\beta_{r}$ and plasma-$\beta$ (defined as $\beta_{m}=\frac{2p}{M}$) around the orbit for a disc with $p_{v}=p_{g}$, $\mathrm{De}_{0}=0.5$, $\mathcal{B}=0.1$, $e=q=0.9$ and $E_{0}=0$. Like the ideal induction equation model of Section 3, the coherent magnetic field has a stabilising effect on the dynamics. The effect is not as strong as that seen in the ideal induction equation model because, in that model, we could choose $\beta_{m}^{\circ}$ so as to achieve a much stronger field than the constitutive model achieves here. Compared with the ideal induction equation model the plasma-$\beta$ is more uniform around the orbit; there is still an abrupt decrease in the plasma-$\beta$ near pericentre, which highlights the importance of the magnetic field during pericentre passage.
Figure 20 shows the pericentre passage for a disc with $p_{v}=p_{g}$, $\beta_{r}^{\circ}=10^{-3}$, $\mathrm{De}_{0}=0.5$, $\mathcal{B}=0.1$, $e=q=0.9$ and $E_{0}=0$. As with the ideal induction equation model, the magnetic pressure is extremely concentrated within the nozzle and near to the midplane. The nozzle is far more symmetric compared to the ideal induction equation model as the weaker field means that the disc is in a modified form of the nearly adiabatic radiation pressure dominated state.
In addition to considering the situation where the fluctuation pressure scales with the gas pressure, $p_{v}=p_{g}$, we also considered $p_{v}=p+p_{m}$. As in the ideal induction equation model, we found it is possible to stabilise the thermal instability with a strong enough magnetic field; however, this requires fine-tuning of $\mathrm{De}_{0}$ and $\mathcal{B}$, for which there is no obvious justification. Instead of stabilising the thermal instability, it is possible to delay its onset by choosing a small enough $\mathrm{De}_{0}$, so that the thermal runaway occurs on a timescale much longer than the orbital time (after $\sim 1000$ orbits). These solutions never settle down into a periodic (or nearly periodic) solution, however: they undergo a long phase of quasi-periodic evolution, during which the mean scale height remains close to its initial value, before eventually experiencing thermal runaway. A quasi-periodic solution of our model is not self-consistent, so the possibility that the thermal instability is delayed in the nonlinear-constitutive MRI model needs to be explored using an alternative method.
The possibility that the thermal instability stalls or is delayed has some support from simulations looking at the thermal stability of MRI active discs (Jiang et al., 2013; Ross et al., 2017). In both these papers it was found that the disc was quasi-stable, with thermal instability occurring when a particularly large turbulent fluctuation caused a strong enough perturbation away from the equilibrium. The (modified) Maxwell stress considered here is equal to the expectation value of a modified Maxwell stress which is stochastically forced by fluctuations with amplitude proportional to $p_{v}$ (see Equation 117 of Appendix E and discussion therein), so it is possible that our stress model captures the thermal quasi-stability seen in Jiang et al. (2013) and Ross et al. (2017) in some averaged sense. The possibility that the thermal instability is delayed or slowed is particularly relevant for TDEs, which are inherently transient phenomena: if the timescale for thermal runaway is made long enough, then eccentric TDE discs may be thermally stable over the lifetime of the disc.
6 Discussion
6.1 Stability of the solutions
As discussed in Paper I, one advantage of our solution method is that the solutions it finds are typically nonlinear attractors (or at least long-lived transients) and so are stable against (nonlinear) perturbations to the solution variables ($H$, $\dot{H}$, $T$ and, when present, $M^{ij}$). Generally we expect such perturbations to damp on the thermal time or faster. Instabilities such as the thermal instability manifest as a failure to converge to a $2\pi$-periodic solution.
For the ideal induction equation model, our method cannot tell us about the stability of the solution to perturbations to the horizontal magnetic field. Showing this would require a separate linear stability analysis. Perturbations to the vertical field typically have no influence on the dynamics of the disc vertical structure.
However, for the constitutive model, perturbations to the magnetic field are encapsulated in perturbations to $M^{ij}$, so these solutions are stable against (large scale) perturbations to the magnetic field. This is most likely because, unlike in the ideal induction equation model, dissipation acts on the magnetic field.
Our solution method doesn’t tell us about the stability of our solutions to short wavelength (comparable to or less than the scale height) perturbations to our system. So our disc structure could be unstable to such perturbations. Like the hydrodynamic solutions in Paper I, it is likely our discs are unstable to the parametric instability (Papaloizou, 2005a, b; Wienkers & Ogilvie, 2018; Barker & Ogilvie, 2014; Pierens et al., 2020). Additionally if, as assumed, turbulence in highly eccentric discs is caused by the MRI, then there must be perturbations to the magnetic field in the ideal induction equation model which are unstable.
Interestingly the simulations of Sądowski (2016) found that the strength of turbulence in magnetised and unmagnetised TDE discs is broadly comparable, something that would not be expected in a circular disc. Sądowski (2016) suggested that a hydrodynamic instability might be responsible for the disc turbulence. The discs considered by Sądowski (2016) still have appreciable eccentricity at the end of their simulation (with $e\approx 0.2$) so an obvious contender would be the parametric instability feeding off the disc eccentricity and breathing mode.
6.2 Resistivity and dynamo action
Even when the magnetic field in our models does a good job of resisting the collapse of the disc, the stream will still be highly compressed at pericentre. The highly compressed flow combined with a very strong field (with $\beta_{m}\ll 1$) makes the nozzle a prime site for magnetic reconnection. This will require that magnetic field lines on neighbouring orbits can have opposite polarities. Our solutions are agnostic to the magnetic field polarity and, in principle, support this possibility.
The simulations of Guillochon & McCourt (2017) suggest that the initial magnetic field in the disc will be (quasi-)toroidal with periodic reversals in direction. When such a field is compressed both horizontally and vertically during pericentre passage, neighbouring toroidal magnetic field lines of opposite polarity can undergo reconnection, generating a quasi-poloidal field. We thus have a basis for an eccentric $\alpha-\Omega$ dynamo, where strong reconnection in the nozzle creates quasi-poloidal field, which in turn creates a source term in the quasi-toroidal induction equation from the shearing of this quasi-radial field (see Appendix C). In highly eccentric TDE discs, with their large vertical and horizontal compression, this dynamo could potentially be quite strong.
The constitutive model we considered in Section 5 implicitly possesses a dynamo through the source term proportional to $p_{v}$. This is a small scale dynamo which arises from the action of the turbulent velocity field and will be limited by the kinetic energy in the turbulence. A dynamo closed by reconnection in the nozzle would be a large scale dynamo and the coherent field produced could potentially greatly exceed equipartition with the turbulent velocity field.
Lastly, reconnection sites can accelerate charged particles. We would therefore expect that TDEs in which there is a line of sight to the pericentre should include a flux of high-energy particles. From most viewing angles the line of sight to the pericentre “bright point” will be blocked (see also Zanazzi & Ogilvie (2020)); this blocks both the X-ray flux from the disc and any particles accelerated by reconnection in the nozzle. Hence these high-energy particles should only be seen in X-ray bright TDEs.
7 Conclusion
In this paper we considered alternatives to the standard $\alpha-$prescription for the Maxwell stress, applied to highly eccentric TDE discs. In particular we focused on the effects of the coherent magnetic field on the dynamics. We considered two separate stress models: an $\alpha-$disc with an additional coherent magnetic field obeying the ideal induction equation, and a nonlinear constitutive (viscoelastic) model of the magnetic field. In summary, our results are:
1) The coherent magnetic field in both models has a stabilising effect on the dynamics, making the gas pressure dominated branch stable at larger radiation pressures and reducing or removing the extreme variation in scale height around the orbit for radiation pressure dominated solutions.
2) The coherent magnetic field is capable of reversing the collapse at pericentre without the presumably unphysically strong viscous stresses seen in some of the hydrodynamic models.
3) For the radiation pressure dominated ideal induction equation model with a moderate magnetic field, the dynamics of the scale height is set by the magnetic field (along with gravity and vertical motion) and doesn’t feel the effects of gas or radiation pressure. This is because magnetic pressure dominates during pericentre passage, which is the only part of the orbit where pressure is important.
At present the behaviour of magnetic fields in eccentric discs is an understudied area. Our investigation suggests that magnetic fields can play an important role in TDE discs and that further work in this area is needed.
Acknowledgements
We thank J. J. Zanazzi, L. E. Held and H. N. Latter for helpful discussions. E. Lynch would like to thank the Science and Technologies Facilities Council (STFC) for funding this work through a STFC studentship. This research was supported by STFC through the grant ST/P000673/1.
8 Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
References
Barker A. J., Ogilvie G. I., 2014, MNRAS, 445, 2637
Begelman M. C., Pringle J. E., 2007, MNRAS, 375, 1070
Bonnerot C., Price D. J., Lodato G., Rossi E. M., 2017, MNRAS, 469, 4879
Chandrasekhar S., 1967, An introduction to the study of stellar structure
Das U., Begelman M. C., Lesur G., 2018, MNRAS, 473, 2791
Guillochon J., McCourt M., 2017, ApJ, 834, L19
Guillochon J., Manukian H., Ramirez-Ruiz E., 2014, ApJ, 783, 23
Janiuk A., Misra R., 2012, A&A, 540, A114
Jiang Y.-F., Stone J. M., Davis S. W., 2013, ApJ, 778, 65
Krolik J., Piran T., Svirski G., Cheng R. M., 2016, ApJ, 827, 127
Lynch E. M., Ogilvie G. I., 2020, MNRAS
Meier D. L., 1979, ApJ, 233, 664
Ogilvie G. I., 2000, MNRAS, 317, 607
Ogilvie G. I., 2001, MNRAS, 325, 231
Ogilvie G. I., 2002, MNRAS, 330, 937
Ogilvie G. I., 2003, MNRAS, 340, 969
Ogilvie G. I., Barker A. J., 2014, MNRAS, 445, 2621
Ogilvie G. I., Lynch E. M., 2019, MNRAS, 483, 4453
Ogilvie G. I., Proctor M. R. E., 2003, Journal of Fluid Mechanics, 476, 389
Papaloizou J. C. B., 2005a, A&A, 432, 743
Papaloizou J. C. B., 2005b, A&A, 432, 757
Pessah M. E., Psaltis D., 2005, ApJ, 628, 879
Pierens A., McNally C. P., Nelson R. P., 2020, arXiv e-prints, p. arXiv:2005.14693
Piran T., 1978, ApJ, 221, 652
Piran T., Svirski G., Krolik J., Cheng R. M., Shiokawa H., 2015, ApJ, 806, 164
Pringle J. E., 1976, MNRAS, 177, 65
Rees M. J., 1988, Nature, 333, 523
Ross J., Latter H. N., Tehranchi M., 2017, MNRAS, 468, 2401
Sakimoto P. J., Coroniti F. V., 1981, ApJ, 247, 19
Shakura N. I., Sunyaev R. A., 1976, MNRAS, 175, 613
Sądowski A., 2016, MNRAS, 462, 960
Svirski G., Piran T., Krolik J., 2017, MNRAS, 467, 1426
Wienkers A. F., Ogilvie G. I., 2018, MNRAS, 477, 4838
Zanazzi J. J., Ogilvie G. I., 2020, MNRAS, 499, 5562
Appendix A Properties of our Stress model
In this appendix we show some important properties of our constitutive model for the Maxwell stress. Restated here for clarity, our model for the (modified) Maxwell stress is
$$M^{ij}+\tau\mathcal{D}M^{ij}=\mathcal{B}p_{v}g^{ij}\quad,$$
(59)
where $p_{v}$ is some pressure which controls the magnitude of the magnetic fluctuations. The Deborah number is given by
$$\mathrm{De}=\tau n=\mathrm{De}_{0}\frac{n}{\Omega_{z}}\sqrt{\frac{p_{v}}{M}},$$
(60)
where $\Omega_{z}=\sqrt{GM_{\bullet}/r^{3}}$ and $\mathrm{De}_{0}$ is a dimensionless constant. This matches the decay term in the compressible version of Ogilvie (2003).
A.1 Viscoelastic behaviour
This model is part of a large class of possible viscoelastic models for $M^{ij}$. The elastic limit $\tau\rightarrow\infty$ of this equation is fairly obvious, corresponding to a magnetic field which obeys the ideal induction equation through the “freezing in” of $M^{ij}$ ($\mathcal{D}M^{ij}=0$). However, to obtain the viscous behaviour responsible for the effective viscosity of circular accretion discs requires more work. The viscous limit is obtained when $\mathrm{De}\ll 1$. For simplicity, in what follows, we shall assume $\tau$ and $p_{v}$ are independent of the magnetic field ($M^{ij}$).
We propose a series expansion in $\tau$,
$$M^{ij}=\sum_{k=0}^{\infty}\tau^{k}M_{k}^{ij}\quad.$$
(61)
This expansion is only likely to be valid in the short $\tau$ limit and may break down for material variations on timescales shorter than $\tau$. With that caveat we can find a series solution for equation 59,
$$M^{ij}=\mathcal{B}\sum_{k=0}^{\infty}(-\tau\mathcal{D})^{k}p_{v}g^{ij}\quad.$$
(62)
Keeping the lowest order terms in the expansion we have
$$M^{ij}=\mathcal{B}p_{v}g^{ij}-\mathcal{B}\tau\mathcal{D}(p_{v}g^{ij})+O(\tau^{2})\quad.$$
(63)
The lowest order term is an isotropic stress and evidently a form of magnetic pressure. The operator $\mathcal{D}=D$ when acting on a scalar and when acting on the metric tensor $\mathcal{D}g^{ij}=-2S^{ij}+2\nabla_{k}u^{k}g^{ij}$. Using the product rule,
$$\displaystyle\begin{split}M^{ij}&\approx\mathcal{B}p_{v}g^{ij}+2\mathcal{B}\tau p_{v}S^{ij}-2\mathcal{B}\tau(p_{v}\nabla_{k}u^{k}+Dp_{v})g^{ij}\\
&=\mathcal{B}p_{v}g^{ij}+2\mathcal{B}\tau p_{v}S^{ij}+2\mathcal{B}\tau\left(\left(\frac{\partial p_{v}}{\partial\ln\rho}\right)_{s}-p_{v}\right)\nabla_{k}u^{k}g^{ij}-2\mathcal{B}\tau\left(\frac{\partial p_{v}}{\partial s}\right)_{\rho}(Ds)\,g^{ij}\quad.\end{split}$$
(64)
So the $O(1)$ term gives rise to a magnetic pressure; of the $O(\tau)$ terms, the first is a shear viscous stress, the second is a bulk viscous stress and the final term is an additional nonadiabatic correction which has no obvious analogue in the standard viscous or magnetic models. Higher order terms contain time derivatives of the pressure $p_{v}$ and the velocity gradients $\nabla_{i}u^{j}$. This dependence produces a memory effect in the fluid, causing the dynamics of the fluid to depend on its history. The fluid has a finite memory and becomes insensitive to (“forgets about”) its state at times $\gtrsim\tau$ in the past.
If $\tau$ and $p_{v}$ depend on $M^{ij}$ then the terms in equation 64 are modified. However the equation still decomposes into an isotropic magnetic pressure term, a shear stress term $\propto S^{ij}$, a bulk stress term $\propto\nabla_{k}u^{k}g^{ij}$ and a non-adiabatic term $\propto(Ds)\,g^{ij}$ (as in equation 64).
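The finite-memory behaviour described above can be seen in a scalar caricature of Equation 59, $\dot{M}=-(M-f(t))/\tau$, whose long-time solution is an exponentially weighted average of the forcing over roughly the last $\tau$. A minimal sketch with an arbitrary sinusoidal forcing, showing the resulting amplitude reduction and phase lag:

```python
import numpy as np

# Scalar caricature of the relaxation: tau * dM/dt = -(M - f(t)), forced by
# f(t) = sin(t). The long-time response is
#   M = (1 + tau^2)^(-1/2) sin(t - arctan(tau)),
# i.e. the "memory" over the last ~tau lags and smooths the forcing.
tau = 0.5
t = np.linspace(0.0, 20.0, 200001)
dt = t[1] - t[0]
f = np.sin(t)

M = 0.0
for i in range(1, len(t)):
    M += dt * (f[i - 1] - M) / tau    # forward Euler step

M_exact = np.sin(t[-1] - np.arctan(tau)) / np.sqrt(1.0 + tau**2)
assert abs(M - M_exact) < 1e-3        # transient has decayed by t = 20
```

Larger $\tau$ gives a longer memory, a smaller response amplitude and a larger phase lag, mirroring the transition from viscous to elastic behaviour.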
A.2 Realisability
In addition to its behaviour in the viscous and elastic limits, another necessary property of a model of the Maxwell stress is its realisability from an actual magnetic field. As $M^{ij}=\frac{B^{i}B^{j}}{\mu_{0}}$, $M^{ij}$ must be positive semi-definite. Thus for all positive semi-definite initial conditions $M^{ij}(0)$ our constitutive model, Equation 59, must conserve the positive semi-definite character of $M^{ij}$. This is equivalent to requiring that the quadratic form $Q=M^{ij}Y_{i}Y_{j}$ satisfies $Q\geq 0$ for all vectors $Y_{i}$, at all points in the fluid.
We will show by contradiction that an initially positive semi-definite $M^{ij}$ cannot evolve into one that is not positive semi-definite. Suppose, to the contrary, that at some point in the flow $Q<0$ for some vector $X_{i}$ at some time after the initial state. Then let us consider a smooth, evolving vector field $Y_{i}$ that matches the vector $X_{i}$ at the given point and time. The corresponding quadratic form $Q$ is then a scalar field that evolves according to
$$\displaystyle\begin{split}\mathcal{D}Q&=Y_{i}Y_{j}\mathcal{D}M^{ij}+M^{ij}\mathcal{D}(Y_{i}Y_{j})\\
&=\left(\mathcal{B}p_{v}Y^{2}-Q\right)/\tau+M^{ij}\mathcal{D}Y_{i}Y_{j}\quad.\\
\end{split}$$
(65)
By assumption, $Q$ is initially positive and evolves continuously to a negative value at the given later time. Therefore $Q$ must pass through zero at some intermediate time, which we denote by $t=0$ without loss of generality. We can also assume, without loss of generality, that the vector field evolves according to $\mathcal{D}Y_{i}=0$, which means that it is advected by the flow. The equation for $Q$ then becomes
$$DQ=\left(\mathcal{B}p_{v}Y^{2}-Q\right)/\tau\quad,$$
(66)
where we have made use of the fact that $\mathcal{D}=D$ when acting on a scalar. Within the disc we expect $p_{v}>0$; additionally, as $M^{ij}$ is positive semi-definite, $M\geq 0$. Thus at $t=0$, where $Q=0$, the time derivative of $Q$ is given by
$$DQ|_{t=0}=\mathcal{B}p_{v}Y^{2}/\tau\geq 0\quad.$$
(67)
This contradicts the assumption that $Q$ passes through zero from positive to negative at $t=0$. We conclude that $M^{ij}$ remains positive semi-definite if it is initially so.
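This argument can be checked numerically for a simple flow. The sketch below evolves the Cartesian, incompressible form of Equation 59 for a uniform shear $u=(Sy,0,0)$, starting from a positive semi-definite (indeed singular) initial stress, and confirms that the smallest eigenvalue of $M^{ij}$ never becomes negative; the values of $S$, $\tau$ and the initial $M$ are arbitrary choices for the demonstration:

```python
import numpy as np

# Cartesian, incompressible form of Equation 59:
#   dM/dt = M G + G^T M - (M - B p_v I)/tau,   with G_kj = d_k u^j,
# for uniform shear u = (S y, 0, 0).
S, tau, Bpv = 2.0, 0.7, 1.0
G = np.zeros((3, 3))
G[1, 0] = S                      # only d_y u^x is nonzero

M = np.diag([1.0, 0.5, 0.0])     # positive semi-definite (singular) start

dt, min_eig = 1e-3, np.inf
for _ in range(10000):
    dM = M @ G + G.T @ M - (M - Bpv * np.eye(3)) / tau
    M = M + dt * dM
    min_eig = min(min_eig, np.linalg.eigvalsh(M).min())

# M never loses positive semi-definiteness (up to round-off).
assert min_eig > -1e-8
```

The stress remains realisable throughout, even though the shear strongly amplifies the $xx$ and $xy$ components, in line with the analytic argument above.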
A.3 Energy Conservation
In order that the interior of our disc conserve total energy we need to derive the appropriate magnetic heating/cooling rate so that energy lost/gained by the magnetic field is transferred to/from the thermal energy. The MHD total energy equation with radiative flux is
$$\partial_{t}\left[\rho\left(\frac{\mathbf{u}^{2}}{2}+\Phi+e\right)+\frac{\mathbf{B}^{2}}{2\mu_{0}}\right]+\nabla\cdot\left[\rho\mathbf{u}\left(\frac{\mathbf{u}^{2}}{2}+\Phi+h\right)+\mathbf{u}\frac{\mathbf{B}^{2}}{\mu_{0}}-\frac{1}{\mu_{0}}(\mathbf{u}\cdot\mathbf{B})\mathbf{B}+\mathbf{F}\right]=0\quad,$$
(68)
where $e$ is the specific internal energy and $h=e+p/\rho$ is the specific enthalpy. In terms of the modified Maxwell stress,
$$\partial_{t}\left[\rho\left(\frac{u^{2}}{2}+\Phi+e\right)+\frac{M}{2}\right]+\nabla_{i}\left[\rho u^{i}\left(\frac{u^{2}}{2}+\Phi+h\right)+u^{i}M-u_{j}M^{ij}+F^{i}\right]=0\quad.$$
(69)
From this we deduce the thermal energy equation,
$$\rho TDs=M^{ij}S_{ij}-\frac{1}{2}(DM+2M\nabla_{i}u^{i})-\nabla_{i}F^{i}\quad.$$
(70)
Using the constitutive relation, Equation 59, we obtain
$$\rho TDs=\frac{1}{2\tau}\left(M-3\mathcal{B}p_{v}\right)-\nabla_{i}F^{i}\quad,$$
(71)
so we have the nondimensional heating rate,
$$f_{\mathcal{H}}=\frac{1}{2\mathrm{De}}\left(\frac{M}{p_{v}}-3\mathcal{B}\right)\quad.$$
(72)
Substituting Equation 64 into Equation 70 we recover, in the viscous limit, terms proportional to $S^{ij}S_{ij}$ and $(\nabla_{i}u^{i})^{2}$ which act like a viscous heating rate.
Appendix B Stress model behaviour in a circular disc
In this appendix we consider the behaviour of our nonlinear constitutive model in a circular disc. We derive the reference circular disc, with respect to which our models are scaled. For a circular disc, the fixed point of equation 48 is
$$\displaystyle M^{RR}$$
$$\displaystyle=M^{zz}=\mathcal{B}p_{v}$$
(73)
$$\displaystyle RM^{R\phi}$$
$$\displaystyle=-\frac{3}{2}\mathrm{De}\mathcal{B}p_{v}$$
(74)
$$\displaystyle R^{2}M^{\phi\phi}$$
$$\displaystyle=\left(1+\frac{9}{2}\mathrm{De}^{2}\right)\mathcal{B}p_{v},$$
(75)
which results in a magnetic pressure of
$$p_{m}=\frac{1}{2}M=\left(\frac{3}{2}+\frac{9}{4}\mathrm{De}^{2}\right)\mathcal{B}p_{v}\quad.$$
(76)
As $\mathrm{De}$ and $p_{v}$ can depend on $M$ this equation needs to be solved to determine the equilibrium $M$. When $p_{v}$ is independent of $M$ this yields
$$M=\frac{3}{2}\left(1+\sqrt{1+2\frac{\mathrm{De}_{0}^{2}}{\mathcal{B}}}\right)\mathcal{B}p_{v}\quad,$$
(77)
while for $p_{v}=p+p_{m}$ we obtain a quadratic equation for $p_{m}$,
$$\left[2-\mathcal{B}\left(3+\frac{9}{4}\mathrm{De}_{0}^{2}\right)\right]p_{m}^{2}=\mathcal{B}p\left(3+\frac{9}{2}\mathrm{De}_{0}^{2}\right)p_{m}+\frac{9}{4}\mathrm{De}_{0}^{2}\mathcal{B}p^{2}.$$
(78)
For physical solutions ($p,p_{m}>0$) the right hand side is positive. Therefore we require $2>\mathcal{B}\left(3+\frac{9}{4}\mathrm{De}_{0}^{2}\right)$ in order that $p_{m}$ is positive. This equation has a singular point at $2=\mathcal{B}\left(3+\frac{9}{4}\mathrm{De}_{0}^{2}\right)$, beyond which it results in a negative magnetic pressure. Solving for the magnetic pressure,
$$p_{m}=\frac{3}{2}\mathcal{B}p\frac{1+\frac{3}{2}\mathrm{De}_{0}^{2}\pm\sqrt{1+2\frac{\mathrm{De}_{0}^{2}}{\mathcal{B}}}}{2-\mathcal{B}\left(3+\frac{9}{4}\mathrm{De}_{0}^{2}\right)}\quad.$$
(79)
Given the requirement that $2>\mathcal{B}\left(3+\frac{9}{4}\mathrm{De}_{0}^{2}\right)$, the negative root always results in a negative magnetic pressure and is thus unphysical.
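The positive root of Equation 79 can be checked against the quadratic Equation 78 numerically. The sketch below uses illustrative values of $\mathrm{De}_{0}$ and $\mathcal{B}$ satisfying the constraint $2>\mathcal{B}\left(3+\frac{9}{4}\mathrm{De}_{0}^{2}\right)$:

```python
import math

# Illustrative values obeying the physical constraint 2 > B(3 + (9/4) De0^2).
De0, B, p = 0.5, 0.1, 1.0
a = 2.0 - B * (3.0 + 2.25 * De0**2)
assert a > 0.0

# Positive root of Equation 79.
pm = 1.5 * B * p * (1.0 + 1.5 * De0**2 + math.sqrt(1.0 + 2.0 * De0**2 / B)) / a
assert pm > 0.0

# It satisfies the quadratic Equation 78:
# [2 - B(3 + 9/4 De0^2)] pm^2 = B p (3 + 9/2 De0^2) pm + 9/4 De0^2 B p^2.
lhs = a * pm**2
rhs = B * p * (3.0 + 4.5 * De0**2) * pm + 2.25 * De0**2 * B * p**2
assert abs(lhs - rhs) < 1e-12
```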
In addition to the equilibrium values of $M^{ij}$, the circular reference disc obeys hydrostatic and thermal balance. The equation for hydrostatic equilibrium in a circular disc is
$$\frac{P}{\Sigma H^{2}}\left(1+\frac{M}{2p}-\frac{M^{zz}}{p}\right)=n^{2}\quad,$$
(80)
while the equation for thermal balance is
$$\mathcal{C}^{\circ}=f_{\mathcal{H}}\frac{P^{\circ}_{v}}{P^{\circ}}=\frac{9}{4}\mathcal{B}\mathrm{De}^{\circ}\frac{P^{\circ}_{v}}{P^{\circ}}\quad,$$
(81)
where
$$\mathrm{De}^{\circ}=\mathrm{De}_{0}\sqrt{\frac{p_{v}^{\circ}}{M^{\circ}}}\quad.$$
(82)
Taking the solution to equations 80-81 which has $\beta_{r}=\beta_{r}^{\circ}$ and substituting this into equations 39 and 40 as the circular reference state, we obtain
$$\frac{\ddot{H}}{H}=-(1-e\cos E)^{-3}+\frac{T}{H^{2}}\frac{1+\beta_{r}}{1+\beta_{r}^{\circ}}\frac{\Biggl{(}1+\frac{1}{2}\frac{M}{p}-\frac{M^{zz}}{p}\Biggr{)}}{\left[1+\mathcal{B}\frac{P_{v}^{\circ}}{P^{\circ}}\left(\frac{1}{2}+\frac{9}{4}(\mathrm{De}^{\circ})^{2}\right)\right]}\quad,$$
(83)
$$\dot{T}=-(\Gamma_{3}-1)T\left(\frac{\dot{J}}{J}+\frac{\dot{H}}{H}\right)+(\Gamma_{3}-1)\frac{1+\beta_{r}}{1+4\beta_{r}}T\left[\frac{1}{2\mathrm{De}}\left(\frac{M}{p}-3\mathcal{B}\frac{p_{v}}{p}\right)-\mathcal{C}^{\circ}\frac{1+\beta_{r}^{\circ}}{1+\beta_{r}}J^{2}T^{3}\right]\quad,$$
(84)
where the reference cooling rate is given by equation 81.
Appendix C Solution to the Induction Equation
In this appendix we derive the structure of a steady magnetic field in an eccentric disc. The equations for a horizontally invariant laminar flow in a magnetised disc in an eccentric shearing box were derived in Ogilvie & Barker (2014). In their coordinate system the induction equation is
$$DB^{\xi}=-B^{\xi}\left(\Delta+\partial_{\varsigma}v_{\varsigma}\right)\quad,$$
(85)
$$DB^{\eta}=\Omega_{\lambda}B^{\xi}+\Omega_{\phi}B^{\eta}-B^{\eta}(\Delta+\partial_{\varsigma}v_{\varsigma})\quad,$$
(86)
$$DB^{\zeta}=-B^{\zeta}\Delta\quad,$$
(87)
where $\Omega_{\lambda}=\frac{\partial\Omega}{\partial\lambda}$ and $\Omega_{\phi}=\frac{\partial\Omega}{\partial\phi}$.
In order to rewrite the terms involving derivatives of $\Omega$ we introduce functions $\aleph$ and $\beth$ defined by
$$\frac{\dot{\aleph}}{\aleph}=-\Omega_{\phi}\quad,\frac{\dot{\beth}}{\aleph}=-\Omega_{\lambda},$$
(88)
which in Ogilvie & Barker (2014) were denoted $\alpha$ and $\beta$. Noting that $\Delta=\frac{\dot{J}}{J}$ and $\partial_{\varsigma}v_{\varsigma}=\frac{\dot{H}}{H}$ (here $J$ is the Jacobian of the $(\Lambda,\lambda)$ coordinate system, as used throughout, as opposed to the Jacobian of the coordinate system of Ogilvie & Barker (2014), which shares the same symbol; in fact our $J$ is closer to their $\mathcal{J}$), the $\xi$ and $\zeta$ components of the induction equation become
$$\frac{\dot{B}^{\xi}}{B^{\xi}}+\frac{\dot{J}}{J}+\frac{\dot{H}}{H}=0\quad,$$
(89)
$$\frac{\dot{B}^{\zeta}}{B^{\zeta}}+\frac{\dot{J}}{J}=0\quad,$$
(90)
which have solutions
$$B^{\xi}=\frac{B^{\xi}_{0}(\lambda,\tilde{z})}{JH},\quad B^{\zeta}=\frac{B^{\zeta}_{0}(\lambda)}{J}\quad,$$
(91)
where we have additionally made use of the solenoidal condition to show $B^{\zeta}_{0}$ is independent of $\tilde{z}$. Substituting the solution for $B^{\xi}$ into the $\eta$ component of the induction equation and rearranging we get
$$\aleph JH\dot{B}^{\eta}+\aleph J\dot{H}B^{\eta}+\aleph\dot{J}HB^{\eta}+\dot{\aleph}JHB^{\eta}+\dot{\beth}B^{\xi}_{0}=0\quad,$$
(92)
which has the solution
$$B^{\eta}=\frac{\Omega B^{\eta}_{0}}{nJH}-\frac{\Omega\beth B^{\xi}_{0}}{nJH}\quad,$$
(93)
where we have used $\aleph\propto\Omega^{-1}$ from Ogilvie & Barker (2014). Thus the large scale magnetic field solution in an eccentric shearing box is given by
$$B^{\xi}=\frac{B^{\xi}_{0}(\lambda,\tilde{z})}{JH},\quad B^{\eta}=\frac{\Omega B^{\eta}_{0}}{nJH}-\frac{\Omega\beth B^{\xi}_{0}}{nJH},\quad B^{\zeta}=\frac{B^{\zeta}_{0}(\lambda)}{J}\quad.$$
(94)
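That these solutions satisfy the induction equation can be verified directly for the $\xi$ and $\zeta$ components, which depend only on $J$ and $H$. The sketch below checks the finite-difference residuals of Equations 89 and 90 for an arbitrary smooth periodic $J(t)$ and $H(t)$ chosen for illustration:

```python
import numpy as np

# Residual check of Equations 89 and 90 using the closed-form solutions
# B^xi = B0/(J H) and B^zeta = B0/J (Equation 91).
t = np.linspace(0.0, 2 * np.pi, 200001)
dt = t[1] - t[0]
J = 1.0 + 0.5 * np.sin(t)        # illustrative periodic Jacobian
H = 1.0 + 0.3 * np.cos(2 * t)    # illustrative periodic scale height
B0 = 1.0

Bzeta = B0 / J
Bxi = B0 / (J * H)

d = lambda y: np.gradient(y, dt)             # second-order finite difference
res90 = d(Bzeta) / Bzeta + d(J) / J          # Equation 90 residual
res89 = d(Bxi) / Bxi + d(J) / J + d(H) / H   # Equation 89 residual

# Residuals vanish to the accuracy of the finite differences.
assert np.max(np.abs(res90[2:-2])) < 1e-6
assert np.max(np.abs(res89[2:-2])) < 1e-6
```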
The equation for $\beth$ is given in Ogilvie & Barker (2014) as
$$\beth=\frac{3}{2}\left(1+\frac{2e\lambda e_{\lambda}}{1-e^{2}}\right)\left(\frac{GM}{\lambda^{3}}\right)^{1/2}t-\frac{\lambda e_{\lambda}(2+e\cos\theta)\sin\theta}{(1-e^{2})(1+e\cos\theta)^{2}}-\frac{\lambda\omega_{\lambda}}{(1+e\cos\theta)^{2}}+\mathrm{constant}\quad,$$
(95)
which in the $(a,E)$ coordinate system is given by
$$\beth=\frac{3nt}{2[1-e(e+2ae_{a})]\sqrt{1-e^{2}}}-\frac{ae_{a}}{(1-e^{2})^{3/2}}\frac{2-e\cos E-e^{2}}{1-e(e+2ae_{a})}\sin E-\frac{a\varpi_{a}(1-e\cos E)^{2}}{(1-e^{2})[1-e(e+2ae_{a})]}+\mathrm{constant}\quad,$$
(96)
which contains a term that grows linearly in time. This means that $B^{\eta}$ can be expected to grow linearly in the presence of a “quasiradial” field.
The ideal induction equation can be written in tensorial form using the operator $\mathcal{D}$ as $\mathcal{D}(B^{i}B^{j})=0$ (Ogilvie, 2001). The solution to this equation for a horizontally invariant laminar flow (along with the solenoidal condition) in an eccentric shearing box is given by Equation 94. In the limit $\tau\rightarrow\infty$ the Maxwell stress obeys $\mathcal{D}M^{ij}=0$ and the corresponding magnetic field obeys the induction equation.
The solenoidal condition is not automatically satisfied by the solutions to $\mathcal{D}M^{ij}=0$ (although it can be imposed). In particular the assumption that the stress has the same vertical dependence as the pressure breaks the solenoidal condition, if $M^{zz}\neq 0$.
Finally, when $B^{\xi}_{0}=0$ it is convenient to write the magnetic field in a form which is independent of the horizontal coordinate system used,
$$\mathbf{B}=\frac{B_{h0}(\tilde{z})}{nJH}\mathbf{v}_{\rm orbital}+\frac{B_{v0}}{J}\hat{e}_{z},$$
(97)
where $B_{v0}$ is a constant, $B_{h0}(\tilde{z})$ is a function of $\tilde{z}$ only and $\mathbf{v}_{\rm orbital}$ is the orbital velocity vector.
Appendix D Derivation of the ideal induction equation model
Here we derive the full set of equations for a horizontally invariant laminar flow in an eccentric disc with a magnetic field. We assume the magnetic field can be split into mean and fluctuating parts:
$$\mathbf{B}=\bar{\mathbf{B}}+\mathbf{b}\quad.$$
(98)
In order that we have a steady field we require $B^{\xi}=0$; otherwise there is a source term in the $\eta$ component of the induction equation from the winding up of the “quasiradial” ($B^{\xi}$) field. This trivially satisfies the $\xi$ component of the induction equation. We assume the fluctuating field $\mathbf{b}$ is caused by the MRI and that its effect on the dynamics is captured by the turbulent stress prescription. Thus, keeping the mean field only and dropping the overbar, the equations for a horizontally invariant laminar flow in a magnetised disc in the eccentric shearing coordinates of Ogilvie & Barker (2014) are the $\eta$ and $\zeta$ components of the induction equation
$$DB^{\eta}=\Omega_{\lambda}B^{\xi}+\Omega_{\phi}B^{\eta}-B^{\eta}(\Delta+\partial_{\varsigma}v_{\varsigma})\quad,$$
(99)
$$DB^{\zeta}=-B^{\zeta}\Delta\quad,$$
(100)
the momentum equation
$$Dv_{\zeta}=-\phi_{2}\zeta-\frac{1}{\rho}\partial_{\zeta}\left(p+\frac{B^{2}}{2\mu_{0}}-T_{zz}\right)+\textrm{Tension}\quad,$$
(101)
where “Tension” denotes the magnetic tension terms. The solenoidal condition gives $\partial_{\zeta}B^{\zeta}=0$; thus $B^{\zeta}$
is independent of $\zeta$ and the magnetic tension terms in the vertical momentum equation are zero. Finally the thermal energy equation is
$$Dp=-\Gamma_{1}p\left(\Delta+\partial_{\zeta}v_{\zeta}\right)+(\Gamma_{3}-1)(\mathcal{H}-\partial_{\zeta}F_{\zeta})\quad,$$
(102)
and we must specify the equation of state. Making use of the solutions to the induction equation (Equation 97), we obtain an expression for the magnetic pressure,
$$p_{M}=\frac{B_{h0}^{2}(\tilde{z})}{2\mu_{0}(nJH)^{2}}v^{2}+\frac{B_{v0}^{2}}{2\mu_{0}J^{2}}\quad,$$
(103)
where $v^{2}=|\mathbf{v}_{\rm orbital}|^{2}$ is the squared magnitude of the orbital velocity.
The contribution of the vertical component of the magnetic field to the magnetic pressure is independent of the height in the disc (in order to satisfy the solenoidal condition) and makes no contribution to the dynamics of the vertical structure. As such we neglect the vertical component of the magnetic field from this point on. The magnetic pressure simplifies to
$$p_{M}=\frac{B_{h0}^{2}(\tilde{z})}{2\mu_{0}(nJH)^{2}}v^{2}\quad.$$
(104)
On a circular orbit $v^{2}=(na)^{2}$, so that $p_{M}\propto(JH)^{-2}\propto\rho^{2}$ and the magnetic pressure behaves like a perfect gas with $\gamma=2$. Thus, for a magnetised radiation-gas mixture, the magnetic field is the least compressible constituent of the plasma and will be the dominant source of pressure when the plasma is sufficiently compressed. On an eccentric orbit there is an additional source of variability owing to the stretching and compression of the field by the periodic variation of the velocity tangent to the field lines.
The vertical component of the momentum equation becomes
$$\frac{\ddot{H}}{H}=-\phi_{2}-\frac{1}{\rho\hat{H}^{2}\tilde{z}}\partial_{\tilde{z}}\left(p+\frac{B_{h0}^{2}(\tilde{z})}{2\mu_{0}(nJH)^{2}}v^{2}-T_{zz}\right)\quad,$$
(105)
where we have used $\hat{H}$ to denote the dimensionful scale height, to distinguish it from the dimensionless scale height $H$.
We propose separable solutions with
$$p=\hat{p}(\tau)\tilde{p}(\tilde{z}),\quad T_{zz}=\hat{T}_{zz}(\tau)\tilde{p}(\tilde{z}),\quad\rho=\hat{\rho}(\tau)\tilde{\rho}(\tilde{z})\quad.$$
(106)
The dimensionless functions obey generalised hydrostatic equilibrium, which means the pressure satisfies
$$\frac{d\tilde{p}}{d\tilde{z}}=-\tilde{\rho}\tilde{z}\quad.$$
(107)
To maintain separability we require the reference plasma beta to be independent of height,
$$\beta_{m}^{\circ}=\frac{2\mu_{0}\tilde{p}(\tilde{z})p^{\circ}}{a^{2}B_{h0}^{2}(\tilde{z})}\quad.$$
(108)
From this we obtain the equation for variation of the scale height around the orbit,
$$\frac{\ddot{H}}{H}=-\phi_{2}+\frac{\hat{p}}{\hat{\rho}\hat{H}^{2}}\left(1+\frac{1}{\beta_{m}^{\circ}J^{2}H}\frac{P^{\circ}}{\hat{P}}\frac{v^{2}}{(an)^{2}}-\frac{\hat{T}_{zz}}{\hat{p}}\right)\quad,$$
(109)
where the square of the velocity is
$$v^{2}=(an)^{2}\frac{1+e\cos E}{1-e\cos E}\quad.$$
(110)
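As a quick numerical illustration (a sketch of ours, not part of the model itself), Equation 110 can be evaluated around an orbit: the $v^{2}$ factor, which multiplies the horizontal magnetic pressure in Equation 104, is largest at pericentre ($E=0$) and smallest at apocentre ($E=\pi$), and the two extremes are reciprocals of one another.

```python
import math

def v2_over_an2(e, E):
    """v^2 / (a n)^2 at eccentric anomaly E, from Equation 110."""
    return (1.0 + e * math.cos(E)) / (1.0 - e * math.cos(E))

e = 0.3
peri = v2_over_an2(e, 0.0)      # pericentre: (1 + e) / (1 - e)
apo = v2_over_an2(e, math.pi)   # apocentre:  (1 - e) / (1 + e)
```

For $e=0.3$ the squared orbital speed, and hence the $v^{2}$ modulation of the magnetic pressure, varies by a factor of $(1+e)^{2}/(1-e)^{2}\approx 3.4$ between apocentre and pericentre.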
The reference circular disc has $f_{\mathcal{H}}=\frac{9}{4}\alpha_{s}$, as in the hydrodynamic models considered in Paper I, and its hydrostatic balance is given by
$$\frac{P^{\circ}}{\Sigma^{\circ}H^{\circ}H^{\circ}}\left(1+\frac{1}{\beta_{m}^{\circ}}\right)=n^{2}\quad.$$
(111)
Rescaling Equation 109 by this reference circular disc we obtain
$$\frac{\ddot{H}}{H}=-(1-e\cos E)^{-3}+\frac{T}{H^{2}}\frac{1+\beta_{r}}{1+\beta_{r}^{\circ}}\frac{\left(1+\frac{1+\beta_{r}^{\circ}}{1+\beta_{r}}\frac{1}{\beta_{m}^{\circ}JHT}\frac{1+e\cos E}{1-e\cos E}-\frac{\hat{T}_{zz}}{\hat{p}}\right)}{\left(1+\frac{1}{\beta_{m}^{\circ}}\right)}\quad,$$
(112)
with the rest of the equations proceeding as in the hydrodynamic laminar flow model considered in Paper I.
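To illustrate the character of these equations, the following sketch integrates a heavily simplified limit of Equation 112: isothermal ($T=1$), purely hydrodynamic ($\beta_{m}^{\circ}\to\infty$, radiation and stress terms dropped), so that $\ddot{H}/H=-(1-e\cos E)^{-3}+1/H^{2}$, with the eccentric anomaly advanced via Kepler's equation and $n=1$. These simplifications and numerical choices are ours, for illustration only.

```python
import math

def integrate_H(e, n_orbits=3, dt=1e-4):
    """Semi-implicit Euler integration of the simplified scale-height equation
    H''/H = -(1 - e cos E)^-3 + 1/H^2, with E' = 1/(1 - e cos E) (Kepler, n = 1).
    Starts from the circular-disc equilibrium H = 1, H' = 0 at pericentre."""
    H, Hdot, E = 1.0, 0.0, 0.0
    steps = int(2.0 * math.pi * n_orbits / dt)
    H_min = H
    for _ in range(steps):
        acc = -H / (1.0 - e * math.cos(E)) ** 3 + 1.0 / H
        Hdot += acc * dt
        H += Hdot * dt
        E += dt / (1.0 - e * math.cos(E))
        H_min = min(H_min, H)
    return H, H_min
```

For $e=0$ the disc remains at the equilibrium $H=1$; for $e>0$ the scale height is driven below its circular value near pericentre, where the vertical gravity is strongest.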
Appendix E Microphysical basis of the nonlinear constitutive model
Several authors have looked at the possibility of using stochastic calculus as a model of the MRI (Janiuk &
Misra, 2012; Ross
et al., 2017). Here we assume the magnetic field satisfies a Langevin equation:
$$d\mathbf{B}+(\mathbf{B}\cdot\nabla\mathbf{u}-\mathbf{B}\nabla\cdot\mathbf{u})\,dt=-\boldsymbol{\lambda}\,dt+\mathcal{F}\,d\mathbf{X}\quad,$$
(113)
where $\mathbf{X}$ is a Wiener process in the sense of Ito calculus. The left hand side of this equation comprises the ideal terms of the induction equation; the $-\boldsymbol{\lambda}\,dt$ term models damping by resistivity, while $\mathcal{F}\,d\mathbf{X}$ represents stochastic forcing by a turbulent electromotive force, with $\mathcal{F}$ a scale factor controlling the strength of the forcing. In the absence of a mean velocity field $\mathbf{u}$, the magnetic field would evolve like a damped Brownian motion.
Introducing $\langle\cdot\rangle$ to denote the expectation value, we have the standard result for the Wiener process $X$,
$$\langle X^{i}X^{j}\rangle=g^{ij}t\quad,$$
(114)
where $g^{ij}$ is the inverse metric tensor. So $\mathbf{X}$ is a statistically isotropic vector field. Physically, in this model, turbulent fluctuations act to isotropise the magnetic field. The orbital shear can feed on these fluctuations and induce a highly anisotropic magnetic field that is predominantly aligned/antialigned with the orbital motion. The change in the Maxwell stress can be obtained from Ito’s formula,
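This property is straightforward to verify numerically. The following hedged sketch (Cartesian coordinates, so $g^{ij}=\delta^{ij}$, with $t=1$; all choices ours) estimates $\langle X^{i}X^{j}\rangle$ from independent Wiener-process endpoints, each component of which is distributed as $N(0,t)$:

```python
import random

random.seed(1)

t, N, dim = 1.0, 200_000, 3
cov = [[0.0] * dim for _ in range(dim)]
for _ in range(N):
    # Endpoint of a standard Wiener path at time t: each component ~ N(0, t)
    x = [random.gauss(0.0, t ** 0.5) for _ in range(dim)]
    for i in range(dim):
        for j in range(dim):
            cov[i][j] += x[i] * x[j] / N
```

The diagonal entries of `cov` converge to $t$ and the off-diagonal entries to zero, consistent with Equation 114.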
$$\mu_{0}dM^{ij}=\sum_{n}\frac{\partial M^{ij}}{\partial B^{n}}dB^{n}+\frac{1}{2}\sum_{nm}\frac{\partial^{2}M^{ij}}{\partial B^{n}\partial B^{m}}d\langle B^{n}B^{m}\rangle\quad,$$
(115)
with the partial derivatives given by
$$\frac{\partial M^{ij}}{\partial B^{n}}=2B^{(i}\delta^{j)}_{n},\quad\frac{\partial^{2}M^{ij}}{\partial B^{n}\partial B^{m}}=2\delta^{(i}_{n}\delta^{j)}_{m}\quad.$$
(116)
After substituting these, together with the equation for $d\mathbf{B}$, Equation 115 becomes
$$\displaystyle\begin{split}\mu_{0}dM^{ij}&=-2(B^{k}B^{(i}\nabla_{k}u^{j)}-B^{(i}B^{j)}\nabla_{k}u^{k})dt-2B^{(i}\lambda^{j)}dt+2\mathcal{F}B^{(i}dX^{j)}+\sum_{nm}\delta^{(i}_{n}\delta^{j)}_{m}\mathcal{F}^{2}d\langle X^{n}X^{m}\rangle\\
&=-2(B^{k}B^{(i}\nabla_{k}u^{j)}-B^{(i}B^{j)}\nabla_{k}u^{k})dt-2B^{(i}\lambda^{j)}dt+2\mathcal{F}B^{(i}dX^{j)}+\mathcal{F}^{2}g^{ij}dt\quad.\end{split}$$
(117)
Making use of the definition of $\mathcal{D}$, and the fact that the Ito integral preserves the martingale property, we can take the expectation of Equation 117 to obtain an equation for the expected Maxwell stress,
$$\mathcal{D}\langle M^{ij}\rangle=-\frac{2}{\mu_{0}}\langle B^{(i}\lambda^{j)}\rangle+\frac{1}{\mu_{0}}\mathcal{F}^{2}g^{ij}\quad.$$
(118)
We should caution that this procedure may not be valid if $\mathcal{F}$ depends on $\mathbf{B}$. Henceforth we shall drop the angle brackets on the modified Maxwell stress and use $M^{ij}$ to denote the expected modified Maxwell stress.
It remains to determine appropriate forms for $\lambda^{i}$ and $\mathcal{F}$. A priori there is no obvious way of obtaining these directly from the underlying physics. However, in a similar vein to Ogilvie (2003), we can place certain constraints on their possible forms. On dimensional grounds both have dimensions of magnetic field over time; $\lambda^{i}$ transforms as a vector and $\mathcal{F}$ as a scalar. Beyond this we assume:
1) Following Ogilvie (2003), neither $\lambda^{i}$ nor $\mathcal{F}$ depends directly on the mean velocity field, although they may depend on the various orbital frequencies.
2) $\mathcal{F}$ is non-negative.
3) There is no preferred direction.
This leaves us to construct a vector and a scalar with the dimensions of magnetic field over time from $B^{i}$, $p_{g}$, $p_{r}$, $\rho$ and $\mu_{0}$, along with the vertical and horizontal epicyclic frequencies and the mean motion, $\Omega_{z}$, $\kappa$ and $n$. It is immediately apparent that the only vectorial quantity available is the magnetic field $B^{i}$. As such $\lambda^{i}$ must have the form
$$\lambda^{i}=\frac{B^{i}}{2\tau}\quad,$$
(119)
where $\tau$ is some relaxation time, which can depend on the mean field quantities. Next, on dimensional grounds, $\rho$ cannot appear in either $\mathcal{F}$ or $\tau$, and the remaining quantities can only enter through the combinations $|B|$, $\mu_{0}p_{g}$ and $\mu_{0}p_{r}$, along with the various orbital frequencies. Without loss of generality we can write
$$\mathcal{F}=\sqrt{\frac{\mu_{0}\mathcal{B}p_{\rm v}}{\tau}}\quad,$$
(120)
where $\tau$ is the relaxation time, $\mathcal{B}$ is a dimensionless constant and $p_{\rm v}$ is some reference pressure of the fluctuations. This means our equation for $M^{ij}$ becomes
$$\mathcal{D}M^{ij}=-\frac{1}{\tau}\left(M^{ij}-\mathcal{B}p_{\rm v}g^{ij}\right)\quad,$$
(121)
where we must specify how the relaxation time and fluctuation pressure depend on $M$, $p_{g}$, $p_{r}$ and the orbital frequencies in order to close the model. |
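As a consistency check (a sketch with illustrative parameter values of our own choosing, not taken from the model), one Cartesian component of the Langevin equation (113) with no mean flow, $\lambda=B/(2\tau)$ and $\mathcal{F}=\sqrt{\mu_{0}\mathcal{B}p_{\rm v}/\tau}$ as in Equation 120, can be integrated by the Euler–Maruyama method; the time-averaged Maxwell stress should relax to the fixed point $\mathcal{B}p_{\rm v}$ of Equation 121.

```python
import math
import random

random.seed(2)

tau, mu0, B_cal, p_v = 1.0, 1.0, 0.5, 2.0   # illustrative values (assumptions)
F = math.sqrt(mu0 * B_cal * p_v / tau)      # Equation 120

dt, n_steps, burn = 0.01, 2_000_000, 200_000
B, acc, count = 0.0, 0.0, 0
for step in range(n_steps):
    # Euler-Maruyama step of dB = -B/(2 tau) dt + F dX (Eq. 113, no mean flow)
    B += -B / (2.0 * tau) * dt + F * random.gauss(0.0, math.sqrt(dt))
    if step >= burn:
        acc += B * B / mu0
        count += 1
M_avg = acc / count     # time-averaged Maxwell stress component
target = B_cal * p_v    # fixed point of Equation 121
```

The stationary variance of this Ornstein–Uhlenbeck process is $\mathcal{F}^{2}\tau$, so $\langle B^{2}\rangle/\mu_{0}=\mathcal{B}p_{\rm v}$ independently of $\tau$, in agreement with the fixed point of Equation 121.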